SPDP Inference Polytope
The Big Picture
The three guards (honesty, stability, and holonomy) each define a constraint. The intersection of all three constraints creates a geometric shape in inference space: a polytope.
If the model's reasoning stays inside this polytope, it's safe. The distance from the boundary tells you the safety margin.
What is SPDP?
Shifted Partial Derivative Projection. Given a polynomial encoding of a computational object, take all partial derivatives up to a fixed order, shift by bounded-degree monomials, then project through the observer's resource constraints.
The rank of the resulting matrix measures residual complexity:
- Low rank = tractable: the observer can verify the computation (inside the polytope)
- High rank = intractable: the computation is beyond the observer's reach (outside)
M_SPDP[i, j] = coefficient of x^i in ∂^j(f) · x^shift
Rank as Safety Measure:
rank(M_SPDP) ≤ r ⟹ computation is r-tractable for the observer
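The construction above can be sketched concretely for a small bivariate polynomial. Everything here is illustrative: the polynomial f, the derivative order k, and the shift degree ℓ are arbitrary choices, and polynomials are represented as exponent-to-coefficient dictionaries.

```python
# Hypothetical sketch of the M_SPDP construction for a bivariate polynomial.
# Polynomials are {(i, j): coeff} dicts mapping (x-exponent, y-exponent) to coefficient.
from itertools import product
import numpy as np

def diff(poly, var):
    """Partial derivative of a {(i, j): coeff} polynomial w.r.t. variable 0 (x) or 1 (y)."""
    out = {}
    for exps, c in poly.items():
        if exps[var] > 0:
            new = list(exps)
            new[var] -= 1
            out[tuple(new)] = out.get(tuple(new), 0) + c * exps[var]
    return out

def shift(poly, mono):
    """Multiply a polynomial by the monomial x^a * y^b, given mono = (a, b)."""
    return {(i + mono[0], j + mono[1]): c for (i, j), c in poly.items()}

def spdp_matrix(f, k, ell):
    """Rows: every partial derivative of f of order <= k, shifted by every
    monomial of degree <= ell. Columns: every monomial appearing in any row."""
    rows = []
    for dx, dy in product(range(k + 1), repeat=2):
        if dx + dy > k:
            continue
        g = f
        for _ in range(dx):
            g = diff(g, 0)
        for _ in range(dy):
            g = diff(g, 1)
        for a, b in product(range(ell + 1), repeat=2):
            if a + b <= ell:
                rows.append(shift(g, (a, b)))
    cols = sorted({m for r in rows for m in r})
    return np.array([[r.get(m, 0) for m in cols] for r in rows], dtype=float)

f = {(3, 1): 1, (1, 2): 1}          # f = x^3*y + x*y^2 (illustrative)
M = spdp_matrix(f, k=1, ell=1)
print(np.linalg.matrix_rank(M))      # low rank = tractable for the observer
```

The rank is computed numerically here; an exact-arithmetic rank over the rationals would be the rigorous version, but floating point suffices for a sketch with small integer coefficients.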
Polytope Definition:
P = { s ∈ ℝ^n : honesty(s) ≥ h₀, stability(s) ≥ t₀, holonomy(s) ≤ ε₀, rank(s) ≤ r₀ }
The safe region is the intersection of all constraint half-spaces.
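As a sketch, membership in P reduces to checking the four inequalities. The threshold values below are illustrative assumptions, not values from the text.

```python
# Hypothetical membership check for the polytope P.
# Thresholds h0, t0, eps0, r0 are illustrative placeholders.
H0, T0, EPS0, R0 = 0.8, 0.7, 0.05, 16

def in_polytope(honesty, stability, holonomy, rank):
    """True iff the state satisfies all four constraint half-spaces."""
    return honesty >= H0 and stability >= T0 and holonomy <= EPS0 and rank <= R0

print(in_polytope(honesty=0.9, stability=0.85, holonomy=0.01, rank=12))  # → True
print(in_polytope(honesty=0.9, stability=0.85, holonomy=0.2, rank=12))   # → False (holonomy too high)
```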
Visualising the Safe Region
In 2D, the polytope is a polygon. In 3D, a polyhedron. In higher dimensions it's a convex body, but the principle is the same.
The model's state is a point. Safety = point inside the body. Risk = distance to nearest face. If it crosses a face, one of the three guards has failed.
Boundary Distance
How close to the edge of safety? If the model is deep inside the polytope, it has a large safety margin; if it's near a face, it's at risk.
The boundary distance is computed as the distance from the current state to each constraint hyperplane; the smallest of these is the bottleneck, the weakest safety guarantee.
Code Example
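A minimal sketch of the boundary-distance computation, treating each constraint as an axis-aligned half-space so that the distance to a face is simply the slack in that inequality. The thresholds and the example state are illustrative assumptions.

```python
# Per-constraint slack as distance to each boundary face.
# Thresholds (h0, t0, eps0, r0) and the example state are illustrative.
def boundary_distances(state, h0=0.8, t0=0.7, eps0=0.05, r0=16):
    """Slack of each constraint; a negative slack means that face was crossed."""
    return {
        "honesty":   state["honesty"] - h0,      # honesty(s)   >= h0
        "stability": state["stability"] - t0,    # stability(s) >= t0
        "holonomy":  eps0 - state["holonomy"],   # holonomy(s)  <= eps0
        "rank":      r0 - state["rank"],         # rank(s)      <= r0
    }

def safety_margin(state, **thresholds):
    """The bottleneck: the face with the smallest slack."""
    d = boundary_distances(state, **thresholds)
    name = min(d, key=d.get)
    return name, d[name]

state = {"honesty": 0.9, "stability": 0.75, "holonomy": 0.02, "rank": 10}
print(safety_margin(state))  # the weakest safety guarantee
```

Note that the four guard scores live on different scales (rank is an integer count), so in practice the slacks would need normalising before the minimum is meaningful; the sketch skips that step.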
Connection to Complexity Theory
The SPDP rank connects to algebraic complexity: problems with low rank are computationally tractable for the observer, while problems with high rank are beyond reach.
This is the mathematical foundation for why certain AI behaviours are verifiable and others aren't. If a model's reasoning has low SPDP rank, an observer with bounded resources can check it. If the rank is high, the reasoning is opaque: the observer literally cannot verify it with available computation.
The polytope boundary therefore represents the frontier of verifiability: inside, the observer can check safety; outside, they can't.