Sheaf Transport Engine
Machine-verified proof transport across representation boundaries via differentiable sheaf-theoretic ATP.
A theorem proved in one algebraic lens—graph-theoretic, tensorial, topological—can be transported to a different lens with machine-checked correctness guarantees. Every transport step is verified in the Lean 4 kernel.
DRAG-ALONG PROCESS — A lattice of ontology nodes (white) contains a proved subgraph (cyan). The sheaf transport engine identifies direct connections from the subgraph boundary to external nodes (orange), then drags along new solutions at second-hop nodes (green). When the sweep completes, the entire expanded subgraph unifies (gold) — new proofs inherited automatically from the original subgraph via verified transport.
Multi-representational proof systems break at boundaries.
Mathematical objects admit many algebraic representations. A ring identity has a natural tensor encoding; a graph property lives naturally in list/adjacency form; a set-theoretic fact belongs in a topological lens. When an automated prover finds a proof in one representation, that proof is locked there—it cannot be reused or composed with proofs found in other representations without manual reconstruction.
The Sheaf Transport Engine solves this by implementing cross-lens transport as a first-class operation on proof terms, with every transport step constrained by a verified cocycle condition. The result: proofs flow across representation boundaries automatically, and the transport itself is an auditable artifact.
How it works
The nucleus-retracted subspace
All operations respect a combinator-level nucleus operator nucleusR that identifies stable fixed points in the SKY combinator algebra. The operator is proved idempotent: applying it twice is the same as applying it once. Every gradient step, every lens projection, and every transport lands in this fixed-point subspace—not by convention, but by a chain of preservation lemmas culminating in retractedStep_preserves_fixed.
This gives the engine a property most differentiable provers lack: the continuous optimization and the discrete proof extraction operate over the same algebraic structure. The search space is closed under all engine operations.
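As a rough illustration of the retraction contract, here is a minimal Python sketch. The stable basis and `retract` function are toy stand-ins, not the project's `nucleusR`; the point is only the idempotence shape R(R(w)) = R(w):

```python
# Illustrative sketch of the idempotence contract behind nucleusR /
# projectToFixedWeights: a retraction R satisfies R(R(w)) = R(w), so every
# engine step can end with R and stay inside the fixed-point subspace.
# FIXED_BASIS is an assumption for the example, not the real stable set.

FIXED_BASIS = {"K", "S", "Y", "SK"}

def retract(weights):
    """Project a weight map onto the nucleus-stable basis (toy version)."""
    return {c: w for c, w in weights.items() if c in FIXED_BASIS}

w = {"K": 0.7, "S": 0.2, "YK": 0.1}
once = retract(w)
twice = retract(retract(w))
assert once == twice  # idempotence: R(R(w)) = R(w)
```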
The lens lattice
Proof search operates across seven algebraic lenses, each providing a different view of the same combinator space via basis projection. The lenses form a partial order with proved incomparability pairs—meaning neither lens subsumes the other, and transport between them must pass through the cocycle verification machinery.
| LENS | BASIS | RES | ROLE |
|---|---|---|---|
| omega | K, S, Y, SK, KS, YK | 5 | Universal — reaches every lens |
| tensor | K, S, Y, KS | 4 | Algebraic / polynomial |
| graph | K, S, Y, SK | 3 | List / graph structure |
| matula | K, S, Y, SK, KS | 4 | Tree encoding |
| clifford | K, S, SK | 2 | Geometric algebra |
| topology | K, S, Y | 2 | Set / topological |
| region | K, S | 1 | Minimal — only self-reachable |
graph_tensor_incomparable — Graph ⊄ Tensor and Tensor ⊄ Graph. clifford_topology_incomparable — Clifford ⊄ Topology and Topology ⊄ Clifford. These are structural facts: proofs in incomparable lenses cannot be naively projected—they must be composed through the gluing machinery.
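Reading the lattice as basis-subset inclusion (an assumption that is consistent with the table above, but the Lean source defines the real order), the incomparability facts can be checked mechanically:

```python
# The lens partial order modeled as basis-subset inclusion, using the bases
# from the table. This is an illustrative reading, not the Lean definition.
BASES = {
    "omega":    {"K", "S", "Y", "SK", "KS", "YK"},
    "tensor":   {"K", "S", "Y", "KS"},
    "graph":    {"K", "S", "Y", "SK"},
    "matula":   {"K", "S", "Y", "SK", "KS"},
    "clifford": {"K", "S", "SK"},
    "topology": {"K", "S", "Y"},
    "region":   {"K", "S"},
}

def subsumes(a, b):
    """Lens a subsumes lens b when b's basis fits inside a's."""
    return BASES[b] <= BASES[a]

def incomparable(a, b):
    return not subsumes(a, b) and not subsumes(b, a)

assert incomparable("graph", "tensor")        # graph_tensor_incomparable
assert incomparable("clifford", "topology")   # clifford_topology_incomparable
assert all(subsumes("omega", l) for l in BASES)  # omega_reaches_all
```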
Lax cross-lens transport
Strict round-trip invertibility (RT-2) cannot hold for non-trivial restriction maps. Instead, the engine uses lax transport contracts: idempotent round-trips rather than exact inverses. Each lens defines a toPlato / fromPlato pair satisfying:
fromPlato(toPlato(x)) = fromPlato(toPlato(fromPlato(toPlato(x))))
The round-trip is a fixed point of itself.
toPlato(fromPlato(p)) = toPlato(fromPlato(toPlato(fromPlato(p))))
The reverse round-trip is equally stable.
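A minimal sketch of the lax contract with a deliberately lossy projection. The `toPlato` / `fromPlato` here are toy stand-ins (a basis filter and a trivial embedding), not the engine's actual maps:

```python
# Lax transport sketch: toPlato projects onto a lens basis (lossy), fromPlato
# embeds back. The round-trip is not the identity, but it is idempotent --
# exactly the two lax contracts above. Basis and maps are illustrative.
TENSOR_BASIS = {"K", "S", "Y", "KS"}

def toPlato(x):
    return {c: w for c, w in x.items() if c in TENSOR_BASIS}

def fromPlato(p):
    return dict(p)  # trivial embedding, enough for the sketch

x = {"K": 0.5, "SK": 0.3, "KS": 0.2}
rt = fromPlato(toPlato(x))
assert rt != x                        # not invertible: SK was dropped
assert rt == fromPlato(toPlato(rt))   # forward round-trip is idempotent
p = toPlato(fromPlato(toPlato(x)))
assert p == toPlato(fromPlato(p))     # reverse round-trip is equally stable
```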
Sheaf glue composition
When a proof obligation spans multiple lenses, the engine solves subgoals locally in their natural representations, then glues them into a global proof term. The gluing is guarded by two runtime checks:
On basis intersections between lenses, local proof terms must agree on their combinator coefficients. This is the matching family condition.
Transport to the target lens must produce the same result regardless of the intermediate lens path. This is the cocycle condition, proved once and enforced at every glue site.
If either check fails, the engine rejects the glue—it does not approximate. The glued proof either type-checks in the kernel or it doesn’t. No probabilistic acceptance.
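A simplified sketch of the matching-family guard (the cocycle check is elided, and all names and data are illustrative). The key behavior is the hard rejection: disagreement on a shared basis element aborts the glue rather than averaging:

```python
# Sketch of the first glue guard: local sections from different lenses must
# agree on their basis intersection (matching family). On conflict the glue
# is rejected outright -- no approximation, mirroring the engine's policy.
def glue(sections):
    """sections: {lens_name: {combinator: coeff}}. Glued term, or None."""
    names = list(sections)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = sections[a].keys() & sections[b].keys()
            if any(sections[a][c] != sections[b][c] for c in shared):
                return None  # matching-family check failed: reject
    glued = {}
    for s in sections.values():
        glued.update(s)  # safe: all overlaps already verified equal
    return glued

ok = glue({"graph": {"K": 1, "S": 2}, "tensor": {"S": 2, "KS": 3}})
bad = glue({"graph": {"K": 1, "S": 2}, "tensor": {"S": 9}})
assert ok == {"K": 1, "S": 2, "KS": 3}
assert bad is None
```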
Verified pipeline stages
GoalEncoder
Maps textual goal states to FSum seeds with lens-appropriate combinator bases and IO examples.
Differentiable Search
Gradient-based search over the nucleus-retracted subspace. Retracted GD, gradient probing, multiway macro steps, closed-loop feedback.
CombToExpr + TacticDecoder
Extracts discrete proof terms from continuous optima and translates combinator expressions to Lean tactics.
KernelVerifier
Replays decoded tactics via lake env lean. The proof either type-checks in the Lean 4 kernel or is rejected. Deterministic, final.
Multi-hop transported search
The search algorithm explores proof space across lenses, not just within them. Each search move is either a tactic application or a lens transition carrying a proved safety certificate. A path visits at most three lenses: the origin plus two hops.
Origin lens
Try solving in the starting lens first. If it works, no transport needed.
1-hop transitions
For each safely-reachable lens, project the goal and search there. Back-transport the proof if found.
2-hop transitions
For each mid-point lens, search in all its safe targets. Longer paths discover proofs invisible to single-lens search.
Lenses are explored in resolution-descending order. Back-transport is only accepted when the origin-to-proof lens path is certified non-lossy (via isSafeTransport). Every search emits per-lens node counts, wall time, and the full transition path for audit.
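The schedule above can be sketched as follows, stubbing isSafeTransport as basis-subset reachability (an assumption consistent with the lens table; the real check is a Lean-certified predicate):

```python
# Illustrative enumeration of the multi-hop schedule: origin lens first,
# then 1-hop, then 2-hop transitions, candidates in resolution-descending
# order. BASES/RES come from the lens table; is_safe is a stand-in.
BASES = {"omega": {"K","S","Y","SK","KS","YK"}, "tensor": {"K","S","Y","KS"},
         "graph": {"K","S","Y","SK"}, "matula": {"K","S","Y","SK","KS"},
         "clifford": {"K","S","SK"}, "topology": {"K","S","Y"},
         "region": {"K","S"}}
RES = {"omega": 5, "tensor": 4, "matula": 4, "graph": 3,
       "clifford": 2, "topology": 2, "region": 1}

def is_safe(src, dst):
    # stand-in for isSafeTransport: non-lossy iff target basis fits in source
    return BASES[dst] <= BASES[src]

def search_schedule(origin):
    paths = [(origin,)]                         # try the origin lens first
    hop1 = sorted((l for l in RES if l != origin and is_safe(origin, l)),
                  key=lambda l: -RES[l])        # resolution-descending
    paths += [(origin, a) for a in hop1]        # 1-hop transitions
    for a in hop1:                              # 2-hop transitions
        for b in sorted(RES, key=lambda l: -RES[l]):
            if b not in (origin, a) and is_safe(a, b):
                paths.append((origin, a, b))
    return paths

sched = search_schedule("graph")
assert sched[0] == ("graph",)
assert all(len(p) <= 3 for p in sched)  # at most three lenses per path
```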
Zero sorry. Zero admit.
Every theorem below is kernel-verified. These are the actual lemma names from the Lean 4 source—not paraphrases.
| LEMMA | STATEMENT |
|---|---|
| projectToFixedWeights_idempotent | R(R(w)) = R(w) for nucleus-filtered FSum |
| lensProjection_idempotent | Projecting to a lens basis twice is the same as once |
| lensRetractWeights_idempotent | Combined nucleus + lens retraction is idempotent |
| retractedStep_preserves_fixed | Retracted gradient descent stays in the fixed-point subspace |
| sheaf_forward_cocycle | Transport through intermediate lenses is path-independent |
| sheaf_restriction_commutes | Lens restriction commutes with nucleus retraction |
| graph_tensor_incomparable | Graph and Tensor are incomparable — neither subsumes the other |
| clifford_topology_incomparable | Clifford and Topology are incomparable |
| omega_reaches_all | Omega lens can safely transport to every other lens |
| glue_singleton | Gluing a single local proof succeeds under safe transport |
| glue_same_lens | Gluing two proofs from the same lens succeeds when cocycle checks pass |
| resolution_monotone_of_basisSubset | Coarser lenses have lower resolution |
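For a feel of the idempotence shape these lemmas share, here is a toy Lean 4 sketch. The definitions are illustrative stand-ins, not the project's FSum or nucleusR:

```lean
-- Toy retraction: zero out every coefficient outside the stable basis.
def retract (stable : String → Bool) (w : String → Float) : String → Float :=
  fun c => if stable c then w c else 0

-- The shape of projectToFixedWeights_idempotent, in miniature:
theorem retract_idempotent (stable : String → Bool) (w : String → Float) :
    retract stable (retract stable w) = retract stable w := by
  funext c
  unfold retract
  cases h : stable c <;> simp [h]
```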
Orchestration surface
The Lean kernel modules are wrapped by a suite of Python orchestration tools exposed via MCP (Model Context Protocol) for both human and agent use.
Sheaf Glue E2E Workflow
One-command pipeline: unlock + innovation review → prove-now closure → transport regression, with durable step artifacts.
Drag-Aware Unlock Engine
Builds proved-class closure, scores nearby expansion candidates, and auto-tunes objective weights for maximal unlock gain under proof-cost constraints.
Sheaf Glue Transport
Selects glue chains, transports equivalent candidates across proved edges, and emits projection obligations with unresolved pair blockers.
Promotion Gate
Enforces pairwise completeness, unresolved-pair zero policy, and full directed chain coverage before promoting results.
Innovation Review
Overlay-gated enrichment, self-consistent conjecture synthesis, reflection-memory penalties, diversity rerank, and falsification checks.
Unlock KPI Dashboard
Aggregates trends across workflow runs: unlock gain, seeded rows, strict/fallback mode rates, objective convergence.
Live experiment: drag-along graph analysis
The unlock engine was run against the full HeytingLean ontology graph—121 nodes across 6 semantic domains, 14,520 directed edges, 72 proved. The transport planner found glue chains and the unlock engine evaluated 3,982 candidate pairs to recommend the highest-value expansion targets.
How the drag-along pipeline works
The sheaf transport engine runs a six-stage pipeline. Each stage feeds the next—from loading the raw ontology graph through to a classified research output with actionable build/prove recommendations.
Graph Ingest
Load the ontology graph from the glued chain registry. Parse nodes, proved edges, and equivalence classes. Compute hub centrality and drag neighborhoods.
Transport Planner
Select an anchor drag goal and a transport profile (e.g. fixedpoint_self_reference). Identify the glue chain — the sequence of semantic nodes the drag must traverse.
Glue Path Finder
Find confidence-weighted shortest paths between chain nodes using proved edges. Transport candidates across glue paths and emit projection obligations for each hop.
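Confidence-weighted shortest paths reduce to ordinary shortest paths under a negative-log transform: maximizing a product of edge confidences is the same as minimizing the sum of their -log values. A sketch with made-up edge data:

```python
# Dijkstra over -log(confidence): the cheapest path under this weight is the
# highest-confidence glue path. Edge confidences here are illustrative only.
import heapq, math

EDGES = {  # proved edges with confidences (made-up values)
    "eigenform": [("euler", 0.80), ("sheaf", 0.50)],
    "euler": [("sheaf", 0.90)],
    "sheaf": [],
}

def best_path(src, dst):
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path, math.exp(-cost)  # path plus overall confidence
        if node in seen:
            continue
        seen.add(node)
        for nxt, conf in EDGES.get(node, []):
            heapq.heappush(pq, (cost - math.log(conf), nxt, path + [nxt]))
    return None, 0.0

path, conf = best_path("eigenform", "sheaf")
assert path == ["eigenform", "euler", "sheaf"]  # 0.80 * 0.90 beats direct 0.50
```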
Unlock Engine
Evaluate all candidate pairs against a 5-factor objective (unlock, near, hub, drag, cost). Auto-tune weights over gradient steps. Rank top candidates by projected edge unlock.
Innovation Review
Generate conjectures from top candidates. Score each by utility, confidence, and uncertainty. Assign scheduler buckets (prove_now / explore) and schedule arms.
Research Analysis
Cross-reference generated conjectures against the existing Lean codebase. Classify into tiers by mathematical depth: routine closure, bridge extensions, or substantive buildout.
Each stage is idempotent and produces a JSON artifact. The full pipeline can be re-run against any ontology graph snapshot. Below: the live results from running this pipeline on the HeytingLean 121-node graph.
| METRIC | VALUE | NOTE |
|---|---|---|
| Nodes | 121 | Ontology concepts |
| Edges | 14,520 | Directed potential |
| Proved | 72 | Kernel-verified |
| Classes | 113 | Equivalence classes |
| Largest | 9 | Superclass members |
| Domains | 6 | Semantic categories |
Transport planner: Euler anchor drag
Goal: invariant_anchor (drag_update euler r_nucleus) = invariant_anchor euler
Glue chain: eigenform → euler → sheaf
Profile: fixedpoint_self_reference
- eigenform→euler via fixedpoint_transport (conf 0.80)
- eigenform→sheaf via pairwise_required (conf 0.80)
- 6/6 required pairs resolved
- 12 transported candidates
- 0 unresolved blockers
Unlock engine: multi-objective optimization
The engine evaluated 3,982 candidate pairs and selected the top 16 by a 5-factor objective, auto-tuned over 38 gradient steps. Each of the top 16 candidates unlocks 18 directed edges through equivalence-class merging—288 new edges in total, a 4× increase over the 72 currently proved.
| FACTOR | WEIGHT | MEANING |
|---|---|---|
| Unlock | 0.44 | Edge gain |
| Near | 0.20 | Drag proximity |
| Hub | 0.14 | Centrality |
| Drag | 0.12 | Drag bonus |
| Cost | 0.10 | Proof cost |
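Assuming the factors combine linearly with cost counted against a candidate (a plausible reading of the objective, not confirmed by the source), the tuned weights give a score like:

```python
# 5-factor unlock objective as a weighted linear score. Weights are the tuned
# values from the run above; the sign convention for cost and the feature
# values are assumptions for illustration.
WEIGHTS = {"unlock": 0.44, "near": 0.20, "hub": 0.14, "drag": 0.12, "cost": 0.10}

def score(features):
    gain = sum(WEIGHTS[k] * features[k] for k in ("unlock", "near", "hub", "drag"))
    return gain - WEIGHTS["cost"] * features["cost"]  # cost penalizes

cheap = {"unlock": 0.9, "near": 0.5, "hub": 0.3, "drag": 0.2, "cost": 0.1}
costly = dict(cheap, cost=0.9)
assert score(cheap) > score(costly)  # higher proof cost lowers the ranking
```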
Research findings: three-tier classification
The innovation review generated 24 conjectures across two themes—anchor-invariant drag and transport coherence. Analysis against the existing codebase classified them into three tiers:
| TIER | COUNT | MATH DEPTH | ACTION | UNLOCK |
|---|---|---|---|---|
| 1: Routine closure | 13 | Trivial | PROVE | +234 edges |
| 2: Bridge extensions | 5 | Shallow | PROVE | +36 edges |
| 3: Substantive buildout | 3 | Genuine | BUILD | Semantic grounding |
Grothendieck closure as nucleus. J.close is proved inflationary, idempotent, meets-preserving, and pullback-stable. Three-way equivalence: range ≅ fixed points ≅ IsClosed.
Frame-level J-ratchet fixed point. FrameLevelFixedPointCoreContract ↔ OntologicalDriverContract. The abstract mathematical fixed-point property IS the ratchet dynamics contract.
Lens section / belief state projection. Cross-lens roundtrip hooks and singleton glue obligations from the perspectival plenum sheaf category.
CID: bafybeigg4773f7d63vqk4a7kcdszfkmvwfftczs7dlrojm7dsdtx663zpq
date: 2026-03-14T02:25:29Z
engine: atp_sheaf_glue_unlock_engine_v1 + atp_sheaf_glue_transport_v1
graph: 121 nodes, 14520 edges, 72 proved, 113 eq classes
candidates_evaluated: 3,982
auto_tune_steps: 38
projected_unlock: 288 directed edges (top 16)
conjectures_generated: 24 accepted (12 prove_now, 12 explore)
git_head: 4410bef8a5
Current (v0.1)
- ✓ 7-lens lattice with proved subset/incomparability structure
- ✓ Lax cross-lens transport with cocycle verification
- ✓ Multi-hop search (origin → 1-hop → 2-hop)
- ✓ Retracted gradient descent with fixed-point preservation
- ✓ GoalEncoder / CombToExpr / TacticDecoder / KernelVerifier pipeline
- ✓ Zero sorry/admit across all transport modules
- ✓ MCP orchestration tools for end-to-end workflows
In progress
- ● Full Čech cohomology obstruction classes (currently formalized for contextuality, graduating to general sheaf transport)
- ● KAN-based tactic selector training for lens-aware routing
- ● Koopman lifting from categorical bridge to full weight-space initialization
- ● Benchmark suite comparing gradient modes across lens configurations
Theoretical foundations
Sheaf theory on sites. Grothendieck topologies and the sheaf condition as formalized in HeytingLean’s PerspectivalPlenum.SheafLensCategory module, with MatchingFamily and Amalgamates predicates.
Nucleus operators. The R-nucleus provides the retraction that makes the continuous-discrete bridge algebraically coherent. Fixed points form a stable subspace closed under all engine operations.
Perspectival descent carriers. Lax transport induces a PDC structure via fixed points of local idempotent round-trips, connecting the transport layer to HeytingLean’s perspectival architecture.
Differentiable theorem proving. Gradient-based proof search over combinator algebras with algebraic (not statistical) correctness guarantees. The retraction ensures the optimization lives in the proof space, not an ad hoc embedding.
The Sheaf Transport Engine is part of the HeytingLean formal verification framework. Core modules: ATP.DifferentiableATP.SheafResolution, ATP.DifferentiableATP.SheafTransport, ATP.LensTransport.*.
