Apoth3osis
<_RESEARCH/RULIOLOGY

Hybrid Zeckendorf Arithmetic

VERIFIED · 6 PHASES · 11 LEAN MODULES · 68 THEOREMS · 0 SORRY · 7 EXPERIMENTS · 55x OVER GMP · R8 COMPLETE · Lean 4 + Rust
>THE_QUESTION

Can a formally verified number system beat GMP?

GMP is the gold standard for arbitrary-precision arithmetic. It is hand-optimized assembly, decades of engineering, used everywhere from cryptography to computer algebra. Beating it at anything, ever, is not supposed to happen—and certainly not with a formally verified implementation.

Vladimir Veselov's Hybrid Zeckendorf paper proposes an alternative: instead of flat bit strings, decompose integers into a doubly-exponential weight hierarchy where each level carries a small Fibonacci-encoded coefficient. A million-bit number uses only ~11 levels. For sparse numbers (few nonzero Fibonacci indices), addition is O(K) where K is the total index count—versus GMP's O(N/64) scan of every machine word.
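A minimal Rust sketch of the shape such a representation might take (names and layout are ours for illustration; the repo's actual types live in flat_hybrid.rs):

```rust
// Illustrative sketch of a hybrid Zeckendorf layout (names are ours, not
// the repo's API). Each active level stores the Zeckendorf (Fibonacci)
// indices of its small coefficient; absent levels are implicitly zero.
#[derive(Debug, Clone)]
struct SparseHybrid {
    // (level, Fibonacci indices of that level's coefficient), sorted by level
    levels: Vec<(u32, Vec<u32>)>,
}

impl SparseHybrid {
    // K = total count of Fibonacci indices across all active levels.
    // Lazy addition touches only these K entries, independent of bit length.
    fn index_count(&self) -> usize {
        self.levels.iter().map(|(_, idxs)| idxs.len()).sum()
    }
}

fn main() {
    // A very sparse number: 3 active levels out of the full hierarchy.
    let x = SparseHybrid {
        levels: vec![(0, vec![2]), (5, vec![3, 6]), (9, vec![4])],
    };
    println!("active levels = {}", x.levels.len()); // 3
    println!("K = {}", x.index_count());            // 4
}
```

The point of the layout: the cost of a sparse operation is bounded by `index_count()`, not by the number's bit length.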

We formalized the complete theory in Lean 4 (68 theorems, zero sorry) and built a rigorous Rust benchmark suite with anti-hack constraints. The result: a 55x speedup over GMP on sparse addition at 1M bits. To our knowledge, this is the first empirically demonstrated case where a formally verified number representation outperforms GMP at any operation, at any scale, under fair benchmarking conditions.

>RULIOLOGY_CONTEXT

Why this is Ruliology

Veselov's paper proposed the representation. Our Lean formalization proved its mathematical properties. The ruliology—the discovery that emerged through the verification process—changed how we understand the system.

Formalizing the normalization algorithm found a bug in the paper. Algorithm 3 specifies descending-level carry (reverse=True), but the carry semantics require ascending order. Our Lean proofs use independent carryAt calls with fuel; our Rust implementation uses bottom-up iteration. Both are correct despite the pseudocode error. This is exactly the kind of finding that only emerges when you force every claim through a type checker.

The closure operator result was unexpected. NucleusBridge proves that normalization is idempotent (normalizing a normal form returns itself), which sounds obvious but requires careful proof. The negative result, carryAt_not_pointwise_monotone, was a surprise: carry at one level can decrease the coefficient at another level. This subtlety is invisible in informal reasoning.

The deepest ruliology finding: formal verification and computational advantage are not in tension. The structural properties that make the Lean proofs work (evaluation preservation, canonical uniqueness, carry soundness) are the same properties that make the computation fast. The proof is the optimization.

>ARCHITECTURE
Lean 4 Formalization (HeytingLean/Bridge/Veselov/HybridZeckendorf/)
  WeightSystem.lean ───── w(k+2) = w(k+1)², closed form, strict mono
  HybridNumber.lean ──── core types, evaluation maps
  FibIdentities.lean ─── Fibonacci identities, Zeckendorf helpers
  Normalization.lean ─── normalize_sound, normalize_canonical, carry
  Addition.lean ──────── addLazy_eval, add_correct
  Multiplication.lean ── multiplyBinary_correct
  DensityBounds.lean ─── active_levels_bound ≤ log₁₀₀₀(n) + 2
  Uniqueness.lean ────── canonical_eval_injective
  NucleusBridge.lean ─── normalize_is_closure_operator (idempotent)
  DragStability.lean ─── transport stability
  TransportCoherence ─── round-trip coherence
       │
  CAB Pipeline (Lean → C → Rust)
       │
Rust Benchmark Suite (bench/hybrid_zeckendorf/, 2,415 lines)
  weight.rs ─────── weight hierarchy (// Mirrors: WeightSystem)
  zeckendorf.rs ─── Zeckendorf encoding (// Mirrors: FibIdentities)
  flat_hybrid.rs ── HybridNumber types (// Mirrors: HybridNumber)
  normalization.rs  normalize + carry (// Mirrors: Normalization)
  arithmetic.rs ─── add, multiply (// Mirrors: Addition/Multiplication)
  sparse.rs ─────── sparse number construction + E4/E5/E7
  density.rs ────── density measurement (// Mirrors: DensityBounds)
       │
Experiments (7 total)
  E1: modular exponentiation
  E2: polynomial multiplication
  E3: random density
  E4: sparse addition (CORE RESULT)
  E5: lazy accumulation
  E6: sparse modular exponentiation
  E7: crossover determination
>BENCHMARK_RESULTS

Sparse Addition: HZ vs GMP (E4)

100,000 bits (30 samples)

ρ     | K     | HZ (ns) | GMP (ns) | Speedup
10⁻⁵  | 1     | 48      | 784      | 15.2x
10⁻⁴  | 14    | 112     | 840      | 7.0x
10⁻³  | 144   | 272     | 912      | 3.4x
10⁻²  | 1,440 | 688     | 920      | 1.2x

1,000,000 bits (30 samples)

ρ     | K      | HZ (ns) | GMP (ns) | Speedup
10⁻⁵  | 14     | 112     | 6,310    | 55.4x
10⁻⁴  | 144    | 272     | 6,192    | 22.8x
10⁻³  | 1,440  | 784     | 6,088    | 8.2x
10⁻²  | 14,404 | 3,312   | 6,272    | 1.9x

SCALING

Advantage grows with input size: 2.4–3.6x per 10x increase in N. GMP is flat at ~6,200 ns (scans all words). HZ scales only with K.

CROSSOVER (E7)

ρ* ≈ 5.6×10⁻³ at 100k bits. ρ* ≈ 3.2×10⁻² at 1M bits. Advantage regime expands 6x with input size.

LAZY ACCUM (E5)

1000 sparse additions: 50–593x faster than eager. R5–R8 normalizer campaign resolved: split-update kernel passes S9 scaling gate at ρ=10⁻⁵ (32x→20x speedup, ratio 0.61). Normalization cost reduced 64x per pass vs original Vec kernel.

>MATERIALIZED_ARITHMETIC

Normalization Campaign: R5–R8

Lazy addition is O(K), but the deferred normalization cost matters for materialized arithmetic (add + normalize). The R5–R8 campaign diagnosed and fixed the sparse normalizer’s scaling bottleneck through four rounds of hostile-audited implementation and measurement.

ρ = 10⁻⁵ (very sparse, 1M bits)

Count | Speedup | norm (ns) | S9
10    | 32.2x   | 1,232     |
100   | 29.3x   | 13,488    |
500   | 23.9x   | 92,272    |
1000  | 19.6x   | 244,833   | PASS (0.61)

Normalizer Kernel Evolution

Kernel           | norm (ns) | fuel | ns/pass
R5 Vec::insert   | 244,762k  | 5    | 48,952k
R7 BTreeMap      | 45,847k   | 9    | 5,094k
R8 split-update  | 6,831k    | 9    | 759k

At ρ=10⁻⁴, count=1000. R8 is 64x cheaper per pass than Vec, 6.7x cheaper than BTreeMap.

RESOLVED

R8 split-update sorted Vec: binary-search existing entries for O(1) update, buffer new entries for single O(K+D) merge after each pass. Eliminates Vec::insert's O(K) per-element shifting while preserving cache-friendly sorted storage. S9 scaling gate passes at ρ=10⁻&sup5;.
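
The split-update idea can be sketched as follows (our simplification: generic key/delta entries instead of the real carry records, and a sort in place of a true two-way O(K+D) merge; names are ours, not the repo's):

```rust
// Sketch of the R8 split-update strategy on a sorted Vec.
#[derive(Debug)]
struct SplitUpdate {
    sorted: Vec<(u64, i64)>,  // existing entries, sorted by key
    pending: Vec<(u64, i64)>, // new entries buffered during a pass
}

impl SplitUpdate {
    fn add(&mut self, key: u64, delta: i64) {
        match self.sorted.binary_search_by_key(&key, |e| e.0) {
            // Hit: O(log K) search, O(1) in-place update, no element shifting.
            Ok(i) => self.sorted[i].1 += delta,
            // Miss: defer insertion instead of paying Vec::insert's O(K) shift.
            Err(_) => self.pending.push((key, delta)),
        }
    }

    // One merge at end of pass restores sorted storage.
    fn merge(&mut self) {
        self.sorted.append(&mut self.pending);
        self.sorted.sort_by_key(|e| e.0);
        // Fold duplicates produced by repeated deferred keys into one entry.
        self.sorted.dedup_by(|later, kept| {
            if kept.0 == later.0 {
                kept.1 += later.1;
                true // remove `later`; its delta is folded into `kept`
            } else {
                false
            }
        });
    }
}

fn main() {
    let mut s = SplitUpdate { sorted: vec![(1, 10), (5, 50)], pending: Vec::new() };
    s.add(5, 1); // existing key: updated in place
    s.add(3, 7); // new key: buffered
    s.add(3, 2); // same new key again: buffered, folded at merge
    s.merge();
    println!("{:?}", s.sorted); // [(1, 10), (3, 9), (5, 51)]
}
```

The design trade-off is visible here: within a pass, a deferred entry cannot receive further in-place updates, which is exactly the cascading limitation discussed under OPEN QUESTION.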

OPEN QUESTION

Deferred merge prevents within-pass cascading, adding a 1.8x fuel tax at denser regimes (ρ=10⁻⁴). S9 fails there (ratio 0.24). Restoring partial cascading or a split-phase normalization algorithm could close this gap.

>FORMAL_BOUNDARY

What is proved vs. what is measured

FORMALLY VERIFIED (Lean 4)

  • w(i+1) = 1000^(2^i) (weight closed form)
  • Normalization preserves evaluation
  • Canonical forms are unique (injective)
  • Lazy addition preserves value
  • Multiplication correct
  • Active levels ≤ log₁₀₀₀(n) + 2
  • Normalization is idempotent closure operator
  • Carry is NOT pointwise monotone (negative result)

EMPIRICALLY MEASURED (Rust bench)

  • Sparse addition 15–55x faster than GMP
  • Dense arithmetic 5–48x slower than GMP
  • Crossover density ρ* ≈ 5.6×10⁻³ – 3.2×10⁻²
  • Lazy accumulation 50–593x faster than eager
  • R5–R8 normalizer campaign: split-update kernel (64x cheaper per pass)
  • Fuel profiling: Vec vs BTreeMap vs split-update convergence dynamics
  • Materialized arithmetic: 32x at count=10, 20x at count=1000 (ρ=10⁻⁵)
  • End-to-end pipeline with deferred normalization
>PHASES
PHASE 1 · VERIFIED · 12 THM
3 modules (270 lines)

Weight System & Core Types

Super-exponential weight hierarchy w(0)=1, w(1)=1000, w(k+2)=w(k+1)². Closed-form: w(i+1) = 1000^(2^i). Strict monotonicity. Core HybridNumber and LazyHybridNumber types with evaluation maps. Fibonacci double identity and Zeckendorf encoding helpers.

WeightSystem.lean · HybridNumber.lean · FibIdentities.lean

Key Results

  • weight_closed (w(i+1) = 1000^(2^i))
  • weight_strict_mono (i < j ⟹ w(i) < w(j))
  • eval_fromNat (evaluation map correctness)
  • fib_double_identity, lazyEvalFib_append
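
The closed form can be spot-checked mechanically for the levels that fit in a machine integer (a demo of the recurrence, not the Lean proof):

```rust
// Check the closed form w(i+1) = 1000^(2^i) against the recurrence
// w(0) = 1, w(1) = 1000, w(k+2) = w(k+1)^2, for levels that fit in u128.
fn main() {
    let mut w: Vec<u128> = vec![1, 1000];
    while let Some(&last) = w.last() {
        match last.checked_mul(last) {
            Some(sq) => w.push(sq), // w(k+2) = w(k+1)^2
            None => break,          // next level exceeds u128; stop the demo
        }
    }
    // Five levels fit: 1, 1000, 10^6, 10^12, 10^24 (10^48 overflows u128).
    for i in 0..w.len() - 1 {
        // Closed form: w(i+1) = 1000^(2^i)
        assert_eq!(w[i + 1], 1000u128.pow(1u32 << i));
    }
    println!("verified closed form for {} levels", w.len());
}
```

The doubling of the exponent at each step is the source of the "super-exponential" level spacing the density bound relies on.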
PHASE 2 · VERIFIED · 15 THM
Normalization.lean

Normalization & Carry

The central algorithmic content: inter-level carry that transforms any lazy representation into canonical form. normalize_sound: normalization preserves evaluation. normalize_canonical: the result is unique. carryAt_preserves_eval: each carry step preserves value. Errata: paper's Algorithm 3 specifies descending carry, but the semantics require ascending order.

Normalization.lean

Key Results

  • normalize_sound (evaluation preservation)
  • normalize_canonical (unique canonical form)
  • carryAt_preserves_eval (carry soundness)
  • interCarry_idempotent_of_canonical
  • Errata: carry must be ascending, not descending
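
The errata is easy to see in a toy model. The sketch below uses uniform base-1000 digits as a stand-in for HZ's non-uniform weights (our simplification, not the paper's algorithm): an ascending pass visits each emitted carry later in the same pass, while a descending pass can deposit a carry on a level it has already finished.

```rust
// Toy model of the Algorithm 3 errata with uniform base-1000 digits.
const BASE: u64 = 1000;

fn carry_at(digits: &mut Vec<u64>, k: usize) {
    if digits[k] >= BASE {
        let up = digits[k] / BASE;
        digits[k] %= BASE;
        if k + 1 == digits.len() { digits.push(0); }
        digits[k + 1] += up; // carry always moves to the next level up
    }
}

fn pass(digits: &mut Vec<u64>, ascending: bool) {
    let n = digits.len();
    let order: Vec<usize> =
        if ascending { (0..n).collect() } else { (0..n).rev().collect() };
    for k in order { carry_at(digits, k); }
}

fn main() {
    // value 1500 + 999*1000 = 1_000_500 in both runs
    let mut asc = vec![1500, 999];
    pass(&mut asc, true);   // ascending: carries cascade upward in one pass
    let mut desc = vec![1500, 999];
    pass(&mut desc, false); // descending: level 1 already visited, misses carry
    println!("{:?} vs {:?}", asc, desc); // [500, 0, 1] vs [500, 1000]
}
```

Both results still evaluate to 1,000,500 (carry preserves value either way); only the ascending order yields a canonical form in one pass.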
PHASE 3 · VERIFIED · 8 THM
2 modules

Arithmetic Operations

Lazy addition: concatenate payload vectors at each level without normalizing. addLazy_eval proves this preserves value. add_correct proves the normalized pipeline (lazy + normalize) is correct. Binary multiplication via shift-and-add. multiplyBinary_correct proves the multiplication algorithm correct.

Addition.lean · Multiplication.lean

Key Results

  • addLazy_eval (lazy addition preserves value)
  • add_correct (normalized addition preserves value)
  • multiplyBinary_correct
  • add_comm_repr (addition commutes at repr level)
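
The addLazy_eval idea can be illustrated in a few lines (our toy: weights 10^level instead of the real doubly-exponential hierarchy, and an explicit Fibonacci evaluator; names are ours): payload concatenation preserves evaluation with no carries at all.

```rust
// Toy lazy addition: concatenate Fibonacci-index payloads per level.
fn fib(n: u32) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n { let t = a + b; a = b; b = t; }
    a
}

type Lazy = Vec<(u32, Vec<u32>)>; // (level, Fibonacci indices), sorted by level

fn eval(x: &Lazy) -> u64 {
    x.iter()
        .map(|(lvl, idxs)| idxs.iter().map(|&i| fib(i)).sum::<u64>() * 10u64.pow(*lvl))
        .sum()
}

fn add_lazy(a: &Lazy, b: &Lazy) -> Lazy {
    let mut out = a.clone();
    for (lvl, idxs) in b {
        match out.binary_search_by_key(lvl, |e| e.0) {
            Ok(i) => out[i].1.extend_from_slice(idxs), // concatenate payloads
            Err(i) => out.insert(i, (*lvl, idxs.clone())),
        }
    }
    out
}

fn main() {
    let a: Lazy = vec![(0, vec![2]), (2, vec![4])]; // 1 + 3*100 = 301
    let b: Lazy = vec![(2, vec![3])];               // 2*100 = 200
    let c = add_lazy(&a, &b);
    // addLazy_eval: evaluation is preserved with no normalization performed
    assert_eq!(eval(&c), eval(&a) + eval(&b));
    println!("eval = {}", eval(&c)); // 501
}
```

This is the O(K) step: the work is proportional to the payload entries touched, never to the bit length of the operands.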
PHASE 4 · VERIFIED · 18 THM
3 modules

Uniqueness, Density & Nucleus

canonical_eval_injective: canonical forms are unique — equal evaluation implies identical representation. active_levels_bound: support cardinality ≤ log₁₀₀₀(n) + 2. NucleusBridge: normalization is an idempotent closure operator (sheaf-glue extension). Subtle negative result: carryAt_not_pointwise_monotone — carry is NOT monotone per-level.

Uniqueness.lean · DensityBounds.lean · NucleusBridge.lean

Key Results

  • canonical_eval_injective (unique representation)
  • active_levels_bound (≤ log₁₀₀₀(n) + 2 levels)
  • density_upper_bound
  • normalize_is_closure_operator (idempotent)
  • carryAt_not_pointwise_monotone (negative result)
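
The closure-operator claim can be exercised in a toy model (uniform base-1000 digits standing in for HZ's weights; our simplification, not the Lean development): once an ascending carry sweep has produced a normal form, running it again changes nothing.

```rust
// Toy illustration of normalize_is_closure_operator: an ascending carry
// sweep over base-1000 digits is idempotent on its own output.
const BASE: u64 = 1000;

fn normalize(mut d: Vec<u64>) -> Vec<u64> {
    let mut k = 0;
    while k < d.len() {
        if d[k] >= BASE {
            let up = d[k] / BASE;
            d[k] %= BASE;
            if k + 1 == d.len() { d.push(0); }
            d[k + 1] += up;
        }
        k += 1; // ascending: the carry just emitted is visited next
    }
    while d.len() > 1 && *d.last().unwrap() == 0 { d.pop(); }
    d
}

fn main() {
    let x = vec![2500, 1999, 7]; // value 2500 + 1999*10^3 + 7*10^6
    let once = normalize(x);
    let twice = normalize(once.clone());
    assert_eq!(once, twice); // idempotent: a normal form maps to itself
    println!("{:?}", once);  // [500, 1, 9]
}
```

Idempotence looks trivial here, but in the real system the non-uniform weights and Fibonacci coefficients make it a genuine proof obligation, which is why NucleusBridge carries its own module.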
PHASE 5 · VERIFIED
Rust benchmark

Benchmark: Dense Regime (E1–E3)

Establishing the baseline: HZ loses in the dense regime, as expected. E1 (modular exponentiation): 5–48x slower than GMP. E2 (polynomial multiplication): millions of times slower. E3 (random integer density): ρ ≈ 0.27 for random numbers — dense. The proved level-count bound holds with significant slack. These results confirm the theory: HZ advantage requires sparse inputs.

arithmetic.rs · density.rs

Key Results

  • E1: modular exponentiation 5–48x slower than GMP
  • E2: polynomial multiplication millions of times slower
  • E3: random density ρ ≈ 0.27 (dense regime)
  • Level-count bound holds with slack
  • Confirms: HZ advantage requires sparse inputs
PHASE 6 · VERIFIED
Rust benchmark

Benchmark: Sparse Regime (E4–E7)

The core positive result. E4 (sparse addition): 15.2x at 100k bits/ρ=10⁻⁵, 55.4x at 1M bits/ρ=10⁻⁵. GMP timing is flat (~6,200 ns at 1M) because it scans every word; HZ timing scales only with K. Scaling factor 2.4–3.6x per 10x input size. E5 (lazy accumulation): 50–593x faster than eager for 1000 sparse additions. E7 (crossover): ρ* ≈ 5.6×10⁻³ at 100k, ρ* ≈ 3.2×10⁻² at 1M — advantage regime expands 6x with input size.

sparse.rs · normalization.rs

Key Results

  • E4: 15.2x speedup at 100k/ρ=10⁻⁵, 55.4x at 1M/ρ=10⁻⁵
  • GMP flat at ~6,200 ns (scans all words); HZ scales with K only
  • Scaling factor 2.4–3.6x per 10x increase in N
  • E5: lazy accumulation 50–593x faster than eager
  • E7: crossover regime expands 6x from 100k to 1M bits
>_MENTAT.CERTIFICATES

Contribution Certificates

Immutable contribution records per MENTAT-CA-001. Each certificate is a cryptographically anchored MCR documenting authorship, contribution level, and provenance chain. Artifacts are content-addressed and pinned to IPFS.

MENTAT-CA-001|MCR-HZ-001
2026-02-26

MENTAT Contribution Record

IDEA

Conceptual Contribution

CONTRIBUTION LEVEL: IDEA

Ontological Engineer

Doubly-Exponential Weight Hierarchy for Number Representation

Contributor

Vladimir Veselov

Moscow Institute of Electronic Technology (MIET)

Conceptual insight that a doubly-exponential weight hierarchy w(0)=1, w(1)=1000, w(k+2)=w(k+1)² combined with Zeckendorf encoding at each level yields a number representation where sparse numbers (low density ρ) require dramatically fewer active levels than conventional representations. Key idea: a million-bit number uses only ~11 levels, enabling O(K) lazy addition vs GMP’s O(N/64) full-word scan.

MENTAT · Mesh-Encrypted Network for Trusted Autonomous Transactions · Immutable · Content-Addressed · Tamper-Proof
MENTAT-CA-001|MCR-HZ-002
2026-02-26

MENTAT Contribution Record

THEORY

Mathematical Foundation

CONTRIBUTION LEVEL: THEORY

Ontological Engineer

Hybrid Hierarchical Representation of Numbers Based on the Zeckendorf System

Contributor

Vladimir Veselov

Moscow Institute of Electronic Technology (MIET)

Complete mathematical framework: weight hierarchy w(k+2) = w(k+1)² with closed-form w(i+1) = 1000^(2^i). Zeckendorf coefficient encoding with uniqueness. Lazy addition algorithm (concatenate payloads without normalizing). Normalization via inter-level carry with evaluation preservation. Complexity analysis: O(K) for sparse inputs vs O(N/64) for GMP.

Builds Upon

MCR-HZ-001
MENTAT-CA-001|MCR-HZ-003
2026-02-26

MENTAT Contribution Record

APPLICATION

Applied Contribution

CONTRIBUTION LEVEL: APPLICATION

Ontological Engineer

Hybrid Zeckendorf Applied Arithmetic and Performance Analysis

Contributor

Vladimir Veselov

Moscow Institute of Electronic Technology (MIET)

Applied contribution specifying concrete algorithms for addition, multiplication, and normalization in the hybrid Zeckendorf system. Performance analysis showing 15–55x speedup over GMP for sparse addition at 100k–1M bits. Crossover density ρ* characterization. Lazy accumulation design enabling 50–593x faster batch operations by deferring normalization.

Builds Upon

MCR-HZ-001 · MCR-HZ-002
MENTAT-CA-001|MCR-HZ-004
2026-03-04

MENTAT Contribution Record

PROOF

Formally Verified

CONTRIBUTION LEVEL: PROOF

Lean 4 Formalization of Hybrid Zeckendorf Arithmetic

Contributor

Apoth3osis Labs

R&D Division

Machine-checked Lean 4 formalization of the complete hybrid Zeckendorf theory. 11 modules, 1,430 lines, 68 theorems, zero sorry/admit. Key results: weight_closed (w(i+1) = 1000^(2^i)), normalize_sound (evaluation preservation), normalize_canonical (unique canonical form), addLazy_eval (lazy addition correctness), canonical_eval_injective (representation uniqueness), normalize_is_closure_operator (idempotent normalization). Found errata in paper's Algorithm 3: carry must be ascending, not descending.

Builds Upon

MCR-HZ-001 · MCR-HZ-002 · MCR-HZ-003
MENTAT-CA-001|MCR-HZ-005
2026-03-06

MENTAT Contribution Record

KERNEL

Computationally Verified

CONTRIBUTION LEVEL: KERNEL

Hybrid Zeckendorf Verified Kernel — 7-Round Hostile Audit

Contributor

Apoth3osis Labs

R&D Division

Complete formal verification through seven rounds of adversarial hostile audit. All 68 theorems kernel-checked by Lean 4. Sheaf-glue closure extension proved normalization is an unconditional idempotent closure operator. Verified C and safe Rust artifacts generated via CAB pipeline. All artifacts content-addressed and pinned to IPFS.

Builds Upon

MCR-HZ-001 · MCR-HZ-002 · MCR-HZ-003 · MCR-HZ-004
MENTAT-CA-001|MCR-HZ-006
2026-03-08

MENTAT Contribution Record

BRIDGE

Cross-Level Connection

CONTRIBUTION LEVEL: BRIDGE

Rigorous Benchmark Suite: HZ vs GMP Under Anti-Hack Constraints

Contributor

Apoth3osis Labs

R&D Division

Bridge connecting formal proofs to empirical performance. Rust benchmark suite (2,415 lines, 14 unit tests) faithfully transliterating the Lean formalization, annotated with // Mirrors: HeytingLean.Bridge.Veselov.HybridZeckendorf.* references. 7 experiments across dense and sparse regimes against GMP (rug 1.24). Anti-hack constraints: black_box on all timed code, identical mathematical inputs, sparse numbers constructed at HZ level (not converted from flat integers), conservative GMP timing including clone overhead. Result: 15–55x speedup on sparse addition at 100k–1M bits.
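
The black_box discipline described here can be sketched as follows (an illustrative harness under our own names, not the repo's actual bench code; std::hint::black_box is the real standard-library optimizer barrier):

```rust
// Anti-hack timing pattern: wrap inputs and outputs in black_box so the
// optimizer cannot hoist, fold, or elide the timed work.
use std::hint::black_box;
use std::time::Instant;

fn timed<F: FnMut() -> u64>(samples: u32, mut f: F) -> u128 {
    let start = Instant::now();
    for _ in 0..samples {
        black_box(f()); // the result is observed, so the call cannot be elided
    }
    start.elapsed().as_nanos() / samples as u128
}

fn main() {
    let xs: Vec<u64> = (0..10_000).collect();
    // black_box on the input stops the sum from being precomputed at compile time.
    let ns = timed(30, || black_box(&xs).iter().sum());
    println!("mean {} ns/iter", ns);
}
```

The same barrier wraps both the HZ and GMP sides in the suite, so neither implementation can be optimized away asymmetrically.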

Builds Upon

MCR-HZ-001 · MCR-HZ-002 · MCR-HZ-003 · MCR-HZ-004 · MCR-HZ-005

Governed by MENTAT-CA-001 v1.0 · March 2026