Nucleus Grafting: Verified Quantization-Aware NN Compression
When a neural network is quantized to INT8 for deployment, the set of representable activation values forms a finite Heyting algebra. The ReLU gate on this lattice is a nucleus operator — extensive, idempotent, and meet-preserving — and this structure necessarily introduces an irreducible information gap that decomposes additively into quantization, gate, and reconstruction components.
What happens to information when neural network activations are quantized? We show the answer is algebraic: the quantization gate is a nucleus on a Heyting algebra, and the Hossenfelder no-go theorem guarantees that a non-Boolean nucleus creates an unavoidable boundary gap. This gap decomposes additively, the gate component dominates (5–6× quantization), and bitwidth monotonicity is provable: coarser quantization means larger gap.
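In symbols, the additive decomposition reads as follows (our notation, a sketch of the hAdditive hypothesis rather than its formal Lean statement):

```latex
\Delta_{\mathrm{total}} \;=\; \Delta_{\mathrm{quant}} \;+\; \Delta_{\mathrm{gate}} \;+\; \Delta_{\mathrm{recon}},
\qquad
\Delta_{\mathrm{gate}} \approx 5\text{--}6 \times \Delta_{\mathrm{quant}}
```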
- qRelu(z, q) = max(q, z) — proved extensive, idempotent, and meet-preserving on {-128, ..., 127}
- Product lattice non-Booleanity — canonical incomparable pair witnessed by a=[0,1], b=[1,0]
- Additive gap decomposition — total = quantization + gate + reconstruction (hAdditive)
- Bitwidth monotonicity — activationFixedSet_mono: a higher zero-point means fewer fixed vectors and a larger gap
- Non-Boolean boundary — graftBoundaryNucleus is never Boolean, connecting to the Hossenfelder no-go theorem
- 3 model architectures — MLP, CNN, and autoencoder (NBA)
- 65,536-pair exhaustive verification — zero nucleus axiom violations per layer
- Gate dominance — the gate component is 5–6× the quantization component
- Bitwidth monotonicity confirmed — INT8 > INT16 > FP16 gap across all models
- 74–96% gap optimization — via the fp16_threshold_p10 method
- α,β-CROWN bounds — neural network verification via auto_LiRPA
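The exhaustive nucleus check described above can be re-created in a few lines of Python. This is our illustrative sketch, not the project's harness: the names `q_relu` and `check_nucleus` are hypothetical, and on the totally ordered INT8 lattice the meet is simply `min`.

```python
# Exhaustive check that qRelu(z, q) = max(q, z) is a nucleus on the
# INT8 chain {-128, ..., 127}: extensive, idempotent, meet-preserving.
# Illustrative re-creation; names are ours, not the project's API.

INT8 = range(-128, 128)

def q_relu(z: int, q: int) -> int:
    """Quantized ReLU gate with zero-point q."""
    return max(q, z)

def check_nucleus(q: int) -> bool:
    gate = lambda z: q_relu(z, q)
    for z in INT8:
        if not z <= gate(z):          # extensive: z <= j(z)
            return False
        if gate(gate(z)) != gate(z):  # idempotent: j(j(z)) = j(z)
            return False
    # meet-preserving over all 256 x 256 = 65,536 pairs:
    # j(a /\ b) = j(a) /\ j(b), with /\ = min on a chain
    for a in INT8:
        for b in INT8:
            if gate(min(a, b)) != min(gate(a), gate(b)):
                return False
    return True

print(check_nucleus(0))  # zero-point 0 is the standard ReLU; prints True
```

The same loop run once per layer, with the layer's own zero-point, matches the "zero nucleus axiom violations per layer" claim above.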
A mathematically principled theory of quantization loss. The gap decomposition tells you where information is lost (gate >> quantization) and the monotonicity theorem tells you how much you lose at each bitwidth. The optimization results show the gap is largely recoverable.
First verified connection between Heyting algebra nuclei and neural network quantization. The activationFixedSet_mono lemma and bitwidthFamily construction provide a template for formalizing other quantization-aware architectures.
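A small Python model makes the activationFixedSet_mono lemma concrete (the lemma name comes from the Lean development; this sketch is ours, not the formal proof). The fixed set of the gate j_q(z) = max(q, z) is {z : j_q(z) = z}, i.e. every INT8 value at or above the zero-point q:

```python
# Illustrative analogue of activationFixedSet_mono: raising the zero-point
# shrinks the gate's fixed set, so fewer representable activation values
# survive the gate and the boundary gap grows.

def fixed_set(q: int) -> list[int]:
    """Values of the INT8 chain fixed by the gate z -> max(q, z)."""
    return [z for z in range(-128, 128) if max(q, z) == z]

for q in (-64, 0, 64):
    print(q, len(fixed_set(q)))  # -64 192 / 0 128 / 64 64
```

The fixed-set size is 128 − q, strictly decreasing in q, which is the single-coordinate version of the "higher zero-point, fewer fixed vectors, larger gap" statement above.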
Formal Verification Certificate
Sorry-free formalization with a 2-round hostile adversarial audit. All findings remediated. Silver-tier TieredCertificate.
Nucleus Grafting • Lean 4 + Mathlib • Apoth3osis Labs
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
Frank Herbert, Dune
A janitor who proves a theorem outranks a tenured professor who publishes noise.
Not as a slogan. As a structural fact of how the network operates. The only currency that matters is the quality of your contribution, measured not by committee but by mathematics.
A valid, original framing or conjecture
Formal argument with paper-level rigor
Connecting theory to observable outcomes
Working software the project depends on
Reproducible research with methodology and data
Machine-verified claim checked by a proof assistant
Foundational, load-bearing implementation
Connecting subsystems or knowledge domains end-to-end
Strategic direction & roadmaps
Writing, documentation & papers
Visual, UX & information design
Teaching, tutorials & workshops
Community, outreach & events
Partnerships, governance & policy
Translation, media & accessibility
Ethics, review & quality assurance
Every accepted contribution receives a MENTAT Contribution Record — cryptographically signed, IPFS-pinned, permanently yours. No committee decides your worth. The type checker does.
