We propose a geometric framework in which a single frequency pulse, separated by a decoherence interval, generates the dimensional structure of physical space and the organizing geometry of all crystalline matter. The framework, termed Time Ledger Theory (TLT), rests on three axioms: (1) a frequency pulse \(f\) separated by a decoherence parameter \(t\), written \(f|t\); (2) a decoherence ceiling at the ratio \(r = 0.5\), beyond which dimensional overflow occurs; and (3) the organizing pair \(\{2,3\}\) as the sole generators of periodic crystal geometry.
From these axioms, the theory derives a dimensional progression in which each spatial dimension possesses a characteristic framerate, spiral ratio, and void capacity. In three dimensions, the speed of light \(c\) emerges as the local framerate, and the golden ratio \(\phi \approx 1.618\) emerges as the governing spiral from the ratio of consecutive Fibonacci numbers in the framerate formula. Across 133 elements with known ambient-pressure crystal structures, all coordination numbers are products of 2 and 3 exclusively, with zero exceptions. Five-fold coordination is excluded from all periodic crystals, consistent with the crystallographic restriction theorem and the theory's identification of \(\{5\}\) as belonging to higher-dimensional cycles.
The theory proposes that dimensions organize in cycles of three (seed, flat, volumetric), with Cycle 1 spanning 1D through 3D and Cycle 2 beginning at 4D. At each dimensional boundary, a universal overflow mechanism produces characteristic signatures: chiral jets at the 2D-to-3D boundary (consistent with helium's anomalous properties at \({\sim}0.86\) meV), anti-particles at the 3D-to-4D boundary (pair production at 1.022 MeV), and a suggested new-physics component at the 4D-to-5D boundary (cosmic ray proton knee at \({\sim}3\) PeV). Chirality is derived as a consequence of framerate mismatch between consecutive dimensions rather than introduced as a separate axiom.
Three measured boundary energies, drawn from independent subfields of physics, yield an empirical scaling formula. Gravity is reinterpreted as the geometry of \(C_\mathrm{potential}\) curvature, consistent with general relativity, and conservation laws are proposed to be dimensionally local rather than universal. The framework generates specific, falsifiable predictions including dimension-dependent propagation speeds, a 5D-to-6D boundary energy of \({\sim}10^{25.3}\) eV, and a computational test (SIM-003) for deriving crystal structures from \(f|t\) axioms alone.
We document the theory's failures alongside its successes: SIM-003 iteration 1 produced identical results for all elements (the mass-dependent amplitude was absent from the wave equation), the original Fibonacci formula predicted dimensional regression at \(d > 4\) (replaced by the cycle-restart model), and the actinide resistivity prediction was falsified before being corrected through \(\{2,3\}\) decomposition of \(f\)-electron count. Three axioms, seven explicit assumptions, and six limitations are enumerated. All post-hoc explanations are distinguished from genuine predictions.
The standard model of particle physics contains 19 free parameters. General relativity introduces Newton's gravitational constant and the cosmological constant. Condensed matter physics relies on density functional theory with exchange-correlation functionals chosen to fit experimental data. Each domain achieves extraordinary predictive accuracy within its scope, but the combined apparatus involves dozens of inputs whose values are measured, not derived.
We ask a different question: what is the minimum input from which the maximum structure can be derived? This is a question of parsimony. If the answer is a single frequency pulse separated by a decoherence interval, and if the observed geometry of matter follows from that input without additional parameters, the resulting framework would possess an explanatory economy that warrants investigation regardless of whether it ultimately proves correct.
Time Ledger Theory (TLT) proposes that this minimum input is
\[
f|t,
\]
where \(f\) is a frequency pulse expressed in one dimension and \(t\) is the decoherence interval (the pause between successive pulses). The theory holds that it is the pause—not the frequency—that differentiates physical outcomes. This is a hypothesis whose consequences are tested throughout this paper.
The notation \(f|t\) encodes the following physical picture. A frequency pulse originates from a one-dimensional potential well of indefinite depth. The pulse is not a single event but a continuous process: pulse, pause, pulse, pause. The decoherence interval \(t\) is the breath between expressions. Without this interval, no geometric lattice could form—the system would be a featureless standing wave.
Time in this framework is a ledger: sequential, unidirectional, and local. Each frame records the binary output of quantum expression. The previous frame no longer exists in space; only the current frame is physically real. Information passes only forward. Time is the observer—no external measurement apparatus is required for state collapse.
The key insight is that \(t\) is not a fixed constant. It ranges according to the local curvature of the bandwidth potential, denoted \(C_\mathrm{potential}\). Where energy coalescence is dense, the bandwidth curve steepens, and the decoherence interval increases. Where the curve is shallow, the interval decreases. This variable decoherence is the mechanism through which a single frequency produces different geometric outcomes at different locations on the potential curve.
This paper presents the foundational axioms (Sec. ), the organizing pair \(\{2,3\}\) and its universality across crystalline matter (Sec. ), the dimensional progression including the cycle-restart model (Sec. ), the framerate interpretation of the speed of light (Sec. ), the decoherence overflow mechanism (Sec. ), the self-regulating \(C_\mathrm{potential}\) (Sec. ), gravity as geometry (Sec. ), verified predictions and boundary data (Sec. ), and the theory's falsification criteria and known failures (Sec. ).
Companion papers address the dimensional recursion framework in detail (Paper 2), the geometric cipher for crystal classification (Paper 3), internal void resonance in crystal lattices (Paper 4), and the ab initio crystal genesis simulation (Paper 9).
The proposal that geometry underlies physics has a long history. Plato associated the five regular solids with the classical elements. Kepler attempted to derive planetary orbits from nested Platonic solids. In the twentieth century, Einstein demonstrated that gravity is the geometry of spacetime, and the gauge theories of the standard model rest on the geometry of fiber bundles over spacetime manifolds.
More recently, several frameworks have explored geometry as foundational. Loop quantum gravity (LQG) quantizes spacetime geometry directly, producing spin networks in which nodes carry \(\mathrm{SU}(2)\) representations—notably, the fundamental representation is spin-\(1/2\), and area eigenvalues involve factors of 2 and 3. Garrett Lisi's E8 theory proposes that all particles and forces correspond to elements of the E8 Lie algebra, whose root system decomposes into \(\text{D}_4 + \text{D}_4\) components—the \(\text{D}_4\) lattice being the 4D structure whose Voronoi cell is the 24-cell. Conway's Game of Life, with its rule B3/S23 (birth at 3 neighbors, survival at 2 or 3), demonstrates that \(\{2,3\}\) as an organizing pair can generate unbounded complexity from simple initial conditions.
TLT differs from these frameworks in its starting point. Rather than beginning with a mathematical structure (Lie algebra, spin foam, cellular automaton) and mapping it to physics, TLT begins with a physical process (a frequency pulse with decoherence) and derives the mathematical structure. The \(\{2,3\}\) pair is not postulated—it emerges from the constraint that periodic lattices require translational symmetry, which excludes five-fold coordination by the crystallographic restriction theorem.
The theory does not compete with quantum field theory or general relativity at the level of quantitative prediction. It offers no Lagrangian, no \(S\)-matrix, no cross-sections. What it offers is a framework of parsimony: the claim that the geometry of matter follows from the minimum possible input, and that this framework organizes known facts from condensed matter, particle physics, and cosmology under a single interpretive umbrella.
We note convergences with the independent frameworks listed above not as evidence of correctness but as indicators that the \(\{2,3\}\) structure appears across multiple theoretical traditions. Convergence is suggestive, not probative.
The foundational axiom of TLT is expressed as:
\[
f|t,
\]
where \(f\) denotes a frequency pulse of unspecified magnitude, expressed in one spatial dimension, and \(t\) denotes the decoherence interval separating successive pulses. The decoherence interval is not fixed but governed by the local position on the bandwidth potential curve (\(C_\mathrm{potential}\), Sec. ). The system operates with a binary (on/off) pulse structure; the decoherence interval between pulses varies with local energy density.
The extended form includes amplitude:
\[
f + A|t,
\]
where \(A\) represents the amplitude of the system as measured by thermal energy and pressure. As \(A\) decreases, structural organization increases; the relationship is inverse. This extension was necessitated by SIM-003 iteration 1 (Sec. ), which demonstrated that \(f|t\) without mass-dependent amplitude produces identical outcomes for all elements.
In this framework, differentiation arises from the decoherence interval rather than the frequency itself. This is a hypothesis whose consequences are tested throughout this paper. The physical picture is that of a heartbeat: pulse, rest, pulse, rest. The sequential rest allows for two things: decoherence (the loss of phase information between pulses) and geometry (the formation of a lattice of interfering pulses).
The pause must be shorter than expression. If the decoherence interval exceeds the pulse duration, the system loses coherence entirely and no structure forms. This constraint defines the operating regime of the theory.
The decoherence ratio \(r\) is defined as the ratio of pause duration to total cycle time (pulse + pause). Within the mathematical framework developed in the B-chain analysis, the optimal decoherence ratio is \(r \sim 0.3\), and the hard ceiling is \(r = 0.5\).
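As a concrete illustration, the decoherence ratio and its ceiling check can be written in a few lines. This is a minimal sketch; the pulse and pause durations are hypothetical inputs, not values supplied by the theory:

```python
def decoherence_ratio(pulse, pause):
    """Ratio of pause duration to total cycle time (pulse + pause)."""
    return pause / (pulse + pause)

def overflows(pulse, pause, ceiling=0.5):
    """Dimensional overflow is mandatory once r reaches the ceiling."""
    return decoherence_ratio(pulse, pause) >= ceiling
```

Note that \(r = 0.5\) corresponds to a pause exactly equal to the pulse, so \(r < 0.5\) restates the constraint that the pause be shorter than expression; the optimal \(r \sim 0.3\) corresponds to a pause of roughly \(0.43\times\) the pulse duration.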
At \(r = 0.5\), the interference pattern collapses locally. Energy that cannot be contained within the current dimension's frame capacity must overflow into the next dimension. This ceiling is derived within the \(f|t\) framework; its physical manifestation as a dimensional boundary is supported by the helium data (Sec. , Sec. ) but has not been independently measured as a fundamental constant.
For clarity, the axioms of TLT are:
Axiom 1: A single frequency pulse \(f\), separated by decoherence interval \(t\). Amplitude \(A\) (proportional to mass-energy) emerges as a natural consequence when \(C_\mathrm{potential}\) introduces curvature (Sec. ). The extended form \(f + A|t\) is derived, not assumed.

Axiom 2: The coherence boundary \(r = 0.5\), beyond which dimensional overflow is mandatory. This is the structural limit of the geometry, not an imposed parameter.

All other elements—\(C_\mathrm{potential}\), \(\{2,3\}\), Fibonacci framerates, local conservation, black holes as dimensional boundaries—are derived consequences (Sec. ).
The theory implies a directional information pipeline:
\[
f|t \;\rightarrow\; \text{wave} \;\rightarrow\; \text{geometric constraint} \;\rightarrow\; \text{binary outcome} \;\rightarrow\; \text{ledger}.
\]
Each \(f|t\) pulse propagates as a wave containing all possible interference states. Geometry constrains the wave into specific structural patterns (which vertices activate, which voids form, which symmetries survive). The result is a discrete outcome—a binary geometric event—that is recorded in the dimensional lattice.
Time, in this framework, is not merely a ledger in the metaphorical sense. It is a crystal lattice: each pulse creates a geometric record that is structurally frozen into the dimensional fabric. The universe does not flow through time; time crystallizes the universe, one pulse at a time. The lattice grows. The record is permanent. This is what “time as a ledger” means physically: the geometric history of every pulse is encoded in the structure of space itself.
The coordination numbers observed in periodic crystal structures are:
\[
\{4,\ 6,\ 8,\ 12\} \;=\; \{2^2,\ 2\cdot 3,\ 2^3,\ 2^2\cdot 3\}.
\]
All four coordination numbers are products of 2 and 3 exclusively. No other prime appears. This is not a theoretical claim but an arithmetic fact verified against published crystallographic data (CRC Handbook, ICSD) for 133 elements with known ambient-pressure crystal structures. The verification was triple-audited: Claude (8/10), Gemini (8/10), Grok (9/10).
Caveat: Verified for all 133 elements with known ambient-pressure crystal structures. High-pressure polymorphs and metastable phases have not been systematically checked. Molecular elements (H\(_2\), N\(_2\), O\(_2\), F\(_2\), Cl\(_2\), etc.) are excluded from the crystal analysis; their coordination environments may not follow \(\{2,3\}\) products.
\(\{2\}\) contributes cubic/square geometry (right angles, independent axes). \(\{3\}\) contributes triangular/close-packed geometry (60-degree angles, equilateral coordination).
If factor 3 is present in the coordination number, triangular close-packed layers form, which implies metallic conductivity. If factor 3 is absent, no triangular layers form, which implies insulating (diamond, coord 4) or broadband (BCC, coord 8) behavior. This rule achieves 100\% accuracy for the 67 metallic elements in the catalogue.
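The \(\{2,3\}\) decomposition and the factor-3 conductivity rule are simple enough to state as code. This is a sketch of the arithmetic only; the 133-element catalogue itself is not reproduced here:

```python
def factor_23(n):
    """Decompose n as 2**a * 3**b; return (a, b), or None if another prime remains."""
    a = b = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    while n % 3 == 0:
        n //= 3
        b += 1
    return (a, b) if n == 1 else None

def predicts_metallic(coordination):
    """Factor 3 present -> triangular close-packed layers -> metallic conductivity."""
    factors = factor_23(coordination)
    return factors is not None and factors[1] > 0
```

For the four observed coordination numbers, \(4 = 2^2\) and \(8 = 2^3\) carry no factor of 3 (diamond insulating, BCC broadband), while \(6 = 2\cdot 3\) and \(12 = 2^2\cdot 3\) do (metallic).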
Five-fold rotational symmetry is incompatible with translational periodicity. This is a theorem (the crystallographic restriction theorem), not an observation. Regular pentagons cannot tile two-dimensional or three-dimensional space without gaps.
In the TLT framework, \(\{5\}\) is excluded from periodic crystals because it belongs to higher-dimensional cycles (Cycle 2, dimensions 4–6). The quasicrystals discovered by Shechtman, which exhibit icosahedral and decagonal symmetry, are aperiodic structures that the theory identifies as projections from higher-dimensional periodic lattices—a mathematical characterization that is independently established (de Bruijn 1981, Elser 1986).
A computational null hypothesis test compared six prime pairs (\(\{2,3\}\), \(\{2,5\}\), \(\{2,7\}\), \(\{3,5\}\), \(\{3,7\}\), \(\{5,7\}\)) against three criteria: generation of all crystal coordination numbers, exclusion of 5-fold coordination, and coverage of key numbers across atomic, particle, and cosmic scales.
Results:
\(\{2,3\}\) is the unique prime pair satisfying both the inclusion constraint (generate all observed coordination numbers) and the exclusion constraint (forbid 5-fold coordination). This addresses the “small-number bias” critique: \(\{2,3\}\) is not arbitrary but is the unique solution to a well-defined combinatorial problem.
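The inclusion and exclusion constraints can be checked exhaustively over the candidate prime pairs. This is a minimal sketch covering the first two criteria only; the cross-scale coverage criterion of the original test is not reproduced:

```python
from itertools import combinations

def generates(pair, targets, max_exp=5):
    """True if every target equals p**a * q**b for some exponents a, b."""
    p, q = pair
    products = {p**a * q**b for a in range(max_exp + 1) for b in range(max_exp + 1)}
    return all(t in products for t in targets)

COORDS = {4, 6, 8, 12}  # observed periodic-crystal coordination numbers

pairs = list(combinations([2, 3, 5, 7], 2))
winners = [pr for pr in pairs
           if generates(pr, COORDS)      # inclusion: all coordination numbers
           and not generates(pr, {5})]   # exclusion: forbid 5-fold coordination
```

Only \(\{2,3\}\) survives: any pair able to produce \(6 = 2\cdot 3\) must contain both 2 and 3, and \(\{2,3\}\) cannot produce 5.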
The \(\{2,3\}\) framework extends beyond crystal structure classification to quantitative material property prediction. A systematic analysis of ionization energy ratios across the \(d\)-block reveals that \(N\)-body interaction hierarchies (\(\{2\}\)-bond, \(\{3\}\)-plane, \(\{4+\}\)-stacking) determine crystal structure selection, with the progression HCP \(\to\) BCC \(\to\) FCC repeating across Periods 4, 5, and 6 with 70\% position-match consistency. Period 6 shows relativistic compression of the pattern consistent with \(6s\) electron contraction.
These results, including the geometric cipher's 96.9\% accuracy across the full periodic table for band gap, ductility, and conductor classification, are documented in companion papers [Papers 3 and 4, in preparation]. The present paper establishes the \(\{2,3\}\) framework; the companion papers develop its material-property consequences.
The theory proposes that dimensions organize in cycles of three, following the pattern seed \(\to\) flat \(\to\) volumetric. This is speculative for Cycle 2 and beyond, but consistent with the known progression of lattice packings for Cycle 1.
Cycle 1: Dimensions 1, 2, 3.

The original Fibonacci framerate formula (Sec. ) predicted that dimensional properties would regress at \(d > 4\). This prediction was wrong and has been replaced by the cycle-restart model. The correction is documented here as required by the integrity protocol.
The progression does not reverse because:
Nothing reverses. Each dimension adds structure. Space itself opens up: 1D has no internal space (pure oscillation), 2D is flat with surface gaps only, 3D has internal voids where energy can be trapped (mass), 4D splits further with wider internal space and additional void types. Each dimensional step creates geometric room for increasing complexity.
The dimensional progression is also a regime transition. At 1D, there is no topology—pure oscillation without spatial structure. At 2D, topology emerges as flat tiling (connectivity and adjacency govern). At 3D, topology opens—internal voids create depth, and geometry begins to take over from pure topology. At 4D, the transition completes: the space between structures becomes the topology itself, and geometry is the governing principle. This progression from topological to geometric regime is developed fully in the companion paper [Paper 3].
The structural oscillation at each dimension mirrors the 1D pulse at increasing complexity. The 1D pulse is the simplest case: on, off, on, off. At 2D, the structure swings between tiling patterns. At 3D, the geometric breathing of void formation and collapse echoes the original pulse. Each dimension expresses the same fundamental oscillation in a more organized, more complex form—the pulse never stopped, it scaled [Paper 3].
Cycles of 3 dimensions are themselves grouped in pairs, forming a meta-cycle. The \(\{2,3\}\) structure thus appears at every level: \(\{3\}\) within each cycle (three dimensions per cycle) and \(\{2\}\) across cycles (two cycles per meta-cycle).
This self-similarity of \(\{2,3\}\) across all scales of the theory is noted as a structural feature, not claimed as a derived result.
We propose that the measured speed of light is the framerate specific to three-dimensional space. This reinterpretation predicts dimension-dependent propagation speeds. The proposal is speculative and contradicts special relativity as currently formulated, in which \(c\) is a universal constant independent of dimensionality.
The physical picture: speed is equivalent to framerate. Distance is equivalent to the number of frames that have passed. Massless particles naturally travel at the local framerate because they carry no internal structure that would require additional frame processing. The speed of light constitutes the 3D framerate; this is proposed as the reason why massless particles naturally travel at \(c\).
The framerate increases with each dimension because each dimension adds geometric complexity that must be recorded by the time lattice. The recording medium—time—must sample faster to faithfully capture more complex structures. This is consistent with information theory: a channel carrying a more complex signal requires greater bandwidth.
1D oscillation requires the lowest sampling rate (simplest structure). 2D tiling requires more bandwidth to record the interference patterns. 3D volumetric geometry with internal voids requires still more—the framerate at 3D (the measured speed of light \(c\)) reflects the bandwidth needed to maintain coherent 3D structure. Higher dimensions require proportionally faster framerates because they contain proportionally more geometric information per pulse.
The framerate is not arbitrary. It is the minimum sampling rate that preserves coherent geometric structure at that dimension's complexity level. This directly parallels Nyquist's sampling theorem: the sampling rate must exceed twice the highest frequency component to prevent aliasing. The dimensional framerate IS the Nyquist rate for that dimension's geometric content.
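The aliasing analogy can be made concrete: sampling below the Nyquist rate folds a signal to a false lower frequency, i.e., the recorded structure is not the real one. The frequencies below are hypothetical illustration values, not physical framerates:

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency of a sinusoid at f_signal sampled at f_sample.

    Above the Nyquist rate (f_sample > 2 * f_signal) the true frequency
    is recovered; below it, the signal folds to a lower alias.
    """
    return abs(f_signal - f_sample * round(f_signal / f_sample))
```

A 3 Hz oscillation sampled at 8 Hz is recorded faithfully; sampled at 4 Hz it masquerades as 1 Hz.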
For Cycle 1, the framerate at dimension \(d\) is:
\[
c(d) \;=\; \frac{F(d+3)}{F(6)}\, c \;=\; \frac{F(d+3)}{8}\, c,
\]
where \(F(n)\) is the \(n\)th Fibonacci number (\(1, 1, 2, 3, 5, 8, 13, 21, …\)).
| Dim | Framerate (\(\times c\)) | Fibonacci sum | Status |
|---|---|---|---|
| 1D | 0.375\(c\) | 3/8 | Theoretical |
| 2D | 0.625\(c\) | 5/8 | Predicted |
| 3D | 1.000\(c\) | 8/8 | Measured (\(c\)) |
| 4D | 1.625\(c\) | 13/8 | Indicated\(^*\) |
| 5D | 2.625\(c\) | 21/8 | Speculative |
| 6D | 4.250\(c\) | 34/8 | Speculative |
The Fibonacci framerate formula is empirically motivated, not derived from first principles within the theory. For 4D, the predicted \(c_\mathrm{4D} = 1.625c\) is consistent with the Steinberg (1993) tunneling measurement of \((1.7 \pm 0.2)c\). This specific value is indicated by an external measurement, albeit with 12\% uncertainty from a single experiment. Predictions for all other dimensions await experimental test.
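The table values follow from a one-line formula that can be read off the Fibonacci-sum column: the numerator is \(F(d+3)\) (with \(F(1) = F(2) = 1\)) and the denominator is \(F(6) = 8\). A minimal sketch reproducing the table:

```python
def fib(n):
    """n-th Fibonacci number with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def framerate(d):
    """Cycle-1 framerate of dimension d in units of c: F(d+3) / 8."""
    return fib(d + 3) / 8

# Consecutive framerate ratios, which straddle phi = (1 + 5**0.5) / 2
ratios = [framerate(d + 1) / framerate(d) for d in range(1, 6)]
```

`framerate(3)` returns 1.0 (the measured \(c\)) and `framerate(4)` returns 1.625; the consecutive ratios 1.667, 1.600, 1.625, 1.615, ... land alternately above and below \(\phi \approx 1.618\), ever more tightly.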
The ratio of consecutive Fibonacci framerates is:
\[
\frac{c(3)}{c(2)} \;=\; \frac{8/8}{5/8} \;=\; \frac{8}{5} \;=\; 1.600.
\]
This ratio approximates \(\phi \approx 1.618\) (within 1.1\%). In the overflow mechanism (Sec. ), the framerate mismatch between 2D and 3D creates a helical trajectory whose pitch ratio is the framerate ratio. \(\phi\) governs 3D because the 2D-to-3D framerate ratio IS approximately \(\phi\). It is not imposed as a governing principle—it emerges from the Fibonacci framerate structure.
Similarly:
\[
\frac{c(4)}{c(3)} = \frac{13}{8} = 1.625,
\qquad
\frac{c(5)}{c(4)} = \frac{21}{13} \approx 1.615.
\]
The framerate ratios converge toward \(\phi\) as \(d\) increases, landing alternately above and below it, which is a known property of ratios of consecutive Fibonacci numbers.
A critical distinction: in 3D, \(\phi\) is RELATIONAL. It is attained by comparing two different measurement points—the ratio of two external references (e.g., the framerate of dimension \(d+1\) to dimension \(d\)). \(\phi\) in 3D describes a relationship BETWEEN things. This is why \(\phi\) appears as a ratio throughout 3D nature: spiral arms to core, successive Fibonacci numbers, shell proportions. Each instance requires two measurements to reveal the ratio.
In 4D, \(\phi\) transitions from relational to GEOMETRIC. The 24-cell—the characteristic geometry of 4D (Paper 3, Section III)—is self-dual: it IS its own dual polytope. The \(\phi\) ratio is no longer between two external references; it is built into the structure itself. The geometry contains its own reference. \(\phi\) becomes self-referential.
This transition—from \(\phi\) as a measured ratio (3D) to \(\phi\) as an intrinsic geometric property (4D)—is a direct consequence of the dimensional progression. In 3D, you need two points to find \(\phi\). In 4D, the geometry IS \(\phi\). This is not inconsequential: it implies that the organizing principle of the universe shifts from relational to self-contained at the 3D-to-4D boundary. The full implications of this transition are developed in the companion paper [Paper 3].
The Fibonacci formula may not hold for Cycle 2 (\(d \geq 4\)). The theory anticipates additional correction terms beyond Fibonacci as complexity increases, owing to the \(\{5\}\) geometric correction entering at Cycle 2. The framerate values tabulated above for \(d \geq 5\) are the Fibonacci baseline only; actual framerates may differ. The original formula predicted dimensional regression at \(d > 4\), which was physically wrong and led to the cycle-restart correction. The Cycle 2 quantitative framework remains incomplete.
At every dimension, the decoherence ratio \(r\) has a hard ceiling at 0.5. When \(r\) reaches 0.5, the interference pattern collapses locally. Energy that cannot be contained within the current dimension's frame capacity must overflow into the next dimension.
The boundary is the same physics every time. What changes is the energy scale at which the ceiling is reached and the overflow product that emerges:
Each overflow product is a preview of the next dimension's native structure. The overflow does not create something new—it exposes what already exists at the next level.
Boundary: 2D \(\to\) 3D \\ Energy scale: \({\sim}0.86\) meV (helium binding energy). \\ Overflow product: CHIRAL JETS.

What it reveals about 3D:
Boundary: 3D \(\to\) 4D \\ Energy scale: 1.022 MeV (electron–positron pair production). \\ Overflow product: ANTI-PARTICLES.

What it reveals about 4D:
Note: Electron-positron pair production at 1.022 MeV (CODATA 2018 ) is identified in our framework as the 3D-to-4D boundary. This identification is a theoretical proposal; pair production has a well-established QED explanation.
Boundary: 4D \(\to\) 5D \\ Energy scale: \({\sim}3\) PeV (cosmic ray proton knee). \\ Overflow product: UNKNOWN from 3D perspective.

Helium's anomalous properties are consistent with a 2D-to-3D boundary interpretation. The quantitative data:
| Property | Value |
|---|---|
| He-4 interatomic well depth | \(\epsilon/k_B \sim 10.2\) K |
| Zero-point energy per atom (liquid) | \({\sim}15\)–20 K |
| Ratio: zero-point / well depth | \({\sim}1.5\)–\(2.0\) |
| De Boer parameter \(\Lambda^*\) | 2.68 |
| Lindemann ratio (liquid He) | \(> 0.3\) |
| Normal Lindemann melting threshold | \({\sim}0.1\) |
| Solidification pressure at \(T \to 0\) | 25.3 atm |
Zero-point energy exceeds binding energy. In the TLT framework, this is the decoherence ratio exceeding 0.5—pattern collapse. Helium is the only element with de Boer parameter exceeding 1.0 (next is Ne at 0.593).
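The de Boer parameter cited in the table is conventional physics, \(\Lambda^* = h/(\sigma\sqrt{m\epsilon})\), and can be recomputed directly. The Lennard-Jones parameters used below (\(\sigma_\mathrm{He} \approx 2.556\) Å, \(\sigma_\mathrm{Ne} \approx 2.749\) Å, \(\epsilon/k_B \approx 35.6\) K for Ne) are assumed literature values not stated in this paper:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J / K
U = 1.66053907e-27   # atomic mass unit, kg

def de_boer(sigma_m, mass_kg, eps_over_kb):
    """Quantumness parameter Lambda* = h / (sigma * sqrt(m * epsilon))."""
    epsilon = eps_over_kb * K_B
    return H / (sigma_m * math.sqrt(mass_kg * epsilon))

lam_he = de_boer(2.556e-10, 4.0026 * U, 10.22)   # He-4
lam_ne = de_boer(2.749e-10, 20.180 * U, 35.6)    # Ne
```

The He-4 value reproduces the 2.68 in the table; neon, the next most quantum element, lands near 0.59.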
Published data supporting the dimensional interpretation:
These observations are also explained by conventional low-mass weak-interaction models without invoking dimensions. We argue that the dimensional boundary interpretation provides a unifying mechanism for all six phenomena simultaneously—but this is an interpretive claim, not a demonstrated fact.
Each boundary gives experimental access to the next dimension through its overflow products. We study 4D through anti-particles (3D overflow products), just as helium's anomalous properties expose 3D overflow in what is otherwise a 2D-boundary system.
\(C_\mathrm{potential}\) is the curvature of time acting locally at all scales. It is the placement on the bandwidth potential curve, which is Lagrangian at all levels measured—importantly including quantum levels. The degree of energy coalescence is proportional to the curvature it incurs at the local measurement point.
The formula:
The decoherence interval is not static. It ranges based on the local position on the bandwidth curve. Where the curve is steep (dense energy coalescence), decoherence intervals are longer. Where the curve is shallow, intervals are shorter.
\(C_\mathrm{potential}\) provides negative feedback. Energy coalescence increases local curvature, which increases the decoherence interval, which reduces the rate of new structure formation. This self-limiting mechanism prevents runaway coalescence and maintains the system at or near equilibrium within each dimension.
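A toy iteration illustrates the claimed negative feedback. This is an illustrative sketch only; the gain and step size are hypothetical, and no such dynamical equation is given by the theory:

```python
def relax(energy_in=1.0, gain=0.8, rate=0.1, steps=200):
    """Toy feedback loop: coalescence raises curvature, curvature lengthens
    the decoherence interval, and the longer interval throttles formation."""
    curvature = 0.0
    for _ in range(steps):
        interval = 1.0 + gain * curvature         # t grows with curvature
        formation = energy_in / interval          # formation rate is throttled
        curvature += rate * (formation - curvature)
    return curvature
```

The loop settles at the fixed point where curvature times (1 + gain times curvature) equals the energy input, rather than running away, which is the self-limiting behavior described above.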
\(C_\mathrm{potential}(\text{max})\) is dimensionally agnostic but spatially dependent. At smaller scales, the energy requirements to reach \(C_\mathrm{potential}(\text{max})\) are quantitatively smaller. The limits are defined by the thresholds of each dimension. Each dimension increases the energy thresholds of \(C_\mathrm{potential}(\text{max})\).
The coherence boundary \(r = 0.5\) defines \(C_\mathrm{potential}(\text{max})\) in each dimension. The relationship between \(C_\mathrm{potential}(\text{max})\) and the speed of light is:
\[
c(d) \;=\; \frac{1}{C_\mathrm{potential}(\text{max})},
\]
where \(c(d)\) is the framerate of dimension \(d\). This states that the speed of light is the inverse of the maximum bandwidth curvature—the fastest rate at which \(f|t\) can propagate structure before the geometry loses coherence.
This relationship is consistent with general relativity, where \(c\) appears in the Einstein field equations and the Lagrangian density as the coupling constant between spacetime curvature and energy density. In the GR Lagrangian \(\mathcal{L} = (c^4 / 16\pi G)\, R\), the speed of light is not merely a velocity—it is the conversion factor governing how much curvature a given energy density produces. In TLT, this same role is filled by \(C_\mathrm{potential}(\text{max})\): it defines the maximum curvature the geometry can sustain before dimensional overflow.
The correspondence is: GR's coupling \(c^4/16\pi G\) sets how much curvature a given energy density can produce, a role filled in TLT by \(C_\mathrm{potential}(\text{max})\); GR's universal \(c\) corresponds to TLT's dimension-local framerate \(c(d)\).
Both frameworks identify \(c\) as the bandwidth limit of the geometry. GR treats \(c\) as a universal constant; TLT proposes that \(c\) is the framerate of the current dimension, making it dimension-dependent. At \(d = 3\), the framerate calibrates to the measured speed of light. At \(d = 2\) (if measurable), TLT predicts \(c(2) = 0.625c\)—a falsifiable difference.
The equivalence principle—that gravitational mass equals inertial mass, and no experiment can distinguish between gravity and uniform acceleration—is not an axiom in TLT. It is emergent.
Gravity is \(C_\mathrm{potential}\) curvature: the gradient of the decoherence interval across the bandwidth potential. Inertia is resistance to changing position on that same curvature. They are identical because they ARE the same geometric phenomenon: an object's relationship to the local \(C_\mathrm{potential}\) gradient.
In GR, the equivalence principle is postulated. In TLT, it follows from the fact that both gravity and inertia are manifestations of the same underlying quantity (\(C_\mathrm{potential}\)). There is no need to postulate their equality because there is only one thing, not two. The full gravitational implications are developed in the companion paper [Paper 2, Section II].
Black holes represent the 3D manifestation of \(C_\mathrm{potential}(\text{max})\): the point at which 3D geometry can no longer contain the energy density. The curvature reaches the coherence boundary (\(r = 0.5\)), and energy overflows into the next dimension. This is the same overflow mechanism that operates at every dimensional boundary, manifested at the extreme of 3D curvature.
The claim that \(C_\mathrm{potential}\) operates at all scales rests on established experimental physics, not theoretical speculation.
The Aharonov–Bohm effect (1959, experimentally confirmed by Chambers 1960, Tonomura et al.\ 1986) demonstrated that electromagnetic potentials produce measurable physical effects even in regions where the electromagnetic field is zero. An electron beam passing through a region with zero magnetic field but nonzero vector potential acquires a phase shift that is directly measurable as an interference pattern shift. The field is zero. The potential is not. The potential is what the electron responds to.
This result was initially controversial because classical physics treats potentials as mathematical conveniences with no physical reality. The Aharonov–Bohm experiment proved otherwise: the potential IS the physically real quantity. The field is a derivative of it.
Berry phase (1984) generalized this: any quantum system transported around a closed loop in parameter space acquires a geometric phase determined by the geometry of the potential landscape, independent of the speed of transport or the specific path taken. This geometric phase has been confirmed experimentally in photonics, condensed matter, and molecular physics.
The significance for \(C_\mathrm{potential}\) is direct:
\(C_\mathrm{potential}\) inherits this experimentally established reality. The bandwidth potential curve is Lagrangian at all levels measured—including quantum levels where Aharonov–Bohm and Berry phase operate, and gravitational levels where GR operates. The theory does not invent a new potential; it identifies the existing, experimentally confirmed potential structure as the mechanism governing decoherence intervals.
\(C_\mathrm{potential}\) is the symmetry-breaking mechanism. Decoherence breaks symmetry, not phase. The curvature allows for asymmetrical and multiple vector interactions. The Aharonov–Bohm effect provides the experimental precedent: a potential (with no associated field) breaks the symmetry of the electron wavefunction, producing a measurable asymmetric interference pattern.
We propose that conservation laws are dimensionally local—each dimension conserves independently. Energy is conserved within 3D; that is real, verified physics. But it is the 3D ledger balancing its own books within its own equilibrium.
The 1D well keeps injecting. The overflow keeps flowing. Conservation is an observation of local equilibrium, not a fundamental constraint on the total system. The first law of thermodynamics holds perfectly—within the dimension it is measured in.
This proposal is speculative and requires specific experimental tests to distinguish from global conservation. It is motivated by the vacuum energy discrepancy: the 120-order-of-magnitude difference between quantum field theory's prediction of vacuum energy and the observed cosmological constant. If conservation is global across all dimensions, the vacuum energy sum includes contributions from all dimensional levels. If conservation is dimensionally local, the sum is restricted to 3D contributions only, and the discrepancy dissolves.
We emphasize that this is one of many proposed resolutions to the vacuum energy problem (others include anthropic selection, quintessence, supersymmetry breaking). Dimensional locality is a theoretical proposal, not a verified resolution.
Gravity is exactly what Einstein described: the geometry of a curve. \(C_\mathrm{potential}\) provides that curvature naturally within its bandwidth. No additional force is required.
If gravity is not a force but a manifestation of \(C_\mathrm{potential}\) curvature, then there is no need for dark energy to explain cosmic expansion. Dark energy is required only if gravity is a force that must be counteracted. Since in this framework it is not, the expansion does not need explaining by a mysterious energy—it IS the pulse radiating outward.
We propose that \(C_\mathrm{potential}\) provides a geometric mechanism for gravitational curvature consistent with general relativity. If confirmed, the apparent cosmic acceleration could be reinterpreted as a framerate mismatch between dimensions, which would eliminate the need for dark energy. This interpretation is speculative and requires derivation of the Newtonian limit from \(C_\mathrm{potential}\) as a first validation step. No such derivation has been presented as of this writing.
Each pulse injects new energy into the system from a single point. It cascades radially through dimensions. In dimensions where the energy has already reached local equilibrium, that energy is passed upward to the next dimension. This is the dimensional overflow—not a dramatic event, but the continuous consequence of the pulse never stopping.
1D is a well of indefinite depth. It pulls potential into reality. It is the bridge between non-local (all potential) and local (expressed reality). \(f|t\) IS this bridge.
The 1D well does not deplete. The non-local reservoir it draws from is not bounded in the theory. The pulse keeps coming. This has consequences for the simulation (SIM-003): the source amplitude stays constant (correct, given an unbounded reservoir). There is no conservation of total energy across dimensions—each dimension has its own energy budget drawn from the 1D well.
The apparent acceleration of cosmic expansion is not, in this framework, driven by dark energy. It arises from framerate mismatch between dimensions.
Each dimension has its own framerate (Sec. ). From 3D, we observe the effects of all dimensions—but we measure at the 3D framerate (\(c\)). When higher-dimensional dynamics project into 3D, their faster framerates appear as acceleration. It is not that space is expanding faster; it is that we observe the framerate mismatch between 3D (our clock) and higher dimensions (faster clocks). The “acceleration” is proposed to be a projection artifact of measuring multi-dimensional dynamics with a single-dimensional clock.
This explanation is qualitative. No quantitative calculation has been performed to test whether the framerate mismatch reproduces the observed acceleration parameter. Such a calculation is identified as a necessary next step.
For the 3D landscape, we measure 13.8 billion years. But that is the 3D ledger's record. It started recording when the 2D-to-3D overflow created 3D space. The universe's total age, measured across all dimensional ledgers, is not a well-defined quantity from inside any single dimension because the framerates differ and the ledgers start at different points.
What we can know from 3D: the 3D ledger age (\({\sim}13.8\) Gyr at \(c\)), the 3D-to-4D boundary energy (\({\sim}1.022\) MeV), the framerate ratio (\(c_\mathrm{4D}/c_\mathrm{3D} \sim 1.625\)). What we cannot know from 3D: how many dimensions exist in total, the “age” of the system in any absolute sense, or what the 1D well looks like from outside.
Three boundary energies have been identified from independent subfields:
| Boundary | Energy | \(\log_{10}(\text{eV})\) | Source |
|---|---|---|---|
| 2D \(\to\) 3D | 0.86 meV | \(-3.066\) | He binding (measured) |
| 3D \(\to\) 4D | 1.022 MeV | \(6.009\) | Pair production (measured) |
| 4D \(\to\) 5D | \({\sim}3\) PeV | \(15.477\) | Cosmic ray knee (measured) |
Log steps between boundaries: \(+9.075\), \(+9.468\).
The three boundary energies (0.86 meV, 1.022 MeV, \({\sim}3\) PeV) span approximately \(10^{9.3}\) per step. We note that three points uniquely determine a quadratic; the significance lies not in the fit quality (which is guaranteed) but in the independence of the three measurements from different subfields (condensed matter, particle physics, cosmic ray physics). The interpretation of each energy as a dimensional boundary is the theory's framework; each has a well-established conventional explanation within its respective subfield.
Fitting a quadratic in \(d\) to the three boundaries gives
\[
\log_{10}(E/\mathrm{eV}) = 0.1964\,d^2 + 8.093\,d - 20.038,
\]
where \(d\) = boundary dimension (\(d = 2\) for 2D \(\to\) 3D, \(d = 3\) for 3D \(\to\) 4D, etc.).
The quadratic term (\(0.1964\, d^2\)) makes the scaling super-exponential: each boundary requires progressively more energy than the previous step. This is consistent with progressive void capacity.
Extrapolated values:
| \(d\) | Boundary | \(\log_{10}(\text{eV})\) | Energy | Status |
|---|---|---|---|---|
| 1 | 1D \(\to\) 2D | \(-11.75\) | \(1.79 \times 10^{-12}\) eV | Extrapolated |
| 2 | 2D \(\to\) 3D | \(-3.07\) | \(8.60 \times 10^{-4}\) eV | MEASURED\(^*\) |
| 3 | 3D \(\to\) 4D | \(6.01\) | 1.02 MeV | MEASURED\(^*\) |
| 4 | 4D \(\to\) 5D | \(15.48\) | 3.0 PeV | MEASURED\(^*\) |
| 5 | 5D \(\to\) 6D | \(25.34\) | \(2.2 \times 10^{25}\) eV | Extrapolated |
| 6 | 6D \(\to\) 7D | \(35.59\) | \(3.9 \times 10^{35}\) eV | Extrapolated |
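The extrapolated rows can be reproduced by refitting the quadratic to the three measured boundaries. A minimal sketch in Python (the coefficients differ from the quoted \(0.1964\) only through rounding of the inputs):

```python
import numpy as np

d_meas = np.array([2, 3, 4])
logE_meas = np.array([-3.066, 6.009, 15.477])  # measured boundaries, log10(E/eV)

# 3 points, 3 parameters: the fit is exact by construction
coeffs = np.polyfit(d_meas, logE_meas, 2)      # [c2, c1, c0]
print(round(coeffs[0], 4))                     # ~0.1965 quadratic term

for d in (1, 5, 6):
    print(d, round(np.polyval(coeffs, d), 2))
# reproduces the extrapolated rows: -11.75, 25.34, 35.59
```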
The \(d = 1\) extrapolation yields \({\sim}432\) Hz (\(1.79 \times 10^{-12}\) eV) for the \(f|t\) seed frequency. This number is derived from the formula, not assumed. Whether it corresponds to anything physically measurable is an open question that we note but do not editorialize upon.
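The conversion from the \(d = 1\) energy to a frequency is elementary; a sketch assuming only the standard value of Planck's constant:

```python
h = 4.135667696e-15   # Planck constant in eV*s (CODATA value)
E_seed = 1.79e-12     # eV, the d = 1 extrapolated boundary
f_seed = E_seed / h   # E = h f
print(round(f_seed))  # ~433 Hz, i.e. the quoted ~432 Hz seed frequency
```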
Integrity note: This formula has 3 free parameters fitted to 3 data points. A quadratic through 3 points always yields a perfect fit. The formula is descriptive, not explanatory. Its value lies in generating falsifiable extrapolations: a fourth measured boundary energy would confirm or refute it.
A power-law fit to three measured points gives
\[
R(d) = 1.3179\, d^{0.1868},
\]
where \(R(d)\) is the spiral ratio at dimension \(d\).
The power law is monotonically increasing: consistent with the progressive model, never regressive. The exponent 0.1868 has no clean mathematical identity (\(3/16 = 0.1875\) is close but not exact). The coefficient 1.3179 is the extrapolated 1D value (the “seed ratio”).
Extrapolated predictions: 5D spiral ratio = 1.780, 6D = 1.842, 7D = 1.895, 8D = 1.943. These are falsifiable: any independent measurement of a higher-dimensional spiral ratio that disagrees with the extrapolation constrains or invalidates the formula.
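These extrapolations follow from evaluating the power law with the quoted constants (coefficient 1.3179, exponent 0.1868); a minimal check in Python:

```python
def spiral_ratio(d):
    """Power-law spiral ratio with the fitted constants quoted in the text."""
    return 1.3179 * d ** 0.1868

phi = (1 + 5 ** 0.5) / 2
print(round(spiral_ratio(3), 3))   # 1.618, recovering phi at d = 3
print(round(spiral_ratio(5), 3))   # 1.780, the quoted 5D extrapolation
```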
The proton knee at \(3.0 \pm 0.2\) PeV (LHAASO 2025) is consistent with the energy scaling pattern extrapolated from the two lower boundaries. We suggest this may represent a 4D-to-5D boundary. Conventional explanations (maximum energy of galactic supernova remnant accelerators) are not excluded by the available data.
Supporting observations:
Within the TLT framework, the muon excess at PeV would be interpreted as a dimensional overflow signature analogous to helium's anomalous properties at the 2D-to-3D boundary. This interpretation is speculative and competes with conventional QCD explanations that have not been ruled out.
The theory provides post-hoc explanations for thirteen independently measured phenomena (labeled A through M in the verified explanations catalogue). These are explicitly labeled as post-hoc: the data was known before the theoretical explanation was constructed. They demonstrate internal consistency but carry less evidential weight than genuine predictions.
Selected examples:
The geometric cipher's only quantitative forward prediction (Section XXV, prediction 4D-P3 of the cipher) was falsified. Americium was predicted at 170–200 \(\mu\Omega\cdot\)cm; measured value is \({\sim}68\). Curium was predicted at \(>200\); measured at \({\sim}125\). The linear model broke at the Hill limit, where \(5f\) electrons undergo the itinerant-to-localized transition.
The calibration used only 3 data points (U, Np, Pu), which happened to lie in the linear regime before the transition. The correction, achieved through \(\{2,3\}\) decomposition of \(f\)-electron count (PROBE-004), recovered 5/5 accuracy on the same data. This failure and its correction are documented as required by the integrity protocol.
The first computational test of \(f|t\) produced identical results for Fe, Cu, Li, C, and H. The mass parameter was stored in the simulation but never entered the wave equation. Every element overflowed at the same cycle (9,692) with the same node count.
Correction: Iteration 2 (Batch 4E/4F) implemented amplitude scaling (\(f + A|t\) where \(A\) is proportional to mass). The 4F parallel sweep (144 parameter configurations) demonstrated that mass-dependent amplitude DOES differentiate results: mass = 1.0 kg yields peak ratio 1.223, while mass = 0.0 yields peak ratio 1.321—a measurable separation. Of 144 configurations, 108 (75\%) achieved success level 1. The mass parameter now enters the wave equation and produces distinct outcomes. The original failure (iteration 1) is corrected.
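The qualitative point, that an amplitude term proportional to mass produces measurably different outcomes, can be illustrated with a toy pulse train. This is a hypothetical stand-in for the Batch 4E/4F code, not the actual simulation; the proportionality constant and gap structure are our assumptions:

```python
import numpy as np

def pulse_train(mass, f=1.0, gap=0.5, cycles=100, dt=0.01):
    """Toy f + A|t signal: amplitude A proportional to mass, with a
    decoherence gap occupying the final `gap` fraction of each cycle."""
    A = 0.1 * mass                     # assumed mass-to-amplitude proportionality
    t = np.arange(0, cycles, dt)
    active = (t % 1.0) < (1.0 - gap)   # pulse is off during the decoherence gap
    return (1.0 + A) * np.sin(2 * np.pi * f * t) * active

peak_ratio = pulse_train(mass=1.0).max() / pulse_train(mass=0.0).max()
print(peak_ratio)   # ~1.1: the mass term measurably separates the two runs
```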
F2: Fibonacci Formula Regression. The original dimensional formula predicted that higher-dimensional properties would regress (lower framerates, simpler geometry at \(d > 4\)). The physics demanded progression. The formula breaks beyond Cycle 1 (dimensions 1–3). The replacement (cycle-restart model) is qualitative, not yet quantitative for Cycle 2.
F3: Actinide Resistivity Falsification. Documented in Sec. . The cipher's only quantitative forward prediction was wrong. Corrected through \(\{2,3\}\) decomposition of \(f\)-electron count.
F4: Energy Boundary Formula — 3 Points / 3 Parameters. The energy scaling formula fits exactly 3 measured boundaries with 3 free parameters. A quadratic through 3 points is always a perfect fit. This is not a prediction; it is a fit. The formula's value is in generating falsifiable extrapolations, not in the quality of the fit itself.
The theory rests on two axioms. Everything else derives from them.
AXIOM 1: \(f|t\) — A single frequency pulse, separated by time. This is the seed. From \(f|t\), the following derive as consequences:
AXIOM 2: \(r = 0.5\) — The decoherence ceiling at which geometry in the current system can no longer maintain coherent structure through \(f|t\). This boundary is identified through experiment and follows from first principles of the theory itself; it is not an added parameter but a structural consequence of the geometry reaching its capacity. When \(r = 0.5\) is reached, the system overflows into the next dimension.
DERIVED CONSEQUENCES (not axioms). Total: 2 axioms. All other elements (\(C_\mathrm{potential}\), amplitude, \(\{2,3\}\), Fibonacci framerate, local conservation, dimensional overflow, black holes) are derived consequences.
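The \(r = 0.5\) ceiling reduces to a one-line check; a sketch using only the symbol-table definition of \(r\) (pause divided by total cycle time):

```python
def decoherence_ratio(t_pulse, t_pause):
    """r = pause / total cycle time (symbol-table definition)."""
    return t_pause / (t_pulse + t_pause)

def overflows(t_pulse, t_pause, ceiling=0.5):
    """Overflow into the next dimension once r reaches the ceiling."""
    return decoherence_ratio(t_pulse, t_pause) >= ceiling

print(overflows(3.0, 1.0))   # False: r = 0.25, still coherent
print(overflows(1.0, 1.0))   # True:  r = 0.5, at the ceiling
```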
Note: SIM-003 is a computational test, not an experimental measurement. Agreement with known crystal structures would validate the computational framework; disagreement would indicate that the \(f|t\) axiom is insufficient for structure selection.
Test 2 (Strong): 4D Framerate Measurement. A precision measurement of tunneling velocity (or other superluminal propagation) that conclusively rules out \(c_\mathrm{4D} = 1.625c\) would invalidate the Fibonacci framerate formula for 4D.

Test 3 (Strong): Element with Non-\(\{2,3\}\) Coordination. Discovery of any periodic crystal with a coordination number that does not factor into powers of 2 and 3 (e.g., coordination 5, 7, or 11) would falsify the \(\{2,3\}\) universality claim.

Test 4 (Moderate): Boundary Energy. A measured energy boundary (e.g., the 5D-to-6D boundary, predicted at \(10^{25.3}\) eV) that deviates significantly from the quadratic formula would invalidate the energy scaling extrapolation. Such energies are currently beyond experimental reach.

Test 5 (Moderate): 2D Framerate. If photonic systems confined to 2D show propagation at \(c\) rather than \(0.625c\), the dimensional framerate interpretation is wrong.

TLT is not a replacement for quantum field theory or general relativity. It does not reproduce their quantitative predictions. What it provides is a framework of interpretive parsimony: 3 axioms and 7 assumptions generating a geometric structure that organizes observations across condensed matter, particle physics, and cosmology.
Specific convergences:
Specific divergences:
These divergences are not resolved by the present work. They represent the distance between TLT's geometric interpretation and established physics.
The following potential cherry-picking risks are identified and addressed:
For reference, the complete set of dimensional formulas developed in this framework:
Predictions falsifiable.
\(c_\mathrm{4D}\) indicated by Steinberg 1993. Other dimensions speculative. Cycle 2 may require corrections.
Measured values from Conway \& Sloane 1998 are authoritative:
Chirality in the TLT framework is not a separate axiom. It is a consequence of dimensional framerate mismatch at overflow boundaries.
At the 2D-to-3D boundary, the energy cannot go straight into 3D because it is still moving in 2D. The two velocities compose into a spiral.
This is the same physics as two currents meeting at different speeds: the velocity shear creates a vortex. At each dimensional boundary, the framerate differential creates turbulence that is necessarily helical.
The chirality (left vs right) is determined by the gradient of \(C_\mathrm{potential}\) at the overflow point. The overflow occurs on one side of the \(C_\mathrm{potential}\) maximum. Which side determines the handedness of the resulting spiral.
The ratio of the helix is the framerate ratio between dimensions: \(c_\mathrm{3D}/c_\mathrm{2D} = c/0.625c = 1.6 \approx \phi\).
\(\phi\) governs 3D specifically because the 2D-to-3D framerate ratio is approximately \(\phi\). It is not imposed as a governing principle; it emerges from the Fibonacci framerate structure.
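One reading consistent with the framerates quoted elsewhere in the paper (\(0.625c\) for 2D, \(c\) for 3D, \(1.625c\) for 4D) is \(c_d = F(d+3)/F(6)\,c\); the indexing convention here is our assumption, not stated in the text:

```python
from fractions import Fraction

def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1, F(3) = 2, ..."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# framerates in units of c, under the assumed indexing c_d = F(d+3)/F(6)
c_d = {d: Fraction(fib(d + 3), fib(6)) for d in (2, 3, 4)}
print(c_d)                      # {2: 5/8, 3: 1, 4: 13/8}
print(float(c_d[3] / c_d[2]))   # 1.6, the 2D-to-3D ratio, close to phi
```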
The matter/antimatter asymmetry (CP violation) in the universe may be a consequence of the chiral nature of the 3D-to-4D overflow, analogous to how the 2D-to-3D overflow produces chiral (not symmetric) jets. The asymmetry is a property of overflow, not a separate input.
This is a post-hoc reinterpretation of known physics. It does not generate a quantitative prediction for the baryon asymmetry. It is noted as a qualitative consistency, not claimed as an explanation.
The 24-cell's symmetry group includes isoclinic (Clifford) rotations: simultaneous rotation in two independent planes, which is impossible in 3D and natural in 4D. These rotations require 720 degrees for a full return—exactly the topological property of spin-\(1/2\) particles. A detailed treatment is provided in the companion paper on dimensional recursion [Paper 2, in preparation], where the connection between isoclinic rotations and intrinsic spin is developed.
The structure of the chirality argument is internally consistent: every boundary produces a framerate mismatch, every mismatch produces a helical overflow, and the helical ratio matches the dimensional spiral ratio. The specific interpretation (chirality = framerate shear consequence) is theoretical. No experiment has tested whether confining systems to 2D produces overflow jets with \(\phi\)-ratio helicity.
Time Ledger Theory proposes that a single frequency pulse \(f\), separated by a decoherence interval \(t\), generates the dimensional structure of physical space. From two axioms (\(f|t\) pulse, \(r = 0.5\) coherence boundary), the theory derives:
The theory does not derive specific particle masses quantitatively (though it identifies the geometric mechanism). It does not provide a Lagrangian, \(S\)-matrix, or scattering cross-sections. It does not reproduce the quantitative predictions of quantum field theory or general relativity. It has no quantitative Cycle 2 framework. Its central computational test (SIM-003) failed in iteration 1; iteration 2 (\(f + A|t\), Batch 4E/4F) demonstrated mass-dependent differentiation but has not yet been extended to full crystal structure reproduction.
If the SIM-003 test succeeds in its corrected form, TLT would demonstrate that crystal structure geometry follows from a frequency pulse with decoherence and mass-dependent amplitude. This would establish an explanatory economy unprecedented in condensed matter physics: 2 axioms replacing the apparatus of density functional theory for structure selection.
If the theory's dimensional predictions are borne out—in particular, if 2D propagation velocity is measured at \(0.625c\) and not \(c\)—the interpretation of \(c\) as a dimensional framerate rather than a universal constant would represent a fundamental revision of special relativity.
A direct consequence of discrete time is that the universe oscillates between states of existence and non-existence at the pulse frequency. During each tick, the geometric structure exists and evolves. Between ticks, nothing exists—no space, no energy, no information. Reality is not continuous; it is strobed. This dual nature (existing / not existing) is a necessary implication of the \(f|t\) axiom, though it is not presently falsifiable by direct observation, as any measurement device is itself subject to the same pulse cycle.
The fundamental pulse frequency can be estimated from the dimensional framerate calibration. Working backward from the measured speed of light at \(d = 3\) through the Fibonacci framerate ratios, the base pulse frequency is approximately 8 Hz. This value falls within the range of the Schumann resonance (7.83 Hz fundamental), the Earth's electromagnetic cavity resonance. Whether this correspondence is coincidental or reflects a deeper connection between the pulse frequency and planetary-scale electromagnetic phenomena is an open question. The approximate value is noted here as a quantitative anchor, not a claim of causation.
These are large “ifs.” The theory is presented not as established physics but as a framework whose parsimony and internal consistency warrant the investment of further testing. The tests are specific, the predictions are quantitative, and the failures are documented.
The author acknowledges the computational assistance of Claude (Anthropic) in documenting the theory, performing literature searches (PROBE series), running computational tests (HPC series, SIM series), and auditing claims for qualifier accuracy (Pass 4 nuance audit) and integrity (Pass 5 integrity audit). All theoretical content, axioms, interpretations, and the \(f|t\) framework originate with the author. The triple-audit protocol (Claude, Gemini, Grok) was used for the \(\{2,3\}\) crystal classification (verified explanation \#2). The theory was developed without institutional affiliation or external funding.
| Symbol | Definition |
|---|---|
| \(f\) | Frequency pulse (1D) |
| \(t\) | Decoherence interval (pause between pulses) |
| \(A\) | Amplitude (proportional to thermal energy / pressure / mass) |
| \(r\) | Decoherence ratio (pause / total cycle time) |
| \(p\) | Decoherence parameter (\(C_\mathrm{potential}\)-dependent, range \({\sim}1\)–\(3\)) |
| \(d\) | Spatial dimension |
| \(c\) | Speed of light (\(=\) 3D framerate in TLT) |
| \(c_d\) | Framerate at dimension \(d\) |
| \(\phi\) | Golden ratio \(= (1 + \sqrt{5})/2 = 1.6180…\) |
| \(F(n)\) | \(n\)th Fibonacci number (\(1, 1, 2, 3, 5, 8, 13, 21, …\)) |
| \(\{2,3\}\) | The organizing prime pair |
| \(\{5\}\) | Pentagonal symmetry (excluded from periodic crystals) |
| \(C_\mathrm{pot}\) | \(C_\mathrm{potential}\) (local curvature of the bandwidth potential) |
| \(E\) | Energy |
| \(T_\mathrm{melt}\) | Melting temperature |
| \(E_\mathrm{coh}\) | Cohesive energy |
| \(\alpha\) | Archetype-specific proportionality constant (K/eV) |
| \(\Lambda^*\) | De Boer quantum parameter |
| Archetype | Coord | \(\{2,3\}\) Factoring | \# Elements |
|---|---|---|---|
| Diamond | 4 | \(2^2\) | 3 (C, Si, Ge) |
| A7 | 6 | \(2 \times 3\) | 5 (As, Sb, Bi, …) |
| BCC | 8 | \(2^3\) | 26 |
| HCP | 12 | \(2^2 \times 3\) | 36 |
| FCC | 12 | \(2^2 \times 3\) | 19 |
Total classified (ambient-pressure, non-molecular): 133/133. Exceptions: 0.
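The factoring column can be verified mechanically; a short Python check:

```python
def only_2_3(n):
    """True if n is a product of powers of 2 and 3 only."""
    for p in (2, 3):
        while n % p == 0:
            n //= p
    return n == 1

coordination = {"Diamond": 4, "A7": 6, "BCC": 8, "HCP": 12, "FCC": 12}
assert all(only_2_3(c) for c in coordination.values())
assert not any(only_2_3(c) for c in (5, 7, 11))   # the excluded coordinations
print("all archetype coordinations factor into {2,3}")
```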
| Element | \(\Lambda^*\) | Behavior |
|---|---|---|
| He-4 | 2.68 | Quantum liquid, superfluid |
| He-3 | 3.08 | Quantum liquid, superfluid |
| Ne | 0.593 | Classical solid, quantum |
| Ar | 0.186 | Classical solid |
| Kr | 0.104 | Classical |
| Xe | 0.064 | Classical |
| Feature | Energy | Momentum |
|---|---|---|
| Phonon region | linear | sound velocity \({\sim}238\) m/s |
| Maxon (peak) | \({\sim}1.1\) meV | \(p \sim 1.1\) \AA\(^{-1}\) |
| Roton minimum | \(\Delta/k_B = 8.65\) K | \(p_0/\hbar = 1.92\) \AA\(^{-1}\) |
| Archetype | \(\alpha\) (K/eV) | \(\sigma\) | \(N\) |
|---|---|---|---|
| BCC \(d\)-block | 420 | 24 | 7 |
| HCP \(d\)-block | 400 | 20 | 8 |
| FCC \(d\)-block | 390 | 35 | 10 |
| Universal | 412 | — | 30 |
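The table's proportionality \(T_\mathrm{melt} \approx \alpha\, E_\mathrm{coh}\) can be sanity-checked against a single element. The tungsten values below are standard-table numbers supplied by us for illustration, not taken from the paper:

```python
alpha_universal = 412      # K/eV, from the table above
E_coh_W = 8.90             # eV/atom, tungsten cohesive energy (assumed input)
T_melt_measured_W = 3695   # K, tungsten melting point (assumed input)

T_melt_est = alpha_universal * E_coh_W
error = abs(T_melt_est - T_melt_measured_W) / T_melt_measured_W
print(round(T_melt_est), f"{error:.1%}")   # ~3667 K, within ~1% of measured
```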
All 13 verified explanations (A–M) are post-hoc: the data was known before the theoretical explanation was constructed. They demonstrate internal consistency of the framework but carry less evidential weight than genuine predictions.
@article{shelton2026tlt_p1,
author = {Jonathan Shelton},
title = {{Time Ledger Theory: A Geometric Framework Deriving Dimensional Properties from a...}},
year = {2026},
note = {Paper 1, Prometheus Research Group LLC}
}