Academic Positioning and Structural Analysis of Previous Research on "Finite Closure"
- kanna qed
- December 27, 2025
- Reading time: 12 min
What is the difference between finite closure and compactification?
This paper defines Finite Closure as a framework that severs the dependency on "unverifiable infinities" (the continuum, infinite precision, infinite causality) in computer-based AI operations, enclosing those operations within an auditable finite domain through explicit boundary specifications. Finite Closure (i) fixes the target domain of evaluation and audit as a Window-Spec, (ii) contains numerical uncertainty using Outward Rounding and Interval Representation, and (iii) freezes execution logs via a tamper-evident Ledger, thereby structurally preventing the "retroactive fitting of criteria" and the "evaporation of accountability" after operation. Furthermore, this paper organizes the academic positioning of Finite Closure at the intersection of lineages including Ultrafinitism, Effective Field Theory (EFT) cut-offs, Interval Arithmetic, Information Thermodynamics, and Bounded Rationality, and presents the formal structure of its falsifiability (PASS/FAIL).

Contributions
Formal Definition of Finite Closure: Minimal definition based on Boundary Spec, Stopping Rule, Ledger, and Inclusion Guarantee.
Necessary Conditions for "Non-Retroactive Evaluation": Identification of the minimal set of objects to be fixed (Boundary Spec).
Falsifiability of Audit Protocols: Presentation of the closed form: Input → Finite Inequality → PASS/FAIL.
Academic Positioning: Organization of correspondences with existing mathematics, physics, computation, thermodynamics, and ethics.
Introduction: Finite Closure as a Boundary Spec for Accountability under Finite Resources
21st-century computational science, mathematical physics, and AI operations grapple with an unavoidable gap between theoretical descriptions premised on the "infinite precision of real numbers" and implementations running on "computational and physical devices with finite resources." While the success of calculus has made the continuum model a powerful tool, as AI integrates into social systems and decision-making, this gap manifests as operational failures such as "unverifiable errors," "retroactive modification of evaluations," and the "evaporation of the locus of responsibility."
Finite Closure, proposed in this article, is a conceptual framework designed to mathematically bridge this gap. It is not a claim denying infinity, but a design principle to "declare and fix a boundary as a specification, and determine PASS/FAIL inside that boundary" to establish operational accountability. This paper positions Finite Closure as a formulation of auditable evaluation based on "Interval Arithmetic and Outward Rounding in computer science," "Finitude requirements of Ultrafinitism," and "Information constraints in Quantum Physics and Thermodynamics."
Chapter 0: Minimal Definition and Verifiable Claims of Finite Closure
In this report, Finite Closure is minimally defined by the following three elements, (A) Boundary Spec, (B) Fixation, and (C) Decision, together with a verifiable claim.
0.1 Boundary Spec
What Finite Closure fixes first is not the target problem itself, but the "Boundary Spec upon which the evaluation relies." The Boundary Spec $B$ is given, at a minimum, by the following tuple:
$$B := (D, M, \theta, G, \rho, R, S)$$
$D$: Evaluation Data (including input sets, reference values, acquisition conditions, and exclusion rules).
$M$: Evaluation Metrics (loss/accuracy/score functions).
$\theta$: Thresholds (or a set of safety conditions) determining PASS/FAIL.
$G$: Computational Graph (including operator associativity order and parallel reduction order).
$\rho$: Rounding Policy (IEEE 754 rounding modes, Interval Arithmetic outward rounding rules, etc.).
$R$: Resource Constraints (time, memory, precision, energy/inference cost, etc.).
$S$: Fixation of Random Elements (seeds, sampling rules, shuffling rules, etc.).
Crucially, $(G, \rho)$ must be included within the Boundary Spec. Floating-point arithmetic is generally non-associative; simply changing the order of association can alter the "value intended as a real number," making the order itself a specification.
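As a non-normative illustration (the field names and types below are hypothetical, chosen only for this sketch), the tuple $B$ can be carried as an immutable record whose canonical serialization is what later gets committed as the Certificate:

```python
from dataclasses import dataclass
import json

@dataclass(frozen=True)
class BoundarySpec:
    """Sketch of the tuple B = (D, M, theta, G, rho, R, S).

    Field names are illustrative only; the definition in the text requires
    just that all seven components be fixed and serializable."""
    data_spec: dict        # D: inputs, reference values, acquisition/exclusion rules
    metric: str            # M: identifier of the evaluation metric
    threshold: float       # theta: PASS/FAIL threshold (or safety-set spec)
    graph_spec: dict       # G: operator association / parallel reduction order
    rounding_policy: str   # rho: e.g. an IEEE 754 rounding mode or an outward rounding rule
    resources: dict        # R: time / memory / precision / energy budgets
    randomness: dict       # S: seeds, sampling rules, shuffling rules

    def serialize(self) -> bytes:
        """Canonical serialization; its hash is what gets committed as C (Section 0.2)."""
        return json.dumps(self.__dict__, sort_keys=True, default=str).encode()
```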
0.2 Fixation (Non-Retroactivity)
Fixation refers to rendering the Boundary Spec $B$, once used for a decision, unchangeable retroactively. In implementation, the content of $B$ (and the hash of the reference implementation) is committed as a Certificate $C$, and subsequent audits or re-computations always refer to the same $C$.
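A minimal sketch of such a commitment; the text does not prescribe a particular hash function, so SHA-256 is assumed here:

```python
import hashlib

def commit_certificate(spec_bytes: bytes, impl_hash: str) -> str:
    """Fixation: hash the serialized Boundary Spec B together with the hash
    of the reference implementation. The digest is the Certificate C, and
    every later audit or re-computation must reference this exact value."""
    h = hashlib.sha256()
    h.update(spec_bytes)           # canonical serialization of B
    h.update(impl_hash.encode())   # identifies the reference implementation
    return h.hexdigest()
```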
0.3 Decision (PASS/FAIL)
Decision is a mapping $\text{Decide}_B : X \to \{\text{PASS}, \text{FAIL}\}$ that calculates evaluation values under Boundary Spec $B$ and maps the result to PASS/FAIL. Finite Closure requires, as a primary condition, that the output of $\text{Decide}_B$ be "reproducible for the auditor."
0.4 Verifiable Claim (Minimal Core)
The claim aimed at by Finite Closure can be compressed into a single sentence:
"As long as the auditor confirms $\text{Verify}(C, L) = \text{true}$ against the identical Certificate $C$ (= fixed Boundary Spec $B$), the PASS/FAIL decision is reproduced, and any retroactive modification of evaluation is detected."
Here, $L$ is an append-only ledger (logs of inputs, intermediate intervals, final intervals, hashes, etc.), and Verify is a procedure to validate the consistency between $C$ and $L$.
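A minimal sketch of one possible $\text{Verify}(C, L)$, assuming (as an implementation choice, not part of the definition) that $L$ is a hash-chained list of entries anchored to $C$:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Chain one ledger entry onto its predecessor (or onto C for the first entry)."""
    blob = prev_hash.encode() + json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify(certificate: str, ledger: list) -> bool:
    """Verify(C, L): the ledger must be anchored to the Certificate C and its
    hash chain must be intact; any retroactive edit breaks the chain."""
    prev = certificate
    for entry in ledger:             # each entry: {"prev": str, "payload": dict, "hash": str}
        if entry["prev"] != prev:
            return False             # chain broken or entries reordered
        expected = entry_hash(entry["prev"], entry["payload"])
        if entry["hash"] != expected:
            return False             # payload altered after being committed
        prev = expected
    return True
```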
0.5 Limiting Case (Undecidable): When the Interval Crosses the Safety Condition
Finite Closure calculates the evaluation value as an interval $I_B(x) = [v_L, v_U]$ containing the true value using outward rounding and interval arithmetic (Inclusion Guarantee). However, if the interval width is large, it may not be possible to uniquely separate PASS/FAIL against a fixed safety condition.
Therefore, Decision is defined as a ternary value rather than binary. Let the safety condition be a set on real numbers $\text{Safe}_B \subset \mathbb{R}$:
$$\text{Decide}_B(x) = \begin{cases} \text{PASS} & (I_B(x) \subseteq \text{Safe}_B) \\ \text{FAIL} & (I_B(x) \cap \text{Safe}_B = \emptyset) \\ \text{INDETERMINATE} & (\text{otherwise}) \end{cases}$$
Here, $\text{INDETERMINATE}$ means "decision cannot be separated under the inclusion guarantee," and operationally, it is treated as FAIL (Usage Prohibited) based on a fail-closed policy. Therefore, PASS in Finite Closure is a strong claim that "the interval is completely contained within the safety set under the fixed Boundary Spec," and unless separation is achieved by improving the interval width (via Resource Constraints $R$, Stopping Rule, Rounding Policy $\rho$), a passing declaration is not permitted.
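A minimal sketch of the ternary decision under the inclusion guarantee, assuming the safety set is itself a simple interval (in general $\text{Safe}_B$ may be any fixed set):

```python
from enum import Enum

class Verdict(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    INDETERMINATE = "INDETERMINATE"   # operationally treated as FAIL (fail-closed)

def decide(v_lo: float, v_hi: float, safe_lo: float, safe_hi: float) -> Verdict:
    """Ternary decision for the interval I_B(x) = [v_lo, v_hi] against a safety
    set Safe_B, assumed here to be the interval [safe_lo, safe_hi]."""
    if safe_lo <= v_lo and v_hi <= safe_hi:
        return Verdict.PASS            # interval entirely inside the safety set
    if v_hi < safe_lo or safe_hi < v_lo:
        return Verdict.FAIL            # interval disjoint from the safety set
    return Verdict.INDETERMINATE       # interval straddles the safety boundary
```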
Chapter 1: Mathematical Foundations (Number Theory & Logic): Lineage of Finite Procedures and Constructibility
1.1 Rebellion against Actual Infinity: From Kronecker to Brouwer
The treatment of "infinity" in mathematics has always been a source of controversy. Against the set-theoretic Platonism that became the mainstream of modern mathematics, the constructivist resistance that has existed since the late 19th century can be re-read as a manifesto for modern computer science. Leopold Kronecker's famous remark, "God made the integers...", reflects his insistence that the "existence" of mathematical objects be limited to what is "constructible" by finite procedures (algorithms). [R1]
This philosophy was inherited in the 20th century by L.E.J. Brouwer's Intuitionism. Brouwer rejected the unrestricted application of the "Law of Excluded Middle" to infinite sets: deciding truth or falsehood over domains that cannot be surveyed is not a constructive act. [R2]
The philosophy of Finite Closure in GhostDrift resonates with this lineage. Although the parameter space of AI appears continuous, it exists only on discrete lattice points. We should define only the constructible (computable) region as a "closure" and speak of truth only within it.
1.2 Modern Development of Ultrafinitism
"Ultrafinitism" takes a stance that restricts even "potential infinity" under physical constraints, driven by the demands of computer science and physics. Doron Zeilberger asserts that modern Real Analysis is a "degenerate case of discrete analysis," claiming the continuum is merely an approximation of the real world. [R2]
The core question of Ultrafinitism is: "Do huge numbers that are physically unrepresentable truly exist as numbers?" As Alexander Esenin-Volpin pointed out, the sequence of natural numbers hits a "physical limit" or "computational wall" at some point, beyond which its usual mathematical properties can no longer be taken for granted. In GhostDrift's research, this perspective is crucial: within the verification space of AI models, only the subset reachable under computational-resource and time constraints (the Feasible Region) is verifiable. Ultrafinitism supports the legitimacy of "cutting off" the unreachable region as "out of definition." [R3]
1.3 "Feasible Numbers" and the Wall of Computational Complexity
Rohit Parikh proposed the concept of "Feasible Numbers," highlighting the divergence between logical definition and physical feasibility. Even if a number is mathematically definable, it may be "unfeasible" from the perspective of computer science or cognitive resources. [R3]
Vladimir Sazonov incorporated the concept of "feasibility" into the arithmetic system itself to resolve this contradiction. [R4] GhostDrift's "Finite Closure" actively utilizes this "computational wall." Even if verifying the entire behavior space of AI is impossible, complete exhaustive search or formal verification becomes possible for a subspace defined under specific "Windows" or "Resource Constraints."
1.4 The Myth of the Continuum and Discrete Reality: Requirements from Physics
Physicist Nicolas Gisin points out that a generic real number carries "infinite information" and questions its physical status, linking this to the indeterminism faced by modern physics. Packing infinite information into a spatial region of finite size and energy is physically impossible due to the Bekenstein bound (black-hole entropy). [R5][R14]
According to Gisin, chaos in classical mechanics and indeterminism in quantum measurement result from initial values not having infinite precision (= possessing only finite information). He expresses this as "the passage of time is the creation of new information." [R5] By redefining the AI inference process as discrete state transitions and recording all steps in a "Ledger" expressed with a finite number of bits, this "creation of information (indeterminism)" can be contained within the "range of precision" allowed by the system. [R14]
Chapter 2: Physical Foundations: Finite Observation, Cut-offs, and Finite Reference Frames
2.1 Ontological Status of "Cut-offs" in Quantum Mechanics
In Quantum Field Theory (QFT), "cut-offs" and "renormalization" are increasingly understood as "physical realities" reflecting the hierarchical structure of nature, thanks to the development of Effective Field Theory (EFT). [R7]
From the EFT perspective, physical laws in high-energy regions beyond a certain energy scale $\Lambda$ (the cut-off) affect observers in low-energy regions only through their "renormalized" imprint on parameters (masses and coupling constants). Assuming the theory holds down to arbitrarily short distances ($\Lambda \to \infty$) is, in fact, a physically unnatural idealization. "Finite Closure" in GhostDrift applies this EFT philosophy, setting appropriate "informational cut-offs" to establish stable causality and predictability within the system (inside the closure).
2.2 't Hooft's Cellular Automaton Interpretation and Deterministic Universe
Gerard 't Hooft's "Cellular Automaton Interpretation (CAI)" describes quantum mechanics as the behavior of a deterministic discrete system at a deeper level. [R18]
Ontological Basis: The "true states" the universe can actually take are limited to specific basis vectors.
Finitude and Lattice Structure: It premises that spacetime is a lattice and the information content of the universe is finite. [R18]
If the root of the universe is a cellular automaton, artificial AI models should essentially be describable and controllable as completely deterministic cellular automata. The Finite Closure approach attempts to "reduce" AI from the fog of continuum approximation to its original discrete and deterministic form.
2.3 Quantum Reference Frames (QRF) and Finite Resource Constraints
Recent research on "Quantum Reference Frames (QRF)" reveals that reference frames themselves are physical systems subject to quantum laws and possess only finite degrees of freedom and energy.
Therefore, this report does not premise operation on frequencies defined as limits over infinitely many trials. Since a quantum reference frame is itself a finite physical system, its size, energy, and coherence time constrain measurement precision and impose a lower bound on estimation error. The claim "the probability is theoretically $p$" is thereby translated into an audit condition under finite resources (a Boundary Spec). [R7]
2.4 Maxwell's Demon and Thermodynamic Cost of Observation
The resolution of "Maxwell's Demon" in information thermodynamics (Landauer's Principle) demonstrates the physical cost of information: physically erasing one bit of information necessarily dissipates at least $k_B T \ln 2$ of heat. [R15][R16]
Since the Demon's (observer's) memory is finite, information must inevitably be "forgotten (erased)" or "compressed." The "Stopping Rule" in GhostDrift's "Finite Closure" functions as a boundary condition managing the balance of information inflow and outflow based on this thermodynamic cost.
Chapter 3: Computational Foundations: Reproducibility, Inclusion, and Outward Rounding
Note: The Interval Arithmetic, Outward Rounding, Ledger Fixation, and Prime Gravity discussed in this chapter are implementation constructions to satisfy the minimal definitions (A)(B)(C) of Finite Closure given in Chapter 0. Finite Closure itself is defined by (A)(B)(C), and the following are positioned as its concrete realizations (sufficient constructions).
3.1 Pathology of Indeterminism in IEEE 754 Floating-Point Arithmetic
IEEE 754 floating-point arithmetic, the foundation of modern numerical computing, is non-associative. For example, in double precision $(10^{16}+1)-10^{16}=0$, because the 1 is absorbed when $10^{16}+1$ is rounded; and $(2^{53}+1)-1 = 2^{53}-1$ while $2^{53}+(1-1) = 2^{53}$, so the same operands grouped differently yield different results, neither of which is guaranteed to match the value intended as a real number. [R10]
Furthermore, in parallel reduction on GPUs, the association order may change with thread placement or scheduling, so the final result can fluctuate even for identical inputs (irreproducibility). These minute differences can be amplified by the non-linearities of AI models, risking a reversal of the final inference result.
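The grouping dependence can be checked directly in any IEEE 754 double-precision environment; a minimal demonstration (Python floats are IEEE 754 doubles):

```python
a, b, c = 2.0 ** 53, 1.0, -1.0

left = (a + b) + c    # a + b rounds back to 2**53, so the result is 2**53 - 1
right = a + (b + c)   # b + c is exactly 0, so the result is 2**53

print(left, right, left == right)   # 9007199254740991.0 9007199254740992.0 False
```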
3.2 Interval Arithmetic and Inclusion of Truth
GhostDrift adopts "Interval Arithmetic" to address this issue. Among the basic operations, multiplication, for instance, is defined as:
$$[x_L, x_U] \times [y_L, y_U] = [\min(x_L y_L, x_L y_U, x_U y_L, x_U y_U), \max(x_L y_L, x_L y_U, x_U y_L, x_U y_U)]$$
Furthermore, "Outward Rounding" is introduced. By applying outward rounding (directed rounding) defined in the Boundary Spec to each operation, the calculated interval $[X_L, X_U]$ is guaranteed to contain the true value $X$ intended by real number arithmetic (Inclusion Guarantee). Environmental differences or parallelization may affect the interval width, but the property of "containing the true value" is preserved. [R11][R12]
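A minimal sketch of the multiplication rule with an outward-rounding step. A production implementation would use the directed rounding modes of IEEE 754 / IEEE 1788 [R10][R12]; here the outward step is approximated with a one-ulp nudge via math.nextafter, which is coarser but, for finite non-overflowing endpoints, still yields an enclosing interval:

```python
import math
from itertools import product

def mul_interval(x_lo, x_hi, y_lo, y_hi):
    """Interval multiplication with outward rounding.

    Each endpoint product is computed in round-to-nearest and then nudged one
    ulp outward, so the returned interval contains the true real-number product."""
    candidates = [a * b for a, b in product((x_lo, x_hi), (y_lo, y_hi))]
    lo = math.nextafter(min(candidates), -math.inf)   # push the lower bound down
    hi = math.nextafter(max(candidates), math.inf)    # push the upper bound up
    return lo, hi

# The true product of any x in [0.1, 0.2] and y in [0.3, 0.4] lies inside this interval.
print(mul_interval(0.1, 0.2, 0.3, 0.4))
```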
3.3 Prime Gravity Protocol and Outward Rounding: GhostDrift's Solution
The "Prime Gravity Protocol" developed by GhostDrift Mathematical Institute elevates the philosophy of Interval Arithmetic into an AI audit protocol built on a distributed Ledger.
In Prime Gravity, (i) Boundary Spec $B$ is fixed as Certificate $C$, and (ii) computation logs (inputs, intermediate intervals, final intervals, hashes, residue representations, etc.) generated under $C$ are inscribed in an append-only Ledger $L$. The auditor re-computes under the same $C$ and verifies consistency with $L$ via $\text{Verify}(C, L)$. If the evaluator modifies rounding rules, exclusion rules, thresholds, etc., it is detected as a mismatch in $C$ or $L$. CRT (Chinese Remainder Theorem) is a component to restore integers (or scaled rational numbers) from residue representations, and integrity is guaranteed by the ledger commitment. This detects "retroactive evaluation" as an inconsistency between residue representations and the ledger, introducing "accountability within the Boundary Spec" to areas that have traditionally escaped into infinity through "freedom of interpretation."
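The Prime Gravity Protocol itself is not specified in code here; as an illustration of the CRT component alone (moduli and values below are hypothetical), the following sketch reconstructs an integer from its residues, which is the operation an auditor would re-run to check that logged residue representations and the committed value agree:

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Chinese Remainder Theorem: recover the unique integer x modulo
    prod(moduli) from its residues x mod m_i (moduli pairwise coprime).
    A mismatch between logged residues and the reconstructed value flags
    a tampered or retroactively edited record."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse of Mi mod m
    return x % M

# Example with hypothetical values: residues of 1234 modulo three primes reconstruct 1234.
moduli = [101, 103, 107]
print(crt_reconstruct([1234 % m for m in moduli], moduli))   # -> 1234
```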
3.4 Evaluation Protocol (Falsifiability)
This framework is verified in the following form:
Pre-Fixation: Fix $B$ (Window-Spec, Rounding Rules, Stopping Rule, Evaluation Function) and commit it as Certificate $C$.
Execution: Run $\text{FC}(B,x)$ to generate the interval sequence and the ledger $L$.
Verification: Execute $\text{Verify}(C, L)$.
PASS: Inclusion and integrity hold under fixed specifications.
FAIL: Deviation from specifications, missing records, or modification occurred somewhere.
Here, FAIL means that fraud, deviation from the specification, or irreproducibility occurred somewhere in operation, and the result may not be used (as a safety declaration or passing judgment).
Chapter 4: Applied Implications (Auxiliary): Stopping Rules, Localization of Responsibility, and Institutional Design
4.1 Watsuji Ethics and Spatial Finitude
The "Spatiality" in Watsuji Tetsuro's ethics suggests that humans become ethical subjects within "bounded" communities. GhostDrift reconstructs digital space as a "Trusted Space" partitioned by "Finite Closure," making "betweenness" and mutual accountability among agents definable.
4.2 Bounded Rationality and Stopping Rules
Herbert Simon's "Bounded Rationality" emphasizes that agents with finite resources must "stop" searching and decide at an appropriate point. Enforcing "Stopping Rules" in Finite Closure implements the ethical stance of preventing infinite regress and accepting responsibility for decisions made under incomplete information.
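As an illustrative sketch only (the aspiration level and budget below are hypothetical parameters standing in for elements of $R$ and the Stopping Rule), a Simon-style satisficing rule looks like this:

```python
def satisficing_search(candidates, score, aspiration, budget):
    """Satisficing stopping rule in the spirit of bounded rationality:
    stop at the first candidate whose score reaches the aspiration level,
    or when the evaluation budget is exhausted, and accept the best seen."""
    best, best_score, used = None, float("-inf"), 0
    for c in candidates:
        if used >= budget:
            break                      # resource constraint reached: stop
        s = score(c)
        used += 1
        if s > best_score:
            best, best_score = c, s
        if s >= aspiration:
            break                      # aspiration met: stop and commit
    return best, used

# Example: accept the first option scoring at least 0.9, within 100 evaluations.
print(satisficing_search(range(50), lambda c: c / 50, aspiration=0.9, budget=100))
```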
4.3 Resolving the Responsibility Gap
Through "Localization of Responsibility" and "Fixation of Causality via Ledger," Finite Closure severs the infinite regress of causality in AI accidents, providing a "knot" that clarifies the locus of responsibility.
Chapter 5: Synthesis: Number-Theoretic Windows (Weights), Boundary Specs, and Localization of Causality
5.1 Finite Closure as Zeta Function Regularization
Just as Zeta-function regularization tames the divergence of an infinite series, Finite Closure introduces an "Evaluation Window" as a weight function: it cuts an engineering-manageable "Effective Finite Region" out of unbounded causality so that the evaluation becomes finite and convergent.
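As a schematic illustration only (the hard window $w$ below is one hypothetical choice; a smooth regulator such as $w(n)=e^{-n/\Lambda}$ plays the same role), the analogy amounts to replacing an unbounded aggregate over causal history with a weighted one whose weight is fixed in the Boundary Spec:
$$E = \sum_{n=1}^{\infty} e_n \;\;\longrightarrow\;\; E_w = \sum_{n=1}^{\infty} w(n)\, e_n, \qquad w(n) = \begin{cases} 1 & (n \le N) \\ 0 & (n > N) \end{cases}$$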
5.2 Relationship with Compactification (Positioning): Accountability via Severance, Not Inclusion
The terminology in this chapter uses the "opening and closing of dimensions" as a metaphor; neither compactification constructions from topology nor physical models from string theory are used in any proof in this paper. That said, compactification in string theory is useful as a metaphor for conveying the intuition behind Finite Closure (the opening/closing of boundaries and the proliferation of dimensions).
Topological compactification is an operation that "includes" objects by adding points at infinity to make the space compact, thereby making the tools of analysis and topology applicable. Finite Closure, by contrast, does not mathematically deny objects that involve infinity; it "declares a Boundary Spec and severs the outside as out-of-scope" from the standpoint of judgment and audit. In other words, while compactification aims at the completion of a mathematical structure, Finite Closure aims at auditability and the determination of accountability. This difference is not merely metaphorical; it connects directly to the minimal definitions of (A) Boundary Spec, (B) Fixation, and (C) Decision.
5.3 Integrated Structural Analysis
| Domain | Traditional Paradigm (Infinite, Continuous, Open) | GhostDrift "Finite Closure" Paradigm (Finite, Discrete, Closed) |
| --- | --- | --- |
| Mathematics | Actual Infinity, Continuum Hypothesis | Ultrafinitism, Feasible Numbers, Constructive Proof [R2][R3] |
| Physics | Quantum Mechanics (Standard), Infinite Spacetime | Cellular Automaton, Quantum Reference Frames, Cut-off [R7][R18] |
| Computation | Floating-Point (Indeterminism) | Interval Arithmetic / Outward Rounding, Complete Inclusion [R10][R11] |
| Thermodynamics | Maxwell's Demon (Infinite Memory) | Demon with Finite Memory (Cost of Forgetting, Stopping Rule) [R15] |
| Ethics | Infinite Responsibility, Universal Individual | Spatial Betweenness, Localization of Responsibility, Bounded Rationality |
Minimal Difference of This Proposal
Interval Arithmetic and formal methods already exist individually; what this proposal does is integrate them as requirements for "non-retroactive" operational governance. Its novelty lies in minimizing the mandatory requirements to:
Pre-Fixation of Boundary Specs (Window-Spec / Stopping Rule)
Inclusion Guarantee of Numerical Values (Outward Rounding + Intervals)
Freezing of Execution (Ledger)
as a "Mandatory Three-Piece Set," and in closing the audit with PASS/FAIL.
Conclusion
GhostDrift Mathematical Institute formulates "Finite Closure" as a boundary specification principle to establish auditability and determination of accountability under finite resources. This report presented its minimal definitions (A)(B)(C) and the connection with previous research across mathematics, physics, and computation. Future work will reinforce verifiability with reproducible experiments and implementation logs regarding specific protocols satisfying (A)(B)(C) (Interval Arithmetic, Outward Rounding, Ledger Fixation, Prime Gravity).
References (Minimal Set)
[R1] L. Kronecker, (Lecture/Remark regarding integers), 1886.
[R2] Doron Zeilberger, "Real" Analysis is a Degenerate Case of Discrete Analysis, in New Progress in Difference Equations (Proc. ICDEA 2001), Taylor & Francis, 2004.
[R3] Rohit Parikh, "Existence and Feasibility in Arithmetic," Journal of Symbolic Logic 36(3), 1971, 494–508.
[R4] V. Yu. Sazonov, "On Feasible Numbers," in D. Leivant (ed.), Logic and Computational Complexity (1995).
[R5] Nicolas Gisin, "Real Numbers Are Not Real" (Essay/Talk), 2019.
[R7] S. D. Bartlett, T. Rudolph, R. W. Spekkens, "Reference frames, superselection rules, and quantum information," Rev. Mod. Phys. 79, 555 (2007).
[R10] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2019.
[R11] R. E. Moore, Interval Analysis, Prentice-Hall, 1966.
[R12] IEEE Standard for Interval Arithmetic, IEEE Std 1788-2015.
[R14] J. D. Bekenstein, "A universal upper bound on the entropy to energy ratio for bounded systems," Phys. Rev. D 23, 287 (1981).
[R15] R. Landauer, "Irreversibility and Heat Generation in the Computing Process," IBM Journal of Research and Development 5(3), 1961.
[R16] C. H. Bennett, "The thermodynamics of computation—A review," Int. J. Theor. Phys. 21, 905–940 (1982).
[R18] Gerard ’t Hooft, The Cellular Automaton Interpretation of Quantum Mechanics, Springer, 2016.


