
AI Governance Report 2026 – State of the Art, Limitations, and Breakthroughs (GhostDrift)

0. Executive Summary

0.1 Conclusion: The Divergence between Regulatory Enforcement and Hollow Compliance

In early 2026, global AI governance reached a critical inflection point. With the enforcement of the EU AI Act's prohibited practices and GPAI rules, alongside the standardization of federal procurement requirements in the U.S. (OMB M-25-22), AI governance has transformed from a "voluntary effort" into a de facto "market license to operate." However, on the ground, compliance has increasingly devolved into bureaucratic checklist exercises driven by committees, paradoxically heightening the risk of "Ghost Drift"—the evaporation of accountability.

This report synthesizes the latest regulatory and standardization trends, identifies structural flaws in existing governance models, and proposes GhostDrift as a breakthrough framework. GhostDrift does not merely add procedures; instead, it embeds "Pre-decision Constraints" and "Post-hoc Impossibility" to physically fix the boundaries of responsibility.

0.2 State of the Art (Confirmed Status in 2026)

  1. Hybridization of Regulation: The landscape has shifted from seeking interoperability between the EU's hard law (AI Act) and Anglo-American soft law to a phase of concrete enforcement. Companies have standardized their global operations around either the strictest regulation (the Brussels Effect) or the largest customer (the U.S. Federal Government).

  2. Systematization of Standards: The ecosystem for third-party certification is now fully operational. Beyond ISO/IEC 42001 (Management System), ISO/IEC 42005 (Impact Assessment) and 42006 (Certification Body Requirements) provide the necessary rigor. The NIST AI RMF has also standardized Generative AI risk management via the GenAI Profile (600-1).

  3. Operational Backbone: Model inventory, risk classification, hallucination mitigation for RAG (Retrieval-Augmented Generation), and log preservation for Adversarial Testing have become the operational baseline for enterprises.

0.3 Limitations (The Evaporation of Accountability)

  1. The Documentation Paradox: The sheer volume of documentation required to demonstrate accountability has become so vast that no single individual can grasp the holistic picture.

  2. Arbitrariness of Thresholds: The determination of boundaries between "High Risk" and "Limited Risk" is often left to self-assessment, leading to widespread regulatory arbitrage through interpretation.

  3. Limits of Static Auditing: Snapshot-based audits cannot guarantee continuous safety for AI models that behave non-deterministically and change dynamically.

0.4 The Breakthrough (GhostDrift)

GhostDrift is a technological architecture that renders "accountability evasion" mathematically and structurally impossible, rather than relying on human goodwill or vigilance. It offers principled solutions to accountability evaporation that existing frameworks cannot prevent:

  • ADIC Ledger: A ledger that guarantees complete reproducibility by eliminating calculation errors through Rational Arithmetic and Directed Rounding.

  • Explanation Budget: A quantitative cap (budget) on the qualitative resource of "explanation," preventing the infinite proliferation of exception handling.

  • Pre-decision Constraint: Physically blocks model outputs that violate pre-defined constraints, disallowing post-hoc justifications.





1. Scope and Definitions

1.1 Scope and Period

This report covers the following four layers from late 2024 to January 2026:

  • Regulation: EU AI Act (2025-2026 application phase), U.S. Executive Orders/OMB Memoranda, UK Framework.

  • Standard: ISO/IEC 42000 series, NIST AI RMF & GenAI Profile, OECD Classification.

  • Practice: Corporate model management, MLOps/LLMOps, audit response.

  • Research: Preceding studies on AI responsibility, transparency, and auditability.

1.2 Definitions

  • AI Governance: The framework of direction, control, and oversight to ensure an AI system's lifecycle protects stakeholder rights and aligns with organizational goals and legal compliance.

  • Ghost Drift: A phenomenon where the locus of responsibility within an organization becomes ambiguous, causing the system to drift or go out of control without anyone making a definitive decision. It also refers to the structural mechanism by which responsibility evaporates.


2. Integration Map: Hierarchy of Regulation, Standards, and Practice

In 2026, AI governance exists in a fully interconnected four-layer structure.


  • Regulation (Layer 1)
    Key Components (2026): EU AI Act (Art 17 QMS, Art 27 FRIA); US OMB M-25-22 (Procurement); EO 14365; UK AI Playbook
    Function & Role: Source of enforcement. Defines market entry conditions and penalties.

  • Standard (Layer 2)
    Key Components (2026): ISO/IEC 42001 (Certification); ISO/IEC 42005 (Impact Assessment); NIST AI RMF 1.0 + GenAI Profile (600-1); OECD.AI Classification
    Function & Role: Common language and criteria. Translates regulatory demands into technical and process requirements.

  • Practice (Layer 3)
    Key Components (2026): Model Inventory & Registry; Risk Classification & Mapping; Incident Response & Reporting
    Function & Role: Operational processes. Daily management tasks and evidence generation.

  • Technology (Layer 4)
    Key Components (2026): MLOps / LLMOps Pipeline; Adversarial Testing / Red Teaming; Audit Logs & Artifact Stores
    Function & Role: Implementation and enforcement. Automated controls and log preservation.


3. State of the Art: Institutions and Practices in 2026

3.1 Europe (EU): Hard Law Enforcement and Political Adjustment

The AI Act entered into force in August 2024, making 2026 the critical period in which mandatory application to high-risk AI systems arrives. It is nevertheless necessary to distinguish legal certainties from political adjustments (volatility risks) in how implementation proceeds.

A) Legal Milestones (Confirmed)

  • February 2025: Complete prohibition of banned AI practices (e.g., social scoring) and entry into force of AI literacy obligations.

  • August 2025: Application of governance rules for General-Purpose AI (GPAI) models. Demonstration of compliance based on the GPAI Code of Practice is required.

  • August 2026: Mandatory Conformity Assessment and Fundamental Rights Impact Assessment (FRIA) for most High-Risk AI Systems (Annex III).

B) Political & Implementation Volatility Risks

  • Discussions continue regarding the potential delay of high-risk application timing and the expansion of regulatory sandboxes for SMEs, driven by industry requests.

  • Detailed KPIs for the Code of Practice are expected to be adjusted throughout late 2025; companies need a two-tiered approach: adhering to the "Legal Text" while monitoring updates to "Operational Guidance."

3.2 United States (US): Unified Market Formation via Federal Leadership

After the 2025 administrative transition, the U.S. pivoted toward preventing regulatory fragmentation by state laws and forming a de facto standard by leveraging federal purchasing power.

  • EO 14365 (December 2025): Executive Order on Establishing a Unified Framework for National AI Policy. Aims to suppress the "patchwork" of AI regulations progressing independently in each state and create a unified market rule at the federal level. This frees companies from state-by-state compliance costs but makes adherence to federal governance standards mandatory.

  • OMB M-25-22 (2025): Memorandum on Advancing Efficiency and Standardization in Federal AI Procurement. Rather than strictly tightening procurement requirements, this standardizes contract requirements that were previously disjointed across agencies, establishing a common protocol for safe and rapid AI adoption. Consequently, vendors supplying the government are required to comply with these "Unified Federal Standards."

  • NIST GenAI Profile (NIST.AI.600-1): Provides specific action plans to address risks unique to Generative AI (hallucinations, copyright, CBRN information, etc.).

3.3 United Kingdom (UK) & International: Government Practices Spilling Over to Private Sector

  • UK AI Playbook (January 2025): Generative AI Framework for HMG (Government Guidance). While outlining procedures and standards for public sector AI adoption, it effectively serves as a reference model for the private sector, defining "what the government considers safe AI."

  • OECD AI Classification: The classification of AI systems (System vs Model, Context), a prerequisite for risk assessment, has taken root as an international standard.

3.4 Progress in Standardization: ISO/IEC 42000 Series

  • ISO/IEC 42001 (AI-MS): The number of certified companies surged throughout 2025. It has begun to function as a "baseline requirement" for supply chain selection.

  • ISO/IEC 42005 (AI System Impact Assessment): Standardizes procedures for Algorithmic Impact Assessments (AIA).

  • ISO/IEC 42006 (Requirements for bodies providing audit): Ensures the quality and competence of certification bodies.


4. Limitations: Structural Defects Leading to Accountability Evaporation

Institutions are in place, but they have created new "voids of responsibility."

4.1 The Documentation Shield

The massive technical documentation required by the EU AI Act and others paradoxically creates a situation where "no one reads it." Companies claim to have fulfilled accountability by maintaining documents, but in reality, risks are buried within the text. It functions as a liability shield: "We documented it, therefore we are not responsible."

4.2 Black-boxing via Self-Assessment

For many high-risk determinations and conformity assessments, self-assessment by the provider is still permitted. If Risk Acceptance Criteria are relaxed, compliance can be achieved on paper. The critical issue is the lack of an "objective stop mechanism" external to the system.

4.3 The Post-hoc Fallacy

Current auditing focuses on analyzing logs after an accident or drift has occurred. However, the non-deterministic behavior of Generative AI allows for post-hoc rationalization to justify almost any outcome. In a system where "reasons can be attached later," responsibility is never fixed.


5. The Breakthrough: Reconstructing Governance with GhostDrift

GhostDrift is an architecture designed to create a "state where evasion is systemically impossible," rather than relying on human ethics or post-hoc audits. This is not an extension of existing governance but a paradigm shift based on mathematical necessity.

5.1 GhostDrift Impossibility Theorem

No matter how sophisticated existing governance methods become, they cannot avoid accountability evaporation while the following conditions hold.

Theorem (Post-hoc Impossibility, Informal). Any AI governance system that satisfies all three of the following conditions cannot avoid "Ghost Drift" (the evaporation of accountability) within finite time:

  1. Dependence on Floating-Point Arithmetic: The calculation process includes rounding errors or non-determinism and therefore cannot guarantee complete reproducibility.

  2. Permissibility of Post-hoc Explanation: There is room (infinite degrees of freedom) to add or modify explanations and interpretations after a decision is made.

  3. Diffused Collective Responsibility: The locus of responsibility is treated as a set (an organization or committee) rather than a specific cryptographic signature key.

GhostDrift fixes responsibility physically by denying exactly these three conditions (via Rational Arithmetic, Pre-decision Constraints, and Single Signatures).
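To make Condition 1 concrete, the minimal Python sketch below (illustrative only, not part of any GhostDrift specification) contrasts floating-point accumulation, whose result carries rounding error, with exact rational arithmetic plus directed (outward) rounding of the kind the ADIC Ledger assumes.

  # Illustrative sketch only: contrasts floating-point drift with the exact,
  # reproducible arithmetic that Condition 1 of the theorem rules out.
  from fractions import Fraction
  from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING

  weights = [0.1] * 10

  # Floating point: the result depends on accumulated rounding error.
  float_sum = sum(weights)                      # 0.9999999999999999, not 1.0
  print(float_sum == 1.0)                       # False

  # Rational arithmetic: exact and bit-for-bit reproducible on any machine.
  exact_sum = sum(Fraction(1, 10) for _ in range(10))
  print(exact_sum == 1)                         # True

  # Directed (outward) rounding: when a decimal rendering is needed for a report,
  # record a lower and an upper bound instead of a single rounded value.
  as_decimal = Decimal(exact_sum.numerator) / Decimal(exact_sum.denominator)
  lower = as_decimal.quantize(Decimal("0.0001"), rounding=ROUND_FLOOR)
  upper = as_decimal.quantize(Decimal("0.0001"), rounding=ROUND_CEILING)
  print(lower, upper)                           # 1.0000 1.0000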

5.2 Core Components and Mapping to Regulatory Gaps

Each GhostDrift component is designed to plug a specific "loophole" in regulations or standards.

  • ADIC Ledger (Rational + Outward Rounding)
    Definition / Function: A ledger recording all inference and evaluation processes using rational arithmetic and directed (outward) rounding, guaranteeing bit-level reproducibility.
    Gap in Regulation/Standard: ISO 42001 (A.9.2 Reporting). Solves the problem where logs exist but lack reproducibility, leading to "he said, she said" disputes.

  • Explanation Budget
    Definition / Function: Defined in this report as a finite resource computed as a linear combination of (a) added exception rules, (b) human approval events, and (c) post-hoc explanation nodes.
    Gap in Regulation/Standard: EU AI Act (Art 13 Transparency). Physically limits the dilution of responsibility caused by the infinite proliferation of explanatory documentation.

  • Pre-decision Constraint
    Definition / Function: Hard-codes risk boundaries (guardrails) external to the model, physically blocking outputs that deviate.
    Gap in Regulation/Standard: NIST AI RMF (Map/Manage). Blocks risks that are identified during mapping but ignored in operation (or overridden by humans).

  • The Beacon (Responsibility Boundary)
    Definition / Function: Identifies the human (or module) with the authority to "stop" the system and fixes that approval act with a digital signature.
    Gap in Regulation/Standard: OMB M-25-21 (Governance / CAIO). Reduces the "governance structure" required of CAIOs from an abstract org chart to cryptographic signature responsibility.
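The Explanation Budget entry above specifies a formula rather than a product feature. The sketch below is one possible Python reading of that formula; the weights, the budget limit, and the field names are assumptions made for illustration and are not prescribed by GhostDrift or by any regulation.

  # Hypothetical reading of the Explanation Budget formula:
  # consumption = a*(exception rules) + b*(human approvals) + c*(post-hoc explanation nodes)
  from dataclasses import dataclass, field

  @dataclass
  class ExplanationBudget:
      limit: float                      # total budget fixed before deployment
      weights: tuple = (1.0, 2.0, 3.0)  # illustrative weights for (a), (b), (c)
      events: list = field(default_factory=list)

      def spend(self, exception_rules=0, approvals=0, posthoc_nodes=0):
          a, b, c = self.weights
          cost = a * exception_rules + b * approvals + c * posthoc_nodes
          self.events.append((exception_rules, approvals, posthoc_nodes, cost))
          return self.remaining()

      def remaining(self):
          spent = sum(e[3] for e in self.events)
          return self.limit - spent

      def exhausted(self):
          return self.remaining() <= 0

Keeping the weights explicit makes the budget auditable: the cost of every exception rule, approval, and explanation node is recorded alongside the running balance.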

5.3 Case Study: Runaway and Shutdown of a Certified System

The decisive difference between existing frameworks and GhostDrift lies in "whether it stops or not."

  • Case: A financial advice system powered by an LLM, compliant with NIST AI RMF and certified under ISO 42001, begins generating responses exceeding risk tolerance during sudden market volatility.

  • Existing Governance: An incident committee is convened, and risk acceptance criteria are relaxed post-hoc due to "exceptional market conditions." Operations continue with ambiguous responsibility (who decided to relax the criteria?), and subsequent losses are treated as "unforeseen."

  • GhostDrift: At the moment a second exception approval (risk relaxation) is attempted, the Explanation Budget is depleted. The system follows a pre-defined protocol and physically forces a shutdown (or safe mode). This makes "continued runaway" via responsibility shifting or post-hoc justification impossible.
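Replaying the case with the illustrative ExplanationBudget sketch from Section 5.2 makes the trigger point explicit. The limit of 5.0 is an arbitrary assumption chosen so that the second exception approval exhausts the budget, and shutdown_or_safe_mode is a hypothetical hook into the serving layer.

  # Replaying the case study with the illustrative sketch from Section 5.2.
  budget = ExplanationBudget(limit=5.0)

  budget.spend(exception_rules=1, approvals=1)   # first relaxation: cost 3.0, 2.0 left
  budget.spend(exception_rules=1, approvals=1)   # second relaxation: cost 3.0, budget exceeded

  if budget.exhausted():
      # Pre-defined protocol: no committee discussion, no post-hoc justification.
      shutdown_or_safe_mode()                     # hypothetical hook into the serving layer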

5.4 Evidence Architecture

  1. Constraint Definition: Define the allowable risk range (budget) based on regulatory requirements.

  2. Inference with ADIC: Record the AI inference process in the ADIC Ledger.

  3. Budget Check: Determine if the output is within the Explanation Budget. If exceeded, block or request Beacon approval.

  4. Immutable Logging: Save the determination result and approval signature in a tamper-proof format.
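A minimal sketch of how the four steps could be chained, assuming the ExplanationBudget sketch from Section 5.2 and only the Python standard library. The hash-chained list is a generic tamper-evidence pattern standing in for a real ADIC Ledger store, and the risk threshold of 3/100 is an arbitrary example constraint.

  import hashlib
  import json
  from fractions import Fraction

  LEDGER = []   # in practice an append-only store, not an in-memory list

  def append_record(record):
      # Step 4: each record is chained to the previous one, so later edits break the chain.
      prev_hash = LEDGER[-1]["hash"] if LEDGER else "genesis"
      payload = json.dumps(record, sort_keys=True, default=str)
      sealed = dict(record, prev=prev_hash,
                    hash=hashlib.sha256((prev_hash + payload).encode()).hexdigest())
      LEDGER.append(sealed)
      return sealed

  def decide(inputs, risk_score, budget, risk_limit=Fraction(3, 100)):
      # Step 1: the allowable risk range is defined up front as an exact rational.
      # Step 2: the score is handled as a rational, so the comparison is reproducible.
      within_constraint = risk_score <= risk_limit
      if within_constraint:
          allowed = True
      else:
          # Step 3: outside the constraint, a Beacon approval consumes budget;
          # once the budget is exhausted, the output is blocked.
          budget.spend(approvals=1)
          allowed = not budget.exhausted()
      return append_record({"inputs": inputs, "risk": str(risk_score),
                            "allowed": allowed, "budget_left": budget.remaining()})

Because the risk score and the threshold are rationals, replaying the same inputs reproduces the same record, including its hash, on any machine.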


6. Implementation Roadmap

This section presents a phased implementation plan for enterprises during FY2026. GhostDrift does not replace existing processes; it is layered on top of them as a "Quality Assurance Layer."

Phase 1: Visualization and Baseline (Q1-Q2 2026)

  • Inventory Completion: Registration of all models, including Shadow AI.

  • Regulatory Mapping: Documenting the rationale for whether internal systems fall under EU AI Act "High Risk" or U.S. OMB "Rights-Impacting AI."

  • GhostDrift v0: Installing "Exception Counters" on existing log infrastructure to begin measuring baseline data for the Explanation Budget (Exception Count + Approval Count).
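One way to stand up GhostDrift v0 without touching the serving path is a log-scanning counter, sketched below. The JSON-lines format and the event_type / model_id field names are placeholders that would have to match the organization's actual log schema.

  # Hypothetical Phase 1 exception counter: scans existing JSON-lines logs and
  # reports the baseline Exception Count + Approval Count per model.
  import json
  from collections import Counter

  def baseline_counts(log_path):
      counts = Counter()
      with open(log_path, encoding="utf-8") as f:
          for line in f:
              event = json.loads(line)
              model = event.get("model_id", "unknown")
              if event.get("event_type") == "exception_rule_added":
                  counts[(model, "exceptions")] += 1
              elif event.get("event_type") == "human_approval":
                  counts[(model, "approvals")] += 1
      return counts

  # Example: baseline_counts("llm_gateway_2026q1.jsonl")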

Phase 2: Embedding Constraints (Q3 2026)

  • Hardening Guardrails: Implementing business-specific "Pre-decision Constraints" at the API Gateway layer.

  • ADIC Ledger Pilot: Begin preserving decision logs in a reproducible format for high-risk areas like finance and healthcare.
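As an illustration of a Pre-decision Constraint at the gateway layer, the sketch below wraps a model call in a hard check that runs before any response is released. The forbidden-phrase list and the model_call interface are stand-ins for whatever the gateway actually enforces and exposes.

  # Hypothetical gateway-layer guardrail: the constraint is evaluated before the
  # response leaves the gateway, so a violating output is never released.
  FORBIDDEN_PHRASES = ("guaranteed return", "cannot lose")   # placeholder constraint set

  class ConstraintViolation(Exception):
      pass

  def guarded_completion(model_call, prompt):
      response = model_call(prompt)                 # model_call: the gateway's LLM client
      lowered = response.lower()
      if any(p in lowered for p in FORBIDDEN_PHRASES):
          # Pre-decision constraint: block instead of annotating the log after the fact.
          raise ConstraintViolation("output blocked by pre-decision constraint")
      return response

Placing the check at the gateway rather than in the application means no code path exists in which a violating output reaches the user and is explained away afterwards.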

Phase 3: Fixing Responsibility and External Certification (Q4 2026)

  • Beacon Implementation: Mandating electronic signatures for critical exception approval processes, identifying responsibility at the individual level.

  • Third-Party Conformity Assessment: Submitting GhostDrift logs as "tamper-proof audit trails" during ISO 42001 certification audits to prove conformity.
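A Beacon approval can be bound to a single named approver with an ordinary asymmetric signature. The sketch below assumes the third-party cryptography package and Ed25519 keys; any signature scheme the organization already operates (for example, its existing PKI) would serve equally well.

  # Sketch of a Beacon approval event signed by a single named approver.
  # Assumes: pip install cryptography
  import json
  from datetime import datetime, timezone
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  approver_key = Ed25519PrivateKey.generate()       # in practice: the approver's own key

  def sign_approval(approver_id, decision_id, action):
      event = {
          "approver": approver_id,
          "decision": decision_id,
          "action": action,                          # e.g. "relax_risk_threshold"
          "timestamp": datetime.now(timezone.utc).isoformat(),
      }
      message = json.dumps(event, sort_keys=True).encode()
      signature = approver_key.sign(message)
      return event, signature

  # Verification later binds the act to exactly one key holder:
  event, sig = sign_approval("jane.doe@example.com", "dec-042", "relax_risk_threshold")
  approver_key.public_key().verify(sig, json.dumps(event, sort_keys=True).encode())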


7. References (Selected Primary Sources & Preceding Research)

Regulation & Policy

  1. European Parliament & Council. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.

  2. European Commission AI Office. (2025). General-Purpose AI Code of Practice: Final Draft.

  3. U.S. Office of Management and Budget (OMB). (2024). Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

  4. U.S. Office of Management and Budget (OMB). (2025). Memorandum M-25-22: Advancing Efficiency and Standardization in Federal AI Procurement.

  5. The White House. (2025). Executive Order 14365: Establishing a Unified Framework for National AI Policy.

  6. UK Government. (2025). Generative AI Framework for HMG (The AI Playbook). gov.uk.

  7. UK DSIT. (2024). A Pro-innovation Approach to AI Regulation: Response to Consultation.

  8. OECD. (2024). Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD.AI Policy Observatory.

Standards & Frameworks

  1. ISO/IEC. (2023). ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system.

  2. ISO/IEC. (2025). ISO/IEC 42005:2025 Information technology — Artificial intelligence — AI system impact assessment.

  3. ISO/IEC. (2025). ISO/IEC 42006:2025 Information technology — Artificial intelligence — Requirements for bodies providing audit and certification of AI management systems.

  4. NIST. (2023). AI Risk Management Framework (AI RMF 1.0). NIST Trustworthy and Responsible AI.

  5. NIST. (2024). Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST.AI.600-1).

  6. NIST. (2025). A Plan for Global Engagement on AI Standards (NIST.AI.100-5).

  7. CEN-CENELEC. (2025). Draft Harmonised Standards for the AI Act. (JTC 21).

Research & Industry Reports

  1. Mökander, J., et al. (2024). Auditing Large Language Models: A Three-Layered Approach. AI and Ethics.

  2. Raji, I. D., et al. (2024). The Fallacy of AI Functionality: Need for Pre-deployment Audits. FAccT '24.

  3. Bommasani, R., et al. (2024). The Foundation Model Transparency Index. Stanford CRFM.

  4. GhostDrift Research Group. (2025). The Ghost Drift: Mathematical Modeling of Accountability Evaporation in AI Systems. (Internal Whitepaper).

  5. Koenig, G., et al. (2024). Governance of Superintelligence: Safety and Security. OpenAI.

  6. Anthropic. (2024). Responsible Scaling Policy, Version 2.0.

  7. Google DeepMind. (2024). The Ethics of Advanced AI Assistants.

  8. Schuett, J. (2024). Risk Management in the EU AI Act. European Journal of Risk Regulation.

  9. Veale, M., & Borgesius, F. Z. (2024). Demystifying the Draft EU AI Act. Computer Law & Security Review.

  10. Ada Lovelace Institute. (2024). Inclusive AI Governance: Civil Society Perspectives.

  11. Future of Life Institute. (2025). Post-Act Implementation Guide for High-Risk AI.

  12. KPMG. (2025). Navigating ISO 42001 Certification: Global Trends.

  13. Deloitte. (2025). State of AI in the Enterprise, 6th Edition: The Governance Gap.

  14. Gartner. (2025). Magic Quadrant for AI Governance and Risk Management Platforms.

  15. IEEE. (2024). Standard for Transparency of Autonomous Systems (P7001).


Appendix A: AI Governance Issue Matrix (2026)

  • Transparency
    Model-Centric: Model Cards / System Cards. Disclosure of training data and performance metrics for the model alone. (EU AI Act GPAI obligations)
    System-Centric: User Interface Transparency. Clear indication of AI usage, presentation of rationale, interaction logs. (EU AI Act Art 50)

  • Auditability
    Model-Centric: Reproducible Training Runs. Reproducibility of training, logging of hyperparameters. (For academics and developers)
    System-Centric: GhostDrift / ADIC Ledger. Reproducibility of the entire decision-making process and fixation of responsibility boundaries. (For practitioners and auditors)


Appendix B: GhostDrift Translation Table (For Auditors & Policy Makers)

  • Pre-decision Constraint
    Existing Framework Term (ISO/NIST): Policy / Gating / Guardrails
    Decisive Difference: Existing terms expect compliance to be achieved through operation, whereas GhostDrift refers to an implementation that is physically and mathematically impossible to deviate from.

  • Post-hoc Impossibility
    Existing Framework Term (ISO/NIST): Audit Trail / Logging
    Decisive Difference: Logs serve tamper detection, whereas GhostDrift eliminates the room for post-hoc explanation generation itself via rational arithmetic and related mechanisms.

  • Explanation Budget
    Existing Framework Term (ISO/NIST): Exception Handling
    Decisive Difference: Exception handling allows infinite additions, whereas the Budget is finite; depletion triggers system shutdown (it is managed as a finite resource).

