Why Medical AI Guidelines Need Gating Conditions: ADIC and the Implementation Gap
- kanna qed
- 3 days ago
- Reading time: 4 min
A Japanese case showing why duties of care must become machine-enforceable conditions
Across healthcare AI governance, a recurring problem remains unresolved: guidelines explain what actors should pay attention to, but often do not specify the machine-enforceable conditions under which outputs may proceed, must be blocked, or must be escalated for review.
The problem becomes acute in high-stakes healthcare contexts, where outputs should not advance merely because a user is expected to "be careful." Globally, major frameworks emphasize human oversight, logging, and governance: WHO guidance centers on ethics and human rights, including dedicated guidance on large multimodal models (LMMs) for health; the FDA's Clinical Decision Support Software guidance clarifies the regulatory approach to such software; the NIST AI RMF frames risk management; and the EU AI Act mandates human oversight for high-risk systems. Yet translating these governance requirements into daily operations remains a critical hurdle.
Medical AI governance globally suffers from a missing implementation layer. Recent Japanese guidelines serve as one concrete case that makes this missing gating layer highly visible.

1. Japanese Medical Data Governance Documents Solve the Input-Side Problem
The "Guidelines on the Utilization of Medical Digital Data for AI Research and Development" successfully establish the input-side legal and governance preconditions. They provide practical procedures for creating "pseudonymized information" tailored to medical data and clarify the legal basis required at each stage of research and development.
This document fundamentally solves the upstream problem: defining the legal frameworks, processing formats, and collaborative structures under which input data can be handled safely and legally.
2. Japanese Generative AI Guidance Solves the Usage-Side Caution Problem
Conversely, the "Guidelines on the Use of Generative AI in the Medical and Healthcare Fields (2nd Edition)" comprehensively catalog operational risks relating to accuracy, privacy, transparency, and security.
They organize duties of care into concrete operational precautions, including organizational protocols for selecting AI tools, enforcing security measures, requiring physician review where appropriate, maintaining auditability, and notifying patients. This guidance effectively addresses the usage-side problem by explicitly specifying the operational duties required of individuals and organizations.
3. Neither Document Defines Output Gating Conditions
While the input-side legality and usage-side duties of care are well defined, reading the two documents together reveals a critical gap: neither operationalizes the execution boundary for outputs.
The go/no-go decision—the mechanical point that determines whether an individual AI output is permitted to advance, must be halted, or requires escalation for human review—is not explicitly specified. As long as this layer is missing, preventing dangerous or unverified outputs relies entirely on human adherence to operational precautions, leaving the system structurally vulnerable.
4. ADIC: Transforming Duties of Care into a Gating Layer
ADIC should not be presented as a general-purpose "AI safety wrapper." Its distinctive role is to convert duties of care into machine-enforceable advancement conditions: criteria are declared in advance, each candidate output is forced into PASS / BLOCK / REVIEW, and the reason for each branch is recorded in logs so that post-hoc reinterpretation becomes harder.
Through ADIC, the advancement gating decision is no longer a downstream operational afterthought but an integrated architectural requirement.
First, conditions are pre-defined before deployment or advancement. This eliminates the practice of formulating justifications after an output has been generated.
Second, every decision is branched into PASS / BLOCK / REVIEW under explicit criteria. This moves beyond mere alerts or warnings to enforce strict pathways.
Third, the reason for each branch is logged in a way that resists post-hoc drift. This ensures that accountability is structurally locked and cannot easily be shifted after an incident.
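The three properties above can be sketched as a minimal gate. This is an illustrative assumption of how such a layer might look, not a published ADIC implementation; the names `Rule`, `evaluate`, and the two example rules are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Decision(Enum):
    PASS = "PASS"
    BLOCK = "BLOCK"
    REVIEW = "REVIEW"

@dataclass(frozen=True)
class Rule:
    """A pre-declared advancement condition (property 1: defined up front)."""
    name: str
    check: Callable[[str], bool]   # True means the condition is met
    on_fail: Decision              # BLOCK or REVIEW when the condition is unmet

def evaluate(output: str, rules: List[Rule], log: List[dict]) -> Decision:
    """Force the output into PASS / BLOCK / REVIEW (property 2) and record
    the reason for the branch in an append-only log (property 3)."""
    for rule in rules:
        if not rule.check(output):
            log.append({"decision": rule.on_fail.value, "rule": rule.name})
            return rule.on_fail
    log.append({"decision": Decision.PASS.value, "rule": None})
    return Decision.PASS

# Hypothetical rules, declared before any output is seen.
rules = [
    Rule("no_prohibited_dosage_advice",
         lambda o: "dosage" not in o.lower(), Decision.BLOCK),
    Rule("clinician_verified",
         lambda o: o.startswith("[verified]"), Decision.REVIEW),
]

log: List[dict] = []
print(evaluate("Recommended dosage: 500 mg", rules, log))       # → Decision.BLOCK
print(evaluate("[verified] Follow-up in 2 weeks.", rules, log)) # → Decision.PASS
```

The key design choice is that an output failing no rule still produces a log entry: a PASS without an auditable reason would reopen the post-hoc reinterpretation the gate is meant to close.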
5. Operational Translation: From Governance Requirements to Go/No-Go Conditions
ADIC's gating layer functions as an operational translation table, mapping abstract governance requirements into strict gating criteria:
- Requirement: Accuracy must be confirmed. ADIC condition: unverified outputs cannot satisfy PASS. If unmet: the output remains in REVIEW and cannot move downstream.
- Requirement: Legally prohibited or clinically dangerous outputs must be prevented. ADIC condition: prohibited-risk patterns trigger BLOCK. If unmet: the output is halted pending authorized intervention.
- Requirement: Transparency and accountability. ADIC condition: the branch reason, unmet checks, and applied rules are logged. If unmet: downstream use proceeds without auditable justification.
- Requirement: Input-side legal and governance preconditions must be met. ADIC condition: legally compliant input handling is enforced as a foundational prerequisite. If unmet: cases with an undetermined input basis are disqualified from the gating pipeline.
Through this structure, human-dependent duties of care are translated directly into machine-enforceable gating conditions.
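The translation-table idea above can be made concrete as declarative data rather than procedural code. The field names and predicates below are illustrative assumptions, not part of any ADIC specification; the point is that each governance requirement becomes a row that the gate evaluates mechanically.

```python
# Hypothetical encoding of the Section 5 mappings. Each row pairs a
# governance requirement with a machine-checkable condition and the
# branch taken when the condition is unmet.
TRANSLATION_TABLE = [
    {"requirement": "Accuracy must be confirmed",
     "condition": lambda case: case.get("verified", False),
     "if_unmet": "REVIEW"},
    {"requirement": "Prohibited or clinically dangerous outputs prevented",
     "condition": lambda case: not case.get("prohibited_pattern", False),
     "if_unmet": "BLOCK"},
    {"requirement": "Input-side legal preconditions met",
     "condition": lambda case: case.get("input_basis_confirmed", False),
     "if_unmet": "BLOCK"},
]

def gate(case: dict) -> tuple:
    """Return (decision, reason). Logging the reason alongside the decision
    is what satisfies the transparency/accountability row."""
    for row in TRANSLATION_TABLE:
        if not row["condition"](case):
            return row["if_unmet"], row["requirement"]
    return "PASS", None

print(gate({"verified": False, "input_basis_confirmed": True}))
# → ('REVIEW', 'Accuracy must be confirmed')
```

Because the table is data, adding a new duty of care means adding a row, not rewriting the gate, which keeps the pre-declaration property intact as guidelines evolve.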
6. Conclusion
The next step in healthcare AI governance is not only to publish better precautions, but to encode advancement conditions that determine what may proceed, what must stop, and what must be escalated. ADIC is proposed as one such gating layer.
By shifting the paradigm from relying on human caution to implementing structural gating, the global healthcare sector can better bridge the gap between high-level AI governance frameworks and frontline clinical implementation.
▼ Reference Contexts
WHO: Ethics and governance of artificial intelligence for health
U.S. FDA: Clinical Decision Support Software Guidance
NIST: Artificial Intelligence Risk Management Framework (AI RMF)
EU: Artificial Intelligence Act
Japan HAIP: Guidelines on the Use of Generative AI in the Medical and Healthcare Fields
Japan MHLW: Guidelines on the Utilization of Medical Digital Data for AI Research and Development


