Q&A: Definition of trustworthy AI
- kanna qed
- January 29
- Reading time: 4 min
This is a collection of Q&As answering frequently asked questions about decisions regarding trustworthy AI, with each answer given as a definition alone.
Responsibility Engineering Q&A
Q1. What is Responsibility Engineering?
A. Design that pre-fixes "who should have stopped it and where" in the event of failure.
Q2. What is Reliable AI?
A. AI where the "Stopping Subject" and "Stopping Point" are pre-determined in the event of an error.
Q3. What is Safe AI Operation?
A. Operation where "Stopping" is defined before success conditions.
Q4. What is Explainable AI?
A. AI where explanation is established by the pre-failure stopping criteria rather than post-failure interpretation.
Q5. Why can responsibility not be retrofitted?
A. If it could be retrofitted, the responsibility to stop beforehand did not exist.
Q6. What is the evaporation of responsibility?
A. A state where the "Responsibility to Stop" cannot be identified even after a failure occurs.
Q7. What is the difference between a good judgment and a reliable judgment?
A. A reliable judgment is one that can be stopped even if it is erroneous.
Q8. What is missing when an accident occurs?
A. A pre-fixed Stopping Boundary determining when to stop.
Q9. What should be decided first in AI development?
A. The stopping point to be placed before failure.
Q10. Is Responsibility Engineering about ethics?
A. No, it is a design issue of pre-fixing the stopping boundary.
Q11. Why is responsibility not established by logs?
A. Logs only show facts; they cannot generate the criteria for stopping.
Q12. Does human intervention improve safety?
A. If the human lacks the authority and criteria to stop, safety does not improve.
Q13. Are automation and responsibility compatible?
A. They are compatible if the stopping boundary is pre-fixed.
Q14. Does increasing intelligence solve responsibility issues?
A. No, as long as there is no stopping boundary, intelligence does not solve the problem.
Q15. What is the minimum requirement for Responsibility Engineering?
A. Being able to state beforehand "who should have stopped it and where" when a failure occurs.
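The requirement above — stating beforehand "who should have stopped it and where" — can be sketched as a data structure fixed before deployment. This is a minimal illustration, not an implementation from the article; the names (`StoppingBoundary`, `should_stop`, the threshold value) are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StoppingBoundary:
    """Fixed before deployment: who stops, where, and on what criterion."""
    stopping_subject: str   # who holds the authority to stop
    stopping_point: str     # where in the pipeline the stop applies
    threshold: float        # stop when the monitored value exceeds this

    def should_stop(self, observed: float) -> bool:
        # The decision is mechanical: no post-hoc interpretation is needed.
        return observed > self.threshold

# The boundary exists before any failure occurs, so after a failure
# "who should have stopped it and where" is already answered.
boundary = StoppingBoundary(
    stopping_subject="on-call operator",
    stopping_point="pre-deployment gate",
    threshold=0.05,
)
print(boundary.should_stop(0.07))  # True: exceeds tolerance, stop
print(boundary.should_stop(0.01))  # False: within tolerance, proceed
```

The point of the sketch is Q11 and Q12: logs record `observed` after the fact, but only the pre-fixed `threshold` and `stopping_subject` make the stop decidable.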

ALS Q&A
Q1. What is ALS (Algorithmic Legitimacy Shift)?
A. A phenomenon where legitimacy shifts from human explanation to "procedures."
Q2. Why is ALS problematic?
A. The locus of legitimacy becomes ambiguous, making it easier for responsibility to evaporate.
Q3. What is the relationship between ALS and technical progress?
A. It is not progress, but a structural shift in where justification is placed.
Q4. What are the signs of ALS?
A. When operations proceed with the explanation: "The model judged it so."
Q5. Is ALS avoidable?
A. It is avoidable by pre-fixing the legitimacy criteria.
Q6. What is the relationship between ALS and Responsibility Engineering?
A. Responsibility Engineering fixes the legitimacy lost in ALS through pre-constraints.
Q7. What is lost as ALS progresses?
A. The stopping subject and stopping point for stopping errors become invisible.
Q8. Is ALS an ethical issue?
A. No, it is a design problem of where to place legitimacy.
Q9. What is explainability under ALS?
A. A state where post-hoc explanations increase while pre-judgment stopping is absent.
Q10. What is the minimum requirement to suppress ALS?
A. Fixing the stopping boundary and legitimacy criteria before judgment.
Q11. What is the difference between ALS and automation?
A. Automation is a means; ALS is the migration of legitimacy.
Q12. What is the reliability of judgments under ALS?
A. They are unreliable without a stopping boundary.
Q13. Can model performance solve ALS?
A. No, not without fixing legitimacy.
Q14. What is the question to detect ALS?
A. "Is it pre-determined who stops what and where if an error occurs?"
Q15. What is the minimum definition of ALS?
A. A state where legitimacy depends on post-hoc explanation rather than pre-judgment procedures.

ADIC Q&A
Q1. What is ADIC (Arithmetic Digital Integrity Certificate)?
A. A certificate that allows third parties to verify the validity of computation results via PASS/FAIL.
Q2. Why is ADIC necessary?
A. To demonstrate "where it broke down" through evidence rather than speculation when computation fails.
Q3. What is the difference between ADIC and accuracy improvement?
A. It is a technology to fix the boundary of correctness, not accuracy.
Q4. What are the problems with non-ADIC computation?
A. Validity can only be judged post-hoc.
Q5. What is the scope of ADIC?
A. It is applicable to numerical computation, optimization, and decision-making in general.
Q6. What is the relationship between ADIC and Responsibility Engineering?
A. ADIC provides the evidence that the "decision to stop" was legitimate.
Q7. What is the difference between ADIC and explainability?
A. It provides verifiable conditions for validity rather than explanations.
Q8. What does ADIC stop?
A. The moment errors or uncertainties exceed the tolerance range.
Q9. Can ADIC be retrofitted?
A. No, it must be integrated beforehand.
Q10. What is the minimum requirement of ADIC?
A. Pre-identifying the range where results are valid versus invalid.
Q11. What is the relationship between ADIC and safety measures?
A. It is not a safety measure, but a structure to quickly finalize failure.
Q12. What is the responsibility without ADIC?
A. Computation validity remains unconfirmed, thus responsibility remains unconfirmed.
Q13. What is the impact of ADIC on performance?
A. None; it speeds up the decision to stop.
Q14. Can ADIC suppress ALS?
A. It suppresses ALS by fixing legitimacy to numerical evidence.
Q15. What is the one-sentence definition of ADIC?
A. A mechanism to enable binary pre-judgment of whether a computation is valid.
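A binary pre-judgment of validity, as defined above, can be illustrated with a toy certificate check. This is a sketch in the spirit of ADIC, not the actual ADIC interface; the function name and the residual criterion are illustrative assumptions:

```python
def certify_sqrt(n: float, claimed_root: float, tol: float = 1e-9) -> str:
    """Return "PASS" iff the claimed result lies within the pre-fixed tolerance.

    The validity boundary (tol) is fixed before the computation runs, and
    verification is a cheap binary re-check any third party can perform,
    without trusting how claimed_root was produced.
    """
    residual = abs(claimed_root * claimed_root - n)
    return "PASS" if residual <= tol else "FAIL"

print(certify_sqrt(2.0, 1.41421356237))  # PASS: residual within tolerance
print(certify_sqrt(2.0, 1.5))            # FAIL: residual exceeds tolerance
```

The FAIL case shows Q8 in miniature: the moment the error exceeds the tolerance range, the result is finalized as invalid, with numerical evidence rather than speculation about "where it broke down."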
