
Integrated Research Report on Algorithmic Legitimacy Shift (ALS) — Observations on the Irreversible Regime of Legitimacy and Social Premises

1. Introduction: "Irreversible Regime" as the Consequence of Supply and Demand

This report (Volume 3) addresses the consequences of the supply–demand interactions examined in the preceding volumes. Specifically, it assesses whether the Algorithmic Legitimacy Shift (ALS), driven by the interplay of supply and demand, has reached the stage our model defines as the "Irreversible Regime": the point at which social premises, accountability, and justification structures are irreversibly altered.

To do so, it integrates empirical studies, official surveys, and practitioner research into a Working Assessment of whether the current state is consistent with the Irreversible Regime as defined in the ALS model.

1.1 Methods (Evidence Collection and Integration)

Scope. This report integrates empirical, policy, and practitioner evidence relevant to ALS and the working state-estimation of the Irreversible Regime (IR). The scope is limited to (i) generative / LLM-based search behaviors, (ii) social frictions around AI use, and (iii) organizational decoupling of authority and responsibility under algorithmic mediation.

Search window. 2024–2026 (last accessed: 2026-01-24).

Primary sources prioritized. Peer-reviewed venues (PNAS, PNAS Nexus, CHI/ACM), official policy (OECD), and primary survey publishers (Pew Research Center, Ipsos). Practitioner UX research (Nielsen Norman Group) is included only as mechanism illustration.

Inclusion criteria. A source is included if it reports (a) user behavior changes under AI summaries/LLM search, (b) reliance/overreliance or learning-depth effects, (c) social evaluation penalties / concealment behaviors, or (d) institutional expansion of algorithmic management affecting responsibility/authority.

Exclusion criteria. Pure opinion pieces without traceable underlying surveys, sources lacking minimally stated methods/sample, and secondary summaries when primary sources are accessible.

Integration procedure. Evidence is clustered into three domains (2.1–2.3). Each item is recorded as: (i) Fact statement (source-anchored), (ii) ALS mechanism interpretation (model-level), and (iii) mapped model variables with declared measurement type (ratio/ordinal/binary/qualitative indicator). The final state-estimation (Section 3) is explicitly a Working Assessment, not a universal sociological claim.

1.2 Operational Definition (Working): Irreversible Regime (IR)

In this report, IR is treated as a model-state in which the default social/epistemic pipeline becomes path-dependent under generative/LLM-mediated information access. We operationalize IR as the co-presence of three conditions:

  • (i) Anchoring / verification suppression ($\alpha$): verification behaviors (e.g., external link clicks) are structurally reduced when AI summaries are available, implying that AI outputs become default priors.

  • (ii) Reliance reinforcement under known error risk ($\rho$): LLM-based search improves task convenience but increases overreliance or reduces depth of learning, implying reliance becomes behaviorally self-reinforcing.

  • (iii) Organizational decoupling ($\gamma$): algorithmic mediation expands in workplaces such that authority/decision formation becomes algorithmically structured while responsibility remains socially assigned to humans.

IR in this document is a Working Assessment: evidence can support “IR-consistent” or “IR-near-threshold” states without claiming irreversible finality in an absolute sociological sense.

Measurement Types for Model Variables:

  • $\alpha_{anchor}$: ratio / proportion indicator (e.g., click-rate differences)

  • $\rho_{reliance}$: empirical/behavioral indicator (ordinal/ratio depending on study metrics)

  • $\beta_{penalty}$: empirical/behavioral indicator (ordinal/ratio)

  • $H_{hide}$: proportion indicator (concealment rates)

  • $\gamma_{hollow}$: policy/organizational indicator (ordinal qualitative → mapped score)
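As a minimal sketch, these declarations can be encoded as a typed registry so that downstream estimation code can reject operations invalid for a variable's measurement type. All identifiers below (`MeasurementType`, `ModelVariable`, `ALS_VARIABLES`) are illustrative names, not part of a published GhostDrift API:

```python
from dataclasses import dataclass
from enum import Enum

class MeasurementType(Enum):
    RATIO = "ratio"              # rates and rate differences (e.g., CTR gaps)
    ORDINAL = "ordinal"          # ranked scores without fixed intervals
    PROPORTION = "proportion"    # shares bounded in [0, 1]
    QUALITATIVE = "qualitative"  # coded indicators later mapped to scores

@dataclass(frozen=True)
class ModelVariable:
    symbol: str
    mtype: MeasurementType
    description: str

# Registry mirroring the list above; rho may also be ratio-typed,
# depending on the metrics of the underlying study.
ALS_VARIABLES = {
    "alpha_anchor": ModelVariable("alpha_anchor", MeasurementType.RATIO,
                                  "verification suppression via click-rate differences"),
    "rho_reliance": ModelVariable("rho_reliance", MeasurementType.ORDINAL,
                                  "reliance reinforcement under known error risk"),
    "beta_penalty": ModelVariable("beta_penalty", MeasurementType.ORDINAL,
                                  "social evaluation penalty for AI use"),
    "H_hide": ModelVariable("H_hide", MeasurementType.PROPORTION,
                            "concealment rate of AI use"),
    "gamma_hollow": ModelVariable("gamma_hollow", MeasurementType.QUALITATIVE,
                                  "organizational decoupling, mapped to an ordinal score"),
}
```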



2. Domain Analysis and Verification of Consistency with the Irreversible Regime

Evidence Tier (GMI Standard)

  • T1: Peer-reviewed empirical (experiments / observational with methods; journals or flagship conferences)

  • T2: Peer-reviewed non-empirical (editorial, extended abstract, position/theory; limited empirical weight)

  • T3: Official policy / intergovernmental report (OECD etc.)

  • T4: Primary survey / polling publisher with disclosed sampling or methodology (Pew, Ipsos)

  • T5: Practitioner / UX research (methods may be proprietary; used for mechanism illustration)

  • T6: Journalism / commentary (used only to reference otherwise inaccessible polling; never used as a sole basis for “Fact” claims)

Rule: “Fact statements” must be supported by T1–T4. T5–T6 are restricted to mechanism illustration or context and must be labeled as such.
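This rule lends itself to a mechanical check during drafting. A minimal sketch, assuming each claim is tagged with the tiers of its supporting sources (the names `Tier`, `FACT_ELIGIBLE`, and `is_valid_fact_claim` are ours):

```python
from enum import IntEnum

class Tier(IntEnum):
    T1 = 1  # peer-reviewed empirical
    T2 = 2  # peer-reviewed non-empirical
    T3 = 3  # official policy / intergovernmental report
    T4 = 4  # primary survey / polling publisher
    T5 = 5  # practitioner / UX research
    T6 = 6  # journalism / commentary

FACT_ELIGIBLE = {Tier.T1, Tier.T2, Tier.T3, Tier.T4}

def is_valid_fact_claim(supporting_tiers: set[Tier]) -> bool:
    """A 'Fact statement' needs at least one T1-T4 source;
    T5/T6 may accompany it but can never stand alone."""
    return bool(supporting_tiers & FACT_ELIGIBLE)

assert is_valid_fact_claim({Tier.T4, Tier.T6})      # survey + commentary: OK
assert not is_valid_fact_claim({Tier.T5, Tier.T6})  # mechanism illustration only
```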

2.1 Epistemic Hysteresis

— Verification Costs and the Fixation of Truth in Search/LLM Behaviors —

| No. | Study / Phenomenon | Evidence Tier | Study Design / Sample | Primary Outcome / Metric | ALS Interpretation (Mechanism) | Model Variable | Limitations |
|----|----|----|----|----|----|----|----|
| 1 | Pew Research Center (2025): AI summaries & click behavior | T4 | National survey (US adults) | CTR: 8% with AI summary vs. 15% without | [Legitimacy Anchoring] External verification behavior (clicking) is structurally suppressed when AI summaries are presented; the initial condition for fixing AI output as the default prior. | $\alpha_{anchor}$ (ratio) | Causal inference limited by observational nature |
| 2 | Nielsen Norman Group (2025): shifts in search behavior | T5 | Qualitative UX study | Behavioral patterns (attention allocation) | [Process of Premise-Formation (illustration)] Illustrates the specific mechanism by which gatekeeping authority shifts from site operators to algorithms. | $T_{attention}$ (ordinal) | Non-random sampling; qualitative only |
| 3 | Melumad & Yun (PNAS Nexus, 2025): LLM vs. web search | T1 | 7 experiments (n ≈ 10,462) | Depth of learning / understanding scores | [Damping of Knowledge-Acquisition Costs] Supports the path dependency in which convenience makes shallow learning processes the default. | $D_{depth}$ (ratio) | Task-specific context (advice generation) |
| 4 | Spatharioti et al. (CHI 2025): impact on decision making | T1 | Controlled experiments | Speed, accuracy, reliance rates | [Institutionalization of Reliance] LLM search increases speed but induces overreliance on errors: not merely improved accuracy, but increased risk in delegating judgment. | $\rho_{reliance}$ (ratio) | Lab setting may differ from real-world usage |

Fact (Evidence). Pew Research Center (2025) reports that when an AI summary appears in Google search results, users are less likely to click through to external links than when no summary is shown (reported proportions: 8% vs. 15%). It also notes that clicks on links inside the AI summary itself are rare (~1% in the report's measurement).

Model-based interpretation (ALS / IR Working Assessment). The observed reduction in verification behaviors supports condition (i) Anchoring/Verification Suppression. $\alpha_{anchor}$ indicates a structural shift where AI outputs become default priors, consistent with an IR-near-threshold state.
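For model intake, the reported proportions can be collapsed into a single suppression indicator. The normalization below (relative CTR reduction) is one plausible choice, not one prescribed by Pew or by this report's model specification:

```python
def alpha_anchor(ctr_with_summary: float, ctr_without: float) -> float:
    """Relative reduction in click-through when an AI summary is present:
    0 = no suppression, 1 = total suppression of verification clicks."""
    return 1.0 - ctr_with_summary / ctr_without

# Pew (2025) reported proportions: 8% with AI summary vs. 15% without.
print(round(alpha_anchor(0.08, 0.15), 3))  # 0.467 -> clicks drop by roughly half
```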

2.2 Social Friction and Counteractions to Legitimacy Shift

— Normative Barriers in Human-AI Interaction —

| No. | Study / Phenomenon | Evidence Tier | Study Design / Sample | Primary Outcome / Metric | ALS Interpretation (Mechanism) | Model Variable | Limitations |
|----|----|----|----|----|----|----|----|
| 5 | Reif et al. (PNAS, 2025): social evaluation penalty | T1 | Experiments (n = 4,400) | Perceived competence / warmth scores | [Duality of Legitimacy Shift] Functional legitimacy shifts to AI, while humans incur a social penalty (friction) for relinquishing that legitimacy. | $\beta_{penalty}$ (ordinal) | Short-term evaluation focus |
| 6 | Ipsos (2025) / reporting: workplace concealment | T4 | Polling (UK workers) | Concealment rate (29%); anxiety rate (26%) | [Reinforcement of Premise via Concealment] Concealing AI use (black-box adoption) allows algorithmic dominance to proceed invisibly. | $H_{hide}$ (proportion) | Self-reported data |
| 7 | Sarkar (CHI EA 2025): AI slurs | T2 | Discourse analysis | Existence of classist slurs | [Defense of Legitimacy Boundaries] Discourse functioning as a final line of resistance (boundary maintenance) against the shift of legitimacy to AI. | $B_{boundary}$ (qualitative) | Theoretical / interpretive |
| 8 | Acut & Gamusa (2025): AI shaming in education | T2 | Qualitative / reflection | Perception of academic integrity | [Destabilization of Professional Authority] Institutional rejection response to the source of "correctness" shifting to AI in education. | $A_{authority}$ (qualitative) | Context-specific (teacher education) |

Fact (Evidence). Reif et al. (2025) demonstrate a social penalty for AI use. Ipsos polling (2025) indicates that 29% of surveyed workers conceal AI use from colleagues, suggesting a decoupling between actual practice and stated norms.

Model-based interpretation (ALS / IR Working Assessment). Concealment behavior supports a “black-box adoption” pathway: outputs circulate as if human-authored while the decision substrate shifts algorithmically, consistent with IR condition (iii) Decoupling and the transition into a latent legitimacy-transfer phase.
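For completeness, a minimal sketch of how these indicators enter the model as inputs; the dictionary name is ours, and $\beta_{penalty}$ is kept qualitative because Reif et al. establish the penalty's existence and direction, not a calibrated magnitude:

```python
# Direct proportion indicators from Ipsos (2025); beta_penalty stays a
# qualitative flag pending an ordinal coding scheme, which this report
# does not fix.
section_2_2_inputs = {
    "H_hide": 0.29,        # workers concealing AI use from colleagues
    "anxiety_rate": 0.26,  # workers reporting anxiety around AI use
    "beta_penalty": "present (ordinal; direction: negative evaluation)",
}
```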

2.3 Evaporation of Responsibility and the Hollow Agent (Decoupling)

— Algorithmic Management and Labor/Organizations —

| No. | Study / Phenomenon | Evidence Tier | Study Design / Sample | Primary Outcome / Metric | ALS Interpretation (Mechanism) | Model Variable | Limitations |
|----|----|----|----|----|----|----|----|
| 9 | OECD (2025): workplace algorithmic management | T3 | Employer survey (6 countries, 6,000+ firms) | Adoption rates / management practices | [Formalization of the Decision Subject] Algorithmic mediation expands in decision processes; substantive legitimacy of judgment shifts to AI. | $\gamma_{hollow}$ (ordinal) | Possible employer-reporting bias |
| 10 | Bowdler et al. (SJWEH, 2026): psychosocial risks | T2 | Editorial / literature review | Risk-pathway conceptualization | [Chain Reaction: Knowledge → Organization → Body] Organizes the risk structure in which AI-set premises feed back on humans as physical and mental load. | $S_{stress}$ (qualitative) | Non-empirical synthesis |

Fact (Evidence). The Bowdler et al. (2026) editorial synthesizes emerging occupational safety and health concerns regarding algorithmic management and psychosocial risks; it proposes a risk pathway rather than reporting new primary experimental data. OECD (2025) confirms wide adoption of algorithmic management tools among surveyed employers.

Model-based interpretation (ALS / IR Working Assessment). Institutional adoption without clear responsibility frameworks supports condition (iii) Organizational Decoupling. The shift of authority to algorithms while responsibility remains with humans suggests the system is entering an IR-consistent state.
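Section 1.2 types $\gamma_{hollow}$ as "ordinal qualitative → mapped score". One illustrative coding, with labels and cut-points that are our assumption rather than anything in the OECD survey:

```python
# Ordinal mapping for gamma_hollow; labels and scores are illustrative.
GAMMA_SCALE = {
    "no_adoption": 0,               # no algorithmic management reported
    "isolated_tools": 1,            # scattered tools; humans retain decisions
    "routine_mediation": 2,         # algorithms structure routine decisions
    "decoupled_responsibility": 3,  # algorithmic authority, human-held responsibility
}
```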


3. Conclusion: State Estimation for Implementation in GhostDrift Models

State Labels (Working)

  • S0: Pre-IR (insufficient evidence for any IR condition)

  • S1: Near-threshold (strong evidence for one condition + suggestive evidence for another)

  • S2: IR-consistent (co-presence of evidence supporting (i)(ii)(iii) within the scope of this report)

Note: These labels are internal to GMI’s ALS model and are not presented as universal sociological classifications.
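Together with the co-presence definition in Section 1.2, these labels reduce to a small decision rule. A minimal sketch, assuming a three-valued evidence grading that is our simplification:

```python
def estimate_state(alpha: str, rho: str, gamma: str) -> str:
    """Each argument grades the evidence for one IR condition (Section 1.2)
    as 'strong', 'suggestive', or 'none'.
    S2: supporting evidence co-present for all three conditions.
    S1: strong evidence for one condition plus at least suggestive for another.
    S0: otherwise."""
    levels = (alpha, rho, gamma)
    if all(v != "none" for v in levels):
        return "S2"
    if "strong" in levels and sum(v != "none" for v in levels) >= 2:
        return "S1"
    return "S0"

print(estimate_state("strong", "strong", "suggestive"))  # -> "S2"
print(estimate_state("strong", "suggestive", "none"))    # -> "S1"
```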

Working Assessment

Based on the integrated evidence across (i) anchoring/verification suppression, (ii) reliance reinforcement under known error risk, and (iii) organizational decoupling under algorithmic mediation, the current environment is best labeled as S1–S2 (near-threshold to IR-consistent) within the scope of this report.

Model parameterization in GhostDrift’s ALS implementation should therefore assume non-trivial $\alpha$, persistent $\beta$ with concealment pathways, and institutionalized $\gamma$ as a baseline.
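A baseline configuration consistent with this guidance might look as follows; names and numeric values are illustrative placeholders for calibration, with only the $\alpha$ entry directly anchored to a source in this report:

```python
# Working baseline for a GhostDrift ALS run (names and values illustrative).
ALS_BASELINE = {
    "alpha_anchor": 0.47,        # from the Pew CTR reduction (Section 2.1)
    "rho_reliance": "elevated",  # CHI 2025 overreliance findings, uncalibrated
    "beta_penalty": "persistent, with concealment pathway (H_hide = 0.29)",
    "gamma_hollow": 2,           # 'routine_mediation' on the illustrative scale
    "state_label": "S1-S2",      # working assessment above
}
```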


References

  • Pew Research Center. (2025, July 22). Google users are less likely to click on links when an AI summary appears in the results. Short Reads. (accessed 2026-01-24).

  • Moran, K., Rosala, M., & Brown, J. (2025, August 15). How AI Is Changing Search Behaviors. Nielsen Norman Group. (accessed 2026-01-24).

  • Melumad, S., & Yun, J. H. (2025). Experimental evidence of the effects of large language models versus web search on depth of learning. PNAS Nexus, 4(10), pgaf316. https://doi.org/10.1093/pnasnexus/pgaf316

  • Spatharioti, S. E., Rothschild, D., Goldstein, D. G., & Hofman, J. M. (2025). Effects of LLM-based Search on Decision Making: Speed, Accuracy, and Overreliance. CHI ’25. https://doi.org/10.1145/3706598.3714082

  • Reif, J. A., Larrick, R. P., & Soll, J. B. (2025). Evidence of a social evaluation penalty for using AI. Proceedings of the National Academy of Sciences, 122(19), e2426766122. https://doi.org/10.1073/pnas.2426766122

  • Ipsos. (2025, September 15). Nearly one in five Britons turn to AI for personal advice and support. Ipsos. (accessed 2026-01-24).

  • OECD. (2025). Algorithmic management in the workplace: New evidence from an OECD employer survey. OECD AI Papers, No. 24. Paris: OECD Publishing. https://doi.org/10.1787/287c13c4-en

  • Bowdler, M., et al. (2026; online 2025). Algorithmic management and psychosocial risks at work: An emerging occupational safety and health challenge. Scandinavian Journal of Work, Environment & Health, 52(1), 1–5. (Editorial). https://doi.org/10.5271/sjweh.4270

  • Sarkar, A. (2025). AI Could Have Written This: Birth of a Classist Slur in Knowledge Work. CHI EA ’25. https://doi.org/10.1145/3706599.3716239

  • Acut, D. P., & Gamusa, E. V. (2025). AI Shaming Among Teacher Education Students: A Reflection on Acceptance and Identity in the Age of Generative Tools. In Pitfalls of AI Integration in Education (IGI Global). https://doi.org/10.4018/979-8-3373-0122-8.ch005
