Policy · Apr 20, 2026

Researchers argue AI policy should focus on deployment outcomes, not capability speed

Narayanan and Kapoor present a framework treating AI as normal technology, rejecting superintelligence predictions in favor of resilience-based policymaking that emphasizes leverage points in deployment rather than capability development.

Trust score: 54 · Hype: Low

1 source · cross-referenced

TL;DR
  • Narayanan and Kapoor argue the causal chain between AI capability increases and societal impact contains multiple leverage points for shaping outcomes, suggesting policy focus should shift from capability speed to deployment stage management.
  • The authors emphasize that benefits and risks materialize during deployment, not development, and that external constraints on AI systems remain critical for limiting their power regardless of self-improvement scenarios.
  • The 'normal technology' framework rejects technological determinism and superintelligence worldviews, instead advocating for adaptable policy approaches like resilience that can respond to unpredictable societal effects of AI.
  • Narayanan and Kapoor note that technical capability development is more predictable than social impacts—citing examples where predicted election manipulation risks did not materialize while unexpected AI companion effects emerged.
  • The authors clarify their framework applies broadly to all AI technologies, not specific systems like large language models, and frames the thesis as an alternative to exceptionalism in current AI discourse.

Arvind Narayanan and Sayash Kapoor, researchers known for critiquing AI hype, have outlined a policy framework that treats artificial intelligence as analogous to other transformative technologies rather than an exceptional threat requiring preemptive prevention. Their core argument centers on what they call a 'causal chain' between capability development and societal impact, with multiple intervention points before risks or benefits actually materialize.

The authors contend that actual harms and benefits from AI emerge during deployment and use, not during the model development phase. This distinction carries direct implications for governance: if societal outcomes depend largely on how systems are deployed rather than on their raw capability level, then policy energy should concentrate on the deployment stage rather than on attempting to constrain capability advancement itself. The framework identifies numerous leverage points, among them individual choices, organizational practices, institutional design, and policymaker responses, that shape whether AI generates benefits or harms.

A central claim is that limits on AI systems' power should be engineered into the environment around them rather than entrusted to the systems themselves. The authors suggest that relying on self-imposed constraints, or hoping advanced AI will police its own behavior, is insufficient. External governance mechanisms, including regulation, industry standards, user education, and institutional safeguards, remain essential regardless of whether AI systems eventually develop self-improvement capabilities.

Narayanan and Kapoor distinguish their framework from both the 'superintelligence' worldview, which assumes rapid capability acceleration and existential risk, and from naive optimism. They note that powerful technologies historically produce emergent, unpredictable societal effects—automobiles reshaped urban planning; social media altered information flows—that arise from complex interactions between technology and human behavior rather than from the technology's logic alone. AI has already demonstrated this pattern: AI companion systems and model sycophancy effects surprised observers, while widely predicted election manipulation harms have not yet materialized.

The practical policy implication is what the authors term 'resilience': institutional capacity to detect and adapt to unforeseeable impacts rather than attempting comprehensive prediction. Because technical capability development is more predictable than social outcomes, the authors argue that policymakers must design institutions nimble enough to respond to impacts as they emerge, rather than trying to prevent every conceivable harm in advance on the basis of speculative scenarios.

Sources
  1. AI Snake Oil — Narayanan & Kapoor: A guide to understanding AI as normal technology

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.