Researchers argue AI policy should focus on deployment outcomes, not capability speed
Narayanan and Kapoor present a framework treating AI as normal technology, rejecting superintelligence predictions in favor of resilience-based policymaking that emphasizes leverage points in deployment rather than capability development.
1 source · cross-referenced
- Narayanan and Kapoor argue the causal chain between AI capability increases and societal impact contains multiple leverage points for shaping outcomes, suggesting policy focus should shift from capability speed to deployment stage management.
- The authors emphasize that benefits and risks materialize during deployment, not development, and that external constraints on AI systems remain critical for limiting their power regardless of self-improvement scenarios.
- The 'normal technology' framework rejects technological determinism and superintelligence worldviews, instead advocating for adaptable policy approaches like resilience that can respond to unpredictable societal effects of AI.
- Narayanan and Kapoor note that technical capability development is more predictable than social impacts—citing examples where predicted election manipulation risks did not materialize while unexpected AI companion effects emerged.
- The authors clarify that their framework applies broadly to AI as a class of technologies, not just to specific systems like large language models, and frame the thesis as an alternative to the exceptionalism that dominates current AI discourse.
Arvind Narayanan and Sayash Kapoor, researchers known for critiquing AI hype, have outlined a policy framework that treats artificial intelligence as analogous to other transformative technologies rather than an exceptional threat requiring preemptive prevention. Their core argument centers on what they call a 'causal chain' between capability development and societal impact, with multiple intervention points before risks or benefits actually materialize.
The authors contend that actual harms and benefits from AI emerge during deployment and use, not during the model development phase. This distinction carries direct implications for governance: if societal outcomes depend largely on how systems are deployed rather than their raw capability level, then policy energy should concentrate on the deployment stage rather than attempting to constrain capability advancement itself. The framework identifies numerous leverage points—individual choices, organizational practices, institutional design, and policymaker responses—that shape whether AI generates benefits or harms.
A central claim is that limits on AI systems' power should be engineered externally rather than entrusted to the systems themselves. The authors argue that relying on self-imposed constraints, or hoping that advanced AI systems will police their own behavior, is insufficient. External governance mechanisms—including regulation, industry standards, user education, and institutional safeguards—remain essential regardless of whether AI systems eventually develop self-improvement capabilities.
Narayanan and Kapoor distinguish their framework from both the 'superintelligence' worldview, which assumes rapid capability acceleration and existential risk, and from naive optimism. They note that powerful technologies historically produce emergent, unpredictable societal effects—automobiles reshaped urban planning; social media altered information flows—that arise from complex interactions between technology and human behavior rather than from the technology's logic alone. AI has already demonstrated this pattern: AI companion systems and model sycophancy effects surprised observers, while widely predicted election manipulation harms have not yet materialized.
The practical policy implication is what the authors term 'resilience'—institutional capacity to detect and adapt to unforeseeable impacts rather than attempting comprehensive prediction. Because technical capability development is more predictable than social outcomes, the authors argue, policymakers should design institutions nimble enough to respond to impacts as they emerge rather than try to preemptively prevent every possible harm based on speculative scenarios.
Apr 28, 2026 · The Verge