OpenAI's Altman calls for calm on AI even as company resists regulatory oversight
The CEO frames AI fears as justified while OpenAI simultaneously lobbies against safety legislation and transparency requirements, creating tension between his public calls for democratic governance and the company's legislative efforts.
- Sam Altman published a post calling for de-escalation of AI rhetoric following violent incidents targeting him and an Indianapolis city councilor who supported a data center project.
- Altman has spent a decade framing AI development in existential terms, including statements about superintelligence as humanity's greatest existential threat, comparable to pandemics and nuclear war.
- OpenAI has actively lobbied against AI safety and transparency legislation, including California's SB 1047 and SB 53, and the EU's AI Act, while backing a narrower Illinois liability shield.
- Public concern about AI has risen faster than any other political issue according to recent surveys, with majorities believing AI is advancing too quickly and that superintelligence would be harmful.
- The tension between Altman's rhetoric warning of AI risks and OpenAI's legislative resistance to oversight fuels skepticism about the company's genuine commitment to democratic governance of the technology.
Sam Altman published a blog post on Friday calling for a de-escalation of rhetoric and an end to violence surrounding artificial intelligence, following a firebombing at his home and a shooting near OpenAI headquarters. The post acknowledged that public fear about AI is warranted given the scale of societal change the technology may bring, and stated his preference for democratic governance of AI systems rather than concentrated corporate control.
This call for calm sits in tension with Altman's decade-long pattern of framing AI development in existential terms. In 2015, he wrote that superhuman machine intelligence represents humanity's greatest threat to continued existence. In 2023, he signed a Center for AI Safety statement equating AI extinction risk to pandemics and nuclear war. During a 2024 podcast appearance, he compared AI development to the Manhattan Project, invoking the image of scientists questioning their own creations.
Public concern about AI has intensified measurably. Surveys show AI rising faster in voter priority than any other issue, with majorities believing that AI is advancing too quickly and that superintelligence would be predominantly harmful. These attitudes suggest that warnings from industry leaders have significantly shaped public perception.
While Altman advocates for democratic oversight, OpenAI's legislative record shows aggressive resistance to AI regulation. The company lobbied against California's SB 1047, which would have established safety standards for frontier AI models. It opposed SB 53, which would create transparency requirements, and sent a sheriff to deliver a subpoena to a nonprofit advocate. OpenAI also lobbied the European Union to weaken its AI Act. The company's support for Illinois legislation focuses narrowly on limiting its own liability rather than broad safety requirements.
The disjuncture between Altman's public rhetoric about AI risk and OpenAI's legislative efforts to resist oversight raises questions about the company's actual commitment to democratic governance. If warnings about existential risk are seen to come without corresponding accountability mechanisms, public faith in democratic processes as a path to addressing AI concerns may erode.
- Apr 28, 2026 · The Verge