Sam Altman says AI superintelligence is so big that we need a ‘New Deal’—critics say OpenAI’s policy ideas are a cover for ‘regulatory nihilism’
In a 13-page policy paper titled “Industrial Policy for the Intelligence Age,” OpenAI CEO Sam Altman argues that the advent of AI superintelligence will require a comprehensive rethinking of societal structures, including the tax system and the length of the workweek. His call for a ‘New Deal’ comes as AI technology evolves rapidly, raising concerns about its implications for the economy and society at large.
The Context of OpenAI’s Proposal
Released on April 6, 2026, the paper aims to open a dialogue on the economic impact of superintelligence, the hypothetical point at which AI systems surpass human intelligence. It floats a range of policy ideas intended to address these anticipated changes, emphasizing a people-first approach.
However, the timing of the release coincided with a detailed investigation by The New Yorker, which scrutinized OpenAI’s practices and raised questions about Altman’s credibility regarding AI safety and ethics. This backdrop has led to skepticism about the motives behind OpenAI’s proposals.
Key Proposals in the Policy Paper
The policy paper outlines several significant proposals, including:
- Establishment of public wealth funds
- Implementation of shorter workweeks
- Strategies for broadening wealth distribution
- Measures to mitigate risks associated with AI
- Efforts to democratize access to AI technologies
OpenAI describes these ideas not as definitive solutions but as a starting point for further discussion, inviting feedback and refinement from various stakeholders.
Mixed Reactions from Experts
The reception of OpenAI’s proposals has been mixed among experts in the field. Lucia Velasco, a senior economist and AI policy leader, acknowledges that OpenAI’s call for a structural economic shift is valid. She notes that many governments are still approaching AI as a technological issue rather than a broader economic challenge.
“OpenAI is the most interested party in how this conversation turns out,” Velasco stated, highlighting the need for a diverse range of voices in the discussion. In her view, the document is a useful contribution, but the conversation should not be left for OpenAI’s influence to dominate.
Concerns About Originality and Implementation
Critics, including Soribel Feliz, an independent AI policy advisor, argue that many of the ideas presented in the paper are not new. Feliz points out that similar frameworks have been discussed in various AI governance conversations since the launch of ChatGPT in November 2022.
“The language around public-private partnerships, AI literacy, and worker voice reads like it came out of a UNESCO or OECD AI policy framework report,” Feliz remarked. She emphasizes the gap between identifying solutions and creating actionable mechanisms to implement them.
Target Audience and Political Context
The primary audience for OpenAI’s proposals appears to be policymakers in Washington, D.C., rather than the millions of ChatGPT users. Some experts believe that the paper represents an improvement over previous, less concrete efforts by OpenAI.
Nathan Calvin, vice president of state affairs at Encode AI, expressed optimism about the document, noting that it offers more concrete suggestions on auditing and incident reporting. He also cautioned, however, that OpenAI’s lobbying record may undermine its credibility when it advocates for regulatory change.
Criticism of Regulatory Nihilism
Other critics are harsher. Anton Leicht of the Carnegie Endowment argues that the ideas presented are overly ambitious and unlikely to translate into real-world action. The document’s vagueness and its timing, Leicht suggests, could be read as an attempt to provide cover for what he terms “regulatory nihilism.”
He advocates for redirecting the AI industry’s political funding and lobbying efforts toward meaningful policy implementation rather than merely presenting ideas that lack a clear path to execution.
Conclusion
As AI technology continues to evolve at an unprecedented pace, the discussions surrounding its implications for society and the economy are becoming increasingly urgent. OpenAI’s policy paper serves as a catalyst for these conversations, but the effectiveness of its proposals will depend on the willingness of various stakeholders to engage constructively and translate ideas into actionable policies.

