Transformative AI and the Increase in Returns to Experimentation: Policy Implications

Transformative AI will generate a genius supply shock: abundant, cheap, and fast agents that can outperform human beings across many domains. But society is likely to adapt too slowly to this remarkable but unfamiliar new capability. In this essay, the authors explore two policies—regulatory sandboxes and regulatory holidays—that can help companies, regulators, and individuals learn how to put these powerful new tools to effective use.

I. Genius Supply Shock

In 2024, a team of researchers set out to build a single benchmark to test the full breadth and depth of human knowledge. The result was “Humanity’s Last Exam”—a 2,500-question gauntlet spanning university-level mathematics, physics, chemistry, engineering, biology, law, philosophy, economics, and literature. Unlike traditional AI evaluations, which test for narrow capabilities in isolated tasks, this benchmark simulates the challenge of a PhD qualifying exam merged with a generalist’s oral defense. Questions are long-form and open-ended. To score well, an AI must not only know, but understand. Success requires what we typically associate with our highest-functioning minds: flexible reasoning, conceptual abstraction, and the capacity to transfer knowledge across domains. Until recently, no machine had come close to passing.

In July 2025, Grok 4—an AI system developed by xAI—scored approximately 44.4% on this benchmark, placing it within striking distance of top-performing human postgraduate scientists.1 Around the same time, models from OpenAI and Google DeepMind achieved gold-medal-level performance at the International Mathematical Olympiad.2 Simultaneously, Meta CEO Mark Zuckerberg assembled Superintelligence Labs and pursued elite researchers with nine-figure compensation packages, an expensive bet on superintelligence motivated in part by signs that AI systems are beginning to self-improve.3 Given the pace of development, these capabilities will be long outdated by the time this essay is published. But even now, a new source of genius-level capabilities is coming into view: not millions of ordinary workers, but genius-level AI tools, available on demand.

These advances, realized in 2025, were anticipated by, among other experts, Dario Amodei, cofounder and CEO of AI foundation model company Anthropic, who described forthcoming systems as providing a “country of geniuses in a datacenter,” where a genius is “smarter than a Nobel Prize winner across most relevant fields—biology, programming, math, engineering, writing, etc.”4 These systems do not simply automate routine tasks, he speculated; they synthesize knowledge across domains, propose and critique solutions, and do so at a digital scale. Unlike human experts, they are cheap, abundant, and tireless.

Consider three stylized examples:

  1. Cross-disciplinary problem-solving: A team at a biotech start-up feeds a genome-wide association dataset and the entire corpus of medicinal chemistry into an AI agent. Within hours, the agent proposes three chemically novel, synthesizable drug leads for a rare disease—tasks that previously required years of PhD-level effort in biology, chemistry, and informatics, and thus incurred costs that could outweigh the benefits given the small market associated with the rare disease.

  2. Real-time regulatory drafting: A trade ministry preparing for next week’s negotiations asks the agent to analyze 4,000 pages of World Trade Organization precedent, identify loopholes relevant to semiconductor export controls, and draft treaty language that closes them while remaining compliant with other international commitments. It returns a legally coherent first draft within minutes.

  3. Complex system design and verification: A civil-engineering firm uploads lidar scans of a bridge, sensor data, regional climate projections, and evolving safety codes. The agent produces an executable retrofit plan, complete with bill of materials, schedule, and risk assessment, matching the output of a multidisciplinary engineering consultancy.

In each example, AI agents compress entire value chains of cognitive labor, illustrating why we treat the potential of transformative AI as a supply shock. When such tools become widely available at low cost, the bottleneck in value creation shifts from solving problems to deciding which problems to solve and testing whether proposed solutions actually work.

 

II. Demand adjusts slowly

The genius supply shock does not automatically translate into economic impact. In the short run, organizations face fixed contracts, workflows, capital stocks, and, crucially, validation pipelines. When low-cost geniuses arrive before the capacity to test their proposals, much of the new problem-solving capacity sits idle. Our companion paper, “Genius on Demand,”5 models this dynamic as a one-time supply shock that is followed by a protracted adjustment period. Firms must build complementary organizational capital—data infrastructure, evaluation frameworks, liability regimes—before they can deploy genius-level agents at scale. Importantly, because geniuses were previously so scarce, firms must work out precisely what these new geniuses can do for them. Their application becomes heavily “imagination-limited.”

History provides a useful analogue. Zvi Griliches documented the 1930s spread of hybrid corn, a breakthrough that increased yields dramatically but diffused slowly across regions.6 Early adopters in Iowa saw large productivity gains, yet farmers elsewhere hesitated because varieties needed local adaptation and farmers wanted evidence that the new seeds would work under their specific conditions. Adoption required experiments, demonstration plots, and peer-to-peer learning. The same will hold for genius-level AI. Solutions proposed by an AI may be persuasive on paper but fail in production because of data quality, governance, or human factors. Organizations will need to run experiments—pilots, audits, and A/B tests—and they will need mechanisms to share lessons about what works and what does not.
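
Griliches summarized this pattern with a logistic diffusion curve, in which adoption starts slowly, accelerates as local evidence accumulates, and levels off near a ceiling. A minimal sketch of that curve appears below; the parameter values are illustrative assumptions, not estimates from his data.

```python
import math

# Logistic adoption curve P(t) = K / (1 + exp(-(a + b*t))), following
# Griliches (1957): K is the adoption ceiling, b the rate of acceptance,
# and a the origin. All parameter values here are illustrative assumptions.
def adoption_share(t, K=0.95, a=-4.0, b=0.8):
    return K / (1 + math.exp(-(a + b * t)))

for year in range(0, 13, 2):
    print(f"year {year:2d}: {adoption_share(year):5.1%} of acreage on hybrid seed")
```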

In addition to these firm-level adoption criteria, consumer-facing applications may face further barriers rooted in consumer preferences. Beyond price and safety, users may value process attributes, such as human contact, explainability, contestability, privacy guarantees, local data control, and the ability to opt out. Even when genius AIs are measurably more accurate, adoption may stall if the way a decision is made violates these process preferences.

Without targeted policies, the private sector may underinvest in experiments. Companies worry that investing in trials will benefit competitors who free-ride on their results. Regulatory uncertainty further discourages experimentation: Deploying AI systems in sensitive domains (health care, finance, HR, transportation) often requires navigating unclear or evolving legal obligations. As a result, the returns to experimentation will be high, but the quantity of experimentation may be low.

Beyond validation via experiments, several structural barriers slow diffusion: fragmented and low-quality data; legacy IT and interoperability failures; scarce complementary capital (sensors, robotics, secure cloud); cybersecurity and privacy compliance burdens; unclear liability and insurance cover; procurement and reimbursement rules that favor status-quo processes; talent shortages in AI orchestration and change management; and uncertainty over IP and data rights. These frictions are mutually reinforcing. Without credible, local evidence on safety, ROI, and workflow fit, CFOs will not fund complements, regulators will not clarify rules, payers will not reimburse, and vendors will not converge on standards. Well-identified pilots generate the operational metrics and causal evidence that unlock complementary capital, de-risk rulemaking, and coordinate ecosystems, turning many slow, systemic bottlenecks into solvable engineering and governance tasks.

 

III. What happens when there is an abundance of genius AI?

We develop a formal model in our companion paper “Genius on Demand” that clarifies how the sudden arrival of abundant, near-zero marginal-cost, high-level cognition transforms labor markets and production.7 In the short run, supply overwhelms demand. Firms have not yet reorganized tasks or invested in complementary infrastructure, and therefore the set of problem-solving tasks that genius AIs can perform is limited to the narrow subset of problems already considered frontier problems. As a result, the additional intelligence largely displaces human geniuses to the frontier while leaving routine knowledge workers temporarily untouched.

In the long run, however, if AI agents have an absolute advantage over human workers across a broad swath of knowledge work, our model predicts that many tasks previously classified as “routine” will be reclassified as “genius” tasks. The stock of frontier problems expands as the cost of solving them falls; tasks that were once considered too complex or unprofitable become tractable. The division of labor shifts toward more oversight, supervision, and creative exploration. This dynamic echoes Schumpeterian creative destruction: The arrival of a superior factor of production expands the production possibility frontier but also triggers a reallocation of resources and skills that can be painful for individuals and regions.

Two principal mechanisms drive these predictions. First, task substitution. Genius AI agents can perform many steps in cognitive workflows more accurately and quickly than humans. In the short run, this substitution is limited to problem-solving tasks where codified data exist and where institutional constraints allow machine outputs to be acted upon. In the long run, substitution expands as firms invest in data, digital infrastructure, and workflow redesign. Second, task reclassification. As the marginal cost of solving complex problems falls, firms will find it profitable to apply genius AI to domains formerly left unaddressed because they were too difficult or too small in scale. This reclassification effect mirrors the expanded set of research problems undertaken by the pharmaceutical start-up in our earlier example, and it is analogous to the shift from open-pollinated seed to hybrid seed in agriculture. Hybrid corn created new high-yield varieties that were previously nonexistent; genius AI will create new categories of research, design, and analysis problems.
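
To make the reclassification mechanism concrete, consider a stylized calculation (our own illustration, not the companion paper’s formal model): each task has a value and a human cost of solving it, and a task is undertaken only if its value exceeds the cheapest available way of solving it. As the cost of genius cognition falls, previously unprofitable tasks cross the threshold and are reclassified as worth solving.

```python
# Stylized reclassification: all numbers are illustrative assumptions.
# A task is undertaken if its value exceeds the cheaper of the human cost
# and the (falling) cost of genius AI cognition.
tasks = [  # (task value, human cost to solve)
    (10, 4),   # routine: profitable even with human labor
    (12, 20),  # frontier: unprofitable for humans
    (6, 9),    # small-scale: unprofitable for humans
    (3, 15),   # remains unprofitable even with cheap AI
]

def undertaken(tasks, ai_cost):
    return [(v, c) for v, c in tasks if v > min(c, ai_cost)]

for ai_cost in (50.0, 5.0):  # before vs. after the genius supply shock
    solved = undertaken(tasks, ai_cost)
    print(f"AI cost {ai_cost:5.1f}: {len(solved)} of {len(tasks)} tasks undertaken")
```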

The model’s predictions also have distributional implications. Countries or firms with deep pools of human geniuses and strong complementary capital are likely to deploy AI geniuses more effectively in the short run, possibly capturing first-mover advantages. The reclassification of problems toward genius tasks means that those with existing frontier knowledge may find their comparative advantage accentuated. Conversely, regions lacking digital infrastructure or high-skill talent may experience displacement of routine workers without commensurate gains in productivity. The interplay between local capabilities and the substitution–reclassification dynamic thus shapes how the gains from the genius supply shock are distributed.

 

IV. Potential social inefficiencies

Even if the private response to the genius supply shock follows our model’s predictions, the outcome may be socially inefficient. We outline three reasons.

Inadequate information about problem classification. When the cost of genius cognition falls, firms must decide which tasks are best left to routine workers and which should be reclassified as genius tasks that benefit from expert problem-solving ability. This requires experimentation and learning about relative productivity. Each firm’s experimentation has spillovers: When one organization discovers that a legal drafting task can be enhanced with a genius agent, its competitors can learn from that experience. Yet firms have limited incentives to share results or insights. This is a classic information externality reminiscent of agricultural adoption. Griliches’ hybrid-corn study documents how farmers waited to observe neighbors’ yields before planting expensive new seed.8 Information about what counts as a genius task will diffuse more slowly than socially desirable if firms treat these insights as proprietary.

Precautionary restrictions on experimentation. As AI systems grow more capable, concerns about safety, misinformation, labor displacement, and alignment may intensify. In March 2023, an open letter signed by prominent AI researchers and entrepreneurs called for a six-month moratorium on training systems more powerful than GPT-4, arguing that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”9 Some jurisdictions might respond by imposing precautionary restrictions on experimentation, preventing firms from testing AI in real-world settings. From a social perspective, such restrictions may prevent the discovery of beneficial uses and slow the accumulation of safety data. They may also push experimentation into closed corporate or national labs, limiting transparency and the diffusion of lessons learned.

Social inefficiencies may arise not only from under-adoption driven by precautionary restrictions on experimentation, but also from over-adoption driven by, for example, a lack of restrictions on unsafe use. In other words, regulations may err in either direction: too stringent, stifling innovation and diffusion; or too lax, failing to internalize negative externalities like privacy breaches, algorithmic biases, job displacements, or systemic risks that impose uncompensated costs on third parties.

Institutional inertia and administrative bottlenecks. Existing regulatory bodies may not have the expertise or resources to assess complex AI systems quickly. Approving new applications can become a bottleneck, delaying deployment even when the benefits are clear. In highly regulated sectors like health care or finance, administrative backlog can reduce effective demand for genius cognition. Without streamlined processes or dedicated regulatory sandboxes, risk-averse administrators may default to delaying or denying applications.

 

V. Rationale for policy intervention

Given these social inefficiencies, there is a rationale for proactive policy to enable beneficial experimentation while protecting against harm. Two principles guide intervention.

First, encourage experimentation while preserving optionality. In a world of radical technological uncertainty, policymakers should facilitate controlled experimentation. Experiments reveal which tasks can be reclassified as genius tasks, the costs and benefits of substitution, and the design of complementary investments. However, because outcomes are uncertain and may involve societal harm, experiments should be structured to preserve policy optionality: Regulators must be able to tighten or relax rules ex post based on evidence.10 At the same time, the initial scope for experimentation should be limited due to uncertainty: No one, including regulators, firms, or society at large, yet knows the optimal balance between encouraging experimentation and mitigating harms. By starting small in low-stakes environments and gathering evidence through experiments, policymakers are able to learn about both under- and over-adoption externalities, allowing for more informed adjustments over time.

Second, promote diffusion and reduce adoption lags. Many of the benefits of genius AI will materialize only when complementary capital and social trust catch up. Policies that lower information barriers, build shared knowledge infrastructure, and disseminate best practices can accelerate this diffusion, increasing the likelihood that the gains from genius AI are widely shared rather than captured by early movers alone. In addition, consumer preferences evolve endogenously through experience: Carefully designed experiments can update beliefs and raise acceptance, while early missteps may generate salient negative signals that slow diffusion economy-wide. Well-structured experimental regimes are therefore informational goods as well as regulatory tools. For example, by running stratified, consent-based pilots with randomized variations in disclosure, human-in-the-loop review, and recourse mechanisms, regulators and firms can jointly learn the willingness-to-accept for automation across populations, identify heterogeneity in process utility, and calibrate guardrails that enable scale without eroding trust.
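
A minimal sketch of how such a pilot might be analyzed appears below; the arm sizes, acceptance probabilities, and effect size are invented for illustration, not drawn from any actual trial.

```python
import math
import random
import statistics

# Hypothetical consent-based pilot: participants are randomized between the
# status-quo human process (control) and an AI-handled process with
# disclosure, human-in-the-loop review, and recourse (treatment).
random.seed(0)

def simulate_arm(n, accept_prob):
    """1 = participant accepts the process, 0 = opts out (simulated data)."""
    return [1 if random.random() < accept_prob else 0 for _ in range(n)]

control = simulate_arm(500, 0.70)    # assumed baseline acceptance rate
treatment = simulate_arm(500, 0.76)  # assumed rate with stronger guardrails

p_c, p_t = statistics.mean(control), statistics.mean(treatment)
diff = p_t - p_c
se = math.sqrt(p_c * (1 - p_c) / len(control) + p_t * (1 - p_t) / len(treatment))
print(f"acceptance: control {p_c:.1%}, treatment {p_t:.1%}")
print(f"difference: {diff:+.1%} (95% CI {diff - 1.96*se:+.1%} to {diff + 1.96*se:+.1%})")
```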

These principles motivate the design of policy instruments that allow experimentation and diffusion while preserving the option to intervene if harms emerge. In the remainder of this essay, we discuss two broad approaches: regulatory sandboxes and regulatory holidays. Regulatory sandboxes are time-limited, firm-specific exemptions granted under close supervisory oversight. Entry is gated, scope is narrow, and participants commit to enhanced monitoring, disclosure, and guardrails; in exchange, they receive customized relief to run tightly scoped trials in live settings. In contrast, regulatory holidays are broader, category-based, temporary exemptions that apply to many deployers who meet the baseline eligibility criteria. Holidays reduce transaction costs by avoiding case-by-case approvals and enable parallel experimentation at scale, but provide less bespoke supervision than sandboxes. In short, sandboxes trade coverage for control; holidays trade control for coverage.

 

VI. Regulatory sandboxes

Concept and evidence

Regulatory sandboxes originated in financial services as controlled environments where firms could test innovative products or services under the supervision of regulators. In a sandbox, a regulator grants a time-limited waiver or mitigation of specific obligations to participating firms, monitors their activities closely, and uses the resulting data to decide whether existing rules need revision. The concept has since been applied to AI. In 2015, the United Kingdom’s Financial Conduct Authority launched the first fintech sandbox.11 Since then, the European Union, the United Arab Emirates, and several US states have announced AI sandboxes. For example, Article 57 of the EU Artificial Intelligence Act requires each member state to establish an AI sandbox or participate in a multistate sandbox; Spain became the first country to set up such a sandbox in 2022, and other member states must do so by August 2026.12 Utah’s 2024 Artificial Intelligence Policy Act created the first AI sandbox in the United States,13 and other states, including Connecticut, Oklahoma, and Texas, have proposed similar legislation.

Evidence from early implementations suggests that sandboxes can facilitate learning. The International Association of Privacy Professionals notes that sandboxes provide a controlled environment for companies to test AI innovations while regulators observe whether temporary waivers should become permanent.14 The Organisation for Economic Co-operation and Development recommended that governments use experimentation to provide a controlled environment in which AI systems can be tested and scaled.15 The UAE’s Regulations Lab, established in 2019, offers temporary licenses to test innovative technologies, including AI, and uses lessons from these trials to inform legislation.16 Singapore has used sandboxes to gather insights into generative AI for marketing and customer engagement.17 Sandboxes may accelerate innovation by reducing regulatory uncertainty, permitting firms to collaborate with regulators, and generating public evidence about the impact of AI systems.

Design principles

To maximize the benefits of AI sandboxes, policymakers should consider several design principles.

  1. Broad eligibility: Opening sandboxes to a wide range of firms—large incumbents, start-ups, public agencies, and nonprofits—may enable regulatory learning to better reflect diverse use cases rather than a narrow set of commercially viable applications.

  2. Clear entry and exit criteria: To the extent that regulators specify what qualifies a project for sandbox admission and what triggers graduation or expulsion, the increased clarity may reduce uncertainty and lower the chance that sandboxes become indefinite carve-outs.

  3. Mandatory disclosure: To the extent that participation is conditional on companies sharing performance metrics, safety incidents, and demographic impact assessments with regulators and, where appropriate, the public, society benefits from increased information.

  4. Proportional oversight: To the extent that regulators tailor monitoring intensity to the risk profile of the application (e.g., low-risk uses may require lighter-touch oversight, while high-risk experiments might need close supervision and human-in-the-loop safeguards), a better calibrated risk–reward regulatory regime may better serve the public good.

  5. Sunset clauses: Fixed end dates increase the likelihood that sandboxes do not become alternative regulatory regimes; results from sandbox experiments should feed directly into the evolution of the broader regulatory framework.

Benefits and risks

Sandboxes address several regulatory and market failures. By providing regulatory relief and clarity, they lower the private cost of experimentation, encouraging firms to test some AI systems they might otherwise avoid. By requiring data sharing and regulator involvement, they may reduce information externalities: Lessons from one company’s trial may inform rules that apply to the whole sector. Because sandboxes can include small- and medium-sized enterprises, they may help democratize innovation and prevent large incumbents from monopolizing learning. However, sandboxes also pose risks. They can privilege well-resourced firms that have the legal and technical capacity to engage with regulators, potentially excluding resource-poor actors. There is a danger that sandboxes become de facto safe harbors that allow products to escape full oversight.

In addition, requirements to share experimental results with competitors may stifle experimentation. This challenge mirrors the tensions in patent policy, where innovators receive protections or privileges in exchange for disclosure; yet striking the right balance between compensating participants and mandating information sharing is fraught with difficulties.18 One approach to this tension could be to require that participants share model outputs and performance against specified metrics (e.g., medical diagnostic accuracy, customer satisfaction scores, fraction of automated task completions), including comprehensive documentation of all failure modes and their frequencies for each metric, without being required to disclose model weights, training data, or orchestration approach.
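
As an illustration, a disclosure record under such a regime might look like the following sketch; the schema and figures are hypothetical rather than drawn from any existing sandbox.

```python
from dataclasses import dataclass, field

# Hypothetical sandbox disclosure record: performance on agreed metrics and
# documented failure modes are shared with the regulator, while model weights,
# training data, and orchestration details remain private.
@dataclass
class FailureMode:
    description: str
    frequency_per_1000_tasks: float

@dataclass
class SandboxDisclosure:
    metric_name: str                 # e.g., "medical diagnostic accuracy"
    value: float                     # observed performance in the trial
    human_baseline: float            # comparator agreed with the regulator
    safety_incidents: int
    failure_modes: list[FailureMode] = field(default_factory=list)

report = SandboxDisclosure(
    metric_name="fraction of automated task completions",
    value=0.87,
    human_baseline=0.82,
    safety_incidents=2,
    failure_modes=[FailureMode("hallucinated citation in output", 3.1)],
)
print(report)
```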

If firms are required to fully disclose all elements of their experiments, they may be inadequately compensated for the risks and costs incurred; this could deter their investment in trials, as competitors could free-ride on the shared knowledge. On the other hand, allowing participants to retain excessive benefits from their exemptions risks creating unfair advantages or favoritism ex post, undermining equity and public trust. This trade-off may render regulatory holidays preferable in scenarios where broad participation is desired: They sidestep the need for selective compensation by extending exemptions evenly across deployers, while still leveraging transparency requirements to ensure societal learning without the same disclosure–compensation dilemmas. We turn to regulatory holidays next.

 

VII. Regulatory holidays

Origins and definition

A regulatory holiday differs from a sandbox in two respects: It applies broadly rather than only to selected participants, and it relaxes obligations ex ante rather than providing bespoke waivers.19 Under a regulatory holiday, an investor or deployer of a new technology is exempted from certain regulations for a pre-specified period. The concept has roots in network industries: For example, the EU’s Second Gas Market Directive permits a “regulatory holiday” for investors building new pipeline capacity.20 Experimental evidence comparing price cap regulation with a regulatory holiday found that a holiday can increase the expected return on investment but may also create incentives to underinvest relative to the social optimum.21 The simplicity of a holiday makes it attractive—authorities can commit to it and it carries low enforcement costs—but its broad scope means that mistakes can affect many users.
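
To see why a holiday can raise expected returns, consider a stylized net-present-value comparison (our own illustration with invented numbers, not the design of the cited experiment): under a price cap the investor earns capped profits in every period, while under a holiday it earns unregulated profits for a fixed window before the cap binds. The underinvestment result depends on strategic timing and capacity choices and is not captured by this simple calculation.

```python
# Stylized NPV comparison with illustrative, assumed parameters:
# a price cap limits per-period profit to pi_cap in every period, while a
# regulatory holiday allows unregulated profit pi_free for T periods
# before the cap applies.
def npv(profits, r=0.05):
    """Discount a stream of per-period profits at rate r."""
    return sum(p / (1 + r) ** t for t, p in enumerate(profits, start=1))

horizon, T = 30, 10          # project life and holiday length (assumed)
pi_cap, pi_free = 8.0, 14.0  # capped vs. unregulated per-period profit (assumed)
cost = 120.0                 # up-front investment cost (assumed)

npv_cap = npv([pi_cap] * horizon) - cost
npv_holiday = npv([pi_free] * T + [pi_cap] * (horizon - T)) - cost

print(f"NPV under price cap:          {npv_cap:7.2f}")
print(f"NPV under {T}-period holiday: {npv_holiday:7.2f}")
```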

In the AI context, a regulatory holiday would entail a temporary, time-bounded suspension of certain rules for an entire class of low-risk applications. For example, a health ministry might permit hospitals to deploy AI assistants for administrative coding and scheduling tasks without prior certification, provided they report errors and near misses. A labor ministry might allow firms to use AI tools for drafting nonbinding contracts under a holiday that requires disclosure of outcomes and an after-action audit. By comparison with a sandbox, participation would not require special applications or bespoke oversight; any firm operating in the specified domain could take advantage of the holiday.

Policy proposals and controversy

Because holidays are broad, they are politically contentious. In 2025, a provision in a US budget bill proposed to bar states from regulating AI for up to ten years.22 Proponents argued that a moratorium would prevent a patchwork of state laws and allow companies to focus on innovation. Opponents countered that giving AI developers a decade-long “regulatory holiday” would leave consumers unprotected and enable dominant firms to operate without accountability. The Senate ultimately removed the provision by a 99–1 vote, signaling that Congress would not grant the AI sector such a sweeping exemption. This episode highlights both the appeal and the danger of broad holidays: They can encourage investment but may also undermine public trust and preempt needed safeguards.

Applying holidays responsibly

For AI regulatory holidays to be effective, they typically must be narrowly scoped and designed with guardrails.

  1. Limit the scope: Under a wide range of conditions, holidays may be most appropriate for low-risk applications, such as administrative tasks or research collaboration, and potentially less appropriate for high-risk uses such as clinical diagnosis, credit scoring, or criminal justice.

  2. Require data transparency: Under certain conditions, firms benefiting from the holiday might be required to disclose usage statistics, demographic impacts, and safety incidents to regulators and, where appropriate, to the public.

  3. Establish independent oversight: Under certain conditions, the social benefits may outweigh the costs of requiring an oversight board to monitor outcomes, investigate complaints, and recommend whether the holiday should be extended, modified, or terminated.

  4. Include sunset clauses: Under certain conditions, society may benefit from requiring that the holiday expire automatically after a fixed period; policymakers can then use the accumulated evidence to decide whether to normalize, restrict, or ban the activities in question.

  5. Coordinate internationally: Because AI markets are global, society may benefit from countries that implement holidays sharing data and coordinating standards to prevent harmful regulatory arbitrage.

That said, under other conditions, the cost of requiring companies to share information with their competitors may so significantly dampen private sector incentives to invest in experimentation that it outweighs the benefit. For example, if US foundation model companies like OpenAI and Anthropic were forced to share their model weights with competitors, they might be so severely hampered in raising the vast amounts of capital they need to experiment and scale their models that they could not compete with well-capitalized incumbents like Google, Microsoft, and their Chinese counterparts.

Following these precautionary design principles may prove challenging due to jurisdictional competition that could lead to overly lax regulation, akin to the dynamics observed in tax competition.23 As jurisdictions compete to attract firms and stimulate economic activity, they may lower regulatory standards or offer more generous exemptions to gain a competitive edge, creating a “race to the bottom” in which the collective interest in managing AI-related risks is undermined. Because firms can relocate to jurisdictions with the least stringent oversight, each jurisdiction faces a sharpened incentive to deviate from coordinated standards, potentially resulting in insufficient safeguards against negative externalities like privacy violations or systemic failures. This may complicate efforts to ensure a balanced approach to AI experimentation.

 

VIII. Conclusion

Transformative AI promises a once-in-a-generation supply shock in cognitive capability. Yet as history teaches, technologies rarely reshape economies overnight. Genius-level AI will create value only when organizations have the capacity and incentives to test and adopt its outputs. Without intervention, private actors will underinvest in experimentation, and regulators will learn too slowly. Regulatory sandboxes and regulatory holidays offer complementary tools to accelerate this learning. Sandboxes allow regulators and firms to run controlled experiments with bespoke waivers, generating high-quality evidence about novel AI applications. Holidays offer a broader, time-limited suspension of specific rules, enabling rapid diffusion while still collecting data and preserving the option for ex post intervention. Both instruments require careful design to balance accelerating productivity gains with safety.

By embracing experimentation and incorporating feedback into evolving rules, policymakers can shorten the lag between the arrival of genius-level AI and its economic impact. The goal is not to pick winners or give the AI sector a free pass, but to create a regulatory environment in which experimentation is rewarded, learning is shared, and the productivity benefits of abundant cognition accrue to society as quickly as possible without excessively compromising safety.

1. Deni Ellis Béchard, “New Grok 4 Takes on ‘Humanity’s Last Exam’ as the AI Race Heats Up,” Scientific American, July 11, 2025, https://www.scientificamerican.com/article/elon-musks-new-grok-4-takes-on-humanitys-last-exam-as-the-ai-race-heats-up/.

2. Kenrick Cai and Jaspreet Singh, “Google Clinches Milestone Gold at Global Math Competition While OpenAI Also Claims Win,” Reuters, July 22, 2025, https://www.reuters.com/world/asia-pacific/google-clinches-milestone-gold-global-math-competition-while-openai-also-claims-2025-07-22/.

3. Johana Bhuiyan, “Zuckerberg Claims ‘Superintelligence Is Now in Sight’ as Meta Lavishes Billions on AI,” The Guardian, July 30, 2025, https://www.theguardian.com/technology/2025/jul/30/zuckerberg-superintelligence-meta-ai.

4. Dario Amodei, “Machines of Loving Grace: How AI Could Transform the World for Better,” Dario Amodei (blog), October 2024, https://www.darioamodei.com/essay/machines-of-loving-grace.

5. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb, “Genius on Demand: The Value of Transformative Artificial Intelligence,” National Bureau of Economic Research, working paper, 2025, https://www.nber.org/books-and-chapters/economics-transformative-ai/genius-demand-value-transformative-artificial-intelligence.

6. Zvi Griliches, “Hybrid Corn: An Exploration in the Economics of Technological Change,” Econometrica 25, no. 4 (October 1957): 501–522, https://www.jstor.org/stable/1905380.

7. Agrawal, Gans, and Goldfarb, “Genius on Demand: The Value of Transformative Artificial Intelligence.”

8. Griliches, “Hybrid Corn: An Exploration in the Economics of Technological Change.”

9. Jyoti Narayan et al., “Elon Musk and Others Urge AI Pause, Citing ‘Risks to Society,’” Reuters, April 5, 2023, https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/.

10. Joshua S. Gans, “How Learning About Harms Impacts the Optimal Rate of Artificial Intelligence Adoption,” Economic Policy 40, no. 121 (January 2025): 199–219, https://doi.org/10.1093/epolic/eiae053.

11. Financial Conduct Authority, “Regulatory Sandbox,” November 2015, https://www.fca.org.uk/publication/research/regulatory-sandbox.pdf.

12. Richard Sentinella, “How Different Jurisdictions Approach AI Regulatory Sandboxes,” IAPP, May 14, 2025, https://iapp.org/news/a/how-different-jurisdictions-approach-ai-regulatory-sandboxes.

13. Stuart D. Levi et al., “Utah Becomes First State to Enact AI-Centric Consumer Protection Law,” Skadden Insights, April 5, 2024, https://www.skadden.com/insights/publications/2024/04/utah-becomes-first-state.

14. Sentinella, “How Different Jurisdictions Approach AI Regulatory Sandboxes.”

15. Organisation for Economic Co-operation and Development, Regulatory Sandboxes in Artificial Intelligence, OECD Digital Economy Papers No. 356 (OECD Publishing, July 2023), https://doi.org/10.1787/8f80a0e6-en.

16. United Arab Emirates, “Regulatory Sandboxes in the UAE,” Digital UAE, updated October 14, 2024, https://u.ae/en/about-the-uae/digital-uae/regulatory-framework/regulatory-sandboxes-in-the-uae.

17. Digital Policy Alert, “Singapore: Launched Generative AI Evaluation Sandbox for SMEs (Launched February 7, 2024),” accessed September 29, 2025, https://digitalpolicyalert.org/event/17560-implemented-generative-ai-evaluation-sandbox-by-ai-verify-foundation-and-imda.

18. Kenneth Arrow, “Economic Welfare and the Allocation of Resources for Invention,” in Richard Nelson, ed., The Rate and Direction of Inventive Activity: Economic and Social Factors (Princeton University Press, 1962); Suzanne Scotchmer, “Standing on the Shoulders of Giants: Cumulative Research and the Patent Law,” Journal of Economic Perspectives 5, no. 1 (Winter 1991): 29–41, https://doi.org/10.1257/jep.5.1.29.

19. Joshua S. Gans and Stephen P. King, “Access Holidays and the Timing of Infrastructure Investment,” Economic Record 80, no. 248 (March 2004): 89–100, https://doi.org/10.1111/j.1475-4932.2004.00127.x.

20. Bert Willems and Gijsbert Zwart, “Regulatory Holidays and Optimal Network Expansion,” TILEC Discussion Paper no. 2016-008, CentER Discussion Paper no. 2016-015 (April 2016), http://dx.doi.org/10.2139/ssrn.2770531.

21. Bastian Henze, Charles Noussair, and Bert Willems, “Regulation of Network Infrastructure Investments: An Experimental Evaluation,” Journal of Regulatory Economics 42, no. 1 (February 2012): 1–38, https://doi.org/10.1007/s11149-012-9185-4.

22. Rebecca Bellan, “US Senate Removes Controversial ‘AI Moratorium’ from Budget Bill,” TechCrunch, July 1, 2025, https://techcrunch.com/2025/07/01/us-senate-removes-controversial-ai-moratorium-from-budget-bill.

23. John D. Wilson, “A Theory of Interregional Tax Competition,” Journal of Urban Economics 19, no. 3 (May 1986): 296–315, https://doi.org/10.1016/0094-1190(86)90045-8; Michael P. Devereux, Ben Lockwood, and Michaela Redoano, “Do Countries Compete over Corporate Tax Rates?,” Journal of Public Economics 92, no. 5–6 (June 2008): 1210–1235, https://doi.org/10.1016/j.jpubeco.2007.09.005.
