The San Francisco Consensus

 

Those following current debates in Silicon Valley would be forgiven for thinking that we in the AI community don’t know what we’re talking about. AI experts are divided on a host of issues. Perhaps most famously, diverging assessments of the existential risks posed by AI have separated “doomers” and “accelerationists.” But leading thinkers also disagree on the relative merits of open and closed models, the benefits of regulation, and the national security implications for deterrence, to highlight just a few of many unresolved questions.

Yet beneath the apparent discord lies a deeper consensus around a number of key ideas. Most of those leading the development of AI agree on at least three central premises: First, they believe in the power of so-called scaling laws, arguing that ever larger models can continue to drive rapid progress in AI. Second, they think the timeline for this revolution is much shorter than previously expected: Many AI experts now see us reaching superintelligence within two to five years. Finally, they are betting that transformative AI (TAI), or systems that can outperform humans on many tasks, will bring unprecedented benefits to humanity. This belief is expressed in hockey-stick graphs promising exponential rates of scientific advancement, financial returns, and ultimately human progress.

I call this set of overlapping views the San Francisco Consensus. A consensus is a set of beliefs held by a large majority of experts—in this case technologists, scientists, and entrepreneurs. From the postwar Keynesian consensus to the neoliberal Washington Consensus, periods of epistemic agreement have always been a function of both observable truths and underlying ideological commitments. The fact that such a consensus is widely held does not, of course, make it true. In fact, history should give those of us in Silicon Valley ample cause for humility. Throughout the 2000s, many of us were overly optimistic about the social impacts of the internet—a vision undercut by disinformation, weaponization, and mental health crises.


Perhaps that is why, while the San Francisco Consensus is broadly accepted in Silicon Valley, it is by no means universal. Some leading thinkers believe that current LLMs offer no plausible path to TAI, let alone its even more ambitious cousin, artificial general intelligence (AGI). Additionally, it remains to be seen whether AI can overcome the stagnation Peter Thiel famously described with reference to the internet.1 To paraphrase Thiel: will the AI revolution get us flying cars or a 140-character recipe in the style of Emily Dickinson?

The San Francisco Consensus also does not necessarily reflect a global consensus among the AI community. In Europe, many tend to be more skeptical about both the social impacts and the pace of progress. And as I have written elsewhere, China has been much less preoccupied with AGI and more focused on deploying AI for practical applications across a variety of sectors.2

If, however, the San Francisco Consensus holds true—in other words, if AI turns out to be truly transformative—then the next few years will be crucial. Many AI experts believe that AGI will be achieved through recursive self-improvement, or the ability of an AI system to autonomously enhance its own capabilities, leading to a rapid explosion of intelligence. From the perspective of economics and national security, recursive self-improvement has seismic implications: The company and country that first reach AGI might be able to lock in an enduring advantage, preventing others from catching up. Not surprisingly, then, many who adhere to the San Francisco Consensus believe that the next five years will be the most important of the next 1,000.

 

The Architecture of Intelligence

What is the San Francisco Consensus’s vision of technological progress? In simplest terms, its adherents imagine AI proceeding along three axes: a revolution in language, one in agentic capability, and one in reasoning. All three are moving on different timelines and with varying likelihoods of success.

The language revolution has already occurred: Computers can now make sense of, replicate, and effectively interact with human language. The agentic revolution is currently underway, turning AI from a tool into an actor. In the future, entire workflows will be managed by interconnected AI systems. Take real estate, for example: Buyers will instruct their AI agents to scan all available properties and negotiate with sellers, who will in turn deploy agents across their workflows. Still, agents will be functional rather than creative. They will maximize the speed and efficiency of knowledge exchange without creating fundamentally new knowledge.

The third and final revolution is the reasoning revolution. This is by far the most consequential and also the most speculative one. A central tenet of the San Francisco Consensus is the belief that scaling up existing architectures—by training larger models on more data with more compute—will reliably yield better performance and achieve a revolution in reasoning capabilities. On this view, the reasoning revolution that unleashes AGI would emerge as a property of scale.

 

Running Out of Everything

But the vision of the San Francisco Consensus is not guaranteed. Like all technologies, AI faces constraints. The foremost material constraint revolves around hardware and energy inputs. Over the past decades, semiconductor advances have reduced the size and cost of chips, and new generations of AI accelerators—Blackwell, Rubin, and others—are now pushing the frontier of performance. Energy generation, however, has proven harder to scale. To power the AI revolution, the United States could, by one estimate, require the equivalent of 92 additional nuclear power plants. Existing political and logistical constraints may thus demand rapid innovations that expand the production of existing technologies (such as small modular reactors) or the development of new ones (like nuclear fusion).

A second constraint is data. Today’s large language models have already absorbed most of the public internet. Future progress may therefore hinge on synthetic data or multi-agent systems. And to reach something closer to AGI, AI systems may even need to learn as humans do—by engaging with the world. Just as children acquire tacit knowledge by moving through their environment, AI models will require new ways of integrating real-world knowledge through computer vision, multimodal training, and embodied interaction.

The third challenge relates to algorithms themselves, or the underlying architecture on which models are trained. In recent years, we have seen steady algorithmic progress from GPT-1 through GPT-5, each generation enabled by smarter training methods. The San Francisco Consensus continues to hold that the path to AGI lies in refining LLMs—extending their memory, reducing hallucinations, and improving reliability. Others are more skeptical. Yann LeCun, one of the “godfathers of AI,” has long argued that LLMs are fundamentally incapable of creative invention. Leading researchers like Fei-Fei Li are now experimenting with completely new approaches.3

 

The Great Acceleration

As has become clear, the path to AGI is far from a straight one. Why, then, do many in Silicon Valley remain so intent on bringing about something like superhuman intelligence? It is because of the final and most central belief of the San Francisco Consensus: the conviction that TAI will unleash unprecedented human progress across all domains of life.

First, discoveries fueled by AI could significantly accelerate scientific progress and, as a result, lengthen human lifespans. Already, some companies are feeding all globally known pathogens into AI systems that are tasked with coming up with cures. AI tools hold the potential for substantial advances in the eradication of age-old diseases and the discovery of new drugs.

Second, AI could substantially increase people’s quality of life. Along with health care, everyday experiences will improve as AI adapts to each person. Most importantly, AI could dramatically democratize knowledge both within and between countries, enabling more people to participate in creative and intellectual life. One can only imagine the effects on economic mobility if someone in a remote village had constant access to a brilliant polymath in their back pocket.

Third, and relatedly, the San Francisco Consensus anticipates massive economic returns from advances in AI. This point cannot be overstated: Even seemingly modest productivity gains could have profound consequences. According to the Congressional Budget Office, federal debt could be kept under control if yearly growth rates rose by a meager 0.5 percentage points.4 If AI made every worker twice as productive, worries about an impending fiscal and demographic catastrophe would look very different. And what if continuously accelerating superintelligence led to 20% or 30% growth per year? We are not prepared to imagine the social and economic impacts of such rapid growth.

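To make those magnitudes concrete, here is a minimal sketch of the compound-growth arithmetic behind these hypotheticals. The roughly 2% baseline rate is an illustrative assumption of mine, not a figure from the CBO report cited above:

```python
# Illustrative compound-growth arithmetic for the growth rates discussed above.
# The ~2% and ~2.5% baseline rates are assumed for context, not taken from the CBO.

def decade_multiple(annual_rate: float, years: int = 10) -> float:
    """How many times larger an economy becomes after `years` of compound growth."""
    return (1 + annual_rate) ** years

for rate in (0.02, 0.025, 0.20, 0.30):
    print(f"{rate:>5.1%} annual growth -> {decade_multiple(rate):6.2f}x output in 10 years")
```

At 30% a year, output multiplies roughly fourteenfold in a decade, versus about 22% cumulative growth at a 2% baseline, which is why such scenarios are so hard to imagine.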

It must be said that many of these improvements—in science, quality of life, and economic progress—don’t necessarily require AGI. But it is also worth pondering how different the world would look under the explosion in productivity expected by the San Francisco Consensus.

 

Cracks in the Consensus

Even such massive gains, however, would not come without trade-offs. These are what I call the “wicked problems” of the San Francisco Consensus. Elsewhere, I have dealt with the consequences of superintelligence for democracy, and the ways AI’s recursive self-improvement could undermine existing regimes of geopolitical deterrence and nonproliferation.5 Others have written at great length about the problem of AI alignment, which remains unsolved.

The final set of wicked problems concerns the social and economic implications of this technology. This is the focus of the following compendium. As I mentioned above, one crucial question centers on the impact of AI on productivity and economic growth. As economists have noted, the trajectory of these economic effects may follow a J-curve rather than a simple model of exponential growth.6 In other words, it could take years before investments in AI pay off.

The other major question relates to the distributive effects of this growth. David Autor, for example, has argued that AI could restore the middle class as an economic engine of America. Joseph Stiglitz, by contrast, has warned that AI could polarize the labor market in ways that exacerbate inequality. Some, like Erik Brynjolfsson, see AI as a driver of productivity that could unleash a new wave of broadly shared prosperity, while others, like Daron Acemoglu, caution that the resulting changes could further concentrate wealth and power in the hands of a few.7 These are just a few of many voices that have enriched this conversation beyond the narrow, and often stultifyingly technical, debates in the Valley.

This then brings me to my final point: In a way, what distinguishes the San Francisco Consensus from those that came before is that it is largely the product of technologists and entrepreneurs rather than bureaucratic and financial elites. For scholars like the ones in this volume, the San Francisco Consensus should therefore be cause for both serious consideration and rigorous critique. Silicon Valley alone cannot answer the profound economic, social, and—quite frankly—existential questions that will come of this transformation.


1. Peter Thiel, “The End of the Future,” National Review, October 3, 2011, https://www.nationalreview.com/2011/10/end-future-peter-thiel/.

2. Eric Schmidt and Selina Xu, “Silicon Valley Is Drifting Out of Touch with the Rest of America,” New York Times, August 19, 2025, https://www.nytimes.com/2025/08/19/opinion/artificial-general-intelligence-superintelligence.html.

3. See, for example, Gary Marcus, “The Fever Dream of Imminent Superintelligence Is Finally Breaking,” New York Times, September 3, 2025, https://www.nytimes.com/2025/09/03/opinion/ai-gpt5-rethinking.html.

4. Congressional Budget Office, “The Long-Term Budget Outlook Under Alternative Scenarios for the Economy and the Budget,” May 2025, https://www.cbo.gov/publication/61332.

5. Eric Schmidt and Andrew Sorota, “This is No Way to Lead the Country,” New York Times, November 11, 2025, https://www.nytimes.com/2025/11/11/opinion/ai-democracy-government-authoritarianism.html.

6. See, for example, Erik Brynjolfsson, Daniel Rock, and Chad Syverson, “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies,” Working Paper No. 25148 (National Bureau of Economic Research), January 2020, https://www.nber.org/system/files/working_papers/w25148/w25148.pdf.

7. David Autor, “AI Could Actually Help Rebuild the Middle Class,” Noēma, February 12, 2024, https://www.noemamag.com/how-ai-could-help-rebuild-the-middleclass/; Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, “Generative AI at Work,” The Quarterly Journal of Economics 140, no. 2 (May 2025): 889–942, https://doi.org/10.1093/qje/qjae044; Daron Acemoglu, “The Simple Macroeconomics of AI,” Working Paper No. 32487 (National Bureau of Economic Research), May 2024, https://doi.org/10.3386/w32487.
