What’s There to Fear in a World with Transformative AI? With the Right Policy, Nothing.

Transformative AI may make us more collectively prosperous than ever before, yet there is no iron law that says societal well-being must rise along with it. In this essay, Betsey Stevenson1 argues that a successful transition to a world with TAI requires us to engage with three problems—how ordinary people can seek a better life if human work is devalued or displaced, how valuable resources will be distributed, and where people will get meaning and purpose. If we get the policy response right, we do not need to fear a world with TAI.

Introduction

The most valuable thing that I produce today may be the data generated from my digital footprint: the apps I scroll, the products I order, the movement and health data captured on my phone and watch, the medical information in my lab tests. Like everyone else, I produce a constant stream of behavioral traces that can be harvested, analyzed, and fed into machine learning systems. Those systems, in turn, generate predictions and profits that will ultimately shape not only the ads we see and the opportunities presented to us, but also the evolution of markets, institutions, and, ultimately, our sense of what it means to be human.2

This data is revolutionizing prediction, creating AI systems that promise to handle complex tasks that humans cannot, given our more limited processing power. Transformative AI promises to take the data that we have collectively produced over hundreds of years and use it to generate vast amounts of wealth. If we stop the analysis there, it is easy to be optimistic: more output, more possibilities, more resources with which to meet human needs. If we want to be able to produce more using fewer resources, technological progress is the only way to do it. In much the same way that the steam engine, and later internal combustion engines and electric motors, outperformed humans in strength-based tasks, advanced AI is expected to outperform humans in an expanding range of intellectual and information-processing tasks. That earlier transformation reshaped not only labor markets and economies, but also families, communities, and our very notion of human value.3 Many people were harmed along the way, but few would argue with the proposition that the creation of superhuman artificial strength made humans better off.

Today’s fear is not just another reordering of skills, as in the past, but the possibility that all human skills are displaced so thoroughly that the labor share of income falls and the gains from technological progress flow primarily to the owners and developers of artificial intelligence. A growing empirical literature shows that recent waves of automation have tended to reduce the labor share of income, even as productivity rises.4 This is not an iron law, but it is a warning: Without deliberate policy, an AI-enabled boom in output need not translate into broadly shared prosperity.

The question of whether AI will improve societal well-being cannot be divorced from the question of distribution. And the question of distribution cannot be divorced from considering what human endowments will remain valuable in the age of transformative artificial intelligence. AI can only improve well-being if society adapts in ways that preserve a fair and stable distribution of income in the face of potentially declining returns to human labor and human capital.5

AI can only improve well-being if society adapts in ways that preserve a fair and stable distribution of income in the face of potentially declining returns to human labor and human capital.

The empirical literature on income and life satisfaction shows a robust, roughly log-linear relationship between income and well-being: As countries grow richer, average life satisfaction rises, and the same relationship holds when we compare people within a country or follow them over time.6 Income makes people happier, but at a diminishing rate. That simple fact carries a powerful implication: For any given level of average income, a more equal distribution will typically yield higher average well-being. Economic growth can raise societal welfare, but how the gains are shared determines how much welfare rises.
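The distributional implication of diminishing returns can be made concrete with a toy calculation. Taking log utility as a stand-in for the log-linear relationship (the function and numbers below are purely illustrative, not estimates from the literature), two societies with identical total income differ sharply in average well-being:

```python
import math

def avg_wellbeing(incomes):
    """Average life satisfaction under log utility of income -- a stand-in
    for the roughly log-linear income/well-being relationship."""
    return sum(math.log(y) for y in incomes) / len(incomes)

# Two societies with the same total income of 200:
equal = [100, 100]
unequal = [190, 10]

print(round(avg_wellbeing(equal), 2))    # 4.61
print(round(avg_wellbeing(unequal), 2))  # 3.77
```

Because the logarithm is concave, transferring a dollar from the richer person to the poorer one always raises the average, which is exactly why how the gains are shared matters for aggregate welfare.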

Economists have long debated the trade-off between equity and efficiency: The quest for a better life drives people to study instead of socialize, to work another shift rather than rest, to invest today in the hope of a better tomorrow. Historically, those who chose work over leisure reaped material rewards. Too much redistribution can blunt these incentives and make society poorer overall. But what happens to those incentives if human labor becomes only weakly connected or even completely disconnected from the returns to a more prosperous life? Will we face the same kind of trade-off between enlarging the pie and giving people equal slices, or will the structure of the trade-off itself be transformed? Economists and policymakers need to consider that centuries of thinking about what motivates economic activity may itself be undone by the revolution in artificial intelligence.

If AI systems displace or devalue human work even as they expand total output, then preserving and enhancing well-being will require solving three problems simultaneously: (1) How can ordinary people seek a better life? (2) How will resources be distributed? (3) Where will people get meaning and purpose?

Economists and policymakers need to consider that centuries of thinking about what motivates economic activity may itself be undone by the revolution in artificial intelligence.

This essay explores how policy needs to respond to ensure a successful transition to an era of AI-enabled prosperity by addressing these three questions. I propose three crucial areas for policymakers’ attention. First, we must reshape the conversation about whether we will lose jobs into one that focuses on managing the speed and process of job transformation. Second, we must recognize the important role that humans have historically played, and will continue to play, in generating the data on which artificial intelligence relies, and then design institutions that ensure they share in the value created. Third, we must shore up the bonds of reciprocity and trust within society, so that people experience AI-enabled prosperity not as something done to them by distant systems, but as a shared project that affirms their place in the social contract.

 

Jobs Will Disappear, and Policy Must Help Manage the Transition

Today, the labor market is the main mechanism through which people strive for a better life, stake a claim on society’s resources, and, in many cases, find meaning and purpose. If future advances in AI and robotics come to rival or surpass human capabilities across a wide range of tasks, human labor may no longer provide good answers to any of those three questions. How society adapts to this profound change will be shaped not just by the technology itself, but by the policy responses that governments and societies choose.7

The early Industrial Revolution saw rapid technological advancements that led to significant economic growth, increased productivity, and rising profits for industrialists, yet for decades real wages for ordinary workers barely budged.8 The tension between rising output per head and stagnant real wages for the typical worker triggered policy shifts that laid the foundations for broader welfare initiatives, stronger regulatory interventions, political activism, and a labor movement focused on ensuring that technology benefited more than a narrow elite. The lesson is not that technology is bad, but that productivity gains do not automatically translate into flourishing. They only do so when societies build institutions that make the new economic regime first tolerable, and then genuinely beneficial, for most people.

The lesson is not that technology is bad, but that productivity gains do not automatically translate into flourishing. They only do so when societies build institutions that make the new economic regime first tolerable, and then genuinely beneficial, for most people.

The same logic applies today. The goal of policy should not be to stop AI from ever replacing human labor; stopping technological change would risk throttling growth and locking hundreds of millions of people into extreme poverty. Economic history shows that rising productivity has been central to lifting large shares of humanity out of destitution. Instead, the goal of policy should be to shape the pace and direction of technological change, slowing the adoption of technologies that reduce jobs so that these changes happen on a timeline that works with human capacity to adapt.

Technological progress has already transformed both the quantity and the composition of work. At the turn of the 21st century, about 40% of the global workforce was employed in agriculture; today, that share is around 25%. Over the past 150 years, the average number of hours people around the globe work has declined.9 Shorter work weeks, longer vacations, childhoods spent in play and learning, and longer and healthier retirements have all contributed to the typical person working less than their ancestors might have, a change that most have welcomed.10 These changes are indicative of economic growth and rising living standards. Countries can afford to use labor-saving technology in agriculture when workers have better opportunities elsewhere. Parents can afford to keep their children out of the labor market when basic needs are met. Together they illustrate two simple points. First, greater wealth raises living standards and changes the mix of jobs. Second, in competitive markets, people are only replaced by machines when machines become cheaper for any given set of tasks.11

AI will not shape our future only through what it can do, but through how fast we let it reshape work.

AI will not shape our future only through what it can do, but through how fast we let it reshape work. Economists often distinguish between labor-augmenting and labor-replacing technologies, but adding a time dimension shows that an important distinction is the speed at which technology changes the mix of tasks, the number of workers needed, and demand for the product.12

Word processing is a useful example. It made office workers more efficient and ultimately wiped out entire pools of typists. Yet because that transition unfolded over decades, office work evolved gradually. As word processing software replaced routine secretarial tasks, jobs were transformed rather than instantly destroyed, with new responsibilities in coordination, communication, and information management added over time.

Recent empirical work on office software adoption shows a similar pattern for contemporary white-collar work. Dillender and Forsythe study the adoption of new office and administrative support (OAS) software and find that when firms adopt more sophisticated tools, this software augments administrative staff, allowing them to do more complex tasks.13 As a result, wages for these workers rise. But other workers experience negative wage spillovers, as the same technology replaces some of the work of office accountants and human resources professionals. In short, it augments some workers while undermining others. And yet broader social changes, such as women becoming increasingly accepted in a wider range of occupations, and the expansion of other service jobs, such as in health care and education, helped ensure that, for many, the adjustment felt like a natural progression rather than an abrupt erasure.

Speed also matters for how people experience meaning, control, and dignity in their lives. Not being able to enter a disappearing occupation is very different from being laid off from it in mid-career. Choosing to upskill is different from being told to “upskill or leave.” When technological change moves at a manageable pace, people can voluntarily adapt: They have time to learn new skills, switch employers or sectors, and reinterpret their identities around new roles. That is part of what happened as agriculture and manufacturing shrank as shares of employment: Education systems expanded, new service-sector jobs emerged, and families could reorient children away from farm or factory labor toward schooling. By contrast, if AI compresses these transitions into just a few years, workers and communities may experience change as something done to them, rather than a process they can navigate—fueling resentment, loss of purpose, and a sense of being tossed aside.

If AI compresses these transitions into just a few years, workers and communities may experience change as something done to them, rather than a process they can navigate—fueling resentment, loss of purpose, and a sense of being tossed aside.

The same time dimension will shape how AI and, eventually, transformative AI affect inequality and job structure across three rough phases: short-run task replacement, medium-run occupational restructuring, and long-run human substitution.

In the short run, AI tools augment or replace specific tasks for doctors, lawyers, accountants, coders, customer service agents, and many others. We are currently in this phase, and early evidence suggests that these tools can raise the productivity of lower-performing or less-experienced workers, narrowing gaps within occupations: Experiments with ChatGPT and other generative tools show large gains in speed and quality for workers at the bottom of the performance distribution.14 But there is also a real concern that automating entry-level tasks could erode entry-level jobs. These are jobs in which workers have historically been productive enough to be paid an income while gaining the experience that allows large wage gains later on. If AI makes these workers insufficiently productive at the entry point, newcomers will find it harder to gain the experience they need to become more productive later in their careers. In short, AI may reshape the age-wage profile and thus require new models of work and training.

These changes will lead to phase two: In the medium run, occupations themselves will restructure as some jobs shrink or vanish, others expand, and entirely new roles appear—much as they did when agriculture’s share of US employment fell from 60% in the late 19th century to 1% today, and manufacturing’s share fell from about 30% in the mid-20th century to roughly 8% now. Whether these transitions felt (and currently feel) like opportunity or devastation depends crucially on whether people and places have enough time and support to move into the new jobs that arise. The transition away from manufacturing in the United States has been less successful on these dimensions than the transition away from agriculture.

That is why policymakers cannot be agnostic about the pace of AI adoption, its pairing with new opportunities, and the way it plays out in different communities. They cannot decide whether AI exists, but they can influence whether its deployment is a slow, managed evolution or a rapid wave of redundancy. Through regulation, taxation, education and training systems, social insurance, and place-based investment, policy can slow and smooth labor displacement, give workers time to adapt, and help ensure that whole communities are not abruptly left behind. Well-designed policies can create paths for people to augment their skills, incomes, and life patterns, rather than be displaced by technology. How AI ultimately affects income distribution, social cohesion, and people’s sense of meaning will depend less on the technology itself and more on whether we choose to manage the speed and shape of its impact on human work.

Taxation is a natural place to start, because it directly shapes the relative prices that firms face when deciding whether to hire humans or buy machines. In simple terms, we tend to get less of whatever is taxed, and more of whatever is subsidized. Today, most advanced economies tax labor more heavily than capital, reflecting a historical view that capital is more mobile and more responsive to taxation, and that investment is crucial for growth. But in a world where we worry that human labor may be displaced too quickly, this logic becomes perverse. Why are we discouraging human labor by taxing it heavily, while effectively subsidizing its replacement by labor-saving capital? Recent work by Acemoglu et al. suggests that the US tax code is indeed biased in favor of equipment and software and against labor, and that this bias likely encourages excessive automation.15

Why are we discouraging human labor by taxing it heavily, while effectively subsidizing its replacement by labor-saving capital?

A first, simple policy recommendation is therefore to dramatically reduce taxation on labor and shift our tax base toward consumption, pollution, and capital. The goal is not to freeze technology, but to stop subsidizing the premature replacement of workers before institutions like training, social insurance, and distribution have caught up. Some policymakers have proposed a “robot tax” to address automation, but implementation would hinge on politically fraught definitions of what counts as a “robot.”16 It is more straightforward to change broad relative prices that make it less attractive to fire workers and buy machines.

 

Data Generation is Adding Value: Who Should Own It?

But even a better-designed tax system does not answer the deeper question: Who owns the surplus when machines can do almost everything? AI systems do not arise out of nowhere. They are built on top of publicly funded research, shared infrastructure, open-source software, the accumulated content of the internet, and the constant flow of behavioral data produced by billions of people as they search, scroll, post, commute, shop, and work. When large models turn those ingredients into extraordinary profits, it is hard to argue that the resulting rents belong solely to a handful of firms and their shareholders. The social contract for an AI age has to start from a simple recognition: The surplus created by these systems is a collective product and should, in some measure, be treated as a collective asset.17

The social contract for an AI age has to start from a simple recognition: The surplus created by these systems is a collective product and should, in some measure, be treated as a collective asset.

Data is the asset everyone has. We are no longer contributing to the economy only through our formal jobs, but also through the data and digital interactions generated simply by living a connected life. Technology companies do not have a natural right to this data any more than they have a natural right to our labor. If we take seriously the idea that people have a claim over the fruits of their effort, it is hard to see why that claim should stop at the digital edge. In a world where AI relies on, and was indeed created by, appropriating data from the public commons, the question becomes: How should the resulting wealth be shared so that it raises well-being broadly rather than simply enriching those who already own capital and code?

That observation points toward a central policy recommendation for the AI era: a digital dividend. Rather than treating AI and data-driven profits as a private windfall, governments should tax the use of data and the rents from highly profitable digital and AI firms, and return that money to people as a regular dividend. Structurally, it would look a lot like a universal basic income, but with a crucial difference: It is not framed as a handout, but as a return on a shared asset. You are contributing your data into the system; you get a dividend from that shared resource. The analogy is clearest in places such as Alaska, where the Permanent Fund invests a share of state oil revenues and pays every resident an annual dividend.18 In an AI context, the “resource” is not oil in the ground but data, computation, and intellectual capital built on top of centuries of public and private investment. 
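A back-of-the-envelope sketch shows the mechanics of such a dividend. The levy rate, rent estimate, and population below are hypothetical placeholders, not proposals:

```python
# Toy sketch of a digital dividend in the spirit of the Alaska Permanent
# Fund: a levy on AI and data rents, pooled and paid out equally to all
# residents. All figures are hypothetical.

def digital_dividend(ai_rents, levy_rate, population):
    """Per-person annual dividend from a levy on AI-driven rents."""
    return ai_rents * levy_rate / population

# e.g. $500 billion in annual AI rents, a 10% levy, 330 million residents:
print(round(digital_dividend(500e9, 0.10, 330e6), 2))  # 151.52
```

The point of the sketch is the structure, not the numbers: the payout scales with AI rents, so every resident's dividend grows as the technology becomes more productive.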

A digital dividend differs from traditional welfare programs in two important ways. First, it is framed as a citizen’s share of common wealth rather than as residual charity. That matters for dignity and meaning: People are not receiving help because they have failed in the labor market, but because they are co-owners of an economic system that now relies heavily on non-labor inputs that they and their ancestors helped generate. Second, it is decoupled from employment status while still being explicitly linked to the productivity gains from AI. That makes it a natural complement to, rather than a replacement for, employment-based social insurance. In a world of diminishing marginal well-being from income, channeling AI-driven surplus toward the bottom and middle of the distribution will raise average life satisfaction more than allowing a concentrated windfall at the very top. 

In a world of diminishing marginal well-being from income, channeling AI-driven surplus toward the bottom and middle of the distribution will raise average life satisfaction more than allowing a concentrated windfall at the very top.

Designing a digital dividend raises serious technical and political questions about how we reward effort and contribution, how we define tax bases, how we handle cross-border data flows, and whether we risk entrenching dominant platforms. The dividend described so far is based only on a share of common wealth, but individual people’s data varies in value. Advertisers already pay different amounts for access to different audiences; by analogy, a digital dividend could allow people to earn more by contributing data that is especially useful for training models, targeting services, or developing new products.

However, the value of personal data is likely to be linked to income and social position. Tying a digital dividend directly to the market value of one’s data therefore risks reproducing, or even amplifying, existing inequalities. While data may be a kind of labor, it is not clear that paying people for their unique data would generate the kind of relationship between effort and reward that wages do today. 

A broad-based, transparent, and universal dividend can be both administratively feasible and politically durable. In this way, a universal digital dividend is both a distribution policy and a growth policy: It ensures that the gains from AI actually translate into demand for goods and services and into higher well-being, while giving people a tangible stake in a technological future that might otherwise feel like something done to them rather than with them. 

 

Building Reciprocity and Trust in an AI Economy 

Even if we manage the pace of job transformation and share the economic surplus more fairly, something more fragile remains at stake: the bonds of reciprocity and trust that hold a society together. AI systems are not appearing in a neat world of small, competitive firms trading fairly in open markets. They are emerging in highly concentrated sectors, built on models trained with data taken from what had seemed like a public commons, and developed by firms that depend heavily on public infrastructure, public research, and public tolerance of their business models. This is both a justification for the kind of digital dividend described, and a risk. 

When the benefits and burdens of a powerful technology are so asymmetrically distributed, trust does not arise automatically; it has to be earned and maintained.

These are not your butcher and your baker in the standard competitive framework. When the benefits and burdens of a powerful technology are so asymmetrically distributed, trust does not arise automatically; it has to be earned and maintained. And yet attitudes toward capitalism and big business are quite negative.19 AI is colliding with a political economy in which voters have lost faith in economic expertise, and policymakers are drifting away from open competition toward protectionist policies which, perhaps unwittingly, reward those who curry favor.20 In a world of local butchers and bakers, that kind of cronyism is costly but limited. In a world in which a handful of firms may control the infrastructure for finance, media, logistics, and health care, it is potentially catastrophic. It undermines people’s sense that the rules are fair, that effort is rewarded, and that the state is acting as an honest broker rather than a partner in extraction. Countries with robust democratic institutions, independent regulators, and a real commitment to competition are likely to navigate AI much better than those without them. Policy here is diffuse but not mysterious: enforce competition law, guard against regulatory capture, protect independent institutions, and keep the policymaking table open so that smaller firms and civil society have a real voice. These are not just efficiency concerns; they are how we build trust in the system and signal that we still take reciprocity seriously. 

Yet even if we get the economics and the governance right, there remains a deeply human question: What gives life meaning in a world where work may not? In Japan, the concept of ikigai—that which makes life worth living—captures a sense of purpose that is only loosely tied to income or occupation. Survey evidence suggests that ikigai is most strongly associated with participation in civic life: People who are involved in multiple community groups, volunteer organizations, or social clubs report higher purpose and better health. By contrast, in the United States, happiness and life satisfaction have stagnated over the last half century even as incomes have risen.21 One of the clearest trends has been a decline in participation in civic organizations, a major source of ikigai.22 Thriving communities require engagement, and yet people are failing to join the volunteer groups, social clubs, and civic organizations that give life meaning. As Robert Putnam noted, we still bowl, but we do not bowl in leagues. We are richer in private consumption, poorer in shared projects.23 

Yet even if we get the economics and the governance right, there remains a deeply human question: What gives life meaning in a world where work may not?

If AI and automation reduce the need for human labor, we will be confronted with a choice about what to do with the time and energy that AI frees. One path is toward further individualization: more personalized feeds, more solitary consumption, more finely targeted experiences that never require us to negotiate with people unlike ourselves. Another is to use some of the gains from AI to rebuild the social infrastructure of shared life: to support civic organizations, create and maintain public spaces where people actually want to spend time together, synchronize weekends and vacations so that time off is a collective good rather than a private luxury, and recognize caregiving and volunteering as real contributions to social welfare. Jobs need not remain the sole anchor of dignity and self-worth, but everyone needs some sense that others depend on them and that they, in turn, can depend on others.

 

Conclusion 

The threads of distribution, competition, and meaning come together in a single word: trust. We have recently seen what happens when technical capacity outruns social cohesion: We possess vaccines that can prevent diseases like measles, and yet outbreaks reappear because people do not trust the institutions delivering them. We could easily imagine a world in which AI delivers remarkable tools for diagnosis, education, climate mitigation, or democratic deliberation—and yet those tools go unused, or are turned against one another, because people do not trust the firms that provide them, the officials who regulate them, or their fellow citizens who might benefit from them. In such a world, the problem is not that we lack cures; it is that we lack the social glue to use them together. 

Seen in this light, the policies I have emphasized are all, in their own way, social-cohesion policies. Managing the speed of AI adoption so that people can adapt, rather than be discarded, is a way of saying that we still see one another as partners rather than inputs to be swapped out. A digital dividend that returns a share of AI-driven surplus to the broad population signals that we recognize our mutual contributions and obligations. Robust democratic institutions and competition policy are how we reassure people that the game is not rigged. Investments in civic life and non-market roles create new sources of purpose and connection when wage labor alone cannot carry that burden. 

What, then, is there to fear in a post-TAI world? Not, primarily, the absence of output. If history is any guide, transformative technologies will deliver extraordinary productive capacity. The deeper fears are older and more human: that the material gains arrive in a pattern that leaves many people behind for a long time, and that even when incomes rise, the institutions and norms that grant people purpose, status, and community fail to keep pace. If we focus only on productivity, we will get productivity. If we also design for fair shares, robust institutions, and renewed reciprocity—recognizing our data and our attention as the shared inputs to this new system, and building a social contract that reflects that fact—we have a chance to get what people actually want when they say they want growth: a life that feels worth living, together.

Seen in this light, the policies I have emphasized are all, in their own way, social-cohesion policies. Managing the speed of AI adoption so that people can adapt, rather than be discarded, is a way of saying that we still see one another as partners rather than inputs to be swapped out.

1. I would like to thank Schila Labitsch and Archana Kamath for excellent research assistance, and my students, who read and debated many of the cited works with me. I would also like to thank the conference participants and organizers of the 2025 Economics of Transformative AI Workshop. I am immensely grateful for their insights, time, and patience. All remaining errors and insufficiencies are my own.

2. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, 2018); Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Public Affairs, 2019); Jaron Lanier, Who Owns the Future? (Simon & Schuster, 2013); Imanol Arrieta-Ibarra, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, and E. Glen Weyl, “Should We Treat Data as Labor? Moving Beyond ‘Free,’” AEA Papers and Proceedings 108 (2018): 38–42. 

3. Karl Polanyi, The Great Transformation: The Political and Economic Origins of Our Time (Farrar & Rinehart, 1944). 

4. Loukas Karabarbounis and Brent Neiman, “The Global Decline of the Labor Share,” Quarterly Journal of Economics 129, no. 1 (2014): 61–103, https://doi.org/10.1093/qje/qjt032; Daron Acemoglu and Pascual Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor,” Journal of Economic Perspectives 33, no. 2 (Spring 2019): 3–30, https://doi.org/10.1257/jep.33.2.3.

5. Daniel Susskind, A World Without Work: Technology, Automation, and How We Should Respond (Metropolitan Books, 2020). 

6. Daniel W. Sacks, Betsey Stevenson, and Justin Wolfers, “Subjective Well-Being, Income, Economic Development, and Growth,” Working Paper 16441 (National Bureau of Economic Research, 2010), https://doi.org/10.3386/w16441; Betsey Stevenson and Justin Wolfers, “Economic Growth and Subjective Well-Being: Reassessing the Easterlin Paradox,” Brookings Papers on Economic Activity 39, no. 1 (2008): 1–102; Betsey Stevenson and Justin Wolfers, “Subjective Well-Being and Income: Is There Any Evidence of Satiation?,” American Economic Review 103, no. 3 (2013): 598–604, https://doi.org/10.1257/aer.103.3.598

7. David H. Autor, “Why Are There Still So Many Jobs? The History and Future of Workplace Automation,” Journal of Economic Perspectives 29, no. 3 (2015): 3–30, https://doi.org/10.1257/jep.29.3.3; Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton & Company, 2014). 

8. Robert C. Allen, “Engels’ Pause: Technical Change, Capital Accumulation, and Inequality in the British Industrial Revolution,” Explorations in Economic History 46, no. 4 (2009): 418–435, https://doi.org/10.1016/j.eeh.2009.04.004.

9. Charlie Giattino and Esteban Ortiz-Ospina, “Are We Working More than Ever?,” Our World in Data, December 16, 2020, https://ourworldindata.org/working-more-than-ever.

10. Betsey Stevenson, “Artificial Intelligence, Income, Employment, and Meaning,” in The Economics of Artificial Intelligence: An Agenda, eds. Ajay Agrawal, Joshua Gans, and Avi Goldfarb (University of Chicago Press, 2018).

11. Autor, “Why Are There Still So Many Jobs?”; Acemoglu and Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor.”

12. David Autor and Neil Thompson, “Expertise,” Working Paper 33941 (National Bureau of Economic Research, 2025), https://doi.org/10.3386/w33941; Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, “Canaries in the Coal Mine? Six Facts About the Recent Employment Effects of AI,” Stanford Digital Economy Lab, August 26, 2025, https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf.

13. Marcus Dillender and Eliza Forsythe, “Computerization of White Collar Jobs,” Working Paper 29866 (National Bureau of Economic Research, 2022), https://doi.org/10.3386/w29866.

14. Shakked Noy and Whitney Zhang, “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence,” Science 381, no. 6654 (2023): 187–192, https://doi.org/10.1126/science.adh2586; Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, “Generative AI at Work,” Quarterly Journal of Economics 140, no. 2 (May 2025): 889–938, https://doi.org/10.1093/qje/qjae044.

15. Daron Acemoglu, Andrea Manera, and Pascual Restrepo, “Does the US Tax Code Favor Automation?,” Brookings Papers on Economic Activity 2020, no. 1 (Spring 2020): 231–300, https://dx.doi.org/10.1353/eca.2020.0003.

16. Cady Stanton, “‘Robot Tax’ Proposal Sparks Skepticism over Its Practicality,” Tax Notes, October 21, 2025, https://www.taxnotes.com/featured-news/robot-tax-proposal-sparks-skepticism-over-its-practicality/2025/10/20/7t67l.

17. Lanier, Who Owns the Future?; Arrieta-Ibarra et al., “Should We Treat Data as Labor?”

18. Damon Jones and Ioana Marinescu, “The Labor Market Impacts of Universal and Permanent Cash Transfers: Evidence from the Alaska Permanent Fund,” American Economic Journal: Economic Policy 14, no. 2 (May 2022): 315–340, https://doi.org/10.1257/pol.20190299.

19. Jeffrey M. Jones, “Image of Capitalism Slips to 54% in U.S.,” Gallup News, September 8, 2025, https://news.gallup.com/poll/694835/image-capitalism-slips.aspx.

20. Betsey Stevenson, “When Democracy Falters: A Multidisciplinary, Multibook Review Essay on Polarization, Populism, and Authoritarianism,” Journal of Economic Literature, forthcoming.

21. Jean M. Twenge, “The Sad State of Happiness in the United States and the Role of Digital Media,” in World Happiness Report 2019, eds. Betsey Stevenson and Justin Wolfers (Wellbeing Research Centre, 2019): 86–95, https://www.worldhappiness.report/ed/2019/the-sad-state-of-happiness-in-the-united-states-and-the-role-of-digital-media/; Stevenson and Wolfers, “Economic Growth and Subjective Well-Being: Reassessing the Easterlin Paradox”; Betsey Stevenson and Justin Wolfers, “The Paradox of Declining Female Happiness,” American Economic Journal: Economic Policy 1, no. 2 (2009): 190–225, https://doi.org/10.1257/pol.1.2.190.

22. Betsey Stevenson, The Economics of Transformative AI (University of Chicago Press, 2025), chap. 10, https://www.nber.org/books-and-chapters/economics-transformative-ai/what-there-fear-post-agi-world.

23. Robert D. Putnam, Bowling Alone: The Collapse and Revival of American Community (Simon & Schuster, 2000).
