Transformative AI in Financial Systems

As AI continues its advance, our financial systems stand on the brink of transformation. In this essay, the authors warn that the same technology that promises to enable frictionless trade, real-time modeling, and inclusive ownership could also accelerate inequality, market manipulation, and financial instability. We need to pay more attention, they argue, to how transformative AI will overhaul not only production, but trade and investment as well.

Introduction

We live in an era where AI is becoming increasingly powerful, potentially transforming research, development, production, and consumption. However, that is not the only digital transformation occurring: Our methods of managing trade and investment are being transformed by the widespread digitization of assets, enabling near-instant trading, unimaginably complex financial instruments, and new types of connections between trades. The tools enabling these new capabilities support the “tokenization” of real-world assets (e.g., cryptocurrencies backed by real-world assets) and more secure systems (e.g., digitally distributed ledgers and encrypted communications). For instance, SWIFT, the interbank messaging network that handles roughly $4 trillion in transfers per day, has recently deployed AI-enabled automatic payments and trading, known as “smart contracts,” on a distributed ledger as a core part of its technology stack.1

Tokenization is a particularly important idea. It represents an all-digital extension of the traditional process of securitization, where illiquid assets, such as home mortgages, are bundled together to provide investors with a stream of income. Securitization makes it easy to own shares in physical assets like property as well as shares of future revenues, and to make complex bets on the future value of those shares, e.g., to place short or hedge orders on asset value. Securitization makes assets more liquid and markets more efficient, but as the 2008 mortgage crash showed, it can also make them vulnerable to unusual events.

But tokenization, because it is all-digital, makes securitization and security trading much cheaper, and opens up the use of AI to make very complex, high-frequency trades.2 In turn, the tokenization of real-world assets will allow critical resources, such as medicines, food, housing, data, and computation, to be traded and monetized in the same way stocks are traded at high frequency today, which could lead to vulnerability in those sectors as well.
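
To make the mechanics concrete, here is a minimal, hypothetical sketch of tokenized securitization: a pool of illiquid assets is divided into fungible ownership tokens whose issuance and transfers are recorded on a simple append-only ledger. All names, fields, and numbers are invented for illustration; real systems add cryptographic signatures, distributed consensus, and smart-contract logic.

```python
# Minimal, hypothetical sketch of tokenized securitization: a pool of
# illiquid assets is split into fungible ownership tokens, and issuance and
# transfers are recorded on a simple append-only ledger.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AssetPool:
    name: str            # e.g., a bundle of mortgages
    total_value: float   # appraised value of the bundle
    total_tokens: int    # number of fractional ownership tokens

@dataclass
class Ledger:
    pool: AssetPool
    holdings: Dict[str, int] = field(default_factory=dict)  # owner -> tokens
    history: List[Tuple] = field(default_factory=list)      # append-only log

    def issue(self, owner: str, tokens: int) -> None:
        self.holdings[owner] = self.holdings.get(owner, 0) + tokens
        self.history.append(("issue", owner, tokens))

    def transfer(self, sender: str, receiver: str, tokens: int) -> None:
        if self.holdings.get(sender, 0) < tokens:
            raise ValueError("insufficient tokens")
        self.holdings[sender] -= tokens
        self.holdings[receiver] = self.holdings.get(receiver, 0) + tokens
        self.history.append(("transfer", sender, receiver, tokens))

# Example: a $1M mortgage bundle split into 1,000 tokens of $1,000 each.
ledger = Ledger(AssetPool("mortgage-bundle-A", 1_000_000, 1_000))
ledger.issue("originating-bank", 1_000)
ledger.transfer("originating-bank", "retail-investor", 5)
print(ledger.holdings)
```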

On the positive side, the combination of tokenization and AI has allowed new distributed methods of capital accumulation that benefit much larger segments of society.3 It also promises to provide new ways for the less wealthy to purchase homes by digitally automating legal, accounting, and similar costly services,4 as well as offering methods to compensate individuals for currently unpaid prosocial or caregiving work.5

But at the same time, even today’s simple AI has shown potential for harm. For example, AI-driven crypto investment platforms (known as decentralized finance, or DeFi, platforms), which buy and sell assets using fully automated smart contracts, can result in explosively unstable markets exhibiting dramatic wealth accumulation and annihilation.6 Similarly, AIs used to coordinate many other (very simple) AIs are already a mainstay of cyberattacks, and groups of humans coordinating using AI-driven social media have been the source of bank failures and the “meme stock” phenomenon.7

Our conclusion is that combining AI agents and digital asset technology could lead to a more efficient, fair, and prosperous world; however, the problems already evident with today’s simple AI suggest that it will not do so without corresponding improvements in auditing and regulation. The basic problem is the dramatically lower costs and higher speed with which market-dominating coalitions of trades and actors can be created. AI-driven instant coalitions can cause financial crashes, as well as allow nefarious actors to “corner” markets and impoverish agents who are not participating in their coalition.8

In this paper, we will consider how advances in AI are beginning to transform finance and financial systems, examine these potential benefits and problems, and then suggest how we might go about creating scientific understanding and practical policy to best harness the potential of future transformative AI technology. For the purposes of this paper, we will use the term “AI” to refer to the sorts of AI that are already commercially prevalent, and “transformative AI” (TAI) to refer to a future form of AI capable of making accurate predictions and explanations of all financial phenomena, limited only by the availability of data, the range of situations and strategies used to train the AI, and measurement noise. Markets with competing TAIs will not necessarily be more efficient than the best of today’s markets, but the competition for better data, better models, and computational resources may come to be the core of all trading strategies.

 

Building Human-Centric Financial Systems

The famous epitaph on Karl Marx’s gravestone boldly states, “The philosophers have only interpreted the world in various ways. The point however is to change it.”9 Many of Marx’s followers tried and did change the world (mainly for the worse), but very few of them, if any, managed to understand it properly.

We may revisit the epitaph in the context of 21st-century technological capabilities, which, if used correctly, allow us to better understand society, including the financial system, with all its failures, and provide the tools to re-engineer it. TAI combined with the emerging widespread digital infrastructure (e.g., tokenization, distributed ledgers, etc.) offers a practical pathway toward realizing a more efficient, fair, and prosperous financial system, addressing many of the structural issues Marx identified, including the concentration of capital, systemic inefficiencies, and the resulting alienation of large segments of the population.10

Marx critiqued the capitalist financial system for enabling accumulation by the few, leading to systemic crises and worker alienation.11 Academics and policymakers have studied these problems for over a century without finding robust solutions. The tools of TAI and digital infrastructure potentially allow us to address these problems head-on. The following are some of the possibilities for AI and distributed and encrypted ledger technology to address Marx’s concerns:

1. Unequal access to capital. Traditional credit systems exclude large population groups. TAI can potentially analyze and create far more inclusive and culturally contextualized credit models, and dramatically lower accounting, legal, insurance, and other service costs. It can also potentially develop novel strategies for safe and flexible fractional ownership of traditionally illiquid or inaccessible investments, such as real estate, infrastructure, and art, democratizing access to wealth-building opportunities.

2. Communal ownership. TAI can enable communities to pool capital and democratically govern financial resources, echoing Marx’s ideal of communal ownership but updated for the digital age.12 TAI reframes economic concepts in postindustrial economies, making data a factor of production and allowing new compensation models for user-generated data, effectively returning value to people for their participation in digital value creation. For instance, TAI can enable data co-ops, so that individuals can control and monetize their financial and behavioral data, shifting power away from surveillance capitalism.13

3. Unpaid work. Traditional financial methods and measures do not recognize non-economic contributions, such as childcare, costs to the environment, or the artistic flourishing of communities. TAI can potentially estimate the value of these activities (e.g., GDP-B)14 and their ramifications for social sustainability, using measures such as those developed for the UN Sustainable Development Goals.

4. Fair, effective regulation. TAI can potentially model and anticipate systemic risks by continuously analyzing flows across the entire financial network (including shadow banking and cross-border transactions); a minimal illustrative sketch of such network-level monitoring follows this list. This real-time risk monitoring can improve systemic stability and allow regulators to preempt crises instead of reacting to them. One of the greatest risks associated with TAI is its use by humans to accomplish nefarious ends, but TAI can also be used to detect and counter this danger through regulation, standards, and similar mechanisms.

5. Social value. TAI can also potentially continually test transactions not just for profit but for social impact and resilience. The UN Sustainable Development Goals laid out quantitative metrics for 17 dimensions of human life (e.g., inequality and sustainability).15 Unlike the opaque balance sheets of large institutions, TAI can potentially be used to create verifiable audit trails that help prevent hidden social damage and contagion.
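
The sketch below makes the risk-monitoring idea in item 4 concrete at the smallest possible scale: given a hypothetical map of interbank exposures and capital buffers, a supervisor can propagate a simulated shock through the network and see which failures would cascade. The banks, balance-sheet numbers, and the assumption that a failed borrower repays nothing are all illustrative simplifications, not a description of any real supervisory system.

```python
# Hypothetical sketch: propagate a shock through a network of interbank
# exposures and report which institutions fail in cascade. Exposures,
# capital buffers, and the default rule are illustrative only.
exposures = {            # lender -> {borrower: amount owed to the lender}
    "BankA": {"BankB": 40.0, "BankC": 10.0},
    "BankB": {"BankC": 30.0},
    "BankC": {"BankA": 20.0},
}
capital = {"BankA": 25.0, "BankB": 25.0, "BankC": 15.0}

def cascade(initial_failure: str) -> set:
    """Return the set of banks that fail after `initial_failure` defaults."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for lender, loans in exposures.items():
            if lender in failed:
                continue
            # Simplifying assumption: a failed borrower repays nothing.
            loss = sum(v for borrower, v in loans.items() if borrower in failed)
            if loss >= capital[lender]:
                failed.add(lender)
                changed = True
    return failed

# With these illustrative numbers, BankC's failure wipes out BankB, whose
# failure in turn wipes out BankA.
print(cascade("BankC"))
```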

Marx’s materialist dialectic saw history as shaped by contradictions between labor and capital. In our era, the contradiction lies between centralized, extractive financial systems and the hopes, goals, and fears of participating humans. TAI and digital infrastructure have the potential to address that contradiction; they are, however, merely tools, and will not automatically result in social change. Their deployment must be guided by collective political will, regulatory foresight, and ethical principles combined with democratic values, interdisciplinary input, and public accountability.

 

TAI for Economic Policy

The impact of transformative AI on economic policy is multifaceted. Current AI is already reshaping how central banks decide upon and implement economic interventions. TAI will go further, enabling central banks to process massive, real-time datasets, including financial transaction flows, price signals, and social media sentiment, far beyond traditional macroeconomic indicators. TAI should be able to predict inflation, unemployment, and GDP more accurately, and support nowcasting (estimating current conditions before official statistics are released) to help policymakers react faster.
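
As a deliberately simplified illustration of nowcasting, the sketch below fits a linear map from a few high-frequency indicators (say, card-payment volumes, a freight index, and a sentiment score) to quarterly GDP growth, then applies it to the current quarter’s partial data. The indicators and numbers are invented; real nowcasting systems use far richer models and data sources.

```python
import numpy as np

# Hypothetical history: rows are past quarters, columns are high-frequency
# indicators observed during the quarter (card-payment growth, freight
# index, sentiment score). y_hist is the GDP growth published afterwards.
X_hist = np.array([
    [2.1, 0.8, 0.3],
    [1.7, 0.5, 0.1],
    [2.5, 1.1, 0.6],
    [0.9, 0.2, -0.2],
    [1.4, 0.6, 0.0],
])
y_hist = np.array([2.0, 1.5, 2.6, 0.8, 1.3])  # past GDP growth, in percent

# Fit a simple linear nowcasting model (with intercept) by least squares.
A = np.column_stack([np.ones(len(X_hist)), X_hist])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# "Nowcast" the current quarter from the indicators observed so far.
x_now = np.array([1.0, 1.8, 0.7, 0.2])  # leading 1.0 is the intercept term
print(f"GDP growth nowcast: {x_now @ coef:.2f}%")
```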

Clearly, central banks can become more data-driven, responsive, and nuanced by using TAI tools, provided they learn how to interpret, validate, and ethically govern the corresponding results. Without replacing human central bankers, TAI tools have the potential to augment their decision-making capabilities decisively.

To understand how this might happen, let us first step back and look at the granularity with which we model and regulate the economy. Current practice generally relies on highly aggregated measures such as GDP, employment, and profitability within economic regions or industry segments. But AI is already able to handle much more granular data, and TAI along with improved digital infrastructure will presumably be even more capable.

At what level of granularity could TAI model every aspect of the economy and produce accurate predictions? We should presume that some private and proprietary data will remain unavailable to the TAI model, not only because of human preference but also because, without some private or otherwise unavailable information (that is, without any uncertainty), the notions of market and competition become meaningless.

The classic Input-Output (IO) approach to modeling the economy builds on Benedetto Cotrugli’s and Luca Pacioli’s invention of double-entry bookkeeping in the 15th century,16 by assuming that the input (credit) and output (debit) ledgers of all companies and individuals are available for inspection. The idea of complete modeling of the economy in terms of IO tables can be traced back to the celebrated Tableau Économique, introduced by the French economist François Quesnay in 1758.17 Subsequently, Quesnay’s insights were expanded by the Soviet-American economist Wassily Leontief, who won the Nobel Memorial Prize in Economic Sciences in 1973 for this work.18 Both the Tableau Économique and Leontief’s original tables are static. Although Leontief and his disciples spent decades trying to design a dynamic version of the model, they never brought those efforts to fruition. For that reason (among several others), interest in the model diminished over time.

However, AI can dramatically enhance IO models, making them more realistic, responsive, and valuable for policy and forecasting. While traditional IO models (like those based on Leontief’s framework) are powerful for tracking sectoral interdependencies, they often have limited behavioral realism, and TAI can provide decisive help in removing these limitations. Dynamic IO models could extend Leontief’s static IO tables over time and track intersectoral flows of goods, services, and capital, including capital accumulation, consumption, and investment dynamics. Such models are already used in energy management, industrial policy, and national accounts. Unfortunately, today’s static IO models are limited by the lack of sufficiently granular data, and so frequently assume fixed production coefficients and neglect nonlinearities, price feedback loops and other economic complexities, and technological change. By construction, they depend heavily on historical data. TAI, on the other hand, can enable the use of granular, real-time data.
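
For reference, the Leontief framework can be written compactly: gross output x must cover both intermediate use A x (where A is the matrix of technical coefficients) and final demand d. The time-indexed version below is simply a shorthand for the re-estimation of coefficients from granular, real-time data discussed in the next paragraphs.

```latex
% Static Leontief input-output system with fixed technical coefficients A:
x = A x + d \quad\Longrightarrow\quad x = (I - A)^{-1} d

% Dynamic, data-driven variant: coefficients and final demand are
% re-estimated each period t instead of being assumed fixed:
x_t = A_t x_t + d_t \quad\Longrightarrow\quad x_t = (I - A_t)^{-1} d_t
```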

TAI can augment dynamic IO models in various ways, particularly by learning time-varying coefficients connecting different economic processes instead of assuming they are fixed. TAI models should be able to detect changes in sectoral input structures over time and capture shifts due to technology, globalization, regulation, or supply chain reorganization, thus producing a model that evolves with the economy. Such models are especially important given how quickly technology, trade patterns, and supply chains are currently being reorganized.

Perhaps the biggest challenge with traditional IO models is that they are linear, but economies are not. Existing AI techniques such as graph neural networks and agent-based machine learning models are already starting to capture nonlinear feedback and network effects, including input substitution, network fragility caused by shock propagation through supply chains, and sectoral adaptation over time. TAI should enable online learning from new data, thus enabling real-time updating and monitoring of high-frequency trade, production, logistics, or emissions data. It could automatically update coefficients or exogenous variables and produce living models that evolve continuously rather than being updated every five years. Consequently, TAI models should be useful for more robust modeling of crises, innovation diffusion, or disruption.
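
As a toy illustration of such continuously updated (“living”) coefficients, the sketch below re-estimates a small three-sector technical-coefficient matrix from observed intersectoral flows each period, blending the new estimate with the old via an exponential forgetting factor, and then reuses the result in a standard Leontief calculation. The flow tables and the particular update rule are invented for illustration; they are only one of many ways such a model could be kept current.

```python
import numpy as np

def coefficients(flows: np.ndarray, gross_output: np.ndarray) -> np.ndarray:
    """Technical coefficients a_ij = flow from sector i to sector j / output of sector j."""
    return flows / gross_output[np.newaxis, :]

# Invented three-sector flow tables for two consecutive periods (rows: selling
# sector, columns: buying sector) and the corresponding gross outputs.
flows_t0 = np.array([[10., 20., 5.], [15., 5., 10.], [2., 8., 4.]])
x_t0 = np.array([100., 80., 60.])
flows_t1 = np.array([[12., 18., 7.], [14., 6., 12.], [3., 9., 5.]])
x_t1 = np.array([105., 82., 63.])

lam = 0.7  # forgetting factor: weight placed on the newest estimate
A = coefficients(flows_t0, x_t0)
A = (1 - lam) * A + lam * coefficients(flows_t1, x_t1)  # rolling update

# Use the updated coefficients in a standard Leontief question: what gross
# output is needed to satisfy a given vector of final demand d?
d = np.array([50., 40., 30.])
x_needed = np.linalg.solve(np.eye(3) - A, d)
print(np.round(x_needed, 1))
```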

 

From Policy to Practice

To intervene meaningfully, we must go beyond just modeling inputs and outputs. We must also consider time and rates of change in order to model what really happens in the economy, with complete consistency between stocks (wealth, debt, and social and financial capital) and flows (income, consumption, and investment) over time. Dynamic models that enforce this consistency are called stock-flow consistent (SFC) models, pioneered by Wynne Godley and developed further by post-Keynesian economists.

SFC models are accounting-consistent macroeconomic models that track how financial stocks evolve through time due to economic flows. These models enforce budget constraints on all sectors (households, firms, government, foreign); treat money, debt, and financial and social assets as core, not auxiliary; and reject the static, equilibrium-oriented view of neoclassical economics by simulating dynamic, time-evolving systems, much like real economies. By integrating SFC models with TAI, we can move from current economic theory to rigorous, real-time practice that clarifies realities on the ground and treats GDP growth as a goal complemented by equally rigorous metrics of social well-being and stability, such as those proposed in the UN Sustainable Development Goals.
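
To show what stock-flow consistency means in the simplest possible code, the sketch below loosely follows the most basic textbook SFC model in the Godley tradition (often called “SIM”): government spending injects money, households pay taxes and consume out of income and wealth, and the household money stock (the stock) changes exactly by the gap between income and spending (the flows). Parameter values are illustrative only; scaling this idea up to many sectors, assets, and behavioral rules, with TAI estimating the parameters from real data, is what the text envisions.

```python
# Minimal stock-flow consistent sketch, loosely following the simplest
# textbook model in the Godley tradition ("SIM"): government spending G is
# financed by money issue, households pay tax at rate theta and consume out
# of disposable income and accumulated wealth. Values are illustrative only.
G, theta = 20.0, 0.2         # government spending and tax rate
alpha1, alpha2 = 0.6, 0.4    # propensities to consume out of income / wealth

H = 0.0                      # household money stock: the "stock"
for t in range(100):
    # Solve the within-period flows with a small fixed-point iteration.
    C = 0.0
    for _ in range(200):
        Y = G + C                      # output = government spending + consumption
        YD = Y - theta * Y             # disposable income
        C = alpha1 * YD + alpha2 * H   # consumption function
    # Stock-flow consistency: household saving equals the government deficit,
    # and both equal the change in the money stock.
    H += YD - C

print(round(Y, 1), round(H, 1))  # converges to Y = G/theta = 100 and steady wealth
```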

Already, a new financial system is emerging based on the potential to combine TAI, digital infrastructure, and SFC models. Examples include stablecoins, Central Bank Digital Currencies, and payment systems like SWIFT, all of which are experimenting with the use of AI and distributed, encrypted digital infrastructure to make traditional SFC-based systems more efficient and safer.19 TAI would be well-suited to model economies as complex adaptive systems, allowing us to represent heterogeneous agents (firms, households, banks) with unique goals and constraints; calibrate nonlinear SFC models using real-world data; and simulate policy outcomes under uncertainty, changing tax regimes, and the like. These innovations can potentially allow central banks and regulators to monitor sectoral financial positions fully transparently, thus enabling more grounded monetary, social, and fiscal interventions.

To be practical, we need specific technological innovations that complement TAI and widespread digital platforms, allowing the enhancement of SFC modeling in several ways. Three important advances would be:

1. Causal discovery tools that move beyond current frameworks to suggest new structural relationships and discover unexpected feedback loops, such as credit constraints rippling through sectors; analyze historical data to challenge assumptions in the model architecture; and build theory-informed, data-tested models rather than pure a priori structures.

2. Fine-grained agent-based simulations of households, firms, and banks interacting under SFC constraints. TAI integration can help generate thousands of plausible futures based on alternative shocks, policies, or structural changes. In turn, TAI can potentially help policymakers explore complex, nonlinear stress scenarios, such as debt-deflation spirals, fiscal constraints, tariffs, and taxes.

3. Data architectures that enable real-time data ingestion from financial markets, consumer behavior, and public spending by continuously updating model states and boundary conditions. This data could make “living” models that inform policy in dynamic, feedback-sensitive ways a reality, as discussed later in this paper.

TAI combined with more advanced digital infrastructure should greatly strengthen SFC models by ensuring their empirical grounding, real-time adaptability, and computational tractability. This pairing is particularly potent given ever-growing financial complexity, making post-Keynesian, institutionalist, and system-dynamic insights more policy-relevant and operational.

Advanced TAI-SFC systems could allow the government to move from reactive to algorithmic fiscal policy by simulating fiscal multipliers and dynamically adjusting spending and taxation policies based on real-time economic conditions. Targeting social transfers, such as job guarantees and subsidies, can also become more precise using AI analytics on household-level data.

However, if we are to move from using TAI just for modeling and start using TAI for regulation of the financial system, we need to be cognizant of several potential dangers: (1) TAI-enhanced productivity is unlikely to translate into broad-based prosperity without intervention, such as job guarantees and reskilling. (2) Big Tech’s intermediation of financial flows may erode national control over economic levers. (3) The commodification of behavioral and financial data shifts wealth toward those who own the algorithms. (4) Non-auditable TAI models may make tracing the root causes of financial crises or inequality harder.

There may be concern that the use of AI advancements in the real world will give some market participants a decisive advantage over others. However, this assumption is naïve. Instead, we can expect an “AI arms race,” which will eventually lead to a new market equilibrium. Situations like this happen time and again. For instance, in the initial stages of WWI, the famous French 75mm rapid-fire field gun gave France a considerable advantage, which was quickly countered by the Germans’ use of machine guns. As a result, the short war of movement envisioned by the war planners deteriorated into positional trench warfare.

 

Limits to TAI for Developing Policy

TAI can potentially transform integrated SFC and IO models from accounting tools into adaptive, data-driven simulators of complex economic systems. Such advanced models are compelling for, e.g., supply chain regulation that incorporates the effects of tariffs and pandemics, war-time economic modeling, and just-in-time industrial policy design. Yet even with the possibilities it provides, there are limits to how TAI can impact economic policy.

Policy-setting based on the IO model never became popular in the West, partly because it was a critical element of central planning, practiced with gusto but without much success in communist countries. However, TAI advancements could make central planning more feasible, at least technologically. But whether it is desirable, stable, or superior to market mechanisms is a deeper economic, political, and ethical question.

Historically, central planning failed primarily because of information overload, which prevented planners from gathering or processing data on millions of prices, goods, and consumer preferences. Limited resources did not allow the dynamic matching of supply and demand at scale, thus resulting in coordination failure. Slow, bureaucratic responses to economic changes resulted in a lack of real-time feedback. However, TAI changes the situation dramatically, since it can ingest and analyze massive amounts of data, including transaction records, satellite data, and mobile usage patterns. Current AI models can predict consumption, production bottlenecks, logistics flows, and market dynamics. Moving forward, reinforcement learning (where models learn through trial and error to maximize rewards) could simulate allocation strategies across entire economies. Digital twins (described later in this paper) of economies or supply chains can be stress-tested and optimized continuously. Such advances might mean that TAI could make markets obsolete by replacing price signals and competition with algorithmic regulation.
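
The following toy sketch shows what “learning an allocation strategy by trial and error” means at the smallest scale: an epsilon-greedy learner repeatedly splits a fixed budget between two sectors whose noisy returns it can only observe after acting, and gradually settles on the best split. The sectors, return curves, and learning rule are invented illustrations, orders of magnitude simpler than planning a real economy.

```python
import random

# Environment unknown to the learner: noisy, diminishing returns per sector.
def reward(share_to_A: float) -> float:
    a, b = share_to_A, 1.0 - share_to_A
    return a ** 0.5 + 1.5 * b ** 0.5 + random.gauss(0, 0.05)

actions = [i / 10 for i in range(11)]   # candidate budget shares for sector A
value = {a: 0.0 for a in actions}       # estimated reward for each action
count = {a: 0 for a in actions}

for step in range(5000):
    # Epsilon-greedy: usually exploit the best-looking split, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    r = reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]   # incremental mean update

best = max(actions, key=lambda x: value[x])
print(f"learned allocation to sector A: {best:.1f}")  # the optimum is near 0.3
```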

Several current examples of AI-enabled partial central planning are worth mentioning. For instance, “soft central planning” is practiced in China, which uses existing AI methods to execute large-scale industrial policies, prioritize local manufacturing, allocate resources in smart cities, and exercise industrial targeting via AI-assisted bureaucracies. Big Tech platforms like Amazon and Alibaba operate internally planned economies with AI-driven logistics, warehousing, pricing, and labor optimization. They do it better than any government ever has. War economies and pandemic responses showed how AI and centralized resource allocation can be effective in vaccine distribution and similar activities.

However, even with transformative AI, the challenges and limits of what central planning can do remain. Complexity still grows nonlinearly, human preferences shift faster than statistical tools (models) can reliably estimate, and incentive problems remain since, without price signals and private ownership, it is challenging to ensure innovation, efficiency, and accountability.20 TAI-powered planning could also enable digital authoritarianism, especially when combined with biased data and models and with surveillance and control of consumption behavior. Central plans would be shaped by those who control the algorithms, which might have unintended consequences. Given the human preference for freedom of action, perhaps the most plausible future is not full central planning but TAI-augmented market economies. Governments would use TAI along with secret internal data for macro-level coordination of strategic sectors, such as military, energy, transport, and housing. However, markets would still allocate consumer goods, with TAIs limited to providing algorithmic governance to better coordinate sectors.

Despite TAI’s capabilities, the core arguments on knowledge, incentives, and decentralization remain highly relevant due to the bounded rationality of economic agents.21 The term “bounded rationality,” coined by Herbert Simon, challenges the idea of fully rational economic agents.22 Key obstacles include cognitive limitations: Human brains cannot process all available information; consequently, decisions are made with simplified models, heuristics, or “rules of thumb.” Agents, whether human or TAI, must make choices under incomplete and asymmetrical information. What’s “optimal” is unknowable in real time, because the economic world is too complex and insufficiently observable for complete optimization. Rationality is “bounded” by the environment’s ever-changing structure, private and unobserved data, and limited previous experience. Even if all information were available, estimates of the computational complexity involved suggest that calculating the optimal solution exceeds any realistically envisioned computing power, a constraint that TAI, with its increasingly large models, can push against but not eliminate.

TAI also lacks access to subjective preferences that evolve in the act of choosing. According to Friedrich Hayek, society needs “to secure the best use of resources known to any of the members, for ends whose relative importance only these individuals know.”23 AI can process massive amounts of data, but not preferences that have not yet been formed. Human wants are not fixed but evolve based on context, trends, feedback loops, and peer behavior. Innovation emerges from unexpected combinations and incentives, not just optimization of previously significant variables.

Another limitation is that TAI lacks skin in the game, so that planners using TAI must still align incentives, or else misallocation persists. Real-world systems work not just because of information but because people bear the consequences of their choices, providing feedback that continually drives the system’s evolution. The combinatorial explosion of possibilities results in computationally intractable models in complex, multiagent, real-world systems. Optimization is always approximate and better suited to narrow domains, such as Amazon warehouse routing, than whole economies.

These considerations suggest that TAI can assist but not replace markets, because it cannot replicate the subjectivity of value, innovation dynamism, knowledge locality, and the incentive structure of free exchange. Hayek and political economist Ludwig von Mises would likely argue that TAI will strengthen the case for decentralization, not centralization, by empowering individuals and firms with better tools but not removing the need for markets.

 

TAI Digital Twins for Developing Economic Policy

How can we transcend the limitations of AI to understand the ever-changing, specific preferences and utilities of an entire society of socioeconomic agents? One promising avenue for developing economic policy is to build AI and TAI agents that represent individual economic participants, namely, households, firms, banks, central banks, and the government. This is a foundational step in building “digital twins”: economic models in which each entity in the system is modeled by a TAI agent constrained by the same limitations and capabilities as the real-world entities. These digital twin systems simulate decentralized decision-making by agents that interact under complex, evolving conditions.

Such TAI economic agents are software entities that represent decision-making units (households, firms, banks, etc.). These agents have goals, constraints, memory, and learning abilities. They interact in simulated markets or environments and adapt behaviors based on data, policy, or peer influence. This approach is the computational instantiation of Hayek’s and Simon’s visions, representing boundedly rational agents operating in dynamic systems.
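
A minimal sketch of what one such agent might look like in code, under an entirely invented interface: the agent carries its own state (income, wealth, a remembered price expectation), follows a rule of thumb rather than full optimization, respects a budget constraint, and updates its expectations from experience; a toy market then mediates the interactions of many such agents.

```python
import random

class HouseholdAgent:
    """Boundedly rational 'digital twin' of a household (illustrative only)."""
    def __init__(self, income: float):
        self.income = income
        self.wealth = 0.0
        self.expected_price = 1.0   # memory: an adaptive price expectation

    def decide_demand(self, price: float) -> float:
        # Rule of thumb rather than optimization: spend a larger share of
        # income when goods look cheap relative to the remembered price.
        budget_share = 0.8 if price <= self.expected_price else 0.6
        spend = min(budget_share * self.income, self.income + self.wealth)
        self.wealth += self.income - spend                          # budget constraint
        self.expected_price += 0.2 * (price - self.expected_price)  # learning
        return spend / price

# A toy market mediates the agents' interactions: the price adjusts to
# excess demand over a fixed supply.
agents = [HouseholdAgent(income=random.uniform(0.5, 1.5)) for _ in range(100)]
price, supply = 1.0, 80.0
for period in range(50):
    demand = sum(agent.decide_demand(price) for agent in agents)
    price *= 1 + 0.1 * (demand - supply) / supply   # tatonnement-style update
print(round(price, 2))
```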

TAI digital twins simulate real-world decision-making with adaptive learning, bounded rationality, and decentralized interactions. These economic agents allow economists, planners, and technologists to model complex, dynamic systems bottom-up, explore policy in silico before real-world trials, and build more inclusive, responsive, and robust economic simulations.

One of the significant dangers for our economic system stems from the fact that economic agents can form secret coalitions, potentially encouraged by human competition. Thankfully, the technology to detect and limit these overly powerful coalitions is now being developed.24 The general idea is to use TAI tools first to detect unusual correlations or outsized variance and then to dampen the network dynamics by adding noise, randomly reordering events, or introducing delays. Such interventions break up coordination, dampen feedback loops, and spread simultaneous actions over time.25
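
A highly simplified sketch of the detect-then-dampen idea: compute pairwise correlations between traders’ order flows, flag groups whose correlation is suspiciously high, and then randomly delay (reorder) their subsequent orders to break up the synchrony. The simulated data, the correlation threshold, and the delay intervention are invented for illustration; the works cited in notes 24 and 25 describe far more sophisticated approaches.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

# Simulated order flows: traders 0-2 secretly coordinate on a shared signal,
# traders 3-9 act independently.
T, n = 500, 10
shared = rng.normal(size=T)
flows = rng.normal(size=(n, T))
flows[:3] += 3.0 * shared   # the hidden coalition

# Step 1: detect unusually correlated pairs of traders.
corr = np.corrcoef(flows)
suspicious = {(i, j) for i in range(n) for j in range(i + 1, n) if corr[i, j] > 0.8}
flagged = sorted({k for pair in suspicious for k in pair})
print("flagged traders:", flagged)   # expected: [0, 1, 2]

# Step 2: dampen coordination by randomly delaying the flagged traders' orders.
def with_random_delay(order_stream, max_delay=5):
    return [(t + random.randint(0, max_delay), order) for t, order in order_stream]

orders = [(t, f"trader0-order-{t}") for t in range(3)]
print(sorted(with_random_delay(orders)))   # the same orders, with jittered timing
```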

One critical policy intervention is to monitor the concentration of coalitions. Networks handling transactions for necessities or small everyday commerce should be exceptionally stable (e.g., narrow banks that do not lend money) and heavily controlled, while networks meant for speculation and innovation might warrant much less oversight. A second policy intervention is to monitor and limit concentrations of user data.26 A meaningful way to accomplish this goal is through coalitions of humans, including data co-ops and banks, with TAIs serving individuals rather than companies so they can defend users from manipulation by powerful corporate AIs.27 The use of realistic digital twins to better understand and address these problems is an important future research area.

 

Living Labs for Developing Economic Policy

A second idea for transcending the limitations of TAI for setting socioeconomic policy is to use TAI to study cryptocurrency trading as a laboratory for the real economy.28 This powerful idea leverages crypto markets’ transparent, high-frequency, decentralized nature to simulate, analyze, and predict real-world economic behaviors in ways that would be unethical or illegal with traditional financial systems.

Platforms such as Ethereum (the first cryptocurrency platform to support smart contracts) offer experimental conditions that are rare in the real economy. Ethereum, for instance, has millions of economic participants, thousands of businesses, and both naïve and AI-enabled traders. Distributed ledgers provide a complete record of every transaction, thus allowing agent-level behavioral analysis. Relatively high-frequency trading opens a window into rapid decision-making under uncertainty. Since crypto markets operate 24/7/365, they enable the study of continuous real-time adaptation.

Due to global access to these sorts of platforms, participants are very diverse, coming from different countries, social classes, and ideologies. The unregulated introduction of thousands of new businesses and new governance structures allows evaluation of many types of innovation. Finally, relatively light regulation allows more experimental market designs. Thus, today’s crypto platforms can serve as petri dishes for advanced digital platforms with state-of-the-art AI.

Real-world experience with crypto platforms shows that even today’s AI can be used for behavioral pattern detection by using clustering, anomaly detection, and graph neural networks. These state-of-the-art AI methods allow mapping herding behavior during crashes, pump-and-dump cycles, and risk-on/risk-off flows between various digital coins (e.g., BTC, ETH, stablecoins, and altcoins). Such AI techniques can also be used to model microstructure dynamics (order flow, slippage, liquidity mining behavior) and to identify real-time sentiment shifts and boundedly rational behavior under extreme volatility.29
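
As a small illustration of the simplest of these techniques, the sketch below applies off-the-shelf clustering and anomaly detection (scikit-learn’s KMeans and IsolationForest) to invented per-wallet behavioral features. Real analyses would derive far richer features directly from on-chain transaction histories and combine them with graph-based methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Invented per-wallet features: [trades per day, average trade size,
# fraction of trades placed within one minute of a large price move].
retail = rng.normal([3, 0.2, 0.05], [1, 0.05, 0.02], size=(200, 3))
bots = rng.normal([400, 0.05, 0.60], [50, 0.01, 0.05], size=(20, 3))
wallets = np.vstack([retail, bots])

# Clustering separates broad behavioral types (e.g., retail vs. algorithmic).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(wallets)

# Anomaly detection flags wallets whose behavior fits neither pattern well,
# a first-pass screen for manipulation such as pump-and-dump participants.
outliers = IsolationForest(contamination=0.02, random_state=0).fit_predict(wallets)

print("cluster sizes:", np.bincount(labels))
print("wallets flagged as anomalous:", int((outliers == -1).sum()))
```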

Experiments on platforms such as Ethereum can help us better understand the consequences of market structure changes, policy interventions, and crisis contagion without risking damage to the broader economy. These insights apply to political economy, collective action, and institutional design. In summary, cryptocurrency platforms are not just speculative playgrounds. They are a live experimental testbed for economics. TAI gives us the tools to model adaptive agents and systemic interactions; test alternative institutions and policies in silico; understand panic, bubbles, and irrational exuberance; reimagine programmable monetary policy; study emergent institutional behaviors; improve financial stability frameworks; model labor markets, pricing behavior, and policy responses; and design more resilient, efficient, responsive, and inclusive economic systems.

 

Conclusions

AI technology is becoming extremely powerful, transforming not only production but also trade and investment. Especially impactful are the emergence of relatively autonomous AI agents for commerce and investment, and the widespread digitization of all sorts of real-world assets. These tools can enable far more efficient systems and positive social outcomes, including new distributed methods of capital accumulation that benefit much larger segments of society and ways to compensate people for currently unpaid prosocial or caregiving work. TAI is also likely to be a valuable tool for understanding and managing our economy. But today’s AI has already shown the potential for harm, and TAI could amplify it, potentially resulting in a world dominated by unstable markets, social distortions, and cyberwar.

We need to recognize that TAI combined with asset digitization could lead to a more efficient, fair, and prosperous world, but that it may not do so without corresponding improvements in auditing and regulation. The central problem is that TAI enables near-instant formation of market-dominating coalitions of agents that can cause financial crashes, “corner” markets, and stealthily concentrate wealth. The core solution is ensuring that we develop tools that can continuously audit the economy and slow or limit emergent market problems, so that they can be vetted before they do significant damage.

1. Swift, “Swift to Add Blockchain-Based Ledger,” September 29, 2025, https://www.swift.com/news-events/news/swift-add-blockchain-based-ledger.

2. The “token” itself is just a small digital message that uniquely proves ownership of a share in this bundle of assets. Tokens can be backed up by a signed digital contract in a database, but more and more frequently systems accept cryptographic proof that you have the token, then just assume you own the asset. The lack of provenance for tokens opens up major opportunities for fraud. Distributed ledgers help address this problem by maintaining digital copies of all trades in multiple places, and making these copies cryptographically secure.

3. Alexander Pentland, Alexander Lipton, and Thomas Hardjono, Building the New Economy: Data as Capital (MIT Press, 2021), Chapter 1.

4. Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital, Chapter 14.

5. Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital, Chapter 4.

6. Jiageng Liu, Igor Makarov, and Antoinette Schoar, “Anatomy of a Run: The Terra Luna Crash,” Working Paper 6847-23 (MIT Sloan School of Management, 2023), http://dx.doi.org/10.2139/ssrn.4416677.

7. Alexander Lipton and Alexander Pentland, “Breaking the Bank,” Scientific American 318, no. 1 (December 2017): 26–31, https://doi.org/10.1038/scientificamerican0118-26.

8. Shahar Somin, Tom Cohen, Jeremy Kepner, and Alexander Pentland, “Echoes of the Hidden: Uncovering Coordination Beyond Network Structure,” preprint, arXiv, April 2025, https://arxiv.org/abs/2504.02757.

9. See Friedrich Engels, Ludwig Feuerbach und der Ausgang der Klassischen Deutschen Philosophie...Mit Anhang Karl Marx über Feuerbach von Jahre 1845 [Ludwig Feuerbach and the End of Classical German Philosophy ... With Notes on Feuerbach by Karl Marx 1845], (Verlag von J. H. W. Dietz, 1886), 69–72.

10. Friedrich Hayek, “The Use of Knowledge in Society,” American Economic Review 35, no. 4 (September 1945): 519–530, https://www.jstor.org/stable/1809376; Wassily Leontief, ed., Input-Output Economics (Oxford University Press, 1986); Alexander Lipton, “Blockchains and Distributed Ledgers in Retrospective and Perspective,” Journal of Risk Finance 19, no. 1 (2018): 4–5, https://doi.org/10.1108/JRF-02-2017-0035; Alexander Lipton and Adrien Treccani, Blockchain and Distributed Ledgers: Mathematics, Technology, and Economics (World Scientific, 2021).

11. Karl Marx, Das Kapital. Kritik der Politischen Ökonomie [Capital: A Critique of Political Economy] (Verlag von Otto Meissner, 1867–1894).

12. Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital, Chapter 2.

13. Michael Max Bühler, Igor Calzada, Isabel Cane, et al., “Unlocking the Power of Digital Commons: Data Cooperatives as a Pathway for Data Sovereign, Innovative and Equitable Digital Communities,” Digital 3, no. 3 (2023): 146–171, https://doi.org/10.3390/digital3030011. (Invited for presentation at the 2023 G20 Summit, New Delhi, India.)

14. Stanford Digital Economy Lab, “GDP-B: A New Way to Measure Growth and Well-Being in the Economy,” accessed November 10, 2025, https://digitaleconomy.stanford.edu/gdp-b/.

15. United Nations Department of Economic and Social Affairs, “The 17 Goals,” accessed October 28, 2020, https://sdgs.un.org/goals.

16. Benedetto Cotrugli, Della Mercatura e Del Mercante Perfetto [Of Commerce and the Perfect Merchant], 1458; Luca Pacioli, Summa de Arithmetica, Geometria, Proportioni et Proportionalita [Summary of Arithmetic, Geometry, Proportions and Proportionality] (Paganini, 1494).

17. François Quesnay, Tableau Économique [Economic Table] (Imprimerie Royale, 1758).

18. Leontief, ed., Input-Output Economics.

19. Leontief, ed., Input-Output Economics; Alexander Lipton, “Modern Monetary Circuit Theory, Stability of Interconnected Banking Network, and Balance Sheet Optimization for Individual Banks,” International Journal of Theoretical and Applied Finance 19, no. 6 (2016): 1650034, https://doi.org/10.1142/S0219024916500345; Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital.

20. See Hayek, “The Use of Knowledge in Society.”

21. See Sinclair Davidson, “The Economic Institutions of Artificial Intelligence,” Journal of Institutional Economics 20, e20 (March 2024): 1–16, https://doi.org/10.1017/S1744137423000395.

22. See Herbert Simon, “A Behavioral Model of Rational Choice,” Quarterly Journal of Economics 69, no. 1 (1955): 99–118, https://doi.org/10.2307/1884852.

23. Hayek, “The Use of Knowledge in Society.”

24. Somin, Cohen, Kepner, and Pentland, “Echoes of the Hidden: Uncovering Coordination Beyond Network Structure.”

25. Victor Erofeeva, Oleg Granichin, Renata Avros, and Zeev Volkovich, “Multilevel Modeling and Control of Dynamic Systems,” Scientific Reports 14, no. 27903 (2024): https://doi.org/10.1038/s41598-024-79279-1.

26. Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital, Chapter 14.

27. Digital Economy Lab and Consumer Reports, “Loyal Agents: A Stanford + Consumer Reports Initiative for Creating a Secure, Consumer-Centric, Trusted Marketplace of AI Agents,” https://loyalagents.org.

28. Pentland, Lipton, and Hardjono, Building the New Economy: Data as Capital, Chapter 9.

29. Somin, Cohen, Kepner, and Pentland, “Echoes of the Hidden: Uncovering Coordination Beyond Network Structure.”
