The Democratization of Intelligence

 

The Pattern of Progress

New technologies don’t emerge in a vacuum. In the beginning, they often reflect and amplify the inequalities of their time—access is shaped by where you live, what you earn, and the systems around you. Then, broader diffusion takes hold. That is what we have seen with previous waves of technological progress, and it is what we are seeing with AI today.

A recent study explored these dynamics.1 When ChatGPT launched in November 2022, adoption followed a familiar pattern: Uptake was concentrated among young, highly educated users with typically masculine names in high-income countries, likely driven by a heavy focus on coding as a use case. In the first months after launch, nearly 80% of active users had masculine names, and the highest adoption rates came from the richest decile of countries.

By mid-2025, though, that story had started to change. ChatGPT had reached mass adoption, with 800 million weekly active users, roughly 15% of the world’s adult population, sending more than 2.5 billion messages per day. And the profile of those users? It shifted fast—and continues to do so. The data suggest that AI is being democratized faster than previous technologies, with early disparities in geography and gender closing at a remarkable pace.

Growth in low- and middle-income countries outpaced growth in rich countries by more than four to one over the past year. As of May 2025, adoption relative to the internet-enabled population in countries at the 25th percentile of GDP per capita matched that in high-income economies. And the gender gap in usage closed: By June 2025, users with typically feminine first names slightly outnumbered those with typically masculine names.

Interestingly, usage mix changed as well. While all activity grew, non-work messages grew faster than work-related ones, climbing from 45% of traffic in July 2024 to more than 60% by mid-2025. What exactly are people doing with it? Mostly, they’re trying to get something done. Nearly 80% of conversations involve practical guidance, information seeking, or writing.

Education stands out: 10% of all messages, and nearly 40% of practical-guidance queries, are tied to tutoring or teaching. And among work tasks, writing leads the way, with editing and improving existing text outpacing new content creation by two to one, a sign that people are using ChatGPT to augment their work, not do it for them.

 

Access Everywhere

But the impact of AI isn’t captured by user counts alone—it’s also in where and how the technology is showing up. Because of striking growth in usage among low- and middle-income countries, tools that once felt exclusive to tech-savvy, high-income settings are now embedded in communities around the world.

In Kenya, for instance, Penda Health conducted a study of its LLM-powered tool, AI Consult, covering 39,849 visits across 15 clinics.2 The study found that clinicians using AI Consult saw a 16% relative reduction in diagnostic errors and a 13% reduction in treatment errors compared to those without. AI Consult acted as a real-time safety net, alerting clinicians only when potential mistakes were detected while leaving them fully in control.

UNICEF conducted a two-year pilot in Uruguay using multimodal ChatGPT to help create Accessible Digital Textbooks (ADTs) for children with disabilities.3 By automating parts of the production process, like generating captions, image descriptions, and simplified text, the AI tool cut development time from months to days and reduced costs.4 The goal is to expand access for millions of learners who are often the most excluded.

And in Nigeria, the ADVISER system uses AI to optimize the allocation of vaccination interventions, increasing child-vaccination uptake from 43.6% to 73.9% across more than 13,000 families in Oyo State.5

These aren’t fringe experiments—they’re proof points. In each case, AI is meeting people where they are and solving real-world problems. Yes, limitations remain. Data is messy, and self-reporting brings bias. But the direction is clear: The use of AI is broadening, not narrowing. And in that broadening lies its greatest potential—but also the greatest risk.

 

The Risk of an Intelligence Divide

We are in the early phase of the AI rollout. Early trends are promising. But the risk that AI could revert to being the domain of the wealthy remains. The infrastructure required to train and deploy advanced AI is capital-intensive. Frontier models require clusters of thousands of GPUs, with total power draws measured in megawatts. Data centers are constrained by cooling, grid capacity, land cost, and permitting.

These physical constraints privilege incumbent firms in regions with abundant energy and an enabling regulatory framework. AI research talent is also concentrated. The result? A growing risk of an “intelligence divide”: a world where only some countries, companies, or communities get to fully benefit from AI, while others are left behind.

This risk isn’t simply economic, either—it’s about whose values shape the future. Political and social frameworks will decide whether AI widens opportunity or entrenches control. And as AI becomes embedded in how we learn, work, govern, and communicate, the values built into these systems will define their impact.

Authoritarian regimes, for instance, can wield AI to tighten surveillance and suppress choice. But open, value-aligned societies can use it to expand agency and inclusion, though only with intentional leadership. This is not just a race for technical dominance. It is a contest of values, and the future must favor coalitions of free societies that prove democratic principles can guide the most powerful technology on earth.

To win, democracies must invest in shared infrastructure, talent pipelines, and open standards that embed fairness and transparency from the start. Public-private partnerships can scale compute and connectivity while keeping guardrails strong. And citizens must see tangible benefits, like better education, health, and opportunity, so that democratic governance earns the trust needed to shape AI’s next frontier.

 

What It Takes to Democratize Intelligence

If we want AI to serve the many, not just the few, and if we want to build on early glimpses of how this might happen, then we must treat it like the critical infrastructure it is. AI isn’t just code; it is a shared system of compute, energy, data, and talent that requires collective investment and stewardship.

Just as the industrial era depended on railways and roads, the AI era depends on power, connectivity, accessible tools, and skills. Governments and companies alike need to show up differently. It starts with energy and bandwidth. AI requires enormous compute power, and that means abundant electricity and fast, reliable networks.

Right now, growth is wildly uneven. For example, China is moving aggressively to bolster its electricity generation, which is already more than double that of the United States.6 To close this gap, democracies must modernize permitting, expand clean-energy zones, and treat broadband and grid upgrades as national priorities. Otherwise, the intelligence divide will map directly onto energy divides.

Permitting reform is central: Data centers cannot be left waiting in years of red tape. Aligning local and national rules, targeting clean-energy corridors, and using AI itself to accelerate reviews would speed approvals without sacrificing transparency. Done well, these reforms create jobs, reduce costs, and help stabilize the grid.

Access must be affordable, especially for small businesses and nonprofits. Governments can offer compute credits, fund shared infrastructure, or act as anchor tenants in public-private partnerships. The goal is simple: make experimentation possible for those outside the usual power centers, including startups, schools, small businesses, and nonprofits. These steps matter because the progress in expanding access, visible in the closing demographic gaps in adoption, will stall unless infrastructure and affordability keep pace.

And finally, we can’t forget people. Tools are only as inclusive as the skills to use them. We need massive investment in training, not just for developers and technical talent, but for workers across every field, including educators, small-business owners, and civil servants. Programs like OpenAI Certified, a new initiative to give more people the skills to use AI effectively, are one way to ensure that opportunity spreads as fast as adoption.7

But reskilling alone isn’t enough. We also need to bring people along in a way that sparks imagination, not fear, spotlighting the opportunities AI can unlock in classrooms, clinics, small shops, and community halls. People are far more likely to embrace change when they can see and feel its upside. That means governments, businesses, and civil society leaders must become storytellers of possibility, not just managers of disruption. Framing matters: Today’s leaders must emphasize that AI is a tool for shared progress, not a force to fear. That’s how we build ecosystems that deliver real benefits to real communities.

Whatever solutions we choose, they need teeth. Every recommendation must come with clear lines of accountability—named actors, target dates, and public reporting—to avoid the pattern of lofty pledges without follow-through.

 

If You Can’t Measure It, You Won’t Fix It

Infrastructure spending and headline user counts tell only part of the story. The real test is whether AI is delivering meaningful benefits across the classrooms, clinics, farms, and small businesses where it has begun to take root. Governments and companies should align on a set of inclusion metrics that reflect both reach and impact, tracking not just how widely AI spreads, but whether it improves people’s lives.

Geographic reach is key. Adoption should spread beyond traditional power centers. Tracking usage in low-access communities relative to national averages would show whether the pattern seen in Kenya’s clinics or Nigeria’s vaccination drives is replicating elsewhere, offering evidence that AI is meeting people where they are.

Affordability is also crucial. The cost of essential AI services should fall over time. A simple ratio of prices faced by small organizations or individual users versus large firms would reveal whether a farmer in the Majority World can access agricultural models as readily as a multinational enterprise uses translation tools.

Then there is the economic impact. Small businesses should see tangible results fast. Measuring the time from first use to productivity gains or revenue growth can show whether a bakery in São Paulo or a tutoring platform in Nairobi—both cities with vibrant tech adoption and entrepreneurial energy—moves from experimentation to payoff in months, not years.

Understanding the distribution of talent is important. The AI workforce should not cluster exclusively in a few tech hubs. Tracking where skilled engineers, educators, and entrepreneurs are located can indicate whether opportunity is spreading globally, mirroring how tools like multimodal ChatGPT in Uruguay expanded access for underserved students.

Finally, public service delivery is critical. Governments should use AI to scale essential services. Monitoring how many education, health, and administrative programs are enhanced by AI per 100,000 citizens would provide a clear signal of whether countries are moving from pilots to broad implementation.

These kinds of measures, tracked by region and income decile, would reveal whether the technology is narrowing or widening the intelligence divide. And the best part is that AI itself can help collect and analyze the data. If we’re serious about democratization, we need to prove it—in the numbers, and in the places that matter most.
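None of these metrics exists as a standard yet, but each reduces to arithmetic that is easy to prototype. The sketch below, in Python, shows one way the geographic-reach and affordability measures described above might be computed; the regions, prices, and user counts are hypothetical placeholders, not real data.

```python
# Illustrative sketch of the inclusion metrics proposed above.
# All regions, prices, and user counts are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RegionStats:
    name: str
    income_decile: int      # 1 = poorest tenth of countries, 10 = richest
    weekly_users: int       # weekly active AI users in the region
    adult_population: int   # adult population of the region
    small_org_price: float  # effective monthly price paid by small organizations (USD)
    large_firm_price: float # effective monthly price paid by large firms (USD)

def adoption_rate(r: RegionStats) -> float:
    """Geographic reach: share of adults using AI weekly."""
    return r.weekly_users / r.adult_population

def affordability_ratio(r: RegionStats) -> float:
    """Affordability: price faced by small organizations relative to large firms.
    Values near 1.0 suggest comparable access; higher values signal a cost gap."""
    return r.small_org_price / r.large_firm_price

def reach_gap(regions: list[RegionStats]) -> float:
    """Average adoption in the poorer half of income deciles divided by average
    adoption in the richer half. Values near 1.0 mean the divide is closing."""
    low = [adoption_rate(r) for r in regions if r.income_decile <= 5]
    high = [adoption_rate(r) for r in regions if r.income_decile > 5]
    return (sum(low) / len(low)) / (sum(high) / len(high))

regions = [
    RegionStats("Region A", 2, 120_000, 1_000_000, 18.0, 12.0),
    RegionStats("Region B", 9, 450_000, 1_500_000, 14.0, 12.0),
]
for r in regions:
    print(f"{r.name}: adoption {adoption_rate(r):.1%}, "
          f"affordability ratio {affordability_ratio(r):.2f}")
print(f"Reach gap (poorer vs. richer deciles): {reach_gap(regions):.2f}")
```

The specifics would of course require agreed definitions and audited inputs; the point is that each proposed measure can be published as a simple, comparable ratio.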

 

Conclusion

AI’s future isn’t fixed. It can concentrate power or expand opportunity, deepen inequality or unlock potential. Its trajectory won’t be defined by capability alone, but by the choices we make about where it goes, who it serves, and how it’s used. We have early evidence that AI can scale across genders, regions, and ages with speed never seen before. The chance before us to democratize intelligence is real, but we must act.

Governments must invest in the foundational infrastructure required for broad and equitable AI access, including modernizing the grid, expanding broadband, streamlining siting approvals, and providing targeted compute subsidies. Companies must commit to inclusive pricing models, forge partnerships with local communities and small businesses, and operate with greater transparency. Civil society must safeguard fairness, advocate for responsible standards, and ensure that a wide range of voices is represented. Multilateral institutions and development finance partners must deploy capital strategically to close infrastructure and access gaps, ensuring that no region or community is left behind.

But more than anything, we each need to remember: This isn’t a technology story. It’s a human one. The tools are powerful. What matters is what we choose to build with them.


1. Aaron Chatterji, Thomas Cunningham, David J. Deming, Zoë Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman, “How People Use ChatGPT,” Working Paper No. 34255 (National Bureau of Economic Research, 2025), https://doi.org/10.3386/w34255.

2. Robert Korom et al., “Pioneering an AI Clinical Copilot with Penda Health,” OpenAI, July 25, 2025, https://openai.com/index/ai-clinical-copilot-penda-health/.

3. UNICEF, “OpenAI and UNICEF Accelerate Digital Textbook Access,” accessed October 10, 2025, https://www.unicefusa.org/about-unicef-usa/partnerships/companies/openai.

4. Sophia Torres Cantella, Marta Carnelli, and Julie de Barbeyrac, “Can AI Help Bridge the Gap in Inclusive Education?,” UNICEF, accessed October 10, 2025, https://www.unicef.org/innocenti/can-ai-help-bridge-gap-inclusive-education.

5. Opadele Kehinde, Ruth Abdul, Bose Afolabi, Parminder Vir, Corinne Namblard, Ayan Mukhopadhyay, and Abiodun Adereni, “Deploying ADVISER: Impact and Lessons from Using Artificial Intelligence for Child Vaccination Uptake in Nigeria,” preprint, arXiv, December 30, 2023, https://doi.org/10.48550/arXiv.2402.00017.

6. “List of Countries by Electricity Production,” Wikipedia, last updated August 14, 2025, 10:14 (UTC), https://en.wikipedia.org/wiki/List_of_countries_by_electricity_production.

7. Fidji Simo, “Expanding Economic Opportunity with AI,” OpenAI, September 14, 2025, https://openai.com/index/expanding-economic-opportunity-with-ai/.
