“Career” Advice from the AI Frontier: Preparing Young People for Work in the Age of Transformative AI

How can we prepare the next generation for work in the age of transformative AI? In this biographical essay, Avital Balwit draws on her experience at a frontier AI company to provide practical career advice to those thinking about the future of work. Our traditional approach to reskilling will not be sufficient—and Balwit sets out what will be required in its place.

I. Introduction

We stand at the edge of a technological development that seems likely, should it arrive, to fundamentally transform employment as we know it.

I work at a frontier AI company. With every iteration of our model, I am confronted with something more capable and general than before. In 2019, GPT-2 could barely count to five or string together coherent sentences. By 2023, GPT-4 was outperforming 90% of human test takers on medical licensing exams1 and the bar exam.2 In 2025, models are refactoring code bases for seven hours3 and winning gold at the International Mathematical Olympiad.4 Models are both more agentic5 and capable of much longer-running tasks.6 Leading researchers at frontier AI companies increasingly believe we’ll achieve AI systems that can match or exceed human cognitive capabilities across virtually all domains before 2030. As someone who once prided myself on writing 2,000 words in an hour—a skill which, like cutting blocks of ice from a frozen pond, is arguably obsolete—I find these advances both exhilarating and unsettling.

Many knowledge workers grasp at the ever-diminishing number of places where such models still struggle, rather than noticing the ever-growing range of tasks where they have reached or passed human level. But the economically relevant comparison is not whether the AI is better than the best human—it’s whether it’s better than the human who would otherwise do that task.

The economically relevant comparison is not whether the AI is better than the best human—it’s whether it’s better than the human who would otherwise do that task.

Young people entering the workforce today need fundamentally different preparation for a world where AI surpasses human cognitive abilities. They don’t just need new skills; they need new mindsets about work, value, and purpose. This essay offers practical guidance for individuals navigating this transition and policy recommendations for the institutions meant to prepare them.

 

II. Understanding the Trajectory: Where AI is Going, Not Where It Is

To avoid surprise and wasted effort, it is critical to understand the pace and trajectory of progress rather than just the current level of capabilities. If you look at a snapshot, you might think you only need to become a senior-level software engineer to be safe from AI disruption, given that current AI systems are at a junior-developer level, or that you need to become a world-class writer as opposed to a producer of more generic marketing copy. It might be true that AI systems still sometimes stumble on particularly challenging coding problems, or that their writing style still has a subtle, generic “AI tinge,” but if you look back at what they were capable of a year ago, it will be clear that stagnation should not be the mainline prediction.

Dario Amodei, CEO of Anthropic, describes what’s coming as “a country of geniuses in a data center.”7 OpenAI has claimed that superintelligence8 will arrive this decade. Of course, some other researchers believe it will take longer to achieve this milestone, or that it may not be possible at all.9 This means that young people need to plan under some degree of uncertainty, but should at least take very seriously the possibility of transformative AI very soon.

Young people need to plan under some degree of uncertainty, but should at least take very seriously the possibility of transformative AI very soon.

Based on the timelines I view as most credible, transformative AI, as defined in this volume, seems likely to arrive in the next two to four years, assuming the current pace of progress continues. “Arrive” in my usage means “exists in the lab and works well enough that customers can use it”—it does not presuppose any particular degree of diffusion throughout the economy, and I have much more uncertainty about exactly how quickly AI will diffuse.

Given these timelines, the question isn’t whether you can stay ahead of AI—it’s how to thrive alongside systems that surpass you.

Why this time is different

For those encountering this topic for the first time, a natural question might be: Why expect such profound change now? After all, we’ve weathered significant technological upheavals before. The Industrial Revolution transformed employment and influenced education, but society adapted without radical restructuring. What makes this technological moment so different as to necessitate fundamental changes in how we educate and work?

The answer lies in the nature of this technological revolution itself. Unlike previous innovations that were highly specific—the calculator, the steam engine, the assembly line—artificial intelligence represents something unprecedented: a general-purpose technology that can potentially automate cognitive work across virtually every domain.

Unlike previous innovations that were highly specific—the calculator, the steam engine, the assembly line—artificial intelligence represents something unprecedented: a general-purpose technology that can potentially automate cognitive work across virtually every domain.

Historically, technological progress followed a predictable pattern: Machines displaced physical labor while creating new opportunities for cognitive work. Farmhands became factory workers; factory workers became office workers. This transition sustained employment by shifting human comparative advantage from physical to mental tasks. But AI directly targets cognitive work—the very domain where humans retreated as machines claimed physical labor. While I anticipate some short-term increase in demand for physical work, I expect robotics to follow software intelligence relatively quickly, meaning we cannot count on a simple reversal that replaces all automated cognitive work with an equivalent volume of physical jobs.

(A brief aside on an interesting transitional era: I expect there to be a time when robotics is lagging or capacity constrained when humans will wear headphones and potentially VR glasses such that the advanced AI systems can advise them on how to conduct certain work. Any worker who has access to this will essentially have the world’s best coach on some topic advising them how to do that activity. Of course, there is still a delta between coaching and having the top expert actually do the activity, but it will still significantly upskill workers.)

This is not to say AI will eliminate all employment. I do expect new categories of work to emerge, though I cannot predict their precise nature. Some occupations will remain relatively protected—ballet dancers, massage therapists, classical musicians, athletes—not because they are technically impossible to automate, but because humans doing them matters intrinsically to their value. We want humans doing certain tasks, regardless of whether machines could do them better.

When robotics does arrive at scale, it will likely follow economic logic rather than just technical feasibility. Your neighborhood electrician will probably outlast factory workers. Residential electrical work involves tremendous variability—almost every house presents unique challenges—and automating it creates less value than automating a complex, high-value manufacturing plant where a single facility produces billions in value annually. I expect robots to master such idiosyncratic residential tasks within my lifetime, but they will initially be deployed where they create the most value: operating fabs to produce more frontier AI chips, or working in robotics factories to build more robots. Highly specific physical work with significant variety will be automated later.

 

III. Core Strategies for the AI-Native Worker

The future is unevenly distributed, and it permeates slowly, even within developed, wealthy, internet-laced countries. I will not infrequently hear AI-interested folks from outside the San Francisco core tell me that they need to get a PhD in machine learning so they can prepare for the AI future. While that might have been a great idea in 2015, it is no longer necessary or even wise. You don’t need to understand how to build an engine to drive a car, particularly when this particular car will soon be capable of building its own engine to your natural language specifications. (Of course, some people are simply fascinated by engines, and then they should feel free to pursue that passion.)

You don’t need to understand how to build an engine to drive a car, particularly when this particular car will soon be capable of building its own engine to your natural language specifications.

So what should young people who want to prepare for the AI future do?

Become a manager, not an individual contributor

The nature of work is shifting from doing to directing. We see this already in software development, where people who can’t code are building applications by describing what they want in natural language. This pattern will repeat across every domain.

For many emails or memos, I’ve moved away from writing content to specifying what I need, evaluating outputs, and synthesizing results. I’m less of a writer now and more of an editor-in-chief.

Young workers should think of themselves as managers from day one. Even entry-level roles will involve directing AI systems and evaluating their outputs. This shifts work from generation to verification and curation.

There is likely not a 1:1 correspondence between managing humans and managing AI systems. For instance, right now one tends to need to be a bit more explicit with AI systems than with humans, because AI systems don’t start with as much shared context (or, while they might start with “more” context in some absolute sense, they don’t start the conversation knowing “who you want them to be” in the same way a human does). There will be some element of discovery—and, as with all AI topics, expect what is needed to change as the systems improve. Managing AI might become more and more like managing humans, or even easier along some dimensions, or it might remain slightly alien.

Scale yourself through AI partnership

Every young person should approach their career as if they have a team of brilliant (if perhaps slightly alien) assistants at their disposal—because they do. I’ve watched some of my most productive technical colleagues at Anthropic work with “AI teams”—dozens of open tabs running coding agents.

Think like a researcher with infinite research assistants. A single person can now conduct analyses that would have required entire teams. Two or three people can run institutions that once needed hundreds.

The capacity to run a team may create a bimodal distribution of agency. Some will use these tools to become extraordinarily impactful, directing AI systems to execute complex visions. Others will find themselves in reactive roles, such as checking and verifying AI outputs, or more generally being directed by the AI systems. The difference will come from initiative and vision, not technical skill.

Cultivate uniquely human advantages

What remains distinctly human when machines surpass us intellectually? Four things stand out:

Taste and values: AI can generate a thousand decoration schemes for your home, all aesthetically pleasing and within budget. But which one is you? The human role becomes one of choice, of bringing personal meaning to infinite possibilities. Young people should develop strong aesthetic sensibilities and clear values—not because AI can’t have taste, but because your taste is inherently, uniquely yours.

Relationships: My mother recently had her first serious interaction with Claude, where she was discussing her transition to retirement and seeing if it would give sound advice. The response actually made her cry—it displayed empathy, patience, thoughtfulness. Yet afterward, she still wanted to discuss it with me. The song a friend shares matters more than the perfect playlist an AI generates. The bedtime story a parent tells carries weight precisely because a parent tells it.

Trust and judgement: There will be many cases where humans trust the judgement of other humans more, regardless of whether they “should,” or view something as more legitimate if done by a human rather than an AI. For a stark example, consider car accidents involving self-driving cars versus human-driven cars. Of course, everyone would prefer that no one get hit by a car, but when someone is hit, people seem far angrier if the car was self-driven rather than human-driven. People simply have a preference against AI in this situation. (I think people should prefer whichever mode of transport leads to fewer accidents, but this is not borne out by current preferences.) This pattern will repeat in many domains. Even if AIs could render legal judgements or vote on policy “better” than a human could, people likely will not want this, and will view these as uniquely human domains.

Initiative and vision: You now have more powerful tools at your fingertips. Your agency has been enhanced. What will you do with that? You need to cultivate your ambition, creativity, and entrepreneurial spirit. Young people should cultivate the ability to identify what should exist and marshal resources (including AI) to make it real.

 

IV. Psychological and Social Preparation

Embracing instability and change

The pace of change will accelerate. Young people today should expect their careers to look nothing like their parents’—or even their older siblings’. The skills that get you hired at 22 may be obsolete by 27. The company you join may not exist in five years. The entire industry might transform.

This isn’t cause for despair—it’s reason to develop meta-skills. Learn how to learn quickly. More importantly, the ability to let go of outdated expertise and embrace new paradigms will matter more than any specific knowledge. I’ll discuss this more below.

The ability to let go of outdated expertise and embrace new paradigms will matter more than any specific knowledge.

I think often about my great-grandmother, who lived through the transition from horse-drawn carriages to space travel. The young people entering the workforce today will experience changes just as dramatic, compressed into even shorter time frames. They need mental models for thriving in uncertainty.

Building anti-fragile identity

If your identity is tied to being smart, you’re in for a rough time. I’ve already experienced this. I was the person friends came to for quick, clever content. Now I watch Claude do it better, faster, and without the typos.

Young people need sources of self-worth beyond intelligence. Are you kind? Brave? Persistent? Funny? These qualities gain importance as cognitive tasks become commodified. Knowing content is becoming less important. We will need to become ambitious about different things.

Creating intentional human spaces

Magnus Carlsen, the world chess champion, recently withdrew from the world championship cycle.10 Why? Because preparation for classical chess now requires months of working with AI to find minute opening advantages, and he didn’t want to spend his time that way.

Young people will need to make similar choices. When should you deliberately exclude AI? When does efficiency destroy the experience?

When should you deliberately exclude AI? When does efficiency destroy the experience?

I’ve started writing some pieces without AI assistance—not because the output is better, but because the process of struggling with ideas hones my thought and keeps those skills sharp. Young workers will need to identify which struggles are worth preserving.

 

V. Practical Skills and Behaviors

Technical literacy without technical obsession

Every young person should use AI tools extensively—not to become a programmer (unless that’s your passion), but to understand how best to leverage them. Use AI for everything for a week. Then try going without it. Notice the difference.

But avoid the trap of technical obsession (unless that is your intrinsic passion).

More importantly, learn to recognize attentional black holes. The same systems that can amplify your productivity can also destroy it. If an AI system doesn’t periodically prompt you to step away, to think independently, to connect with others, then it’s not the right system for you.

Network and brand-building

In a world where anyone can generate competent content, attribution matters more than ever. Why should someone read your AI-assisted analysis versus anyone else’s? The answer lies in trust, perspective, and relationship.

Why should someone read your AI-assisted analysis versus anyone else’s? The answer lies in trust, perspective, and relationship.

Young people face a catch-22: Established voices maintain audiences even as AI levels the playing field, but building a new voice becomes harder when content is commodified. Your best odds come from starting now, while human-generated (or human-curated) content still carries more weight. The young workers who thrive will be those who use AI to amplify their authentic, unique viewpoint and provide consistent value to those who engage.

Physical and interpersonal skills

While AI will eventually extend into physical domains through robotics, this transition will lag behind cognitive automation. More importantly, humans will maintain preferences for human service in certain contexts. We might accept a robot surgeon, but do we want a robot doula? The comedian, the yoga instructor—these roles will likely persist not because machines can’t do them, but because there is something desirable about a human doing them.

Young people shouldn’t flee to physical trades expecting permanent shelter—robotics will follow. But they should recognize that embodied, interpersonal skills will hold their value longer.

 

VI. Considerations for Education and Employment Policy

I am not an educational or employment policy expert, and I welcome more work on these areas by those who are. What follows should be taken as spelling out various needs rather than suggesting specific solutions.

Reimagining educational priorities

Our current education system optimizes for exactly the wrong things. Standardized tests measure what AI does best—knowing and regurgitating facts.

Instead, education should prioritize:

Initiative: Teach students to identify problems worth solving, not just solve assigned problems.

Collaboration and delegation: In an AI world, working well with others—being able to scale yourself through teamwork and delegation—will pay dividends.

Developing values and taste: Encourage deep engagement with ethics, aesthetics, and purpose, not just accumulation of facts.

Structural changes to education systems

Part of our educational response to AI should be implementing good ideas that already exist in pockets of our current system. Both Montessori and Waldorf schools embody principles that align with AI-era needs. (Note: This is not a full-throated endorsement of every aspect of these programs.)

Waldorf11 schools’ heavy emphasis on artistic integration across all subjects develops some of the aesthetic sensibilities and personal taste that will allow humans to better leverage AI. By delaying academic instruction until age seven and maintaining minimal technology use throughout, Waldorf schools explicitly build children’s sense of worth around creativity, craftsmanship, and human connection rather than purely on cognitive performance metrics. This helps create graduates who’ve learned to find meaning in the process, not just the output.

Montessori12 education prefigures the managerial mindset I mentioned above. Students, even very young ones, practice the core skills of future work: choosing their own projects from prepared options, setting goals, evaluating progress, and teaching other children. The mixed-age classroom structure naturally teaches students to be comfortable not being the smartest in the room—preparation for working alongside superintelligent AI. Montessori students also practice assessing the quality of their own work and whether it meets their standards.

Both approaches reject standardized testing in favor of portfolio-based assessment and holistic evaluation. As our world becomes increasingly quantified, children may benefit from being able to spend some time in “unmeasured spaces.”

The existence of these educational models provides concrete evidence that students can develop meta-skills, intrinsic motivation, and anti-fragile identities when educational systems prioritize them. The challenge is scaling these approaches beyond the small percentage of students who currently have access to them.

Alongside more widely implementing what these models get right, we should also try for AI partnership as core curriculum: Every student should learn to direct AI systems effectively and to evaluate outputs critically. Educators should be careful which AIs they introduce and when—I don’t think we yet know the optimal age for students to start interacting with LLMs, but it does seem clear that we want these systems to be wise, kind, and age-appropriate. These should not be sycophantic systems that never challenge students, nor should they be addictive. We want systems that introduce the right amount of friction into learning, and that encourage students to keep investing in human relationships.

We should also try for AI partnership as core curriculum: Every student should learn to direct AI systems effectively and to evaluate outputs critically.

Finally, we should consider shorter, more frequent education cycles. A one-time four-year degree assumes stable knowledge. Instead, imagine a shorter initial program, after which students come back for periodic several-months-long intensives throughout their careers. Given the likely dynamism of the coming years, multiyear degrees done at one point in life are likely to become stale. One could still complete multiyear degrees, but much more in the spirit of “for the love of the knowledge itself” rather than for career preparation.

Employment policy adaptations

AI will have large effects on employment. It is unclear how much this will look like job elimination versus a massive shift in underlying tasks.

Unemployment support: If AI eliminates a large number of jobs temporarily or in the long run, we will need some system of transfers that ensures people can live with dignity and economic security. AI seems likely to vastly increase economic growth and generate immense sums of wealth; this means that even as jobs are eliminated, people broadly can be made better off provided the political will is there. (Of course, this is hard enough to do within a nation, let alone across them—though economic effects are unlikely to stop at borders).

AI seems likely to vastly increase economic growth and generate immense sums of wealth; this means that even as jobs are eliminated, people broadly can be made better off provided the political will is there.

That said, economic security is only one issue. Work provides many valuable benefits beyond pay: a sense of purpose, social connection, a role in one’s community. People will still need these things, even if they come from outside a job—potentially from clubs, volunteering, parenthood, or religion. Consumption and leisure will not be enough to make people happy in the long run.

Human-AI collaboration standards: We will need to develop frameworks for when human oversight is required, desired, or prohibited. These standards should evolve with increasing AI capabilities so as not to become stale or counterproductive. For instance, given current capabilities, there may be activities for which we want to mandate a “human in the loop” (court judgements, drone strikes), but in the future, if AI systems outperform humans to such an extent that human intervention worsens outcomes, it is conceivable that we would wish to actually prohibit it (e.g., if all cars had to be self-driving, or if certain surgeries could only be conducted by AI models).

 

VII. On the difficulties of preparation

We are preparing young people for a world we can’t fully imagine. The specific skills we teach may become obsolete. The careers we envision may never exist. The challenges we anticipate may pale beside those that actually emerge.

I am frequently asked for advice on the future, and I want to be able to give it. But I myself have a very hard time seeing beyond the next few years because I expect so much to change. This can be a frightening thought, but it is not the only one. I believe there is a path to longer, healthier, wealthier lives—that my future children might live even richer, more fulfilling, and more meaningful lives than my own. But it is certainly not guaranteed.

I do believe that those who prepare for dynamism, who are mentally ready for uncertainty, who have familiarity with using AI to multiply their impact, and who have honed the skills that will be most complementary to AI will be in a better position. But most importantly, it will be vital to know what to direct this new great power toward—to know what to build, to have developed taste, to have cultivated a vision, and to be thoughtful and clear about one’s values.

Most importantly, it will be vital to know what to direct this new great power toward—to know what to build, to have developed taste, to have cultivated a vision, and to be thoughtful and clear about one’s values.

1. Annika Meyer et al., “Comparison of the Performance of GPT-3.5 and GPT-4 with That of Medical Students on the Written German Medical Licensing Examination: Observational Study,” JMIR Medical Education 10 (February 8, 2024): e50965, https://doi.org/10.2196/50965.

2. Pablo Arredondo, “GPT-4 Passes the Bar Exam: What That Means for Artificial Intelligence Tools in the Legal Profession: A Q&A with Sharon Driscoll and Monica Schreiber,” Stanford Law School (blog), April 19, 2023, https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/.

3. Anthropic, “Introducing Claude 4,” May 22, 2025, https://www.anthropic.com/news/claude-4.

4. Kenrick Cai and Jaspreet Singh, “Google Clinches Milestone Gold at Global Math Competition, While OpenAI Also Claims Win,” Reuters, July 22, 2025, https://www.reuters.com/world/asia-pacific/google-clinches-milestone-gold-global-mathcompetition-while-openai-also-claims-2025-07-22/.

5. OpenAI, “Introducing ChatGPT Agent: Bridging Research and Action,” July 17, 2025, https://openai.com/index/introducing-chatgpt-agent/.

6. Thomas Kwa et al., “Measuring AI Ability to Complete Long Tasks,” METR, March 19, 2025, https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-longtasks/.

7. Dario Amodei, “Machines of Loving Grace: How AI Could Transform the World for the Better,” Dario Amodei (blog), October 2024, https://www.darioamodei.com/essay/machines-of-loving-grace.

8. Jan Leike and Ilya Sutskever, “Introducing Superalignment,” OpenAI, July 5, 2023, https://openai.com/index/introducing-superalignment/.

9. See the views of Yann LeCun, “We Won’t Reach AGI by Scaling Up LLMs,” interview with Alex Kantrowitz, Big Technology Podcast, YouTube, May 30, 2025, https://www.youtube.com/watch?v=4__gg83s_Do; François Chollet, “Why the Biggest AI Models Can’t Solve Simple Puzzles,” interview with Dwarkesh Patel, Dwarkesh Podcast, YouTube, June 11, 2024, https://www.youtube.com/watch?v=UakqL6Pj9xo; and Richard Sutton, “Father of RL Thinks LLMs Are a Dead End,” interview with Dwarkesh Patel, Dwarkesh Podcast, YouTube, September 26, 2025, https://www.youtube.com/watch?v=21EYKqUsPfg.

10. Tarjei J. Svensen, “Carlsen on Lack of Motivation, Classical Chess, New WC Formats, and Family Life,” Chess.com, last updated May 2, 2023, https://www.chess.com/news/view/carlsen-on-his-future-personal-life-motivation-and-more.

11. Alex Van Buren, “What is the Waldorf School Method?,” New York Times, April 19, 2020, https://www.nytimes.com/2020/04/19/parenting/waldorf-school.html.

12. Chloë Marshall, “Montessori Education: A Review of the Evidence Base,” npj Science of Learning 2, no. 11 (2017), https://doi.org/10.1038/s41539-017-0012-7.
