The Universal Innervation of the Economy
AI is of fundamental importance to every business. The application of “iterative algorithms”—AI, machine learning, directed evolution, generative design—to build complex systems is the most powerful advance in engineering since the invention of the scientific method in the 16th century. These techniques already allow us to build software solutions that exceed our understanding and will innervate every industry.
I have been anticipating this transition for 35 years now. Back in 1989, I started a PhD in electrical engineering. My focus was on how to accelerate neural networks by mapping them to parallel processing computers. Twenty-five years later, during the neural net revival, we began to talk of “machine learning” and “deep learning.” I chose AI as the top tech trend of 2013 at the Churchill Club VC debate.1
In 2014, while at Draper Fisher Jurvetson, I led the first investment in one of the first AI chip companies, Nervana, and in 2016, in the first analog AI chip company, Mythic. Across our portfolio, we saw powerful applications of deep learning, from molecular design to image recognition to cancer research to autonomous driving. At Future Ventures, we have invested in AI silicon, infrastructure for data centers, and foundation models (e.g., Elon Musk’s xAI and open-source Zyphra).
Today, with transformers and diffusion models, the terminology has shifted again. Now, we talk of AI, with some anticipating artificial general intelligence (AGI) and transformative AI (TAI). But in all these cases, from the neural networks of the past century to the LLMs of today, the approach is roughly the same: the repeated application of a simple iterative algorithm to learn complex relationships hidden in data.
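To make “simple iterative algorithm” concrete, here is a minimal sketch of my own (all names and numbers are illustrative, not from any production system) of its most common form, gradient descent: the same small update rule, applied over and over, until a relationship hidden in noisy data emerges.

```python
# A toy gradient-descent loop: the "simple iterative algorithm" at the core
# of neural network training, reduced to fitting a line. Values illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Data that hides a simple relationship: y = 3x - 1, plus noise.
x = rng.uniform(-1, 1, size=200)
y = 3 * x - 1 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0   # model parameters, initialized arbitrarily
lr = 0.1          # learning rate

for step in range(1000):            # the same update, repeated
    err = (w * x + b) - y           # prediction error
    w -= lr * 2 * np.mean(err * x)  # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)      # gradient w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches the hidden w=3, b=-1
```

Scale that loop up from two parameters to billions, and from a line to a deep network, and you have the essential shape of training a modern model: the algorithm stays simple; the learned relationships do not.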
A march to specialized silicon is underway. All of these companies originally deployed their neural networks on traditional compute clusters built from CPUs. Some, though, were realizing huge advantages by moving their code to GPUs, specialized processors that were originally designed for rapid rendering of computer graphics but have many more computational cores than CPUs do. By the time of Nervana’s founding in 2014, some (e.g., Microsoft’s and Google’s search teams) were exploring FPGA (field-programmable gate array) chips for their even finer-grained arrays of customizable logic blocks.
During due diligence, we spoke with Amazon, Google, Baidu, and Microsoft, and we found far broader application of deep learning within these companies than we had imagined, from product positioning to supply chain management.
At Google, for instance, machine learning is central to almost everything they do. Looked at in this way, their acquisitions and new product strategies make sense. They are not traditional product-line extensions but a process expansion of machine learning. In short, they are not just playing games of Go for the fun of it. Indeed, Google switched their core search algorithms to deep learning and used DeepMind to cut data center cooling costs by a whopping 40%.2
Importantly, advances in deep learning are domain independent. Google can hire and acquire talent and delight in their passionate pursuit of game playing or robotics. These efforts help Google build a better “brain.” And, like a newborn human, this synthetic neural network can learn many things. The team can use it to find cats on the internet and play a great game of Go. But the process advances they make in building a better brain can then be turned to ad matching, a task that does not inspire the best and the brightest to come work for Google.
This domain independence of deep learning has profound implications for labor markets. The locus of learning shifts from end products to the process of their creation. AI development is more like parenting than programming. At the same time, because engineers can move so easily across domains, the deep learning and AI sectors have heated up labor markets to unprecedented levels. Large companies are paying $6–10 million per engineer for talent acquisitions,
and $4–5 million per head for pre-product startups still in academia. Students in the master’s program in a certain Stanford lab averaged $500,000 per year for their first job offer at graduation. We witnessed an academic turn down a million-dollar signing bonus because they got a better offer. And things went hyperbolic with the billion-dollar LLM acqui-hires.
And again, these “brain builders” can join any industry. When we were building the deep learning team at Human Longevity Inc. (HLI), we hired the engineering lead from the Google Translate team, Franz Och.
Och had pioneered Google’s better-than-human translation service not by studying linguistics or grammar, nor even by speaking the languages being translated, but by focusing on building the brain that could learn the job from countless documents already translated by humans (UN transcripts in particular).
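To see the spirit of learning translation from already-translated text, here is a toy rendition of a classic word-alignment algorithm from the statistical-translation era (IBM Model 1, trained by expectation-maximization). The three-sentence corpus is invented, and this is emphatically not Och’s or Google’s production system, only the textbook idea in miniature.

```python
# A toy of statistical translation: given sentence pairs already translated
# by humans, EM (here, IBM Model 1) learns word translation probabilities
# t(f|e) with no linguistic knowledge at all. Corpus is invented.
from collections import defaultdict

pairs = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]
pairs = [(e.split(), f.split()) for e, f in pairs]

e_vocab = {w for e, _ in pairs for w in e}
f_vocab = {w for _, f in pairs for w in f}

# Start with uniform translation probabilities.
t = {(f, e): 1.0 / len(f_vocab) for e in e_vocab for f in f_vocab}

for _ in range(25):  # the simple iterative algorithm again: EM
    count = defaultdict(float)  # expected co-translation counts
    total = defaultdict(float)
    for e_sent, f_sent in pairs:
        for f in f_sent:
            norm = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                c = t[(f, e)] / norm
                count[(f, e)] += c
                total[e] += c
    for f, e in t:  # re-estimate probabilities from expected counts
        t[(f, e)] = count[(f, e)] / total[e]

for e in ["cat", "dog", "sleeps"]:
    best = max(f_vocab, key=lambda f: t[(f, e)])
    print(e, "->", best, f"{t[(best, e)]:.2f}")
```

Run it and “cat” pairs off with “chat,” “dog” with “chien,” “sleeps” with “dort”: the lexicon falls out of the iteration, not out of any grammar.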
Similarly, when Och came to HLI, he cared about the mission but knew nothing about cancer and the genome. Yet the learning machines he had built were able to find the complex patterns across the genome. In short, deep learning expertise is fungible. That is why a burgeoning number of companies are hiring and competing across industry lines.
An ever-widening set of industries is undergoing transformation, from automotive to agriculture, from health care to financial services. We see it in our venture portfolio, from chemistry, to analog circuit design (Sphere Semi), to epigenetic and genetic targets for RNA therapy (Moonwalk Bioscience and Deep Genomics), to autonomous driving and robotics, to vector-map discovery of airplanes and industrial activities in Planet’s daily planet maps, to cybersecurity, to financial risk assessment, and to visual classification in security cameras, drones and medical images, among many others.
There are some common patterns in the power and inscrutability of artifacts built with iterative algorithms. We see this in biology, cellular automata, genetic programming, machine learning, and neural networks.
There is no mathematical shortcut for the decomposition of a neural network or genetic program, no way to “reverse evolve” with the ease that we can reverse engineer the artifacts of purposeful design. The beauty of compounding iterative algorithms — evolution, fractals, organic growth, art — derives from their irreducibility.
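For the flavor of the other lineage named above, here is an equally minimal sketch (a toy of my own construction, with an arbitrary target string and parameters) of an evolutionary iterative algorithm: random variation plus selection, compounding generation after generation.

```python
# A toy evolutionary loop: mutate, select, repeat. Target and parameters
# are illustrative.
import random

random.seed(0)
TARGET = "iterative algorithms"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Number of positions matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Change one random character to a random one.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

generation = 0
while max(map(fitness, population)) < len(TARGET):
    generation += 1
    # Keep the fittest 20, refill the population with their mutants.
    survivors = sorted(population, key=fitness, reverse=True)[:20]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

print(f"generation {generation}: {max(population, key=fitness)}")
```

Nothing in the loop “knows” the answer; the answer accretes. And the sketch hints at the point above: for any evolved artifact of real complexity, there is no tidy derivation to read back out of the result.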
The deep learning techniques, while relatively easy to learn, are quite foreign to traditional engineering modalities. It takes a different mindset and a relaxation of the presumption of control. The practitioners are like magi, sequestered from the rest of a typical engineering process. The artifacts of their creation are isolated blocks of functionality defined by their interfaces. They are like blocks of magic handed to other parts of a traditional organization. This carries over to the customers, too; just about any product that you experience in the next five years that seems like magic will almost certainly be built by these algorithms.
Where will this take us? We are building artificial brains. And we have started with the sensory cortex, much like an infant coming into the world. Neural networks had their early success in speech recognition in the 1990s; in 2012, the deep learning variant dominated the ImageNet competitions.
Today, visual processing can be done better by machines than by humans in many domains (like pathology, radiology, and other medical-image classification tasks). DARPA has research programs aiming to surpass a dog’s nose in olfaction. And even within these systems, like vision, the deep learning network starts with low-level constructs (like edge detection) as foundations for higher-level constructs like facial forms, and ultimately, finding cats on the internet with self-taught learning.
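The lowest rung of that visual hierarchy is easy to make tangible. Below is a hedged illustration: a hand-written Sobel-style filter standing in for what a network’s early layers learn on their own, applied to a made-up six-by-six image.

```python
# Edge detection by 2D convolution, the kind of low-level construct that
# forms the foundation of a vision network. Image and kernel are toys.
import numpy as np

# A 6x6 "image": dark on the left half, bright on the right half.
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# A Sobel-style kernel that responds to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    # Valid-mode sliding-window convolution (strictly, cross-correlation,
    # as in deep learning frameworks).
    h = img.shape[0] - k.shape[0] + 1
    w = img.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

print(convolve2d(image, kernel))  # strong responses only along the edge
```

In a trained network, dozens of such filters are learned rather than hand-written, and their outputs feed the next layer’s search for corners, textures, and, eventually, faces and cats.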
But the artificial brains need not limit themselves to the human senses. With the internet of things, we are creating a sensory nervous system on the planet. All of this “big data” would be a big headache, but machine learning helps us find patterns in it all and make it actionable.
And it need not stop there. It is precisely by iterative algorithms that human intelligence arose from its primitive antecedents. Biological evolution was slow, but it provides an existence proof of the process.
And now, a similar process is being vastly accelerated in the artificial domain.
And it’s just beginning. In the next handful of years, three billion new minds will come online for the first time to join this global conversation, thanks to Starlink providing low-cost broadband to unserved areas. These people are decoupled from the global economy today, but they will soon have access to online education and all of the economic potential of entrepreneurship and innovation. This alone should foster an innovation boom.
And then AI enters the chat. AI bridges across all academic disciplines, beyond the capacity of any human mind. Like a universal translator of languages across a common vector space, AI models will bring together our disparate idea pools like never before, finding new patterns in this whole, ushering in a new combinatorial compounding of innovation. It may feel like a Cambrian explosion of future shock.
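One loose sketch of that “common vector space” idea, with made-up vectors standing in for real model embeddings: once ideas from different disciplines live in the same space, they can be compared directly.

```python
# Cosine similarity over toy embeddings: ideas from different fields land
# near each other when they share structure. Vectors are invented for
# illustration, not real model output.
import numpy as np

def cosine(a, b):
    # 1.0 means the vectors point in exactly the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

concepts = {
    "gradient descent (ML)":         np.array([0.9, 0.1, 0.3]),
    "natural selection (biology)":   np.array([0.8, 0.2, 0.4]),
    "double-entry ledger (finance)": np.array([0.1, 0.9, 0.2]),
}

base = "gradient descent (ML)"
for name, vec in concepts.items():
    if name != base:
        print(f"{base} ~ {name}: {cosine(concepts[base], vec):.2f}")
```

The two iterative-optimization ideas score as near neighbors while the unrelated concept does not, which is the mechanism, in miniature, by which a shared representation can surface patterns across disciplines.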
“We will not engineer an artificial intelligence; rather we will set up the right conditions under which an intelligence can emerge,” wrote Danny Hillis in the closing chapter of The Pattern on the Stone. “The greatest achievement of our technology may well be creation of tools that allow us to go beyond engineering — that allow us to create more than we can understand.”3
1. Steve Jurvetson, “Steve Jurvetson on Machine Learning,” Churchill Club Top 10 Tech Trends Debate, YouTube, May 23, 2013, https://youtu.be/yeCq8GgDyXM?si=HyIEYJ0ldnowxhsA.
2. Richard Evans and Jim Gao, “DeepMind AI Reduces Google Data Centre Cooling By 40%,” Google DeepMind, July 20, 2016, https://deepmind.google/discover/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40/.
3. W. Daniel Hillis, The Pattern on the Stone: The Simple Ideas That Make Computers Work (Basic Books, 1998).