Deep into World War II, Nazi forces were steamrolling through much of Europe. They had advanced weapons, an efficient wartime industry, and fanatically driven infantry, but above all, they had a machine called Enigma. This cipher device allowed Nazi forces to communicate securely over long distances, sending Morse-coded messages to each other over standard communication lines; it seemed impregnable to human code breakers.
Thankfully, the Allies found a solution. They no longer needed a human mind to break Enigma. Instead, through an invention designed by Alan Turing, the Allies built a revolutionary new tool called the British Bombe, an electromechanical device that finally deciphered the Nazis' secret code and ultimately helped the Allies win the war.
This Bombe laid the groundwork for what became the modern computer.
Working alongside Turing on the Bombe project was I. J. Good, a British mathematician and cryptologist. He saw early on where this new kind of machine could one day lead. In a 1965 paper, he wrote:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Creating the first artificial superintelligence
So far in our Future of Artificial Intelligence series, we've defined the three broad categories of artificial intelligence (AI): artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). In this chapter, we'll focus on that last category, the one that breeds either excitement or panic attacks among AI researchers.
To wrap your head around what an ASI is, think back to the last chapter, where we outlined how AI researchers believe they will create the first AGI. Basically, it will take a combination of big data feeding better algorithms (ones that specialize in self-improvement and human-like learning abilities), all housed in increasingly powerful computing hardware.
In that chapter, we also outlined how an AGI mind (once it gains these self-improvement and learning abilities that we humans take for granted) will eventually outperform the human mind by way of superior speed of thought, enhanced memory, untiring performance, and instant upgradability.
But here it’s important to note that an AGI will only self-improve to the limits of the hardware and data it has access to; this limit can be big or small depending on the robot body we give it or the scale of computers we allow it access to.
Meanwhile, the difference between an AGI and an ASI is that the latter, theoretically, will never exist in a physical form. It will operate entirely within a supercomputer or a network of supercomputers. Depending on the goals of its creators, it may also get full access to all data stored on the Internet, as well as any device or person that feeds data into and over the Internet. This means there will be no practical limit to how much this ASI can learn and how much it can self-improve.
And that’s the rub.
Understanding the intelligence explosion
This process of self-improvement that AIs will eventually gain as they become AGIs (a process the AI community calls recursive self-improvement) can potentially set off a positive feedback loop that looks something like this:
A new AGI is created, given access to a robot body or a large dataset, and then given the simple task of educating itself, of improving its own intelligence. At first, this AGI will have the IQ of an infant, struggling to grasp new concepts. Over time, it learns enough to reach the IQ of an average adult, but it doesn't stop there. With this newfound adult-level IQ, continued improvement becomes faster and easier, until its IQ matches that of the smartest known humans. And again, it doesn't stop there.
This process compounds at each new level of intelligence, following the law of accelerating returns, until it reaches the incalculable level of superintelligence. In other words, if left unchecked and given unlimited resources, an AGI will self-improve into an ASI, an intellect that has never before existed in nature.
This is the process I. J. Good first identified when he described the 'intelligence explosion', and what modern AI theorists, like Nick Bostrom, call the AI's 'takeoff' event.
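The compounding loop described above can be sketched as a toy model. Every number here is a hypothetical placeholder (the starting "intelligence" score, the growth rate, the hardware ceiling), not a prediction; the point is only to show how improvement that scales with current intelligence compounds:

```python
# Toy model of recursive self-improvement. All parameters are
# illustrative placeholders, not empirical estimates.

def takeoff(iq=1.0, gain=0.5, hardware_limit=10_000.0):
    """Count improvement cycles until the AGI hits the ceiling of
    its hardware and data (the practical limit discussed above).
    Each cycle, the gain is proportional to current intelligence:
    a smarter mind improves itself faster."""
    generations = 0
    while iq < hardware_limit:
        iq += gain * iq  # compounding growth, not a fixed step
        generations += 1
    return generations

# Because growth compounds, reaching 10,000x the starting
# intelligence takes only 23 cycles in this toy setup.
print(takeoff())
```

Note how raising the ceiling tenfold adds only a handful of extra cycles; that runaway quality is the "explosion" in Good's intelligence explosion.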
Understanding an artificial superintelligence
At this point, some of you are probably thinking that the key difference between human intelligence and an ASI's intelligence is how fast either side can think. And while it's true that this theoretical future ASI will think faster than humans, this ability is already fairly commonplace throughout today's computer sector—our smartphone thinks (computes) faster than a human mind, a supercomputer thinks millions of times faster than a smartphone, and a future quantum computer will think faster still.
No, speed isn't the feature of intelligence we're describing here. It's the quality.
You can speed up the brain of your Samoyed or Corgi all you want, but that doesn't translate into a new understanding of how to interpret language or abstract ideas. Even with an extra decade or two of learning, these doggos won't suddenly comprehend how to make or use tools, let alone understand the finer differences between capitalist and socialist economic systems.
When it comes to intelligence, humans operate on a different plane than animals. Likewise, should an ASI reach its full theoretical potential, its mind will operate on a level far beyond the reach of the average modern human. For some context, let's look at how these ASIs might be applied.
How might an artificial superintelligence work alongside humanity?
Assuming a certain government or corporation succeeds in creating an ASI, how might they use it? According to Bostrom, there are three separate but related forms an ASI might take:
- Oracle. Here, we would interact with an ASI much as we already do with the Google search engine: we would ask it a question, and no matter how complex that question, the ASI would answer it perfectly, in a way tailored to us and the context of our question.
- Genie. In this case, we would assign an ASI a specific task, and it would execute as commanded. Research a cure for cancer. Done. Find all the planets hidden inside the backlog of 10 years' worth of images from NASA's Hubble Space Telescope. Done. Engineer a working fusion reactor to solve humanity's energy demand. Abracadabra.
- Sovereign. Here, the ASI is assigned an open-ended mission and given the freedom to execute it. Steal the R&D secrets from our corporate competitor. "Easy." Discover the identities of all foreign spies hiding inside our borders. "On it." Ensure the continued economic prosperity of the United States. "No problem."
Now, I know what you're thinking: this all sounds pretty far-fetched. That's why it's important to remember that every problem and challenge out there, even the ones that have stumped the world's brightest minds to date, is solvable. And the difficulty of a problem is measured relative to the intellect tackling it.
In other words, the greater the mind applied to a challenge, the easier it becomes to find a solution to that challenge. Any challenge. It's like an adult watching an infant struggle to fit a square block into a round opening: for the adult, showing the infant that the square block belongs in the square opening would be child's play.
Likewise, should this future ASI reach its full potential, this mind would become the most powerful intellect in the known universe—powerful enough to solve any challenge, no matter how complex.
This is why many AI researchers call the ASI the last invention humanity will ever have to make. If convinced to work alongside humanity, it could help us solve all of the world's most pressing problems. We could even ask it to eliminate all disease and end aging as we know it. Humanity could, for the first time, permanently cheat death and enter a new age of prosperity.
But the opposite is also possible.
Intelligence is power. If mismanaged or directed by bad actors, this ASI could become the ultimate tool of oppression, or it could flat-out exterminate humanity altogether; think Skynet from the Terminator movies or the Architect from the Matrix films.
In truth, neither extreme is likely. The future is always far messier than utopians and dystopians predict. Now that we understand the concept of an ASI, the rest of this series will explore how an ASI might impact society, how society might defend against a rogue ASI, and what the future might look like if humans and AI live side by side. Read on.