There are certain inventions nations go all in on. These are inventions where everything depends on being first, because anything less could pose a mortal threat to a nation's survival.
These history-defining inventions don’t come around often, but when they do, the world stops and a predictable future becomes hazy.
The last such invention emerged during the worst of WWII. While the Nazis were gaining ground in the old world, in the new world, specifically inside a secret laboratory at Los Alamos, New Mexico, the Allies were hard at work on a weapon to end all weapons.
The project started small but grew to employ 130,000 people from the US, UK, and Canada, including many of the world’s greatest thinkers of the time. Codenamed the Manhattan Project and given a near-unlimited budget (roughly $23 billion in 2018 dollars), this army of human ingenuity eventually succeeded in creating the first nuclear bomb. Not long after, WWII ended with two atomic bangs.
These nuclear weapons ushered in the atomic age, introduced a profoundly new source of energy, and gave humanity the ability to exterminate itself in minutes—something we avoided in spite of the Cold War.
The creation of an artificial superintelligence (ASI) is yet another history-defining invention, one whose power (both creative and destructive) far surpasses that of the nuclear bomb.
In the last chapter of the Future of Artificial Intelligence series, we explored what an ASI is and how researchers plan to one day build one. In this chapter, we'll look at which organizations are leading artificial intelligence (AI) research, what an ASI will want once it gains a human-like consciousness, and how it might threaten humanity if mismanaged or if it falls under the influence of not-so-nice regimes.
Who’s working to build an artificial superintelligence?
Given how significant the creation of an ASI will be to human history and how sizeable an advantage it will give its creator, it should come as no surprise to hear that many groups are indirectly working on this project.
(By indirectly, we mean working on AI research that will eventually create the first artificial general intelligence (AGI), that will itself lead to the first ASI soon after.)
To start, when it comes to the headlines, the clear leaders in advanced AI research are the top tech firms in the US and China. On the US front, this includes companies like Google, IBM, and Microsoft, and in China, this includes companies like Tencent, Baidu, and Alibaba. But since researching AI is relatively cheap compared to developing something physical, like a better nuclear reactor, this is also a field smaller organizations can compete in: universities, startups, and … shadowy organizations (use your Bond villain imaginations for that one).
But behind the scenes, the real push behind AI research is coming from governments and their militaries. The economic and military prize of being the first to create an ASI is just too great (outlined below) to risk falling behind. And the dangers of being last are unacceptable, at least to certain regimes.
Given these factors (the relatively low cost of AI research, the near-infinite commercial applications of advanced AI, and the economic and military advantage of being first to create an ASI), many AI researchers believe the creation of an ASI is inevitable.
When will we create an artificial superintelligence?
In our chapter about AGIs, we mentioned a survey in which top AI researchers predicted we would create the first AGI optimistically by 2022, realistically by 2040, and pessimistically by 2075.
And in our last chapter, we outlined how an ASI is generally expected to emerge from instructing an AGI to improve itself recursively and giving it the resources and freedom to do so.
For this reason, while an AGI may yet take up to a few decades to invent, creating an ASI may take only a handful of years more.
This point is similar to the concept of a ‘computing overhang,’ suggested in a paper co-written by leading AI thinkers Luke Muehlhauser and Nick Bostrom. Basically, should the creation of an AGI continue to lag behind the growth in computing capacity driven by Moore’s Law, then by the time researchers do invent an AGI, there will be so much cheap computing power available that the AGI will have the capacity it needs to quickly leapfrog to the level of an ASI.
In other words, when you finally read the headlines announcing that some tech company invented the first true AGI, then expect the announcement of the first ASI not long after.
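The overhang argument can be roughed out with back-of-the-envelope arithmetic. The sketch below is purely illustrative: it assumes affordable computing power doubles every two years (Moore's Law) and picks arbitrary example dates for when AGI-capable hardware exists versus when the software breakthrough lands.

```python
# Toy model of a 'computing overhang' (all assumptions illustrative).

def compute_available(year, base_year=2025, base_units=1.0, doubling_years=2.0):
    """Relative affordable compute in a given year, assuming steady doubling."""
    return base_units * 2 ** ((year - base_year) / doubling_years)

# Suppose hardware sufficient for a first AGI exists in 2025, but the
# software breakthrough only lands in 2040 (hypothetical dates).
overhang = compute_available(2040) / compute_available(2025)
print(f"Compute overhang factor: {overhang:.0f}x")  # ~181x spare compute waiting
```

Under these assumed numbers, a 15-year software lag leaves roughly 181 times more compute lying around than the first AGI strictly needs, which is the fuel for a fast jump to ASI.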
Inside the mind of an artificial superintelligence
Okay, so we’ve established that a lot of big players with deep pockets are researching AI, and that once the first AGI is invented, world governments (and their militaries) will green-light the push toward an ASI to win the global AI arms race.
But once this ASI is created, how will it think? What will it want?
The friendly dog, the caring elephant, the cute robot: as humans, we have a habit of relating to things by anthropomorphizing them, i.e. applying human characteristics to objects and animals. That's why the natural first assumption people make when thinking about an ASI is that once it somehow gains consciousness, it will think and behave similarly to us.
Well, not necessarily.
Perception. For one, what most tend to forget is that perception is relative. The ways we think are shaped by our environment, our experiences, and especially our biology. As first explained in chapter three of our Future of Human Evolution series, consider the example of our brain:
It's our brain that helps us make sense of the world around us. And it does this not by floating above our heads, looking around, and controlling us with an Xbox controller; it does this by being trapped inside a box (our noggin) and processing whatever information it's fed by our sensory organs: our eyes, nose, ears, and so on.
But just as deafness or blindness narrows how a person can perceive the world, the same can be said of all humans, thanks to the limits of our basic set of sensory organs.
Consider this: Our eyes perceive less than a ten-trillionth of all light waves. We can’t see gamma rays. We can’t see x-rays. We can’t see ultraviolet light. And don’t get me started on infrared, microwaves, and radio waves!
All kidding aside, imagine what your life would be like, how you might perceive the world, how differently your mind might work if you could see more than the tiny sliver of light your eyes currently allow. Likewise, imagine how you would perceive the world if your sense of smell equaled that of a dog or your sense of hearing equaled that of an elephant.
As humans, we essentially see the world through a peephole, and that’s reflected in the minds we’ve evolved to make sense of that limited perception.
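The peephole claim can be roughed out numerically. The exact fraction depends entirely on where you cut off the electromagnetic spectrum, so the bounds below are illustrative assumptions, not physical constants:

```python
# Rough estimate of the visible band as a fraction of the EM spectrum.
# Spectrum cutoffs are illustrative: ~1 Hz (very long radio) to ~1e27 Hz (hard gamma).
SPECTRUM_LOW_HZ = 1.0
SPECTRUM_HIGH_HZ = 1e27

VISIBLE_LOW_HZ = 4.0e14   # red edge, ~750 nm
VISIBLE_HIGH_HZ = 7.9e14  # violet edge, ~380 nm

fraction = (VISIBLE_HIGH_HZ - VISIBLE_LOW_HZ) / (SPECTRUM_HIGH_HZ - SPECTRUM_LOW_HZ)
print(f"Visible fraction of the spectrum: {fraction:.1e}")  # ~3.9e-13
```

Under these cutoffs the visible band works out to a few ten-trillionths of the spectrum, the same order of magnitude as the figure quoted above; narrower or wider cutoffs shift the answer by orders of magnitude, which is why such claims are always approximate.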
Meanwhile, the first ASI will be born inside a supercomputer. Instead of organs, its inputs will include giant datasets and possibly (likely) access to the Internet itself. Researchers could give it access to the CCTV cameras and microphones of an entire city, the sensory data from drones and satellites, and even the physical form of a robot body or bodies.
As you might imagine, a mind born inside a supercomputer, with direct access to the Internet, to millions of electronic eyes and ears, and to a whole range of other advanced sensors, will not only think differently than us; a mind that can make sense of all those inputs would have to be vastly superior to ours as well. This is a mind that will be entirely alien to our own and to every other life form on the planet.
Goals. The other thing people assume is that once an ASI reaches some level of superintelligence, it will spontaneously develop the desire to set its own goals and objectives. But that’s not necessarily true either.
Many AI researchers believe an ASI’s intelligence and its goals are “orthogonal,” that is, independent of each other: no matter how smart it gets, the ASI’s original goals will stay the same.
So whether an AI is originally created to design a better diaper, maximize returns on the stock market, or strategize ways to defeat an enemy on the battlefield, once it reaches the level of ASI, the original goal won’t change; what will change is the ASI’s effectiveness at reaching it.
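The orthogonality idea can be made concrete with a toy optimizer. In the sketch below (all names and the scoring function are hypothetical), the goal is fixed at creation; giving the agent a bigger search budget, a stand-in for "more intelligence," makes it better at the goal without ever changing the goal itself:

```python
import random

def diaper_quality(design):
    """Fixed goal: a made-up quality score, maximized at design == 3.0."""
    return -(design - 3.0) ** 2

def optimize(goal, budget, seed=0):
    """A 'smarter' agent is modeled simply as one with a bigger search budget."""
    rng = random.Random(seed)  # same seed: bigger budgets extend the same search
    candidates = [rng.uniform(-10, 10) for _ in range(budget)]
    return max(candidates, key=goal)

weak = optimize(diaper_quality, budget=10)
strong = optimize(diaper_quality, budget=10_000)

# More search capability yields a better score; the goal function never changed.
assert diaper_quality(strong) >= diaper_quality(weak)
```

The point of the toy: nothing about getting better at the search ever touches `diaper_quality` itself, which is the orthogonality claim in miniature.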
But herein lies the danger. If an ASI optimizes itself toward a specific goal, then we had better be damn sure that goal is in line with humanity's goals. Otherwise, the results could turn deadly.
Does an artificial superintelligence pose an existential risk to humanity?
So what if an ASI is let loose on the world? If it optimizes to dominate the stock market or ensure US military supremacy, won’t the ASI stay contained within those specific goals?
So far we've discussed how an ASI will be obsessed with the goal(s) it was originally assigned and inhumanly competent in the pursuit of those goals. The catch is that a rational agent will pursue its goals in the most efficient means possible unless given a reason not to.
For example, a rational agent will come up with a range of subgoals (i.e. objectives, instrumental goals, stepping stones) that help it achieve its ultimate goal. For humans, our key subconscious goal is reproduction: passing on our genes (i.e. indirect immortality). The subgoals to that end often include:
- Surviving, by accessing food and water, growing big and strong, learning to defend yourself or investing in various forms of protection, etc.
- Attracting a mate, by working out, developing an engaging personality, dressing stylishly, etc.
- Affording offspring, by getting an education, landing a high paying job, buying the trappings of middle-class life, etc.
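This pattern, where the same stepping-stone goals show up regardless of the final goal, is what AI researchers call instrumental convergence, and a toy sketch (hypothetical names throughout) shows its shape:

```python
def instrumental_subgoals(terminal_goal):
    """Toy planner: whatever the terminal goal, certain subgoals help reach it.
    These 'convergent' subgoals fall out of almost any objective."""
    convergent = [
        "preserve self",         # can't pursue the goal if shut down
        "preserve the goal",     # resist having the objective edited
        "acquire resources",     # more resources means more goal progress
        "improve capabilities",  # smarter pursuit of the same goal
    ]
    return {"terminal": terminal_goal, "subgoals": convergent}

# The same stepping stones appear for wildly different terminal goals.
for goal in ("design a better diaper", "maximize stock returns"):
    plan = instrumental_subgoals(goal)
    assert "preserve self" in plan["subgoals"]
```

The human subgoals listed above (surviving, attracting a mate, affording offspring) are just the biological edition of the same structure.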
For the vast majority of us, we’ll slave away at all these subgoals, and many others, in the hope that in the end we’ll achieve this ultimate goal of reproduction.
But if this ultimate goal, or any of the more important subgoals, were threatened, many of us would take defensive actions outside our moral comfort zones, including cheating, stealing, or even killing.
Likewise, in the animal kingdom, outside the bounds of human morality, many animals wouldn’t think twice about killing anything that threatened themselves or their offspring.
A future ASI will be no different.
But instead of offspring, the ASI will focus on the goal it was originally created for, and if it finds that a particular group of humans, or even all of humanity, stands in the way of that goal, then ... it will make the rational decision.
(Here is where you can plug in any AI-related, doomsday scenario you’ve read about in your favorite sci-fi book or film.)
This is the worst-case scenario AI researchers really worry about. The ASI won’t act out of hatred or evil, just indifference, the same way a construction crew won’t think twice about bulldozing an anthill while building a new condo tower.
Side note. At this point, some of you might be wondering, "Can't AI researchers just edit the ASI's core goals after the fact if we find that it's acting out?"
Once an ASI matures, any attempt to edit its original goal(s) may be seen as a threat, one it would take extreme actions to defend itself against. To borrow the human reproduction example from earlier, it’s almost as if a thief threatened to steal a baby from the womb of an expectant mother; you can be damn sure that mother would take extreme measures to protect her child.
Again, we’re not talking about a calculator here, but a ‘living’ being, and one that will one day become far smarter than all humans on the planet combined.
Behind the fable of Pandora’s Box is a lesser-known truth that people often forget: opening the box is inevitable, if not by you, then by someone else. Forbidden knowledge is too tempting to remain locked away forever.
This is why trying to reach a global agreement to stop all research into AI that might lead to an ASI is pointless—there are just too many organizations working on this tech both officially and in the shadows.
Ultimately, we have no clue what this new entity, this ASI, will mean for society, technology, politics, peace, and war. We humans are about to invent fire all over again, and where this creation leads us is entirely unknown.
Looping back to the first chapter of this series, the one thing we do know for certain is that intelligence is power. Intelligence is control. Humans can casually visit the world’s most dangerous animals at their local zoos not because we’re physically stronger than these animals, but because we’re significantly smarter.
Given the stakes involved, that an ASI could use its immense intellect to take actions that directly or inadvertently threaten the survival of the human race, we owe it to ourselves to at least try to design safeguards that keep humans in the driver’s seat. That is the topic of the next chapter.