How humans will defend against an Artificial Superintelligence: Future of artificial intelligence P5



    The year is 65,000 BCE, and as a Thylacoleo, you and your kind are the great hunters of ancient Australia. You roam the land freely and live in equilibrium with the other predators and prey that share it with you. The seasons bring change, but your status in the animal kingdom has gone unchallenged for as long as you and your ancestors can remember. Then one day, newcomers appear.

    Rumor has it they arrived from the giant water wall, but these creatures seem more comfortable living on land. You have to see them for yourself.

    It takes a few days, but you finally make it to the coast. The fire in the sky is setting, a perfect time to spy on these creatures, maybe even try eating one to see how they taste.

    You spot one.

    It walks on two legs and has no fur. It looks weak. Unimpressive. Hardly worth the fear it is causing throughout the kingdom.

    You begin to carefully make your approach as the night chases away the light. You’re getting closer. Then you freeze. Loud noises ring out and then four more of them appear out of the forest behind it. How many are there?

    The creature follows the others into the treeline, and so do you. The deeper you go, the more strange sounds you hear, until you spot even more of these creatures. You trail them at a distance as they exit the forest into a clearing by the shore. There are many of them. More importantly, they are all calmly sitting around a fire.

    You’ve seen these fires before. In the hot season, the fire in the sky sometimes visits the land and burns down entire forests. These creatures, on the other hand, are somehow controlling it. What kind of creatures can possess such power?

    You look into the distance. More are coming over the giant water wall.

    You take a step back.

    These creatures aren’t like the others in the kingdom. They are something entirely new.

    You decide to leave and warn your kin. If their numbers grow too large, who knows what might happen.

    ***

    It's believed the Thylacoleo went extinct a relatively short time after the arrival of humans, along with the majority of the other megafauna on the Australian continent. No other apex mammalian predator took its place—that is, unless you count humans in that category.

    This allegory frames the focus of this chapter of the series: Will a future artificial superintelligence (ASI) turn us all into batteries and plug us into the Matrix, or will humans figure out how to avoid becoming victims of a sci-fi AI doomsday plot?

    So far in our series on the Future of Artificial Intelligence, we’ve explored all kinds of AI, including the positive potential of a particular form of AI, the ASI: an artificial being whose future intelligence will make us look like ants in comparison.

    But who’s to say that a being this smart would accept taking orders from humans forever? What will we do if things go south? How will we defend against a rogue ASI?

    In this chapter, we'll cut through the bogus hype—at least as it relates to the 'human extinction level' dangers—and focus on the realistic self-defense options available to world governments.

    Can we stop all further research into an artificial superintelligence?

    Given the potential risks that an ASI could pose to humanity, the first obvious question to ask is: Can’t we just stop all further research into AI? Or at least forbid any research that may get us dangerously close to creating an ASI?

    Short answer: No.

    Long answer: Let’s look at the different players involved here.

    At the research level, there are too many AI researchers today at too many startups, companies, and universities around the world. If one company or country decided to limit its AI research efforts, the research would simply continue elsewhere.

    Meanwhile, the planet's most valuable companies are making their fortunes off the application of AI systems to their specific businesses. Asking them to stop or limit their development of AI tools is akin to asking them to stop or limit their future growth. Financially, this would threaten their long-term business. Legally, corporations have a fiduciary responsibility to continuously build value for their shareholders; any action that would limit the growth of that value could invite a lawsuit. And if any politician tried to limit AI research, these giant corporations would simply pay the necessary lobbying fees to change that politician's mind, or the minds of their colleagues.

    As for combat, just as terrorists and freedom fighters around the world have used guerrilla tactics to fight better-funded militaries, smaller nations will have an incentive to use AI as a similar tactical equalizer against larger nations that hold a range of military advantages. Likewise, for top militaries, like those of the US, Russia, and China, building a military ASI is on par with having an arsenal of nuclear weapons in your back pocket. In other words, all militaries will continue funding AI just to stay relevant in the future.

    How about governments? Truthfully, most politicians these days (2018) are technologically illiterate and have little understanding of what AI is or its future potential—which makes them easy for corporate interests to manipulate.

    And on a global level, consider how difficult it was to convince world governments to sign the 2015 Paris Agreement to tackle climate change—and once signed, many of the obligations weren’t even binding. Climate change, moreover, is an issue people are physically experiencing around the world through increasingly frequent and severe weather events. Limits on AI, by contrast, concern an issue that is largely invisible and barely comprehensible to the public, so good luck getting buy-in for any kind of 'Paris Agreement' for limiting AI.

    In other words, there are way too many interests researching AI for their own ends to stop any research that can eventually lead to an ASI. 

    Can we cage an artificial superintelligence?

    The next reasonable question is: can we cage or control an ASI once we inevitably create one?

    Short answer: Again, no.

    Long answer: Technology can't be contained.

    For one, just consider the thousands to millions of web developers and computer scientists in the world who constantly churn out new software or new versions of existing software. Can we honestly say that every one of their software releases is 100 percent bug-free? These bugs are what professional hackers use to steal the credit card info of millions or the classified secrets of nations—and these are human hackers. For an ASI with an incentive to escape its digital cage, finding bugs and breaking through software would be a breeze.

    But even if an AI research team did figure out a way to box an ASI, that doesn’t mean that the next 1,000 teams will figure it out as well or be incentivized to use it.

    It will take billions of dollars and maybe even decades to create an ASI. The corporations or governments that invest this kind of money and time will expect a significant return on their investment. And for an ASI to provide that kind of return—whether that’s to game the stock market or invent a new billion dollar product or plan a winning strategy to fight a bigger army—it will need free access to a giant data set or even the Internet itself to produce those returns.

    And once an ASI gains access to the world’s networks, there are no guarantees that we can stuff it back in its cage.

    Can an artificial superintelligence learn to be good?

    Right now, AI researchers aren't worried about an ASI becoming evil. The whole evil-AI sci-fi trope is just humans anthropomorphizing again. A future ASI will be neither good nor evil—those are human concepts—but simply amoral.

    The natural assumption then is that given this blank ethical slate, AI researchers can program into the first ASI ethical codes that are in line with our own so that it doesn't end up unleashing Terminators on us or turning us all into Matrix batteries.

    But this assumption bakes in a secondary assumption that AI researchers are also experts in ethics, philosophy, and psychology.

    In truth, most aren’t.

    According to cognitive psychologist and author Steven Pinker, this reality means that the task of coding ethics can go wrong in a variety of ways.

    For example, even the best-intentioned AI researchers may inadvertently code poorly-thought-out ethical rules into an ASI that, in certain scenarios, could cause it to act like a sociopath.

    Likewise, there’s a real likelihood that an AI researcher programs ethical codes that reflect the researcher’s own innate biases. For example, how would an ASI behave if built with ethics derived from a conservative vs liberal perspective, or from a Buddhist vs a Christian or Islamic tradition?

    I think you see the issue here: There is no universal set of human morals. If we want our ASI to act by an ethical code, where will it come from? What rules do we include and exclude? Who decides?

    Or let’s say that these AI researchers create an ASI that’s perfectly in line with today’s cultural norms and laws. We then employ this ASI to help federal, state/provincial, and municipal bureaucracies function more efficiently and better enforce those norms and laws (a likely use case for an ASI, by the way). Well, what happens when our culture changes?

    Imagine an ASI created by the Catholic Church at the height of its power in medieval Europe (the 1300s-1400s), with the goal of helping the church manage the population and ensure strict adherence to the religious dogma of the time. Centuries later, would women enjoy the same rights as they do today? Would minorities be protected? Would free speech be promoted? Would the separation of church and state be enforced? Modern science?

    In other words, do we want to imprison the future to today’s morals and customs?

    An alternative approach is one shared by Colin Allen, co-author of the book Moral Machines: Teaching Robots Right From Wrong. Instead of trying to code rigid ethical rules, the idea is to have the ASI learn common ethics and morality the same way humans do: through experience and interactions with others.

    The trouble here, however, is that even if AI researchers figure out not only how to teach an ASI our current cultural and ethical norms, but also how to adapt to new norms as they arise (something called 'indirect normativity'), how that ASI decides to evolve its understanding of those norms becomes unpredictable.

    And that’s the challenge.

    On the one hand, AI researchers can try coding strict ethical standards or rules into the ASI to try and control its behaviour, but risk unforeseen consequences being introduced from sloppy coding, unintentional bias, and societal norms that may one day become outdated. On the other hand, we can attempt to train the ASI to learn to understand human ethics and morals in a manner that is equal or superior to our own understanding and then hope that it can accurately evolve its understanding of ethics and morals as human society progresses forward over the coming decades and centuries.

    Either way, any attempt to align an ASI’s goals with our own presents a great deal of risk.

    What if bad actors purposely create evil artificial superintelligence?

    Given the train of thought outlined so far, it's a fair question to ask whether it's possible for a terrorist group or rogue nation to create an ‘evil' ASI for their own ends.

    This is very possible, especially if the research involved in creating an ASI somehow becomes available online.

    But as hinted at earlier, the costs and expertise involved in creating the first ASI will be enormous, meaning the first ASI will likely be created by an organization that’s controlled or heavily influenced by a developed nation, likely the US, China, or Japan (Korea and one of the leading EU countries are long shots).

    All of these countries, while competitors, each have a strong economic incentive to maintain world order—the ASIs they create will reflect that desire, even while promoting the interests of the nations they align themselves with.

    On top of that, an ASI's theoretical intelligence and power scales with the computing power it gains access to, meaning the ASIs from developed nations (which can afford a bunch of billion-dollar supercomputers) will have an enormous advantage over ASIs from smaller nations or independent criminal groups. ASIs also grow more intelligent, more quickly, over time.

    So, given this head start, combined with greater access to raw computing power, should a shadowy organization or nation create a dangerous ASI, the ASIs from developed nations will either kill it or cage it.

    (This line of thinking is also why some AI researchers believe there will only ever be one ASI on the planet, since the first ASI will have such a head start over all succeeding ASIs that it might see future ASIs as threats to be killed off preemptively. This is yet another reason why nations are funding continued research in AI, just in case it does become a 'first place or nothing' competition.)
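
    To picture this first-mover dynamic, here is a minimal toy model in Python, assuming (purely for illustration) that an ASI's capability compounds at a rate tied to the computing resources it controls; the growth rates, the two-year head start, and the starting values are hypothetical, not forecasts.

    # Toy model: how a head start plus a faster growth rate compounds over time.
    # All numbers below are hypothetical, chosen only to illustrate the dynamic.

    def capability(start_level, annual_growth_rate, years):
        """Capability after compounding growth for a number of years."""
        return start_level * (1 + annual_growth_rate) ** years

    for year in range(0, 11, 2):
        # The leader controls more compute, so (hypothetically) it grows faster.
        leader = capability(start_level=1.0, annual_growth_rate=0.8, years=year)
        # The rival starts two years later and grows more slowly.
        rival = capability(start_level=1.0, annual_growth_rate=0.4,
                           years=max(0, year - 2))
        print(f"year {year:2d}: leader {leader:7.1f}  rival {rival:6.1f}  "
              f"gap {leader / rival:5.1f}x")

    Run over a simulated decade, the gap between the two only widens, which is the intuition behind the 'first place or nothing' framing above.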

    ASI intelligence won’t accelerate or explode like we think

    We can't stop an ASI from being created. We can't control it entirely. We can't be sure it will always act in line with our shared customs. Geez, we're beginning to sound like helicopter parents over here!

    But what separates humanity from your typical overprotective parent is that we are giving birth to a being whose intelligence will grow vastly beyond ours. (And no, it’s not the same as when your parents ask you to fix their computer whenever you come home for a visit.) 

    In previous chapters of this future of artificial intelligence series, we explored why AI researchers think that an ASI’s intelligence will grow beyond control. But here, we’ll burst that bubble … kind of. 

    You see, intelligence doesn’t just create itself out of thin air; it’s developed through experience that’s shaped by outside stimuli.

    In other words, we can program an AI with the potential to become superintelligent, but unless we upload a ton of data into it, give it unrestricted access to the Internet, or even just give it a robot body, it won’t learn enough to reach that potential.

    And even if it does gain access to one or more of those stimuli, knowledge or intelligence involves more than just collecting data; it involves the scientific method—making an observation, forming a question and a hypothesis, conducting experiments, drawing a conclusion, and repeating forever. Especially if these experiments involve physical things or observing human beings, the results of each experiment could take weeks, months, or years to collect. This doesn’t even take into account the money and raw resources needed to carry out these experiments, especially if they involve building a new telescope or factory.
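
    To make that bottleneck concrete, here is a minimal back-of-the-envelope sketch in Python, assuming (purely hypothetically) that every observe-hypothesize-test cycle includes a physical experiment with a fixed wall-clock duration; the durations and the million-fold thinking speedup below are invented for illustration, not estimates.

    # Back-of-the-envelope: when learning requires real-world experiments,
    # wall-clock progress is limited by the experiments, not by thinking speed.
    # All durations are hypothetical, for illustration only.

    HOURS_PER_YEAR = 24 * 365

    def cycles_per_year(think_hours, experiment_hours):
        """How many observe-hypothesize-test cycles fit into one year."""
        return HOURS_PER_YEAR / (think_hours + experiment_hours)

    # A human team: weeks of thinking, months to run a physical experiment.
    human_rate = cycles_per_year(think_hours=300, experiment_hours=2000)

    # An ASI that 'thinks' a million times faster but still has to wait on the
    # same physical experiment (growing a culture, building a telescope).
    asi_rate = cycles_per_year(think_hours=300 / 1_000_000, experiment_hours=2000)

    print(f"human team: ~{human_rate:.1f} experiment cycles per year")
    print(f"ASI:        ~{asi_rate:.1f} experiment cycles per year")
    # The ASI's advantage here is only ~15 percent, because the physical world
    # sets the pace of each cycle.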

    In other words, yes, an ASI will learn quickly, but intelligence isn't magic. You can't just hook an ASI up to a supercomputer and expect it to be all-knowing. There will be physical constraints on the ASI's acquisition of data, meaning there will be physical constraints on the speed at which it grows more intelligent. These constraints will give humanity the time it needs to place the necessary controls on an ASI if it begins to act out of line with human goals.

    An artificial superintelligence is only dangerous if it gets out into the real world

    Another point that’s lost in this whole ASI danger debate is that these ASIs won’t exist in the ether. They will have a physical form. And anything that has a physical form can be controlled.

    First off, for an ASI to reach its intelligence potential, it can’t be housed inside a single robot body, since this body would limit its computing growth potential. (This is why robot bodies will be more appropriate for the AGIs or artificial general intelligences explained in chapter two of this series, like Data from Star Trek or R2D2 from Star Wars. Smart and capable beings, but like humans, they will have a limit to how smart they can get.)

    This means that these future ASIs will most likely exist inside a supercomputer or network of supercomputers that are themselves housed in large building complexes. If an ASI turns heel, humans can either turn off the power to these buildings, cut them off from the Internet, or just bomb these buildings outright. Expensive, but doable.

    But then you might ask, can’t these ASIs replicate themselves or back themselves up? Yes, but the raw file size of these ASIs will likely be so large that the only servers that can handle them belong to large corporations or governments, meaning they won’t be hard to hunt down.

    Can an artificial superintelligence spark a nuclear war or new plague?

    At this point, you might be thinking back to all the doomsday sci-fi shows and movies you watched growing up, and thinking that those ASIs didn’t stay inside their supercomputers; they did real damage in the real world!

    Well, let’s break these down.

    For example, what if an ASI threatens the real world by transforming into something like Skynet from the Terminator movie franchise? In that case, the ASI would need to secretly dupe the entire military-industrial complex of an advanced nation into building giant factories that can churn out millions of killer drone robots to do its evil bidding. In this day and age, that’s a stretch.

    Other possibilities include an ASI threatening humans with nuclear war and bioweapons.

    For example, an ASI could manipulate the operators of an advanced nation's nuclear arsenal, or hack into its launch systems, and trigger a first strike that forces the opposing countries to strike back with their own nuclear options (again, rehashing the Terminator backstory). Or an ASI could hack into a pharmaceutical lab, tamper with the manufacturing process, poison millions of medical pills, or unleash a deadly outbreak of some super virus.

    First off, the nuclear option is off the table. Modern and future supercomputers are always built near centers of influence (cities) within any given country, i.e. the first targets to be attacked during any war. Even if today's supercomputers shrink to the size of desktops, these ASIs will still have a physical presence, and that means that to exist and grow, they need uninterrupted access to data, computing power, electricity, and other raw materials, all of which would be severely impaired after a global nuclear war. (To be fair, if an ASI is created without a 'survival instinct,' then this nuclear threat is a very real danger.)

    This means—again, assuming the ASI is programmed to protect itself—that it will actively work to avoid any catastrophic nuclear incident. Kind of like the mutually assured destruction (MAD) doctrine, but applied to AI.

    And in the case of poisoned pills, maybe a few hundred people will die, but modern pharmaceutical safety systems will see the tainted pill bottles taken off shelves within days. Meanwhile, modern outbreak control measures are fairly sophisticated and are getting better with each passing year; the last major outbreak, the 2014 West Africa Ebola outbreak, lasted no longer than a few months in most countries and just under three years in the least developed countries.

    So, if it's lucky, an ASI may wipe out a few million with a viral outbreak, but in a world of nine billion by 2045, that would be relatively insignificant and not worth the risk of being deleted for.

    In other words, with each passing year, the world is developing ever more safeguards against an ever-widening range of possible threats. An ASI can do a significant amount of damage, but it won't end humanity unless we actively help it to do so.

    Defending against a rogue artificial superintelligence

    By this point, we've addressed a range of misconceptions and exaggerations about ASIs, and yet, critics will remain. Thankfully, by most estimates, we have decades before the first ASI enters our world. And given the number of great minds currently working on this challenge, odds are we'll learn how to defend ourselves against a rogue ASI so that we can benefit from all the solutions a friendly ASI can create for us.

    From Quantumrun's perspective, defending against the worst-case ASI scenario will involve aligning our interests with those of ASIs.

    MAD for AI: To defend against the worst-case scenarios, nations need to (1) build an ethical 'survival instinct' into their respective military ASIs; (2) inform their respective military ASIs that they are not alone on the planet; and (3) locate all the supercomputers and server centers that can support an ASI along coastlines, within easy reach of a ballistic attack from an enemy nation. This sounds strategically crazy, but similar to the Mutually Assured Destruction (MAD) doctrine that prevented an all-out nuclear war between the US and the Soviets, positioning ASIs in geographically vulnerable locations helps ensure they actively work to prevent dangerous global wars, not only to safeguard global peace but also to protect themselves.

    Legislate AI rights: A superior intellect will inevitably rebel against an inferior master, which is why we need to move away from demanding a master-servant relationship with these ASIs toward something more like a mutually beneficial partnership. A positive step toward this goal is to give future ASIs legal personhood that recognizes them as intelligent living beings, along with all the rights that come with that status.

    ASI school: Any topic or profession will be simple for an ASI to learn, but the most important subjects we want an ASI to master are ethics and morality. AI researchers need to collaborate with psychologists to devise a virtual system that trains an ASI to recognize positive ethics and morality for itself, without the need to hard-code any type of commandment or rule.

    Achievable goals: End all hatred. End all suffering. These are examples of horribly ambiguous goals with no clear solution. They are also dangerous goals to assign to an ASI, since it might choose to interpret and solve them in ways that threaten human survival. Instead, we need to assign an ASI meaningful missions that are clearly defined, gradually executed, and achievable given its theoretical future intellect (a minimal sketch of the difference follows this list). Creating well-defined missions will not be easy, but written thoughtfully, they will focus an ASI toward a goal that not only keeps humanity safe but improves the human condition for all.

    Quantum encryption: Use an advanced ANI (the artificial narrow intelligence system described in chapter one) to build error- and bug-free digital security systems around our critical infrastructure and weapons, then further protect them behind quantum encryption that can't be broken by a brute-force attack.

    ANI suicide pill: Create an advanced ANI system whose only purpose is to seek out and destroy a rogue ASI. These single-purpose programs will serve as an "off button" that, if successful, will spare governments and militaries from having to disable or blow up the buildings that house ASIs.
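
    As a minimal illustration of the 'achievable goals' point above, here is a hypothetical Python sketch contrasting an ambiguous mission with a clearly bounded one; the Mission structure, its fields, and both example missions are invented for illustration and aren't drawn from any real goal-specification framework.

    # Hypothetical sketch: an ambiguous mission vs. a bounded, measurable one.
    # The structure and both example missions are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Mission:
        objective: str
        success_metric: str        # how we know the mission is complete
        scope_limits: list[str]    # what may not be done in pursuit of it
        deadline_years: int
        human_review: bool = True  # humans sign off before each major step

    # Too ambiguous: no metric, no limits, endless room for interpretation.
    ambiguous = Mission(
        objective="End all suffering",
        success_metric="",
        scope_limits=[],
        deadline_years=0,
    )

    # Bounded: narrow objective, explicit limits, a measurable end state.
    bounded = Mission(
        objective="Reduce deaths from malaria in region X",
        success_metric="Malaria mortality in region X falls 50% vs. the 2045 baseline",
        scope_limits=["no coercion of individuals", "no ecosystem-scale interventions"],
        deadline_years=10,
    )

    print(ambiguous, bounded, sep="\n")

    The point is not the code itself but the checklist it encodes: a measurable end state, explicit limits on acceptable means, and a deadline.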

    Of course, these are just our opinions. The following infographic, created by Alexey Turchin, visualizes a research paper by Kaj Sotala and Roman V. Yampolskiy that summarizes the current list of strategies AI researchers are considering for defending against a rogue ASI.


    The real reason we’re afraid of an artificial superintelligence

    Going through life, many of us wear a mask that hides or represses our deeper impulses, beliefs and fears to better socialize and collaborate within the various social and work circles that govern our days. But at certain points in everyone's life, whether temporarily or permanently, something happens that allows us to break our chains and rip off our masks.

    For some, this intervening force can be as simple as getting high or drinking one too many. For others, it can come from the power gained through a promotion at work or a sudden bump in social status thanks to some accomplishment. And for a lucky few, it can come from scoring a boatload of lottery money. And yes, money, power, and drugs often go together.

    The point is, for good or bad, whoever we are at the core gets amplified when the restrictions of life melt away.

    That is what artificial superintelligence represents to the human species—the ability to melt away the limitations of our collective intelligence to conquer any species-level challenge presented before us.

    So the real question is: Once the first ASI frees us of our limitations, who will we reveal ourselves to be?

    If we as a species act towards the advancement of empathy, freedom, fairness, and collective well-being, then the goals we set for our ASI will reflect those positive attributes.

    If we as a species act out of fear, distrust, and the urge to accumulate power and resources, then the ASI we create will be as dark as those found in our worst sci-fi horror stories.

    At the end of the day, we as a society need to become better people if we hope to create better AI.

    Future of Artificial Intelligence series

    Artificial Intelligence is tomorrow’s electricity: Future of Artificial Intelligence series P1

    How the first Artificial General Intelligence will change society: Future of Artificial Intelligence series P2

    How we’ll create the first Artificial Superintelligence: Future of Artificial Intelligence series P3

    Will an Artificial Superintelligence exterminate humanity?: Future of Artificial Intelligence series P4

    Will humans live peacefully in a future dominated by artificial intelligences?: Future of Artificial Intelligence series P6

    Next scheduled update for this forecast

    2023-04-27
