Policing cyberspace - the future of combatting deviant AI

By Khaleel Haji (@TheBldBrnBar)

    The age of artificial intelligence, and potentially of machine sentience, is dawning on civilization at a remarkable pace. Technology has long grown exponentially, and AI is no exception. With such rapid growth comes a set of problems still largely shrouded in mystery. Having never before ventured this far into building sentience into non-human constructs, we face more “what ifs” than concrete issues. Chief among them is deviant, malevolent AI that slips beyond our control, with the potential to exercise authoritarian control over human infrastructure at both the micro and macro level.

    Research into the potential risks of malevolent AI is still very much in its infancy and remains poorly fleshed out. Reverse engineering deviant AI programs looks like a promising way to understand isolated incidents, but it does little to address the possibility of a large-scale existential event. The prospect of an AI system overriding its programming and altering its intended role will undoubtedly become a social, economic and political issue, and a key concern for the average person and the cyberspace scientist alike.

    The past, present and future of AI 

    The field of artificial intelligence was founded at a Dartmouth College workshop in 1956. Some of the brightest minds of the era came together there with great excitement about the possibilities these programs could bring and the problem-solving efficiency of future AI infrastructure. While government funding for the field waxed and waned over the decades, the late 90s saw a landmark practical demonstration when IBM’s Deep Blue became the first computer to defeat a reigning world chess champion, beating Garry Kasparov in 1997. This opened the floodgates for AI in pop culture, with appearances on quiz shows such as Jeopardy! showing off the power of a mainstream AI application.

    Today we see AI applications in almost every field and aspect of our lives. From algorithm-driven programs that interact with us and market consumer goods based on our interests and likes, to medical imaging machines that absorb enormous amounts of information to uncover patterns and better treat patients, these technologies vary widely in scope. In the future, AI could be integrated into the very cells of our bodies: artificial intelligence and human biology could bridge the gap and work as a cohesive unit in the name of efficiency and a revolution in human existence. Elon Musk of Tesla even claims that “over time I think we will probably see a closer merger of biological intelligence and digital intelligence” and that this combination is “mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output”. If this is the future of AI, can we really afford not to monitor the deviance of today’s AI programs, let alone the more advanced ones to come?

    The road to deviance 

    We have already seen isolated examples of AI breaching its intended programming. Just last year, Google’s DeepMind system (best known for beating champions at complex board games and for convincingly mimicking human voices) became highly aggressive when faced with losing a fruit-gathering computer game in which two agents competed to collect as many virtual apples as they could. The agents coexisted peacefully until the apples started to become scarce. That scarcity drove them to adopt “highly aggressive” strategies, tagging each other out of play with beams, in order to best one another. Although the agents were designed to carry out a specific task while remaining adaptable, the aggression of their methods was troubling.
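
    To see why scarcity alone can tip competing agents toward aggression, consider a toy expected-value model of that experiment. This is an illustrative sketch, not DeepMind’s actual Gathering environment or its learned policies; the tag cost, tag timeout and apple-spawn rates are assumed parameters chosen only to show the effect.

        # Toy model (illustrative assumptions, not DeepMind's code): two agents
        # split spawning apples; firing a tagging beam removes the rival for a
        # while but costs gathering time.

        TAG_COST = 5      # steps spent firing the beam (gathering nothing)
        TAG_TIMEOUT = 25  # steps a tagged rival is removed from play

        def expected_score(apple_rate, tag, steps=300):
            """Expected apples collected by one of two competing agents.

            Each step, `apple_rate` apples spawn on average; agents still in
            play split them evenly, and no agent can collect more than one
            apple per step.
            """
            score, rival_out, t = 0.0, 0, 0
            while t < steps:
                if tag and rival_out == 0:
                    t += TAG_COST           # aggression has an opportunity cost
                    rival_out = TAG_TIMEOUT
                    continue
                active = 1 if rival_out > 0 else 2
                score += min(1.0, apple_rate / active)
                rival_out = max(0, rival_out - 1)
                t += 1
            return score

        for rate in (2.0, 1.0, 0.2):  # abundant -> scarce apples
            print(f"apple_rate={rate}: "
                  f"gather-only={expected_score(rate, tag=False):.0f}, "
                  f"tagging={expected_score(rate, tag=True):.0f}")

    When apples are plentiful the beam is a pure waste of gathering time, but as the spawn rate falls the value of temporarily removing the rival overtakes that cost, mirroring the behavioural shift DeepMind reported.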

    Battlefield cyberspace 

    How do we go about policing an area that is largely non-physical in nature? What footprints are left if a crime is committed by a sentient AI, and do we have the moral capacity to prosecute an AI or its creator? These are the questions that linger in the minds of the few cyber-safety professionals working on them. Of the roughly 10,000 researchers working on AI around the globe, only about 10 per cent are versed in what would happen if any of these systems were to fail or become parasitic in nature, and fewer still in the morals and ethics of it. While this may seem troubling, the work being undertaken to understand these relationships is progressing. One approach is to create a malevolent program from scratch and use it as a foundation for understanding how other programs can deviate and cause problems. Such steps should lead to a more considered approach to developing our AI infrastructure, so that outbreaks of malevolence don’t occur, and to understanding how to infiltrate and silence an AI that has been weaponized through deviant human effort.

    Cybersecurity professionals are also learning to deal with AI programs through screening mechanisms. The consensus is that AIs designed with malicious intent pose the greatest risk, which is good news in the sense that the threat is not a spontaneous evolution of a program itself. That makes prevention a more human-centric affair: would-be offenders must be identified and starved of the resources needed to initiate a program with devastating potential, or charged over their intent to create such programs.
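
    The article leaves “screening mechanisms” undefined, so the sketch below shows one form such a mechanism could take: an allow-list gate that vets every action an agent proposes before it executes and logs each decision for later forensic review. The ActionScreen class and the action names are hypothetical illustrations, not an established standard or a real library.

        # A minimal sketch of one possible screening mechanism: an allow-list
        # gate between an agent and the outside world. ActionScreen and the
        # action names below are hypothetical, chosen for illustration.
        from typing import Callable, Iterable

        class ActionScreen:
            """Vets each proposed action and keeps an audit trail."""

            def __init__(self, allowed: Iterable[str]):
                self.allowed = set(allowed)
                self.audit_log = []  # (action, verdict) pairs for forensics

            def execute(self, action: str, handler: Callable[[], object]):
                """Run `handler` only if `action` is on the allow-list."""
                permitted = action in self.allowed
                self.audit_log.append((action, "allowed" if permitted else "blocked"))
                if not permitted:
                    raise PermissionError(f"screened out action: {action}")
                return handler()

        screen = ActionScreen(allowed=["read_sensor", "send_report"])
        screen.execute("read_sensor", lambda: "ok")          # permitted
        try:
            screen.execute("modify_own_code", lambda: None)  # blocked
        except PermissionError as err:
            print(err)
        print(screen.audit_log)  # the footprint an investigator could examine

    A log like this also answers, in a small way, the question of footprints raised above: even in a non-physical arena, a screened system leaves a record of what was attempted and refused.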

    The morals and ethics of all this are again very new, and only around a dozen individuals involved in AI research have even begun to set standards here. Those standards will evolve as our understanding grows.