Automated cyberattacks using AI: When machines become cybercriminals

Image credit: iStock


The power of artificial intelligence (AI) and machine learning (ML) is being exploited by hackers to make cyberattacks more effective and lethal.
    • Author: Quantumrun Foresight
    • September 30, 2022


    Artificial intelligence (AI) and machine learning (ML) can automate nearly any task, including learning from repetitive behaviors and patterns, making them a powerful tool for identifying vulnerabilities in a system. More importantly, AI and ML make it challenging to pinpoint the person or entity behind an algorithm.

    Automated cyberattacks using AI context

    In 2022, during a hearing of the US Senate Armed Services Subcommittee on Cybersecurity, Eric Horvitz, Microsoft’s chief scientific officer, referred to the use of artificial intelligence (AI) to automate cyberattacks as “offensive AI,” highlighting that it is difficult to determine whether a cyberattack is AI-driven. Similarly, machine learning (ML) is being used to aid cyberattacks; for example, ML can learn the words and strategies people commonly use when creating passwords, making those passwords easier to crack.

    A survey by the cybersecurity firm Darktrace discovered that IT management teams are increasingly concerned about the potential use of AI in cybercrimes, with 96 percent of respondents indicating that they’re already researching possible solutions. 

    IT security experts are observing a shift in cyberattack methods from ransomware and phishing to more complex malware that is difficult to detect and deflect. One possible risk of AI-enabled cybercrime is the introduction of corrupted or manipulated data into ML models, known as data poisoning. Such an attack can impact software and other technologies currently being developed to support cloud computing and edge AI. Insufficient training data can also reinforce algorithmic biases, such as incorrectly tagging minority groups or steering predictive policing toward marginalized communities. Artificial intelligence can introduce subtle but disastrous information into systems, with long-lasting consequences.
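The data-poisoning risk described above can be illustrated with a toy sketch (a hypothetical, simplified example; the classifier, feature scores, and thresholds are invented for illustration, not drawn from any real system). A nearest-centroid "detector" is trained on one-dimensional suspicion scores; injecting a few mislabeled training points shifts the learned centroids so that a suspicious sample is waved through as benign.

```python
# Toy illustration of training-data poisoning (hypothetical example).
# A nearest-centroid "malware detector" is trained on 1-D suspicion
# scores; flipping a few training labels shifts the learned centroids
# so a suspicious sample is misclassified as benign.

def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (score, label) with label 'benign' or 'malicious'."""
    benign = [s for s, lbl in samples if lbl == "benign"]
    malicious = [s for s, lbl in samples if lbl == "malicious"]
    return centroid(benign), centroid(malicious)

def classify(score, benign_c, malicious_c):
    # Assign the sample to whichever centroid is closer.
    return "benign" if abs(score - benign_c) < abs(score - malicious_c) else "malicious"

clean = [(0.1, "benign"), (0.2, "benign"), (0.3, "benign"),
         (0.8, "malicious"), (0.9, "malicious"), (1.0, "malicious")]

# An attacker injects high-scoring points mislabeled as "benign".
poisoned = clean + [(0.85, "benign"), (0.9, "benign"), (0.95, "benign")]

b1, m1 = train(clean)
b2, m2 = train(poisoned)

print(classify(0.7, b1, m1))  # clean model flags it: malicious
print(classify(0.7, b2, m2))  # poisoned model misses it: benign
```

The point of the sketch is that the attacker never touches the model itself, only the training data, which is why poisoning is hard to spot after the fact.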

    Disruptive impact

    A study by Georgetown University researchers on the cyber kill chain (a checklist of tasks performed to launch a successful cyberattack) showed that specific offensive strategies could benefit from ML. These methods include spearphishing (e-mail scams directed towards specific people and organizations), pinpointing weaknesses in IT infrastructures, delivering malicious code into networks, and avoiding detection by cybersecurity systems. Machine learning can also increase the chances of social engineering attacks succeeding, where people are deceived into revealing sensitive information or performing specific actions like financial transactions. 

    In addition, AI can automate some stages of the cyber kill chain, including: 

    • Extensive surveillance - autonomous scanners gathering information from target networks, including their connected systems, defenses, and software settings. 
    • Vast weaponization - AI tools identifying weaknesses in infrastructure and creating code to exploit these loopholes. This automated detection can also target specific digital ecosystems or organizations. 
    • Delivery or hacking - AI tools using automation to execute spearphishing and social engineering to target thousands of people. 

    As of 2022, writing complex code is still within the realm of human programmers, but experts believe that it won’t be long before machines acquire this skill, too. 

    Implications of automated cyberattacks using AI

    Wider implications of automated cyberattacks using AI may include: 

    • Companies deepening their cyber defense budgets to develop advanced cyber solutions to detect and stop automated cyberattacks.
    • Cybercriminals studying ML methods to create algorithms that can secretly invade corporate and public sector systems.
    • Increased incidents of cyberattacks that are well-orchestrated and target multiple organizations all at once.
    • Offensive AI software utilized to seize control of military weapons, machines, and infrastructure command centers.
    • Offensive AI software utilized to infiltrate, modify, or exploit a company’s systems to take down public and private infrastructure. 
    • Some governments potentially reorganizing the digital defenses of their domestic private sector under the control and protection of their respective national cybersecurity agencies.
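The first implication above, detection solutions for automated attacks, can be sketched with a minimal rate-anomaly check (a hypothetical example; the hosts, rates, and 3-sigma threshold are invented for illustration, and real detectors are far more sophisticated). Automated scanners operate at machine speed, so a request rate far outside the historical baseline is a crude but illustrative signal.

```python
import statistics

# Minimal anomaly check (hypothetical sketch): flag hosts whose
# request rate deviates from the historical baseline by more than
# 3 standard deviations, a crude signal of automated scanning.

def flag_anomalies(baseline, observed, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [(host, rate) for host, rate in observed
            if abs(rate - mean) > threshold * stdev]

# Historical requests-per-minute from normal clients.
baseline = [12, 15, 11, 14, 13, 12, 16, 15, 13, 14]

# Current observations: one host is probing at machine speed.
observed = [("10.0.0.5", 14), ("10.0.0.9", 950), ("10.0.0.7", 13)]

print(flag_anomalies(baseline, observed))  # flags only 10.0.0.9
```

Production systems replace the fixed threshold with learned behavioral baselines per host, but the underlying idea, comparing observed behavior against an expected distribution, is the same.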

    Questions to comment on

    • What are the other potential consequences of AI-enabled cyberattacks?
    • How else can companies prepare for such attacks?


    Insight references

    The following popular and institutional links were referenced for this insight:

    • Center for Security and Emerging Technology – Automating Cyber Attacks