Computational propaganda: The era of automated deception

Image credit: iStock

Computational propaganda controls populations and makes them more susceptible to disinformation.
    • Author: Quantumrun Foresight
    • November 21, 2022

    Insight summary

    In the age of social media, it has become increasingly difficult for many people to trust what they see and hear due to the spread of computational propaganda: the use of algorithms to manipulate public opinion. This technology is primarily used to sway people's perceptions of political issues. And as artificial intelligence (AI) systems continue to develop, computational propaganda may be applied to increasingly harmful ends.

    Computational propaganda context

    Computational propaganda uses AI systems to create and disseminate misleading or false information online. In particular, Big Tech firms like Facebook, Google, and Twitter have been criticized for using their algorithms to manipulate public opinion. For example, Facebook was accused in 2016 of using its algorithm to suppress conservative news stories from its trending topics section. Meanwhile, the 2016 US presidential election was a high-profile case where computational propaganda was said to influence voters. For example, Google was accused of skewing its search results to favor Hillary Clinton and Twitter was criticized for allowing bots to spread false information during the election. 

    The effects of computational propaganda are felt globally, particularly during national elections and often against minority groups. In Myanmar, between 2017 and 2022, there was a surge in hate speech and violence against the Rohingya Muslim minority. Much of this hatred stemmed from online propaganda engineered by nationalist groups in Myanmar, which spread fake news and inflammatory videos demonizing the Rohingya.

    Another consequence of computational propaganda is that it can erode trust in democracy and institutions. This erosion can have serious consequences, leading to increased polarization and political unrest among a country’s domestic population. Due to its proven effectiveness, many governments and political parties globally are using AI propaganda to weaponize social media platforms against their opponents and critics.

    Disruptive impact

    Computational propaganda is becoming increasingly sophisticated as it integrates a variety of emerging AI innovations. One example is natural language processing (NLP), which enables AI to write original content that sounds human. In addition, deepfake and voice-cloning tools are downloadable by anyone. These technologies allow people to create fake personas, impersonate public figures, and stage elaborate disinformation campaigns from their bedrooms.

    Experts believe that the danger of automated propaganda is magnified by:

    • an uninformed public,
    • a legal system that is ill-equipped to address mass disinformation, and
    • social media companies with little protection against exploitation.

    A potential solution to computational propaganda is for the US Congress to pressure social media companies to verify their users' identities. Another solution is for social media platforms to adopt an account-creation process in which a trusted third party cryptographically verifies an individual's identity before allowing them to create an account.

    However, these measures are challenging to implement because social media platforms constantly change and evolve, making it difficult for these corporations to keep verification processes current. In addition, many people are wary of governments regulating communications platforms, as such regulation can become a form of censorship.
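    As a rough illustration of the third-party verification idea described above, the sketch below shows a verifier issuing a signed attestation that a user's identity has been checked, which the platform then validates before account creation. This is a minimal sketch under assumed names and fields, not a real protocol; a production system would use asymmetric signatures rather than a shared key, so the platform could not forge attestations itself.

    ```python
    import hmac
    import hashlib
    import json

    # Hypothetical shared secret between the identity verifier and the platform.
    # (A real deployment would use public-key signatures instead.)
    SHARED_KEY = b"demo-key-shared-between-verifier-and-platform"

    def issue_attestation(user_id: str) -> dict:
        """Third-party verifier: sign a claim that user_id passed an identity check."""
        claim = json.dumps({"user": user_id, "verified_at": 1700000000},
                           sort_keys=True)
        tag = hmac.new(SHARED_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "tag": tag}

    def platform_accepts(attestation: dict) -> bool:
        """Platform: recompute the tag and compare in constant time."""
        expected = hmac.new(SHARED_KEY, attestation["claim"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, attestation["tag"])

    att = issue_attestation("alice")
    print(platform_accepts(att))  # a valid attestation is accepted
    att["claim"] = att["claim"].replace("alice", "mallory")
    print(platform_accepts(att))  # a tampered claim is rejected
    ```

    Tampering with the claim invalidates the tag, so a bot operator cannot simply forge identities; the open question the article raises is whether users would accept the privacy trade-off of any such scheme.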

    Implications of computational propaganda

    Wider implications of computational propaganda may include: 

    • Governments increasingly using social media and fake news websites for state-sponsored computational propaganda to influence elections, policies, and foreign affairs.
    • The increasing use of social media bots, fake accounts, and AI-generated profiles to spread fabricated news and videos.
    • More violent incidents (e.g., public riots and assassination attempts) caused by online disinformation campaigns, which can harm citizens, destroy public property, and disrupt essential services.
    • Increased investments in public-funded programs designed to train the public to identify disinformation and fake news.
    • Reinforcement of discrimination against ethnic and minority groups, resulting in more genocide and a lower quality of life.
    • Tech companies deploying advanced detection algorithms for identifying and countering computational propaganda, leading to improved digital media integrity and user trust.
    • Educational institutions integrating media literacy into curricula, fostering critical thinking among students to discern factual information from computational propaganda.
    • International collaborations among countries to establish global standards and protocols for combating computational disinformation, enhancing global digital security and cooperation.

    Questions to consider

    • How has computational propaganda affected your country?
    • In what ways do you protect yourself from computational propaganda when consuming content online?
