Cybersecurity

The secret robot armies fighting to undermine democracy

The EU-funded COMPROP project set out to investigate networks of automated social media accounts and their role in shaping public opinion. Though initially focussed on Twitter, the team at the University of Oxford’s Programme on Democracy and Technology found computational propaganda – algorithms put to work for a political agenda – on Facebook, Instagram, Telegram, YouTube, and even the dating app Tinder. “We didn’t expect that over the course of the project the problem would grow as bad as it did,” notes principal investigator Philip Howard. “We can see how some governments, lobbyists, the far right and white supremacists all use these techniques to manipulate democracies.”

Under the influence

Typically the goal is to spread misinformation. “When Malaysia Airlines Flight 17 was shot down over Ukraine, there were multiple ridiculous stories of what transpired – that democracy advocates shot it down, that United States troops shot it down, that a lost tank from WWII came out of the forest and shot it down,” adds Howard. By laying out multiple conflicting stories, authoritarian regimes leave their citizens unable to tell which narrative to believe. This strategy was eventually turned outward, to undermine social movements and destabilise foreign nations. “Sometimes campaigns are about a specific crisis or person, but often the goal is to undermine trust in courts, police, journalism, science, or government at large,” explains Howard.

The target audience for these bots is perhaps only 10-20% of the population: typically disaffected, conservative-leaning adults who are politically active. In a highly polarised country, swaying 10% of the electorate can have a resounding impact. Howard adds that these campaigns are particularly damaging to the role of women and minorities in public life: “It’s much easier to drive a woman out of public life than a man.”

Pandemic propaganda

The COMPROP project focussed heavily on COVID-19 misinformation, which Howard notes came chiefly from three sources: Russian media, Chinese media, and former United States president Donald Trump. While Trump’s disinformation was tied to domestic United States politics, Russia and China pushed three broad themes intended for foreign audiences. “The first was that democracy can’t help us – elected leaders are too weak to make decisions,” adds Howard. “The second message was that Russian or Chinese scientists were going to get the vaccine first, and the third was that Russia or China was leading on humanitarian assistance efforts.”

Howard says more effort is needed to contain these propaganda networks. “We’re past the point of self-regulation by industry. If tech firms stepped up, and governments imposed fines on politicians who commission these programmes, that set of initiatives would go a long way.” Yet even identifying which social media accounts are automated has proven difficult. “One bot writer in Germany said his team would read our methodology papers and adjust their algorithms to just below our catchment,” says Howard. “We were in a sort of dialogue with these programmers.”

Howard and his team are now focussed on how machine learning technology will power a new generation of computational propaganda. “If someone can take your social media feed and behavioural data, and come up with political messages you’ll respond to, they’ll do that,” he concludes. “This is the next great threat.”
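As a concrete illustration of the cat-and-mouse game Howard describes, the sketch below shows a simple posting-frequency heuristic of the kind bot writers can tune their output to sit “just below”. It is a hypothetical Python example, not the project’s actual detection pipeline: the 50-posts-per-day threshold, the flag_high_frequency_accounts function, and the data shapes are all illustrative assumptions.

    # Minimal sketch of a posting-frequency heuristic for flagging
    # possible automation. Illustrative only: the threshold and the
    # data shapes are assumptions, not the COMPROP project's method.
    from collections import Counter
    from datetime import datetime

    HIGH_FREQUENCY_THRESHOLD = 50  # posts per active day; easily gamed

    def flag_high_frequency_accounts(posts):
        """posts: iterable of (account_id, timestamp) pairs.
        Returns the set of accounts whose average posting rate,
        over the days they were active, exceeds the threshold."""
        counts = Counter()      # total posts per account
        active_days = {}        # set of calendar days each account posted on
        for account, ts in posts:
            counts[account] += 1
            active_days.setdefault(account, set()).add(ts.date())
        return {
            account
            for account, total in counts.items()
            if total / len(active_days[account]) > HIGH_FREQUENCY_THRESHOLD
        }

    # Example: 120 posts in one day gets flagged; 5 posts does not.
    sample = [("suspect", datetime(2020, 3, 1, 0, m % 60)) for m in range(120)]
    sample += [("typical_user", datetime(2020, 3, 1, 9, 0))] * 5
    print(flag_high_frequency_accounts(sample))  # {'suspect'}

The weakness is visible in the arithmetic: an operator who caps each account at 49 posts per day slips under the threshold entirely, which is why purely frequency-based detection invites exactly the adaptation Howard’s team observed.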
