Twitter Provides a New Overview of ‘State-Backed’ Information Operations on its Platform
If there was ever any doubt about the level of manipulation occurring on social media networks every day, Twitter's new report should further underline the concern, while also showcasing the efforts now being undertaken to remove such activity.
As part of its ongoing transparency efforts, Twitter has this week added six new datasets to its research archive of “Tweets and media associated with known state-backed information operations on Twitter”.
State-backed operations are essentially government-supported initiatives on social platforms designed to influence public opinion one way or another. Twitter first launched the archive in late 2018, and since then, the company says, the information has been used by a range of academic organizations to help improve their detection processes.
“Thousands of researchers from across the globe have downloaded the datasets, which contain more than 30 million Tweets and over 1 terabyte of media, using our archive to conduct their own investigations and to share their insights and independent analysis with the world. We believe that people and organizations with the advantages of institutional power, and which consciously abuse our service, are not advancing healthy discourse but are actively working to undermine it. By making this data open and accessible, we seek to empower researchers, journalists, governments, and members of the public to deepen their understanding of critical issues impacting the integrity of public conversation online, particularly around elections. This transparency is core to our mission.”
That’s a lot of attempted manipulation – and when you combine this with similar disclosures from Facebook, the scope of the issue becomes frighteningly clear.
As an outline, in this year alone (as reported by Facebook and Twitter):
- Twitter has removed 4,779 accounts and their activity originating from Iran, while Facebook has removed more than 800 Pages and 36 Facebook accounts, also linked to Iranian-backed organizations (initial action in January).
- Facebook has removed 265 Facebook and Instagram accounts, Pages, Groups and events linked to Israel.
- Facebook has removed 500 Facebook accounts, Pages and Groups linked to Russia (initial findings from January), while Twitter has removed 4 accounts which were found to be connected with Russia's Internet Research Agency (which has been linked to manipulation leading into the 2016 US Presidential Election).
- Facebook has removed 103 Pages, Groups and accounts on both Facebook and Instagram which had been found to be engaging in coordinated inauthentic behavior as part of a network that originated in Pakistan.
- Facebook has removed 687 Facebook Pages and accounts which had engaged in coordinated inauthentic behavior in India.
- Facebook has taken action against 420 Pages, Groups and accounts based in the Philippines which engaged in "coordinated inauthentic behavior" on Facebook and Instagram (initial action in January).
- Facebook has banned 2,632 Pages, Groups and accounts which were found to be connected to state-backed operations originating from Iran, Russia, Macedonia and Kosovo.
- Facebook has removed 137 Facebook and Instagram accounts, Pages and Groups which were part of a domestic-focused network in the UK.
- Facebook has taken action against 4 Pages, 26 Facebook accounts, and 1 Group which originated from Romania.
- Twitter has removed 130 accounts linked to Spain.
- Facebook has removed 168 Facebook accounts, 28 Pages and 8 Instagram accounts for engaging in "coordinated inauthentic behavior" targeting people in Moldova.
- Facebook has removed 234 accounts, Pages and Groups from Facebook and Instagram as part of a domestic network in Indonesia.
- Twitter has removed 33 accounts connected to Venezuela.
- Facebook has removed 9 Facebook Pages and 6 Facebook accounts for engaging in "coordinated inauthentic behavior" originating from Bangladesh.
- Facebook has also taken increased action in Myanmar, including the removal of at least three coordinated misinformation networks.
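To make the scale of the list above concrete, here's a rough, illustrative tally of the removal figures reported by the two platforms. It mixes accounts, Pages, Groups and events into single counts, treats Iran's "more than 800 Pages" as 800, and leaves Myanmar out entirely since no account figures were reported there, so treat the totals as a conservative back-of-envelope estimate rather than an official count.

```python
# Rough tally of the 2019 removal figures reported by Twitter and Facebook.
# Counts mix accounts, Pages, Groups and events; "more than 800" is counted
# as 800, and Myanmar is excluded (no account counts were reported).
twitter_removals = {
    "Iran": 4779,
    "Russia (Internet Research Agency)": 4,
    "Spain": 130,
    "Venezuela": 33,
}

facebook_removals = {
    "Iran (January action)": 800 + 36,
    "Israel": 265,
    "Russia (January findings)": 500,
    "Pakistan": 103,
    "India": 687,
    "Philippines": 420,
    "Iran/Russia/Macedonia/Kosovo": 2632,
    "UK": 137,
    "Romania": 4 + 26 + 1,
    "Moldova": 168 + 28 + 8,
    "Indonesia": 234,
    "Bangladesh": 9 + 6,
}

twitter_total = sum(twitter_removals.values())
facebook_total = sum(facebook_removals.values())

print(f"Twitter removals:  {twitter_total}")   # 4946
print(f"Facebook removals: {facebook_total}")  # 6064
print(f"Combined:          {twitter_total + facebook_total}")  # 11010
```

Even with the conservative assumptions, the combined figure lands above 11,000 removed accounts, Pages and Groups in a single year, across just the operations that were detected and disclosed.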
The breadth of these misinformation and manipulation campaigns is almost too much to comprehend, as each utilizes its own complex web of lies and tricks to influence different groups of voters.
Have you been influenced? Almost certainly, but it's impossible to know exactly how, or where that manipulation is coming from.
For example, you might be web savvy and well-versed in the various issues and the digital methods being used to push people in one direction or another. But let's say your uncle isn't so web savvy, and regularly shares misinformation about climate change or a similar issue. You see these posts, and you feel compelled to respond, or maybe it just entrenches you even further in your own views. In this sense, your opinion has been manipulated, even though you might feel like you're not directly impacted.
The idea of these campaigns is often to fuel division, so even if you’re aware that the information being shared isn’t true, it still sparks an emotional response – which may, in turn, increase the divide between those for and against an issue.
That's why this is such an important issue, and as you can see from the massive list of findings thus far, it's likely happening a lot more often than you'd suspect. And these are just the operations that have been found – there's no way of knowing how many other, similar operations are still in effect.
That's why we need datasets like this one from Twitter, so academic researchers and advisory groups can develop methods to better detect and remove such activity, and stop state-backed groups, and other organizations, from using social media platforms to bend opinion to their will.
Again, even if you feel like you're largely immune to such efforts, you're not, and we need to get better at determining how these initiatives are being undertaken in order to maximize the benefits of digital connectivity.