How to spot fake social media accounts, bots and trolls
January 7, 2022

In 2014, The Washington Post ran a story titled "Irony alert: First tweet from Putin account congratulated Obama?" Four years later, Business Insider published a piece about Russian President Vladimir Putin following only 19 people on Twitter, one of whom had been dead for five years. It turns out that both media outlets were duped by a fake, English-language Twitter account impersonating the Russian president. They were not the only respected media organizations to make this mistake.
Set up in November 2012, the fake account had amassed almost a million followers by November 2018, when it was finally suspended by Twitter for impersonating the Russian leader. Confusingly, Putin's official, verified English-language Twitter feed has a similar number of followers.
The now-defunct imposter account mainly retweeted official Kremlin statements rather than spreading misinformation, which allowed it to go undetected for so long.
Fake accounts of this kind are increasingly used as tools of information warfare. Fortunately, by now most major social media platforms are aware of this threat.
The precise number of fake Twitter accounts is unknown. According to one Twitter staffer, however, the platform challenges between 8.5 million and 10 million bots each week, with two-thirds of malicious accounts removed automatically. Facebook estimates that 5% of its worldwide monthly users are fakes. The social network deleted some 1.7 billion fraudulent accounts in the second quarter of 2021 alone.
What constitutes a fake account?
According to Facebook, each day the platform blocks millions of attempts to set up fake accounts. Facebook says such accounts are "created with malicious intent to violate our policies."
Twitter, for its part, reserves the right to permanently suspend accounts that impersonate individuals, brands or organizations in a misleading or deceptive manner. Accounts that merely happen to have a username or profile picture similar to someone else's are not automatically in violation of this policy.
How can I spot a fake account?
Genuine Facebook, Twitter and Instagram accounts operated by companies and persons of public interest usually bear a blue verification symbol. You will see this symbol on the profile pages of politicians and celebrities.
It helps you distinguish, for example, the official profile pages of Facebook founder Mark Zuckerberg and Tesla boss Elon Musk from fraudulent ones that may not be readily apparent as fake.
Recently, however, Twitter admitted mistakenly verifying numerous inauthentic accounts.
When accounts lack a verification badge, pay attention to:
- Account names and profile URLs
Often, fraudsters will retroactively change their Twitter or Facebook usernames after registering on the platform. In this case, the original Twitter account name — preceded by the @ symbol — will provide a clue that something could be wrong. Likewise, a divergent Facebook URL should make you suspicious.
Take, for instance, a Facebook account supposedly owned by Elon Musk. Registered under the misspelled username "Muskk" at the URL "facebook.com/elonreeve.musk.338," it is highly unlikely to be Elon Musk's actual Facebook page. Beyond that, organizations and persons of public interest will usually link to their real social media accounts from their official websites.
If you want to verify the authenticity of an account registered to a person who is not in the public eye, use search engines to check whether said person is present on other social networks using the same name. Check if the profile pictures used are similar. Also check if profile biographies, contact and location details match up. And examine whether the accounts share similar content. If you can detect a large degree of overlap, then you are most likely dealing with a genuine account by a real person. It is very unlikely, after all, that fake accounts with identically named profiles and near-identical content are operated on different social networks.
Do bear in mind that these are clues, rather than solid evidence, for gauging whether or not an account is authentic.
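If you are comfortable with a little scripting, that cross-check can be made more systematic. The Python sketch below compares profile fields copied by hand from two networks; the example data and the similarity measure are purely illustrative, and a high score remains a clue rather than proof.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0-to-1 similarity score for two short text fields."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Hypothetical profile details copied by hand from two networks
twitter_profile = {
    "name": "Jane Example",
    "bio": "Travel blogger. Coffee, cameras, carry-on luggage.",
    "location": "Berlin, Germany",
}
facebook_profile = {
    "name": "Jane Example",
    "bio": "Travel blogger - coffee, cameras and carry-on luggage.",
    "location": "Berlin",
}

for field in ("name", "bio", "location"):
    score = similarity(twitter_profile[field], facebook_profile[field])
    print(f"{field:<10} similarity: {score:.2f}")

# Consistently high scores suggest the two profiles belong to the same
# person; large mismatches are a clue (not proof) that one may be fake.
```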
- Profile pictures
You can also study profile pictures for hints, provided one is available (to learn more about spotting manipulated images, see the explainer in this series). Use a reverse image search (a service offered by Google, Bing and Yandex) to find out whether the picture depicts someone other than the claimed account holder, or whether the image has surfaced elsewhere online. Low-resolution pictures can be a red flag as well: it is very unlikely that persons of public interest will use grainy pictures for their official social media profiles.
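If you have saved a suspicious profile picture locally, a quick way to apply the low-resolution test is to check its pixel dimensions. The sketch below assumes the Pillow imaging library and a hypothetical file name; the size threshold is arbitrary, and the reverse image search itself is still best done in the browser.

```python
# Requires Pillow (pip install Pillow); "profile_picture.jpg" is a
# placeholder for an image you have saved locally.
from PIL import Image

MIN_WIDTH, MIN_HEIGHT = 400, 400  # illustrative threshold, not a standard

def looks_low_res(path: str) -> bool:
    """Flag images that are suspiciously small for an official profile."""
    with Image.open(path) as img:
        width, height = img.size
    print(f"{path}: {width}x{height} pixels")
    return width < MIN_WIDTH or height < MIN_HEIGHT

if looks_low_res("profile_picture.jpg"):
    print("Grainy or low-resolution picture: treat the account with caution.")
```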
- Followers, friends, subscribers
Is it likely Germany's former Chancellor Angela Merkel has fewer than 4,000 Twitter followers? And how plausible is it for the Duchess of Cambridge, Kate Middleton, to have a mere 52 friends on Facebook? Neither account seems particularly authentic.
But fakes are not always so easy to spot — recall the fake Elon Musk account with over 60,000 followers mentioned above.
To check whether this could be a real account, load up Musk's verified Twitter account and compare it with the Facebook account in question. While Musk has over 61 million followers on Twitter, the dubious Facebook account has a mere 60,000, a glaring mismatch. Be suspicious whenever you spot a conspicuously large discrepancy in a person's followers, friends or subscribers across different social media platforms. Followerwonk is a helpful tool for analyzing follower figures across networks.
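Such a gap can be put into numbers with a trivial calculation. The sketch below uses the follower counts from the example above, looked up by hand; the warning threshold is an assumption chosen purely for illustration.

```python
def discrepancy_factor(count_a: int, count_b: int) -> float:
    """Return how many times larger the bigger audience is than the smaller."""
    smaller, larger = sorted((count_a, count_b))
    return larger / max(smaller, 1)

twitter_followers = 61_000_000   # Musk's verified Twitter account
facebook_followers = 60_000      # the dubious Facebook page

ratio = discrepancy_factor(twitter_followers, facebook_followers)
print(f"Discrepancy factor: {ratio:,.0f}x")

if ratio > 100:   # threshold chosen purely for illustration
    print("Conspicuously large gap between platforms: investigate further.")
```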
Another way to assess the authenticity of an account supposedly run by a famous person is to check whether verified accounts interact with it. Do teammates from the same sports club, for example, comment on photos shared by the dubious account? Do other verified accounts repost content shared by the account in question?
- Content and online behavior
Pay attention to when a social media profile was created. If it was set up and has been active for years, it could well be real. Still, this is no surefire way to gauge authenticity. The fake Putin account discussed above, after all, was active for six long years.
Also study the kind of content posted by an account. Does it match the person, or does it seem out of character? If someone constantly changes his or her location, this should raise eyebrows — unless the individual works as a travel blogger, war correspondent or in a similar capacity.
Be wary of people sharing content they seem to have little or no connection to. A German student posting images purporting to show an Afghan war zone? A Russian pensioner sharing photos from what she claims are Parisian anti-vaccination protests? Neither case seems particularly plausible. Both could well be fake accounts, or at least be sharing unverified misinformation.
Suspicious behavior like this is typical of bots, too.
What are bots?
Bots, short for robots, tirelessly comment on Facebook posts, share content, or artificially stoke online debate on marginal or otherwise overlooked topics. Their behavior, as the name suggests, resembles that of automated machines.
It is essential, however, to differentiate between good and bad bots. Good bots may automatically share news, weather forecasts, earthquake alerts or satellite images on social media.
Bad bots, in contrast, are designed to imitate genuine human activity to advance a certain agenda. Depending on the algorithm, such computer programs may compose and publish social media posts or comments, follow others, or even send out friendship requests.
Bad bots can distort reality by amplifying certain political opinions on social media platforms, drawing artificial attention to particular issues, repeatedly sharing falsehoods or disinformation, and undermining constructive online debate.
Recently, researchers from Carnegie Mellon University analyzed over 200 million Tweets sent out in 2020 discussing coronavirus or COVID-19. They came to a worrying conclusion: 82% of the top 50 influential retweeters were bots; likewise 62% of the top 1,000 retweeters.
How can I spot bots?
Bots, like fake social media accounts, can be detected if you pay attention to:
- Account names
Usernames consisting of jumbled combinations of letters and words could indicate you are dealing with a bot.
- Profile pictures
Does the account lack a profile picture? Does it show someone, but in very poor quality? If so, be wary.
- Profile bio/account details
Sparse profile information, a recent registration date, and a stated location that does not seem to match the person in question should set your alarm bells ringing.
- Online behavior that seems out of character for a human
Have you observed a single social media account simultaneously publishing identical content across different platforms, or underneath several different posts? Does the account post countless crude replies in a very short time, or constantly retweet content? If the answer is yes, you are most likely dealing with a bot.
Bots also typically follow a large number of other accounts while having few or no followers of their own. To check follower numbers, consider using tools like Followerwonk or Botometer.
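For readers who want to tinker, the clues above can be combined into a rough checklist script. The sketch below is not Botometer or Followerwonk; it is an illustrative toy with made-up account statistics and thresholds, and several clues together still only suggest, rather than prove, automation.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    followers: int
    following: int
    posts_last_24h: int
    duplicate_posts: int       # identical posts seen under different threads
    has_profile_picture: bool

def bot_clues(stats: AccountStats) -> list:
    """Collect human-readable clues; several clues together suggest a bot."""
    clues = []
    follow_ratio = stats.following / max(stats.followers, 1)
    if follow_ratio > 20:
        clues.append(f"follows {follow_ratio:.0f}x more accounts than follow it")
    if stats.posts_last_24h > 100:
        clues.append(f"{stats.posts_last_24h} posts in the last 24 hours")
    if stats.duplicate_posts > 5:
        clues.append(f"{stats.duplicate_posts} identical posts detected")
    if not stats.has_profile_picture:
        clues.append("no profile picture")
    return clues

suspect = AccountStats(followers=12, following=4800, posts_last_24h=350,
                       duplicate_posts=40, has_profile_picture=False)
for clue in bot_clues(suspect):
    print("Clue:", clue)
```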
What are trolls?
Trolls are real human beings who exhibit destructive and hyperactive online behavior, much like bots. They are often paid to harass certain public figures or media organizations. Facebook describes such targeted action as "coordinated inauthentic behavior on behalf of a foreign or government actor." Such campaigns can even be waged by so-called troll factories (to learn more about trolls and their role as agents of state propaganda, see the next article in this series).
One of the most prominent examples of a troll factory is the Internet Research Agency in St. Petersburg, Russia. EUvsDisinfo, a project by the European External Action Service to counter disinformation, found that the agency had disseminated falsehoods and pro-Kremlin messages in multiple languages during various European and US election campaigns and in the lead-up to several referenda.
The agency gained global notoriety when US intelligence services published a report in January 2017 analyzing how it had worked to manipulate US public opinion ahead of the 2016 US presidential election. Since then, Facebook has been busily deleting accounts affiliated with the agency.
Troll factories and so-called troll armies are also said to operate out of India, China, Saudi Arabia and Mexico. Indeed, in 2020, The Washington Post revealed that teenage supporters of former US President Donald Trump were paid to spread disinformation on social media.
How do I spot trolls?
Troll accounts are harder to detect than outright fake accounts, as they are usually controlled by real human beings. Spotting them becomes even more challenging when they have been set up and maintained over years as part of wider troll networks. You cannot, therefore, easily identify them by dubious profile names, recent registration dates or suspicious follower numbers. Even so, be sure to check for such clues.
In addition, closely study the content shared by the account in question. Does it contain links to websites known for publishing disinformation? (For more on detecting disinformation, see the first article in this series.) Pay attention to whether the user also publishes personal posts alongside sharing other content. If an account only reposts third-party content, this could indicate you are dealing with a troll. Be especially suspicious if this shared content contains disinformation.
Check whether you have a mutual friend on the given social network, then reach out to that person and ask about the suspicious account.
Pay attention also to how much time a potential troll spends on social media. Does the person devote hours to commenting on certain online discussions? Does the individual post the exact same message underneath different social media posts, or publish and share content in rapid succession? These could be clues indicating you are dealing with a person whose Facebook, Twitter, Instagram or YouTube activity is a job, rather than a hobby.
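If you want to examine such posting patterns more systematically, you can copy a handful of a suspect account's comments, with timestamps, into a short script. The sketch below uses invented example data; it simply counts repeated messages and measures how quickly they were posted.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, text) pairs copied by hand from a suspect account
comments = [
    ("2022-01-05 09:00", "Wake up, the media is lying to you!"),
    ("2022-01-05 09:01", "Wake up, the media is lying to you!"),
    ("2022-01-05 09:02", "Wake up, the media is lying to you!"),
    ("2022-01-05 09:03", "Typical propaganda, don't believe a word."),
]

# Identical messages posted under different threads
for text, count in Counter(text for _, text in comments).items():
    if count > 1:
        print(f"Posted {count} times: {text!r}")

# How quickly the messages were published
timestamps = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts, _ in comments]
span = (max(timestamps) - min(timestamps)).total_seconds() / 60
print(f"{len(comments)} comments within {span:.0f} minutes")
```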
One of the biggest telltale signs of troll activity is when the person in question fails to contribute anything constructive to an online debate. If the person focuses solely on manipulating others and provoking negative emotions, cease any further discussion and report the account to the platform for review.
More on how to spot misinformation:
- How do I spot fake news?
- How do I spot manipulated images?
- Fact check: How do I spot state-sponsored propaganda?
- How do I spot a deep fake?
This article was originally written in German.