According to a recent study by NATO researchers, social media sites are failing to control the problem of fake content and accounts. This has been an ongoing issue for sites such as Facebook and Twitter for years. Social media sites have made numerous efforts to prevent or at least manage such issues. As the NATO study reveals, however, it’s turning out to be a difficult problem to overcome.
The Problem of Fake Content
There are several distinct yet overlapping issues when it comes to fake social media content.
- Fake accounts. These may be bots or actual humans who sign up using multiple identities. They may use various methods to mask their IP addresses, making it hard to spot them.
- Fake engagement. Engagement in the form of likes, followers and comments goes a long way toward establishing credibility. A post with over a thousand likes appears more credible than one with only a few. Yet much social media engagement is fake. The NATO researchers in the aforementioned study found it easy to buy tens of thousands of likes, comments and views. In fact, many such services openly advertise on gig sites such as Fiverr.com.
- Fake news/content. Content posted to social media sites, such as links, memes and posts, may contain false information. This is perhaps the most difficult area to police, as it runs up against the dilemma of free expression vs. censorship. Even someone with a legitimate Facebook account, for example, can link to a fake news site. Or, even more problematically, he or she can simply post something that's not true.
Every social media site has a process for flagging and removing fake content. However, the NATO study found that even content that had been flagged was still available weeks later. This raises the question of why it's so difficult to crack down on the problem, even when the culprits have been identified.
Social media platforms are making real efforts to curb the problem of fake accounts. Facebook announced that it disabled over 2 billion fake accounts in the first quarter of 2019 alone. This, however, points to the sheer scale of the problem. Because most fake accounts are created through automation, identifying and eliminating them is like bailing water from a sinking ship with a bucket.
How Platforms Fight Fake News
The term “fake news” has been controversial since it was coined during the 2016 presidential election. Both President Donald Trump and his opponents have thrown the term back and forth to describe statements made by the other side. Politics aside, however, there are several ways to understand it.
The most obvious type of fake news is the story that is demonstrably false. Sites like Snopes.com are designed to expose such stories. Snopes often examines popular Facebook memes that make inflammatory claims. For example, a recent Snopes post answered the question, “Are roses on cars part of a Kentucky sex trafficking plot?” Facebook posts were claiming that sex traffickers were leaving roses laced with a poisonous chemical that made victims pass out so they could be easily grabbed. Snopes revealed that the story was false.
There are countless such fake stories on Facebook and other social media sites, including fake celebrity deaths, political scandals and various unsubstantiated conspiracy theories. Many of these stories fall into the category of urban legends, which were around long before the internet. However, with platforms such as Facebook, it’s now possible to have such false stories spread around the world in a matter of hours.
Facebook tries to combat false stories in a number of ways. Its latest algorithm is designed to filter out news from untrustworthy sources, and the company also takes steps to demonetize such stories. Bots are an especially big problem on Twitter, making it even harder to crack down on fake accounts and stories there than on Facebook. Despite this, Twitter has its own strategies for combating false stories, such as monitoring popular hashtags. One way to spread propaganda quickly on Twitter is to generate a hashtag. While Twitter may not be able to prevent such practices entirely, it can at least intervene and try to keep fake and bot-generated posts from going viral.
Does It Help to Label Suspicious Stories?
Facebook has experimented with warning users that certain stories are likely to be false. Dartmouth College conducted a study to measure the effectiveness of different types of warnings on Facebook. The researchers concluded, among other things, that the way such warnings are worded can make a difference in their effectiveness. For example, users are more likely to doubt the accuracy of a story labeled “Rated False” than one labeled merely “Disputed.” YouTube takes a similar approach, posting warnings under videos on controversial topics and showing users links to news stories beneath them. Like other social sites, YouTube is reluctant to remove questionable videos outright, as doing so can lead to charges of censorship.
How Much Responsibility Do Social Media Users Bear?
Some experts believe that it’s ultimately users who have the power to control misinformation on social media. After all, false information only achieves its purpose if people believe it and share it. A social media researcher named Claire Wardle believes that users must develop “emotional skepticism” when it comes to reading posts. This involves not taking everything at face value. According to Wardle, everyone who uses social media is responsible for preventing what she calls “information pollution.”
The real problem, especially on Twitter, is not that bots and fake stories exist but that real people fall for them and spread such posts. A higher degree of skepticism would make these efforts far less effective; without the viral effect, there would be little incentive to post fake news in the first place.
Fake News and Manipulation on Social Media: No Simple Solution
As the NATO study confirms, there is still quite a bit of work to be done in the area of social media manipulation. It appears that sites such as Facebook and Twitter are not going to eliminate these problems overnight. While they can devise more sophisticated algorithms and implement stricter policies, they also need the cooperation of users if they are to succeed.