Twitter Addresses the Problem of Fake Media


Maia Clinch, Product Specialist

Twitter recently announced a new policy regarding manipulated and synthetic media. The issue of “fake media” is rapidly becoming a major headache for all social media sites. As technology makes it easier to distort photos and videos, these sites are having to deal with users distorting reality for various purposes. Twitter announced that, beginning in March 2020, it will start labeling media it deems “synthetic” or “manipulated.”

What is Fake Media?
Prior to the digital age, someone would need quite a bit of time and skill to distort an image. With today’s technology, however, it’s a relatively simple matter. There are several names for this type of media, including manipulated and synthetic media. Synthetic implies that the entire image or video was artificially created, something that is getting more viable every year. Manipulated media covers a broader range of possibilities. Technically, making yourself look younger in a selfie using a filter is a kind of manipulation. Twitter, however, is really interested in manipulation done for more sinister reasons.

Anyone with Photoshop or a similar program can alter a photo. This is usually done for fun as people distort their own selfies or place themselves in fictional scenes. However, there’s a growing problem of trolls and others with ill intentions doing this to malign their targets. A simple example might be manipulating a photo of a target and showing him or her with a criminal, terrorist, or dictator.

When it comes to video, the possibilities are even more creative and potentially sinister. It’s now possible to make realistic videos of people saying or doing imaginary things. A popular and innocent example appeared in 2019, when world leaders seemed to sing John Lennon’s song “Imagine.” While the intentions behind this seem benevolent, the point remains that it’s fictional yet looks quite realistic. People can just as easily be portrayed as quoting Hitler, committing crimes, or engaging in illicit sexual behavior. We’re still very early in the age of fake media. As the technology to create such fabrications becomes more readily available, it will no doubt become a more common issue.

How Social Media Sites are Responding to Fake Media
While people can post fake media on their own websites, it’s social media sites, including YouTube, Facebook, Instagram, and Twitter, that are at the center of the controversy. Those who want to spread false information have the best chance of having their images and videos go viral if they post them on these sites.

Facebook recently announced new measures to stop the spread of fake news. The site now flags content it recognizes as potentially fake and suggests alternative stories alongside it. The issue of fake media, however, is even more problematic. False claims can at least be disputed; fake images can spread virally in minutes, and many users won’t bother to question them. Facebook also announced that it was banning fake videos that might sway voters in the upcoming 2020 election.

YouTube also keeps a watchful eye out for fake videos and has been doing so since 2016. Of course, as the world’s largest video-sharing site, YouTube has even more reason than Twitter and Facebook to be concerned about fake videos. Twitter’s policy on manipulated media is fairly nuanced. The company will consider the intention behind a post, including its text. In other words, it’s mainly concerned with media that’s clearly trying to deceive users rather than content created for amusement or as parody. Other social media sites have also taken a stance on this issue.

Identifying Deepfakes
One challenge with fake media is identifying it in the first place. When someone posts a news story, the facts can be checked, though this isn’t always easy. With images and videos, however, you need advanced tools to even spot sophisticated fakes. The latest AI technology makes it possible to create what are known as deepfake photos and videos. These involve face swaps and, in the case of videos, samples of the target’s voice used to make the person apparently say things they never actually said.

As an article in the Guardian explains, the majority of deepfakes are pornographic. However, this technology can be used for any purpose, from the comical to the malicious. It takes equally advanced AI technology to identify such fakes. There’s an emerging war between the creators of fake media and those whose job it is to detect this kind of deception, similar to the ongoing conflict between ethical and malevolent hackers.

The Challenges of Addressing Fake Media
Deepfakes provide social media with a number of difficult challenges.

  • Spotting deepfakes. As this practice gets more widespread, Twitter and other sites will be under pressure to identify fakes before they’re widely shared.
  • Deciding what to target. Twitter is currently only concerned with media that could cause “serious harm.” This leaves many gray areas, such as differentiating parodies that are simply intended to entertain from content that is truly designed to deceive viewers.
  • Labeling or banning? So far, Twitter is not actually banning anything, only planning to label certain manipulated or synthetic media. Facebook and YouTube are taking stronger stances, banning this type of content entirely.
  • How forcefully will Twitter and other sites enforce their policies? Twitter’s new rule states that users “may not share synthetic or manipulated media that are likely to do harm.” They don’t, however, specify any consequences. If users don’t have their accounts blocked, for example, they won’t have much incentive to stop the behavior.
  • Legal implications. People who are trolled or defamed by fake media could potentially sue not only the creators of the media but the social media sites where this content is shared.

Twitter and other social media sites are just starting to contend with the issue of fake media. So far, the problem is still fairly limited. If the skills and tools to create deepfakes become more widespread, however, it could quickly balloon into a major problem. A great deal will hinge on technical questions, such as the ability to accurately identify manipulated and synthetic media.