On December 16, Instagram announced that it would use artificial intelligence to identify online bullying. The focus will be on captions that accompany photos and videos. When the AI identifies a possible offense, it will issue a warning to the user. This feature is not yet available for all accounts. As of now, it's being released gradually in select countries that Instagram has not specified.
Online bullying has been a serious issue for some time, one of the more blatant drawbacks of the social media revolution. Instagram and other platforms have been searching for an effective solution to curtail cyberbullying. The new feature is designed to discourage posts that are designed to hurt others.
Young Users Are Vulnerable on Instagram
While online bullying can occur on any social media platform, Instagram is a site that’s especially popular with young users. Apart from bullying, studies have linked Instagram with self-esteem issues, as many users are tempted to compare themselves to others on this image-driven site. Instagram photos and videos highlight not only personal appearance but also fashion and glamorous locations. Cyberbullies have a wide range of topics and approaches if they want to make others feel inferior. According to a study of over 1,000 college students, 78 percent reported at least one incident of cyberbullying.
Instagram, like all social media sites, already has measures in place to discourage bullying. Users can block other users. Offensive content can be reported. This, however, is generally used for extreme cases such as explicit threats. One problem with online bullying, especially when pertaining to children and teens, is that there are many gray areas. It’s one thing to block a stranger, who could be a dangerous online predator. It’s something else for a child to block a classmate. Social media plays a major role in the social interaction of young people today. Blocking friends or even “frenemies” could have negative consequences such as falling out of favor in social circles.
The Need to Remove Fake Accounts
Online bullying takes many forms. In some cases, users and victims are well known to each other, as when they go to the same school. In other cases, however, bullies hide their identity using fake accounts. Accounts are created under false names for a variety of reasons, including pranks, fraud, propaganda and bullying. According to one estimate, there are over 150 million fake Instagram accounts. Of course, many fake accounts are created for more complicated and profit-driven reasons than bullying. Some of the largest culprits are promoting porn sites, for example. Others are spreading political propaganda. However, creating fake accounts is also a tactic used by cyberbullies.
An especially harmful type of bullying occurs when a user starts an account under a real person’s name and uses it to discredit him or her. When someone is using a fake account, it’s harder to monitor and control their behavior. Someone who knows how to do this probably doesn’t care very much if an account is banned. He or she can simply start a new one. Thus, in order to make real progress on cyberbullying, Instagram and other sites need effective technology to spot and remove fake accounts.
How AI Can Help Reduce Bullying
The AI technology at the root of Instagram’s strategy is similar to that used by marketers, law enforcement agencies, and anyone working to make machines more adept at understanding language. Instagram has been pursuing this type of strategy for several years. In 2016, both Facebook and Instagram were using an AI tool called DeepText, originally created to fight spam. It was later used to identify racist comments and, more recently, bullying in general.
This mission is part of a larger quest to create AI programs that can recognize language with an almost human-like understanding. Humans have some key advantages over even the most advanced software programs, at least for the time being. They can understand things like nuance, sarcasm and context. Projects such as DeepText are designed to close this gap. For example, it would be hard for AI to spot the difference between good-natured teasing between friends and a case of mean-spirited bullying. AI is, however, getting more advanced all the time. For one thing, it can now identify offensive photos as well as words.
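To make the "warn before posting" flow described above concrete, here is a minimal sketch in Python. This is not Instagram's actual method: DeepText is a learned deep-learning model whose internals are not public, so the keyword list, weights, and threshold below are invented purely for illustration of the general idea of scoring a caption and warning when it crosses a line.

```python
# Illustrative sketch only: a keyword-scoring caption screen.
# Real systems like DeepText use learned models, not hand-made word lists.

# Hypothetical term weights -- invented for this example.
OFFENSIVE_TERMS = {"loser": 2, "ugly": 2, "stupid": 1, "pathetic": 2}

def score_caption(caption: str) -> int:
    """Sum the weights of flagged terms appearing in the caption."""
    words = caption.lower().split()
    return sum(OFFENSIVE_TERMS.get(w.strip(".,!?"), 0) for w in words)

def review_caption(caption: str, threshold: int = 2) -> str:
    """Warn the user before posting if the caption scores at or
    above the threshold; otherwise let it through."""
    if score_caption(caption) >= threshold:
        return "Are you sure you want to post this?"
    return "OK to post"
```

A word list like this would miss exactly the nuance, sarcasm, and context problems the article describes, which is why production systems train statistical models on labeled examples instead.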
Will It Work?
Instagram previously experimented with a feature to discourage harassment and bullying, though in a less forceful manner. In the old version, Instagram would ask the user, “Are you sure you want to post this?” The new feature is more explicit about pointing out the possibly objectionable nature of a post. One issue that has yet to be determined is how sophisticated the AI feature is. There are a couple of potential problems that could make it cumbersome. For one, it might flag innocuous posts or content that’s meant to be simply amusing rather than hostile. Users enjoy posting silly and humorous memes on Instagram and won’t appreciate warnings popping up constantly.
Another possible hurdle is that social-media-savvy users will find ways to circumvent the feature, possibly by substituting words or making their attacks more subtle. Perhaps the most interesting question is whether people who use the internet to bully others can be shamed into modifying their behavior by such a feature. After all, Instagram is not actually preventing people from posting questionable content, only making suggestions. At the same time, the new AI feature isn’t only appealing to users’ consciences. The warning will include information about Instagram’s community guidelines. There’s at least an implicit threat that if users post content that violates the site’s terms of service, they could face penalties such as losing their account.
Instagram’s new feature has the potential to discourage bullying. If nothing else, it will make potential bullies aware that their activities are being monitored. However, the problem is one that’s fairly complex and is not likely to be solved by any single measure. Reducing cyberbullying requires awareness on the part of victims, parents and mental health professionals. People also need to be more alert about reporting incidents to Instagram and other platforms.