Instagram is betting on artificial intelligence to battle cyberbullying, using AI to scan photos for abusive content on the Facebook-owned service. Instagram head Adam Mosseri said Tuesday that artificial intelligence is being used to detect signs of bullying and automatically flag content for review by the image-oriented social network's staff.
“This change will help us identify and remove significantly more bullying,” Mosseri said in a blog post.
Online harassment is a widespread problem: 40 percent of internet users report having experienced some form of harassment online, according to the Pew Research Center. To combat harassment on Instagram, the photo-sharing platform is preparing to let people with “high volume content threads” filter their comment streams, or turn them off entirely, The Washington Post reports.
Those who leave comments on will be able to create a banned-words list; comments containing any of those terms are hidden. Soon, Instagram will extend this comment moderation to everyday users, the accounts with less activity, as well.
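Mechanically, a banned-words filter like the one described above amounts to matching each incoming comment against a user-supplied term list and hiding the ones that match. Here is a minimal sketch of that idea; the function name `hide_banned_comments` and the word-boundary matching are illustrative assumptions, not Instagram's actual implementation:

```python
import re

def hide_banned_comments(comments, banned_words):
    """Return only the comments that contain none of the banned terms.

    Hypothetical sketch: matches whole words, case-insensitively,
    roughly mirroring a user-defined banned-words filter.
    """
    patterns = [re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
                for word in banned_words]
    visible = []
    for comment in comments:
        if any(p.search(comment) for p in patterns):
            continue  # a banned term matched: hide this comment
        visible.append(comment)
    return visible

# Example: the second comment is hidden despite different capitalization.
print(hide_banned_comments(
    ["Nice photo!", "You are a LOSER", "great shot"],
    ["loser"]))
# → ['Nice photo!', 'great shot']
```

A real system would be far more involved (misspellings, leetspeak, multilingual terms), which is presumably where Instagram's machine-learning classifiers come in on top of literal matching.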
Facebook has also announced new tools to tackle online bullying. Specifically, it is rolling out a way for people to hide or delete multiple comments at once from the options menu of a post, and it is beginning to test ways to more easily search for and block offensive words in comments. It is also adding a way to report bullying on behalf of others, along with the ability to appeal decisions related to bullying and harassment.
This comes less than a month after Instagram released a comment moderation option for business pages, which similarly lets accounts block comments with certain offensive words and phrases. Here’s what the functionality looks like for business accounts: