For many of us, keeping up with YouTube drama and “tea” channels is a guilty pleasure we hate to admit we enjoy.
However, sometimes the banter and the shade can go a step too far; content can border on hateful and situations can become a case of harassment.
While social media networks are great platforms for sharing ideas and content, there is a lot of unsuitable speech online. Instagram, for example, has recently taken steps to crack down on cyber-bullying.
Lately, there has been speculation about the responsibility YouTube should bear when it comes to cracking down on online bullying and hate speech.
This debate can be traced back to the incident with YouTuber Steven Crowder, known for his conservative talk show.
Crowder recently made remarks about Vox video producer Carlos Maza, which prompted further hate comments towards Maza. Maza then reported these to YouTube as a case of online harassment. His stance was that bullies like Crowder will always exist, but that it is YouTube's responsibility to monitor and address hateful content.
YouTube responded by demonetizing Crowder’s video so he could not profit from the content, and CEO Susan Wojcicki issued an apology to those who were offended by Crowder’s remarks.
However, she said that after the team closely evaluated Crowder’s channel, they found it was not in violation of their Community Guidelines and so did not take it down.
Wojcicki defended YouTube’s decision, explaining that it was necessary to look at the context, and maintained that Crowder’s videos did not breach the platform’s policies.
YouTube then introduced a series of rules outlining the actions it would take when dealing with unsuitable content.
In terms of hate speech, YouTube will continue to remove videos with such content.
With borderline content, however, YouTube says they will simply reduce its exposure (or “bury” it) but not remove it.
Borderline content covers subjects that may not be as extreme or obviously harmful, such as sources considered authoritative posting anti-vaccination videos.
While these are steps in the right direction, there are concerns that YouTube is not doing enough.
They faced heavy backlash because their Harassment & Cyberbullying Guidelines explicitly mention that users should not post “…content that makes hurtful and negative personal comments/videos about another person.”
Now YouTube is drafting creator-on-creator harassment rules that could have been effective in situations like the Tati Westbrook and James Charles (with a sprinkle of Jeffree Star) showdown.
While neither Tati nor James was inciting supremacist or extremist speech, they certainly made comments that could have threatened the other’s career and been perceived as slanderous.
Such comments are not directly offensive to larger communities but they nonetheless can have severe impacts on content creators’ lives.
Part of YouTube’s solution will probably be demonetization of hurtful content, essentially removing ads from such videos.
YouTube CPO Neal Mohan explained that many content creators are incentivized by the money that can be made to say more extreme or provocative things; removing that financial element could have an impact.
We are curious to see what new policies YouTube releases and how effective they’ll be at cracking down on hate.
What are your thoughts? Let us know in the comments below.
Photo credit: Unsplash.