Social media platforms have long had a hands-off moderation policy, and that absolutely applies to Google’s massive video-sharing service, YouTube. The service has undeniably become the default way for most users to share videos online. And while the majority of content is innocent and inoffensive, things have gotten steadily worse for users and content creators over the years.
There’s still a seedy underbelly beneath that veneer. YouTube has consistently profited from some pretty shady stuff. There have been so many controversies at this point that it’s hard to have any faith in YouTube’s ability to enforce its policies fairly and honestly. The company’s refusal to punish PewDiePie for racial slurs, or to ban Logan Paul for filming a suicide victim, certainly didn’t instill confidence either.
It’s with that in mind that this new series of announcements should raise some alarm, because it feels a lot like more of the same, despite claiming to be the opposite of YouTube’s usual lax enforcement. Read the full list of incoming changes here. In short, YouTube wants to crack down on toxic comments and violent rhetoric.
YouTube’s rules already forbade direct threats of violence and harassment based on protected characteristics. The new policies now also cover “veiled or implied threats,” which “includes content simulating violence toward an individual or language suggesting physical violence may occur.” They also prohibit “demeaning language that goes too far.”
All of these seem like good changes, and in a sane universe they would be, if YouTube weren’t going down the same route that got it into this mess in the first place. Google has already imposed tons of new content controls and filters on YouTube, mostly to the detriment of creators. Oddly, the company just recently announced the lifting of some restrictions, which is hard to square with these new rules.
The company went further on Twitter, explaining that it will also roll out tools to automate the process. YouTube’s Content ID, monetization controls, and other automated systems are constantly derided as inaccurate and biased, and no one should have any faith that YouTube will change its tune now. Given the increasingly politicized nature of the problem, with radicalization and propaganda ever-present online, it seems far more likely that YouTube will continue down the path of Twitter, Facebook, and other social media.
What path is that? For these multi-billion-dollar corporations, ignoring extremism and giving the far right and other extremist elements free rein to propagate and recruit on their platforms is a given. Pandering to jerks has made them all rich so far, so why stop now? Because let’s be real: this is a far-too-late attempt to put the hate-fueled far right back in the pen after Google gave them all the space they needed to grow and radicalize new generations.
I’d also be willing to bet that this system has little to no impact on toxic comments that aren’t political in nature. Toxic people online are remarkably clever at dreaming up new insults and harassment tactics, and there’s no reason to think that bypassing whatever protections YouTube puts in place won’t become the norm.
So whatever happens here, the likelihood that the problem actually changes or lessens because YouTube threw another algorithm at it is next to nil. The far right online now has far too much momentum to be stopped by banning it, even entirely, from the platform. It’s a start, though. Tell you what: if YouTube bans all the fascist filth, I’ll have some faith in the company, but I’m not holding my breath.