Social media controls the world

Sometimes it’s great, but as someone mentioned here, iirc, eventually those algorithms will lead you to insane racist shit. Like invariably. Like how Pandora always led to Dio for whatever reason if you left it alone long enough. You could be listening to classical music, start doing something else, and come back to Dio. No matter what, all music led to Dio.

YouTube refuses to moderate anything at the end of the day. That takes money and they’re not willing to spend it. They and Facebook are in the same boat in that regard: both multi-billion-dollar companies that refuse to spend money monitoring anything. They have reporting features that they ignore or hand off to bots/algorithms instead of having a human being in the loop, so they’re easily exploited or fail horribly.

It happens with all manner of extremism. Watch a medical video, and it starts recommending videos that teach you how to reverse your heart disease by eating only ginseng and grout; watch a child development video and you get anti-vaxx stuff; watch space science and you get flat-earth; watch investing and precious metals and you get the Rothschilds and the global banking elites, etc.

Exactly. That’s a failing of the algorithm that they refuse to address. It drives everything to the extremes and/or big channels.
It makes me think of the uselessness of Steam’s “discovery queue,” where literally everything on the list is there “Because it is popular.” Yeah, thanks for the individualized discovery queue, Steam. I’ve never shown an interest in any of these things, but they’re popular, so just shit them out for me over and over again; maybe I’ll give in.

It’s not a failure of the algorithm, because they don’t see it as a problem; indeed, it is their business model to make you watch as much as possible. The social cost doesn’t factor in.
{Obligatory Zeynep}
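
To make that incentive concrete, here is a minimal toy sketch of a recommender whose only objective is predicted watch time. Everything in it is invented for illustration — the titles, the watch-time numbers, and the `RELATED` graph are assumptions, not anything YouTube actually exposes — but it shows how a greedy watch-time objective drifts every session toward the same high-engagement content regardless of where you start:

```python
# Toy sketch: a recommender whose only objective is predicted watch time.
# All titles, numbers, and the RELATED graph are invented for illustration.

# Hypothetical catalog: video -> predicted watch minutes per view.
CATALOG = {
    "classical_playlist":   12,
    "music_documentary":    18,
    "reaction_video":       25,
    "outrage_compilation":  34,
    "conspiracy_deep_dive": 41,  # sensational content keeps people watching longest
}

# Hypothetical "related videos" edges.
RELATED = {
    "classical_playlist":   ["music_documentary", "reaction_video"],
    "music_documentary":    ["reaction_video", "outrage_compilation"],
    "reaction_video":       ["outrage_compilation", "conspiracy_deep_dive"],
    "outrage_compilation":  ["conspiracy_deep_dive", "reaction_video"],
    "conspiracy_deep_dive": ["outrage_compilation", "conspiracy_deep_dive"],
}

def autoplay(start, steps=5):
    """Greedily follow whichever related video has the highest predicted
    watch time. The social cost of the content never enters the objective."""
    path = [start]
    for _ in range(steps):
        path.append(max(RELATED[path[-1]], key=CATALOG.get))
    return path

if __name__ == "__main__":
    # Start from harmless classical music; the chain still converges.
    print(" -> ".join(autoplay("classical_playlist")))
    # classical_playlist -> reaction_video -> conspiracy_deep_dive -> (stays there)
```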

That sounds like it’s working perfectly, imo.

Well, to be fair, he is the last in line.

The YouTube thing is a tough issue to solve. There is no way you can manually moderate the amount of content coming in, much less the comments on the content. Algorithms to police that sort of thing can only do so much: if you tighten them way up, you catch flak for auto-suspending people whose content or comments didn’t necessarily deserve it, but if you keep them slack, you let idiots get away with lots of idiocy.

Social media platforms were meant to be self-policing. The users of any platform would be the ones reporting violations of terms and objectionable content. The problem with that is twofold: first, there is way more content than anyone ever dreamed there would be, and second, people have become more shitty in general as social media has matured. It’s hard to self-police for douchebags when the very nature of your service allows people to act like douchebags without consequence.
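
For the tighten-vs-slacken point above, here is a minimal sketch of why no threshold setting avoids both failure modes. The comments, the “toxicity” scores, and the labels are all made up, and a hypothetical classifier is assumed to have produced the scores:

```python
# Toy sketch of the moderation-threshold trade-off. The comments, the
# "toxicity" scores, and the abusive/benign labels are all invented; assume
# some hypothetical classifier produced the scores.

# (comment, hypothetical toxicity score, actually abusive?)
COMMENTS = [
    ("great video, thanks!",                   0.05, False),
    ("this surgeon just killed it in the OR",  0.72, False),  # benign, scores high
    ("that boss fight destroyed me",           0.48, False),  # benign, scores mid
    ("you people are subhuman garbage",        0.81, True),
    ("go back where you came from",            0.55, True),   # abusive, scores low
]

def moderate(threshold):
    """Flag everything scoring at or above the threshold; count the mistakes."""
    wrongly_flagged = sum(1 for _, s, abusive in COMMENTS if s >= threshold and not abusive)
    missed_abuse    = sum(1 for _, s, abusive in COMMENTS if s < threshold and abusive)
    return wrongly_flagged, missed_abuse

if __name__ == "__main__":
    for threshold in (0.9, 0.6, 0.3):  # slack -> tight
        fp, fn = moderate(threshold)
        print(f"threshold={threshold}: wrongly flagged={fp}, missed abuse={fn}")
    # Slack (0.9): nothing wrongly flagged, but the abuse sails through.
    # Tight (0.3): abuse caught, but innocent commenters get auto-actioned.
```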

I mean, it isn’t really. The issue here isn’t just the content (though that is a huge issue); it’s that this content is being monetized. They are approving all of these accounts for monetization. They can’t really control who uploads videos, but it would be orders of magnitude easier to control how content can be monetized. Say, a stricter vetting process? Some transparency in the decision-making, etc.

They can never control what is being uploaded and who is commenting, but they certainly control how ads get served on their platform. These videos could have been found either way, and it would be a really bad look for YouTube regardless, as it is clear there are child predators using their site for nefarious purposes, but it wouldn’t have been as bad if these videos weren’t being monetized. This is why the advertisers left. They had been assured their ads wouldn’t be served against questionable content, and that is exactly what ended up happening.

YouTube’s standards for ad service and monetization are completely and utterly broken. For instance, I can’t monetize content on YouTube because in 2009 my friend clicked on a bunch of Google ads on my website, and that was seen as cheating their system. I didn’t direct my friend to do this, but since he was helping me work on the site, I have to guess his IP address looked suspect for clicking on the ads. That got me a lifetime ban, but some pervert re-uploading footage of a 10-year-old girl doing gymnastics can get money from Disney and Nestlé for serving ads. Fucking disgusting.

I am not saying this out of any animosity towards Google for what happened to me, but it is obvious that they basically have no clear reasoning or policies surrounding what content can be monetized, and who can get paid. Their house is not in order.

And I want to say there is a special place in hell for the people criticizing those bringing this issue to light because it might cause “ad-pocalypse 2” and hurt the channels they like.

Shit is broken, and YouTube deserves to eat mountains of shit for this.

That’s pretty messed up. Many of these companies already have a means to address this… through their reporting functions, which they outright ignore or don’t support. They don’t have to monitor everything that is uploaded.

I thought there was an article floating around somewhere that had great suggestions from one of their employees on how to reduce this stuff, and I was shot down. I’ll probably never find that again.

And their response was basically to fire up some bots that declared everything not ad-friendly. Channels about HEMA or ancient weapons got flagged for violence or whatever. Every video game got flagged because it might have guns or someone might “kill” something. It was and still is a clusterfuck.
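
As a rough illustration of that kind of bot — the blocklist and the video titles below are invented, and this is only a guess at how naive the flagging appears to be — context-free keyword matching demonetizes anything that so much as mentions a weapon:

```python
# Toy sketch of a keyword-driven "ad friendliness" bot. The blocklist and the
# titles are made up; the point is that context-free string matching can't
# tell a HEMA tutorial or a video game from actual violent content.

BLOCKLIST = {"gun", "kill", "weapon", "sword", "violence", "shoot"}

def ad_friendly(title):
    """Naive check: any blocklisted substring anywhere disqualifies the video."""
    lowered = title.lower()
    return not any(bad in lowered for bad in BLOCKLIST)

if __name__ == "__main__":
    for title in [
        "HEMA longsword sparring: basic guards",
        "Let's Play: beating the final boss without killing anyone",
        "Medieval weapon museum tour",
        "Relaxing piano for studying",
    ]:
        verdict = "ok for ads" if ad_friendly(title) else "DEMONETIZED"
        print(f"{verdict:>12}  {title}")
    # Only the piano video keeps its ads, even though nothing here is
    # "violent content" in any sense an advertiser would care about.
```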

I mean, it is possible, but at the same time the first ad-pocalypse never really ended anyway, so who cares at this point? The damage has been done, but it didn’t solve the problem, so fearing that this will cause damage that is already happening is kind of a questionable position.

Speak of the Devil:

https://twitter.com/drewtoothpaste/status/1100033982568321025

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

“He who fights with monsters should be careful lest he thereby become a monster. And if thou gaze long into an abyss, the abyss will also gaze into thee.”

They need to figure out how to rotate these responsibilities and never leave it with one person too long.

The solution to out-of-control illegal user-created content sites isn’t censorship, it’s accounts linked to real-life identities and monitoring.

Which is more expensive, and reduces volume of content, which reduces revenue, but you won’t be Paedo-Central any more.

I can implement any form of individual identification and verification process and system at an enterprise level. If any social media giants with deep wallets need it and want to sell it to the industry, I’ll quickly form a company and sell it to you for $1.2bn.

What do you mean? This is what Facebook does, minus monitoring. You probably don’t see the pedo-crap on Facebook because it’s illegal and pretty much universally rejected and hated, but the flat-earth stuff, the Holocaust stuff, and the anti-vax stuff… it’s all over the place on Facebook, with people’s real names right next to it.

And you can’t report them most of the time, and if you do, it’s fine according to Facebook.

New rules on Twitter for depictions of child sex.

Twitter does not tolerate any material that features or promotes child sexual exploitation. This may include media, text, illustrated, or computer generated images.

Anime “artists” are crying.

Anime is a part of an entire culture in another country; there is nothing deserving of the scare quotes. It’s a real thing.

There are anime artists, and then there are “my waifu happens to look like an underage girl but is really a 1000-year-old demon so it’s ok to sexualize her” anime “artists.”