I think you’re misunderstanding my objection here.

I’m not suggesting that it would necessarily be overly broad… I’m saying that with the volume of content that would need to be regulated, you could not moderate it all to the degree necessary to prevent any illegal activity from being posted. It’s literally not possible to perform that moderation.

This means that allowing users to generate content automatically exposes you to criminal liability.

This is the fundamental basis of section 230.

If that liability exists, then you cannot really have the internet at all.

This is why, instead, it would perhaps be better to limit liability to only very specific types of content (defined not by the content itself, but by the delivery mechanism). But it’s necessary to understand that the actual, unspoken intent in those cases is to end that content delivery entirely. And as JSnell pointed out, even with the description given here, it would end up prohibiting some very critical content delivery that we all depend on and no one here objects to.

But again, the problem with what you are suggesting is that merely by operating, these companies would be opening themselves up to criminal liability, while having no real control over whether crimes were actually committed. It would depend on whether some of their billions of users committed crimes, and if the company didn’t catch them before they posted, it would then be criminally liable itself.

This isn’t a position that I think any company would be willing to put themselves into.

So you want to prevent normal people from being able to generate content, then. You think it’s better to silence the general public than to allow them to post publicly, because some of them may say things you think are bad.

I mean, that’s ok for you to believe that. I disagree with your ultimate goal then.

I could see you being able to regulate it on the basis of allowing people to opt out, or requiring them to opt in to such data tracking.

That kind of stuff is very possible.

The problem there, I would argue (with a lot of expert support), is that those sorts of laws often result in horrific outcomes.

I disagree.

Let’s say I set up a social network just for my neighborhood. I could probably moderate that by myself, in my spare time. Quarter to Three has maybe a few hundred users and is adequately moderated by four people, none of whom do it as a full-time job. There are subreddits with millions of followers, and those are adequately moderated by volunteer teams of ten to twenty.

There does exist some number where moderation isn’t feasible, sure. But it’s a big number. And I would argue that a social network ought to either not grow beyond that number or invest appropriate resources into moderation, human or otherwise.

I agree! But we often ask companies to do things they don’t want to do, for the betterment of society. I’m sure cigarette manufacturers really wish they didn’t need to remind people on the packaging that smoking will kill them. But they do it anyway, and they still manage to make a bunch of money.

I think you’re slippery-slopin’ a bit here. I don’t agree that opening up social platforms to some liability would destroy the internet or silence the general public. It’s disingenuous to suggest that’s my goal.

Sure, they can do that, because they won’t go to jail because someone said something stupid.

Let’s toss a hypothetical:
Armando goes off the rails and shoots up his local GOP headquarters.
The Feds go through his social media history.
They find Armando’s posts.
They come for Tom.

Now that’s just a random silly example I pulled from my ass. I do not think Armando will go on a murder spree; I want that point to be clear. But based on the things he’s said here in jest, Tom would be liable for allowing him to foment such “murderous ideas” or whatever and not doing something about it.

But the threat of such liability would surely result in Tom adopting a different moderation style, yeah? Warning and then banning Armando?

This is where good faith protections would come into play. If Tom did nothing about it at all, there would be liability. If he warned and then banned him, or just banned him outright, he should be free from liability. If he warns him time after time without doing anything further, then it would be up to lawyers to determine.

I should reiterate that I’m not advocating that this is the best or even a good way to solve it. Just that I don’t think it’s an impossible problem.

I see this as a huge negative.

Edit: And also only the beginning of a very long list. I’m fairly sure P&R wouldn’t be allowed to exist at all because it would have too much liability. So no P&R discussions anywhere on the internet.

Ok. So how does the fact “you watched videos by xyz” map to this recommendation? Is this another video by xyz? Is the viewership similar to that of xyz (at which point we’d need to drill down into what similarity means)? Are there other videos that can get recommended to people who watched xyz; if there are, why did you recommend exactly this one? Are there creators other than xyz whose videos I’ve watched; presumably yes, so why did you choose to give me recommendations related to xyz rather than abc? Are there criteria other than having watched a video by some creator that might cause recommendations; presumably yes, so why did you not choose to give me a recommendation based on one of those instead?

And so on.
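To make the point concrete, here’s a toy sketch. Everything in it is hypothetical: the signal names, weights, and numbers are invented for illustration and don’t reflect how any real recommender works. The idea is that a recommendation score typically blends many signals, so “because you watched xyz” names only one term and can’t by itself explain why one video beat another.

```python
# Toy recommender score: a weighted sum of made-up signals in [0, 1].
# All names and weights here are hypothetical, purely for illustration.

def score(signals, weights):
    """Weighted sum of the named signals."""
    return sum(weights[k] * signals[k] for k in weights)

weights = {
    "creator_affinity": 0.4,      # you watched this creator (e.g. xyz) before
    "audience_overlap": 0.3,      # viewers similar to you watched it
    "predicted_watch_time": 0.2,  # model thinks you'll watch for a while
    "recency": 0.1,               # newer videos get a boost
}

# Two candidate videos: one by xyz, one by abc.
by_xyz = {"creator_affinity": 0.9, "audience_overlap": 0.2,
          "predicted_watch_time": 0.3, "recency": 0.5}
by_abc = {"creator_affinity": 0.1, "audience_overlap": 0.9,
          "predicted_watch_time": 0.9, "recency": 0.9}

# The abc video can win even though the "you watched xyz" signal is strong,
# so an honest explanation has to involve every term, not just one.
print(score(by_xyz, weights), score(by_abc, weights))
```

Under these made-up numbers, the abc video outscores the xyz one despite the strong creator-affinity signal, which is exactly why “you watched xyz” is an incomplete answer to the questions above.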

But the problem we were purporting to solve was the accidental radicalization of people.

What you’re asking for is just being able to make the recommendations configurable so that you could get better ones (assuming you wouldn’t get worse recommendations due to the difference between stated vs. revealed preferences). Most people would not bother, and those people are also the ones most at risk.

The uneducated and unemployable guy who is currently sitting in his parents’ basement, drinking beer and reading Facebook, and slowly getting more and more extreme recommendations, is not going to be tweaking the ML parameters. Or if we made the system more explainable, he would not be asking to be shown more university lectures. He is reading shit because he likes reading shit.

So /r/…/new? It works fine for a subreddit that gets a dozen posts a day. If it’s 50x that, /new becomes basically an unreadable slush pile.

But what if Tom never noticed? Would he be expected to read everything posted to the site?

I don’t really frequent subreddits that have that much traffic, so it works fine for me. And it’s more the hiding of comments and “continue this thread” than the ordering of posts that really gets my goat, though I definitely prefer chronological feeds.

Well, Facebook has fingers in so many pies that it can legitimately claim it needs the data, but it is still legally limited in how it shares it.

Well, it is possible; it just can’t be the latest models fresh out of their labs without further work on understanding the black box, but instead only slightly older stuff that has already been through that process. Tough shit.

Agree with what you say about 230 (and it’s similar elsewhere), the discussion has been around for a while. At least Epic would have an excuse for having no forums, but, well, neither would any other game company.

Papers do explain the intuitions behind creating new methods, simplify them and present them. They do come from a human, after all.

Not much we can do there, free speech and freedom of association and all. I’m just more optimistic that it’s not a lot of people.

It is an extremely hard problem to solve, but one thing is for sure: these companies wouldn’t even bother trying if they weren’t forced to, either by social pressure or by regulation.

People upthread are telling me that most users are too dumb to even edit the existing options, so why would they bother exposing this more advanced tuning?

If you want to be cruel, force them to have a minimum of 50% of people who do and they’ll create a great UX for users.

As I used to say, when we put together notices for 401(k), it’s not necessarily important that everyone reads everything, but it is important that somebody reads it and understands it. Employees talk, and usually important details get around, even if much of it is ignored.

Which is to say, as long as researchers and journalists can get their hands on it, the important stuff can trickle down to regular people.

Think about it like climate science data. I don’t understand any of it, but having the data available means that other people who can understand it can make predictions and judgments.

But such moderation would not protect you from the liability you are suggesting.

Tom moderates our posts, after the fact, generally on the basis of complaints from other users.

In order to prevent criminal liability, he would need to prevent those posts before they went up, right? Unless you think that removing such posts after the fact, on the basis of complaints, is good enough… Which is generally just how these social networks function today.

Assuming that is not good enough, and you want to impose criminal liability for allowing the posts at all, Tom would need to read everyone’s posts, and then allow them to go through.

Such moderation would largely prevent this conversation that we are having now, and would also require orders of magnitude more time from the moderators. Even for a tiny community like this, it would be entirely infeasible.

I want to interject at this point because I think we only disagree on your last sentence. I agree that it would be unreasonable to review every single post before it airs, and thus any kind of good faith moderation efforts ought to be enough to shield one from liability. But I disagree that this is how social networks function today. I think most platforms understand that increased engagement means they make more money, so they drag their feet on any kind of aggressive moderation of those communities. I’m just saying there ought to be either an incentive or a penalty that makes them want to moderate more proactively.

I would like to think that there is a solution to our social media problems other than going full CCP.

If we are simply talking about after the fact moderation, in terms of illegal content (which is what you have limited this to in this context), I’m pretty sure that it is currently required under section 230 for providers to remove illegal content if they become aware of it.

But if this is all we are talking about, then it’s probably ok, but I think you would need to define what would constitute “more aggressive” moderation, from a regulatory perspective.

Here’s a fun one. Dude made a chrome extension that would let you automatically unfollow (but not unfriend) everything, thus emptying your News Feed, but you could still interact with people, look at their feeds directly, etc. Then…

a few months ago, Facebook sent me a cease-and-desist letter. The company demanded that I take down the tool. It also told me that it had permanently disabled my Facebook account—an account that I’d had for more than 15 years, and that was my primary way of staying in touch with family and friends around the world. Pointing to a provision in its terms of service that purports to bind even former users of Facebook, Facebook also demanded that I never again create a tool that interacts with Facebook or its many other services in any way.