I think you’re misunderstanding my objection here.
I’m not suggesting that it would necessarily be overly broad. I’m saying that with the volume of content that needs to be moderated, no platform could review it all thoroughly enough to prevent any illegal activity from being posted. It’s literally not possible to perform that moderation.
This means that any platform allowing users to generate content would automatically be exposing itself to criminal liability.
This is the fundamental basis of section 230.
If that liability exists, then you cannot really have the internet at all.
This is why it would perhaps be better to instead limit liability to only very specific delivery mechanisms, rather than to categories of content. But it’s necessary to understand that the actual, unspoken intent in those cases is to end that form of content delivery entirely. And as JSnell pointed out, even the description given here would end up prohibiting some very critical content delivery that we all depend on and that no one here objects to.
But again, the problem with what you are suggesting is that merely by operating, these companies would be opening themselves up to criminal liability while having no real control over whether they would actually be committing crimes. It would depend on whether some of their billions of users committed crimes; if the company failed to catch them before they posted about it, the company would then be criminally liable itself.
This isn’t a position that I think any company would be willing to put themselves into.
So you want to prevent ordinary people from being able to generate content, then. You think it’s better to silence the general public than to allow them to post publicly, because some of them may say things you think are bad.
I mean, it’s ok for you to believe that. But then I disagree with your ultimate goal.