Yup, this seems like a good way forward.

I am explicitly in favor of this. If the choice is between no algorithmic feed on FB and the current state of things, I’ll take the former 100%.

And regulating the algorithmic aspect does not require repeal of 230. Those are separate ideas. You only muddy the conversation by mixing them like that.

I’m not placing any restriction on the algorithm. I’m suggesting that companies that use algorithms need to be held to a much higher bar. If Facebook and YouTube want to use them, that’s fine, but, like a newspaper or a television channel, they should be held to a higher standard of liability.

Okay, so I’m not the only one who is confused about why 230 keeps being brought up?

Honestly, I’m probably one of the least tech savvy people on this forum, so I thought I was just missing something.

Sadly for Tom (Sorry @tomchick) I hardly ever actually read the reviews on the front page. I’m so many years behind (I only buy games that are 75% off or so) that I have to purposefully go looking for them.
Although, he did get me to buy Dominions 3 for full price when it came out. That was a good call on his part.

Scott, this is not how laws work in this country.

When the government takes you into court, the burden of proof lies on the prosecution. It is not your responsibility as a defendant to prove your innocence. Come on, Scott, you know this is how our legal system works.

In practice, the FTC would make a case that certain claims are specifically, provably false, and show that they were false. As an example, they would say, “Company X made this claim, and we can show that it is untrue.” Certainly, the process in court would involve them making those claims, and then if the defendant cannot counter them, that would push a jury towards the prosecution like in any case, but the burden of proof is not on the defendant in this kind of criminal prosecution any more than it is in any other criminal case.

Also, actual lawsuits under truth in advertising are pretty rare. As I suggested before, you can look at the homeopathic medicine industry as an example of fundamentally dishonest companies that operate with impunity. They basically make a ton of claims in their advertisements, and then have a disclaimer at the bottom of the screen that says, “None of this shit actually works.”

I followed pretty closely and I’m not getting that impression at all.

Argument 1: Moderating all content is pretty much impossible to do right and even if it were, would be extremely problematic to free speech. Not seeing many flaws in this logic.

Argument 2: Holding media companies accountable for content prioritized via algorithm is a reasonable place to start. Doing this would likely eliminate the use of such algorithms.

Makes sense to me. I still hold that the problem is human nature, and that eliminating the algorithms won’t do nearly enough to stop the destruction.

Edit: to clarify the last statement, I mean with social media at this scale. Built in human traits are just not equipped to handle this level of anonymous social connectivity.

Well, human cognition, psychological exploits, etc. do play a role. However, the problem is that Facebook and co. explicitly target and manipulate these things to drive ‘engagement’. They are knowingly and intentionally making decisions that are harmful to society, and to their users, because it is profitable. Facebook is an active participant, not some passive factor. They are Camel cigarettes using Joe Camel to sell cigarettes to kids. They know what they are doing is harmful; they just don’t give a shit.

Yeah, they suck. Getting rid of those algorithms would help. Maybe slow down the decline, but fights will draw crowds and tribes will form.

Perhaps the discussion is about getting Facebook to suck less and the scope of social media in general is too broad.

I can’t be bothered to reproduce any of the exchanges from yesterday, which is probably a source of relief to everyone here.

I really don’t think you know what you’re talking about here. In this narrow segment of the legal world, it is settled law that companies must at minimum be able to provide evidence for substantial claims made in their advertising. If the FTC thinks your advertising is false, it is going to invite you to produce that evidence, and if you can’t, you’re going to lose.

I feel like we are talking past each other, Scott.

You aren’t suggesting that the onus is on the defense to prove their innocence, right? You’re just saying that the crime here is failing to have certain types of evidence for their claims.

That’s cool, I do not disagree with that.

All I’m saying is that it is the responsibility of the prosecution to prove that the defendant’s evidence fails to meet the legal requirements, because, as in all criminal cases, the responsibility lies upon the prosecution to prove beyond a reasonable doubt that a crime has taken place.

That case is going to involve them pointing out certain claims, a lack of proof, and how those claims meet the legal requirement for prosecution under the law. And the defense would present evidence that they are true, and/or arguments about how the claims don’t actually fall under the law’s jurisdiction.

What does this have to do with social media?

Didn’t you respond to this:

Interestingly, the FTC have sued Match.com. The basis of the case is that there are scammers on Match.com, people trying to exploit Match.com users, and Match.com knows about them. When known scammers ‘like’ or ‘express interest’ in a user’s profile, Match.com treats that differently if you’re a paid subscriber than if you’re not yet a paid subscriber. If you’re a paid subscriber, they filter out the likes and don’t tell you about them, because they know they are lies. If you’re not yet a paid subscriber, they treat the likes as genuine and tell you about them, because that is the means they have to induce you to become a paid subscriber. Effectively, they’re amplifying lies to get you to pay to find out about them.

Amplifying lies for profit is what we are talking about, isn’t it? That’s what FB’s algorithms are doing?

I think that everything you wrote is reasonable except for the last bolded part. They should be held liable for damage, not for accuracy. I believe that was the contention earlier, and I agree. I don’t think that using advertising as the paradigm is the way to go here.

It’s not just about the lies. It’s about the conflict. That’s what keeps retention and does the damage.

I don’t think you could reasonably hold them liable for amplifying the truth, whether that causes damage or not; and I don’t think it would mean anything to hold them liable for amplifying lies which cause no damage. It’s the lies that cause damage that strike me as actionable.

Sorry for jumping a few hours back in the discussion, but I think what you wrote is interesting since these are very concrete proposals that went mostly undiscussed.

A few problems here.

First, there is no such thing as “uncurated”. Everything is a decision. What do you think, for example, the results of an uncurated search engine would be? What’s the uncurated form of a subreddit’s front page?

Second, you are forbidding things such as spam filtering or indeed any form of automated moderation.

Third, this will achieve nothing because basically every user will opt in. After every YouTube video there will be a button asking “do you want to see more cool videos like this?”, until the user clicks yes, because sooner or later they will in fact want to see more similar videos.

This doesn’t really work.

First, what you’re doing is making sure that any malicious actors (e.g. spammers, Russian troll farms, or even just people trying to optimize their content’s visibility on social media or in search engines) know exactly what they need to do to rank well. Ranking systems, including spam filters, rely fundamentally on the exact signals being secret.

Second, a lot of this ranking is now done using huge ML models. There’s a good reason for that: machines are actually better at this than humans would be. The algorithm doesn’t really exist as such; it’s all just data.

This is just completely detached from reality. Why did you want to see it? Well, first of all, it’s never a boolean “did you want to see it” question. It’s a ranking question of “what are the things you most wanted to see”. You literally cannot answer that in isolation for any piece of content without simultaneously saying, for every other possible piece of content, why that one was worse. This is both logistically impossible and, even if you could do it, useless to the user.

But OK, let’s say that we define this to mean something like “everything being displayed must have a score between 0 and 1”, “the items must be sorted in descending order by score”, and “you must decompose the score into the smallest possible component parts”. That will be hundreds, or more probably thousands, of score components. Again, in practice completely impossible for a user to interpret.

Anything more granular than “we’re recommending this video because it’s by Jeremy Clarkson, and you like Top Gear” is useless to the actual users. But any ranking system that simple would be useless, so in reality the only way to get that kind of explanation would be for the explanation to lie.
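To make the “thousands of score components” point concrete, here’s a toy sketch (purely illustrative, not any real platform’s ranker): a ranking score built as a weighted sum of a couple thousand hypothetical features. Even listing the top few contributions just gives the user a pile of opaque numbers.

```python
import random

# Toy illustration only: a ranking score decomposed into thousands
# of per-feature contributions, as a decomposition mandate would require.
random.seed(0)

NUM_FEATURES = 2000  # real rankers can use far more signals than this

# Hypothetical learned weights and one item's feature values
weights = [random.uniform(-1, 1) for _ in range(NUM_FEATURES)]
item = [random.uniform(0, 1) for _ in range(NUM_FEATURES)]

def score(features):
    """The overall score is just the sum of tiny per-feature contributions."""
    return sum(w * f for w, f in zip(weights, features))

contributions = [w * f for w, f in zip(weights, item)]

# "Explaining" the score means handing the user thousands of entries like these:
top = sorted(enumerate(contributions), key=lambda p: abs(p[1]), reverse=True)[:3]
for idx, c in top:
    print(f"feature_{idx}: {c:+.4f}")
print(f"total score: {score(item):+.2f} from {NUM_FEATURES} components")
```

Each printed line is a meaningless “feature_1234: +0.9…” entry; no amount of sorting turns that into an explanation a user can act on.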

I suspect it is not workable to require that blocking a single piece of content must substantially inform the algorithm. As a reductio ad absurdum, if I block a million different things, how much effect could blocking the million-and-first thing have?

Anyway, if you did try to legislate this, what would happen is that all the content providers would play it safe and define blocking a piece of content as blocking, e.g., the poster. That has a well-quantified and demonstrable effect. What they will not do is try to demote all content similar to what you blocked, since that effect is uncertain and you’re asking for a legal requirement of certainty.
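The “safe” interpretation described above can be sketched in a few lines (illustrative only; the posts and fields are made up): blocking a poster is a deterministic filter with a provable effect, unlike a fuzzy “demote similar content” rule.

```python
# Toy sketch: the legally "safe" reading of a block is a hard filter
# on the poster, whose effect is exact and easy to demonstrate.
posts = [
    {"id": 1, "poster": "alice", "topic": "cars"},
    {"id": 2, "poster": "bob", "topic": "cars"},
    {"id": 3, "poster": "alice", "topic": "cooking"},
]

blocked_posters = {"alice"}

def visible(feed):
    # Deterministic: every post by a blocked poster is removed, nothing else changes.
    return [p for p in feed if p["poster"] not in blocked_posters]

print([p["id"] for p in visible(posts)])  # → [2]
```

A similarity-based demotion, by contrast, would have to guess how much post 2 resembles the blocked ones, and no provider can certify that guess in court.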

I think the opposite. Your proposals will do nothing at all for the actual users, except make them click an extra button. But will make the lives of the bad guys a lot easier.

Right. Damage, not accuracy. Seems that more of us agree on the base premise here than one might think.

It certainly goes past fact. What if their algorithm presented a truth to another group just to rile them up? That group does some dastardly deed as a result. Damage, although difficult to prove.

Then shut the fuckers down. If changing the algorithm is impossible, and policing content is impossible, then melt the servers.

Fuck Facebook.

For the record, I flat-out think you are wrong. There is a way to improve things by changing how and for what reasons the algorithms operate. Former Facebook employees have even recently discussed some of these things. So, no, I do not grant your premises about what would happen if things were changed as proposed. And even if you are right, then the solution is to nuke the Facebook servers from orbit, just to be sure.

“These are really interesting, concrete proposals that deserve more discussion, so now I will tell you that they’re really all bad and wrong” is genuinely funny.

But the proposal wasn’t scoped to “Facebook”, which would have been problematic since you’d generally not try to name a specific company in a law. It was scoped to “algorithmic content delivery”. Are you really saying that we should burn down e.g. search engines, email, and Reddit?

:)

I don’t think the proposal would work, but I could well be wrong. Or maybe I’m correct, but there are ways to rethink parts of the proposal in a way that addresses the issues. Both of those possibilities are why concrete proposals are interesting and worth discussing! They advance our shared understanding a lot better than vague “we should regulate this somehow” feelings.

Keep in mind one of the biggest pushers for changing Section 230 is actually Facebook.