Sorry for jumping a few hours back in the discussion, but I think what you wrote is interesting since these are very concrete proposals that went mostly undiscussed.
A few problems here.
First, there is no such thing as “uncurated”. Everything is a decision. What do you think, for example, the results of an uncurated search engine would be? What’s the uncurated form of a subreddit’s front page?
Second, you are forbidding things such as spam filtering or indeed any form of automated moderation.
Third, this will achieve nothing, because basically every user will opt in. After every YouTube video there will be a button asking “do you want to see more cool videos like this?”, until the user clicks yes, because sooner or later they will in fact want to see more similar videos.
This doesn’t really work.
First, what you’re doing is making sure that any malicious actors (e.g. spammers, Russian troll farms, or even just people trying to optimize their content’s social media or search engine visibility) know exactly what they need to do to rank well. Ranking systems, including spam filters, rely fundamentally on the exact signals being secret.
Second, a lot of this ranking is now done by huge ML models, and for a good reason: machines are actually better at this than humans would be. The algorithm doesn’t really exist; it’s all just data.
This is just completely detached from reality. Why did you want to see it? Well, first of all, “did you want to see it” is never a boolean question. It’s a ranking question: “what are the things you most wanted to see?” You literally cannot answer that for any piece of content in isolation; answering it means simultaneously saying, for every other possible piece of content, why that one was worse. This is logistically impossible, and even if you could do it, the answer would be useless to the user.
But OK, let’s say we define this to mean something like “everything displayed must have a score between 0 and 1”, “items must be sorted in descending order by score”, and “you must decompose the score into the smallest possible component parts”. That will be hundreds, or more probably thousands, of score components. Again, in practice completely impossible for a user to interpret.
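To make that concrete, here’s a toy sketch of what such a “fully decomposed” score would look like. Everything here is invented for illustration (the component names, the count, the uniform weights); the point is only that when a score near 0.5 is split across thousands of tiny signals, no single component means anything to a human.

```python
import random

random.seed(0)

def decomposed_score(n_components=2000):
    """Return (total, components): a score that is the sum of many tiny signals."""
    # Each hypothetical signal contributes at most 1/n_components to the total.
    components = {f"signal_{i}": random.random() / n_components
                  for i in range(n_components)}
    return sum(components.values()), components

total, parts = decomposed_score()
largest = max(parts.values())
# The total is a plausible-looking rank score, but the single biggest
# component contributes less than 0.0005 of it -- uninterpretable.
print(round(total, 3), round(largest, 6))
```

A user shown this breakdown learns nothing actionable: every “explanation” is a sliver of noise.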
Anything more granular than “we’re recommending this video because it’s by Jeremy Clarkson, and you like Top Gear” is useless to the actual users. But any ranking system that simple would be useless, so in reality the only way to get that kind of explanation would be for the explanation to lie.
I suspect it is not workable to require that blocking a single piece of content must substantially inform the algorithm. As a reductio ad absurdum: if I block a million different things, how much effect could blocking the million-and-first thing possibly have?
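The arithmetic behind that reductio can be sketched with a deliberately simplified model (not any real recommender): if the system’s estimate of your preferences is just a running average of your feedback, then one more block among n can move that estimate by at most 1/(n+1).

```python
def marginal_shift(n_blocks):
    """Upper bound on how much one additional block can move a running average
    that already incorporates n_blocks pieces of feedback (toy model)."""
    return 1.0 / (n_blocks + 1)

print(marginal_shift(1))          # the second block can still move things a lot
print(marginal_shift(1_000_000))  # the million-and-first barely registers
```

So “each block must have substantial effect” cannot hold uniformly: the more you block, the less any single block can matter.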
Anyway, if you did try to legislate this, what would happen is that all the content providers would play it safe and define blocking a piece of content as blocking, say, the poster. That has a well-quantified, demonstrable effect. What they will not do is try to demote all content similar to what you blocked, since that effect is uncertain, and you’re proposing a legal requirement for certainty.
I think the opposite. Your proposals will do nothing at all for actual users, except make them click an extra button. But they will make the lives of the bad guys a lot easier.