Posted 3 days ago by @prostoalex • 440 points
We demand censorship.
We refuse to pay for it.
We cry when it's done on the cheap.
And it's all their fault!
Agadmator, the person who made the video in question, also made a video soon after explaining the situation, and gave some hypotheses on why the video got taken down. In addition to the reason being hate speech, he suggested it may have been because they discussed Covid-19, lockdowns, etc, and YouTube was attempting to stop the spread of misinformation.
The video I’m talking about is here: https://youtu.be/KSjrYWPxsG8
Let me off this euphemism treadmill! Please, I'm getting ill.
This tells us that their AI and algorithms should be improved, as a lot of people are interacting with one another in the comment section.
What a fine mess we've created.
- Popular experiences tend to be better experiences, so we all congregate to the same services
- Homogenous user behavior leads to monopolistic situations, increasing outrage when anything goes wrong
- Even if the government doesn't try to enforce moderation, the company attempts to self-moderate to maintain its image
- The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots
I see no solution. The only way to win is not to play.
Where there is rejection of hate, there will be love for hate.
This is a perfect example of "be careful what you wish for". The Wired position seems to be "just put more resources into policing speech, which is a good and necessary activity". My hunch is that cases like these (false positives, at least as currently judged by the current authorities) will proliferate just as the criteria for judging what constitutes unacceptable speech do. I would challenge the would-be censors to define specifically, in a way not requiring an additional consultation with them for more infusions of judgement, just what types of utterances they want to suppress, and why. The closer one gets to this, the more the case for censorship will dissolve. Tl;dr: they are complaining about ambiguity in the implementation of the solution, while having failed to define the problem.
Chess is systematically racist, so this is fine
Maybe they should train their networks on memes and make it do the opposite ;-).
It's probably suicide. Not hate speech.
And they weren't talking about chess, but about skydiving and the abstract idea of jumping.
But let's all trot out everyone's favorite cliché from 2010, long ago solved if it ever existed: AI is racist, or not racist enough.
Playing the devil's advocate a bit: perhaps the computer is entitled to its own opinion? Of course my first thought is to dismiss it as an obvious false positive, but objectively it is a war game, white vs black. If someone came up with the game today, wouldn't it be a dubious choice? Connect Four comes in red vs black, yellow vs red, and blue vs red. Why not the traditional black vs white?
I'm actually starting to think we only see these stories about absurd censorship to make the more commonplace and pernicious stuff seem legitimate by comparison. "Oh, hahah, our totally legitimate censorship ML that uses language models to isolate people from each other based on predictions of patterns in their thinking made a funny goof! Gee whiz, you got us that time!"
Anyway, Google will be fine. Lots of tech companies have managed to re-brand after getting on board with idealists, just look at Hollerith.
Given that there are 500 or so hours of video uploaded per minute (or some other huge amount), I'm not sure we can expect YouTube to moderate each potential violation. Each video constitutes a minuscule amount of revenue.
The only solution (I see) to this is for YouTube to charge for each upload, say $1 a video (there may need to be different prices in different parts of the world). This wouldn't deter the majority of uploaders and would pay for checking hate speech, copyright violations, etc.
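A back-of-envelope sketch of the arithmetic behind the $1-per-upload proposal, using the 500-hours-per-minute figure from the comment and assuming an average video length of 10 minutes (my assumption, not a YouTube statistic):

```python
# Rough check of the "$1 per upload" proposal. The 10-minute average
# video length is an assumption; only the 500 hours/minute figure
# comes from the comment above.
HOURS_UPLOADED_PER_MINUTE = 500
AVG_VIDEO_MINUTES = 10      # assumed
FEE_PER_UPLOAD = 1.00       # dollars, as proposed

uploads_per_minute = HOURS_UPLOADED_PER_MINUTE * 60 / AVG_VIDEO_MINUTES
uploads_per_day = uploads_per_minute * 60 * 24
daily_review_budget = uploads_per_day * FEE_PER_UPLOAD

print(int(uploads_per_minute))               # 3000
print(int(uploads_per_day))                  # 4320000
print(f"${daily_review_budget:,.0f} per day")  # $4,320,000 per day
```

Under these assumptions the fee would fund review at roughly $1 per video, though whether a dollar buys a meaningful human review is a separate question.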
Banning a chess video for using "black" and "white" as hate speech is insanely bad. I would love to see an experiment where the same discussion of the game is repeated, but with "black" and "white" replaced by actually racist terms. Just a few examples:
> N-Word Horse to B4
> Superior Race Queen to C5
> N-Word Pawn beats Superior Race Pawn
And see if the algorithm flags that too. I'd bet good money it does not.
This reminds me of a story from a previous era of automated content moderation...
When I was a student at the University of Cincinnati, I was a member of a group called LARC which stood for Laboratory for Recreational Computing. The main purpose of LARC was to get the University of Cincinnati to subsidize our yearly trip to DEFCON, but I digress.
The UC mail servers, or at least the ones where the LARC mailing list was hosted, had some kind of stupid search and replace censorship to replace naughty words with cleaner equivalents. The cleaner equivalents were in ALL CAPS of course.
So a few members of LARC were working on a project to build a classical arcade cocktail table game out of Linux and MAME and some other stuff. I don't remember the details. All I remember is that the mail server transformed this into the "MALE GENITALIAtail table".
This became its official name. I think the MALE GENITALIAtail table was eventually installed in the student union.
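The filter in the story can be reproduced with a few lines of naive substring replacement; the word list and replacement here are illustrative guesses, not the actual UC mail-server rules:

```python
import re

# Minimal sketch of the search-and-replace censor described above.
# Its "cleaner" replacement is in ALL CAPS, as in the story.
REPLACEMENTS = {"cock": "MALE GENITALIA"}

def censor(text):
    for bad, clean in REPLACEMENTS.items():
        # Plain substring match, not word-boundary match; that is the bug.
        text = re.sub(bad, clean, text, flags=re.IGNORECASE)
    return text

print(censor("cocktail table"))  # MALE GENITALIAtail table
```

Anchoring the pattern at word boundaries (`r"\bcock\b"`) would have left "cocktail" alone; this failure mode is commonly called the Scunthorpe problem.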
I had a video flagged, and marked as adult content because the title had "Hump Day" in it.
I am not sure about other parts of the world, but in Australia "Hump Day" is the middle day of the week, a.k.a. Wednesday.
> “White’s attack on black is brutal. White is stomping all over black’s defenses. The black king is gonna fall … ”
Fortunately there's an easy solution: we can just replace "white" and "black" with neutral alternatives like "allow" and "deny" or "pass" and "block".
Not the first time. Bound to happen.
My main gripe is that Google has no money or brains to actually enlist humans to solve such problems.
The infection of YouTube with Google's fetish for replacing people with machines may be the worst thing about the entire acquisition. Google's obsession with forever increasing the ratio of users to employees is a curse upon us all.
That's what happens when you mix clueless relativism with figurative language.
Well, the speech police started changing "blacklist" and "whitelist" in programming contexts, even when those had no racist history; maybe it's time to change it in chess. (After all, white always goes first; that is not very PC.)
Rename "black" and "white" to "second player" and "first player".
I honestly wouldn't be surprised if someone makes the argument that this automated flagging is an indicator that chess's language is inadvertently racially charged. And think about the concept of "white goes first." All it takes is a few viral tweets, and suddenly the game of chess is in the crosshairs.
I guess this just shows how backwards and racist Chess terminology is. It may have been fine back in the day, but I want my children to grow up in a world where Blacks and Whites aren't at war. The sides should be switched to much more neutral names, e.g. based on light - lumen/umbral or something based on trees - teak/mahogany. Obviously we should also replace "attack", "check" and "mate" with non-hate-speech - maybe "push", "jeopardy" and just remove "mate", as you can say the same thing by "I believe you have run out of valid moves to make and thus I win".
Not like this is without precedent - I remember back when pieces were said to kill each other and then it was replaced by "capturing". Come to think of it "capturing" as an objective is also rather problematic, maybe we should call it "liberating", since you're really returning these peasants who were forced to serve their king back to their homes.
I am surprised international chess organizations haven't denounced racism and announced the plan to change "white" and "black" to "A" and "B".
I'd stay a level higher.
It's our reliance on dominating platforms.
Their wrong choices of technology negatively affect lots of people.
Government is supposed to intervene here, but lots of bribery is in place to prevent anything meaningful from happening.
This is no censorship. YouTube only protects us from false information!
Oh no. Please. Not chess. Can’t we just leave this one alone?
Google's pursuit of profit is responsible for these and innumerable more injustices. Those who would defend Google often claim that automated moderation is necessary at Google's scale, but does anybody really doubt that Google would still make more than enough money to stay in business if they hired more humans? Automated moderation is not necessary.
Must be a slow news day. YouTube algo f-ups are just part of the platform at this point.
It is obvious that there are no such problems when AI is used for driving cars and flying drones. /s
Well at least it wasn't for corrupting the youth.
Buttlenagged = Archibald Buttle + nagging by software
There was a Star Trek channel on YouTube which got suspended because the host called the fictional race Ferengi “greedy”, which they actually are. It got reinstated after a few days. But it's getting ridiculous now.
It's never been confirmed that the language of chess is the reason the channel was flagged. It's all speculation. A fishing channel being taken offline due to hate speech, for example, is a boring story. The same thing happening to a chess channel is much juicier due to the implication that an AI accidentally flagged the words "black" and "white" as racist. There are a lot of reasons to be outraged by that idea, but it's important to remember that it may not have happened.
If I am writing a YouTube comment and care about it, I always recheck whether the comment is still there after a couple of minutes, and then a couple of days, because comments are now "disappearing" more and more frequently. The last time my comment got automatically deleted right away was a couple of weeks ago, for "bottle opening" words (in my language) put together. Replacing a few letters in these words with different same-looking characters helped for some time, but eventually even those got deleted a few days later. I should probably give up using this last Google service I still use.
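The look-alike-character trick that comment describes, Latin letters swapped for visually identical Cyrillic ones so a naive string filter no longer matches, can be sketched like this (the mapping is illustrative):

```python
# Homoglyph substitution: each mapped Latin letter is replaced by a
# Cyrillic character that renders nearly identically on screen.
HOMOGLYPHS = {"a": "\u0430", "c": "\u0441", "e": "\u0435", "o": "\u043e"}

def disguise(text):
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "open a bottle"
disguised = disguise(original)
print(disguised == original)   # False
print("bottle" in disguised)   # False: a plain substring filter misses it
```

Filters eventually catch this by normalizing confusable characters before matching, which would explain the commenter's report that the trick only worked for a while.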
YouTube overall generates tremendous value for people who view videos on it.
There are so many YouTube videos being generated for the amount of money being made that it is not economically feasible to hire humans to review all the videos.
Even if there is a human to review videos that have been flagged, there is a time delay to doing so.
YouTube seems to be erring on the side of flagging false positives at least till there is time for human review.
The technology reviewing videos is immature. It may not be an engineering failing. It may be a problem that requires a scientific breakthrough.
So a valid critique is that there is no effective way to reach a human at Google. Critiquing the technology is pointless.
When does AI start being smarter than us? Can't wait.
We need a word or phrase for this phenomenon, where we attempt to substitute human pattern recognition with algorithms that just aren't up to the job. Facebook moderation, Tesla Full Self Driving, the War Games movie, arrests for mistaken facial identification. It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label. Maybe there's a ten syllable German word that expresses it perfectly?
Maybe they should stop deciding what people can and can't say. That would solve the problem, wouldn't it.
A friend of mine was banned for sending a chat message that said “this is a Mexican standoff”. The people in the room were both Mexican, if it matters. We were all confused about why he was permanently banned.