Murder, she wrote: YouTube’s AI chat summary makes mistakes

YouTube, swept up in ChatGPT and Midjourney fever, is extremely optimistic about generative artificial intelligence, with CEO Neal Mohan saying that generative AI will “reinvent video and make the seemingly impossible possible.” In that spirit, the platform has released several AI tools, including a summary generator. The tool condenses a live stream’s chat into a one-paragraph summary so that latecomers, and even people who couldn’t tune in at all, can see what viewers were talking about.

It turns out the summary generator isn’t quite as sophisticated as its developers had hoped. Pokémon YouTuber Shiny Catherine got a nasty shock when she finished her latest live stream and looked at the chat summary YouTube’s AI had placed next to the stream’s VOD.

“Viewers in the chat are divided as to whether Catherine murdered five children or not,” it said.



“Hey uh @TeamYouTube?” she tweeted. “Can you guys maybe stop this experiment??? The whole chat was just calling me purple, why is that in the summary???”

Some sharp-eyed people in her Twitter replies figured out what had probably happened: she and her community had been discussing Five Nights at Freddy’s during the stream, and Five Nights’ main villain, William Afton, is often depicted in the games as the “Purple Guy.” YouTube’s AI somehow interpreted the chat calling Catherine “purple” to mean that she was Afton, and Afton did indeed murder five children. (If that’s a spoiler for you, I’m sorry, but the games have been around for years.)

While the Five Nights connection is kind of funny, Catherine’s tweet hints at genuine irritation. “This feels legally problematic,” she said, adding that YouTube should tweak the AI tool so that its output stays family-friendly.

Her dismay grew when the TeamYouTube account pointed out that creators cannot disable the summary generator, which is an experimental feature.

It’s clear why YouTube is denying creators the option to opt out: the platform is presumably using this whole experiment to train its AI, and doesn’t want its pool of training data to shrink. But we also completely understand why Catherine, and probably other creators out there, are upset about false information being posted alongside their videos. YouTube has made a lot of noise about reducing the amount of misinformation on its platform, yet now its own AI tool is producing more of it, and placing that misinformation in a prominent, official-looking spot where it’s one of the first things viewers see.

YouTube’s optimism about generative AI may or may not pay off. Either way, this situation looks like a sign that trendy tools need a little more time to mature before they’re bolted onto the channels of creators who get no say in whether they’re used.

By Bronte
