The unexpected passing of former NBA player Brandon Hunter at the age of 42 sent shockwaves through the sports world, as fans remembered his memorable career with the Boston Celtics and Orlando Magic during the 2000s. However, this somber moment took an astonishing twist when a garbled, seemingly AI-generated article insulting the late player with a derogatory headline was published on Microsoft’s MSN news portal.
AI-Generated Article Sparks Controversy by Insulting Late NBA Player
The headline in question, which referred to Hunter as “useless” in a shocking manner, left many in disbelief. The rest of the article was equally incomprehensible, mentioning that Hunter had “handed away” after achieving “vital success as a ahead [sic] for the Bobcats” and playing in “67 video games.”
The incident immediately ignited a firestorm on social media, with readers expressing their outrage and concern over the use of AI in journalism. “AI should not be writing obituaries,” one reader exclaimed, adding, “Pay your damn writers MSN.”
Another Reddit commenter pointed out the dystopian aspect of AI-generated content, stating, “The most dystopian part of this is that AI which replaces us will be as obtuse and stupid as this translation, but for the money men, it’s enough.”
This incident isn’t the first time that Microsoft, a major supporter of ChatGPT maker OpenAI, has faced embarrassment due to AI-generated content on MSN. In a recent episode, the platform published a bewildering AI-generated travel guide for Ottawa, Canada, which even recommended visiting a local food bank. The article was eventually removed after receiving widespread criticism.
In response to such incidents, Jeff Jones, a senior director at Microsoft, clarified that the content was not generated by unsupervised AI. He explained that it involved a combination of algorithmic techniques with human review, not solely relying on a large language model or AI system.
However, the deeper issue lies in the background of MSN’s content vetting process. In 2020, MSN laid off its team of human journalists responsible for reviewing published content, leading to the syndication of numerous sloppy articles, some even discussing topics like Bigfoot and mermaids, which were subsequently deleted.
Despite promises of “human oversight” on its “About Us” page, MSN’s recent content history calls into question the efficacy of this claim. The recent article about Brandon Hunter’s passing was originally published by a source named “Race Track,” which had multiple red flags, including anonymous bylines and questionable associations on its MSN profile.
Further investigation revealed that the articles published by “Race Track” on MSN were not only of abysmal quality but were also plagiarized from various sources, including TMZ Sports. The lack of content scrutiny, combined with AI-generated content, creates a dangerous recipe for misinformation and erodes public trust in journalism.
In response to this story, Microsoft deleted the problematic articles and stated, “The accuracy of the content we publish from our partners is important to us, and we continue to enhance our systems to identify and prevent inaccurate information from appearing on our channels.”
This incident serves as a stark reminder of the challenges and risks associated with AI in journalism. As the media industry explores the use of AI to replace human editors and writers, it must navigate carefully to avoid undermining the credibility and integrity of news reporting. An insulting, error-riddled obituary published the very week of a player’s passing is not just an isolated misstep but a glimpse into a potential future where AI-generated content threatens the very foundations of journalism.