12 Epic AI Fails That Shocked the World

It’s been almost two years since ChatGPT was first released in November 2022. The conversational AI chatbot was a massive hit, and a wave of other AI products and tools launched in the years that followed. While AI is positively transforming many industries and processes, it has also been responsible for scams and disasters. Let’s explore 12 famous AI disasters that made headlines.

Fake Dublin Halloween Parade

Image Credits: Eduardo Mogollán from Imágenes de Eduardo ML via Canva.com

A website named ‘My Spirit Halloween’ announced a Halloween parade supposedly organized by Macnas. Thousands of Dublin locals gathered along the advertised route from Parnell Square to Temple Bar, only to find that no such parade existed. Upon investigation, authorities discovered that the website published AI-generated news and fictional events for commercial gain. While the episode became a laughing matter, it also raised concerns about how AI-generated misinformation can create chaos and confusion in any city.

‘Goodbye Meta AI’ Hoax

Image Credits: Erik Mclean from Pexels via Canva.com

An online hoax grabbed massive attention when celebrities like Ashley Tisdale, Tom Brady, and James McAvoy joined the ‘Goodbye Meta AI’ trend. Participants believed that sharing the message would prevent Meta from using their information to train its AI models. The claim was debunked when Meta announced that sharing such stories isn’t a valid form of objection, leaving many celebrities embarrassed for falling for the trick.

AI Accused of Liberal Bias

Image Credits: SeventyFour via Canva.com

Alexa, Amazon’s AI voice assistant, created a massive uproar when some conservative leaders accused it of liberal bias. A widely circulated video showed Alexa listing positive qualities and achievements of Kamala Harris when asked about voting for her, while declining to speak positively about Donald Trump when asked a similar question. The backlash was swift, and Amazon quickly fixed the glitch and issued a clarifying statement to the public.

Donald Trump Falling for AI-Generated Image

Image Credits: Kaboompics.com from Pexels via Canva.com

A big controversy stirred when Donald Trump shared an image of Taylor Swift endorsing him for president before the 2024 elections. After he posted the image on his Truth Social account, it was picked up by major publications and circulated widely across social media platforms. It was later found that the image was AI-generated. The situation became embarrassing for Trump’s supporters when Taylor Swift clarified that the image was fake and that she supported Kamala Harris, not Donald Trump, in the presidential election.

Grok Accuses NBA Player of Vandalism

Image Credits: Stanley Morales from Pexels via Canva.com

X’s chatbot Grok accused NBA player Klay Thompson of vandalizing homes with bricks. According to reports, Grok generated this fabricated story because it took social media posts about his ‘shooting bricks’ too literally. Shooting bricks is basketball slang for badly missing shots, but the AI mistook it for throwing real bricks and invented a fictional story.

Netflix Received Backlash for Using AI-generated Images

Image Credits: prathan chorruangsak via Canva.com

Netflix was surrounded by controversy when people highlighted AI-generated imagery in the documentary ‘What Jennifer Did.’ Jennifer Pan’s fingers and teeth looked abnormal in several images, raising questions about the use of AI. While the executive producer later clarified that the abnormalities resulted from editing other parts of her original pictures, viewers weren’t entirely convinced. The incident presented Netflix in a bad light, so it’s safe to assume the company will be more careful with AI going forward.

AI Chatbot Asks Businesses to Break Laws

Image Credits: Iqbal Nuril Anwar from Corelens

In a bizarre event, a conversational AI chatbot launched by New York City was found advising small businesses to violate laws. It misrepresented city regulations in ways that could have landed business owners in legal trouble. For example, the chatbot falsely told an employer that it’s legal to fire an employee who complains about sexual harassment, and claimed that restaurants can serve customers cheese that had been nibbled by a rodent. The event raised concerns about deploying chatbots too casually or relying on their advice for crucial decisions.

Google’s Gemini Accused of Racial Bias

Image Credits: Monkey Business Images via Canva.com

Google’s Gemini launched an AI image-generation feature, but it was soon met with accusations of racial bias. According to reports, Gemini depicted prominent white historical figures, such as the US Founding Fathers, as people of color. Some right-leaning accounts also highlighted that the chatbot generated images of people of color when asked for a picture of an American or Swedish woman. Google quickly offered a clarification and promised to improve the feature’s accuracy.

AI Adds Disturbing Polls to Articles

Image Credits: Annastills via Canva.com

Microsoft’s overreliance on automation and AI has been lowering news industry standards. Under licensing agreements with major publications, it republishes their articles in exchange for a share of advertising revenue. In one such instance, Microsoft republished an article about a 21-year-old woman’s death from severe head injuries, and its AI automatically attached a poll asking readers to vote on the cause of her death, with murder, suicide, and accident as the options. The poll irked many readers and created a backlash, after which it was pulled down.

AI-Written Obituary Calls Deceased ‘Useless’

Image Credits: Vlada Karpovich from Pexels via Canva.com

Microsoft is known to use AI to write content for its news site, but it hasn’t gone well for the company. It faced massive backlash after its AI-generated obituary for NBA star Brandon Hunter called him ‘useless.’ The obituary was titled ‘Brandon Hunter useless at 42.’ Readers found the headline offensive and the rest of the article incomprehensible. Microsoft quickly deleted the obituary, but the reputational damage was already done.

Supermarket AI Suggests Deadly Recipes

Image Credits: PhonlamaiPhoto’s Images via Canva.com

Every organization is trying to integrate AI into its business, but it doesn’t always go as expected. One of New Zealand’s supermarket chains found itself in trouble when its AI-powered meal planner suggested recipes for deadly chlorine gas, poison bread sandwiches, and mosquito-repellent roast potatoes. While the supermarket took notice and restricted the ingredients users could enter, the meal planner remains active.

OpenAI Sued Over Fabricated Allegations

Image Credits: August de Richelieu from Pexels via Canva.com

AI chatbots have a terrible reputation for hallucinating false responses to queries, and one such instance landed a chatbot maker in legal trouble. Radio host Mark Walters sued OpenAI after ChatGPT created a fake legal summary accusing him of defrauding and embezzling funds from a gun rights organization. OpenAI responded that not all information generated by AI is fully reliable, a clarification that only deepened concerns about its trustworthiness.
