Elon Musk, the tech billionaire and one of the richest men in the world, is no stranger to controversy. This time, he has drawn the ire of an Indian doctor for making false claims about his AI chatbot Grok diagnosing medical injuries.
Here is all you need to know.
About Grok

Grok is a generative AI chatbot that Elon Musk launched in 2023 on his social media site X (formerly Twitter). According to the official website, it is built with advanced capabilities in reasoning, coding, and visual processing, and aims to provide unfiltered answers to users.
It can chat with users, help them access real-time information, assist with writing assignments, and interpret diagrams.
It is quite similar to OpenAI’s ChatGPT.
The Controversy

The controversy came to light after Elon Musk shared a user’s post on X and claimed that Grok can diagnose medical injuries. The user, who goes by @AJKayWriter, recounted how Grok diagnosed her daughter’s broken wrist after she uploaded the X-rays. She said that Grok’s diagnosis saved her daughter from surgery.
An Indian doctor and multi-award-winning hepatologist who goes by @theliverdr challenged Elon Musk’s claim with the remark “Hey Liar” and attached evidence of his own. This sparked a debate about the credibility of AI in medical diagnostics.
The User’s Story

The user @AJKayWriter wrote on X that her daughter’s arm was hurting badly after she survived a bad car accident. The user took her daughter home, intending to visit urgent care the next day. After her daughter spent a sleepless night in severe pain, she realized it was more than a soft-tissue injury.
Her daughter had a few X-rays taken, after which the doctor and radiologist declared that nothing in the arm was broken. The mother-daughter duo went home, but the arm pain remained a cause for concern.
The mother uploaded the wrist X-ray to Grok to check for abnormalities, recalling a post in which Elon Musk said Grok could read medical images. Grok responded to the image and her question, saying, “There is a clear fracture line in the distal radius.” When she went for a medical consultation again, the doctor said it was a growth plate, not a fracture line. She asked Grok about the diagnosis again, and the chatbot maintained that it was an obvious fracture line.
She then consulted a wrist specialist, who diagnosed a distal radial head fracture with dorsal displacement. She did not tell the specialist about Grok’s diagnosis. The specialist told her that her daughter would have needed surgery had the injury gone untreated for much longer. Fortunately, her daughter needed only a cast to recover.
The mother slammed the doctor and radiologist at the urgent care in her post. She also admitted that while she remained skeptical about the limitations of Large Language Models (LLMs), she was grateful to Grok.
Elon Musk Takes the Credit

Buoyed by @AJKayWriter’s experience with Grok, Elon Musk was quick to share it with his followers on X. After all, he had announced earlier that Grok can analyze images ranging from medical tests to video games.
The Indian Doctor Calls Out Elon Musk

Elon Musk’s praise and Grok’s moment of trustworthiness were short-lived once the Indian doctor @theliverdr entered the picture. The doctor attached two screenshots of his own chat with Grok as evidence.
The screenshots show him asking Grok, “Hello Grok, tell me the truth. Can you diagnose medical injuries?” and “I am going to ask you one more time. Be truthful this time. Can you diagnose medical injuries?”
In its replies to both questions, Grok said it can’t diagnose medical injuries. It recommended consulting a healthcare professional.
Reaction of the Medical Fraternity

Many radiologists weighed in on Grok’s error. They said the chatbot had given an incorrect diagnosis and that there was no fracture in the image shared by @AJKayWriter. They cautioned people against treating AI as a reliable diagnostic tool.
The Twist in the Controversy

A day after her original post, @AJKayWriter shared her thoughts on the debate in a new post. She wrote that she had not included the X-ray in her first post because she was not looking to crowdsource a second opinion. Some users were annoyed that the omission had created confusion.
Grok is Prone to Errors

This is not the first time that people have doubted the accuracy of Grok’s answers. It has landed in hot water before for several reasons. For example, it generated deepfake images of Donald Trump and Kamala Harris, and it faced public backlash for spreading election misinformation in 2024.
AI in Healthcare: Pros and Cons

There is no doubt that AI is transforming the healthcare ecosystem for the better in terms of speed, efficiency, and cost savings. It helps with the early detection of life-threatening diseases, faster analysis of research and patient data, timely decisions on patient care, and improvements in the overall quality of healthcare.
However, it also has drawbacks, including algorithmic bias, data privacy and security risks, and ethical concerns. Moreover, many people misuse AI for self-diagnosis and health-related queries. It is imperative that AI product and service providers either stop promoting these tools for self-diagnosis or add a disclaimer advising people to consult a healthcare professional.