Scientists warn patients that ChatGPT is making up false information about cancer

Researchers at the University of Maryland School of Medicine have warned against using ChatGPT for medical advice after a study found that the artificial intelligence (AI)-powered chatbot made up facts when asked about breast cancer screening.

According to an article published on the Daily Mail website citing the research, the chatbot answered one in ten questions about breast cancer screening incorrectly, and even its correct answers were not as “complete” as those found through a simple search.

The researchers reported that, in some cases, the AI chatbot even used fake newspaper articles to support its claims.

The study comes at a time when users are being advised to treat the software with caution, as it has a tendency to “hallucinate,” or, in other words, make things up.

The study that put ChatGPT in the spotlight

Researchers asked ChatGPT to answer 25 questions related to breast cancer screening advice. Because the chatbot is known to vary its answers, each question was asked three times. The results were then analyzed by three radiologists trained in mammography.

Eighty-eight percent of the answers were appropriate and easy to understand. However, some answers were inaccurate or even fictitious, the researchers cautioned.

One answer, for example, was based on outdated information: it advised delaying a mammogram for four to six weeks after receiving the COVID-19 vaccine, even though that guidance had been revised more than a year earlier to recommend that women not wait.

ChatGPT also provided inconsistent responses to questions about breast cancer risk and where to get a mammogram. The study found that responses “varied significantly” each time the same question was asked.

Dr. Paul Yi, co-author of the study, said, “Our experience has shown us that ChatGPT sometimes invents fake newspaper articles or health consortia to support its claims. Consumers should be aware that these are new and unproven technologies, and should always rely on their doctor, rather than ChatGPT, for advice.”
