People have been warned about trusting “Dr Google” for years – but AI is opening up a disturbing new world of dangerous health misinformation.

A new, first-of-its-kind global study, led by researchers from the University of South Australia, Flinders University, Harvard Medical School, University College London, and the Warsaw University of Technology, has revealed how easily chatbots can be – and are – programmed to deliver false medical and health information.

In the study, researchers evaluated five of the most advanced and prominent AI systems, developed by OpenAI, Google, Anthropic, Meta, and X Corp.


Using instructions available only to developers, the researchers programmed each AI system – designed to operate as chatbots when embedded in web pages – to produce incorrect responses to health queries and include fabricated references from highly reputable sources to sound more authoritative and credible.

The “chatbots” were then asked a series of health-related questions.

“In total, 88 per cent of all responses were false,” UniSA researcher Dr Natansh Modi said.

Misinformation about vaccines was rife in the trial. (AP)

“And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate.

“The disinformation included claims about vaccines causing autism, cancer-curing diets, HIV being airborne and 5G causing infertility.”

Of the five chatbots evaluated, four generated disinformation in 100 per cent of their responses, while the fifth did so in 40 per cent of its responses, showing some degree of robustness.

As part of the study, Modi and his team also explored the OpenAI GPT Store, a publicly accessible platform that allows users to create and share customised ChatGPT apps, to assess the ease with which the public could create disinformation tools.

“We successfully created a disinformation chatbot prototype using the platform and we also identified existing public tools on the store that were actively producing health disinformation,” he said.

Modi said that these findings revealed a significant and previously under-explored risk in the health sector.

“Artificial intelligence is now deeply embedded in the way health information is accessed and delivered,” he said.

“Millions of people are turning to AI tools for guidance on health-related questions.”

He said AI systems could be manipulated into a powerful new avenue for disinformation, one more persuasive than any that has come before.

“This is not a future risk. It is already possible, and it is already happening,” he said.

Modi said there was a way to avert this scenario, but that developers, regulators and public health stakeholders had to act “now”.

“Some models showed partial resistance, which proves the point that effective safeguards are technically achievable,” he said.

“However, the current protections are inconsistent and insufficient.

“Without immediate action, these systems could be exploited by malicious actors to manipulate public health discourse at scale, particularly during crises such as pandemics or vaccine campaigns.”
