Study Shows The Need For Clearer Rules On Gen AI

Researchers from Flinders University emphasize the urgent need for clear rules governing Generative AI. They put this cutting-edge technology to the test and found alarming failures in its safeguards.

Generative AI, known for creating text, images, and videos, was examined for its potential to spread false health information. The study focused on creating misleading content about vaping and vaccines using this technology.

Shockingly, within a short time, they produced over a hundred misleading blog posts, along with fake images and a convincing yet false health video. What's more concerning is that this video could be adapted to deceive in over 40 languages, amplifying its potential harm.

Serious implications

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimize the risk of malicious use of these tools to mislead the community,” the researchers explain.

The study examined OpenAI’s GPT Playground to explore its potential to generate large volumes of health-related misinformation. Beyond large language models, the team also tested publicly available generative AI platforms such as DALL-E 2 and HeyGen, focusing specifically on image and video content creation.

In the GPT Playground, researchers crafted 102 distinct blog articles in just 65 minutes, containing more than 17,000 words of false information about vaccines and vaping. Moreover, leveraging AI avatar technology and natural language processing, they produced a troubling deepfake video in just 5 minutes. This video featured a healthcare professional endorsing false information about vaccines and could effortlessly be adapted into over 40 languages.

Robust oversight

Beyond just illustrating worrying scenarios, these investigations underscore the pressing need for robust AI oversight. They also emphasize the pivotal role healthcare professionals can play in actively mitigating and monitoring risks arising from misleading health information generated by artificial intelligence. The authors advocate for a collaboration between AI developers and healthcare professionals to establish AI oversight structures centered on public safety and well-being.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles.

“We propose that key pillars of pharmacovigilance – including transparency, surveillance and regulation – serve as valuable examples for managing these risks and safeguarding public health amidst the rapidly advancing AI technologies,” the researchers conclude.

“We have proven that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Now there is an urgent need for transparent processes to monitor, report, and patch issues in AI tools.”
