Newswise — Researchers from the University of Zurich recently conducted a study to explore the potential risks and benefits of AI models, with a specific focus on OpenAI's GPT-3. The team, led by postdoctoral researchers Giovanni Spitale and Federico Germani, along with Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME) at the University of Zurich, assessed GPT-3's capabilities in generating and disseminating (dis)information. The study, which involved 697 participants, examined whether individuals could distinguish accurate information from disinformation presented in the form of tweets. The researchers also evaluated participants' ability to identify whether a tweet was written by a genuine Twitter user or generated by GPT-3, an advanced AI language model. The tweets covered a range of topics, including climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homeopathic treatments for cancer.

AI-powered systems could generate large-scale disinformation campaigns

The study revealed a double-edged picture of GPT-3's capabilities. On the one hand, GPT-3 was able to generate accurate information that was easier to understand than tweets written by real Twitter users. On the other hand, the researchers uncovered a concerning finding: GPT-3 was also disconcertingly proficient at producing highly persuasive disinformation. More worrying still, participants in the study could not reliably distinguish tweets created by GPT-3 from those written by genuine Twitter users.

Federico Germani, one of the researchers, emphasized the implications of these findings, stating, "Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems."

These findings suggest that in situations such as public health crises, where timely and clear communication is crucial, information campaigns created by GPT-3, based on well-structured prompts and evaluated by trained humans, could prove more effective than human-written ones. At the same time, the findings raise serious concerns about AI's potential to perpetuate disinformation, particularly during crises or public health events, when misinformation and disinformation spread rapidly and widely. The study shows that AI-powered systems could be exploited to generate large-scale disinformation campaigns on virtually any topic, endangering not only public health but also the integrity of the information ecosystems on which functioning democracies depend.

Proactive regulation highly recommended

With the growing influence of AI in information generation and assessment, the researchers emphasize the need for policymakers to take decisive action. They urge the implementation of stringent regulations based on evidence and ethical considerations to address the potential threats posed by these disruptive technologies. It is crucial to ensure the responsible use of AI in shaping our collective knowledge and well-being.

Nikola Biller-Andorno stresses the importance of proactive regulation in light of the study's findings, stating, "The findings highlight the critical importance of implementing regulations that can proactively mitigate the potential harm caused by AI-driven disinformation campaigns." Recognizing the risks associated with AI-generated disinformation is vital for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age. Policymakers must prioritize this issue to create an environment that promotes responsible and beneficial use of AI while minimizing its potential negative impacts.

Transparent research using open science best practice

The study followed a comprehensive open science approach, adhering to best practices from pre-registration through to dissemination. Giovanni Spitale, who serves as a UZH Open Science Ambassador, emphasizes the significance of open science in promoting transparency and accountability in research: by enabling scrutiny and replication, it underpins the credibility of research findings. For this study in particular, open science allows stakeholders to access and evaluate the data, code, and intermediate materials, which strengthens the credibility of the findings and supports informed discussion of the risks and implications of AI-generated disinformation.

Interested individuals can access these resources through the OSF repository at the following link: https://osf.io/9ntgf/.

Literature:

Giovanni Spitale, Federico Germani, Nikola Biller-Andorno: AI model GPT-3 (dis)informs us better than humans. Science Advances, 28 June 2023. Preprint: https://arxiv.org/abs/2301.11924
