• The likes of ChatGPT, Google’s Bard and Midjourney can embed biases and adverse stereotypes in their users and spread incorrect, nonsensical information
• Marginalised groups are disproportionately affected
• Children are at particular risk

Newswise — Over the span of a few months, generative AI systems such as ChatGPT, Google's Bard and Midjourney have gained widespread popularity among users across diverse domains. However, emerging studies emphasize that these models embed biases and adverse stereotypes in their users while also generating and disseminating seemingly accurate but nonsensical information. Alarmingly, marginalized communities bear a disproportionate share of the harm caused by this fabricated content.

Furthermore, as these models proliferate and their output populates the World Wide Web, such widespread fabrication has the potential to sway human beliefs, not only because people consume information from the internet, but also because much of the training data for future AI models is scraped from that same web. In essence, a feedback loop emerges in which biases and nonsensical content are repeatedly generated, propagated, and eventually absorbed, both by people and by the next generation of models.
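To make that dynamic concrete, the following is a minimal, purely illustrative sketch (in Python) of how such a loop can compound across training generations. Every parameter here, the initial share of biased or fabricated web content, the fraction of new content that is model-generated, and the amplification factor, is an assumption chosen for illustration rather than a figure from the Perspective.

```python
# Illustrative sketch only: all numbers are assumptions, not measurements.
# Each "generation": a model is trained on the current web corpus, emits content
# whose biased/fabricated share mirrors (and slightly amplifies) its training
# data, and that output is published back onto the web, where it becomes part
# of the next model's training data.

def simulate_feedback_loop(
    initial_biased_share: float = 0.05,  # assumed share of biased/fabricated content today
    model_output_volume: float = 0.30,   # assumed fraction of new web content that is model-generated
    amplification: float = 1.10,         # assumed tendency to over-reproduce learned patterns
    generations: int = 5,
) -> list[float]:
    """Return the biased/fabricated share of the corpus after each generation."""
    share = initial_biased_share
    history = []
    for _ in range(generations):
        # The model's output reflects, and slightly amplifies, what it was trained on.
        model_share = min(1.0, share * amplification)
        # The updated corpus blends existing content with the model's output.
        share = (1 - model_output_volume) * share + model_output_volume * model_share
        history.append(round(share, 4))
    return history


if __name__ == "__main__":
    # The biased share drifts upward every cycle, e.g. [0.0515, 0.053, 0.0546, ...]
    print(simulate_feedback_loop())
```

Even with modest assumed parameters, the biased share creeps upward each cycle, which is the self-reinforcing pattern the authors warn about.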

These findings, along with a call for urgent collaboration between psychologists and machine learning experts to evaluate the scale of the problem and develop remedies, are presented in a Perspective published today in the journal Science. The Perspective is co-authored by Abeba Birhane, an adjunct assistant professor in Trinity's School of Computer Science and Statistics (affiliated with Trinity's Complex Software Lab) and a Senior Fellow in Trustworthy AI at the Mozilla Foundation, and it calls for collective action to address these harms.

Professor Birhane emphasized, “Individuals frequently convey uncertainty through phrases such as ‘I think,’ delays in responding, corrections, and speech disfluencies. Generative models, by contrast, give confident, fluent answers with no representation of uncertainty and no ability to communicate what they do not know. As a result, they can cause greater distortion than human inputs and lead people to accept their answers as factually accurate. These challenges are amplified by financial and liability interests that encourage companies to humanize generative models by portraying them as intelligent, sentient, empathetic, or even childlike.”

The Perspective offers a notable example of how statistical regularities in a model can result in Black defendants being assigned higher risk scores. Judges who learn these patterns may adjust their sentencing practices to match the algorithm's predictions, coming to perceive Black individuals as more likely to reoffend. Because that association has been learned, it can persist even after the use of such systems is halted by regulations like those recently adopted in California.

A significant concern is how difficult biases and fabricated information are to dislodge once people have internalized them. Children are at particular risk: they are more prone to anthropomorphizing technology and are more easily influenced than adults, so the impact on their perceptions and understanding can be profound, warranting careful attention and protective measures.

What is needed is swift, detailed analysis that measures the impact of generative models on human beliefs and biases. 

Prof Birhane said: “Studies and subsequent interventions would be most effectively focused on the marginalised populations who are disproportionately affected by both fabrications and negative stereotypes in model outputs. Additionally, resources are needed to educate the public, policymakers, and interdisciplinary scientists, giving them realistically informed views of how generative AI models work and correcting existing misinformation and hype surrounding these new technologies.”

Journal Link: Science