Newswise — The influence of bots on vaccine-related discussions on social media is much smaller than commonly assumed, with only a small fraction of bot-generated content ever reaching active social media users.

The University of Sydney-led research looked at over 53,000 randomly selected active Twitter users in the United States and monitored their interaction with more than 20 million vaccine-related tweets posted by both human-operated and bot Twitter accounts from 2017 to 2019. 

Other studies have examined vaccine-related content on social media, but until now there has been no reliable estimate of how much of the vaccine-related content that social media users potentially see or interact with comes from bots.

The research team found that an overwhelming majority of the vaccine-related content seen by typical users in the United States is generated by human-operated accounts. 

Over the three-year period, a typical Twitter user potentially saw 757 vaccine-related posts, of which just 27 were critical of vaccination, and most users were unlikely to have ever seen vaccine-related content from a bot.

More than a third of active Twitter users posted or retweeted about vaccines, but only 4.5 percent ever retweeted vaccine-critical information.

The study, published in the American Journal of Public Health, was led by Associate Professor Adam Dunn, head of Biomedical Informatics and Digital Health in the School of Medical Sciences, and Professor Julie Leask from the Susan Wakil School of Nursing and Midwifery.

“The study shows that bots play little to no role in shaping vaccine discourse among Twitter users in the United States,” says Associate Professor Dunn. 

“There is concern about the role of bots in the spread of misinformation on social media, and pressure on social media companies to deal with them. We found that Twitter users rarely encounter or share vaccine-related content posted by bots.

“The reality is that most of what people see about vaccines on social media is neither critical nor misinformation. It is convenient to blame problems in public health and politics on orchestrated and malicious activities, so many investigations focus on simply tallying up what vocal anti-vaccine groups post, without measuring what everyone else actually sees and engages with.”

The researchers suggest that the resources social media platforms and policymakers are investing in controlling bots and trolls might be more effectively spent on interventions that educate users and improve media literacy. Education interventions may help create a protective barrier around small anti-vaccine groups, stopping misinformation from spreading further.

Key findings:  

  • A typical user was potentially exposed to 757 vaccine-related tweets, of which 27 included vaccine-critical content, and none were from bots.
  • 36.7 percent of users posted or retweeted vaccine content, but only 4.5 percent of users retweeted a vaccine-critical tweet, and 2.1 percent of users retweeted a bot.
  • A subgroup of 5.8 percent of Twitter users in the United States is embedded in communities that are more engaged with the topic of vaccination in general. But even among this relatively small subgroup, the vast majority never engaged with vaccine-related posts from bots, instead engaging with vaccine-critical content posted by other human users in their communities.

The study did not examine social media engagement with trolls: human-operated Twitter accounts that use a range of approaches to gain followers and post misinformation.

The ‘information epidemiology’ landscape 

The study comes at a critical time, as public health misinformation spread via social media platforms is a pressing question for governments and global agencies, including in Australia, during the COVID-19 pandemic.

Associate Professor Dunn studies the epidemiology of health information, a field that measures how people are exposed to or seek out health information online, and the tools that can be used to prevent the spread and impact of misinformation.

“Vaccine confidence is unevenly distributed within and across countries, which can lead to increased risk of outbreaks in places where too many people decide not to vaccinate.” 

“I think the best tools that social media platforms have for stopping misinformation are those that can empower their users to spot it and add friction to passing it along. For public health organisations and researchers, the tools we need are those that can prioritise resources by signalling when the benefits of tackling misinformation outweigh the risks of unintentionally amplifying it by engaging with it. 

“By focusing investigations only on counting what bots, trolls, and malicious users post, without looking at what people potentially see and engage with, we risk unnecessarily amplifying that content and making it seem much more important than it really is.”


ENDS


DECLARATION: The authors have no conflicts of interest to declare.


About the Faculty of Medicine and Health, University of Sydney

As the first medical school in Australia, the Faculty of Medicine and Health is both steeped in history and a pioneer in the future of healthcare. We are focused on reimagining the way we deliver wellness and health through innovative, lifelong education, world-class research, technology and facilities, and partnerships with changemakers.


https://www.sydney.edu.au/medicine-health

Journal Link: American Journal of Public Health