Newswise — A new UCL-led study suggests that incorporating 'trust' and 'distrust' buttons, in addition to the conventional 'like' buttons, on social media platforms might mitigate the dissemination of false information.

Incentivising accuracy cut the reach of false posts in half, according to the findings published in eLife.

Professor Tali Sharot (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology), who co-led the study, noted that the spread of misinformation, commonly known as 'fake news', has escalated in recent years. It has played a significant role in political polarization and has influenced people's beliefs on topics ranging from vaccine safety to climate change to attitudes towards diversity, while current countermeasures, such as flagging inaccurate posts, have had only limited effect.

She said: “Part of why misinformation spreads so readily is that users are rewarded with ‘likes’ and ‘shares’ for popular posts, but without much incentive to share only what’s true.

“Here, we have designed a simple way to incentivise trustworthiness, which we found led to a large reduction in the amount of misinformation being shared.”

In a recent study published in the journal Cognition, Professor Sharot and her colleagues found that people were more inclined to share statements on social media if they had encountered them before, because repeated information was perceived as more likely to be true, underscoring how repetition helps misinformation take hold.

To explore a potential remedy, the researchers ran a series of six experiments with a total of 951 participants on a simulated social media platform. The platform simulated the sharing of news articles, half of which were intentionally misleading. Participants could react to the articles with the usual 'like' or 'dislike' responses and repost them; in some versions of the experiment they could also react with 'trust' or 'distrust'.

The researchers found that the trust/distrust buttons proved both popular and effective: participants used them more often than the like/dislike buttons, and the sharing of accurate information increased as people sought to earn 'trust' reactions. Computational modeling of the data further revealed that, once trust/distrust reactions were available, participants paid more attention to how reliable a news story appeared when deciding whether to repost it.

Furthermore, participants who had used the versions of the platform with trust/distrust buttons ended up with more accurate beliefs than those who had not, indicating that the feature also improved overall belief accuracy.

PhD student Laura Globig (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology), who co-led the study, emphasized that trustworthiness indicators could feasibly be built into existing social media platforms. Such buttons, she noted, could curb the spread of misinformation while maintaining user engagement, making them a practical and viable remedy.

Although it is difficult to predict how this would play out in the real world, where a far wider range of influences is at work, the severe harms of online misinformation mean that trust and distrust buttons could be a valuable supplement to existing efforts to combat it.


Journal Link: eLife