Research shows that bots may have less influence on people than previously thought

Article ID: 702772

Released: 24-Oct-2018 5:05 PM EDT

Source Newsroom: University of Arkansas at Little Rock

  • Credit: Lonnie Timmons III/University of Arkansas at Little Rock

    Dr. Nitin Agarwal

  • Credit: Ben Krain/University of Arkansas at Little Rock

    Zachary Stine

Newswise — New research at the University of Arkansas at Little Rock digs into assumptions about the influence of bots on people’s opinions. 

It is often assumed that disinformation campaigns carried out on social media by bots (computer programs that automate social media messaging and content engagement) are highly effective at changing people’s opinions through repetition.

However, Zachary Stine, a UA Little Rock doctoral student in computer and information science, tested this assumption with an experiment to determine how easily a group of artificial agents could be influenced under three computational models. Artificial agents are simulated entities in a computer program that represent simplified versions of real-world actors, in this case people.

Stine is also a researcher at COSMOS (Collaboratorium for Social Media and Behavioral Studies) – a research group led by Dr. Nitin Agarwal, Jerry L. Maulden-Entergy Endowed Chair and Distinguished Professor of Information Science. Agarwal is a co-author of the study.

The researchers adopted amplification, a strategy commonly employed by bots on social media, and treated it as a simple agent strategy situated within three models of opinion dynamics, each using a different mechanism of social influence. Although many published studies show how bots propagate misinformation on social media, very few show how bots actually affect a population’s opinions.

The study used three broad classes of social influence models: assimilative influence, similarity-biased influence, and repulsive influence. Each mechanism is a set of rules governing how the artificial agents change their opinions. In assimilative influence, agents always compromise, changing their opinions to become more similar to each other. In similarity-biased influence, agents compromise only if their opinions are already similar enough to the other agents’ opinions. In repulsive influence, agents likewise compromise when their opinions are already similar enough; when opinions are dissimilar, however, the agents do not compromise and instead shift their opinions to become even more different.
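The three mechanisms can be sketched as pairwise opinion-update rules. The sketch below is illustrative only: the convergence rate `mu`, the similarity threshold `eps`, and the [0, 1] opinion scale are standard bounded-confidence-model conventions assumed here, not the paper’s exact equations.

```python
def assimilative(a, b, mu=0.5):
    """Assimilative influence: agent a always moves toward agent b."""
    return a + mu * (b - a)

def similarity_biased(a, b, mu=0.5, eps=0.3):
    """Similarity-biased influence: a compromises only if b is close enough."""
    return a + mu * (b - a) if abs(a - b) <= eps else a

def repulsive(a, b, mu=0.5, eps=0.3):
    """Repulsive influence: similar opinions converge; dissimilar ones repel."""
    if abs(a - b) <= eps:
        return a + mu * (b - a)
    pushed = a - mu * (b - a)           # move away from b instead
    return max(0.0, min(1.0, pushed))   # keep the opinion on the [0, 1] scale
```

Under the first rule any two agents eventually meet in the middle; under the other two, sufficiently distant agents either ignore each other or polarize, which is what makes a population harder for an amplifying bot to move.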

A total of 91 unique sets of conditions were tested for this study. For each of these, 500 simulation runs were performed and analyzed, totaling 45,500 simulation runs.

“It is often assumed that when bots on social media amplify some opinion, that inevitably more people will adopt the opinion being amplified,” Stine said. “Our findings suggest that this assumption only holds under very specific and rigid assumptions about how people influence each other.”

The researchers employed agent-based models (a class of computational models that simulate the actions and interactions of autonomous agents to measure their impact on the system as a whole) of opinion dynamics, which provide a useful environment for understanding and assessing social influence strategies. This approach allowed the researchers to build theory about the efficacy of various influence strategies and to highlight potential gaps in existing models.

Stine and Agarwal observed that, in models where artificial agents are inherently polarizing, it is very difficult to sway the majority of the population’s opinions; only under more complex strategies did the researchers find that the agents could be influenced. In other words, agents with strong inherent opinions are far less likely to be swayed.

“Instead of simply repeating the same opinion over and over again, the complex strategies work by amplifying an initial opinion and then gradually shifting that opinion until it reaches the target opinion that the bot actually wants the population to adopt,” Stine said. “While the findings presented in this paper are theoretical, they illustrate how small changes in our assumptions about how people influence each other’s opinions can dramatically affect the success or failure of a campaign that tries to manipulate a population’s opinions.”
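The gradual-shift strategy Stine describes can be sketched as a small simulation. Everything below is a hypothetical illustration under assumed parameters (a similarity-biased population on a [0, 1] opinion scale, a fixed drift rate, an arbitrary target of 0.95), not the study’s actual model or results.

```python
import random

def shifting_bot_campaign(n_agents=100, steps=3000, mu=0.5, eps=0.2,
                          target=0.95, drift=0.0005, seed=0):
    """Sketch: a bot starts near the crowd's average opinion, then slowly
    drags its stated opinion toward its true target so it never falls
    outside the similarity threshold of the agents it is influencing."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    bot = sum(opinions) / n_agents  # open with an opinion close to the crowd
    for _ in range(steps):
        # two random agents interact under the similarity-biased rule
        i, j = rng.sample(range(n_agents), 2)
        if abs(opinions[i] - opinions[j]) <= eps:
            move = mu * (opinions[j] - opinions[i])
            opinions[i] += move
            opinions[j] -= move
        # the bot addresses one random agent
        k = rng.randrange(n_agents)
        if abs(opinions[k] - bot) <= eps:
            opinions[k] += mu * (bot - opinions[k])
        # the bot creeps toward the opinion it actually wants adopted
        bot = min(target, bot + drift)
    return opinions, bot
```

The key design point the sketch illustrates is the slow drift: a bot that simply broadcast the target opinion from the start would be outside most agents’ similarity threshold and would be ignored.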

In conclusion, the researchers theorize that it would be extremely challenging for bots to influence a real audience through repetition tactics alone.

“Examining social influence strategies of bots from a theoretical perspective of agent-based models is not just timely and relevant, but also foundational in advancing our understanding of sociotechnical behaviors, their evolution, and effects on society and democratic values,” Agarwal said.

Stine presented the findings, “Agent-based models for assessing social influence strategies,” at the International Conference on Complex Systems (ICCS 2018) July 22-27 in Cambridge, Massachusetts.

This research is funded in part by the U.S. National Science Foundation, U.S. Office of Naval Research, U.S. Air Force Research Lab, U.S. Army Research Office, U.S. Defense Advanced Research Projects Agency and the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock.


