Newswise — Athens, Ga. – A trending story on Twitter could mean thousands of people care about an issue—or that some computers are doing their jobs.

New research from the University of Georgia found that Twitter “bots” can be the driving forces behind dialogue in social movements, possibly leading to journalistic attention and governmental change. 

“When a topic trends on Twitter, chances are a lot of central or very well-connected accounts are tweeting about it and perhaps shaping how others react. We found that some of these central accounts are actually bots,” said Terry College of Business Ph.D. student Carolina Salge, who co-authored the research. “Once enough accounts are tweeting about the same thing, that creates buzz, and organizations really respond to buzz.”

Bots (short for robots) are simple computer programs designed to carry out automated tasks. In internet terms, bots are non-human actors that often try to go undetected.
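To see how little machinery such a bot requires, consider the minimal sketch below, written in Python against the third-party Tweepy library (version 4 interface assumed). The credentials, hashtag, and timing are placeholders, and platform access rules have changed since this research was conducted; the point is only the shape of the automation:

    import time

    import tweepy

    # Placeholder credentials; a real bot needs keys issued by the platform.
    CONSUMER_KEY = "..."
    CONSUMER_SECRET = "..."
    ACCESS_TOKEN = "..."
    ACCESS_TOKEN_SECRET = "..."

    auth = tweepy.OAuth1UserHandler(
        CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET
    )
    api = tweepy.API(auth, wait_on_rate_limit=True)

    def amplify(hashtag, batch=10):
        # The bot's entire job: find recent tweets with the hashtag, retweet them.
        for tweet in tweepy.Cursor(api.search_tweets, q=hashtag).items(batch):
            try:
                api.retweet(tweet.id)
            except tweepy.TweepyException:
                pass  # already retweeted, deleted, or protected; skip it

    while True:
        amplify("#SomeCause")
        time.sleep(300)  # wait five minutes, then do it again

Everything here is mechanical: search, retweet, sleep, repeat. That simplicity is what allows a handful of such scripts to manufacture the appearance of grassroots volume.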

Although we’ve known about Twitter bots for years, the new research, recently published in Academy of Management Discoveries, marks the first time bots’ social clout has been studied in the fields of information systems and management. Because bots are growing more prevalent and sophisticated, their invisible influence may be shaping news reports and social media research, said Elena Karahanna, study co-author and professor of management information systems at Terry.

“Bots amplify the message. They amplify how many people the message reaches and how fast it reaches them,” said Karahanna, also the Rast Professor of Business at UGA. “They spread the word very, very quickly. That’s one reason they can become central actors in these networks.” 
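One way to make “central actors” concrete is to rank accounts by their centrality in the retweet network, a standard move in social network analysis. The sketch below is illustrative only, not the authors’ method; it uses the networkx library, and the toy graph and account names are invented:

    import networkx as nx

    # Toy retweet network: an edge u -> v means account u retweeted account v.
    # All account names are invented for illustration.
    G = nx.DiGraph()
    G.add_edges_from([
        ("bot_1", "activist"), ("bot_2", "activist"), ("bot_3", "activist"),
        ("bot_1", "news_feed"), ("bot_2", "news_feed"),
        ("reader_a", "activist"), ("reader_b", "bot_1"),
    ])

    # In-degree centrality: roughly, the share of other accounts retweeting you.
    ranking = sorted(nx.in_degree_centrality(G).items(),
                     key=lambda item: item[1], reverse=True)
    for account, score in ranking:
        print(f"{account}: {score:.2f}")

In this toy network, the account the bots pile onto rises to the top of the ranking, which is how automated retweeting can make a message and its source look central.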

The notion that bots can be central to a social movement had been largely overlooked by researchers in information systems and management. Salge was completing a class assignment on social networks when she discovered odd patterns in her data that led her to uncover a secret world of fake Twitter accounts working to push an agenda.

She took the discovery to Karahanna, and the two began to investigate other ways bots are being used and new ways for researchers to identify them.

For example, they studied the “fem bots” created by the extramarital dating site Ashley Madison. They found the organization populated its site with fake female profiles that were actually bots, programmed to send simple messages to male users to entice them into paying for memberships.

But while bots often try to pass as humans online, their purposes are not always nefarious, Salge said.

“Most of the research on bots focuses on detection because there is a clear assumption that they’re often bad,” she said. “But we started to see that bots can also be used for good, like protesting corruption. We know from prior research that boycotts and protests that attract mainstream media attention are in a better position to get their demands met. It appears that a lot of movements are using bots to increase awareness of their cause on social media in the hope of being covered by the mainstream media. And if that is indeed the case, it is definitely one way to put pressure on organizations or governments to do something.”

In another instance, the authors examined the online protest that erupted following a 2013 ruling by Brazil’s Supreme Federal Court that was seen as too lenient on corrupt politicians. Once the verdict was handed down, thousands of Brazilians took to Twitter to proclaim their outrage. Some protesters created bots that retweeted relevant hashtags or news stories, catapulting the story to “trending” status on Twitter and gaining widespread attention.

But just as protesters may employ bots to push for government reform, employees can potentially use bots to add volume to their complaints, Karahanna said. 

“Uber had disputes with its contractors—they don’t call them employees—about their compensation and benefits,” she said. “One could easily think that these contractors could create bots to make their demands more salient and more visible to the general public and thus pressure Uber to respond positively to their demands.”

The pair has more research on bots and their role in social dialogue in the works. They are studying how strategies differ between the most important human and bot accounts, and they are looking into cyborg accounts, which mix automated tweets with human-written tweets.

Bots and cyborg accounts can occupy an ethical gray area, which makes being able to identify them important, Karahanna said.

“They may be used to spread fake news, but they may also be used to spread facts,” she said. “And I think that’s where the ethical line is. If they are spreading the truth, it’s not unethical.”

### 

The journal article can be viewed at http://amd.aom.org/content/early/2016/12/08/amd.2015.0121.abstract