President Donald Trump has declared war on Twitter after the tech giant flagged his posts as being incendiary and misleading by placing public warning labels on some of his tweets this past week.
The commander in chief retaliated by signing an executive order aimed at limiting the broad legal protections enjoyed by social media companies.
Twitter contends the president’s tweets fall under a “public interest notice,” which is why it has added disclaimers to tweets it believes violate its policies.
The move has sparked debate among media, scholars and the public, leaving many to ask: Is this really the responsibility of a social media platform?
Dan Gillmor, an ASU Professor of Practice at the Walter Cronkite School of Journalism and Mass Communication and co-founder of the News Co/Lab, and Kristy Roschke, the News Co/Lab’s managing director, who oversees its day-to-day operations, weigh in on the topic. The Cronkite School initiative, launched in 2017, is aimed at helping people find new ways of understanding and interacting with news and information.
Question: What was your initial reaction to the announcement that Twitter was going to start fact-checking Donald Trump's tweets, and why do you think they did this?
Gillmor: My first reaction was that this was better than nothing — but if I were running Twitter, I'd have removed the Trump account long ago due to his repeated, willful violations of the terms of service. The only reason he's been able to continue posting so relentlessly is the weird presidential exception that Twitter executives instituted solely for this president.
Fact-checking is an odd solution in its own right. Given how often Trump tweets, and how often he is untruthful, this is going to keep people at Twitter busy, and I'm betting the company's handling of the Trump tweet stream will be, at best, inconsistent and contradictory. It will raise obvious questions, such as: Why is Trump the only person being fact-checked?
Q: Kristy, what precedent does this set?
Roschke: I suppose it sets a precedent that Twitter will hold the president to the same standards it purports to hold other users to when it comes to misleading or incendiary speech. But I’m not ready to assume this will become business as usual, as there are many complicating factors surrounding each new circumstance. And given the reaction and backlash, it’s unclear whether this will last or will be implemented with a consistency that is clear to users.
Q: Is it dangerous for a social media platform like Twitter, which is not a news outlet, to start performing this function?
Gillmor: Twitter has every right to take whatever action it deems appropriate with Trump's account, or anyone else’s. It's a private company.
The issue isn’t that Twitter is a platform for other people’s speech. It's that Twitter is such a dominant platform, one of a small number of giant services that, combined, host an enormous percentage of public speech in America and around the world. Do we really want them to be the editors of the internet? The broader danger is in the centralization taking place in our technology and communications, a consolidation of power into a few hands.
If all of the major platforms that host other people's speech decide jointly to remove individuals or organizations they consider beyond the pale, a process called “de-platforming,” we have to ask ourselves whether that's too much power. The issue isn't whether everyone has a right to speak; we do. It’s whether everyone has a right to speak in places where a lot of other people can hear them. We do not. But if a very few giant companies can make that decision, it’s troubling.
Q: Facebook CEO Mark Zuckerberg has spoken out and said private companies should not be the “arbiter of truth.” Does doing this become problematic or is it helpful to consumers trying to cut through all the information out there?
Roschke: I believe platform companies have a responsibility to mitigate the impact of misinformation that flows freely through their products. They can do this by prioritizing credible sources and content and downplaying or, when necessary, removing harmful content. Platform companies are already controlling what content users see via opaque algorithms. Labeling verifiably false information as such, or directing people to credible expert sources, does not make a platform an “arbiter of truth” any more than choosing the people and groups that appear most often in my feed does.
Q: Let me play devil's advocate for a second: Twitter does have a right of refusal and terms of service agreement with all of its users. With that said, do they still have the right to do this to Trump or any politician?
Gillmor: Of course they do. They’re a private company. They’re not obliged to publish what I say, or what you say, or what the president of the United States says.
Q: Can misinformation or disinformation be combated by doing this?
Roschke: Though the research is mixed, there is evidence that labeling information and using credible expert sources to correct misinformation on social media can be effective interventions in fighting misinformation. But no one method is going to solve this complex problem. It will take a concerted effort that involves things like tracking and thwarting disinformation campaigns, integrating and consistently (and transparently) implementing in-platform features to flag, remove and/or correct misinformation, and educating users on why and how these efforts work and how to verify information for themselves. We all have a role to play.
This issue continues to become more polarized, a trend that is not helped by the media reporting on it as such. Using typical conflict frames to describe misinformation naturally pits groups against one another. This is amplified by traditional media and then exacerbated by social media in an unending cycle.
My experience with students, however, has been that they are eager to better understand how and why they encounter the information that they do and what they can do about it. We have nuanced and cordial conversations, even when we disagree. I understand that a classroom conversation is very different from a social media conversation, but I believe that we can help each other in ways that don’t always seem possible when looking at the media coverage of misinformation.
Q: Trump's latest reaction was to sign an executive order aimed at limiting the broad legal protections enjoyed by social media companies. What kind of impact do you think this will have?
Gillmor: Those legal protections were not designed for the enjoyment of social media companies. They were put in place to help ensure robust freedom of expression online — for you, for me, for all of us.
I’m assuming, based on legal opinions from qualified scholars, that Trump can’t amend the law unilaterally. But his apparent dislike of what's called "Section 230" (part of a 1996 law) is shared by some powerful people in Congress who, regrettably from my perspective, want to force companies that host other people's speech to edit that speech in granular ways or face a torrent of lawsuits.
The consequences would be awful if they succeed. The current tech giants could probably live with it, because they have the resources. Who doesn’t? New entrants in the market. So if you want to entrench the current group of huge, wealthy companies, that's an effective strategy. By the way, the collateral damage would be breathtaking. Wikipedia, among other valuable sites and services we take for granted, has been clear that its ability to continue would be very much in doubt.