Newswise — Next week, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google, a case that could have major implications for how tech platforms host and promote content. The following Cornell University experts are available to discuss the case.

James Grimmelmann, professor of digital and information law, studies how laws regulating software affect freedom, wealth, and power. Grimmelmann is an expert in content moderation, search engine regulation, and online governance.

Grimmelmann says:

“The Gonzalez plaintiffs are asking the Supreme Court to distinguish between carrying content and ‘targeted recommendations.’ But an internet without targeted recommendations would be unusable.

“Every Google search result is a targeted recommendation. If Google were liable for every illegal piece of content that shows up in its search results, it could not exist.  

“Elon Musk is trying to reconfigure Twitter’s recommendation algorithm so that it does a better job of showing users interesting tweets. If the Supreme Court rules for the Gonzalez plaintiffs, Twitter would have to push everyone to a chronological timeline – or risk company-killing liability. 

“Critics of big social-media platforms like Google, Facebook, and Twitter have tried to argue that algorithmic recommendations are suspicious and dangerous – pointing to isolation, addiction, and echo chambers. But limiting Section 230 won’t fix recommendation algorithms. Instead, it will damage the rest of the internet, making everything uniformly worse.”

----

Sarah Dean, assistant professor of computer science, studies machine learning and can speak to how companies can adjust their recommendation systems.

Dean says:

“We have the technology to identify and avoid amplifying harmful content. Research by me and others suggests that this can be done without sacrificing the overall quality of recommendations. However, there are several challenges.

“How do we teach automated tools to identify harmful content? Advances in artificial intelligence and machine learning have given us powerful pattern recognition technologies, but they cannot recognize new patterns without supervision. Given the ever-changing flow of content on the internet, human judgments are necessary on an ongoing basis.

“Also, do algorithms have harmful long-term impacts? Algorithm design choices may interact with social dynamics to reinforce damaging behaviors or the spread of harmful content. Research shows that this is a concern even if individual recommendations are not harmful.

“The final point is about power: whose interests are represented when deciding what counts as harm? Technical solutions can only be as reliable and trustworthy as their specification.”

----

Gautam Hans is an associate clinical professor of law and associate director of the First Amendment Clinic. Hans analyzes the legal and policy issues implicating technology and civil liberties. He says Congress – not the courts – is responsible for making changes to the law.

Hans says:

“While Section 230 has its shortcomings, the Supreme Court is not the right venue to tinker with the statute – Congress is. Although various statutory proposals have been floated in recent years (and only one adopted), SCOTUS has never taken a Section 230 case in the 25 years since it was enacted.

“SCOTUS has little experience with this complicated federal statutory regime, often creates problems when it does tinker with such regimes (like the Voting Rights Act), and in the intervening 25 years a broad and still-evolving consensus has developed in the lower courts.

“Given the amount of misinformation about Section 230 and the hostility toward tech platforms, particularly from conservatives, we should all be concerned about what a broad ruling in this case might do to the internet. Without Section 230, platforms would be much more cautious about letting potentially actionable speech go live or remain up, which would likely limit online discussion of ‘controversial’ topics like Black Lives Matter or #MeToo.”

-30-