Meta announced Tuesday that it will add labels to images on its platforms that were generated by OpenAI and other third-party artificial intelligence tools. Meta already labels photorealistic images created with its own AI generator tool. The announcement comes as the 2024 election season heats up and amid growing concerns about AI misuse and the spread of misinformation.

Faculty experts at the George Washington University are available to offer insight, analysis and commentary on AI and misinformation.


Misinformation & Trustworthy AI

David Broniatowski, an associate professor of engineering management and systems engineering, is GW’s lead principal investigator of TRAILS, a newly launched, NSF-funded institute that explores trustworthy AI. He conducts research on decision making under uncertainty, collective decision making, the design and regulation of complex information flow systems, and how behavior spreads online. Broniatowski can discuss AI’s role in spreading misinformation, as well as efforts to combat misinformation online and the challenges those efforts face.

Neil Johnson, professor of physics, leads a new initiative in Complexity and Data Science that combines cross-disciplinary fundamental research with data science to attack complex real-world problems. He is an expert on how misinformation and hate speech spread online and on effective mitigation strategies. Johnson recently published new research on bad-actor AI activity online in 2024. The study predicts that daily bad-actor AI activity will escalate by mid-2024, increasing the threat that it could affect election results.

Patrick Hall is a teaching assistant professor of decision sciences at the GW School of Business, where he teaches classes on data ethics, business analytics, and machine learning. He also conducts research in support of NIST's AI risk management framework and is affiliated with leading fair lending and AI risk management advisory firms. He can discuss topics related to building trustworthy AI, bias in AI systems, and AI regulation efforts.

Politics & AI

David Karpf, associate professor of media and public affairs, focuses his work on the strategic communication practices of political associations in America, with a particular interest in Internet-related strategies. Two of his published books discuss how digital media is transforming the work of political advocacy and activist organizations. Karpf is an expert on artificial intelligence, internet politics, and political communication and can discuss the role of AI and misinformation in campaigns and elections.

Law & AI

Mary Anne Franks, Eugene L. and Barbara A. Bernard Professor in Intellectual Property, Technology, and Civil Rights Law, is an expert on civil rights, free speech, and technology. She is also an expert in criminal law, criminal procedure, First Amendment law, and family law. Franks is one of the authors of the SHIELD Act and can discuss the threats posed by AI deepfakes and the legal avenues available for protecting oneself.

Aram Gavoor, Associate Dean for Academic Affairs, Professorial Lecturer in Law, and Professor (by courtesy) at the Trachtenberg School of Public Policy & Public Administration, is an expert in administrative law, national security, and federal courts. Gavoor can speak to the risks and challenges of using AI on a national scale.