Seven leading companies building artificial intelligence – including Google, Amazon, Microsoft, Meta and ChatGPT-maker OpenAI – have agreed to a voluntary pledge to mitigate the risks of AI, according to an announcement by the White House. Among other pledged steps, the companies committed to allowing independent security experts to test their systems before release and to developing systems that alert the public when content is created by AI, through a method known as “watermarking.”

GW faculty experts are available to offer insight, analysis and commentary on responsible and trustworthy AI as well as efforts by lawmakers and the Biden Administration to regulate artificial intelligence.


David Broniatowski, an associate professor of engineering management and systems engineering, is GW’s lead principal investigator of TRAILS, a newly launched, NSF-funded institute that explores trustworthy AI. Broniatowski leads the institute’s third research arm, which evaluates how people make sense of the AI systems being developed and the degree to which those systems’ reliability, fairness, transparency and accountability lead to appropriate levels of trust. He can discuss the risks and benefits of AI development and what developing trustworthy AI means and looks like.

Broniatowski says watermarking is a useful tool, but there is no evidence that it will mitigate the risks of AI harms on its own.

Susan Ariel Aaronson, research professor of international affairs, is the director of GW’s Digital Trade and Data Governance Hub and co-PI of the TRAILS Institute. Under the TRAILS research initiative, Aaronson draws on her expertise in data-driven change and international data governance to lead one of the institute’s research arms, focused on participatory governance and trust. More broadly, her research covers AI governance, data governance, competitiveness in data-driven services such as XR and AI, and digital trade. She can discuss the latest efforts to regulate artificial intelligence.