Facebook has dominated news headlines in recent weeks, after former employee-turned-whistleblower Frances Haugen leaked thousands of internal company documents, now referred to as the “Facebook Papers,” to a consortium of news organizations, followed by an announcement on Thursday that Facebook is changing its name to Meta.

The name change, a move Facebook CEO Mark Zuckerberg hopes will help rebrand the company and reposition it as a “metaverse” player, comes after Haugen’s legal counsel disclosed the Facebook Papers to the U.S. Securities and Exchange Commission. The documents resulted in an avalanche of stories shedding light on the inner workings of the tech giant on a range of issues, including suppressing deceptive content, tracking harms exacerbated by its platforms, ignoring employee warnings and exposing international communities to dangerous content.

Kirsten Martin, the William P. and Hazel B. White Center Professor of Technology Ethics at the University of Notre Dame’s Mendoza College of Business and director of the University’s Technology Ethics Center, says, “One person who benefits from the rebranding and corporate name change is Zuckerberg.”

“It’s dystopian,” she said. “If we don’t trust Facebook executives in the real world, why would we in the virtual world? If they cannot get a handle on the content on Facebook where the app was used to recommend an insurrection, how will those same executives get a handle on content in the virtual space they are proposing?”

Martin and her colleague Elizabeth M. Renieris, associate professor of the practice and founding director of the Notre Dame-IBM Technology Ethics Lab, emphasize that little in the Facebook leaks will surprise the academics, researchers and civil society organizations that have been working on these issues for decades, though more documents are expected to be released in the coming weeks.

With respect to Facebook-amplified “lawful but awful” content, Renieris says many of the company’s failures, especially its content moderation decisions, come down to scale: too much content to police in too many countries.

“Insufficient cultural, linguistic, contextual or other expertise results in an overreliance on AI and other technologies,” she said. “Is this a question of being too big to fail or just too big? Do we have to fix Facebook or break it up?

“Interestingly, the scale of the Facebook Papers is also overwhelming and suffers from its own problems of scale. There is much more than journalists can reasonably sort through. This could lead to more confusion and paralysis, especially on the part of lawmakers. There is no clear path to regulation. We may actually be further away from a solution.”

Martin, who wrote a case study on the ethics of Facebook’s content moderation algorithm, agrees that what is new is the sheer volume of raw data now available for scrutiny.

“People are talking about breaking Facebook up or fixing what they currently do,” Martin said. “What’s odd is that Facebook’s answer is, ‘No, we want to grow and take on the metaverse.’”

Martin says there need to be stronger whistleblower protections, or a professionalization of Facebook’s engineers, so that they have an obligation to report what they find.