Over the last year, artificial intelligence (AI) has seemingly taken center stage: whether the conversation is about how to use it, how to regulate it, how to build trustworthy systems, or how to combat disinformation. As AI increasingly becomes a part of everyday life, the fast-growing technology is making its way into the workplace and may be used in important human resources decisions, like the hiring and firing of employees. One professor at the George Washington University says there are important conversations we might not be having, but should be, around the use of AI in the workplace.

Vikram R. Bhargava is an assistant professor of strategic management and public policy at the GW School of Business. He is also co-editor of the Journal of Business Ethics's section on technology and business ethics. Bhargava's research centers on the distinctive ethics and policy issues that technology gives rise to in organizational contexts. His interests include technology addiction, mass social media outrage, autonomous vehicles, artificial intelligence, hiring algorithms, the future of work, and other issues in technology policy and ethics.

Reflecting on the conversations around ethics in artificial intelligence over the last year, Bhargava says there is an often overlooked ethical concern, especially as it pertains to incorporating AI into the workplace.

“Many of the most widely discussed ethical concerns regarding AI in the last year pertain to when it generates bad outcomes, errors, or technological mishaps: Were its decisions accurate? Were there unforeseen bad consequences? Were untoward racial or gender biases reflected in the judgments of the algorithms? These are all important concerns, of course,” Bhargava says.

“Yet, a largely overlooked area of concern pertains to the circumstances under which we shouldn't rely on AI, even when there aren't technological mishaps or bad outcomes. In ‘Hiring, Algorithms, and Choice: Why Interviews Still Matter’ (forthcoming, Business Ethics Quarterly), my co-author and I argue that even if the various bad outcomes are ultimately engineered away, it still doesn't settle the question of whether policymakers should, for example, permit managers to defer to hiring algorithms. This is not because managerial gut instincts are far superior—often they're not. Rather, there are important (and overlooked) ethical values created through our making choices, values that would be jeopardized were labor market choices abdicated to an algorithm. This is so no matter how sophisticated algorithms ultimately become at predicting employee fit and performance.”

If you would like to speak with Prof. Bhargava, please contact GW Media Relations Senior Specialist Cate Douglass at [email protected].