As Elon Musk recently testified to U.S. governors, automation poses a serious threat to American jobs, a point echoed by New York Institute of Technology (NYIT) expert in technology and culture, Professor Kevin LaGrandeur.
In a USA Today op-ed and a newly published book, Surviving the Machine Age: Intelligent Technology and the Transformation of Human Work, LaGrandeur argues that intelligent technology is displacing not only manual labor but also middle-class and higher-level jobs. This displacement, he notes, extends to accountants, who face a significant risk of being replaced by intelligent technology within the next decade, as well as to other professionals such as journalists and technical writers.
“Technological unemployment is rampant in the United States, with intelligent machines displacing American workers every day,” says LaGrandeur. “Eighty-eight percent of manufacturing job losses over the past few years are a result of decreased demand for human labor. Machines have been a bigger killer of U.S. jobs than either immigration or outsourcing, and the problem is only growing worse.”
In addition to agreeing with Musk’s proposal for a universal basic income system, LaGrandeur stresses the need for other radical economic policy reforms.
“Relieving the effects of technological unemployment will require fundamentally new approaches to economic policy,” says LaGrandeur. “Potential reforms might include a universal basic income, as Musk has mentioned, or perhaps a shorter workweek and a mechanism for paying individuals when their personal data is used by technology firms to turn a profit.”
While Musk’s call to regulate intelligent machines may seem viable, LaGrandeur offers a more pragmatic perspective.
“Limits on the development of AI would likely result in malicious groups and outside nations finding more creative ways to violate regulations, but alternative forms of regulation may work,” says LaGrandeur. “For instance, scientists and governing bodies could develop protocols for building and testing AI, procedures for fail-safe controls built into AI, and methods for examining the reliability of those controls. Most importantly, governments could fund research into non-military forms of AI, so that benevolent innovations in the technology could offset the more dangerous ones.”