While digital technologies have undoubtedly created new goods and services and boosted productivity in some activities (e.g. Brynjolfsson and McAfee 2016), there is also evidence that productivity gains from these technologies have sometimes fallen well below expectations (e.g. Acemoglu et al. 2016). Either way, over the past 40 years, waves of digital technologies – including personal computers, numerically controlled machinery, robotics, and office automation – have increased inequality. This is both because some of these technologies, such as personal computers, have been highly complementary to more educated workers (Autor et al. 1998, Autor et al. 2003, Goldin and Katz 2008), and because many of these tools have been used to automate work, with unequal impacts on different types of workers (Autor et al. 2003, Acemoglu and Restrepo 2022a, 2022b).
Unfortunately, the currently predominant direction for AI emphasises automation, displacement of skilled labour, and diminished worker voice due to stepped-up monitoring and surveillance. There is an alternative, ‘human-complementary’ path that could contribute more to productivity growth and help reduce economic inequality. Getting onto this other path, however, would require substantial policy effort, in both the US and Europe.
Automation – the substitution of machines and, more recently, algorithms for tasks previously performed by humans – has been a constant since at least the beginning of the Industrial Revolution, and it has often widened inequality. The automation of blue-collar and office jobs using digital technologies has been an important driver of the rise in inequality since the 1980s (Acemoglu and Restrepo 2022a).
It is inevitable that AI systems will be used for some automation, for both technical and business-strategy reasons. On the technical front, a major barrier to pre-AI forms of automation has been that many service and production tasks demand flexibility, judgement, and common sense – which have historically required a human decision maker. Artificial intelligence, especially generative AI, can potentially master such tasks (Susskind 2021). A broad swath of computer security tasks that used to be performed by skilled human operators can now be performed by AI bots. Similarly, generative AI systems can write advertising copy, parse legal documents, transcribe physicians’ medical notes, and perform language translation. Currently, the technologies driving this new form of automation are immature, but they could contribute to sizeable productivity gains as costs fall and reliability improves.
Businesses may also choose machines over workers for reasons other than productivity. Automation appeals to managers who are seeking greater consistency and less opposition from organised or unorganised labour (Acemoglu and Johnson 2023).
But there is another path available. We can harness the power of generative AI to complement workers and create productive new tasks for them, instead of just automating work.
For much of the 20th century, the automation of existing work and the creation of new tasks proceeded in relative balance, and this balance was foundational to wage and employment growth, underpinning shared prosperity. New technologies displaced workers from existing tasks, but they also complemented humans, enabling higher-quality work and generating new tasks (Acemoglu and Restrepo 2018, Autor et al. 2022, Acemoglu and Johnson 2023).
Sometime around 1980, however, this balance was lost. While automation has maintained its pace or even accelerated over the ensuing four decades, the offsetting force of new task creation has slowed, particularly for workers without four-year college degrees (Acemoglu and Restrepo 2019, Autor et al. 2022). Non-college workers have been displaced from factories and offices by computerisation and, for blue-collar workers, also by import competition (Autor et al. 2013), but no equivalently well-paid new opportunities have emerged to attract these workers. As a result, non-college-educated workers are increasingly found in low-paid services such as cleaning, security, food service, recreation, and entertainment. These jobs are socially valuable, but they require little specialised education, training, or expertise, and hence pay poorly.
Today, advances in generative AI create both vast potential for human augmentation and sweeping scope for worker displacement through accelerating automation. We thus have a critical choice to make: either continue to double down on automation, or use these powerful tools in a pro-worker fashion.
Several recent studies provide ‘proof-of-concept’ examples of how generative AI can supplement expertise rather than displace experts. Peng et al. (2023) show that GitHub Copilot, a generative AI-based programming aid, made programmers 56% faster. Noy and Zhang (2023) find that workers improved the speed and quality of their writing output by using ChatGPT, with less-capable writers improving the most. Generative AI did not make the least-skilled writers quite as effective as the most-skilled writers, but it made all writers faster and substantially reduced the quality gap between the two groups. Finally, Brynjolfsson et al. (2023) show that customer service agents who receive background information on their cases from generative AI tools significantly improve their productivity. Again, novice workers experience the biggest gains, improving their performance three times faster than workers without these tools.
In all three cases, generative AI tools automate and augment human work simultaneously. The automation saves workers time: AI writes the first draft of computer code, advertising copy, and customer support responses. Augmentation happens because workers are called upon to apply expertise and judgement to intermediate between the AI’s suggestions and the final product – whether it is software, text, or customer support.
What could help move the US, Europe, and other economies onto the human-complementary path? In a new CEPR Policy Insight (Acemoglu et al. 2023), we suggest five potential elements for policy attention.
Tax system
The current US tax code places a heavier burden on firms that hire labour than on those that invest in algorithms to automate work (Acemoglu et al. 2020). In all countries, we should aim to create a more symmetric tax structure, in which marginal taxes on hiring (and training) labour and on investing in equipment and software are equated. This would shift incentives towards human-complementary technological choices by reducing the bias of the tax code towards physical capital over human capital.
Labour voice
The direction of AI will have profound consequences for all workers. Creating an institutional framework in which workers also have a voice would be a constructive step – and there is an important role for civil society in pressing for this to happen, including by articulating needs at the local and state level. At a minimum, government policy should restrict deployment of untested (or insufficiently tested) AI for applications that could put workers at risk – for example, in high-stakes personnel decisions (including hiring and termination) or in workplace monitoring and surveillance. Health and safety rules need to be updated accordingly.
Funding for more human-complementary research
Given that the current path of research has a bias towards automation, additional support for the research and development of human-complementary AI technologies could have a significant impact. It is best to focus on specific sectors and activities where opportunities are already abundant, including education, healthcare, and the training of modern craft workers. Just as the US Department of Defense orchestrated investments and competitions to foster the development of self-driving cars and dexterous robotics, the federal government should foster competition and investment that pairs AI tools with human expertise, aiming to improve work in vital social sectors.
AI expertise within the federal government
AI will touch every area of government investment, regulation, and oversight, including (but not limited to) transportation, energy production, labour conditions, healthcare, education, environmental protection, public safety, and military capabilities. Developing a consultative AI division within government (or at the EU level for Europe) that can advise the many agencies and regulators tackling these challenges would support more timely and effective decision-making at every level.
Technology certification
Government can encourage appropriate investments by advising on whether purportedly human-complementary technology is of sufficient quality to be adopted in publicly funded education and healthcare programmes. For this advice to be meaningful, experts need to be engaged and independent – i.e. they should not be directly or indirectly working for the tech companies. It is hard to attract talent to government or universities when the private sector is paying top dollar for expertise. This further strengthens the case for building a high-prestige, cross-cutting federal AI service.
There is no guarantee that the transformative capabilities of generative AI will be used for the betterment of work or workers. The bias of the tax code, of the private sector generally, and of the technology sector specifically, leans towards automation over augmentation. But there are also potentially powerful AI-based tools that can be used to create new tasks, boosting expertise and productivity across a range of skills.
Redirecting AI development onto the human-complementary path requires changes in the direction of technological innovation, as well as in corporate norms and behaviour. This needs to be backed up by appropriate governmental priorities and a broader public understanding of the stakes and the available choices. We know this is a tall order, but that makes focusing on what is needed all the more important.
References
Acemoglu, D, and S Johnson (2023), Power and progress: Our thousand-year struggle over technology and prosperity, PublicAffairs, Hachette.
Acemoglu, D and P Restrepo (2018), “The race between man and machine: Implications of technology for growth, factor shares, and employment”, American Economic Review 108(6): 1488–1542.
Acemoglu, D and P Restrepo (2019), “Automation and new tasks: How technology displaces and reinstates labor”, Journal of Economic Perspectives 33(2): 3–30.
Acemoglu, D and P Restrepo (2022a), “Tasks, automation, and the rise in US wage inequality”, Econometrica 90(5): 1973–2016.
Acemoglu, D and P Restrepo (2022b), “Demographics and Automation”, Review of Economic Studies 89(1): 1–44.
Acemoglu, D, D Autor, D Dorn, G Hanson, and B Price (2016), “Import competition and the great US employment sag of the 2000s”, Journal of Labor Economics 34(1, Pt. 2): 141–198.
Acemoglu, D, A Manera, and P Restrepo (2020), “Does the US tax code favor automation?”, NBER Working Paper no. 27052.
Acemoglu, D, D Autor, and S Johnson (2023), “Can we have pro-worker AI? Choosing a path of machines in service of minds”, CEPR Policy Insight No. 123.
Autor, D, L Katz, and A Krueger (1998), “Computing inequality: Have computers changed the labor market?”, Quarterly Journal of Economics 113(4): 1169–1213.
Autor, D, F Levy, and R Murnane (2003), “The skill content of recent technological change: An empirical investigation”, Quarterly Journal of Economics 118(4): 1279–1333.
Autor, D, D Dorn, and G Hanson (2013), “The China syndrome: Local labor market effects of import competition in the United States”, American Economic Review 103(6): 2121–2168.
Autor, D, C Chin, A Salomons, and B Seegmiller (2022), “New frontiers: The origins and content of new work, 1940–2018”, NBER Working Paper no. 30389.
Brynjolfsson, E, and A McAfee (2016), The second machine age: Work, progress, and prosperity in a time of brilliant technologies, W.W. Norton.
Brynjolfsson, E, D Li, and L Raymond (2023), “Generative AI at work”, NBER Working Paper No. 31161.
Goldin, C, and L Katz (2008), “The evolution of US educational wage differentials, 1890 to 2005”, Chapter 8 in The race between education and technology, Harvard University Press.
Noy, S, and W Zhang (2023), “Experimental evidence on the productivity effects of generative artificial intelligence”, Science 381(6654): 187–192.
Peng, S, E Kalliamvakou, P Cihon, and M Demirer (2023), “The impact of AI on developer productivity: Evidence from GitHub Copilot”, arXiv Working Paper no. 2302.06590.
Susskind, D (2021), “Technological unemployment”, forthcoming in J Bullock (ed.), The Oxford handbook of AI governance, Oxford University Press.