On September 27th, the monthslong writers’ strike concluded. Writers worried that ChatGPT and similar text-generation technologies would lead to a future in which scripts are no longer written by humans but manufactured by machines. Certainly corporations find the technology promising, with Disney, Netflix, and other major entertainment companies vacuuming up AI specialists. Holding signs saying “AI? More like AI-Yi-Yi!” and expressing similar sentiments, writers fought to secure their place in a rapidly changing technological landscape. And when the smoke cleared and the dust settled, the Writers Guild of America had won a surprisingly strong contract. While it does not prohibit the use of AI, it does ensure that human writers will not become mere handmaidens of computerized text generators, editing and refining machine-generated content.
From the flood of garbage ChatGPT has sent into the internet to the multiplying complexities of intellectual property, artificial intelligence is, in many ways, a distinctly modern challenge. But it also follows a well-worn pattern: it continues a long legacy (and revives an old debate) regarding the labor market’s ability to absorb the impact of new technology.
John Maynard Keynes, the famous British economist, used the phrase “technological unemployment” to describe the mismatch in how quickly human labor could be replaced by technological innovation versus how quickly new uses for human labor emerged. For Keynes, this was essentially a lag-time problem caused by rapid technological shifts, and it remains controversial whether “technological unemployment” causes an overall drop in the employment rate or just a momentary hiccup. Regardless, for workers who lose their jobs due to the adoption of new technology, whether jobs are being created just as fast in some other corner of the economy is rather beside the point. Because of this, workers are often anxious, even adversarial, when marked technological change makes an appearance at their workplace.
The most famous example is the Luddites, the British machine-smashing protestors of the early 1800s. With some textile manufacturers all too willing to use new technologies such as the mechanized loom to replace and undercut skilled laborers, workers responded by destroying these machines.
“Luddite” has since become a term for a broader resistance to (or ignorance of) technology. But the explanation that workers who resist their employers’ adoption of new technologies are simply anti-technology, or are gumming up the gears of progress out of self-interest, is too simplistic. Has “progress” occurred just because there is a new product on the market?
New technology can have disparate effects on society, and few would assert that AI, social media, and smartphones deliver nothing but benefits. Even in cases where technological innovation improves the quality or eases the production of a particular good, it can be debatable whether meaningful societal progress has occurred. Companies are incentivized to simply pocket savings rather than passing on the benefits of technological advancement to their employees or customers. This represents “progress,” then, only if we measure according to shareholder value or executive compensation. Ultimately, whether technological advance produces societal progress depends on which particular technology we’re talking about. Lurking in the background are questions of who benefits and who gets to decide.
In part, the writers’ strike was over just this set of questions. Entertainment companies no doubt believe that they can cut labor costs and benefit their bottom line. Writers, however, can also benefit from this technology, using AI for editing and other purposes. It is the writers’ assertion that they need to be part of the conversation about how this technology – which affects their lives acutely – should be deployed, as opposed to a decision made unilaterally by company leadership. But rather than looking to ban or destroy this new technology, the writers were simply demanding guardrails to protect against exploitation.
In the same 1930 essay where he discussed technological unemployment – “Economic Possibilities for our Grandchildren” – Keynes raised the hope of a three-hour workday. Watching the startling efficiency gains of the early 20th century, the economist posited, naturally enough, that a glut of leisure time would soon be upon us. Why exactly this failed to materialize is contentious, but it is clear that workers have not been the prime beneficiaries of productivity gains, whether in leisure time or in pay.
As Matthew Silk observed recently in The Prindle Post, many concerns about technology, and especially AI, stem not from the technology itself but from its benefits accruing to just a few unaccountable people. Even if using AI to generate text instead of paying writers could save Netflix an enormous amount of money, the bulk of the benefits would ultimately flow to a relatively small number of corporate executives and major shareholders. Most of us would, at best, get more content at a slightly cheaper price. Netflix’s writers, of course, would lose their jobs entirely.
One take on this is that it is still good for companies to be able to adopt new technologies unfettered by their workers or by government regulation. For while it’s true that the writers themselves are on the losing end, if we simply crunch the numbers, perhaps shareholder gains and savings to consumers outweigh the firing of a few thousand writers. Alternatively, though, one might argue that even if there is a net societal benefit in terms of resources, it is swamped by the harms of inequality – that a deeply unequal society brings attendant problems, such as people being marginalized from democratic political processes, that are not adequately compensated for merely by access to ever-cheaper entertainment.
To conclude, let us accept, for the sake of argument, that companies should be free to adopt essentially whatever technologies they wish. What, then, should be done for the victims of technological unemployment? Society may have a pat response blaming art majors and gender studies PhDs for their career struggles, but what about the experienced writing professional who loses their job when their employer decides to replace them with a large language model?
Even on the most hard-nosed analysis, technological unemployment is ultimately bad luck. (The only alternative is to claim that workers are expected to predict and adjust to every major technological change in the labor market.) And many philosophers argue that society has at least some duty to help those suffering from things beyond their control. From this perspective, unemployment caused by rapid technological change should be treated more like a disaster, calling for both response and preparedness: it can be addressed after the fact with measures like robust unemployment insurance and assistance finding a new job, or pre-emptively through something like universal basic income (a possibility recently discussed by The Prindle Post’s Laura Siscoe).
Whatever your ethical leanings, the writers’ strike has important implications for every livelihood – few of us can be confident that our own work will remain untouched by technologies like AI.