Iran, with technological support from Chinese companies, has assembled a powerful system of digital censorship and surveillance over the past 15 years. That infrastructure was recently employed – using face recognition, internet blackouts, and AI – to brutally crush protests, resulting in at least 7,000 deaths. On the other side of the world, two senior AI researchers, Zoë Hitzig at OpenAI and Mrinank Sharma at Anthropic, resigned, citing concerns about the AI business model and AI safety, respectively. Underlying these seemingly dissimilar events is a shared worry about the dangers of technology and who controls it.
Our stories of innovation and technological progress tend to focus on the broader public. We will have access to new mind-bending entertainment sources, life-changing medical technologies, and a vast array of time-saving devices, so the narrative goes. Yet the most important impact of technology may lie not in how it is enjoyed by everyday consumers, but in how it is wielded by powerful entities such as governments and large corporations – Iran has over 90 million people; OpenAI’s ChatGPT has over 700 million weekly users.
For many of us, especially in advanced economies, our lives are completely infused with technology. Communication with our friends, the news we read, our access to government services, the tools on which we work, recommendations for doctors and restaurants, our political engagement and activism – all are facilitated by either government- or corporate-controlled digital infrastructure. Often we are exchanging our personal details – birth date, favorite websites, anxieties – for access. Away from our computers, we can be tracked by our phone’s GPS or watched by our Ring cameras.
Increasingly, the tendency has been towards centralization and top-down control. The largest technology companies, such as Alphabet (Google), Microsoft, and Apple, have all embraced a platform approach, in which they provide digital real estate and tools that can then be “rented” by others. Likewise, the major large language model services, such as OpenAI’s ChatGPT, charge users or product developers for access to the underlying model. This has produced a digital landscape with very few owners and many borrowers. Even most e-books are merely licensed, rather than owned the way a paper copy is.
At the same time, countries are increasingly asserting digital sovereignty and their right to control digital infrastructure within their territorial domains. China’s Great Firewall is the most famous example, but nations such as Russia and Iran have also developed sophisticated ways to block and shut off internet access. Even the EU has come to embrace digital sovereignty, although its current concern is minimizing dependence on US tech companies.
This wraparound technological infrastructure – and the data it harvests – represents a great deal of potential control over our lives. This has its advantages. Powerful actors can secure data, fight cybercrime, and provide valuable tools and products. Digital surveillance can be used to fight terrorism. Advertising and data collection allow companies to provide their services at discounted rates.
However, these same powers greatly amplify a tendency already present in 20th-century politics: governments and corporations’ translation of power and knowledge into impact and influence. Their ability to track, monitor, and influence is unrivaled historically.
Given this reality, it is valuable to consider what protects us from the undue exercise of power.
At the most extreme is the nonexistence of that power. One way to prevent large corporations from wielding such awesome power, for example, is to simply break them up. Similarly, a weakened government is limited in its capacity to oppress (at the cost of being limited in its capacity to help).
Less extreme are various restraints or counterweights to the exercise of power. For corporations, this includes regulations, supervisory bodies, robust consumer and worker protection laws, and competitive alternatives. For governments, this includes free and fair elections, an independent judiciary, and the separation of powers. A well-functioning government that is responsive to the interests of the people is, of course, better positioned to impose meaningful regulation on corporations than a government that is weak, corrupt, or malfeasant.
Finally, there is mere discretion. Here it is simply a matter of internal restraint whether corporations or governments exercise certain powers. As governments and, in a sense, corporations build up their data-gathering and surveillance architectures, we increasingly rely on trust to maintain data integrity and prevent abuse. This is especially the case in countries like the US, with its relatively lean regulations, consumer protections, and workers’ rights. On the topic of AI, the US administration asserted in a December executive order that “AI companies must be free to innovate without cumbersome regulation.” Given the known role such technology can play in deepfakes, data gathering, face recognition, and even cybercrime, this puts a lot of trust in these companies.
Some philosophers emphasize what is called non-domination or republican freedom. The key feature here is that the arbitrary exercise of power is not possible (or is, at least, prohibited), as opposed to merely voluntarily withheld. They emphasize that a slave with a permissive master is still not free.
By the same token, domination represents a particular risk for a world with extraordinarily powerful governmental and corporate actors. We need not just worry about what they do, but what they could do. Good governance may help take the edge off, but can it eliminate the risk entirely? Not every country is blessed with good governance.
We will have to think deeply if we want a world that both contains such powerful actors and prevents potential abuses. Do the benefits they can provide through incredible resources and economies of scale outweigh the risk that they abuse their power? Is it too late to go back?
The accumulation of digital power and the weaponization of technology raise a more general point about the complexity of technological progress. Technological improvement and societal improvement need not walk in lockstep. Certainly, some innovations and new technologies are nearly uncontroversial good things: antibiotics, seatbelts, sanitation, braille.
Still, technological growth is not without its costs and risks. We cannot always see the full effect of new products and innovations – there are always unforeseen dangers and unanticipated applications. It is also good to remember that the effects of technologies may not fall evenly across a society. AI-fueled innovations that are good for landlords are not necessarily good for renters; those good for companies are not necessarily good for their workers. Technology can exacerbate existing power differentials in society. Nor can we foresee the combined effect of many different technologies and the often disorienting changes they can bring. How will large language model chatbots like ChatGPT, for example, affect how we learn, think, and socialize? It is worth considering what we lose, not just what we gain, in the pursuit of progress.