On Thursday, June 12th, a Boeing 787 Dreamliner operated by Air India crashed, killing more than 260 people. But for certain internet searches, Google's AI Overview asserted (incorrectly) that the June 12th crash involved an Airbus airplane, built by Boeing's main rival. This was especially true for searches looking for a recent Airbus crash.
Google, for its part, does provide a warning: at the bottom of each AI Overview, in small print, appears the note that "AI responses may include mistakes." The reason for this is the enduring challenge of so-called "hallucinations." The large language models that power today's prominent AIs don't know true from false. Instead, they present an expected answer based on the data used to train them. Often the expected answer is the correct one, but it does not have to be. When an AI extrapolates from a pattern in the data to construct false claims, this is called a hallucination: the AI "sees" something that is not there. In this case, the AI Overview, at least for certain searches, hallucinated that the recent crash involved an Airbus plane.
Whatever one thinks of the adequacy of Google's disclaimer in this particular case, it is indicative of a broader "buyer beware" approach, which relies on consumers doing their due diligence. Don't want heavy metals in your protein powder? Buy tested versions. Don't want BPA leaching into your food? Buy glass containers instead. Want to use Disney+? Then agree to their extensive terms and conditions.
This follows from an ethics of individual responsibility in which consumers know themselves best and are therefore best positioned to make the right decision. From this perspective, more options are always preferable to limited choice. If you don't like what's on offer, then don't buy it. Unsurprisingly, the current Republican administration's anti-regulation takedown of paper straws, electric cars, and Energy Star appliances has often used the rhetoric of consumer choice.
But even on this thin framework, consumer protection cannot be wholly abandoned. The conditions under which someone can make a free and authentic choice must still hold. Here we can draw from medical ethics, which has thought extensively about choice in the context of consenting to medical treatment. Just like patients, consumers need to have capacity (the cognitive ability to make an informed decision), be informed, and consent voluntarily, without excessive pressure or inducement.
By this same token, a company should not be able to force or coerce someone into buying its products, nor should a company engage in deceptive practices. Buyers need to know exactly what they are choosing.
If we are true believers in the power of consumer choice and individual responsibility, then almost any product can be brought to market as long as it is labeled responsibly and buyers can freely choose. If someone wants to buy slimy, week-old lettuce rinsed in raw sewage, that sounds like a personal problem, as long as they know what they are getting into. Other harmful substances, like drugs, can also reasonably be put up for sale. We might draw the line at products that hurt other people, but as long as the only person at risk is the customer themselves, we can tell a tidy story about prioritizing the preferences of the consumer.
However, this neat story quickly encounters complications.
First, regulations are often not just about protecting the individual consumer. A more tightly regulated AI market would not only prevent Google users from being led astray, but would also protect Airbus's reputation. Environmental regulations, too, are not just about individual safety but about protecting the environment more broadly. The protection of others has long been a justification for curtailing individual rights and freedoms.
Second, the condition of “being informed” can be surprisingly hard to fulfill. Often customers will know very little about the product. How much does a company have to do to inform them? How big does a warning have to be? How much information should it provide? And even if companies provide the information, how much research is it reasonable to expect consumers to do? It seems impossible to have truly informed consent about a purchasing decision without in-depth knowledge of what is being purchased. Yet we clearly cannot each be expected to do this level of research before every marketplace decision we make. Consider harmful chemicals in food: third-party entities like government agencies or consumer watchdog groups can do the work of ensuring people are informed. This role seems especially important because sellers often have an incentive not to disclose information. From a strictly profit-oriented perspective, Boeing would probably be quite happy to have Airbus share the blame.
Third, consumer decisions may not always be truly voluntary. Poorer people especially can be squeezed into decisions they would prefer not to make because the alternative is unaffordable. Maybe they want to buy safer, more carefully raised and processed food, but simply cannot afford to do so. Is their cheap, factory-farmed chicken truly chosen voluntarily? Moreover, some products distort consumer decision-making. For addictive products like alcohol or social media, the decision consumers would make after careful consideration may well depart from the decision they make in a moment of weakness.
Appealing to personal choice and individual responsibility is a powerful tool in ethics. All else being equal, we should try to respect people and their decisions. And yet, fundamentally, the modern consumer does not have the power to stand as an equal to big corporations like Alphabet (the parent company of Google) or Boeing. We all face constraints on our time, our pocketbooks, and our research skills. The fully autonomous consumer making fully informed decisions is a good aspiration, but it is ultimately just a fantasy. And simply appealing to that standard represents a problematic ethical shortcut. Why worry about how accurate AI Overviews are? There's a disclaimer. If a consumer doesn't read it, that's on them.
By offloading questions of AI accuracy, food safety, and environmental impact onto the individual, we can unintentionally undermine the necessary ethical work of deciding how responsible businesses should operate in our modern society.