
Should AI Development Be Stopped?

photograph of Arnold Schwarzenegger's Terminator wax figure

It was a bit of a surprise this month when the so-called “Godfather of AI” Geoffrey Hinton announced that he was quitting Google after more than a decade helping to build the company’s AI research division. With his newfound freedom to speak openly, Hinton has expressed ethical concerns about the technology’s capacity to destabilize society and exacerbate income inequality. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told The New York Times this month. That such an authoritative figure within the AI field has now condemned the technology is a significant addition to a growing call for a halt to AI development. Last month more than 1,000 AI researchers published an open letter calling for a six-month pause on training AI systems more powerful than the newest ChatGPT. But does AI really pose such a risk that we ought to halt its development?

Hinton worries about humanity losing control of AI. He was surprised, for instance, when Google’s AI language model was able to explain to him why a joke he had made up was funny. He is also concerned that, despite AI models being far less complex than the human brain, they are quickly becoming able to perform complex tasks on par with a human. Part of his concern is that algorithms may come to seek greater control, and that he does not know how to control the AI that Google and others are building. This worry is part of the reason for the call for a moratorium, as the recent letter explains: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? […] Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Eliezer Yudkowsky, a decision theorist, recently suggested that a six-month moratorium is not sufficient, because he is concerned that AI will become smarter than humans. His concern is that building anything smarter than humans will certainly result in the death of everyone on Earth. Thus, he has called for completely ending the development of powerful AI and believes that an international treaty should ban such development, with its provisions enforced by military action if necessary. “If intelligence says that a country outside the agreement is building a GPU cluster,” he warned, “be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”

These fears aren’t new. In the 1920s and 1930s there were concerns that developments in science and technology were destabilizing society and would strip away jobs and exacerbate income inequality. In response, many called for moratoriums on further research – moratoriums that did not happen. Indeed, Hinton does not seem to think a moratorium is practical, since competitive markets and competing nations are already engaged in an arms race that will only compel further research.

There is also the fact that over 400 billion dollars was invested in AI in 2022 alone, meaning that it will be difficult to convince people to bring all of this research to a halt given the investment and the potentially lucrative benefits. Artificial intelligence has the capability to make certain tasks far more efficient and productive, from medicine to communication. Even Hinton believes that development should continue because AI can do “wonderful things.” Given these benefits, one response to the proposed moratorium insists that “a pause on AI work is not only vague, but also unfeasible.” Its authors argue, instead, that we simply need to be especially clear about what we consider “safe” and “successful” AI development to avoid potential missteps.

Where does this leave us? Certainly we can applaud the researchers who take their moral responsibilities seriously and feel compelled to share their concerns about the risks of development. But these kinds of warnings are vague, and researchers need to do a better job of explaining the risks. What exactly does it mean to say that you are worried about losing control of AI? Saying something like this encourages the public to imagine fantastical sci-fi scenarios akin to 2001: A Space Odyssey or The Terminator. (Unhelpfully, Hinton has even agreed with the sentiment that our situation is like the movies.) Ultimately, people like Yudkowsky and Hinton don’t exactly draw a clear picture of how we get from ChatGPT to Skynet. The fact that deep neural networks are so successful despite their simplicity compared to a human brain might be a cause for concern, but why exactly? Hinton says: “What really worries me is that you have to create subgoals in order to be efficient, and a very sensible subgoal for more or less anything you want to do is get more power—get more control.” Yudkowsky suggests: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” He adds that “A sufficiently intelligent AI won’t stay confined to computers for long.” But how?

These are hypothetical worries about what AI might do, somehow, if it becomes more intelligent than us. These concepts remain hopelessly vague. In the meantime, there are already real problems that AI is causing, such as predictive policing and discriminatory biases. There’s also the fact that AI is incredibly environmentally unfriendly: training a single AI model can emit five times more carbon dioxide than the lifetime emissions of a car. Putting aside how advanced AI might become relative to humans, it is already proving to pose significant challenges that will require society to adapt. For example, there has been a surge in AI-generated music recently, and this presents problems for the music industry. Do artists own the rights to the sound of their own voice, or does a record company? A 2020 paper revealed that a malicious actor could deliberately create a biased algorithm and then conceal this fact from potential regulators owing to the algorithm’s black-box nature. There are so many areas where AI is being developed and deployed that it might take years of legal reform before clear and understandable frameworks can be developed to govern these uses. (Hinton points to the capacity for AI to negatively affect the electoral process as well.) Perhaps this is a reason to slow AI development until the rest of society can catch up.

If scientists are going to be taken seriously by the public, the nature of the threat will need to be made much clearer. The most serious ethical issues involving AI, such as labor reform, policing, and bias, are significant not because of AI itself, but because AI will allow smaller groups to benefit without transparency and accountability. In other words, the ethical risks of AI are still mostly owing to the humans who control that AI, rather than to the AI itself. Humans can make great advancements in science, but often well in advance of understanding how that knowledge is best used.

In the 1930s, the concern that science would destroy the labor market only subsided when a world war made mass production and full employment necessary. We never addressed the underlying problem. We still need to grapple with the question of what science is for. Should AI development be dictated by a relatively small group of financial interests who can benefit from the technology while it harms the rest of society? Are we, as a society, ready to collectively say “no” to certain kinds of scientific research until social progress catches up with scientific progress?

Were Parts of Your Mind Made in a Factory?

photograph of a woman using a smartphone and wearing an Apple watch

You, dear reader, are a wonderfully unique thing.

Humor me for a moment, and think of your mother. Now, think of your most significant achievement, a long-unfulfilled desire, your favorite movie, and something you are ashamed of.

If I were to ask every other intelligent being that will ever exist to think of these and other such things, not a single one would think of all the same things you did. You possess a uniqueness that sets you apart. And the things that make you unique – your particular experiences, relationships, projects, predilections, desires – have accumulated over time to give your life its distinctive, ongoing character. They configure your particular perspective on the world. They make you who you are.

One of the great obscenities of human life is that this personal uniqueness is not yours to keep. There will come a time when you will be unable to perform my exercise. The details of your life will cease to configure a unified perspective that can be called yours. For we are organisms that decay and die.

In particular, the organ of the mind, the brain, deteriorates, one way or another. The lucky among us will hold on until we are annihilated. But, if we don’t die prematurely, half of us, perhaps more, will be gradually dispossessed before that.

We have a name for this dispossession. Dementia is that condition characterized by the deterioration of cognitive functions relating to memory, reasoning, and planning. It is the main cause of disability in old age. New medical treatments, the discovery of modifiable risk factors, and greater understanding of the disorder and its causes may allow some of us to hold on longer than would otherwise be possible. But so long as we are fleshy things, our minds are vulnerable.

*****

The idea that our minds are made of such delicate stuff as brain matter is odious.

Many people simply refuse to believe the idea. Descartes could not be moved by his formidable reason (or his formidable critics) to relinquish the idea that the mind is a non-physical substance. We are in no position to laugh at his intransigence. The conviction that a person’s brain and a person’s mind are separate entities survived disenchantment and neuroscience. It has an enviable durability we can only aspire to.

Many other people believe the idea but desperately wish it weren’t so. We fantasize incessantly about leaving our squishy bodies behind and transferring our minds to a more resilient medium. How could we not? Even the most undignified thing in the virtual world (which, of course, is increasingly our world) has enviable advantages over us. It’s unrottable. It’s copyable. If we could only step into that world, we could become like gods. But we are stuck. The technology doesn’t exist.

And yet, although we can’t escape our squishy bodies, something curious is happening.

Some people whose brains have lost significant functioning as a result of neurodegenerative disorders are able to do things, all on their own, that go well beyond what their brain state suggests they are capable of, and that would have been infeasible for someone with the same condition a few decades ago.

Edith has mild dementia but arrives at appointments, returns phone calls, and pays bills on time; Henry has moderate dementia but can recall the names and likenesses of his family members; Maya has severe dementia but is able to visualize her grandchildren’s faces and contact them when she wants to. These capacities are not fluky or localized. Edith shows up to her appointments purposefully and reliably; Henry doesn’t have to be at home with his leatherbound photo album to recall his family.

The capacities I’m speaking of are not the result of new medical treatments. They are achieved through ordinary information and communication technologies like smartphones, smartwatches, and smart speakers. Edith uses Google Maps and a calendar app with dynamic notifications to encode and utilize the information needed to effectively navigate day-to-day life; Henry uses a special app designed for people with memory problems to catalog details of his loved ones; Maya possesses a simple phone with pictures of her grandchildren that she can press to call them. These technologies are reliable and available to them virtually all the time, strapped to a wrist or snug in a pocket.

Each person has regained something lost to dementia not by leaving behind their squishy body and its attendant vulnerabilities but by transferring something crucial, which was once based in the brain, to a more resilient medium. They haven’t uploaded their minds. But they’ve done something that produces some of the same effects.

*****

What is your mind made of?

This question is ambiguous. Suppose I ask what your car is made of. You might answer: metal, rubber, glass (etc.). Or you might answer: engine, tires, windows (etc.). Both answers are accurate. They differ because they presuppose different descriptive frameworks. The former answer describes your car’s makeup in terms of its underlying materials; the latter in terms of the components that contribute to the car’s functioning.

Your mind is in this way like your car. We can describe your mind’s makeup at a lower level, in terms of underlying matter (squishy stuff (brain matter)), or at a higher level, in terms of functional components such as mental states (like beliefs, desires, and hopes) and mental processes (like perception, deliberation, and reflection).

Consider beliefs. Just as the engine is that part of your car that makes it go, so your beliefs are, very roughly, those parts of your mind that represent what the world is like and enable you to think about and navigate it effectively.

Earlier, you thought about your mother and so forth by accessing beliefs in your brain. Now, imagine that due to dementia your brain can’t encode such information anymore. Fortunately, you have some technology, say, a smartphone with a special app tailored to your needs, that encodes all sorts of relevant biographical information for you, which you can access whenever you need to. In this scenario, your phone, rather than your brain, contains the information you access to think about your mother and so forth. Your phone plays roughly the same role as certain brain parts do in real life. It seems to have become a functional component, or in other words an integrated part, of your mind. True, it’s outside of your skin. It’s not made of squishy stuff. But it’s doing the same basic thing that the squishy stuff usually does. And that’s what makes it part of your mind.

Think of it this way. If you take the engine out of your ‘67 Camaro and strap a functional electric motor to the roof, you’ve got something weird. But you don’t have a motorless car. True, the motor is outside of your car. But it’s doing basically the same things that an engine under the hood would do (we’re assuming it’s hooked up correctly). And that’s what makes it the car’s motor.

The idea that parts of your mind might be made up of things located outside of your skin is called the extended mind thesis. As the philosophers who formulated it point out, the thesis suggests that when people like Edith, Henry, and Maya utilize external technology to make up for deficiencies in endogenous cognitive functioning, they thereby incorporate that technology (or processes involving that technology) into themselves. The technology literally becomes part of them by reliably playing a role in their cognition.

It’s not quite as dramatic as our fantasies. But it’s something, which, if looked at in the right light, appears extraordinary. These people’s minds are made, in part, of technology.

*****

The extended mind thesis would seem to have some rather profound ethical implications. Suppose you steal Henry’s phone, which contains biographical data that isn’t backed up anywhere else. What have you done? Well, you haven’t simply stolen something expensive from Henry. You’ve deprived him of part of his mind, much as if you had excised part of his brain. If you look through his phone, you are looking through his mind. You’ve done something qualitatively different from stealing some other possession, like a fancy hat.

Now, the extended mind thesis is controversial for various reasons. You might reasonably be skeptical of the claim that the phone is literally part of Henry’s mind. But it’s not obvious this matters from an ethical point of view. What’s most important is that the phone is on some level functioning as if it’s part of his mind.

This is especially clear in extreme cases, like the imaginary case where many of your own important biographical details are encoded into your phone. If your grip on who you are, your access to your past and your uniqueness, is significantly mediated by a piece of technology, then that technology is as integral to your mind and identity as many parts of your brain are. And this should be reflected in our judgments about what other people can do to that technology without your permission. It’s more sacrosanct than mere property. Perhaps it should be protected by bodily autonomy rights.

*****

I know a lot of phone numbers. But if you ask me while I’m swimming what they are, I won’t be able to tell you immediately. That’s because they’re stored in my phone, not my brain.

This highlights something you might have been thinking all along. It’s not only people with dementia who offload information and cognitive tasks to their phones. People with impairments might do it more extensively (biographical details rather than just phone numbers, calendar appointments, and recipes). They might have more trouble adjusting if they suddenly couldn’t do it.

Nevertheless, we all extend our minds into these little gadgets we carry around with us. We’re all made up, in part, of silicon and metal and plastic. Of stuff made in a factory.

This suggests something pretty important. The rules about what other people can do to our phones (and other gadgets) without our permission should probably be pretty strict, far stricter than rules governing most other stuff. One might advocate in favor of something like the following (admittedly rough and exception-riddled) principle: if it’s wrong to do such-and-such to someone’s brain, then it’s prima facie wrong to do such-and-such to their phone.

I’ll end with a suggestive example.

Surely we can all agree that it would be wrong for the state to use data from a mind-reading machine designed to scan the brains of females in order to figure out when they believe their last period happened. That’s too invasive; it violates bodily autonomy. Well, our rough principle would seem to suggest that it’s prima facie wrong to use data from a machine designed to scan someone’s phone to get the same information. The fact that the phone happens to be outside the person’s skin is, well, immaterial.

Kill-Switch Tractors and Techno-Pessimism

photograph of combine harvester in field

On May 1st, CNN reported that Russian troops had stolen around $5 million worth of John Deere tractors and combines from the Russian-occupied Ukrainian city of Melitopol. At nearly $300,000 each, these pieces of farm equipment are extremely expensive, massive, and unbelievably high-tech. This last feature is particularly important for this story, which ended on a seemingly patriotic note:

John Deere had remotely kill-switched the tractors once it became aware of the theft, rendering them useless to the Russians.

A remote kill-switch that prevents invading Russian troops from using stolen Ukrainian goods is easy to read as a feel-good story about the power of creative thinking and the promising future of new technological inventions. But some are concerned that the background details give us more reason to be fearful than excited. Notably, activist and author Cory Doctorow, whose writing focuses primarily on issues in new and emerging technologies, wants to redirect the reader’s attention to a different aspect of the Russian-tractors story. When John Deere manufactured these particular tractors, the company had no idea that they would be sold to Ukraine and eventually stolen by Russian troops. Why, then, had the company installed a remote kill-switch in the first place?

What follows in the rest of Doctorow’s blog post is an eerie picture. John Deere’s high-tech farm equipment is capable of much more than being shut down from thousands of miles away. Sensors built into the combines and tractors collect reams of data about machine use, soil conditions, weather, and crop growth, among other things, and send this data back to John Deere. Deere then sells this data for a wild profit. Who does Deere sell the data to? According to Doctorow, the data was indirectly sold back to the farmers (who could not, until very recently, access it for free), bundled with a seed package the farmers have to purchase from Monsanto. Doctorow goes on:

But selling farmers their own soil telemetry is only the beginning. Deere aggregates all the soil data from all the farms, all around the world, and sells it to private equity firms making bets in the futures market. That’s far more lucrative than the returns from selling farmers to Monsanto. The real money is using farmers’ aggregated data to inform the bets that financiers make against the farmers.

So, while the farmers do benefit from the collection of their data — in the form of improved seed and farm equipment based on this data — they are also exploited, and rendered vulnerable, in the data collection process.

Recent exposés on the (mis)uses of big data paint a picture of this emerging technology as world-changing, and not necessarily in a good way. Doctorow’s work on this case, as well as the plethora of other stories about big data manipulation and privacy invasion, can easily lead one to a position sometimes referred to as “techno-pessimism.” Techno-pessimism is a generally bleak disposition toward technological advancement, one that assumes such advancements will worsen society, culture, and human life. The techno-pessimist is exactly what the name implies: pessimistic about the changes that technological “advancements” will bring.

Opposite the techno-pessimist is the techno-optimist. Nailing down a definition for this seems to be a bit trickier. Doctorow, who (at least once) identified as a techno-optimist himself, defines the term as follows: “Techno-optimism is an ideology that embodies the pessimism and the optimism above: the concern that technology could be used to make the world worse, the hope that it can be steered to make the world better.” Put in these terms, techno-pessimism seems akin to a kind of stodgy traditionalism that valorizes the past for its own sake; the proverbial old man telling the new generation to get off his lawn. Techno-optimism, on the other hand, seems common-sensical: for every bleak Black Mirror-esque story we hear about technology abuse, we know that there are thousands more instances of new technology saving and improving the lives of the global population. Yet tallying up technology’s uses versus its abuses is not sufficient to vindicate the optimist.

What can we say about our overall condition given the trajectory of new and emerging technology? Are we better-off, on the whole? Or worse?

What is undeniable is that we are physically healthier, better fed, and better protected from disease. Despite all the unsettling details of the John Deere kill-switch tractors, such machines have grown to enormous sizes because of the unimaginable amount of food that individual farms are able to produce. Because of advances in the technology of farming equipment and plant breeding, farmers are able to produce exponentially more, and to do so faster and with greater efficiency. Food can also now be bio-fortified to help get needed nutrients to populations that would otherwise lack them. These benefits are clearly not evenly distributed; many groups of people remain indefensibly underserved. Still, average living standards have risen quite dramatically.

It is also clear that some of the most horrifying misuses of technology are not unprecedented. While many gasp at the atrocity of videos of violent acts going viral on message boards, the human lust for blood sport is an ancient phenomenon. So, does techno-pessimism have a leg to stand on? Should the drive toward further technological advancement be heeded despite the worrying uses, because the good outweighs the bad?

In his work Human, All Too Human, the 19th century German philosopher Friedrich Nietzsche penned a scathing review of what he took to be the self-defeating methods by which Enlightenment humanity strove toward “progress”:

Mankind mercilessly employs every individual as material for heating its great machines: but what then is the purpose of the machines if all individuals (that is to say mankind) are of no other use than as material for maintaining them? Machines that are an end in themselves—is that the umana commedia?

While there is no reason to read him here as talking about literal human-devouring machines, one can imagine Nietzsche applying this very critique to the state of 21st-century technological advancement. We gather data crucial for the benefit of humanity by first prying individuals’ personal data from them, leaving them vulnerable in the hands of those who may (or may not) choose to use this information against them. The mass of data itself overwhelms the human mind; normal rational capacities are often rendered inert trying to think clearly in the midst of the flood. Algorithms pick through the nearly infinite heap at rates that far exceed the human ability to analyze, but much machine learning is still a black box of unknown mechanisms and outcomes. We are, it seems, hurtling toward a future where certain human capacities are unhelpful, to be exercised fruitlessly and inefficiently, or else abandoned in favor of the higher machine minds. At the very least, one can imagine the techno-pessimist’s position as something nearly Nietzschean: can we build these machines without ourselves becoming their fuel?

Zoom, Academic Freedom, and the No Endorsement Principle

photograph of empty auditorium hall

It was bound to be controversial: an American university sponsoring an event featuring Leila Khaled, a leader of the U.S.-designated terrorist group Popular Front for the Liberation of Palestine (PFLP), who participated in two hijackings in the early 1970s. But San Francisco State University’s September webinar has gained notoriety for something else: it was the first time that the commercial technology company Zoom censored an academic event. It would not be the last.

In November, faculty at the University of Hawaii and New York University organized webinars again featuring Khaled, ironically to protest the censoring of her September event. But Zoom deleted the links to these events as well.

Zoom has said that the webinars violated the company’s terms of service, which prohibit “engaging in or promoting acts on behalf of a terrorist organization or violent extremist groups.” However, it appears that the real explanation for Zoom’s actions is fear of possible legal exposure. Prior to the September event, the Jewish rights group Lawfare Project sent a letter to Zoom claiming that giving a platform to Khaled would violate a U.S. law prohibiting the provision of material support for terrorist groups. San Francisco State assured Zoom that she was not being compensated for her talk, nor was she in any way representing the PFLP, but a 2009 Supreme Court decision appears to support Lawfare’s broad interpretation of the law. In any case, the Khaled incidents highlight the perils of higher education’s coronavirus-induced dependence upon private companies like Zoom, Facebook, and YouTube.

The response to Zoom’s actions from academia has been unequivocal denunciation on academic freedom grounds. San Francisco State’s president, Lynn Mahoney, released a statement affirming “the right of faculty to conduct their scholarship and teaching free of censorship.” The American Association of University Professors sent a letter to NYU’s president calling on him to make a statement “denouncing this action as a violation of academic freedom.” And John K. Wilson wrote on Academe magazine’s blog that “for those on the left who demand that tech companies censor speech they think are wrong or offensive, this is a chilling reminder that censorship is a dangerous weapon that can be turned against progressives.”

How do Zoom’s actions violate academic freedom? Fritz Machlup wrote that,

“Academic freedom consists in the absence of, or protection from, such restraints or pressures…as are designed to create in minds of academic scholars…fears and anxieties that may inhibit them from freely studying and investigating whatever they are interested in, and from freely discussing, teaching or publishing whatever opinions they have reached.”

On this view, academic freedom is not the same as free speech: instead of being the freedom to say anything you like, it is the freedom to determine what speech is valuable or acceptable to be taught or discussed in an academic context. By shutting down the Khaled events, the argument goes, Zoom violated academic freedom by usurping the role of faculty in determining what content is acceptable or valuable in that context.

While there is surely good reason for Zoom to respect the value of academic freedom, it is also understandable that it would prioritize avoiding legal exposure. As Steven Lubet writes, “as [a] publicly traded compan[y], with fiduciary duties to shareholders, [Zoom was]…playing it safe in a volatile and unprecedented situation.” Businesses will inevitably be little inclined to take to the ramparts to defend academic freedom, particularly as compared to institutions of higher education explicitly committed to that value and held accountable by their faculty for failing to uphold it. The relative reluctance of technology companies to defend academic freedom is one important reason why in-person instruction must remain the standard for higher education, at least post-COVID.

A less remarked upon but equally important principle underlying the objections to Zoom’s actions is that giving speakers an academic platform is not tantamount to endorsing or promoting their views. Call this the “no-endorsement” principle. It is this idea that underwrites the moral and, perhaps, legal justifiability of inviting former terrorists and other controversial figures to speak on campus. It was explicitly denied in a letter signed by over eighty-six Pro-Israel and Jewish organizations protesting SFSU’s September event. The letter rhetorically asks, “what if an invitation to speak to a class—in fact an entire event—is an endorsement of a point of view and a political cause?” As Wilson noted, if that’s true, then freedom of expression on campus will be destroyed: “if every speaker on a college campus is the endorsement of a point of view by the administration, then only positions endorsed by the administration are allowed.”

Quite recently, the philosopher Neil Levy has added some intellectual heft to the denial of the “no-endorsement” principle. Levy writes that “an invitation to speak at a university campus…is evidence that the speaker is credible; that she has an opinion deserving of a respectful hearing.” Levy argues that in some cases, this evidence can be misleading, and that “when we have good reason to think that the position advocated by a potential speaker is wrong, we have an epistemic reason in favor of no-platforming.” Levy makes a good point: inviting a speaker on campus means something — it sends a message that the university views the speaker as worth listening to. But Levy seems to conflate being worth listening to and being credible. Even views that are deeply wrong can be worth listening to for a variety of reasons. For example, they might contain a part of the truth while being mostly wrong; they might be highly relevant because they are espoused by important figures or groups or a large proportion of citizens; and they might be epistemically useful in presenting a compelling if wrongheaded challenge to true views. For these reasons, the class of views that are worth listening to is surely much larger than the class of true views. Thus, it is not necessarily misleading to invite onto campus a speaker whose views one knows to be wrong.

The use of Zoom and similar technology in higher education contexts is unlikely to completely cease following the post-COVID return of some semblance of normalcy. But the Khaled incidents should make us think carefully about using communications technology provided by private companies to deliver education. In addition, the notion that giving a person a platform is not tantamount to endorsing their views must be defended against those who wish to limit academic discourse to those views held to be acceptable by university administrators.

Search Engines and Data Voids

photograph of woman at computer, Christmas tree in background

If you’re like me, going home over the holidays means dealing with a host of computer problems from well-meaning but not very tech-savvy family members. While I’m no expert myself, it is nevertheless jarring to see the family computer desktop covered in icons for long-abandoned programs, browser tabs that read “Hotmail” and “how do I log into my Hotmail” side-by-side, and the use of default programs like Edge (or, if the computer is ancient enough, Internet Explorer) and search engines like Bing.

And while it’s perhaps a bit of a pain to have to fix the same computer problems every year, and it’s annoying to use programs that you’re not used to, there might be more substantial problems afoot. According to a recent study from Stanford’s Internet Observatory, Bing search results “contain an alarming amount of disinformation.” That default search engine that your parents never bothered changing, then, could actually be doing some harm.

While no search engine is perfect, the study suggests that, at least in comparison to Google, Bing lists known disinformation sites in its top results much more frequently (including searches for important issues like vaccine safety, where a search for “vaccines autism” returns “six anti-vax sites in its top 50 results”). It also presents results from known Russian propaganda sites much more frequently than Google, places student-essay writing sites in its top 50 results for some search terms, and is much more likely to “dredge up gratuitous white-supremacist content in response to unrelated queries.” In general, then, while Bing will not necessarily present one only with disinformation – the site will still return results for trustworthy sites most of the time – it seems worthwhile to be extra vigilant when using the search engine.

But even if one commits to simply avoiding Bing (at least for the kinds of searches that are most likely to be connected to disinformation sites), problems can arise when Edge is made a default browser (which uses Bing as its default search engine), and when those who are not terribly tech-savvy don’t know how to use a different browser, or else aren’t aware of the alternatives. After all, there is no particular reason to think that results from different search engines should be different, and given that Microsoft is a household name, one might not be inclined to question the kinds of results their search engine provides.

How can we combat these problems? Certainly a good amount of responsibility falls on Microsoft themselves for making more of an effort to keep disinformation sites out of their search results. And while we might not want to say that one should never use Bing (Google knows enough about me as it is), there is perhaps some general advice that we could give in order to try to make sure that we are getting as little disinformation as possible when searching.

For example, the Internet Observatory report posits that one of the reasons why there is so much more disinformation in search results from Bing as opposed to Google is how the engines deal with “data voids.” The idea is the following: for some search terms, you’re going to get tons of results because there’s tons of information out there, and it’s a lot easier to weed out possible disinformation sites for these kinds of searches because there are so many more well-established and trusted sites that already exist. But there are also lots of search terms that return very few results, possibly because they are about idiosyncratic topics, or because the search terms are unusual, or just because the thing you’re looking for is brand new. It’s these relative voids of data about a term that make results ripe for manipulation by sites looking to spread disinformation.

For example, Michael Golebiewski and danah boyd write that there are five major types of data voids that can be most easily manipulated: breaking news, strategic new terms (e.g. when the term “crisis actor” was introduced by Sandy Hook conspiracy theorists), outdated terms, fragmented concepts (e.g. when the same event is referred to by different terms, for example “undocumented” and “illegal aliens”), and problematic queries (e.g. when instead of searching for information about the “Holocaust” someone searches for “did the Holocaust happen?”). Since there tends to be comparatively little information about these topics online, those looking to spread disinformation can create sites that exploit these data voids.

Golebiewski and boyd provide an example in which the term “Sutherland Springs, Texas” became a far more popular search than it had ever been before, in response to news reports of an active shooting there in November of 2017. However, since there was so little information online about Sutherland Springs prior to the event, it was more difficult for search engines to determine which of the new sites and posts should be sent to the top of the search results and which should be buried. This is the kind of data void that can be exploited by those looking to spread disinformation, especially when it comes to search engines like Bing that seem to struggle with distinguishing trustworthy sites from untrustworthy ones.

We’ve seen that there is clearly some responsibility on Bing itself to help stem the flow of disinformation, but we perhaps also need to be more vigilant about which sites we trust when it comes to the kinds of terms Golebiewski and boyd describe. And, of course, we could try our best to convince those who are less computer-literate in our lives to change some of their browsing habits.

How Much Should We Really Use Social Media?

Photograph of a person holding a smartphone with Instagram showing on the screen

Today, we live in a digital era. Modern technology has drastically changed how we go about our everyday lives. It has changed how we learn, for we can retrieve almost any information instantaneously, and teachers can engage with students through the internet. Money is exchanged digitally. Technology has also changed how we are entertained, for we watch what we want on our phones. But perhaps one of the most popular and equally controversial changes that modern technology has brought to society is how we communicate: social media. We live in an era where likes and retweets reign supreme. People document their every thought using platforms such as Facebook and Twitter. They share every aspect of their lives through platforms like Instagram. Social media acts as a way to connect people who never would have connected without it, but its effects can also be negative. Given all the controversy that surrounds social media, should we be using it as often as we do?

If you were to walk down the street, or go wait in line at a restaurant, or go to a sporting event, or go anywhere, you’d most likely see people on their phones. They’re scrolling through various social media platforms or sharing the most recent funny dog video. And this phenomenon is happening everywhere and all the time. Per Jessica Brown, a staff writer for BBC, three billion people, which is around 40 percent of the world’s population, use social media. Brown went on to explain that we spend an average of two hours per day on social media, which translates to half a million pieces of content shared every minute. How does this constant engagement with social media affect us?

According to Amanda Macmillan of Time Magazine, in a survey that aimed to gauge the effect that social media platforms have on mental health, Instagram performed the worst. Per Macmillan, the platform was associated with high levels of anxiety, depression, bullying, and other negative symptoms. Other social media platforms, but Instagram especially, can cause FOMO, or the “fear of missing out”: users scroll through their feed and see their friends having fun that they cannot experience. For women users, there is also the pressure of unrealistic body images. In the survey that ranked social media platforms and their effect on users, one participant explained that Instagram makes girls and women feel that their bodies aren’t good enough because other users add filters and alter their pictures to look “perfect,” or like the ideal image of beauty. The manipulation of images on Instagram can leave users with low self-esteem, anxiety, and a general sense of insecurity. The negativity that users feel because of what others post can create a toxic environment. Would the same effects occur if people spent less time on social media? If so, maybe users need to take a hard look at how much time they are spending. Or the platforms could more closely monitor the content being posted to prevent some of these effects on users’ mental health.

Although Instagram can have adverse effects on mental health, it can also create a positive environment for self-identity and self-expression. It can be a place of community-building and support as well. However, such positive outcomes depend on all users cooperating and working to make the digital space a positive environment. Based on the survey of social media platforms, though, this does not seem to be the case, and currently the pros of platforms like Instagram seem to be far outweighed by the cons.

Although Facebook and Twitter ranked better than Instagram in terms of their effects on users’ mental health, they can still have adverse effects as well. In a survey of 1,800 people, women were found to be more stressed than men, and a large factor in their stress was Twitter. However, it was also found that the more women used Twitter, the less stressed they became. Twitter’s dual role as both stressor and coping mechanism likely comes down to the type of content women were interacting with. In another survey, researchers found that participants reported lower moods after using Facebook for twenty minutes compared to those who just browsed the internet, though the weather that day (e.g., rainy or sunny) could also have been a factor in users’ moods.

Although social media can have adverse effects on the mental health of its users, it is also a great way to connect with others. It can act as a cultural bridge, bringing people from all across the globe together. It’s a way to share content that can be positive and to unite people with similar beliefs. With the positives and negatives in mind, should we change how much we are using social media? Or at least try to regulate it? People could take it upon themselves to simply try to stay off social media sites, although in the digital age we live in, that might be a hard feat to pull off. After all, too much of a good thing can be a bad thing, as the surveys on social media demonstrate. But perhaps we should be looking at the way we use social media rather than the time we spend on it. If users share positive content and strive to create a positive online presence and community, other users might not deal with the mental health issues that can arise after social media use. But then again, people should be free to post whatever content they want. At the end of the day, users have their own agenda for how they manage their social media. So perhaps it’s up to every individual to look at their own health and their social media usage, and regulate it based on what they see in themselves.

Is Google Obligated to Stay out of China?

Photograph of office building display of Google China

Recently, news broke that Google was once again considering developing a version of its search engine for China. Google has not offered an official version of its website in China since 2010, when it withdrew its services due to concerns about censorship: the Chinese government has placed significant constraints on what its citizens can access online, typically targeting information about global and local politics, as well as information that generally does not paint the Chinese government in a positive light. This system is often referred to as “The Great Firewall of China.” One notorious example of Chinese censorship involves searches for “Tiananmen Square”: if you are outside of China, chances are your search results will prominently include information concerning the 1989 student-led protest and subsequent massacre of civilians by Chinese troops, along with the famous picture of a man standing down a column of tanks; within China, however, search results return information about Tiananmen Square predominantly as a tourist destination, and nothing about the protests.

While the Chinese government has not lifted any of their online restrictions since 2010, Google nevertheless is reportedly considering re-entering the market. The motivation for doing so is obvious: it is an enormous market, and would be extremely profitable for the company to have a presence in China. However, as many have pointed out, doing so would seem to be in violation of Google’s own mantra: “Don’t be evil!” So we should ask: would it be evil for Google to develop a search engine for China that abided by the requirements for censorship dictated by the Chinese government?

One immediate worry is with the existence of the censorship itself. There is no doubt that the Chinese government is actively restricting its citizens from accessing important information about the world. This kind of censorship is often considered a violation of free speech: not only are Chinese citizens restricted from sharing certain kinds of information, they are prevented from acquiring information that would allow them to engage in conversations with others about political and other important matters. That people should not be censored in this way is encapsulated in the UN’s Universal Declaration of Human Rights:

Article 19. Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

The right to freedom of expression is what philosophers will sometimes refer to as a “negative right”: it’s a right to not be restricted from doing something that you might otherwise be able to do. So while we shouldn’t say that Google is required to provide its users with all possible information out there, we should say that Google should not actively prevent people from acquiring information that they should otherwise have access to. While the UN’s declaration does not have any official legal status, at the very least it is a good guideline for evaluating whether a government is treating its citizens in the right way.

It seems that we should hold the Chinese government responsible for restricting the rights of its citizens. But if Google were to create a version of their site that adhered to the censorship guidelines, should Google itself be held responsible, as well? We might think that they should not: after all, they didn’t create the rules, they are merely following them. What’s more, the censorship would occur with or without Google’s presence, so it does not seem as though they would be violating any more rights by entering the market.

But this doesn’t seem like a good enough excuse. Google would be, at the very least, complicit: they are fully aware of the censorship laws, how they harm citizens, and would be choosing to actively profit as a result of following those rules. Furthermore, it is not as if Google is forced to abide by these rules: they are not, say, a local business that has no other choice but to follow the rules in order to survive. Instead, it would be their choice to return to a market that they once left because of moral concerns. The fact that they would merely be following the rules again this time around does not seem to absolve them of any responsibility.

Perhaps Google could justify its re-entry into China in the following way: the dominant search engine in China is Baidu, which has a whopping 75% of the market share. Google, then, would be able to provide Chinese citizens with an alternative. However, unless Google is actually willing to flout censorship laws, offering an alternative hardly seems to justify their presence in the Chinese market: if Google offers the same travel tips about Tiananmen Square as Baidu does but none of its more important history, then having one more search engine is no improvement.

Finally, perhaps we should think that Google, in fact, really ought to enter the Chinese market, because doing so would fulfil a different set of obligations Google has, namely towards its shareholders and those otherwise invested in the business. Google is a business, after all, and as such should take measures to be as profitable as it reasonably can for those who have a stake in its success. Re-entering the Chinese market would almost certainly be a very profitable endeavour, so we might think that, at least when it comes to those invested in the business, Google has an obligation to do so. One way to think about Google’s position, then, is that it is forced to make a moral compromise: it has to make a moral sacrifice – in this case, knowingly engaging in censorship practices – in order to fulfil other obligations, namely those it has towards its shareholders.

Google may very well be faced with a conflict of obligations of this kind, but that does not mean that they should compromise in a way that favors profits: there are, after all, lots of ways to make money, but that does not mean that doing anything and everything for a buck is a justifiable compromise. When weighing the interests of those invested in Google, a company that is by any reasonable definition thriving, against being complicit in aiding in the online censorship of a quarter of a billion people, the balance of moral consideration seems to point clearly in only one direction.

The Digital Humanities: Overhyped or Misunderstood?

An image of the Yale Beinecke Rare Books Library

A recent series of articles appearing in The Chronicle of Higher Education has reopened the discussion about the nature of the digital humanities. Some scholars argue the digital humanities are a boon to humanistic inquiry and some argue they’re a detriment, but all sides seem to agree it’s worth understanding just what the scope and ambition of the digital humanities is and ought to be.


Law Enforcement Surveillance and the Protection of Civil Liberties

In a sting operation conducted by the FBI in 2015, over 8,000 IP addresses in 120 countries were collected in an effort to take down the website Playpen and its users. Playpen was a communal website that operated on the Dark Web through the Tor browser. Essentially, the site was used to collect images related to child pornography and extreme child abuse. At its peak, Playpen had a community of around 215,000 members and more than 117,000 posts, with 11,000 unique visitors a week.


On Drones: Helpful versus Harmful

During the Super Bowl halftime show this past month, Lady Gaga masterfully demonstrated one of the most distinctive mass uses of drones to date. At the conclusion of her show, drones powered by Intel were used to form the American flag and then were rearranged to identify one of the main sponsors of the show, Pepsi. This demonstration represented the artistic side of drones and one of the more positive images of them.


Overworking the Western World

There’s no question that technology has caused Americans and others around the world to work more. It’s not uncommon for a typical white-collar job in the United States to come with a company phone, company iPad and company computer. All these devices contribute to increased work and work-related stress. Carol Olsby, a member of the Society for Human Resource Management’s expertise panel, states, “Technology allows us to work anywhere, anytime.” This culture of overworking is prominent in the United States and worldwide, and has detrimental effects for mental health.


An APP(le) a Day: Can Smartphones Provide Smart Medical Advice?

I am not going to shock anyone by stating that we live in a time where distrust of government is high, where people believe that they need to ‘take back’ whatever they feel needs taking back. This opinion runs especially strong in matters surrounding healthcare, where people question a range of issues, including universal insurance, low-cost pharmaceuticals, the efficacy of particular medical tests, and autonomy as regards end-of-life (and other medical) decisions.


Apple and the iPolice

A few months ago, the San Bernardino Shooting, the deadliest terror attack on American soil since 9/11, took place when Syed Rizwan Farook and his wife Tashfeen Malik burst into an office party at Farook’s job, armed with semi-automatic weapons and dressed in black ski masks and tactical gear. Sixty-five to seventy bullets ripped through the crowd, seriously injuring 22 civilians and leaving 14 dead. Before being killed in a shootout with the police, the couple posted a message to Facebook pledging allegiance to the Islamic State. In the suspects’ destroyed car, investigators found an iPhone belonging to Farook. The battle between the FBI and Apple over the decryption of this device has brought this incident back into the news.


Forbidden Fruit: Apple, the FBI and Institutional Ethics

Your birthday, a pet’s name, or a nostalgic high school sports number: the composition of our iPhone passwords can seem so simple. But a recent case levied by the FBI against Apple has led to a conflict over the integrity of these passwords and sparked debate concerning privacy and security. A California court ordered Apple to produce a feature that would circumvent software preventing the FBI from accessing the phone of Syed Farook, who, along with his wife, committed the San Bernardino terrorist attacks. The couple died in a shootout following their heinous assault, and their electronics were seized by the FBI. They had smashed their cell phones and tampered with their laptop hard drive, but Farook’s work phone, an iPhone 5c, was found undamaged in his car.


Judged by Algorithms

The Chinese government announced in October that it is setting up a “social credit” system, designed to determine trustworthiness. Every citizen will be put into a database that uses fiscal and government information – including online purchases – to determine their trustworthiness ranking. The ranking draws on everything from traffic tickets to academic degrees to whether women have taken birth control. Citizens currently treat it like a game, comparing their scores to others in attempts to get the highest score in their social circle. Critics call the move “dystopian,” but this is only the latest algorithm designed to judge people without face-to-face interaction.


Pediatricians Back Away from Screen Use Guidelines

This piece originally appeared in the Providence Journal on December 9, 2015.

The American Academy of Pediatrics has long advised parents to keep children under age 2 away from video screens, and to limit older children to two hours of screen time per day. The thinking has been that children deluged with video are less likely to get the proper cognitive, social and emotional development that comes from non-video play and interaction with real human beings.

Volumes of research support the need to keep kids from becoming video sponges, regardless of whether that video comes from a television, video game or computer screen. Children who spend the most time in front of screens are generally less socially capable, less physically fit and less successful in school than their low-media peers.

That’s why it is so puzzling to see the AAP indicate it is backing away from those long-held guidelines regarding screen time and kids. An essay published in a recent AAP newsletter promises new guidelines to be released in 2016. The essay, written by three AAP doctors, points out that current AAP advice was published before the proliferation of iPads and explosion of apps aimed at young children. It goes on to argue, “In a world where screen time is becoming simply ‘time,’ our policies must evolve or become obsolete.” Another casual observation is that “media is just another environment.”

The AAP article further explains its planned updates, writing, “The public needs to know that the Academy’s advice is science-driven, not based merely on the precautionary principle.” That all sounds quite lofty. But ample, rigorous research already demonstrates that heavy screen exposure for kids is linked with a variety of social and cognitive difficulties. Precautionary advice is even more imperative in today’s media-saturated environment.

Beyond what can be learned from science-driven research, just check in with any second-grade teacher and ask which students have the most difficult time focusing in class. Odds are those struggling students get too much screen time at home. Ask high school guidance counselors which students are most depressed and anxious, and you will find those teens are more likely to be heavy users of social media and/or video games.

It is true that children are more saturated in media than ever, and parents have a near impossible task to control and referee media absorption by their kids. As the AAP reports, almost 30 percent of children “first play with a mobile device when they are still in diapers.” Teenagers, of course, are constantly absorbed in electronic devices.

It is also true, as the AAP points out, that the screen-time guidelines have been in place for many years. So, too, have been recommendations against teens smoking cigarettes, but nobody is suggesting teen smoking is now acceptable. Commonsense guidelines should not be considered “outdated,” no matter how old they are.

The AAP is a highly respected professional organization that surely wants what is best for children. The recent AAP essay correctly points out the importance of parents monitoring kids’ media consumption, keeping technology away from mealtime and out of bedrooms, along with other solid advice. But it is not helpful to suggest that the world is now so media driven that parents must concede defeat to the media tidal wave on kids. Instead, the AAP can give parents the backbone and rationale needed to limit screen time and, indeed, just turn the devices off. To say screen time is just “time” is a surrender to an “anything goes” mentality in which tough judgments are avoided.

Media use is, of course, only one of many factors that influence a child’s overall environment. It is clear, however, that media-created worlds don’t effectively replace healthy human interaction. Every minute a child is in front of a screen reduces time for more productive activities, such as playing with others, outdoor recreation, exercise or creative play. Thus, even when kids are consuming educational video content, they are missing out on more useful, human endeavors.

When the AAP issues its formal recommendations next year, here’s hoping the Academy doesn’t take a “What’s the use?” approach, and instead, gives parents the stern warnings needed to help raise well-adjusted kids who use media sensibly.

The Socioeconomic Divide of Dating Apps

Many are familiar with the popular dating app “Tinder,” best known for its quick “swiping” method of indicating interest in nearby users and creating “matches.” In an apparent effort to move away from its reputation as a convenient “hook-up” app and get closer to its original dating purpose, Tinder recently announced that profiles will now feature work and education information. The change doesn’t go as far as apps like Luxy, which screen out users with less education or lower incomes, but it does carry possibly problematic consequences. Tinder has marketed this change as a response to user requests for added profile details to help make more “informed choices.” Yet some are wary that this change comes with an ulterior motive.

Continue reading “The Socioeconomic Divide of Dating Apps”

I Am The Lorax, I Tweet for the Trees

Lovers of social media, rejoice! It appears that even the furthest expanses of nature are not beyond the range of wireless internet. This was underscored this morning, when Japanese officials announced that Mount Fuji, the country’s iconic, snow-topped peak, will be equipped with free Wi-Fi in the near future. Tourists and hikers alike will now be able to post from eight hotspots on the mountain, in a move likely to draw scorn from some environmental purists.

Continue reading “I Am The Lorax, I Tweet for the Trees”

The Ethical Navigator

For the directionally challenged among us, the advent of the GPS has proven revolutionary. Never before has it been so easy to figure out how to get somewhere and, with apps like Google Maps, how long it will take to do so. In this light, such apps provide a crucial source of information for millions of users. But what if these navigational apps could provide an ethical benefit, as well?

Continue reading “The Ethical Navigator”

Let’s Talk About…50 Shades of Grey (Part I)

50 Shades has swept audiences off their feet, selling 100 million copies worldwide and making $237.7 million in its global theatrical opening. In many ways, this success could have been predicted by its eerie similarity to other phenomenally profitable franchises (ahem…Twilight), but in other ways, what 50 Shades presents is entirely new. The series follows the love story of an unassuming bookworm girl and an older millionaire businessman—nothing new there. But the catch? He’s into BDSM: he likes violent sex.

At a cultural moment where sex is a hot-button issue, the timing of the movie release couldn’t be more perfect (read: lucrative). But as Emma Green pointed out in her article for the Atlantic, on film, “the Fifty Shades version of hot, kinky sex will become explicit and precise, no longer dependent upon the imaginations of readers.” With the book, interested folks could discreetly download it to their e-readers and choose whether they wanted to share their guilty fascinations. With the movie, it’s public.

So now that everyone knows you’re curious, let’s talk about some of the major debates surrounding the movie and the issues it brings to light.

One of the biggest arguments is the sheer volume of sex. With a full twenty-five minutes of pure sex scenes, one has to wonder what the motives are. Is it simply an empty shortcut to boost ticket sales? Or is the motive to open up the viewer to a more liberal approach to nudity and sex? One thing is clear: sex sells. But is there something inherently wrong with steamy sales tactics? Or, more specifically, is there something inherently wrong with using fetishized sex to sell?

This leads us to another point—one that was discussed in the Atlantic article: the movie’s representation of a particular community. The fact is that there really are people who are into BDSM. Some of them have come forward, claiming that the movie misrepresents, among other things, the level of emotion and consent that’s actually involved. The movie didn’t show the couple talking, going on dates, falling in love… the focus is on the sex. And the moments outside the bedroom are uncomfortable: he sells her possessions, pops up unannounced, and maintains a constant aura of creepiness…uh, no thanks. As Green put it, “the most troubling thing about the sex… isn’t the BDSM itself: It’s the characters’ terrible communication.”

Now, it could be that the film is riding the wave of sexual liberalism, giving the public insight into a taboo community and ultimately promoting openness. And some feminists will give it this. In her HuffPost article, feminist writer Soraya Chemaly says that “this not secret, not silent, non-judgmental openness is a feminist success.”

Many have also brought up the issue of class. This type of rare sexual preference looks glamorous in a marble-floored penthouse between two good-looking people. Basically, would it have been as appealing without the helicopter?

This conversation goes on and on, back and forth and then back again. If you’re interested in reading more about the discussions surrounding the movie, as well as my personal take, head over to PrindlePost.org. Personally, I haven’t read the book, but I saw the movie. I said it was just because everyone else was seeing it, but the truth is that I was genuinely curious…which I was 50 shades of embarrassed to admit.

Look Inside: The Moral Implications of Personal Choice

Back when Microsoft Windows XP and Intel’s Pentium 4 processor were the technology du jour, not much was known about the origins of the raw minerals integral to the technology we depend on daily, or about the horrendous labor that made possible the innovations of the day. Ethical concepts of blame and praise did not make a lot of sense to the consumer faced with no choice but to buy electronic devices manufactured by laborers cheated out of a living wage and from raw materials that fuel atrocities in the remote eastern provinces of the Democratic Republic of Congo (DRC). This is based on the understanding that one cannot be held morally accountable for something they have no control over.

The complexities of the ongoing conflict in eastern DRC, and the role of student activism in addressing conflict minerals, have been well documented here, here, and here. Fast forward to 2014: Intel’s CEO announced that all Intel microprocessor chips manufactured from this year onward will be free of raw materials originating from mines that bankroll atrocities in eastern DRC, and that every Intel product will be by 2016. Consumers are now faced with a choice. As the global market for commodities becomes increasingly competitive, consumers are inundated with choices, most of which present tough ethical challenges: whether or not to buy the warm sweatshirt that robs factory workers of safe working conditions, or the chocolate bar whose cocoa was harvested through child labor in West Africa. Even more familiar to the everyday American consumer is the choice between organic and conventionally grown produce. For me, it’s the choice between picking up a Styrofoam cup or bringing my own reusable coffee mug to the office each morning. These choices are personal, but our free will to act upon them implies a moral responsibility.

However, such choices are neither simple nor removed from other moral imperatives. To be fair, Intel’s efforts to manufacture “conflict-free” products, highlighted in the short video above, are commendable and praiseworthy; but to be sure, a reform at one point in a product’s manufacturing supply chain does not guarantee that the product becomes conflict free. To put this into perspective, imagine technology products manufactured from ethically sourced raw materials but through unfair labor practices, or in factories powered by mountaintop-removal coal. Many are the challenges we face in today’s world of ever-increasing global demands. Faced with this reality, and with the holiday shopping season fast approaching, do we have a real ethical choice over what we buy? What choices do we have? As consumers, how do our personal choices affect the lives of millions of people across the world and around us, and are we morally culpable? Weigh in with your comments below.