
The Ethical Tradeoffs of Medical Surveillance: Tracking, Compassion, and Moral Formation


Our ability to track doctors – their movements, their location, and everything they accomplish while on the job – is increasing at a rapid pace. Using RFID tags, hospitals can track not only patients and medical equipment but hospital staff as well, allowing administrators to monitor the exact amount of time that physicians spend in exam rooms or at lunch. On top of that, electronic health record (EHR) systems require doctors to meticulously record the time they spend with patients, demanding that they spend multiple hours a day charting. And more could be on the way: researchers are now working on technology that would track physician eye movement, allowing surveillance of how long a doctor looks at a patient’s chart or test results before making a diagnosis.
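
It is worth pausing on how little machinery this kind of tracking requires. The sketch below is purely illustrative (the badge-event schema, names, and data are all invented, and real hospital RFID middleware is more elaborate), but it shows how a stream of timestamped badge reads is enough to reconstruct exactly how long a physician spent in any given room.

```python
from datetime import datetime

# Hypothetical RFID badge events: (staff_id, room, event, timestamp).
# Real hospital tracking systems differ; this schema is invented for illustration.
events = [
    ("dr_ellis", "exam_3",     "enter", "2024-05-01 09:02"),
    ("dr_ellis", "exam_3",     "exit",  "2024-05-01 09:14"),
    ("dr_ellis", "break_room", "enter", "2024-05-01 12:30"),
    ("dr_ellis", "break_room", "exit",  "2024-05-01 12:52"),
]

def dwell_minutes(events):
    """Pair each 'enter' with the next 'exit' for the same person and room."""
    open_visits = {}  # (staff_id, room) -> time of most recent entry
    totals = {}       # (staff_id, room) -> accumulated minutes
    for staff, room, kind, ts in events:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if kind == "enter":
            open_visits[(staff, room)] = t
        elif (staff, room) in open_visits:
            start = open_visits.pop((staff, room))
            minutes = (t - start).total_seconds() / 60
            totals[(staff, room)] = totals.get((staff, room), 0) + minutes
    return totals

print(dwell_minutes(events))
# {('dr_ellis', 'exam_3'): 12.0, ('dr_ellis', 'break_room'): 22.0}
```

A few dozen lines like these, run over a day’s worth of badge reads, yield the minute-by-minute account of a physician’s workday that administrators now have at their fingertips.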

There are undeniable benefits to all of this tracking. Along with providing patients and their families with detailed examination notes, such detailed surveillance ensures that doctors are held to a meaningful standard of care even when they are tired or stressed. And workplace accountability is nothing new: employers have long used everything from punch clocks and supervisors to drug tests to make sure that their staff performs on the job.

Yet as the surveillance of physicians becomes ever more ubiquitous, the moral concerns multiply as well. While tracking typically does improve behavior, it can also stunt our moral growth. Take, for example, plagiarism detectors. If they are 100% accurate at detecting academic dishonesty, then they drastically reduce the incentive to cheat, making plagiarism clearly counterproductive for anyone who wants to pass their classes. Most students will then avoid plagiarism out of sheer self-interest. At the same time, though, perfect detection robs students of an opportunity to develop their moral character, relieving them of the need to practice doing the right thing even when they might not get caught.

On the other hand, while school might be an important place to build the virtues, hospitals clearly are not. We want our doctors to be consistently attentive and careful in how they diagnose and treat their patients, and if increased surveillance can ensure that, then it seems like a worthwhile tradeoff. Sure, physicians might miss out on a few opportunities for moral growth and formation, but this loss can be outweighed by not leaving it up to chance whether patients fall through the cracks. If more surveillance means that more patients get what they need, then so be it.

The problem, however, is that surveillance may not mean that hospitals are getting higher-quality care, but simply more of whatever they measure. As doctors become more focused on efficient visit times and mandatory record-keeping, evidence is piling up that technological innovations like EHRs actually decrease the amount of time that physicians spend with their patients. Physicians now spend over 4 hours a day updating EHRs, including over 15 minutes each time they are in an exam room with a patient. Many doctors must also continue charting late into the night, laboring after hours to stay on top of their work and burning out at ever-increasing rates. So, while patient records might be more complete than ever before, time with and for patients has dwindled.

All of this becomes particularly concerning in light of the connection between physician compassion and patient health. Research has shown that when healthcare providers have the time to show their patients compassion, medical outcomes not only improve, but unnecessary costs are reduced as well. At the same time, compassion also helps curtail physician burnout, as connecting with patients makes doctors happier and more fulfilled.

So maybe the moral formation of doctors is not irrelevant after all. If there is a strong link between positive clinical outcomes and doctors who have cultivated a character of compassion (doctors who are also less likely to burn out), then how hospitals and clinics form their physicians is of the utmost importance.

This, of course, raises the question of what all this means for how we track doctors. The most straightforward conclusion is that we shouldn’t give physicians so much to do that they have no time left for empathy. Under the current emphasis on efficiency, 56% of doctors already say that they do not have enough time for compassion in their clinical routines. If compassion plays a significant role in providing quality healthcare, then that obviously needs to change.

But an emphasis on compassion and the moral character of doctors raises even deeper questions about whether medical surveillance needs serious reform. It is extremely difficult to measure how compassionate doctors are being with their patients. Tracking a period of time, a pattern of eye movements, or even a doctor’s tone of voice might not truly reflect whether doctors are being empathetic and compassionate towards their patients, making it unclear whether more in-depth surveillance could ever ensure the kinds of personal interactions that are best for both doctors and their patients. And as we have seen, whatever metrics hospitals attempt to track are the ones that doctors will prioritize when organizing their time.

For this reason, it might be that extensive tracking will always subtly undermine the outcomes that we want, and that creating more compassionate healthcare requires a more nuanced approach to tracking physician performance. It may be possible to still have metrics that ensure all patients get a certain baseline of care, but doctors might also need more time and freedom to connect with patients in ways that can never be fully quantified in an EHR.

State of Surveillance: Should Your Car Be Able to Call the Cops?


In June 2022, Alan McShane of Newcastle, England, was heading home after a night of drinking and watching his favorite football club at the local pub when he clipped a curb and his airbags deployed. The Mercedes EQ company car he was driving immediately called emergency services, a feature that has come standard on the vehicle since 2014. A sobriety test administered by the police revealed that his blood alcohol content was well above the legal limit. He was fined over 1,500 pounds and lost his driving privileges for 25 months.

No one observed Mr. McShane driving erratically. He did not injure anyone or attract any attention to himself. Were it not for the actions of his vehicle, Mr. McShane might very well have arrived home safely and without significant incident.

Modern technology has rapidly and dramatically changed the landscape when it comes to privacy. This is just one case among many demonstrating that technology may also pose threats to our rights against self-incrimination.

There are compelling reasons to have technology of this type in one’s vehicle. It is just one more step in a growing trend toward making getting behind the wheel safer. In the recent past, people didn’t have cell phones to use in case of an emergency; if a person got in a car accident and became stranded, they would simply have to hope that another motorist would find them and be willing to help. Even the cell phone, however, isn’t always accessible during a crash: one’s phone may not be within arm’s reach, and during serious accidents a person may be pinned down and unable to move. Driving a car that immediately contacts emergency services when it detects an accident may often be the difference between life and death.
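
The triggering logic behind such a feature is, in principle, very simple. Mercedes does not publish its implementation, so the following is only a rough sketch with invented names: an onboard handler that places an automatic emergency call the moment airbag deployment is detected.

```python
# Illustrative sketch of automatic crash notification. All names here are
# invented; real automotive eCall systems are far more elaborate.
from dataclasses import dataclass

@dataclass
class CrashEvent:
    airbags_deployed: bool
    latitude: float
    longitude: float

def place_emergency_call(lat: float, lon: float) -> None:
    # Stand-in for the telematics unit dialing emergency services
    # and transmitting the vehicle's location.
    print(f"Calling emergency services: crash reported at ({lat}, {lon})")

def on_crash_event(event: CrashEvent) -> None:
    """Notify emergency services whenever the car detects a serious impact."""
    if event.airbags_deployed:
        place_emergency_call(event.latitude, event.longitude)

on_crash_event(CrashEvent(airbags_deployed=True, latitude=54.97, longitude=-1.61))
```

Notice what the logic never consults: the driver. The call goes out automatically, and that automaticity is exactly what the rest of this case turns on.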

Advocates of this technology argue that a person simply doesn’t have the right to drive drunk. It may be the case that under many circumstances a person is free to gauge the amount of risk that is associated with their choices and then choose for themselves the amount that they are willing to take on. This simply isn’t true when it comes to risk that affects others in serious ways.

A person doesn’t have the right to just cross their fingers and hope for the best — in this case to simply trust that they don’t happen to encounter another living being while driving impaired.

When people callously rely on luck when it comes to driving under the influence, living beings can die or be injured in such a way that their lives are involuntarily altered forever. Nevertheless, many people simply do not think about the well-being of others when they make their choices. Since this is the case, some argue that if technology can protect others from the selfish and reckless actions of those who can’t be bothered to consider interests other than their own, it should.

Others argue that we can’t let technology turn any country into a police state. Though such people agree that there are clear safety advantages to technology that can help a person in the event of an accident, this particular technology does more than that: it serves as a non-sentient witness against the driver. This radically changes the role of the car. A vehicle may once have been viewed as a tool operated by a person, a temporary extension of that person’s body, and cars used as tools in this way are often the property of their operators. Until now, a person’s own property hasn’t been in a position to turn them in; if a police officer wanted information from a person’s body, they’d need a search warrant. This technology removes the individual’s choice about whether to get the police involved or to implicate themselves in a crime.

This is far from the only technology we have to be worried about when it comes to police encroachment into our lives and privacy. Our very movement through our communities can be tracked by Google and potentially shared with police if we agree to turn location services on when using our phones.

Do we really have a meaningful expectation of privacy when all of the devices we use as extensions of our bodies are accessible to the police?

Nor is it only the police that have access to this information. In ways that are often unknown to the customer, corporations frequently collect information about them and then use it to motivate that customer to spend more and more money on additional products and services. Our technology isn’t working only for us; it’s also working for corporations and the government, sometimes in ways that pretty clearly run counter to our best interests. Some argue that a product on which a person spends their own hard-earned money simply shouldn’t be able to do any of this.

What’s more, critics argue that the only condition under which technology should be able to share important information with any third party is that the owner has provided fully free and informed consent. Such critics argue that what passes for consent in these cases is nowhere near what would be required to meet this standard.

Accepting a long list of terms and conditions written in legalese while searching for crockpot recipes at the grocery store isn’t consenting to allowing police access to knowledge about your location.

Turning a key in the ignition (or, more and more often, simply pressing an ignition button) does not constitute consent to abandon one’s rights against self-incrimination or to make law enforcement aware of one’s blood alcohol content.

Advocates of such technology argue in response that technology has always been used as important evidence in criminal cases. For instance, people must be careful what they do in public, lest it be captured on surveillance cameras. People’s telephone usage has been used against them since telephones were invented. Those who do not want technology used against them in court shouldn’t use technology in the commission of a crime.

In response, critics argue that, as technology develops, it has the potential to erode our Fourth Amendment rights against unlawful search and seizure and our Fifth Amendment rights against self-incrimination to the point of meaninglessness. Given our track record in this country, this erosion of rights is likely to disproportionately affect marginalized and oppressed populations. It is time to discuss principled places to draw defensible lines that protect important democratic values.

The Ethics of Digidog


On February 24, the New York City Police Department employed a robot “dog” to aid with the investigation of a home break-in. This is not the first time that police departments have used robots to respond to criminal activity. However, this robot, produced by Boston Dynamics and affectionately named “Spot,” drew the attention of the general public after New York Congresswoman Alexandria Ocasio-Cortez tweeted a critique of the decision to invest money in this type of policing technology. While Digidog’s main purpose is to increase safety for both officers and suspects during criminal investigations, some are concerned that implementing this type of technology sets a bad precedent, contributes to police surveillance, and diverts resources away from more deserving causes.

Is employing surveillance technologies like Digidog inherently bad? Is it ever okay to monitor citizens without their consent? And which methods should we prioritize when seeking to prevent or respond to crime?

In the United States, an estimated 50 million surveillance cameras watch us, many escaping our notice. The use of these cameras, formally called closed-circuit television (CCTV) cameras, has expanded dramatically with the interest in limiting crime in and around private property. Law enforcement often relies on video surveillance to identify potential suspects, and prosecutors may use the footage as evidence at trial. However, there has been some debate about whether the proliferation of CCTVs has really led to less crime, either through deterrence or through successful identification and capture. This lack of demonstrable positive effect is especially concerning given the pushback against this type of surveillance as potentially violating individuals’ privacy. In a 2015 Pew Research Center study, 90% of Americans ranked “not having someone watch you or listen to you without your permission” as very important or somewhat important. In a follow-up question, two-thirds of respondents ranked “being able to go around in public without always being identified” as important. Considering the clear importance of privacy to many Americans, increased surveillance might be considered a fundamental infringement on what many see as their right to privacy.

Digidog is a police surveillance tool. Equipped with a camera and GPS, the robot dog allows officers to monitor dangerous situations without risking their lives. However, many are skeptical that Digidog’s use will stay limited to responding to crime, worrying that it could soon become a tool to patrol the streets. In one particularly alarming example, MSCHF Product Studio armed its robot dog with a paintball gun and demonstrated how easily the gun could be fired from a remote location. If police departments began arming these robot dogs, the potential for violence and brutality stands to increase. As yet, no law enforcement agency has explicitly proposed such uses, and defenders of Digidog point to its limitations, such as its maximum travel speed, its use in many fields other than policing, and its lack of covert design. These features suggest that Digidog is not yet the powerful tool to patrol or surveil the general public that critics fear.

In terms of Digidog as an investment to combat crime, is it an unethical diversion of money, as Congresswoman Ocasio-Cortez has suggested? In the past year, calls to decrease or reallocate police funding have entered the mainstream. The decision to invest in Digidog could be considered unethical because its benefits to the public can’t justify its significant cost. Digidogs cost around $74,000 each, and since they are intended only for extreme and dangerous situations, their usage is rare and they do not appear to improve the life of the average individual. However, by serving as a weaponless first responder, Digidogs could save the lives of both officers and those suspected of engaging in criminal activity. Human error and reactivity can be removed from the equation by having robots surveil a situation in place of an armed officer.

Whether or not the Digidog represents an ethical use of public funds may turn on the legitimacy of investing in crime response rather than crime prevention. As previously noted, “Spot” is primarily used to respond to existing crime. Because of this, critics have suggested that these funds would be better aimed at programs that seek to minimize the occurrence of crime in the first place. Congresswoman Ocasio-Cortez’s tweet, for example, makes reference to those who suggested resources should go to school counseling instead. In fact, some criminology experts argue that investing in local communities and schools can drastically decrease the incidence of crime. Defenders of Digidog are quick to point out that the two goals are not mutually exclusive; it is possible to invest in both crime response and crime prevention, and we need not pit these two policy aims against one another. It is unclear in this case, however, whether funds comparable to those spent on Digidog were also directed at crime prevention.

This investment in Digidog could also be seen as unethical not just for its inefficiency in addressing crime, but for the lack of similar investment in other areas of social concern. In a reply to her original tweet, Ocasio-Cortez retorted, “when was the last time you saw next-generation, world class technology for education, healthcare, housing, etc. consistently prioritized for underserved communities like this?” At a time when so many have called to defund the police following centuries of police violence against Black people, it seems an affront to invest money in technologies designed to aid arrest rather than to address systemic injustice. Highlighting this disparity in funding shows how urgent social needs are unjustly pushed to the back of the line. Again, defenders of Digidog might respond that this comparison is a false one, and that technology can be employed for both policing and social needs.

Together, these concerns mean that Digidog’s use will continue to be met with skepticism, if not hostility, by many. As police and surveillance technology develop, it remains especially important that we weigh the value these new tools offer against their costs to both our safety and our privacy.

Sensorvault and Ring: Private-Sector Data Collection Meets Law Enforcement


Concerns over personal privacy and security are amplifying as more information surfaces about the operations of Google’s Sensorvault, Amazon’s Ring, and FamilyTreeDNA.

Sensorvault, Google’s enormous database, stands out from the group as a major player in the digital-profiling arena. Since at least 2009, it has been amassing data and constructing individual profiles for all of us based on vast information about our location history, hobbies, race, gender, income, religion, net worth, purchase history, and more. Google and other private-sector companies argue that amassing digital dossiers yields immense improvements in their efficiency and profits. However, the collection of such data also raises thorny ethical concerns about consent and privacy.

With regard to consent, the operation of Sensorvault is morally problematic for three main reasons. First, the minimum age required for managing your own Google account in North America is 13, meaning that Google can begin constructing the digital profiles of children, despite the likelihood that they are unable to comprehend the Terms of Service agreement or its implications. Their digital files are thus created prior to the (legal) possibility of providing meaningful consent.

Second, the dominance of Google’s search engine, Maps, and other services is making it increasingly less feasible to live a Google-free life. In the absence of a meaningful exit option, the value of supposed consent is significantly diminished. Third, as law professor Daniel Solove puts it, “Life today is fueled by information, and it is virtually impossible to live as an Information Age ghost, leaving no trail or residue.” Even if you avoid all Google services, your digital profile can and will still be constructed from other data points about your life, such as income level or spending habits.

The operation of Sensorvault and similar databases also raises moral concerns about individual privacy. Materially speaking, the content in Sensorvault puts individuals at extreme risk of fraud, identity theft, public embarrassment, and reputational damage, given the detailed psychological profiles and life patterns contained in the database. Google’s insistence that protective safeguards are in place is not particularly persuasive in light of recent security breaches, such as the theft of Social Security numbers and health information of military personnel and their families from a United States Army base.

More abstractly, these data-collection agencies represent an existential threat to our private selves. Solove argues in his book “The Digital Person” that the digital dossiers amassed by private corporations are eerily reflective of the files Big Brother keeps on citizens in 1984. He also draws a comparison between the secrecy surrounding these profiles and The Trial, in which Kafka warns of the dangers of losing control over our personal information and of enabling bureaucracies to make decisions about our lives without our knowledge.

The stakes are growing higher still as Google, Amazon, and FamilyTreeDNA move beyond using data collection for their own purposes and begin collaborating with law enforcement agencies. These private companies attempt to justify the practice on the grounds that it is a boon to policing and effectively helps to solve and deter crime. However, even if you are sympathetic to that justification, there are still significant ethical and legal reasons to be concerned by the growing relationship between data-collecting private-sector companies and law enforcement agencies.

In Google’s case, the data in Sensorvault is being shared with the government as part of a new policing mechanism. American law enforcement agencies have recently started issuing “geofence warrants,” which grant them access to the digital trails and location patterns left by individuals’ devices in a specific time and area, or “geofence.” Geofence warrants differ significantly from traditional warrants because they permit law enforcement to obtain access to Google users’ data without probable cause. According to one Google employee, “the company responds to a single warrant with location information on dozens or hundreds of devices,” thus ensnaring innocent people in a digital dragnet. As such, geofence warrants raise significant moral and legal concerns in that they circumvent the Fourth Amendment’s protection of privacy and its probable-cause search requirement.
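
To see why these warrants sweep so broadly, it helps to picture the query itself. The sketch below is a simplified illustration (Sensorvault’s actual schema and internals are not public; the records and function names here are invented): any device whose stored location falls inside the requested radius during the requested time window is returned, whether or not its owner is suspected of anything.

```python
from math import radians, sin, cos, asin, sqrt

# Invented location-history records: (device_id, lat, lon, unix_time).
# Sensorvault's real format is not public; this is for illustration only.
records = [
    ("device_A", 40.7130, -74.0060, 1_600_000_100),
    ("device_B", 40.7132, -74.0059, 1_600_000_500),
    ("device_C", 40.7800, -73.9700, 1_600_000_300),  # elsewhere in the city
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def geofence_query(records, lat, lon, radius_m, t_start, t_end):
    """Return every device seen inside the fence during the time window."""
    return sorted({
        dev for dev, rlat, rlon, t in records
        if t_start <= t <= t_end and haversine_m(lat, lon, rlat, rlon) <= radius_m
    })

# Every device within 150 meters of a scene during a ten-minute window:
print(geofence_query(records, 40.7131, -74.0060, 150, 1_600_000_000, 1_600_000_600))
# ['device_A', 'device_B'] -- both returned, regardless of individual suspicion
```

Narrowing that list to an actual suspect happens only after the dragnet has already been cast.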

Amazon’s Ring (a home surveillance system) is also engaged in morally problematic relations with law enforcement. Ring has partnered with hundreds of police departments in the US to provide them with data from customers’ home security systems. Reports suggest that Ring has shared the locations of customers’ homes with law enforcement, is working on enabling police to automatically activate Ring cameras in an area where a crime has been committed, and that Amazon is even coaching police on how to gain access to users’ cameras without a warrant.

FamilyTreeDNA, one of the country’s largest genetic-testing companies, is also putting consumers’ privacy and security at risk by providing its data to the FBI. FamilyTreeDNA has offered DNA testing for nearly two decades, but in 2018 it willingly granted law enforcement access to millions of consumer profiles, many of which were collected before users were aware of the company’s collaboration with law enforcement. While police have long used public genealogy databases to solve crimes, FamilyTreeDNA’s partnership with the FBI marks one of the first times a private-sector database has willingly shared its consumers’ sensitive information with governmental agencies.

Several strategies might mitigate the concerns these companies raise regarding consent, privacy, and law-enforcement collaboration. First, the US ought to consider adopting safeguards similar to the EU’s General Data Protection Regulation, which, for example, sets the minimum age of consent for Google users at 16 and stipulates that Terms of Service “should be provided in an intelligible and easily accessible form, using clear and plain language and it should not contain unfair terms.” Second, all digital- and DNA-data-collecting companies should undergo strict security testing to protect against theft, fraud, and the exposure of personal information. Third, given the extremely private and sensitive nature of such data, regulations ought to be enacted to prevent private companies like FamilyTreeDNA from sharing profiles they amassed before publicly disclosing their partnerships with law enforcement. Fourth, the House Committee on Energy and Commerce should continue to monitor and inquire into these companies, as it did in its 2019 letter to Google. There needs to be greater transparency regarding what data is being stored and for what purposes. Finally, the Fourth Amendment must become a part of the mainstream conversation regarding the amassing of digital dossiers, DNA profiles, and law enforcement’s access to such data without probable cause.

Government Leakers: Liars, Cowards, or Patriots?

James Comey, former Director of the FBI, recently testified before the Senate Intelligence Committee regarding conversations he had with President Trump. The public knew some of the details of these conversations before Comey’s testimony because he had written down his recollections in memos, portions of which were leaked to the press. We now know that Comey himself was responsible for leaking the memos; he reportedly did so to force the Department of Justice to appoint a special prosecutor. His gamble was successful: Robert Mueller was appointed special counsel to lead the investigation into possible collusion between the Trump campaign and the Russian government.

After the testimony, President Trump blasted Comey as a leaker. He tweeted, “Despite so many false statements and lies, total and complete vindication…and WOW, Comey is a leaker!” Trump later tweeted that Comey’s leaking was “Very ‘cowardly!’” Trump’s antipathy towards leaking makes sense against the background of the unprecedented number of leaks during his term in office. It seems as if there is a new leak every day. Given the politically damaging nature of these leaks, supporters of the president have been quick to condemn them as endangering national security and to call for prosecutions of the leakers. Just recently, NSA contractor Reality Winner was charged under the Espionage Act for leaking classified materials to the press. However, it is worth remembering that, during the election campaign, then-candidate Trump praised WikiLeaks on numerous occasions for its release of the hacked emails from the Democratic National Committee.

A cynical reading of this chain of events suggests that the stance government figures take towards the ethics of leaking is motivated purely by politics: leaking is good when it damages a political opponent; leaking is bad when it damages a political ally. Sadly, this may be an accurate analysis of politicians’ shifting stances towards leakers. But it does not answer the underlying question of whether leaking can ever be morally permissible and, if so, under what circumstances.

Approaches may differ, but I think it is reasonable to ask this question in a way that assumes that government leaking requires special justification, for two reasons. First, the leaking of classified information is almost always a violation of federal law. It violates the Espionage Act, which sets out penalties of imprisonment for individuals who disclose classified information to those not entitled to receive it. As a general moral rule, individuals ought to obey the law unless a special justification exists for its violation. General conformity to the law ensures the order and stability necessary to the safety, security, and well-being of the nation. More specifically, the Espionage Act is intended to protect the nation’s security: leaking classified information to the press puts our intelligence operations at risk by potentially exposing our sources and methods to hostile foreign governments.

Second, as Stephen L. Carter of Bloomberg points out, “leakers are liars,” and there is a strong moral presumption against lying. Carter provides a succinct explanation: “The leaker goes to work every day and implicitly tells colleagues, ‘You can trust me with Secret A.’ Then the leaker, on further consideration, decides to share Secret A with the world. The next day the leaker goes back to work and says, ‘You can trust me with Secret B.’ But it’s a lie. The leaker cannot be trusted.”

The strong presumption against lying flows from the idea that morality requires that we do not make an exception of ourselves in our actions. We generally want and expect others to tell us the truth; we have no right ourselves, then, to be cavalier with the truth when speaking with others. Lying may sometimes be justified, but it requires strong reasons in its favor.

Ethical leaking might be required to meet two standards: (A) the leak is intended to achieve a public good that overrides the moral presumptions against lying and law-breaking, and (B) leaking is the only viable option for achieving that public good. What public good does leaking often promote? Defenders of leaks often argue that leaking reveals information the public needs in order to hold its leaders accountable for wrongdoing. Famous leaker Edward Snowden, for example, revealed information concerning the surveillance capabilities of the National Security Agency (NSA); arguably, the public needed this information to have an informed debate on the acceptable limits of government surveillance and its relation to freedom and security.

Since leaking often involves lying and breaking the law, it must be considered whether other options exist, besides leaking, to promote the public good at issue. Government figures who criticize leakers often claim that leakers have avenues within the government to protest wrongdoing. Supporters of Snowden’s actions pointed out, however, that legal means to expose the NSA’s surveillance programs were not open to him: as a contractor, he did not have the same whistleblower protections as government employees, and the NSA’s programs were considered completely legal by the US government at the time. Leaking appeared to be his only viable option for making the information public.

Each act of leaking appears to require a difficult moral calculation. How much damage will my leaking do to the efforts of the national security team? How important is it for the public to know this classified information? How likely is it that I could achieve my goals through legal means within the government system? Though a moral presumption against leaking may exist—you shouldn’t leak classified information for just any old reason—leaking in the context of an unaccountable government engaged in serious wrongdoing has been justified in the past, and I expect we will see many instances in the future where government leaks will be justified.

Law Enforcement Surveillance and the Protection of Civil Liberties

In a sting operation conducted by the FBI in 2015, over 8,000 IP addresses in 120 countries were collected in an effort to take down the website Playpen and its users. Playpen was a communal website that operated on the Dark Web through the Tor browser. Essentially, the site was used to collect images of child pornography and extreme child abuse. At its peak, Playpen had a community of around 215,000 members and more than 117,000 posts, with 11,000 unique visitors a week.

Continue reading “Law Enforcement Surveillance and the Protection of Civil Liberties”

Richard Mosse and the Ethics of Photographing Crisis

The ongoing Syrian refugee crisis has raised ethical concerns surrounding immigration, borders, and terrorism. However, one less-discussed ethical dilemma surrounding refugees is that of photojournalism and art. Irish photographer Richard Mosse made headlines last week after publishing photographs of refugee camps taken with military-grade thermal imaging cameras. The photographs are extremely detailed and might even convey a sense of voyeurism.

Continue reading “Richard Mosse and the Ethics of Photographing Crisis”

Making Sense of Trump’s Wiretapping Accusations

At 3:35am on March 4, President Donald Trump tweeted an accusation that former President Barack Obama had wiretapped the phones in Trump Tower prior to the election. Trump compared it to Watergate and called Obama “sick.” A spokesperson for Obama quickly and strongly denied the allegations, stating that “neither President Obama nor any White House official ever ordered surveillance on any U.S. citizen.” FBI Director James Comey asked the Justice Department to immediately reject the president’s allegations on the grounds that they falsely imply that the FBI broke the law.

Continue reading “Making Sense of Trump’s Wiretapping Accusations”

On Drones: Helpful versus Harmful

During the Super Bowl halftime show this past month, Lady Gaga masterfully demonstrated one of the most striking mass uses of drones to date. At the conclusion of her show, drones powered by Intel formed the American flag and then rearranged themselves to spell out one of the show’s main sponsors, Pepsi. This demonstration represented the artistic side of drones and one of the more positive images of them.

Continue reading “On Drones: Helpful versus Harmful”