The Ethical Tradeoffs of Medical Surveillance: Tracking, Compassion, and Moral Formation

photograph of medical staff holding patient's hand

Our ability to track doctors – their movements, their locations, and everything they accomplish while on the job – is increasing at a rapid pace. Using RFID tags, hospitals are able to track not only patients and medical equipment but hospital staff as well, allowing administrators to monitor the exact amount of time that physicians spend in exam rooms or at lunch. On top of that, electronic health record systems (EHRs) require doctors to meticulously record the time they spend with patients, demanding multiple hours of charting a day. And more could be on the way. Researchers are now working on technology that would track physician eye movement, allowing surveillance of how long a doctor looks at a patient’s chart or test results before making a diagnosis.

There are undeniable benefits to all of this tracking. Along with providing patients and their families with detailed examination notes, such close surveillance ensures that doctors are held to a meaningful standard of care even when they are tired or stressed. And workplace accountability is nothing new. Employers have used everything from punch clocks and supervisors to drug tests to make sure that their staff perform while on the job.

Yet as the surveillance of physicians becomes ever more ubiquitous, the number of moral concerns increases as well. While tracking typically does improve behavior, it can also stunt our moral growth. Take, for example, plagiarism detectors. If they are 100% accurate at detecting academic dishonesty, then they drastically reduce the incentive to cheat, making cheating clearly counterproductive for anyone who wants to pass their classes. This will cause most students to avoid plagiarism out of sheer self-interest. At the same time, though, it robs students of an opportunity to develop their moral characters, relieving them of the need to practice doing the right thing even when they might not get caught.

On the other hand, while school might be an important place to build the virtues, hospitals clearly are not. We want our doctors to be consistently attentive and careful in how they diagnose and treat their patients, and if increased surveillance can ensure that, then that seems like a worthwhile tradeoff. Sure, physicians might miss out on a few opportunities for moral growth and formation, but this loss can be outweighed by not leaving it up to chance whether any patients fall through the cracks. If more surveillance means that more patients get what they need, then so be it.

The problem, however, is that surveillance may not mean that hospitals are getting higher-quality care, but simply more of what they measure. As doctors become more focused on efficient visit times and necessary record-keeping, evidence is piling up that technological innovations like EHRs actually decrease the amount of time that physicians spend with their patients. Physicians now spend over four hours a day updating EHRs, including over 15 minutes each time they are in an exam room with a patient. Many doctors must also continue charting late into the night, laboring after hours to stay on top of their work and burning out at ever-increasing rates. So, while patient records might be more complete than ever before, time with and for patients has dwindled.

All of this becomes particularly concerning in light of the connection between physician compassion and patient health. Research has shown that when healthcare providers have the time to show their patients compassion, medical outcomes not only improve, but unnecessary costs are reduced as well. At the same time, compassion also helps curtail physician burnout, as connecting with patients makes doctors happier and more fulfilled.

So maybe the moral formation of doctors is not irrelevant after all. If there is a strong link between positive clinical outcomes and doctors who have cultivated a character of compassion (doctors who are also less likely to burn out), then how hospitals and clinics form their physicians is of the utmost importance.

This, of course, raises the question of what this means for how we track doctors. The most straightforward conclusion is that we shouldn’t give physicians so much to do that they have no time for empathy. Thanks to an emphasis on efficiency, 56% of doctors already say that they do not have enough time for compassion in their clinical routines. If compassion plays a significant role in providing quality healthcare, then that obviously needs to change.

But an emphasis on compassion and the moral characters of doctors raises even deeper questions about whether medical surveillance is in need of serious reform. It is extremely difficult to measure how compassionate doctors are being with their patients. Simply tracking time spent, particular eye movements, or even a doctor’s tone of voice might not truly reflect whether doctors are being empathetic and compassionate towards their patients, making it unclear whether more in-depth surveillance could ever ensure the kinds of personal interactions that are best for both doctors and their patients. And as we have seen, whatever metrics hospitals attempt to track, those are the measures that doctors will prioritize when organizing their time.

For this reason, it might be that extensive tracking will always subtly undermine the outcomes that we want, and that creating more compassionate healthcare requires a more nuanced approach to tracking physician performance. It may be possible to still have metrics that ensure all patients get a certain baseline of care, but doctors might also need more time and freedom to connect with patients in ways that can never be fully quantified in an EHR.

Informed Consent and the Joe Rogan Experience

photograph of microphone and headphones in a recording studio

The Joe Rogan Experience (JRE) podcast was again the subject of controversy when a recent episode was criticized by scientific experts for spreading misinformation about COVID-19 vaccinations. It was not the first time this had happened: Rogan has frequently been on the hot seat for espousing views on COVID-19 that contradict the advice of scientific experts, and for entertaining guests who offered similar views. The most recent incident involved Dr. Robert Malone, who relied on his medical credentials to lend credibility to views that have been widely rejected. Malone has himself been at the center of a few controversies: he was recently kicked off of YouTube and Twitter for violating their respective policies on the spread of misinformation, and his appearance on the JRE podcast has prompted some to call for Spotify (where the podcast is hosted) to employ a more rigorous misinformation policy.

While Malone made many dubious claims during his talk with Rogan – including that the public has been “hypnotized,” and that policies enforced by governments are comparable to policies enforced during the Holocaust – one specific ethical argument perhaps passed under the radar. Malone made the case that he (and presumably other doctors and healthcare workers) had a moral duty to tell those considering the COVID-19 vaccine about a wide range of potential detrimental effects. For instance, in the podcast he stated:

So, you know my position all the way through this comes off of the platform of bioethics and the importance of informed consent, so my position is that people should have the freedom of choice particularly for their children… so I’ve tried really hard to make sure that people have access to the information about those risks and potential benefits, the true unfiltered academic papers and raw data, etc., … People like me that do clinical research for a living, we get drummed into our head bioethics on a regular basis, it’s obligatory training, and we have to be retrained all the time… because there’s a long history of physicians doing bad stuff.

Here, then, is an argument that someone like Malone may be making, and that you’ve potentially heard at some point over the past two years: doctors and healthcare workers have a moral obligation to provide patients receiving any kind of health care with adequate information to make an informed decision. Failing to provide the full extent of information about possible side effects of the COVID-19 vaccine represents a failure to provide the information patients need to make informed decisions. It is therefore morally impermissible to refrain from informing patients about the full range of possible consequences of receiving the COVID-19 vaccine.

Is this a good argument? Let’s think about how it might work.

The first thing to consider is the notion of informed consent. The general idea is that providing patients with adequate information is required for them to have agency in their decisions: patients should understand the nature of a procedure and its potential risks so that the decision they make really is their decision. Withholding relevant information would thus constitute a failure to respect the agency of the patient.

The extent and nature of the information that patients must be given, however, is open for debate. Of course, there’s no obligation for doctors and healthcare workers to provide false or misleading information to patients: being adequately informed means receiving the best possible information at the doctor’s disposal. Many of the worries surrounding the advice given by Malone, and others like him, pertain to just this point: the concerns they raise are overblown, or have been debunked, or are generally not accepted by the scientific community, and thus there is no obligation to pass such information along to patients.

Regardless, one might still think that in order to give fully informed consent, one should be presented with the widest range of possible information, after which the patient can make up their own mind. Of course, Malone’s thinking is much closer to the realm of the conspiratorial – during his interview with Rogan, for example, he claimed that scientists manipulate data in order to appease drug companies, in addition to his aforementioned claims about mass hypnosis. Even so, if such views are genuinely held by a healthcare practitioner, should they present them to their patients?

While informed consent is important, there is also debate about how fully informed, exactly, one ought to be, or can be. For instance, while an ideal situation would be one in which patients had a complete, comprehensive understanding of the nature of a relevant procedure, treatment, etc., there is reason to think that many patients fail to achieve that degree of understanding even after being informed. This isn’t really surprising: most patients aren’t doctors, and so will be at a disadvantage when it comes to having a complete medical understanding, especially if the issue is complex. A consequence, then, may be that patients who are not experts could end up in a worse position when it comes to understanding the nature of a medical procedure if presented with too much information, or with information that could lead them astray.

Malone’s charge that doctors are failing to adhere to their moral duties by not fully informing patients of a full range of all possible consequences of the COVID-19 vaccination therefore seems misplaced. While people may disagree about what constitutes relevant information, a failure to disclose all possible information is not a violation of a patient’s right to be informed.

Should Clinicians Have Soapboxes?

blurred photograph of busy hospital hallway

Despite the tendency to talk about the pandemic in the past tense, COVID-19 hasn’t gone away. Infection rates in multiple countries are swelling, prompting some – like Kenya, Austria, the Netherlands, and Belgium – to employ increasingly stringent measures. Unsurprisingly, alongside rising infection rates comes an increase in hospital admissions. Yet there’s one trait that most of those requiring COVID-19 treatment share – they’re unvaccinated.

This trend isn’t surprising given that one of the points of vaccination is to reduce the severity of infection, thus reducing the need for serious medical interventions. Simply put, vaccinated people aren’t ending up in hospitals as often because they’re vaccinated. The people who haven’t been vaccinated, for whatever reason, are more likely to have severe complications if infected, and thus to need clinical care. So far, so simple.

This tendency for hospital beds to be occupied by the unvaccinated invites questions regarding the burden on healthcare systems. After all, emergency care services are better placed to respond to emergencies – like bus crashes, heart attacks, or complicated births – when their wards, ambulances, and hallways aren’t already occupied by patients. If those patients are there because of their choice not to be vaccinated, it’s only natural to wonder whether they are equally deserving of those resources.

But is it appropriate for those working in the medical profession to voice such concerns? If you’re in the hospital seriously ill, does it help to know that your nurse, doctor, consultant, or porter may resent your being there?

This question has been brought to the forefront of the COVID-19 discussion by a recent Guardian article entitled “ICU is full of the unvaccinated – my patience with them is wearing thin.” In it, an anonymous NHS respiratory consultant writes, “I am now beaten back, exhausted, worn down by the continuous stream of people that we battle to treat when they have consciously passed up the opportunity to save themselves. It does make me angry.” Similar sentiments appear in the “Treating the unvaccinated” article in The New Yorker, where critical care physician Scott Aberegg recounts:

There’s a big internal conflict… On the one hand, there’s this sense of ‘Play stupid games, win stupid prizes.’ There’s a natural inclination to think not that they got what they deserved, because no one deserves this, but that they have some culpability because of the choices they made… When you have that intuition, you have to try to push it aside. You have to say, [t]hat’s a moral judgment which is outside my role as a doctor. And because it’s a pejorative moral judgment, I need to do everything I can to fight against it. But I’d be lying if I said it didn’t remain somewhere in the recesses of my mind. This sense of, Boy, it doesn’t have to be this way.

It’s not surprising that clinicians feel this way. They’ve seen the very worst this pandemic has to offer. The prospect that any of it was avoidable will undoubtedly stir up feelings of anger, betrayal, or even injustice; clinicians are, after all, only human. But while expecting clinicians not to have such opinions seems like an impossible demand, should they be voicing them on platforms with such a broad reach?

On the one hand, the answer is yes. Entering the medical profession in no way invalidates one’s right to free speech, be that in person or in print. Much like any other member of the public invited to pen an article in an internationally respected newspaper, clinicians have the right to share their views. If that view concerns their increasing inability to accept the preventable loss of life, then, at least in terms of that clinician’s rights, there is very little to stop them ethically. To try would be to revoke a privilege which many of us would likely consider fundamental and, without a robust justification, unassailable.

However, those experiencing the pandemic’s horrors may have more than just a right to share their opinions; they might have a duty. Those working on the frontlines in the battle against the pandemic know better than most the state of the healthcare services, the experience of watching people die from the illness, and the frustration of coping with suffering that is seemingly preventable. Given that they have this unique knowledge, from both a medical and a personal standpoint, it would seem that clinicians have a responsibility to be as honest with the general public as possible. If that means sharing their woes and frustrations about the reluctance of people to take even the most basic steps to save themselves, then so be it. After all, if they don’t tell us this information, it seems unlikely that anyone else will.

But, such a principled stance may detrimentally affect trust in the healthcare system, and subsequently, that system’s effectiveness.

As The Prindle Post has recently explored, shame is a complex phenomenon. Its use in trying to shape people’s behaviors is far from simple. This complexity has been seen in several previous public health concerns where shame has had the opposite of its intended effect. As both The Wall Street Journal and NPR have recently reported, shame makes for a terrible public health tool because it deters engagement with clinicians. If you believe that you’re going to be shamed by your doctor, you’re probably less likely to go. For smokers and alcoholics, this chiefly affects only a single person’s health. During a global pandemic, however, it means there’s one more potentially infectious person not receiving medical care. Scaled up, this can easily result in countless people refusing to visit hospitals when they need to – increasing infection rates and preventing medical assistance from reaching those who need it.

All this is not to say that doctors, nurses, surgeons, and countless others involved in the care of the vulnerable should be automatons, devoid of emotion and opinion about the unvaccinated. Again, they’re human, and they’re going to have thoughts about what they see during the course of their professional careers. But whether those opinions should be broadcast for the entire world to read and see is an entirely different question.

COVID Vaccines and Primary Care

photograph of elderly man masked in waiting room

Dr. Jason Valentine, a general practitioner in Alabama, has decided that, starting October 1st, he will no longer treat unvaccinated patients. At the beginning of August, Valentine’s clinic made the announcement, clarifying that his personal rule applied to both current patients and new ones. So long as you are unvaccinated, Dr. Valentine will not be seeing you. When asked why he was choosing not to treat unvaccinated patients, Valentine said, “COVID is a miserable way to die and I can’t watch them die like that.” In Alabama, the state with the highest number of new COVID cases per day, such a sentiment is understandable. But is it ethical?

As most people know, doctors are bound by a creed called the Hippocratic Oath. The name of this oath comes from the historical figure of Hippocrates, a fifth-century BCE Greek physician, to whom the oath is traditionally attributed (although he was likely not the original author). The Hippocratic Oath is the earliest known source of many central ideas of medical ethics that we still hold today: e.g., the patient’s right to privacy, the obligation of the physician not to discriminate between the poor and the rich, and, most famously, the pledge to do no harm.

Doctors today continue to take a version of the Hippocratic Oath, though the oath has undergone major alterations over the past 2,500 years. Still, the pledge to “do no [intentional] harm” remains. Major debates have historically been carried out over what exactly falls under the pledge to “do no harm” — that is, under what conditions are doctors guilty of breaking their oaths? More specifically, is Dr. Valentine breaking the Hippocratic Oath by refusing to see unvaccinated patients?

One argument for thinking that Valentine is breaking his oath is that refusing to see unvaccinated patients constitutes an illegitimate act of medical discrimination. Medical doctors have, historically, been stoically determined to ignore unpalatable particulars about the individuals they were treating. For example, during the Civil War, doctors in both the Union and the Confederate armies treated soldiers injured on the battlefield, regardless of their allegiance (excluding, sadly, Black soldiers on either side). During the Second World War, British surgeons operated on Nazi prisoners of war, in many cases saving their lives. Under the Geneva Conventions, doctors are bound to treat soldiers from their own army and enemy soldiers impartially — enemy soldiers are not to receive worse treatment or a lower medical priority because of their military allegiance. Surely, then, if the Geneva Conventions would forbid a doctor to refuse to see patients who were Nazis, they would also forbid doctors to refuse to treat patients who had not received a vaccination against a dangerous and highly contagious disease?

But there is legal precedent that complicates this verdict as well. Specifically, doctors are allowed to, and frequently do, refuse to see children who have not received their recommended childhood vaccines and do not have a medical reason barring them from receiving vaccines. Reasons for these policies often include the extreme vulnerability of other patients whom the voluntarily unvaccinated may encounter in the office, including young children who are immunocompromised and babies who have not yet received all of their vaccines. Another consideration is that many childhood vaccines prevent infection from nearly eradicated diseases like measles. When children are not vaccinated against these illnesses, breakthrough cases stand a higher chance of spreading, thereby resurrecting an almost defeated enemy.

For these reasons, one may be inclined to praise the doctor’s choice. Surely, if people are barred from seeing their general practitioner, this might motivate the unvaccinated to receive the vaccination, undoing some of the damage done by rampant misinformation regarding vaccine safety and efficacy. However, consider a (hypothetical) doctor who refused to treat patients who drank too much alcohol, or who refused to exercise. In these cases, doctors would surely be seen as refusing to do their primary job: assuring the health of their patients to the best of their (possibly limited) abilities. Some philosophers, like Cass Sunstein, refer to actions and laws like these as “paternalism”: acts of mild coercion for the sake of protecting the coerced are sometimes seen as acceptable — seatbelt laws and cigarette taxes are commonly accepted paternalistic laws aimed at mildly coercing safer behavior. But when the coercion becomes harmful, or potentially harmful, such measures are generally seen as morally impermissible. For example, holding someone at gunpoint until they throw away all of their cigarettes may be incredibly effective, and maybe even good for the smoker in the long run, but it is surely morally wrong if anything is. The difference between paternalistic measures and harmful coercion is usually understood as a difference in potential harm and in the degree of autonomy the coerced person retains. When laws increase the tax on cigarettes, smokers may be mildly financially harmed, but this generally will not amount to anything financially destructive. Generally, they retain the choice between taking on a small additional financial burden and giving up smoking. In the gun-to-the-head case, the smoker no longer (meaningfully) retains a free choice. She must give up smoking or face her own death. Anything less than compliance, in this case, results in the most extreme kind of harm.

Clearly, there will be many instances of coercive measures that fall somewhere between these two extremes. This raises a tough question for Dr. Valentine: does refusing to treat voluntarily unvaccinated patients constitute a case of permissible paternalism, or impermissible harmful coercion? One reason for thinking that such a decision may not result in real harm is the abundance of alternative doctors that most people have access to. Surely needing to switch primary care doctors is merely an inconvenience, and not a significant harm. However, there are complicating factors. Many people have insurance plans that severely limit which doctors they can see. Additionally, if Valentine is allowed to refuse unvaccinated patients, there is nothing stopping all of the doctors in his area from adopting the same rule. Someone may then be effectively denied all medical care if every local doctor decides to adopt a similar policy. An inability to access a primary care doctor seems like a more severe harm than the mild coercion of a paternalistic cigarette tax.

There is no easy ethical analysis to give of Dr. Valentine’s decision. While we can surely sympathize with the protocol, and hope it leads to increased vaccination rates, we do not want large swaths of the general public living without a primary care doctor. As with many other aspects of COVID-19, ethicists have their work cut out for them mapping brand-new territory.

Do Terminally Ill Patients Have a “Right to Try” Experimental Drugs?

In his recent State of the Union speech, President Trump urged Congress to pass legislation to give Americans a “right to try” potentially life-saving experimental drugs. He said, “People who are terminally ill should not have to go from country to country to seek a cure — I want to give them a chance right here at home. It is time for the Congress to give these wonderful Americans the ‘right to try.’” Though it was only a brief line in a long speech, the ethical implications of the push to expand access to experimental drugs are worth much more attention.

First, let us be clear on what federal “right to try” legislation would entail. Generally, a new drug must go through several phases of clinical research trials before a pharmaceutical company can successfully apply for approval from the Food and Drug Administration to market the drug for use. Advocates of “right to try” legislation want some terminally ill patients to have access to drugs before they complete this rigorous and often protracted process. Recent legislation in California, for example, protects doctors and hospitals from legal action if they prescribe medicine that has passed phase I of clinical trials, but not yet phases II and III. Phase I trials test a drug’s safety in human subjects. Phase II trials test its effectiveness. Phase III trials test whether it is better than any available alternative treatments.

Thus, “right to try” is a misnomer. First, these experimental drugs are still expected to meet some safety standards before patients can access them. Second, such legislation would not likely mandate that a pharmaceutical company provide access to its experimental drugs; the company could always deny the patient’s request. Third, these laws do not address cost issues. Insurance plans are unlikely to cover any portion of the costs, and pharmaceutical companies are likely to expect the patient to foot the entire bill.

Ethical debate over “right to try” legislation recapitulates a conflict that regularly occurs in American politics: to what extent does government intervention to protect public welfare, by ensuring that drugs are both safe and effective, impede the rightful exercise of a patient’s autonomy to choose for herself what risks she is willing to take? Advocates of expanded “right to try” laws view the regulatory obstacles set up by the FDA as patronizing hindrances. Lina Clark, the founder of the patient advocacy group HopeNowforALS, put it this way: “The patient community is saying: ‘We are smart, we’re informed, we feel it is our right to try some of these therapies, because we’re going to die anyway.’” While safety and efficacy regulations for new pharmaceuticals generally protect the public from an industry in which some bad actors might otherwise push untested and unsafe drugs on an uninformed populace, those same regulations can also prevent well-informed patients from taking reasonable risks to save their lives by denying them access to drugs that may be helpful. On this view, it is reasonable to carve out certain exceptions from these regulations for terminally ill patients.

On the other hand, medical ethicists worry that terminally ill patients are uniquely vulnerable to the allure of “miracle cures.” Dr. R. Adams Dudley, director of UCSF’s Center for Healthcare Value, argues that “we know some people try to take advantage of our desperation when we’re ill.” Terminally ill patients may be vulnerable to exploitation of their desire to find hope in any possible avenue. That intense desire for a miracle cure may prevent them from rationally weighing the costs and benefits of trying an unproven drug: a terminal patient may place too much emphasis on the small possibility that an experimental drug will extend his or her life while ignoring the greater possibility that side effects from the drug will worsen the quality of the life he or she has left. Unscrupulous pharmaceutical companies that see a market in providing terminally ill patients with “miracle cures” may exploit this desire in order to circumvent the regular FDA process.

The Food and Drug Administration already has “compassionate use” regulations that allow patients with no other treatment options to gain access to experimental drugs that have not yet been approved. The pharmaceutical company still must agree to supply the experimental drug, and the FDA still must approve the patient’s application. According to a recent opinion piece in the San Francisco Chronicle, nearly 99 percent of these requests are granted already. “Right to try” legislation at the federal level would not likely mandate that pharmaceutical companies provide the treatment. Such legislation would likely only remove the FDA review step from the process described above.

Proponents of the current system at the FDA view it as a reasonable compromise between respect for patient autonomy and protections for the public welfare. Terminally ill patients have an avenue to apply for and obtain potentially life-saving drugs, but the FDA review process helps safeguard patients from being exploited due to their vulnerable status. The FDA serves as an outside party that can more dispassionately weigh the costs and benefits of pursuing an experimental treatment, thus providing that important step in the rational decision-making process that might otherwise be unduly influenced by the patient’s hope for a miracle cure.

Should Conscientious Objections Apply to Healthcare?

An image of a surgeon operating on a patient.

While executive orders and high-profile legislation garner the most media coverage, much of the change that comes with a new presidential administration happens in the individual departments staffed by new political appointees. The current administration has pushed far-reaching changes regarding the place of religious belief in the healthcare system through actions at the Health and Human Services Department. I’ve previously covered the administration’s decision in October 2017 to widen the scope of exemptions to the contraception mandate. More recently, NPR reported that the Department of Health and Human Services is opening a new Division of Conscience and Religious Freedom to defend health care workers who object to participating in medical care for patients because of their sincerely held religious beliefs. Notably, the establishment of the division also reverses an Obama-era rule barring “health care workers from refusing to treat transgender individuals or people who have had or are seeking abortions.”
