
Optimizing the IRB

By Conner Gordon
10 Jun 2015

For the average person, the notion of medical research may conjure dramatic images of lab-coated scientists handling test tubes and analyzing data. What hardly ever comes up, though, is a process some researchers dread: approval by an institutional review board (IRB). Notoriously lengthy and sometimes difficult to navigate, the process is an oft-unseen yet critical piece of conducting research. And, as CNN contributor Robert Klitzman argues, the demands it places on researchers in its current form may now burden research more than they protect it.

Part of the problem in this is that, as Klitzman points out, IRBs have no universal structure or procedure. Nor are many particularly transparent about their practices – according to Klitzman, some do not even let the researchers in question attend approval meetings for their research. And he argues that, with 4,000 IRBs around the country, a lack of standardized oversight has led to an inefficient and opaque approval process.

This lack of standardization plays into existing problems surrounding the practice of medical research in the United States. Central among these is the degree of financial bloat involved in research; a recent study in PLOS Biology, for example, reported that approximately $28 billion a year is spent on medical studies with results that are ultimately not reproducible. Whether due to flawed experimental design, ambiguous documentation or misidentified testing materials, such experiments place a significant strain on the efficiency of medical research today.

It is clear, then, that IRB approval plays an important role. Without it, it is entirely possible that even more irreproducible or unsound studies would be undertaken – ultimately worsening the problems the medical research industry faces. However, with a lack of standardization, it appears that IRBs may actually add to the inefficiency, both in time and money, already plaguing the field.

At the same time, though, it could be argued that the frustrating delays Klitzman cites are necessary. Without proper deliberation, both on the part of IRB members and the researchers in question, it would be all too easy to let suspect experimental design or inadequate methods slip through. Though agonizing at times, such delays help board members bring potential issues to light. Within the limits of reason, then, such delays may benefit research as well.

However, clearly there are limits to how much deliberation is required. In some cases, the line between a proper amount of examination and a redundant one can be particularly fine. And if there is no universal standard for IRBs to follow, determining where this line should fall may be more challenging. It is here, Klitzman argues, that increased communication and transparency about IRB procedures is also needed.

If implemented, this transparency will ultimately raise new sets of questions. With such variable practices among IRBs in the United States, it remains to be seen which policies standardization would focus on. Also likely to prove contentious is the degree to which IRB processes should be centralized, and how any proposed oversight powers would be reconciled with institutional autonomy. These questions will be critical to the future of IRBs and, given the inefficiency that in some ways defines medical research today, they likely must be answered soon.

Conner was a Graduate Fellow at the Prindle Institute from 2016-2018. Conner's writing focuses on memory, politics and culture. He is currently an MFA candidate at the University of Oregon.