
Rolling the Dice: The Ethics of Randomized Research Funding

By Richard Gibson
10 Jun 2024
[Photograph of bingo balls in a lottery machine]

There is only so much money to go around.

We hear this reasoning all the time in our personal and political lives. Want to buy a new car and a next-gen console? Tough. You can only afford one (or quite possibly, none). So, decisions must be made about where you spend your money. Do you forgo the essentials to get the luxuries? Probably not. It’s usually best to buy what you need before what you want, and for many, paying for the former leaves little for the latter.

The same is true for governments. They can’t simply print money without consequences, as they don’t have an unlimited balance from which to draw. Indeed, discussions about fiscal responsibility – being economically sensible by balancing the books – permeate political debates worldwide. As now infamously encapsulated by former U.K. Prime Minister Theresa May’s “magic money tree” speech, when it comes to deciding where money is spent, those in charge, just like individuals managing household budgets, have to make decisions that mean some of the things we’d want to dedicate money to get shafted.

But it is not only in the personal and political spheres that this is a reality. It also occurs in philosophy and, more broadly, in academia. It costs money to employ people to ponder life’s great mysteries. It also costs money to train them so they have the required skills. It costs money to build and maintain the administrative systems required to employ them and those they work with, and it costs money to send them to conferences to share their research with others. And while philosophers don’t typically need many resources (there’s no Large Hadron Collider for ethics or metaphysics), we need our basic needs met; we need to get paid to afford to live. So, those holding the purse strings make similar decisions about which projects to fund and who to employ as you and I do about what to spend our money on. They have a limited fund to divvy up. For every researcher – senior, established, early career, or even pre-PhD – who secures funding to run their project or fund their post, countless more aren’t so lucky.

This places funding bodies in a somewhat unenviable situation, as they must decide, from amongst the numerous applications they receive, which ones to award funding and which to reject; and there are always more rejections than acceptances. For instance, the British Academy – one of the U.K.’s largest research funders – runs an early career fellowship scheme with a typical success rate of less than 10%. Comparable schemes run by other U.K. funding bodies, like the Wellcome Trust and the Leverhulme Trust, have similar success rates. I suspect the same is true for funders in other jurisdictions.

So, how do funders decide which projects to support? Typically (hopefully), these decisions are made based on merit. Applicants identify a scheme they want to apply for and submit a research proposal, a CV, and referee statements (and maybe some other documentation). The funding body then considers these applications, ranks them according to a set list of criteria, and rewards the lucky few with funding. Those falling short receive a nicely worded email and a metaphorical “better luck next time” pat on the head.

At least, this is how it is supposed to work. Recently, however, funding bodies have been increasingly vocal about how hard it is to distinguish worthy from unworthy proposals. Or, to be more accurate, they’re receiving so many proposals of top quality that they can’t rank them. According to those funders, even after a rigorous selection process, they still have more projects in the “yes” pile than the available funding permits, and they simply can’t choose which ones deserve to be greenlit.

The question, then, which touches upon themes of fairness and responsibility, is what to do about this. How should funding bodies respond when faced with more worthy projects than they can fund and seemingly no way to choose between them? Some have decided that the best way forward is to leave it up to chance.

This method, typically called randomization, is seen as a way for funders to offload the work of selecting between seemingly equally deserving projects onto Lady Luck. In essence, projects are put into a hat, and those pulled out receive funding. This sidesteps the messy work of picking favorites and the need to split hairs. Of course, an entirely random selection process would be unfair, as it would give all projects, regardless of merit, an equal chance of receiving funding. So, when employed, the randomization is only partial. Prospective projects still go through the same evaluation process as before, thus maintaining the quality of work; randomization is employed only at the final step, once only worthy projects remain, and only if it is needed.
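For readers who like the mechanics spelled out, the two-stage logic can be sketched in a few lines of Python. This is purely illustrative and rests on assumptions of mine – the names, the score threshold, and the number of funded slots are all hypothetical – and is not any funder’s actual procedure.

```python
import random

# Hypothetical, illustrative values – not any real funder's figures.
QUALITY_BAR = 85     # minimum merit score a proposal must reach to count as "worthy"
FUNDING_SLOTS = 10   # number of projects the budget can support


def partially_randomized_selection(proposals):
    """Two-stage selection: merit review first, a lottery only if needed.

    `proposals` is a list of (name, merit_score) pairs produced by the
    ordinary peer-review process.
    """
    # Stage 1: merit review filters out proposals that fall below the bar.
    worthy = [name for name, score in proposals if score >= QUALITY_BAR]

    # If the budget covers every worthy proposal, no randomization is needed.
    if len(worthy) <= FUNDING_SLOTS:
        return worthy

    # Stage 2: only when worthy proposals outnumber the available slots is the
    # final cut made by lot, giving each remaining proposal an equal chance.
    return random.sample(worthy, FUNDING_SLOTS)
```

On this sketch, merit decides whether a proposal enters the draw at all; luck decides only among those that have already cleared the bar.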

The aforementioned British Academy was the first major funder to trial partial randomization, trying it out in 2022 for a three-year period as part of their Small Research Grants scheme. Since then, other funders have followed its lead, including the Natural Environment Research Council, the Novo Nordisk Foundation, the Wellcome Trust, and the Carnegie Trust. It is not unreasonable to expect that other funders, upon seeing the increasing use of partial randomization, might also follow suit.

However, the justification for its use goes beyond simply making the funder’s life easier. According to those same funders, it also promotes diversity and fairness. The envisioned mechanisms powering these proposed benefits are relatively intuitive. If all the proposals entering the random draw meet the necessary standards, other considerations that might inadvertently influence funding decisions – such as an applicant’s perceived socio-economic or cultural background – no longer come into play. In other words, partial randomization removes a layer of human bias from the selection process. Indeed, there’s evidence to support such an idea: the British Academy has already announced that, since their trial started, there has been a notable increase in successful projects from scholars of previously underrepresented backgrounds. As noted by Professor Simon Swain, the British Academy’s Vice-President for Research and Higher Education Policy:

The increase in successful applications from historically underrepresented ethnic backgrounds and those based in Scotland and Northern Ireland, along with broader institutional representation, suggests that awarding grants in this way [via partial randomization] could lead to more diverse cohorts of Small Research Grant-holders.

So, not only does partial randomization relieve decision pressures on the funders, but it also benefits those who have historically faced exclusion from such opportunities, which, in turn, enhances the quality of academic research overall. This is undoubtedly a good thing.

Provided that partial randomization is genuinely random, I believe it can also provide solace to researchers whose projects do not get selected. This is because it makes the luck aspect of grant chasing explicit. Like much in life, luck plays a massive role in whether a project gets funding. Even if your work is as good as it can be, success depends on multiple factors outside your control: is the reviewer familiar with the project’s field? Has another applicant got better-written references? Is the reviewer hungry? Or ill? Or tired? All these things, which shouldn’t influence funding decisions, inevitably do. By building into the system a degree of randomization – a quantifiable stage in which luck is explicit – prospective applicants can (or, I think, should) take solace in the fact that their project may be passed over not because of something they did or didn’t do, but because it just wasn’t their day.

However, while partial randomization might have some genuinely desirable benefits, it leaves me slightly uneasy because it has an air of abandonment (maybe even a dereliction) of duty on the funder’s behalf.

It is the funder’s job, or at least the job of those on the relevant selection committees, to rank the projects according to the relevant criteria and decide which should be awarded funding. By outsourcing the final part of this process to a randomized system – be that as complex as a dynamic, multifactored algorithm or as simple as a hat full of names – the funders avoid discharging this duty. They avoid deciding which projects should get funding and avoid responsibility for the outcome. They can wash their hands of the final selection stage and so wash their hands of the joy and, crucially, the disappointment it brings to applicants. While I think prospective applicants can take solace in knowing their project might fail based on nothing but luck, this robs those applicants of a figure at whom to be mad; you can be angry at an envisioned funder or selection committee if you know that, somewhere, a person said your project shouldn’t receive funding. But when a project is rejected based on luck, you have no one at whom to direct any anger or sadness. An algorithm isn’t as good a target for frustration as a person or group.

Ultimately, while the anticipated benefits of partial randomization (increased diversity and fairness) are desirable, the method’s use has an air of avoidance and expedience. It’s the funder’s job to pick the appropriate projects. If they can’t do this fairly, do we want them to take the easy way out, or would we prefer they worked harder to make a justifiable decision?

Richard B. Gibson received his PhD in Bioethics & Medical Jurisprudence from the University of Manchester. His research interests lie at the intersection of philosophy and biology, the philosophy of law, nihilism, and normative ethics. Richard’s currently working on a series of papers examining the social, legal, and ethical implications of cryopreservation.