A (much better) grant evaluation algorithm

2015 October 19
by Daniel Lakeland

There are lots of problems with the way grants are evaluated in NIH or NSF study sections. A few people (typically three) read each grant and give it a score; if the score is high enough, the grant is discussed with the rest of the group, the group votes on the results of that discussion, the scores are added up into a total, the grants are ranked, and the top 5-15% are funded based on available funds (or some approximation of this fairy tale).

To make it past the first round (i.e., into discussion) you need to impress all three of the randomly selected reviewers, including people who might be your competitors, who might not know much about your field, or who might hold a grudge against you... And then you need those people to be good advocates for you in the discussion... It's a disaster of unintended potential biases. Furthermore, the system tends to favor "hot" topics and spends too little time searching the wider space of potentially good projects.

Here is an alternative that I think is far far better:

  1. A study section consists of N > 5 people with expertise in the general field (as it does now).
  2. Each grant submitted by the deadline is given a sequential number.
  3. Take the Unix time of the grant deadline expressed as a decimal number, append the last names of all authors on grant submissions (upper-case ASCII, sorted in ascending alphabetical order), and compute the SHA-512 hash (or another secure cryptographic hash) of that entire string. Then, using AES (or another secure block cipher) keyed with the first 128 bits of the hash, encrypt the counter sequence x, x+1, x+2, x+3, ..., where the starting value x is taken from the remaining bits of the hash. This defines a repeatable, difficult-to-manipulate random number sequence (a sketch of this step appears after the list).
  4. Each grant is reviewed by 5 people chosen at random. (In sequential order, choose a grant number, then choose 5 people at random with replacement to review it... repeat with the next grant.)
  5. Allow each reviewer to score the grant on the usual criteria (feasibility, innovation, blablabla) with equal weight on the various criteria. For each grant, sum each reviewer's criterion scores, giving five totals per grant.
  6. For each grant, take the median of the 5 reviewer totals it was assigned. This prevents your friends, your foes, or clueless people who don't understand the grant from having too much influence.
  7. Divide each grant's score by the maximum possible score.
  8. Add 1 to the score.
  9. Divide the score by 2. You now have a score between 0.5 and 1: half of it is influenced by the reading of the grant, and half is constant, reflecting the assumption that most of the grants are of similar quality. This prevents too much emphasis on the current hot topic.
  10. While there is still money left: select a grant at random with probability proportional to the scores of the grants remaining in the pool, fund it, deduct its budget from the total, and remove it from the pool. Repeat with the next randomly chosen grant until all the money is spent (see the second sketch below).
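
Step 3 is what makes the randomness auditable, and it's easy to sketch. The snippet below is a minimal Python illustration, not a prescribed implementation: it derives the seed material with SHA-512 as described above, but replaces the block-cipher step with repeated hashing of (key || counter) so it runs with only the standard library; the class and function names here are my own.

```python
import hashlib

def derive_seed_material(deadline_unix: int, author_last_names: list[str]) -> bytes:
    """Hash the public inputs (deadline + sorted upper-case author names)
    into fixed seed material, as in step 3."""
    names = "".join(sorted(name.upper() for name in author_last_names))
    return hashlib.sha512(f"{deadline_unix}{names}".encode("ascii", "ignore")).digest()

class AuditableRNG:
    """Repeatable pseudo-random stream derived from the SHA-512 digest.

    The post calls for a block cipher encrypting a counter; here that step is
    replaced by hashing (key || counter), which keeps the example
    dependency-free while preserving the 'seeded from public data,
    hard to manipulate' property.
    """
    def __init__(self, seed_material: bytes):
        self.key = seed_material[:16]                              # first 128 bits -> key
        self.counter = int.from_bytes(seed_material[16:], "big")   # the rest -> counter start

    def _next_block(self) -> bytes:
        block = hashlib.sha512(self.key + self.counter.to_bytes(64, "big")).digest()
        self.counter += 1
        return block

    def randbelow(self, n: int) -> int:
        """Uniform integer in [0, n) via rejection sampling on 64-bit draws."""
        while True:
            draw = int.from_bytes(self._next_block()[:8], "big")
            limit = (2**64 // n) * n
            if draw < limit:
                return draw % n
```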
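
Steps 4 through 10 then reduce to a few lines. This is again only a sketch under some assumptions: each reviewer's total is assumed to be already computed, the maximum possible total is passed in as max_total, and a grant drawn after its budget no longer fits is simply dropped from the pool (the procedure above doesn't say what to do in that case). It reuses AuditableRNG from the previous snippet.

```python
from statistics import median

def assign_reviewers(grant_ids, panel, rng, k=5):
    """Step 4: for each grant in sequence, pick k panel members at random
    (with replacement, as in the post)."""
    return {g: [panel[rng.randbelow(len(panel))] for _ in range(k)]
            for g in grant_ids}

def selection_weight(reviewer_totals, max_total):
    """Steps 6-9: median of the reviewer totals, scaled so the weight
    always lies between 0.5 and 1."""
    return (median(reviewer_totals) / max_total + 1) / 2

def run_lottery(weights, budgets, total_budget, rng):
    """Step 10: repeatedly draw a grant with probability proportional to its
    weight; fund it if its budget still fits, and drop it from the pool."""
    pool = dict(weights)
    funded, remaining = [], total_budget
    while pool and remaining > 0:
        # Pick a point uniformly in [0, sum of weights) using the auditable RNG.
        total_w = sum(pool.values())
        point = rng.randbelow(10**9) / 10**9 * total_w
        for g, w in pool.items():
            point -= w
            if point <= 0:
                break
        del pool[g]
        if budgets[g] <= remaining:   # a draw that no longer fits is simply skipped
            remaining -= budgets[g]
            funded.append(g)
    return funded
```

Wiring it together: derive the seed material from the deadline and the author names, construct a single AuditableRNG, and pass it to both assign_reviewers and run_lottery; anyone who has the submission list can then re-run the whole draw and check that the funded set matches.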

Why is this a good idea? So much of grant scoring is influenced by things other than the science: whether the person writing the grant has published in this field a lot, whether they are well known and liked by the committee members, whether they have been funded in the past, whether they are working on a hot topic, whether they're a new investigator, how MANY papers they've published (not so much how good those papers were), whether they have a sexy new technique to be applied, etc., etc.

But the truth is that most grants are probably similar, mostly lousy-quality projects. It's hard to do science; very few experiments are going to be pivotal, revolutionary, or open up new fields of research. There's going to be a lot of mediocre stuff that's very similar in quality, and so ape-politics is going to have a big influence on the total score across the 5 reviewers.

But, the review process does offer SOME information. When at least 3 of 5 randomly chosen reviewers recognize that a grant is seriously misguided, that's information you should use. Taking the median of 5 scores, and using it as 50% of the decision making criterion seems like a good balance between a prior belief that most grants are of similar quality, and specific knowledge that the 5 reviewers are going to bring to the table after reading it.
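
To put illustrative numbers on that balance: suppose the maximum possible total is 10 and the five reviewers give a grant 9, 9, 8, 3, and 2. The median is 8, so the selection weight is (8/10 + 1)/2 = 0.9. A grant with a median of 1 still gets (1/10 + 1)/2 = 0.55, so even the most glowing set of reviews at most roughly doubles a grant's odds of being funded relative to the most damning set.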
