We should be experimenting more with funding mechanisms

James Edward Dillard
Jun 17, 2021
Photo by Julia Koblitz on Unsplash

This past week I’ve been helping a friend apply for funding from the Department of Energy. I don’t bring a lot to the table from a technical perspective, but I can look for places where someone who hasn’t been thinking about the topic all day for months on end might lose the thread. I hope I’m being helpful.

It just so happened that this was also the week that ChinaTalk did a podcast on how to do R&D right with Ben Reinhardt. The combination of the two has me thinking a lot about funding mechanisms.

One of the themes of the ChinaTalk episode is that R&D funding in the US, in general, tends to be pretty monolithic and consensus-driven. The hypothesis they discuss on the show is that this drives out high-variance ideas, and that one of the things DARPA gets right is giving relatively small amounts of money to highly capable people, along with a fair amount of latitude to chase an idea and see where it goes.

All of this sounds pretty logical to me. But I couldn’t help but wonder why we’re not doing more to test different funding mechanisms. While listening and jogging, I wanted to shout: “We should be testing out funding mechanisms!”

  • Which funding mechanisms work best for different problem types (e.g., incremental progress toward a well-understood goal vs. truly unique 0-to-1 innovation)?
  • When is it better to have individuals fund ideas vs. committees?
  • What is the optimal check size for a given idea / category?
  • When are increased application requirements helpful? What about interviews? Reporting requirements?
  • When is it better to have one funder vs. multiple funders?

The list goes on; I’m sure I’m missing something. It’s logical to me that for certain categories of problems, a fully buttoned-up 30-page paper reviewed by experts in the field outperforms a one-pager. But it also seems likely that a lot of the work that goes into getting funding is extraneous.

In a perfect world, the NIH or DoD or someone else with a big checkbook would run competing funding models at various points in the reporting pipeline, compare them all to a randomized baseline, and share what they’ve learned on some regular cadence.

In a less perfect world, this gap would be filled by a private institution, maybe a foundation or an exotic billionaire. At minimum, there would be one major funding institution that compares its funding model to randomly selected applications to demonstrate some sort of “alpha” for scientific progress.

Surely someone must have thought of this before, I thought. After all, I’m just a guy on a jog listening to a podcast.

And because it’s 2021 and the internet exists, someone who knows more than I do has thought of this before. José Luis Ricón has written about this a bunch (one recent example is here). But as far as I can tell, while some groups are trying out funding models that they view as unique compared to the status quo, no one is actively trying to compare different funding models head to head, sort of like a Kaggle competition, but for innovation.

So I’m going to try and push this forward by outlining what I think you would need to set something like this up.

Here’s what I think you need:

  1. You need a way for an outside observer to judge outcomes relatively easily; it should be obvious whether a given application succeeded. An implication of this is that you probably want time horizons that aren’t too terribly long
  2. It would help to have a topic area where the size of a typical investment isn’t too large so you can fund a relatively high number of applicants
  3. You need a topic area that’s broad enough that it can support a lot of different applicants
  4. You’re going to want 2–3 funding models that are pretty different from each other
  5. You need a way to randomly assign applicants to the various models (a rough sketch of this is below)
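To make point 5 a little more concrete, here’s a minimal sketch of what random assignment could look like. Everything in it (the arm labels, the function name, the round-robin approach) is an assumption for illustration, not a real system.

```python
import random

# Placeholder arm labels; in the concrete example below these map to
# Models A through E.
ARMS = ["A", "B", "C", "D", "E"]

def assign_arms(applicant_ids, seed=2021):
    """Shuffle applicants and deal them out round-robin, so each
    funding model ends up with a roughly equal share."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(applicant_ids)
    rng.shuffle(ids)
    return {applicant: ARMS[i % len(ARMS)] for i, applicant in enumerate(ids)}

# Example: 10 applicants spread evenly across 5 arms
print(assign_arms([f"applicant_{n}" for n in range(10)]))
```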

Here’s my attempt to apply this to a concrete example:

Topic: Grants to reduce the emission of greenhouse gases. I picked this because it seems important, broad, and an area where you might be able to do something interesting on a smaller budget.

Competing funding models:

  • Model A: Random selection
  • Model B: A one page application selected by an individual
  • Model C: A one page application selected by a committee
  • Model D: A long application (e.g., multiple sections with rubric grading) selected by an individual
  • Model E: A long application selected by a committee
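For what it’s worth, these five models boil down to two knobs (application length and who decides), plus a random arm. One way you might write that down, purely as an illustration:

```python
# Illustrative encoding of the five example models. "application" is the
# form an applicant fills out; "selector" is who (or what) makes the call.
FUNDING_MODELS = {
    "A": {"application": "any",   "selector": "random"},     # random selection
    "B": {"application": "short", "selector": "individual"}, # one-pager
    "C": {"application": "short", "selector": "committee"},
    "D": {"application": "long",  "selector": "individual"}, # rubric-graded
    "E": {"application": "long",  "selector": "committee"},
}
```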

Funding:

  • Max grant size: $20,000
  • Total number of grants: 50
  • Note: I haven’t worked this out yet, but I feel like you should be able to be clever and have a data set somewhat larger than 50 since some of the individually selected applicants might also be selected randomly or by committees.
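To show what I mean by that last note, here’s a toy count. The applicant IDs and selections are made up just to illustrate how overlap stretches the dataset.

```python
# If the same applicant would be funded under more than one model, a single
# grant yields an observation for each of those models. Made-up selections:
selected = {
    "A": {"app_03", "app_11"},            # random draw
    "B": {"app_03", "app_07", "app_09"},  # individual reading one-pagers
    "C": {"app_07", "app_12"},            # committee reading one-pagers
}

grants_to_fund = len(set.union(*selected.values()))            # distinct applicants
observations = sum(len(picks) for picks in selected.values())  # per-model data points

print(grants_to_fund, observations)  # 5 grants produce 7 observations
```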

How it works in practice:

  • Create a website for receiving applications
  • When a user signs up / creates an account, assign them an application length (short vs. long); a quick sketch of this step follows the list
  • Run the selection process
  • Award the funds
  • Track the participants
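Here’s a rough sketch of the signup step, assuming you want each account to keep the same application length if the person comes back later. The function name and the 50/50 split are assumptions, not a spec.

```python
import hashlib

def application_length(account_id: str) -> str:
    """Hash the account id so the same applicant always sees the same
    application length, across sessions and devices."""
    digest = int(hashlib.sha256(account_id.encode("utf-8")).hexdigest(), 16)
    return "short" if digest % 2 == 0 else "long"

print(application_length("user-123"))  # stable "short" or "long" per account
```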

Success metrics:

  • After 2 years, which of the grant recipients have clearly demonstrated the ability to reduce or remove greenhouse gases (a toy version of this comparison is sketched below)
  • Similarly, you could probably repeat this analysis annually through year 5
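And a sketch of what the year-two comparison might look like once each recipient is labeled as a success or not. The outcomes below are fabricated; they’re only there to show the shape of the analysis.

```python
from collections import defaultdict

# Fabricated example data: one row per grant recipient.
outcomes = [
    {"model": "A", "success": False},
    {"model": "A", "success": True},
    {"model": "B", "success": True},
    {"model": "B", "success": True},
    {"model": "C", "success": False},
]

def success_rate_by_model(rows):
    """Share of recipients in each funding model who cleared the success bar."""
    wins, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["model"]] += 1
        wins[row["model"]] += int(row["success"])
    return {model: wins[model] / totals[model] for model in totals}

print(success_rate_by_model(outcomes))  # e.g. {'A': 0.5, 'B': 1.0, 'C': 0.0}
```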

Budget: $1M in grant funds plus some amount of overhead; 10–15%? So roughly $1.1–1.15M total?
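Spelled out, with the 10–15% overhead range being my guess rather than a known number:

```python
grants = 50
max_grant = 20_000
grant_pool = grants * max_grant                          # $1,000,000 in grant funds
totals = [grant_pool * (1 + oh) for oh in (0.10, 0.15)]  # assumed overhead range
print(grant_pool, totals)  # roughly $1.1M to $1.15M all in
```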

I think this is interesting as a thought exercise. Maybe you think the grant size is too small or the number of grant recipients isn’t big enough to draw conclusions. But even if you scale both numbers up pretty considerably, you could experiment with this at a funding amount that isn’t outside the realm of possibility for governments and foundations.
