
How the NSF allocates billions of federal dollars to top universities


As another college year begins, tens of thousands of academics will once again be scrambling to submit proposals to the National Science Foundation (NSF), hoping to secure government funding for their research. Each year, the NSF bestows more than $7 billion worth of federal funding on about 12,000 research proposals, chosen out of about 45,000 submissions.

Thanks to the power of open data, we can now see how representation on NSF federal advisory committees connects to which universities get the most funding. (Federal advisory committee membership data is a feature of Influence Explorer.)

Our analysis finds a clear correlation between the universities with the most employees serving on the NSF advisory committees and the universities that receive the most federal money. Overall about 75% of NSF funding goes to academic institutions.

Even when controlling for other factors, we find that for each additional employee a university has serving on an NSF advisory committee, that university can expect to see an additional $125,000 to $138,000 in NSF funding.

Although the 144 NSF advisory committees do not make funding decisions directly, they do “review and provide advice on program management, overall program balance, and other aspects of program performance,” according to the NSF.

At the big-picture level, the data on NSF grant awards and NSF advisory committee representation reinforce just how much of the money and representation is concentrated in a limited number of major universities.

Twenty percent of top research universities got 61.6% of the NSF funding going to top research universities between 2008 and 2011. These universities also had 47.9% of the representatives on NSF advisory committees who came from top research universities during the same period. The next 20% of universities got 21.9% of the funding and had 25.7% of the representatives. The bottom 20% of research universities had just 1.0% of the funding and 2.4% of the representatives.
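Quintile shares like these are straightforward to recompute from a ranked funding table. A minimal sketch in Python, using invented per-university figures rather than the actual NSF data:

```python
# Share of total funding captured by each quintile of universities.
# The funding figures below are illustrative placeholders, not real NSF data.

def quintile_shares(funding):
    """Return each quintile's share of total funding, top quintile first."""
    ranked = sorted(funding, reverse=True)
    total = sum(ranked)
    step = len(ranked) // 5
    return [sum(ranked[i * step:(i + 1) * step]) / total for i in range(5)]

funding = [360, 94, 80, 68, 67, 12, 9, 7, 5, 3]  # millions, hypothetical
shares = quintile_shares(funding)
print([round(s, 3) for s in shares])
```

Applied to the full 171-university table, the same calculation yields the 61.6% / 21.9% / 1.0% split described above.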

Just 23 universities account for more than half of the funding awarded by the NSF to top research universities. See Table 1.

Table 1. The 23 Universities that get half of NSF funding to research universities

University                                  Avg. Annual Funding     Avg. NSF Committee     Avg. # of Committees
                                            (millions, 2008-2011)   Representation         with Representatives
                                                                    (2008-2011)            (2008-2011)

University of California (entire system)    $360.90                 639                    41
Cal Tech                                    $93.90                  59                     22
Illinois at Urbana-Champaign                $80.10                  267                    32
Michigan-Ann Arbor                          $68.20                  214                    34
Cornell                                     $67.60                  138                    30
Washington                                  $65.90                  166                    36
Wisconsin-Madison                           $65.20                  185                    29
Purdue                                      $63.10                  198                    32
Columbia                                    $62.90                  118                    28
M.I.T.                                      $61.70                  27                     12
Texas-Austin                                $55.50                  258                    31
Minnesota-Twin Cities                       $53.50                  155                    32
Arizona                                     $51.70                  111                    29
Colorado-Boulder                            $51.70                  135                    32
Florida State                               $51.10                  64                     26
Stanford                                    $50.40                  99                     29
Penn State                                  $50.20                  96                     23
Carnegie-Mellon                             $48.20                  115                    23
Maryland-College Park                       $47.40                  221                    32
Arizona State                               $41.40                  146                    29
Rutgers-New Brunswick                       $40.80                  120                    28
Harvard                                     $39.40                  61                     27
Michigan State                              $38.40                  124                    28

The University of California tops the list by far, because we combined all University of California campuses (due to data issues, see our data and methodology section), followed by Cal Tech, the University of Illinois, Michigan and Cornell. Interestingly, of the traditional top three universities (Harvard, Princeton and Yale), only Harvard shows up on the above list, at No. 22.

For complete data on 171 major research universities, click here. (The 171 universities come from the US News and World Report list of 200 major research universities. We selected only universities that had some interaction with the NSF between 2008 and 2011).

More representatives on advisory committees, more funding

Figure 1 plots the average NSF funding level for the university from 2008-2011, and the average number of representatives serving on NSF committees during this same period.

Figure 1. NSF funding and committee representation

The correlation is clear. The more university-affiliated individuals serve on NSF advisory committees, the more NSF funding the university gets. Mostly, big state schools, with a few Ivy League schools in the mix, dominate the higher echelons of funding and representation. Interestingly, both Cal Tech and M.I.T., two of the pre-eminent research institutions in the country, get substantial NSF funding with limited representation. (Note: The University of California is left off this chart since it is a far outlier on both average funding ($361 million) and average representation (638.5 members). Because the quality of our data prevents us from breaking down the University of California by campus, we largely omit it from our analysis.)

A second scatterplot (Figure 2) examines the relationship between the number of committees and funding levels. Here the data show a somewhat different pattern. Setting aside a few outliers, the relationship between the diversity of committees and NSF funding levels is more exponential than linear: having representation on just a few committees doesn’t consistently correlate with higher funding, but having representation on many committees is strongly correlated with higher funding.

Figure 2. NSF funding and committee diversity

Do more representatives help universities secure more funding?

The NSF “strives to conduct a fair, competitive, transparent, merit-review process for the selection of projects,” based on intellectual merit and broader impacts. Each year, the NSF produces an annual report on the merit review process. To make funding decisions, the NSF relies on tens of thousands of expert reviewers, though program officers make the final decisions.

Advisory committees oversee the general direction of the NSF program areas, including identifying “disciplinary needs and areas of opportunities.” As for who gets on these committees, the NSF explains that: “Many factors are weighed when formulating Committee membership, including the primary factors of expertise and qualifications, as well as other factors including diversity of institutions, regions, and groups underrepresented in science, technology, engineering, and mathematics.”

An example of such a committee is the Proposal Review Panel for Information and Intelligent Systems; its Influence Explorer page lists the committee members, most of whom have university affiliations.

Showing that more representatives help universities get more funding than they would otherwise have received is difficult. There is a very good and reasonable explanation for the patterns we observe in the two above scatter plots: The NSF tries to get the most knowledgeable experts and accomplished academics to serve on its committees. Not surprisingly, the universities that attract the most NSF money are also likely to be home to many accomplished experts, since they are all leading research universities.

However, there are a few ways in which representatives could help their own universities improve their chances. One possibility is that a representative on an NSF committee can pass along funding opportunities, and advice on navigating the committee’s decision-making process, to others at the university, thus strengthening their chances. Insiders can help others better understand what a review committee might be looking for.

Another possibility is that in directing the general funding strategies of NSF program areas, advisory committees might see what their universities are doing as particularly valuable. Or more benignly, they might be more aware of the cutting-edge research within their universities just because it is being done by colleagues they interact with on a regular basis.

One way to investigate the relationship is to do a regression analysis, which allows us to control for different factors simultaneously. For those of a more technical mind, the details are below. For those who want the quick takeaway, it goes like this: Controlling for previous NSF funding and university endowment, universities with more NSF advisory committee representatives get more NSF funding than those with fewer. Each additional representative translates into about an extra $125,000 to $138,000 in NSF funding, controlling for other factors. The number of representatives matters more than the number of committees with representatives. Lobbying expenditures make no difference.

 

Regression analysis in detail

To investigate the relationship between NSF committee representation and funding, we use a multivariate regression to control for multiple factors. (We again leave out the University of California because it is such a strong outlier.)

We estimate three models:

Model 1: NSF funding = α + β1·(previous year funding) + β2·(university endowment) + β3·(dummy for year 2010) + β4·(dummy for year 2011) + β5·(representatives on advisory committees) + β6·(lobbying expenditures) + μ

Model 2: NSF funding = α + β1·(previous year funding) + β2·(university endowment) + β3·(dummy for year 2010) + β4·(dummy for year 2011) + β5·(committees with university representatives) + β6·(committees with university representatives)² + β7·(lobbying expenditures) + μ

Model 3: NSF funding = α + β1·(previous year funding) + β2·(university endowment) + β3·(dummy for year 2010) + β4·(dummy for year 2011) + β5·(representatives on advisory committees) + β6·(committees with university representatives) + β7·(committees with university representatives)² + β8·(lobbying expenditures) + μ

 

Each regression is an attempt to explain the outcome variable (the funding level) in terms of a set of other variables. The values presented are the coefficients, and the numbers below in parentheses are the standard errors (the +/- value to be applied to the coefficient).
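As a sanity check on how such a regression works, here is a minimal sketch that generates synthetic data with the Model 1 structure and recovers the coefficients by ordinary least squares with numpy. The sample size, noise level, and all data values are invented; this is not the actual university panel:

```python
import numpy as np

# OLS on synthetic data with the Model 1 structure:
# funding ~ intercept + previous funding + endowment + year dummies
#           + representatives + lobbying. All values are placeholders.
rng = np.random.default_rng(0)
n = 500

prev_funding = rng.uniform(0, 50e6, n)
endowment = rng.uniform(0, 100e6, n)
year_2010 = rng.integers(0, 2, n)
year_2011 = (1 - year_2010) * rng.integers(0, 2, n)  # mutually exclusive dummies
reps = rng.integers(0, 200, n)
lobbying = rng.uniform(0, 1e6, n)

# "True" coefficients used to generate the outcome (invented).
beta = np.array([5e6, 0.5, 0.5, -8e6, -7e6, 125_000, 0.0])
X = np.column_stack([np.ones(n), prev_funding, endowment,
                     year_2010, year_2011, reps, lobbying])
funding = X @ beta + rng.normal(0, 1e6, n)

# Least squares recovers the coefficients from the noisy data.
coef, *_ = np.linalg.lstsq(X, funding, rcond=None)
print(round(float(coef[5])))  # per-representative effect, near 125,000
```

The actual analysis also reports standard errors for each coefficient, which is what the parenthesized values in Table 2 are.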

 

Table 2. Regression results

Variable                        Model 1        Model 2        Model 3
Baseline (intercept)            4,960,000      6,750,000      5,020,000
                                (895,000)      (1,480,000)    (1,420,000)
Previous year NSF funding       0.498          0.563          0.502
                                (0.027)        (0.0277)       (0.0274)
University endowment            0.544          0.508          0.547
                                (0.133)        (0.141)        (0.133)
Year is 2010                    -8,270,000     -8,890,000     -8,210,000
                                (1,020,000)    (1,080,000)    (1,030,000)
Year is 2011                    -6,820,000     -7,400,000     -6,750,000
                                (1,030,000)    (1,090,000)    (1,030,000)
Number of representatives       125,000                       138,000
                                (11,400)                      (18,000)
Number of committees                           -255,000       20,700
                                               (176,000)      (170,000)
Number of committees squared                   21,600         -3,490
                                               (5,340)        (6,000)
Lobbying expenditures           0.347          1.31           0.757
                                (1.87)         (2.05)         (1.94)
R-squared                       0.7876         0.7614         0.7872

To understand the results presented in Table 2, read down each column. Start with the “baseline” – about $5 million. Then, add about half of the previous year’s NSF funding (funding levels tend to be similar from year to year). Then add about half of the university’s endowment. Then subtract about $8.2 million if the year is 2010 or about $7 million if the year is 2011 (depending on the model). This adjusts for the fact that there was less money awarded in 2010 and 2011 than in 2009.
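That walk-through can be written out directly. The sketch below plugs the Model 1 coefficients from Table 2 into the prediction equation for a hypothetical university; every input value is invented for illustration:

```python
# Model 1 prediction for a hypothetical university in 2010.
# Coefficients are taken from Table 2; the inputs are invented.
def predicted_funding(prev_funding, endowment, is_2010, is_2011,
                      representatives, lobbying):
    return (4_960_000
            + 0.498 * prev_funding
            + 0.544 * endowment
            - 8_270_000 * is_2010
            - 6_820_000 * is_2011
            + 125_000 * representatives
            + 0.347 * lobbying)

est = predicted_funding(prev_funding=40_000_000, endowment=20_000_000,
                        is_2010=1, is_2011=0,
                        representatives=50, lobbying=0)
print(f"${est:,.0f}")  # about $33.7 million
```

Note how the 50 committee representatives alone contribute $6.25 million to the predicted total.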

Then we come to the variables we really care about. The first model looks only at representatives, and estimates that for each representative a university has on an NSF committee, it can expect an additional $125,000 in NSF funding awards.

The second model estimates predicted funding in terms of the number of committees. Since the relationship appears to be more exponential than linear, we include a term for the number of committees squared, which captures the idea that what really matters is having representation on many committees. To get the predicted funding level, subtract $255,000 for each committee, then add $21,600 times the square of the number of committees. What this shows is that only beyond about 11 or 12 committees do additional committees start to be associated with more funding, and the association grows quickly from there.
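The crossover point follows directly from the Model 2 coefficients: the net committee contribution, -255,000·c + 21,600·c², turns positive once c exceeds 255,000 / 21,600 ≈ 11.8 committees. A quick check:

```python
# Net contribution of committee count to predicted funding under Model 2:
# -$255,000 per committee plus $21,600 per committee squared (Table 2).
def committee_effect(committees):
    return -255_000 * committees + 21_600 * committees ** 2

# Negative through 11 committees, positive from 12 committees on.
print(committee_effect(11), committee_effect(12))
```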

The third model shows that when total representation and committee diversity are put up against each other (or held constant for each other), it is total representation that is the statistically significant predictor, and thus more important than the diversity of committees. Controlling for the diversity of committees, the predicted value of each representative on a committee is $138,000 in funding.

We also tested the effect of lobbying expenditures on NSF funding. There is no statistically significant relationship.

Overall, these models do a good job of explaining the variation in the data: the R-squared values show that between 76% and 78% of the variation can be explained by our factors.

Again, it’s hard to be sure that we are observing a causal relationship. But the data do show a clear correlation between NSF advisory committee representation and university-level funding, and some very suggestive evidence that the two are connected: universities with representatives on the inside seem to do better than those without.

 

Data and Methodology

We began with a list of the top 200 universities in US News and World Report for 2011. Of these universities, 171 received NSF funding or served on an NSF committee at some point between 2008 and 2011. Our data on NSF advisory committee membership were pulled from data on Influence Explorer.

We collected the funding data from research.gov for the years 2008-2011. Only grants awarded by the NSF were used in computing the funding amount. In the case of the University of California system, all of the campuses were combined into one because the data in Influence Explorer were also aggregated.

Complete data for university funding and representation can be found here.

Special thanks to Christopher Mascaro and Alex Engler for their help in preparing this post.