Historical context

In the past fifteen years, policy-makers have increasingly demanded accountability in the research funding system. In short, they have begun to recognize that it is no longer sufficient to assume that more federal dollars equal more societal benefit; policy-makers (and taxpayers) increasingly want evidence of the outcomes of investments in science. In 1993 Congress cancelled the Superconducting Super Collider after two billion dollars had already been spent on it, largely because the project's massive costs could not be justified by any demonstrable societal benefit. Also in 1993, Congress passed the Government Performance and Results Act (GPRA), which demanded strategic planning and proof of return on investment for all government-funded programs, including science. Some in the science policy community -- especially agencies like the NSF that funded so-called basic research -- reacted to GPRA with dismay, as its demand for performance-driven management threatened the autonomy and insulation from social concerns they had grown accustomed to (Cozzens 1999).

In 2005 John Marburger, Science Adviser to President George W. Bush, gave a keynote address to the American Association for the Advancement of Science (AAAS) in which he described the relationship between basic and applied research as far more complex than the linear ("reservoir") model of science and technology policy suggests. In this speech and a subsequent editorial in the journal Science, Marburger called for "better benchmarks" to understand and predict the relationship between science and societal benefit.

Finally, in 2007 Congress passed the America COMPETES Act (H.R. 2272), which calls for increased federal funding for science and technology in order to ensure the country’s economic competitiveness. America COMPETES is an important development in science policy because it makes explicit the presumed link between investment in research and development and economic return.

Responding to such calls for science to be held accountable to society, the NSF has implemented a suite of mechanisms to require and demonstrate the societal impacts of the research it funds. In 1997 the NSF updated its merit review criteria for evaluating research proposals: all proposals were now required to address a “Broader Impacts” criterion, anticipating how the proposed research would affect society. Scientists had mixed reactions to this requirement, some even claiming that it conflicted with the intellectual merit criterion. But Congress continued to support a focus on broader impacts. In 1998, and again in 1999, Congress directed the NSF to engage the National Academy of Public Administration (NAPA) to examine and report on the “effectiveness” of the new criteria. The NAPA report, published in 2001, found that compliance with the Broader Impacts criterion was unsatisfactory. In response, the NSF redoubled its efforts to focus proposer and reviewer attention on the criterion, insisting that proposals failing to address it would be excluded from the review process.

After Marburger’s 2005 AAAS address and Science editorial, the NSF instituted a new research program to identify and measure the societal impacts of science. This program, the Science of Science and Innovation Policy (SciSIP), represents the NSF’s attempt to build an intellectual community devoted to identifying the processes by which basic science turns into broader societal impacts.