DEB Numbers: FY 2016 Wrap-Up

Fiscal year 2016 officially closed out on September 30. Now that we are past our panels in October and early November, we have a chance to look back and report on the DEB Core Program merit review and funding outcomes for FY 2016.

This post follows the format we’ve used in previous years. For a refresher, and lengthier discussions of the hows and whys of the metrics, you can visit the 2015, 2014, and 2013 numbers.

Read on to see how 2016 compares.

FY2016 Summary Numbers

The charts below all reflect proportions of DEB Core Program projects through each stage of the review process: preliminary proposals, full proposals, and awards.

In the review process leading to awards in FY2016:

DEB reviewed 1502 preliminary proposals received under the DEB Core Programs solicitation and LTREB solicitation in January 2015, about 26% of which were invited to the full proposal stage.

The preliminary proposal invitees were joined at the full proposal stage by 1) Direct submissions to DEB under the CAREER, OPUS, and RCN solicitations, and 2) Projects shared for co-review by another NSF program. Altogether 524 full proposals were reviewed in DEB during October and November of 2015.

From this pool of full proposals, DEB made awards to 133 projects (technically, these were 202 separate institutional awards but for analysis purposes we count collaborative groups once, i.e., as a single proposed project).

Below, we present and discuss the Division-wide success rate and some selected project demographics that were concerns coming into the preliminary proposal system. The demographic numbers are presented as proportions for comparison across the review stages. However, the progressive reduction in the size of the denominators from preliminary proposals (1502) to awards (133) means each step becomes more sensitive to small absolute changes.
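
To make that denominator effect concrete, here is a minimal Python sketch showing how far a single project can move a reported proportion at each stage. The stage counts are the FY2016 numbers quoted above; everything else is purely illustrative.

    # Minimal sketch: how far one project can move a reported proportion at each
    # review stage, using the FY2016 counts quoted above.
    stage_counts = {
        "preliminary proposals": 1502,
        "full proposals": 524,
        "awards": 133,
    }

    for stage, n in stage_counts.items():
        shift = 100.0 / n  # percentage-point change if a single project is added or removed
        print(f"{stage}: one project ~= {shift:.2f} percentage points")

    # awards: ~0.75 percentage points per project, vs ~0.07 for preliminary proposals,
    # which is why award-stage demographics look noisier year to year.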

Success Rate

Success rate is a function of the number and size of requests submitted by the research communities and appropriated funding levels. The success rate for research proposals in the DEB Core Programs (Figure 1) has stabilized (even rebounded somewhat) since the preliminary proposal process was instituted. This stabilization emerges from:

  • Stable award numbers: Since FY 2013 the number of Core Program awards has consistently been between 131 and 136 funded projects[i].
  • Reduced request numbers: The initial wave of new people “testing the waters” by submitting a preliminary proposal is subsiding. As PIs have become more familiar with the process, total submissions have dropped by about 10% over the last three years. With stable award numbers, fewer submissions translate directly into a roughly 10% relative increase in overall success rate, from 7.3% in FY2013 to 8.1% in FY2016.
Figure 1: DEB Core Program success rates from fiscal year 2007 through the present. Prior to fiscal year 2012, there were two rounds of full proposal competition per fiscal year. Preliminary proposals were first submitted in January 2012, initiating the 2-stage review process and leading to the fiscal year 2013 award cohort.

Calculation Notes:

Preliminary proposal success rate is calculated as the number of invitations made divided by the number of preliminary proposals submitted.

Full proposal success rate is calculated as the number of awards made, divided by the number of full proposals reviewed.

Note that post-2012, under the preliminary proposal system, the set of full proposals reviewed is ~80% invited full proposals and ~20% CAREER, OPUS, RCN and co-reviewed proposals, the latter of which are exempt from the preliminary proposal stage.

Overall success rate is calculated as the number of awards made divided by the total number of distinct funding requests (i.e., the sum of preliminary proposals submitted plus the exempt CAREER, OPUS, RCN, and co-reviewed full proposals).
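
As a worked example, here is a short Python sketch that applies these three formulas to the FY2016 numbers quoted in this post. The count of exempt full proposals is approximated from the ~20% share noted above, so the outputs are illustrative rather than official statistics.

    # Illustrative re-calculation of the three success rates defined above, using
    # FY2016 numbers from this post. The exempt full proposal count (CAREER, OPUS,
    # RCN, co-review) is approximated from the ~20% share, so treat the outputs as
    # illustrative rather than official.
    prelim_submitted = 1502
    full_reviewed = 524
    awards = 133

    invited = round(0.26 * prelim_submitted)    # "about 26% ... invited to the full proposal stage"
    exempt_fulls = round(0.20 * full_reviewed)  # ~20% of reviewed full proposals are exempt

    prelim_success = invited / prelim_submitted
    full_success = awards / full_reviewed
    overall_success = awards / (prelim_submitted + exempt_fulls)

    print(f"preliminary proposal success rate: {prelim_success:.1%}")  # ~26%
    print(f"full proposal success rate:        {full_success:.1%}")    # ~25%
    print(f"overall success rate:              {overall_success:.1%}") # ~8%, close to the 8.1% reported for FY2016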

Reminder: Elevated success rates (in 2009 and 2012) were due to:

  • a one-time ~50% increase in funding for FY2009 (the ARRA economic stimulus funding) without which success would have been ~13-15%; and,
  • a halving of proposal submissions in FY2012 (the first preliminary proposal deadline replaced a second full proposal deadline for FY2012), without which success would have been ~8-9%.

Individual and Collaborative Projects

As seen in Figure 2 below, there was little year-to-year change in the submission and funding success of single-investigator projects. While the proportion of single-investigator preliminary proposals increased slightly, there was a small decrease in both the full proposal and award groups. As a reminder to readers: the gap between the proportion of single-investigator projects at the preliminary proposal and full proposal stages is due to the single-investigator proposals in the CAREER and OPUS categories. These proposals are not subject to the preliminary proposal screen and so make up a relatively larger portion of the full proposals. Similarly, their absence at the preliminary proposal stage lowers the single-investigator share of the preliminary proposal counts relative to the historical full proposal baseline.

Figure 2: The proportion of DEB Core Program projects led by a single PI over time and at the different stages of merit review.

The proportion of collaborative proposals in our award portfolio rebounded from last year’s drop and is near the all-time high for both full proposals and awards. This is consistent with the general trend toward greater collaboration over the past decade and beyond.

Figure 3: The proportion of DEB Core Program projects with two or more different institutional participants over time and at the different stages of merit review.

Readers may notice that the collaborative and single-investigator groupings don’t sum to 100%. The remainders are intra-institutional multi-PI arrangements; such projects are certainly intellectual collaborations, but they are not a “collaborative project” per the NSF PAPPG definition (Figure 3).

Early Career Scientists

The best identifier of researcher career stage available to us is a metric we calculate: degree age, the difference between the year the PI obtained their Ph.D. (as self-reported by the PI) and the current year. This number can be used as a proxy for how long each individual has been in the population of potential PIs.
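
For concreteness, here is a minimal sketch of the degree-age calculation as described above; the function name and example year are ours, purely for illustration.

    from datetime import date

    def degree_age(phd_year, as_of=None):
        """Years since the PI's self-reported Ph.D. year; a rough proxy for career stage."""
        as_of = as_of or date.today()
        return as_of.year - phd_year

    # Example: a PI reporting a 2009 Ph.D. has a degree age of 7 at the close of FY2016.
    print(degree_age(2009, date(2016, 9, 30)))  # -> 7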

Figure 4: Distribution of degree ages among PIs on DEB Core Program full proposal submissions.

Figure 5: Full proposal success rates for PIs on DEB Core Program proposals by degree age. Figure displays annual data and a 4-year mean for the period of the preliminary proposal system in DEB.

There was little change in the profile of submitter and awardee degree ages from 2013 through 2016. Moreover, success rate improves slightly with degree age, climbing slowly from about 20% for the newest PhDs to about 30% at 35 years post-PhD. Note: PIs more than 35 years post-PhD comprise ~5% or less of the total PIs on proposals and awards. Although more experienced PIs have a somewhat better individual success rate, the PI population skews toward the early career group. Thus, early-, mid-, and later-career PIs wind up with similar representation in Core Program award decisions.

Gender & Predominantly Undergraduate Institution (PUI) Status

Another concern heading into the preliminary proposal system was that there would be unintended consequences for different categories of submitters. Two years ago we saw a small change in award numbers lead to a visually jarring drop in the representation of female PIs among DEB awards, as well as a jump in the proportion of PUI awardees. Last year, we saw the pattern reversed. In addition to this apparent negative correlation between the proportions of female PI awardees and PUI awardees, the award data simply appear noisier than they did under the old system. But, as we stated in last year’s 2015 Wrap-up post:

“We conclude that we are witnessing a small numbers effect; 131 awards is simply not a sufficiently large “sample” to be representative of the population across all of the potential decision-making variables and concerns. PUIs are a minority component of the proposal pool (~18%). Female PIs are a minority component of the proposal pool (~30%). Beginning Investigators are a minority component of the proposal pool (~20%). Proposals that fall into two categories are an even smaller fraction of the proposal pool (~6%) and proposals that fit into all three are even smaller yet (~1%).”

Which now brings us to 2016.

Figure 6: The representation of female PIs and predominantly undergraduate institutions in DEB Core Program proposals and awards. These two groups were noted by the community as groups of concern that would be potentially impacted by the pre-proposal system.

Once again, we see the same pattern in the representation of female PIs and PUIs in the award portfolio: one goes up, the other down. As best as we can determine, our previous conclusion still provides the best explanation: with only 133 projects awarded, it’s a small numbers problem.

The longer-term pattern for both groups is not affected by these latest numbers. The proportion of female PIs has been increasing by 0.5-1 percentage points per year. The proportion of awards to primarily undergraduate institutions has held steady, with a ~4 percentage point gap between the full proposal and award stages. The PUI gap pre-dates the preliminary proposal system, and this group was not intended to be impacted by the preliminary proposal process, so we didn’t expect change.

Moreover, we note that the proportion of PUI preliminary proposals is growing. This represents a small absolute increase but is exaggerated by the reduced total number of preliminary proposals. While there has been no corresponding change in full proposals and awards, if these preliminary proposals represent new entrants to the competition then we would expect some lag in those later-stage metrics. Should this development persist, it will be interesting to see whether it affects the representation gap between PUIs in preliminary proposals and in DEB Core Program awards; that gap would likely be worth a deeper examination in the future.

Concluding Thoughts

Since the implementation of the preliminary proposal system in January 2012, we have not seen major departures from pre-2012 trajectories across measures of our portfolio with respect to PI or institution demographics or collaboration. Four years in, we have seen no indication of Division-wide changes, and in particular no worsening of submission and award proportions for any of these major groups of concern. Success rates appear stable across multiple years for the first time in recent memory. However, the overall climate for awards remains poor due to continued funding stagnation. If there is any bright side to that fact, it is that the funding woes at least do not appear to have been amplified for any particular subgroup in recent years. But the limited purchasing power of program budgets means our ability to support the full diversity of the research community will continue to be tested and subject to difficult trade-offs.


[i] During the 13 years before preliminary proposals, DEB Core Program award numbers fluctuated quite a bit; they regularly rose or fell by 20-30 (and as many as 80!) awards per year.


Panel Summaries: Panelist Guidance and PI Expectations

We’ve previously posted about what we are looking for in strong individual reviews of proposals. After receiving individual reviews, most proposals handled by DEB are brought to a panel meeting. After discussing a proposal, the panel prepares a document called the Panel Summary. In this post, we describe Panel Summaries, our goals in what we want them to communicate, and the steps DEB has recently taken to improve them.

What are Panel Summaries? (short version)

A Panel Summary is the written record of the review panel discussion of a proposal.

When you hear back from DEB about a proposal, you typically receive several documents in FastLane:

  • all individual reviews (generally at least 3 from panelists and ad hoc reviewers),
  • a context statement describing the program and review process employed,
  • a Panel Summary and/or a Program Officer comment explaining the program decision.

The Panel Summary is the justification of the panel’s recommendation to the Program and to the PI. It is the most important document the PI receives. It acts as a bridge between the reviews and the panel’s recommendation, helping the PI to understand how and why the panel came to its decision.

What are we hoping to see in a well-written Panel Summary?

The single most important point to keep in mind for crafting a useful Panel Summary is that it needs to provide evaluative statements about a proposal and to justify those statements with specific details and feedback. However, this is not easy to do given that variation in proposals, reviews, panel discussions, and panelists’ writing styles all contribute to the Panel Summary. [This is why Panel Summaries are one of the items we’ve been monitoring and seeking to better manage and improve under the preliminary proposal system. More on this below.]

A good Summary is clear and concise with regard to the panel evaluation. It provides a consensus advisory statement from the panel to NSF about the merits of a particular proposal after consideration and discussion of all viewpoints. As with individual reviews, the panelists are asked to consider the proposal in light of the NSF merit review criteria and any additional criteria applicable to a specific program or funding opportunity. Program Officers and staff also provide feedback during the panel meeting to ensure that Summaries are complete and compliant with policies (e.g., confidentiality).

As far as style and approach, our previous advice on crafting individual reviews applies here too. But keep in mind another important point: a Panel Summary differs from an individual review in that it is a summary of the panel discussion, not of the individual reviews. The other major difference between an individual review and a Panel Summary is the context in which they are written. While your individual review is written prior to the panel, alone, and in an environment of your choosing, a Panel Summary is written in the midst of a panel, with several other panelists who must sign off on the final advisory document.

What steps are we taking to encourage useful Panel Summaries?

For many years now, DEB has been providing panelists with a template for completing Panel Summaries. Ongoing evaluation has motivated us to modify the template and provide additional instructions to panelists on the purpose of a Panel Summary.

The purpose of the new template is to provide greater clarity for both the panelists and the PIs as to what we expect to see in each of the sections of a Summary.

DEB Panel Summary Template provided to panelists during Fall of 2015.

The new template features familiar headings that outline the major points to be considered by the panel and couples those with brief prompts (in red italics) that are intended to be kept in the document so that both panelists and PIs will have constant reminders of what we are asking of panelists in each section of the template.

In addition to this template, which will be provided to each panelist as a document file, panelists will also receive hard-copy guidance documents for their panel work-spaces that reiterate our verbal instructions about writing strong and complete Panel Summaries. This guidance document includes short example phrasings, and call-out boxes to highlight common issues with content, style, and formatting. You can read it for yourself below:

DEB Panel Summary Guidance handout for Fall of 2015

DEB Panel Summary Instructional Handout, Page 1

By putting these documents out here, we are hoping you, our community members, who are both our PIs and panelists, can be partners with us in maintaining awareness of what we are looking for in high quality Panel Summaries. We think establishing clarity and mutual understanding of the role of panel summaries before you find yourself in panel or receiving a decision from a Program Officer will contribute toward a culture that demands and provides high quality review documentation for everyone.
Are small grants doing well in review?

In contrast to the trend of decreasing numbers of preliminary proposals, we have seen a rapid increase in the category of Small Grant preliminary proposals (these are also included in the total counts in our previous post).

DEB Small Grants    2012    2013    2014    2015
Submitted            N/A      83      95     126
Invited              N/A      20      25      29
Invite Rate          N/A     24%     26%     23%

We attribute this growth to increasing awareness of the option to submit preliminary proposals with total budgets under $150K. Small grants came about in the second year of the preliminary proposal system in response to a long-standing desire, expressed by numerous voices in our communities, for some sort of “small” category. DEB realized the designation was particularly valuable under the preliminary proposal system: it lets reviewers adjust their expectations for the scope of a project relative to its expense, without requiring the extensive preparation of a full budget. We added the category to our solicitation for the 2013 preliminary proposal deadline.

We’ve had lots of positive feedback on this option, but also recognize that awareness still needs to be improved among both applicants and reviewers. This year, 8% of all preliminary proposals were identified as small grants.

Small Grants are found in all four clusters and are generally on the increase, but we also think feedback, such as this post, is necessary to successfully integrate this idea into our communities and maintain enthusiasm for this option. We would not be surprised to see these numbers grow to the point where SGs make up as large a part (or larger) of the preliminary proposal pool as Predominantly Undergraduate Institutions or Beginning Investigators.

Since 2013, we’ve funded 22 awards based on invited full small grants (9 of 18 in 2013, 12 of 24 in 2014, and 1 of 1 in 2015 thus far[1]), for a 51% success rate at the full proposal stage. This is roughly twice the success rate of full proposals without the SG designation.
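
The 51% figure follows directly from the per-year counts above; here is the arithmetic as a quick, illustrative sketch.

    # Re-deriving the small-grant success rate quoted above from the per-year counts.
    # Each entry is (full proposals invited, awards made); values are from this post.
    sg_by_year = {2013: (18, 9), 2014: (24, 12), 2015: (1, 1)}

    invited = sum(i for i, a in sg_by_year.values())
    awarded = sum(a for i, a in sg_by_year.values())
    print(f"{awarded} awards / {invited} invited full proposals = {awarded / invited:.0%}")  # 22 / 43 = 51%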

 

[1] Not everyone who received an invitation eventually submitted a full proposal (individual reasons vary). Also, we already have one award based on a 2015 preliminary proposal: instead of inviting a full proposal, DEB determined the project was appropriate for the EAGER mechanism and invited the team to submit an EAGER proposal, allowing for a quick turnaround of an award.


DEB Spring 2015 Panel Update

At this point, everyone should have heard back on their DEB preliminary proposals from the spring panels. If you have not:

1) Log in to your FastLane account. The information should be accessible there, but also make sure your contact email is correct because a typo there would prevent you from receiving updates and notifications.

2) If you are a co-PI, check with the lead PI on the preliminary proposal. The lead PI should have the results of the review.

3) Did it wind up in your spam folder?

4) If you have exhausted all of the above options and have had no other contact with your DEB Program Officer, then it’s probably a good time to send us an email.

 

Preliminary Proposal Panel Results

DEB panels reviewed 1495 preliminary proposals; in consideration of the reviews and panel discussion, DEB Program Officers extended 383 invitations to submit full proposals for the August 2015 deadline. The Division-wide invitation rate for the preliminary proposals was 26%. Below, we detail the results of preliminary proposal review by programmatic cluster.

Cluster                                  Invited   Not Invited   Total   Invite Rate
Systematics and Biodiversity Science          87           221     308           28%
Evolutionary Processes                       105           331     436           24%
Population and Community Ecology             107           320     427           25%
Ecosystem Science                             84           240     324           26%
Grand Total                                  383          1112    1495           26%
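
For readers who want to reproduce the last column, here is a small illustrative sketch that recomputes the invite rates from the invited and not-invited counts in the table.

    # Recomputing the invite rates in the table above from the invited and
    # not-invited counts (values copied from the table; the code is illustrative).
    clusters = {
        "Systematics and Biodiversity Science": (87, 221),
        "Evolutionary Processes": (105, 331),
        "Population and Community Ecology": (107, 320),
        "Ecosystem Science": (84, 240),
    }

    total_invited = 0
    total_reviewed = 0
    for name, (invited, not_invited) in clusters.items():
        reviewed = invited + not_invited
        total_invited += invited
        total_reviewed += reviewed
        print(f"{name}: {invited}/{reviewed} = {invited / reviewed:.0%}")

    print(f"Grand Total: {total_invited}/{total_reviewed} = {total_invited / total_reviewed:.0%}")  # 383/1495 = 26%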

 

This is the fourth round of preliminary proposal review for the DEB core programs, a process that began in 2012. DEB extended more invitations and achieved a higher invitation rate than in prior years.

               2012    2013    2014    2015
Reviewed       1626    1629    1590    1495
Invited         358     365     366     383
Invite Rate     22%     22%     23%     26%

As we discussed in our recent post on per-person success rate, the launch of the preliminary proposal system drew in a large number of “new” applicants. We believe that wave of applicants is now passing, which is reflected in the decreasing number of preliminary proposals reviewed in DEB as our communities realize that preliminary proposals do not make grants easier to get.

At the same time, the number of invitations has gone up. The increase is primarily a result of program management decisions as we have been able to refine our expectations for the number of proposals that will come to the full proposal panel through other mechanisms (CAREER, OPUS, RCN, and co-review).


DEB Numbers: DEB Panelist Survey Results

What do panelists think of the preliminary proposal process?

Here we present the results of recent surveys of DEB panelists, starting with the first round of preliminary proposal panels in March/April 2012 through the fall 2014 full proposal panels.

This survey was meant to gauge panelist reactions to the preliminary proposal process during its first few years, in a situation where baseline opinion data were lacking. Since many of the survey items relate to perceived differences between the new system and panelist experiences prior to 2012, the usefulness of these questions heading into the future will be limited as the pool of panelists continues to grow and memories of those prior experiences fade.

Quick Summary

Over the six panel seasons, more than 70% of panelists completed the survey. The respondent counts were higher for preliminary proposal rounds (~170 respondents per round) than for full proposal rounds (~80 respondents per round), consistent with the larger number of panelists required for preliminary proposals.

The majority of respondents reported an improved panel experience in both preliminary and full proposal panels compared to pre-2012 full proposal panels.

Preliminary proposal panelists overwhelmingly agreed that preliminary proposals are conducive to a quality panel review.

Most panelists perceived no change in the intellectual merit of highly rated proposals.

Panelist Experience


A majority of respondents reported their panel experiences under the preliminary proposal system were better than in years prior. This is consistent across all preliminary proposal panels and the first two years of full proposal panels, with slightly more positive responses from preliminary proposal panelists. The latest full proposal panelists were less positive, with a larger proportion reporting “No change” but also a smaller proportion reporting “worse” than in the first two full proposal panel cycles.

Preliminary proposal panels represented a greater departure from prior panels in size, goals, and implementation than full proposal panels, so the potential for changes to panelist experience (better and worse) seemed greater here. This was borne out by the large majority of panelists reporting a directional change and of those, positive experiences greatly outweighed negative experiences. For full proposal panels, the major difference from prior panels was that most reviewed proposals had already been through a preliminary proposal review and subsequent revision. We did not expect much change in overall experience of full proposal panelists, so the extent of panelists reporting positive change is encouraging.

Written comments shed light on what full proposal panelists saw as positive and negative changes to their full proposal panel experience. On the plus side, a greater proportion of time and effort was spent on providing feedback to strong proposals. The flip side of that was that the invited full proposals necessitated difficult choices and highlighted the discrepancy between worthwhile science and available resources.

In addition to their overall experience with the panel, we also asked preliminary proposal panelists about their preparations for the panel. In order to manage the review of the large volume of shorter preliminary proposals, DEB planned for an increased number of assignments per panelist to avoid increasing total panelist burden; this assumed a relationship between proposal length and review work.

The majority of preliminary proposal panelists reported a decreased time burden to prepare for panels, even though the number of proposals per panelist was increased. This indicated to us that we succeeded in balancing the volume/work per proposal trade-off. Comments from panelists also indicated a qualitative improvement in their panel preparation and individual review writing experience. This was generally ascribed to a feeling that, with shorter proposals less time was required to simply cover the entire proposal and they could instead engage more deeply with the project description and literature. A minority of respondents reported that extra preparation time was needed, citing difficulty in adjusting to the new format and changing old review habits.
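
As a rough illustration of the trade-off described above, the sketch below multiplies assignments by pages per project description; the page lengths (4-page preliminary vs. 15-page full) come from this post, while the per-panelist assignment counts are hypothetical placeholders.

    # Back-of-the-envelope illustration of the workload trade-off described above.
    # The 4-page preliminary and 15-page full project descriptions are from this post;
    # the per-panelist assignment counts are hypothetical and for illustration only.
    full_assignments, full_pages = 10, 15      # hypothetical full proposal panel load
    prelim_assignments, prelim_pages = 25, 4   # hypothetical preliminary proposal panel load

    print("full proposal panel:", full_assignments * full_pages, "pages of project description")     # 150
    print("preliminary panel:  ", prelim_assignments * prelim_pages, "pages of project description")  # 100
    # More assignments per panelist, but fewer total pages -- consistent with panelists
    # reporting a decreased preparation burden.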

View of Preliminary Proposal Mechanism

Bar chart depicting DEB panelist survey response results related to the preliminary proposal format.

Across the 3 rounds of preliminary proposals, we asked panelists to provide feedback on the preliminary proposal format. Over 90% of respondents agree that the content of the preliminary proposals provides an adequate basis for evaluating the projects. The 2012 panels highlighted two issues for NSF to consider regarding the preliminary proposal content: reviewer expectations needed to be better aligned with the preliminary proposal format, and low-cost projects could not be readily identified relative to higher-cost competitors. The former was resolved, in part, through experience with the new process and by changes to panelist instructions. The latter provided support for the “small grants” track, adopted for the 2013 preliminary proposal submissions. Since then, panelists have been nearly unanimous in finding the content adequate for review.

We separately asked panelists to weigh in on whether a 4-page project description is an adequate length. Again, these results are overwhelmingly positive. Based on written comments, some reviewers suggested the page limit would be improved either by adding a page or by setting aside specific lengths for various sub-components to ensure PIs sufficiently address them (e.g., 1 page exclusively for broader impacts, 1 page for graphics, limits on the length of introductory material). Others felt that 4 pages was too long and that, if preliminary proposals are to stay, DEB should go “all in” with 1- or 2-page descriptions that leave only the main idea and no possibility for reviewers to expect or demand preliminary data and detailed experimental designs. And a few suggested that while the length was adequate for most work, the complexity of their own specialized research interests defied such succinct description and deserved special dispensation. These conflicting opinions, however, do not appear much different from concerns about proposal structure typical of the 15-page full proposals prior to the preliminary proposal system. For the vast majority of reviewers, the 4-page project description works for a preliminary proposal evaluation.

Perceived Changes in Proposal Content

The questions about proposal content were deliberately selective. The wording we used specified a perceived change in the quality of the “highest rated” or “top 20%” of panel ratings. The thought behind this when developing the questions prior to the first panel was that 1) we are primarily concerned with changes that might affect the eventual awards, and 2) the relative ease of preparing preliminary proposal packages might invite more off-the-cuff submissions, which would be screened out at that stage but also depress the perception of average quality without altering any actual award decisions.

The results have been largely consistent with our expectations. The majority of respondents reported no change in intellectual merit of the top proposals for both preliminary and full proposals. During the preliminary proposal panels, respondents reporting a perceived change were pretty evenly split between the proposals being better and worse. Opinions were more positive during the full proposal panels; however, we aren’t putting much weight on that difference since the majority still reports no change. As far as the positive response by full proposal panelists regarding improved quality of proposals, there are at least two non-exclusive explanations: 1) panelists didn’t respond to the question and instead reflected their view of the entire body of full proposals from which most non-competitive applicants had already been removed, and 2) full proposals actually improved in quality after feedback from the preliminary proposal review.

Respondents’ perceptions of changes in broader impacts mirror those for intellectual merit, though they were more polarized. In all 3 preliminary proposal cycles, the majority reported no change to broader impacts in the top proposals. However, greater proportions reported both positive and negative changes. This seems to reflect a still-divided opinion in the community on what ought to be the emphasis for broader impacts. Comments suggested that the broader impacts were both improved and worsened through less detail in the preliminary proposal. Quite unexpectedly, few respondents thought broader impacts declined at the full proposal stage; far more panelists, even a majority in 2013, felt this component improved over prior panels. We’re not sure how to explain this response, although we note this coincides with the release (January 2012) and incorporation into NSF documents (January 2013) of the revised merit review criteria that sought to clarify the context for assessing broader impacts.

Synthesis

Through 3 cycles of two-stage review, the preliminary proposal process in DEB appears to be improving panelist workload and experience. Panelists also report a high degree of satisfaction with the preliminary proposal mechanism and, allowing for individual differences in formatting preferences, generally find that preliminary proposals supply sufficient evidence on which to base their evaluations. Further, few returning panelists perceived any decline in the quality of projects coming highly rated out of preliminary and full proposal panels. This supports a view that the preliminary proposal process is achieving positive changes and not adversely affecting the overall quality of the review process and award portfolio.

Supplemental Survey Background

Traditionally, DEB includes toward the end of the panel one or two drop-in sessions with DEB and BIO leadership for some general Q&A. As an informal discussion, it’s helpful for sharing information with panelists and for us to hear about external concerns. However, it’s not at all standardized: the topics jump around, much of the discussion depends on who is most willing to speak up first, and the take-away points are mainly just a general sense about several disparate issues. With the launch of the preliminary proposal process, we realized it would be helpful to collect better information from panelists. A survey that didn’t link individuals to their responses was thought to be a “safer” venue to encourage all panelists to voice their opinions. This would hopefully avoid the professional and inter-personal dynamics that may bias who is willing to speak up, how forcefully things are said, and what we ultimately interpret as important or common feelings. The downsides of course were that we were asking for subjective (perception and opinion) information and lacked an established baseline against which to measure any change.

Since the first preliminary proposal panels in April of 2012, we have been asking our panelists to answer a few questions about their perceptions of the preliminary proposal process. Within the limitations of the context, we asked respondents about 1) their views of the preliminary proposal mechanism, 2) their panel experience relative to prior participation, and 3) their perception of the proposal content relative to prior participation. A similar and shorter set of questions was used for full proposal panels. There have been minor changes to the survey from year to year as we received feedback that helped clarify wording, and some questions were added or removed as our specific needs changed. This post presents the responses to the core questions that continued across the lifetime of the survey in DEB.


How to win over panels and influence program officers: advice for effective written reviews.

For a while now we’ve been pondering how to approach a delicate subject: writing reviews. This subject is something of a minefield of tensions, conflicts, opinions, and opportunities to offend, alienate, and otherwise ruffle feathers by implying, “you’re doing it wrong.” So we feel it is important to explain here why we are tackling this topic and mention some of the approaches that didn’t fly.

We receive requests from less experienced reviewers who want advice outside the trial-by-fire of an actual panel to hone their review-writing skills. And we also hear from PIs who are disappointed in the utility or quality of the reviews they have received.

To set the tone, let’s make a couple of blanket statements straight off. You are all volunteers in this effort and you do a great job and deserve immense gratitude. With that said, probably every established PI who has submitted NSF proposals has received at least one review that was… a bit less than they expected; perhaps lacking a certain degree of usefulness; or even a complete enigma.

What is going on with these sub-par reviews? Well, it’s difficult to address because we don’t think there is a singular problem. Simply put, in managing the whirlwind of research life, we may occasionally fall short in creating the ideal, critical, insightful and helpful proposal review.

Why are we addressing this here? There is quite a bit out there about writing manuscript reviews, and even some nice posts about PIs coming to grips with the reviews received. But there’s a pretty big void when it comes to discussion of actually writing proposal reviews. As a program office, we have a unique platform to start a discussion around review expectations and feel it is a topic worth talking about openly. What we hope to achieve is not a static prescription for a “good review” but for this to start a discussion, raising community awareness about the importance of proposal reviews and the need for continual improvement in writing them, regardless of how well we think we’re doing already.

Some things we ARE NOT going to do here:

  • We are not going to provide you with real review examples. That would break confidentiality; even with names stripped someone might recognize the writing enough to correctly (or worse, incorrectly) ascribe the writing to an individual with unintended consequences.
  • We are not going to provide you with fictional examples. There are many different ways that a review can be good or bad in the broad sense but it’s not clear that an example is needed to make the point. And, the finer points of a review often need the context of the full proposal to make sense and would therefore be lost in such an example.
  • We are not seeking to create a specific template for a written review. That already exists at a very basic level in the form you access via the FastLane review module. Different directorates, divisions, programs, and specific solicitations establish and enforce varied requirements and norms for reviews under the broad scope of the same FastLane form. There is no universal, detailed guide to review expectations and we don’t think it would be possible or wise to attempt one. Even within DEB, a single template would not work for all of our solicitations.

There are many paths to a good proposal review. But the major commonalities can be distilled into a few points.

Good proposal reviews:

  • are conceived with an open mind and reflect serious consideration of the proposal
  • make a compelling case for the evaluation, citing evidence or examples as needed
  • are well-written and direct
  • contain no ad hominem criticisms or unsupported assertions

Of course the real challenge is, “How do I do it?” To help out we’ve started a list (in step-wise order[i]) of tips to guide you past the most common pitfalls on the path to completing a great review.

  1. Refresh your expectations, EVERY TIME: You are asked to judge a proposal against a specific set of review criteria. These review criteria are not static. They change over time; some change almost yearly and may have changed since you last looked. They vary by organization; each successive layer of directorates, divisions, and programs has an opportunity to modify and specify review criteria; what you are asked to do in education or geosciences will differ in the specifics from what you are asked to do in DEB even though all still abide the NSF merit review criteria[ii]. They can be stacked and combined; even within a single panel some proposals will include additional review criteria relevant to a submission option or solicitation on top of those of the conceptual program at the heart of the proposal.

The best way to keep on top of this shifting body of expectations is to READ THE MATERIAL WE SEND YOU when you volunteer to review. Googling and looking through your own files can lead you to outdated information. In DEB, we send you a nicely packaged summary with the latest information and our interpretation of specific criteria all in one place with links to the source documents should you desire to read more.

  2. Recognize your audiences (yes, plural): You, as a reviewer are always writing for two, and potentially three, audiences. Firstly, the review is to inform us (NSF) as to why you think the proposal is or is not worth funding; we consider this in light of others’ reviews and our award portfolio to arrive at a funding recommendation. Secondly, the review is to help the PI learn how to prepare a better proposal next time, or to improve the project if it is funded. Lastly, the written review may be used as documentary evidence that you as the reviewer gave the proposal thorough and thoughtful consideration.
  3. Don’t summarize, review: Every audience for the review will also have access to the actual proposal and doesn’t need you to summarize it; we need your well-argued opinion of it. Note: Starting out with largely descriptive text may be important to organizing your thoughts as a reviewer. That’s ok. That’s great. But it’s not the end point. What we’re saying here is: take whatever part of the proposal you felt important enough to describe and fill it out by giving your opinion of it and telling us why you hold that opinion.
  4. Provide clear and substantiated comments: To satisfy the distinct audiences and avoid summarizing, every point you make in your review should exhibit these 3 characteristics:
    1. It is EVALUATIVE
    2. It is SUPPORTED with appropriate details, evidence, and comparisons
    3. It provides CONSTRUCTIVE feedback in the context of the proposal

Notes: EVALUATIVE means using terms that express an opinion about the subject like “good”, “bad”, “excellent”, “inadequate”, “exemplary”, “satisfactory”, etc. SUPPORTED means following the evaluative term with an explanation like “because the [specific information in the proposal] is/is not reflective of [some important external knowledge like cited literature, best practice, or other relevant information].”

  5. Be self-critical of your critical comments: A good critical comment delves below the surface of your initial reaction and constructively reflects the opportunities and constraints for addressing the issue. This step could easily be its own separate post; we’ll provide a longer discussion of this at a later date. For now though, the fast summary is: see the flaws, point them out but give them context, and don’t get hung up on small stuff. Stop and ask: why is it a flaw for this proposal? Do I fault others for this consistently? If not, why have I pointed it out now? Is there a deeper cause of this flaw that I can describe? (If so, do it.) Search out your biases. Recognize and differentiate between actual problems in the proposal and legitimate differences from how you would do the work. Make sure your suggestions for improvement are clear and reflect the constraints on space: it’s limited, and PIs can’t include “more” material on X without cutting Y. A useful suggestion needs to identify both the X that needs expansion and the Y that could stand to be cut.
  6. Minimize or omit purely descriptive text: This is a repeat of point 3. We’re serious about this. It is very easy to fall into the summarizing trap – it’s much easier to write that material – and we have to remind ourselves about this constantly too. If some aspect of the proposal is important enough for you to describe, tell us how it affects your overall opinion of the proposal and why.
  7. Use the FastLane review form boxes: The FastLane form has fields for your review of Intellectual Merit and Broader Impacts, and for a Summary. Use the first two to make your points (large and small, positive and negative) about the proposal. Use the summary box to explain how those points come together to form your overall opinion – reiterate the major deciding factors, explicitly note where you discount minor issues. This provides clarity for both NSF and the PI as to how the other comments translate into a single overall rating.

 

What other advice might you as a PI receiving a review want to give the reviewers? Is there a Golden Rule for review? If so, how could it be phrased?

Maybe some of our readers would be interested in trying to improvise a few “model review” lines, give it a go in the comments.

 

[i] Based on our general perception of where we get the most questions/complaints, tips 1, 4, 5, and 3 encompass the largest areas for improvement.

[ii] The current NSF merit review criteria are 2-fold (Intellectual Merit and Broader Impacts), under each criterion NSF poses 5 questions to guide reviewers’ thinking (but does not require explicit answers to each):

  1. What is the potential for the proposed activity to (a) advance knowledge and understanding within its own field or across different fields (for Intellectual Merit), or (b) benefit society or advance desired societal outcomes (for Broader Impacts)?
  2. To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?
  3. Is the plan for carrying out the proposed activities well-reasoned, well-organized, and based on a sound rationale?
  4. How well qualified is the individual, team, or organization to conduct the proposed activities?
  5. Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?

Review Service in DEB, Part 3: Finding you for Review

This is part 3 of a discussion of reviewer service in DEB. In parts 1 and 2 we talked about the roles of panelists and ad hoc reviewers and how contacting us to volunteer as a reviewer doesn’t often work. With that as background we can provide some insight into actual reviewer selection and assignment in DEB.

Let’s mention a few caveats. First, all reviewers are chosen based on three key principles: expertise, interest, and lack of conflicts. In choosing a reviewer we strive for individuals who are not only highly qualified to evaluate the proposal, but are also very interested in the topic of the research (and therefore likely to do a thorough review) and who have no apparent conflicts of interest with the individuals and institutions involved in the proposal.

Second, every time a proposal is submitted, the PI has the opportunity to submit a list of potential reviewers (a “single copy document” which does not get seen by any reviewers). Program Officers try to use at least a few of those potential reviewers after first checking that they are appropriate experts and are not in conflict with anyone (or institution) involved in the proposal. BTW, if you want to make your Program Officer happy, when submitting a proposal include ~8 such expert reviewers, free of conflicts, and provide full names, institutions, and email addresses (only about 1 in 5 PIs submits this information).

Third, Program Officers often can think of a few highly appropriate reviewers for any given proposal they are managing simply because they know the literature and lots of scientists in their field. Still, this is rarely enough for what we need. Ideally, we want 3 ad hoc reviews for each full proposal, to inform the three panelist reviewers. Given a response rate of ~50%, that means requesting at least 6 ad hoc reviews per proposal.
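
The arithmetic behind that figure, as a tiny illustrative sketch (the helper function is ours, not an NSF tool):

    import math

    # Requests needed to expect a target number of completed reviews at a given
    # response rate; purely illustrative.
    def requests_needed(target_reviews, response_rate):
        return math.ceil(target_reviews / response_rate)

    print(requests_needed(3, 0.5))  # -> 6 requests for ~3 reviews at a ~50% response rate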

So how do we find and solicit reviewers’ participation? It’s a little different depending on the reviewer role: ad hoc reviewer or panelist.

Ad hoc reviewers

First, think of 5 or 6 keywords which in combination are specific to your research expertise.

We can wait….

Got ‘em?

Good. Now open a new browser tab or window and navigate to your favorite academic search (e.g., Google Scholar, Scholar Universe, Web of Science, etc.) or even just a plain search engine.

Search on your combination of keywords.

Do you come up? If not, who does? Where are you? Are you even in the results? If you’re somewhere near the top, kudos! Otherwise….

This is how we find you in a nutshell. Except of course we don’t have your custom list of keywords; we use descriptions from proposals submitted by others. When we are identifying potential reviewers and panelists we’re looking for good matches. If you don’t even match your own description it’ll be hard to find you when we’re searching based on someone else’s project. Seriously, the most important thing you can do to make yourself available for review is to maintain a searchable presence on the web[i]. If you’re a post-doc or grad student and don’t have a lab but want to get started, use professional and academic social media to create a presence.

But that’s only step 1. Now we need to determine if you’re both an appropriate match and have a reasonable likelihood of actually responding to our request and completing a review.

What is behind the hyperlinks our search brought up? Boilerplate departmental faculty descriptions, publication lists that stop in 2009, old slide decks you never knew were out there, or an active lab website with descriptions of projects and recent publications? There’s a huge trust and confidentiality issue with proposal review — we’re not just going to send researchers’ best ideas, pre-publication data, etc. out to an unknown entity[ii] and hope it arrives in caring hands. You may come up at the top of the results but if those results look like an online ghost town, we’ll be skipping down to the next name on the list.

Once we’ve found an active online presence, we need to actually compare the content of the proposal and the potential reviewer’s expertise, check for conflicts, and then, putting our judgment to the test, ask ourselves: “Will this person want to see this proposal?” If we think so, then there’s a good chance they’ll provide a thorough review or at least decline politely.

Panelists

As we mentioned above, finding panelists differs from matching individuals for ad hoc reviews. For one, the stakes are higher with a panelist. For a given set of proposals, an individual panelist plays a larger role in providing reviews than an individual ad hoc reviewer. An ad hoc review that is superficial, cursory, or goes totally off the rails can be ignored and those that never arrive have no weight at all. The same issues arising with a panelist, however, can undermine the discussion and evaluation of multiple proposals. While getting new faces onto panels is valuable to us, we also need to manage the risks of bringing in inexperienced participants.

How do we manage these risks?

Often, we look to people who have or previously had an award from us to become panelists. We’ve seen your proposal and report writing. Plus, while not required, we hope your sense of community includes giving back through review service.

We identify potential panelists through a professional network. Professional relationships actually play an important role when it comes to populating a panel. We often rely on suggestions from current reviewers and rotating Program Officers for new panelists since they both know the requirements of review service and have knowledge of different professional circles.

We first offer opportunities to gain experience through ad hoc review and service on smaller panels. Especially for early career faculty, panels outside of the core programs, like DDIGs, are often used to provide an introduction to reviewership.

Since we don’t have a database of reviewer/panelist expertise, each cluster in DEB has to rely on keeping and sharing notes to maintain institutional knowledge of the panelist pool: who has served, wants to serve, and might be able to provide needed expertise at specific future dates? Most members of a panel come out of this pool in response to a broad invitation noting the dates of upcoming panels. Of those available to serve, volunteers are selected based on the expected needs of the panel. This is often done well in advance of the proposal submission deadline and includes a few “swing” spots to round out the assembled expertise with later invitations. Those swing spots are where new people often enter the process, but we’re filling specific needs and so finding these folks more-or-less follows the same path as finding ad hoc reviewers.

So what can you do to become involved in review?

Drawing from across this and the previous posts, we offer the following suggestions, in order of importance:

1) Make it easy for us to find you online in connection with your scientific work.

2) If you have no NSF review experience, talk to colleagues who are already involved in review and ask them to suggest you as a reviewer.

3) Recognize that, if you’re going to reach out to us, there’s not much we can do with your information if it’s not an immediate match to a specific need.

4) Say yes when you can, since more opportunities often follow. It’s ok if you can’t take on a review when we ask; tell us — we understand. But, please don’t ignore the request or we will get the idea that you have no interest in review.

 

[i] While we don’t ask you or your close collaborators to review your proposal, seeing those names (yours and theirs) in search results is a good sign.

[ii] With respect to having served within the NSF review system.


Reviewer Service in DEB, Part 3: Finding You for Review

This is part 3 of a discussion of reviewer service in DEB. In parts 1 and 2 we talked about the roles of panelists and ad hoc reviewers and how contacting us to volunteer as a reviewer doesn’t often work. With that as background we can provide some insight into actual reviewer selection and assignment in DEB.

Let’s mention a few caveats. First, all reviewers are chosen based on three key principles: expertise, interest, and lack of conflicts. In choosing a reviewer we strive to find individuals who are not only highly qualified to evaluate the proposal, but who are also genuinely interested in the topic of the research (and therefore likely to do a thorough review) and who have no apparent conflicts of interest with the individuals or institutions involved in the proposal.

Second, every time a proposal is submitted, the PI has the opportunity to submit a list of potential reviewers (a “single copy document” that is never seen by any reviewer). Program Officers try to use at least a few of those suggested reviewers after first checking that they are appropriate experts and are not in conflict with anyone (or any institution) involved in the proposal. By the way, if you want to make your Program Officer happy, include ~8 such suggested reviewers when you submit a proposal: conflict-free experts, with full names, institutions, and email addresses (only about 1 in 5 PIs provides this information).

Third, Program Officers can often think of a few highly appropriate reviewers for any given proposal they are managing, simply because they know the literature and many scientists in their field. Still, this is rarely enough for what we need. Ideally, we want 3 ad hoc reviews for each full proposal to inform the three panelist reviewers. Given a response rate of ~50%, that means requesting at least 6 ad hoc reviews per proposal.
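
As a rough back-of-the-envelope check (using only the approximate figures just mentioned, nothing official), the arithmetic looks like this:

    import math

    def requests_needed(target_reviews, response_rate):
        """Estimate how many requests to send to end up with a target number of completed reviews."""
        return math.ceil(target_reviews / response_rate)

    # A target of ~3 reviews per proposal at a ~50% response rate:
    print(requests_needed(3, 0.5))  # -> 6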

So how do we find and solicit reviewers’ participation? It’s a little different depending on the reviewer role: ad hoc reviewer or panelist.

Ad hoc reviewers

First, think of 5 or 6 keywords which in combination are specific to your research expertise.

We can wait….

Got ‘em?

Good. Now open a new browser tab or window and navigate to your favorite academic search tool (e.g., Google Scholar, Scholar Universe, or Web of Science) or even just a plain search engine.

Search on your combination of keywords.

Do you come up? If not, who does? Where are you? Are you even in the results? If you’re somewhere near the top, kudos! Otherwise….

This, in a nutshell, is how we find you. Except, of course, we don’t have your custom list of keywords; we work from the descriptions in proposals submitted by others. When we are identifying potential reviewers and panelists we’re looking for good matches. If you don’t even match your own description, it will be hard to find you when we’re searching based on someone else’s project. Seriously, the most important thing you can do to make yourself available for review is to maintain a searchable presence on the web[i]. If you’re a postdoc or grad student and don’t yet have a lab website but want to get started, use professional and academic social media to create a presence.
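
To make the matching idea concrete, here is a purely illustrative sketch, not a tool we actually use, of ranking candidates by the overlap between keywords taken from a proposal description and keywords found in each person’s online presence; every name and keyword list below is invented:

    def keyword_overlap(proposal_keywords, profile_keywords):
        """Count how many of the proposal's keywords also appear in a candidate's online profile."""
        return len({k.lower() for k in proposal_keywords} &
                   {k.lower() for k in profile_keywords})

    # Hypothetical keywords drawn from someone else's project description
    proposal = ["nutrient cycling", "soil microbiome", "nitrogen", "grassland", "metagenomics"]

    # Hypothetical candidate profiles, as a web search might surface them
    candidates = {
        "Reviewer A": ["soil microbiome", "metagenomics", "nitrogen", "agricultural systems"],
        "Reviewer B": ["coral reefs", "ocean acidification", "symbiosis"],
    }

    for name, profile in sorted(candidates.items(),
                                key=lambda item: keyword_overlap(proposal, item[1]),
                                reverse=True):
        print(name, keyword_overlap(proposal, profile))
    # Reviewer A (3 shared keywords) comes out ahead of Reviewer B (0)

The point of the toy example: if your own pages don’t contain the terms someone else would use to describe work like yours, you won’t surface as a match, no matter how relevant your expertise actually is.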

But that’s only step 1. Next, we need to determine whether you’re an appropriate match and whether you’re reasonably likely to respond to our request and complete the review.

What is behind the hyperlinks our search brought up? Boilerplate departmental faculty descriptions, publication lists that stop in 2009, old slide decks you never knew were out there, or an active lab website with descriptions of projects and recent publications? There’s a huge trust and confidentiality issue with proposal review — we’re not just going to send researchers’ best ideas, pre-publication data, etc. out to an unknown entity[ii] and hope it arrives in caring hands. You may come up at the top of the results but if those results look like an online ghost town, we’ll be skipping down to the next name on the list.

Once we’ve found an active online presence, we need to actually compare the content of the proposal with the potential reviewer’s expertise, check for conflicts, and then, putting our judgment to the test, ask ourselves “Will this person want to see this proposal?” If we think so, then there’s a good chance they’ll provide a thorough review or at least decline politely.

Panelists

As we mentioned above, finding panelists differs from matching individuals for ad hoc reviews. For one, the stakes are higher with a panelist. For a given set of proposals, an individual panelist plays a larger role in providing reviews than an individual ad hoc reviewer does. An ad hoc review that is superficial, cursory, or goes totally off the rails can be ignored, and one that never arrives carries no weight at all. The same problems in a panelist, however, can undermine the discussion and evaluation of multiple proposals. So, while getting new faces onto panels is valuable to us, we also need to manage the risks of bringing in inexperienced participants.

How do we manage these risks?

Often, we look to people who have, or previously had, an award from us to become panelists. We’ve seen your proposal and report writing. Plus, while it is not required, we hope your sense of community includes giving back through review service.

We also identify potential panelists through professional networks. Professional relationships play an important role in populating a panel: we often rely on suggestions from current reviewers and from rotating Program Officers, since both groups know the requirements of review service and are connected to different professional circles.

We offer opportunities to gain experience first through ad hoc review and service on smaller panels. Especially for early-career faculty, panels outside the core programs, such as the DDIG panels, often provide an introduction to review service.

Since we don’t have a database of reviewer/panelist expertise, each cluster in DEB has to rely on keeping and sharing notes to maintain institutional knowledge of the panelist pool: who has served, wants to serve, and might be able to provide needed expertise at specific future dates? Most members of a panel come out of this pool in response to a broad invitation noting the dates of upcoming panels. Of those available to serve, volunteers are selected based on the expected needs of the panel. This is often done well in advance of the proposal submission deadline and includes a few “swing” spots to round out the assembled expertise with later invitations. Those swing spots are where new people often enter the process, but we’re filling specific needs and so finding these folks more-or-less follows the same path as finding ad hoc reviewers.
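
For illustration only, the kind of notes a cluster keeps might boil down to something like the following sketch; the fields and the single entry are invented, not a description of any actual NSF system:

    from dataclasses import dataclass

    @dataclass
    class PanelistNote:
        """One cluster's informal record of a potential panelist (illustrative fields only)."""
        name: str
        expertise: list            # broad topic areas, free-form
        last_served: str = ""      # e.g., "fall 2014 panel"
        willing: bool = True
        availability: str = ""     # e.g., "on sabbatical until fall"

    pool = [
        PanelistNote("Dr. Example", ["population genetics", "phylogeography"],
                     last_served="fall 2014", availability="prefers spring panels"),
    ]

    # Filling a "swing" spot: look for willing people with the needed expertise
    needed = "phylogeography"
    matches = [p for p in pool if p.willing and needed in p.expertise]
    print([p.name for p in matches])

In reality this is shared notes rather than software, but the questions it answers are the same ones listed above: who has served, who is willing, and who covers the expertise a given panel will need.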

So what can you do to become involved in review?

Drawing from across this and the previous posts, we offer the following suggestions, in order of importance:

1) Make it easy for us to find you online in connection with your scientific work.

2) If you have no NSF review experience, talk to colleagues who are already involved in review and ask them to suggest you as a reviewer.

3) Recognize that, if you’re going to reach out to us, there’s not much we can do with your information if it’s not an immediate match to a specific need.

4) Say yes when you can, since more opportunities often follow. It’s OK if you can’t take on a review when we ask; tell us, and we’ll understand. But please don’t ignore the request, or we will get the idea that you have no interest in reviewing.

 

[i] While we don’t ask you or your close collaborators to review your proposal, seeing those names (yours and theirs) in search results is a good sign.

[ii] With respect to having served within the NSF review system.


Reviewer Service in DEB, Part 2: Missed Connections

In the prior post we described the roles of panelists and ad hoc reviewers in the DEB merit review process and how others have highlighted the value of taking part in this process. We left off with an observation that has popped up in several comment threads on those other discussions: “I volunteered but no one ever called.” This post addresses why that happens.

Foremost among the reasons we haven’t called you back is a possibly startling admission: we don’t have an actual “reviewer database”. We have a reviewer records system, but it is built on an old architecture designed for a smaller scientific enterprise[i], and it lacks the sort of substantive topical content (not even research keywords) that it would need in order to function as a database for identifying appropriate reviewers. Even with such content, because the system is primarily for record-keeping rather than discovery, it would only search reviewers we’ve used in the past and wouldn’t help us identify “new blood.” Yes, this is incredibly, woefully behind the times compared to what many journals use[ii]. It is an issue for which NSF is trying to find a solution.

The next major roadblock for self-identifying reviewers is timing. DEB programs need to find large numbers of reviewers in narrow windows of time. If you’re just sending us emails whenever the thought strikes you, they are likely landing on our desks in between the periods when we’re looking for reviewers. You can check out the review process calendar we posted previously to see when this happens for various programs. The take-home point is that we’re often looking for panelists before proposals have even been submitted (and months before a panel meets), and we are looking for specialist reviewers only during the few weeks when we are sending out proposals for “ad hoc” review. Hitting those periods can help put you in the right place at the right time, but it’s no guarantee, simply because…

Any email is just one among the numerous offers passing around daily, and it is likely to have slipped out of memory, supplanted by more recent offers, by the time we have a need. Suffice it to say, there’s a lot of low-information-value noise in our inboxes: CVs without context, boilerplate introductory letters in which “review” appears as an afterthought, and so on.

Even with a strategically timed, well-written introduction, however, a potential reviewer may not be the best match for any current proposal or panel, and, ultimately, we don’t have anywhere to put the information where we could retrieve it efficiently. Any system we put in place for keeping tabs on volunteer offers is going to be 1) competing with the whole of the internet to quickly identify sufficient numbers of relevant experts and 2) filled with dead ends unless it is regularly updated. Your CV might get dropped in a shared folder and might turn up in a document search while you’re still at the same job, but it can just as easily get buried under the daily deluge of email.

While the advice around the academic web to “just send in your information” reflects individual experience, it is based on a perception of cause and effect where the reality is often just coincidence. If you come up in a search result, you may get a call to review from someone who never saw your email. Around the time most people first enter the potential PI/reviewer pool, they are also developing a professional web presence. And, as we said above, without a dedicated reviewer database, we put the whole internet to work to find reviewers. So being searchable and showing up at the right moment can make sending us any sort of introduction moot. We’ll address this further in part 3.

 

[i] Even just 10 years ago we were dealing with only ~half the proposals we see today.

[ii] We would like to think we are pretty adept at working around this information-system deficiency. Between panelists and individual (ad hoc) reviewers, DEB manages to obtain some 10,000 separate evaluations of proposals each year. And, taking some pride in facing this adversity, we note that lots of folks never notice our lack of a proposal & reviewer matching database until they come here and we tell them.