Review Strategy: Two-Stage Review
Case Study: Centers of Excellence for Translational Research (CETR) Review 2013, SRO contact Dr. Lynn Rust
Table of Contents:
Driving Factors for Strategy Choice
Overview
Detailed Approach
Unique Features of the Strategy
Lessons Learned
Technical Challenges
Timeline
Primary Tools
Questions and Answers
Review Strategy: Two-Stage Review
Case Study: Centers of Excellence for Translational Research (CETR) Review 2013, SRO contact Dr. Lynn Rust
Driving Factors for Strategy Choice
An unexpectedly large number of multi-project applications that could not be reviewed in a single stage due to their volume and complexity
Overview
- The CETR RFA resulted in the submission of 112 U19 applications comprising 848 components (overall components, projects, and cores). The review was conducted in two stages: the first stage involved mail-in critiques of projects and cores with a read/edit week, and the second stage involved face-to-face streamlining and discussion of the overall applications. The first-stage reviewers were primarily specialists and technical experts, while the second-stage reviewers were primarily senior investigators with broad views of the significance and feasibility of the field. The two-stage review was structured as a single meeting (i.e., one roster) in IAR. The Stage 1 reviewers were initially coded as “Teleconference” attendees in order to accommodate the Stage 1 Read/Edit phase. Both stages had 8 weeks to read and write critiques, with a 2-week overlap (3 weeks including the Stage 1 Read/Edit phase).
Detailed Approach
- Stage 1
- 378 reviewers with specialized technical expertise were recruited to provide written critiques on the project and core components of the U19 applications. A slightly higher than average number of reviewers was assigned to each project and core component to cover potential reassignments and loss of reviewers. Each reviewer received approximately 8 assignments (projects and cores); the number of assignments was driven by the reviewer’s expertise. The first-stage reviewers were instructed to write detailed critiques and assign preliminary scores, including criterion scores where appropriate, to individual components, and to upload their critiques to IAR. Stage 1 reviewers were then given a one-week Edit phase to read others’ critiques and edit their own critiques and scores. There was no Special Emphasis Panel in this stage.
- Stage 2
- The second stage consisted of fewer (54) reviewers, senior investigators in the field with broad views of the significance, synergy, and feasibility of the applications. These reviewers were assigned to review the overall U19 applications with the benefit of the Stage 1 critiques. There were some dual-service Stage 1/Stage 2 reviewers; they were assigned different applications for each stage. Stage 2 reviewers were provided access to the Stage 1 critiques in IAR. Three reviewers and one discussant were assigned to each U19 application. Each reviewer was assigned approximately 7 applications as reviewer and one as discussant, and was instructed to provide an Overall Impact critique and a preliminary Overall Impact score for each assigned U19 application in IAR. A single Special Emphasis Panel (SEP) was convened. Special issues were discussed during the Stage 2 review, using comments provided by Stage 1 reviewers. Panel members assigned the final Overall Impact scores for each of the U19 applications discussed.
- Streamlining was conducted at the beginning of the Stage 2 review. The reviewers were provided two lists of impact ranks: one determined by the combined Stage 1 and Stage 2 preliminary scores, and one determined by the Stage 2 preliminary impact scores alone. Impact rank was calculated as the fraction of scores less than 4 for each U19 (see the sketch below). The combined Stage 1 and 2 preliminary scores included project and core scores from Stage 1 and preliminary Overall Impact scores from Stage 2; the Stage 2 ranking considered only the Stage 2 preliminary Overall Impact scores. A “to be discussed” list was generated based on the Stage 2 ranking. The applications not streamlined (33) were discussed at the overall application level.
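- To make the impact-rank calculation concrete, the following is a minimal, illustrative Python sketch of the metric described above (the fraction of preliminary scores better than, i.e., less than, 4 for each U19). The actual calculation was done in spreadsheets (see the Streamlining spreadsheet under Primary Tools); the application IDs and scores here are hypothetical.

    # Minimal sketch: rank U19 applications by the fraction of preliminary
    # scores below 4 (lower scores are better). Hypothetical data only.
    def impact_rank(scores):
        """Fraction of preliminary scores that are better than (less than) 4."""
        return sum(1 for s in scores if s < 4) / len(scores)

    # Pooled preliminary scores per application (hypothetical): Stage 1
    # project/core scores plus Stage 2 preliminary Overall Impact scores for
    # the combined ranking, or Stage 2 scores alone for the Stage 2 ranking.
    prelim_scores = {
        "U19-001": [2, 3, 3, 5, 4, 2],
        "U19-002": [5, 6, 4, 5, 5, 6],
        "U19-003": [3, 2, 4, 3, 3, 2],
    }

    # A larger fraction of scores below 4 indicates a stronger application,
    # so sort in descending order to build the "to be discussed" list.
    ranked = sorted(prelim_scores, key=lambda a: impact_rank(prelim_scores[a]), reverse=True)
    for app in ranked:
        print(app, round(impact_rank(prelim_scores[app]), 2))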
Unique Features of the Strategy
- Stage 1
- Stage 1 critiques of projects and cores were generated by technical experts in the required fields.
- Since Stage 1 reviewers were eventually coded as “mail-in”, waivers were not required for reviewers recruited from the same institution.
- Conflicts of interest (COI) concerns were limited to only those applications that were reviewed by each individual.
- Each component received the same number of critiques as in the traditional review procedure, and applicants were informed of the average preliminary score for each component.
- Stage 2
- The discussions at the review meeting were informed by the critiques submitted by the Stage 1 reviewers.
- The same panel discussed (or decided not to discuss) each U19 application, providing for a high level of consistency in review.
- The discussions focused on the overall U19 applications rather than on the detailed approach of individual components.
- Each U19 was reviewed in its entirety by several reviewers; each reviewer provided critiques and final scores.
- The SEP lasted a reasonable number of days (3 days).
- The applicants, whether their applications were discussed or not, had the benefit of the overall critiques from Stage 2. The average preliminary scores were included in the summary statements: Stage 1 preliminary project/core score averages were provided for all applications, and Stage 2 preliminary Overall Impact score averages were provided for streamlined applications.
Lessons Learned
- Recruitment
- The Stage 1 call log was begun over a month before the application receipt date. This was critical, as compiling and screening over 4,000 names and contact details for the call log was labor-intensive. The call log was generated from previous rosters, previous call logs, high-impact journal editorial boards, merit award winners, etc., and the source of each name was tracked in the log. A separate Stage 2 reviewer call log of ~200 high-profile, nationally renowned, broad-based scientists was split out.
- The use of a drop-down menu and conditional color formatting to track potential reviewers’ responses on the call log worked well for tracking and sorting. See “CETR Reviewer Tracking Process” (J. Bruce Sundstrom) and Primary Tools below.
- A common recruitment tracking spreadsheet, the Reviewer Status Summary spreadsheet, was generated for this review (see Primary Tools below). This spreadsheet had sets of columns for tracking both COI and assignment entry into the spreadsheets and into eRA Peer Review (initiated and completed).
- Stage 1 and Stage 2 reviewers were recruited simultaneously. The advantage of this was that the review date could be set based on Stage 2 availability and the intervening deadlines mapped out from there. Also, reviewers who could not participate in the Stage 2 SEP could then be invited for Stage 1.
- The PubMed macro was not used to identify potential conflicts (this would have been enormously cumbersome); rather, the reviewers’ COI identifications were used.
- Stage 1
- The re-coding of Stage 1 reviewers from Teleconference to Mail-in was tedious but appears to be the only way of enabling a Stage 1 Read/Edit phase.
- Stage 2 reviewers would have preferred to have Stage 1 critiques concatenated and sent via secure email transfer instead of accessing them in raw format in IAR. Concatenating and sending the critiques would have delayed critique access to the reviewers, but could have been a worthwhile investment of time.
- The variability in the thoughtfulness and thoroughness of the Stage 1 critiques posed a problem for some Stage 2 reviewers. Stage 2 reviewers were instructed to focus on the overview section, the project Specific Aims, and the Stage 1 critiques for their reviews. Because some Stage 1 critiques lacked detail, many Stage 2 reviewers spent a great deal of time reading and critiquing entire applications.
- Stage 2
- The Stage 2 panel felt the workload was too high: each Stage 2 reviewer was assigned 7 written critiques and one discussant role. The panel’s feedback was that a more reasonable total number of multi-project applications per reviewer for a Stage 2 review of this type is 5-6. However, at 3 reviewers per application (112 applications × 3 reviewers, or roughly 336 assignments), this would have increased the Stage 2 panel size to about 61 reviewers.
- There was substantial disparity between the Stage 1 and Stage 2 high-ranking applications (9 different applications among the top 28; ~30%). The Stage 2 panel ended up working from its own ranking for streamlining, independent of the combined Stage 1/Stage 2 impact ranking. Consequently, Stage 1 scores had less of an impact on the outcome than originally hoped.
- Stage 2 reviewers referred frequently to the Stage 1 critiques as bringing out perspectives they had not considered, providing information they did not know, or reaffirming their own assessments, so in these ways the Stage 1 critiques did have an impact on the outcome. On a scale of 1-5, with 5 being the most helpful, Stage 2 reviewers rated the Stage 1 critiques 3.8 on average, or “somewhat helpful”.
Technical Challenges
- Council round adjustment
- The council round may have to be changed to accommodate the longer review process required by a two-stage review. If the council round change is made midstream in IMPAC, it may not be reflected in the IRG/SRG Reassignment/Referral section of IMPAC.
- When Sonia Kim and DRR/CSR move the SEP to a new council round, they do not change the SEP name. To assign a new SEP name associated with the new council round (e.g., M1S1), go to the Committee Management module and make sure the new SEP name is associated with the correct meeting; this can be fixed by typing the correct new SEP name where the study section name is requested. Alternatively, the IMPAC Helpdesk can make these corrections.
- Recruitment
- Shared spreadsheets should be backed up each night into an Archives folder. Ghost “open” files must be deleted daily by COB. Shared spreadsheets should not be “sorted” or “filtered” by any user, or they may become corrupted.
- COI entry into eRA Peer Review was initially done reviewer by reviewer as each reviewer’s documents came in; the COIs were then entered onto a spreadsheet, filtered by application, and entered into eRA Peer Review application by application. Entering the COI information by application rather than by reviewer reduced the workload, cutting eRA Peer Review entries from 432 reviewers to 112 applications, but it could only be done after recruiting was completed (see the sketch below).
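- As a concrete illustration of the by-application approach, the following is a minimal Python sketch of regrouping per-reviewer COI declarations into per-application lists before data entry. It is illustrative only; the actual process used the “PR COIs by Application” spreadsheet listed under Primary Tools, and the reviewer names and application numbers here are hypothetical.

    # Illustrative sketch: invert COI data from per-reviewer form (as declarations
    # arrive) into per-application lists for entry into eRA Peer Review.
    from collections import defaultdict

    # As collected: each reviewer declares the applications they are in conflict with.
    coi_by_reviewer = {
        "Reviewer A": ["U19-004", "U19-017"],
        "Reviewer B": ["U19-017"],
        "Reviewer C": ["U19-004", "U19-052"],
    }

    # Regrouped: one entry per application, listing every conflicted reviewer.
    coi_by_application = defaultdict(list)
    for reviewer, apps in coi_by_reviewer.items():
        for app in apps:
            coi_by_application[app].append(reviewer)

    for app, reviewers in sorted(coi_by_application.items()):
        print(app, "->", ", ".join(reviewers))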
- Stage 1
- The IAR teleconference designation for Stage 1 reviewers was necessary to give the reviewers the read/edit phase in IAR. However, the Stage 1 reviewers had to be individually re-coded as Mail-in reviewers after Stage 1 was completed in order to reflect their actual involvement and to avoid requirements for concurrent service waivers, etc.
- Since Stage 1 reviewers were initially coded as Teleconference, Stage 1 COIs were entered into eRA Peer Review so that reviewers were not permitted to see conflicted applications. If Stage 1 reviewers had been initially coded as Mail-in, they would only have been able to see the applications to which they were assigned.
- Stage 2
- The Stage 2 assignment table for the meeting had to be generated manually. eRA Peer Review could only generate a single assignment report containing the conflicts of all 432 reviewers from both the Stage 1 and Stage 2 reviews, so an ESA and an SRO had to compile the report manually.
- Post Meeting
- Because of the size of the meeting, the voter matrix could not be generated from eRA Peer Review; the system would time out before the matrix downloaded. Thus, a sweep of NDs, etc. could not be accomplished, and each Stage 2 reviewer’s score sheet had to be checked and edited individually. Also, the scores for the post-meeting memo had to be generated as an Excel spreadsheet rather than an eRA Peer Review report.
Timeline
Primary Tools (see Rust/Archives/Grants/CETR/Tools):
• Call log with drop-down menus and conditional formatting (see “Reviewer Tracking Process”)
• Admin Review checklists for ESAs, SROs
• Admin Review sign-up spreadsheet
• Reviewer Status Summary to track recruiting, COI docs and data entry, assigning, emails
• PR COIs by Application to enter reviewer COIs into eRA Peer Review by application
• Assignment Spreadsheet with formulas and conditional formatting, plus an expertise and term consolidation macro sheet (see notes iii and iv, below)
• Cross-check assignment spreadsheet
• Stage 1 critique editing sign-up spreadsheet
• Streamlining spreadsheet
• Stage 2 SREA notes, adapted from the call log
• Summary Statement sign-up spreadsheet for resumes, Stage 2 critique editing, final check and release
Tool Tips
i) You can sort or filter, but it does add a layer of complication. It works OK if you save your changes often and don’t leave the shared document open unless you’re working on it. The corruption occurs more often when the “ghost” users don’t get removed. (Kelly Poe)
ii) Timeline intervals are given in workdays; divide an interval by 5 to get the number of weeks.
iii) The reviewer assignment form export from the Access database became the beginning columns of the recruitment log; we then created and added a formula that counted the assignments by application/core/project and by reviewer on that spreadsheet. Note: anyone reusing a similar formula needs to know that it is an ‘array’ formula, so it must be committed with Ctrl+Shift+Enter when leaving the formula cell. Doing so adds the curly brackets around the formula; it does not work if the brackets are typed in manually. The formula, which counts the non-blank cells in the range, is {=SUM(--(LEN(TRIM(E3:E2517))>0))}. (Lisa Vytlacil)
iv) The expertise macro concatenated the expertise values that were marked High and Medium within each category listed on Attachment 4 and put them on a second worksheet within the same workbook. These could then be copied and pasted into the assignment spreadsheet. The sample expertise check-off form used to create the macro, which also shows the results on the second worksheet, is here: I:\Misc Task Files\CETR-ExpMacro\MacroTest-Expertise check-off.xls (Lisa Vytlacil).
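v) For readers without access to the macro file, the following is a rough Python sketch of the concatenation step described in note iv. It is illustrative only (the actual tool was an Excel macro), and the categories, terms, and ratings shown are hypothetical.

    # Illustrative sketch: for each expertise category on the check-off form,
    # concatenate the terms a reviewer marked "High" or "Medium" into a single
    # string that can be pasted into the assignment spreadsheet.
    checkoff = {
        "Virology": {"influenza": "High", "coronaviruses": "Medium", "arenaviruses": ""},
        "Immunology": {"adjuvants": "High", "T-cell assays": "", "humoral immunity": "Medium"},
    }

    consolidated = {
        category: "; ".join(term for term, rating in terms.items() if rating in ("High", "Medium"))
        for category, terms in checkoff.items()
    }

    for category, terms in consolidated.items():
        print(category + ": " + terms)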
Questions and Answers
- Why did you choose this strategy over splitting the panel into multiple SEPs?
This two-stage strategy provided standardization across the panel, which is especially important when only a small percentage of applications will be funded. The strategy also ensured that each project within the U19 applications received individual critiques.
- What would be your “cut-off” or threshold number of applications for choosing this strategy?
If a review cannot be accomplished in two SEPs, then a two-stage review strategy is a reasonable choice.
- Why did you choose to streamline applications at the Stage 2 SEP instead of via a pre-meeting teleconference?
We did not want to streamline prior to the meeting because of the potential for leakage of confidential meeting information to the scientific community.
- What were the specific tasks for Stage 2 reviewers?
They were instructed to read the Stage 1 critiques, the overall section of the application, and the Specific Aims of the projects and cores. Based on this reading, Stage 2 reviewers prepared an overall critique of the application. They were discouraged from reading individual projects and cores in full, but many still did so because some Stage 1 critiques lacked detail.
- What kind of input was given from Management for the choice of the two-stage review strategy?
The idea was proposed to the Branch Chief (Dr. Ed Schroder) first and then taken to Program representatives. There was general agreement for the review strategy.