NASA “Clutter” Experimental Protocol

Pre-Flight Scenarios

  • Participant completes Informed Consent
  • Participant completes Demographics questionnaire
  • Participant reads written instructions and/or experimenter provides:
      • Examples of the display components
      • Explanation and examples of how to rate the relevance of each of the semantic pairs
  • Participants fill out a “pre” Ranking and Rating questionnaire that evaluates their initial perceptions of each pair, independent of the display configurations that will be presented as part of the flight scenarios

Flight Scenarios

  • Participant (or experimenter) reads the overall flight status and conditions of the flight scenario (e.g., weather, VFR/IFR, altitude, approach information). This information provides the context for their ratings within each scenario
  • Participants provide feedback on the list of semantic pairs for each of 16 different configurations of the display components. Display components will be presented on a computer display, and participants will provide their ratings on a paper-and-pencil questionnaire. Participants will also provide a single rating of overall display “clutter” for each configuration. This process is repeated for all four approaches (16 × 4 = 64 sets of ratings across the four flight scenarios)
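The rating structure described above (16 configurations rated once per approach, across four approaches) can be sketched as follows. This is an illustrative sketch only; the variable names and numeric labels are assumptions, not part of the protocol.

```python
# Sketch of the flight-scenario rating structure: 16 display-component
# configurations, each rated once per simulated approach (4 approaches).
N_CONFIGURATIONS = 16  # display configurations shown per approach
N_APPROACHES = 4       # simulated approaches (one per flight scenario)

# Each (approach, configuration) cell corresponds to one set of
# semantic-pair ratings plus a single overall "clutter" rating.
rating_sets = [
    {"approach": a, "configuration": c}
    for a in range(N_APPROACHES)
    for c in range(N_CONFIGURATIONS)
]

print(len(rating_sets))  # 64 sets of ratings, matching 16 × 4 = 64
```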

Post-Flight Scenarios

  • Following the simulated approaches, each participant will provide a single set of ratings on the remaining 16 configurations of display components that they have not yet rated. As above, the display components will be presented on a computer monitor, and participants will provide ratings using the same paper-and-pencil questionnaire. Again, participants will provide a single rating of overall display “clutter” for each configuration
  • Participant will fill out a “post” Ranking and Rating questionnaire that evaluates their perceptions of each of the pairs independent of the display configurations presented during the experiment
  • Participants will also fill out a set of Open-ended questionnaires that evaluate their perception of overlap/redundancy for the semantic pairs. Each questionnaire also requests feedback on why participants feel pairs are relevant, why they feel pairs are not relevant, suggestions for alternative words/terms, and/or additions to the list of semantic pairs
  • Experimenter debriefs all participants
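The split between the flight-scenario and post-flight configuration sets can be sketched as below. The total of 32 configurations is an inference from the text (16 rated during the approaches plus 16 remaining), and the index labels are illustrative assumptions.

```python
# Sketch of the configuration split inferred from the protocol:
# 32 total configurations (assumed), 16 rated during the flight scenarios,
# and the remaining 16 rated once each post-flight.
all_configs = set(range(32))
flight_configs = set(range(16))              # rated once per approach in-flight
post_flight_configs = all_configs - flight_configs  # rated a single time

print(len(post_flight_configs))  # 16 remaining configurations
print(flight_configs.isdisjoint(post_flight_configs))  # True: no overlap
```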

Dependent Measures

  • Ratings of relevance for the semantic pairs provided during the flight scenarios
  • Overall rating of display “clutter” for the flight scenario display configurations
  • Ratings of relevance for the semantic pairs provided for the non-flight scenario display configurations
  • Overall rating of display “clutter” for the non-flight scenario display configurations
  • Overall “Ranking” of relevance for the semantic pairs (pre-flight scenarios)
  • Overall “Rating” of relevance for the semantic pairs (pre-flight scenarios)
  • Overall “Ranking” of relevance for the semantic pairs (post-flight scenarios)
  • Overall “Rating” of relevance for the semantic pairs (post-flight scenarios)
  • Ratings of redundancy/overlap between the Target Pair and the other semantic pairs from the Open-ended questionnaires
  • A substantial amount of qualitative data (e.g., participant comments) will also be collected and evaluated for frequency of occurrence