A National Test of Braille Competency Achieved: Where Do We Go from Here?

By Mark A. Riccobono, Director of Education

National Federation of the Blind Jernigan Institute

In the Winter 2006 issue of the DVIQ (pages 9-12), we reviewed the history and status of the National Literary Braille Competency Test (NLBCT) in an article entitled “United We Stand for Braille Competency Testing: Closing the Gap between Dreams and Reality.” As a follow-up, this article provides an update on the status of the test and some thoughts for the future. For background on the NLBCT, please refer to the previous article.

As a refresher, the purpose of the National Literary Braille Competency Test is to measure the level of an individual’s knowledge/proficiency in reading and writing the literary Braille code. Furthermore, the NLBCT Steering Committee described “just-sufficiently qualified examinees” as individuals who are proficient with the literary Braille code including the alphabet, numbers, Braille composition signs, common punctuation, and contractions. They have committed these to memory and are able to recognize them and apply the rules for using them with few exceptions such as problem words (e.g., chemotherapy, mistrust, and Parthenon) and uncommon punctuation (e.g., brackets and braces). This statement is meant to describe the test candidate who is minimally qualified to pass the test—essentially this sets the floor of competency.

The NLBCT Steering Committee worked diligently to meet an ambitious deadline for pilot testing early in 2006. After reviewing and editing test content, procedures were finalized for the administration of the test. Three pilot test sites were identified and candidates were recruited to take the test. The test sites were:

  • Philadelphia, Pennsylvania (February 26)
  • St. Paul, Minnesota (March 2)
  • Anaheim, California (March 9)

The pilot testing served as a step in the process of validating the test content as well as an opportunity to refine the test administration procedures. We need to give special thanks to the individuals who put in a tremendous effort to make the pilot testing process as smooth as it could be. These include Diane P. Wormsley and her husband Bill Wormsley (Pennsylvania site), Mary Archer (Minnesota site), and Stuart Wittenstein and Joanna Venneri (California site). We also need to extend a special thanks to the Braille Institute, which stepped up to provide volunteers and equipment to support the pilot test in California. This pilot test was held in conjunction with the California Transcribers and Educators of the Visually Handicapped (CTEVH) Conference and, as a result, was bigger and more complex to coordinate. Thanks to all those individuals who made it work.

Test Structure

In order to achieve the objectives of the NLBCT, the test includes four sections designed to test all of the relevant knowledge required to be considered minimally “competent” in the Literary Braille code. A number of versions of each section were piloted. Below is the structure of the test as presented during the pilot phase:

  • Section 1: Braille Writing—Braillewriter (2 hours)
  • Section 2: Braille Writing—Slate & Stylus (1 hour)
  • Section 3: Proofreading (2 hours)
  • Section 4: Multiple Choice (1.5 hours)

For purposes of the pilot test, this structure includes an additional thirty minutes for the multiple choice section. In the actual administration, this section would be allotted only one hour, making the entire test six hours. The additional half hour was included because pilot test candidates were asked to complete two versions of the multiple choice section. This allowed for a stronger analysis of the multiple choice test items without negatively impacting test candidates (only the version on which they scored better was counted in their overall test results).

Pilot Test Candidates

Pilot test candidates were recruited through national publications and organizations as well as through the local test site coordinators. Forty-eight individuals sat for the NLBCT pilot early in 2006. In addition to taking the test, individuals were asked to complete a background information form to assist in the analysis of the pilot test results. Here is some demographic information about the pilot participants:

  • Braille experience: The amount of experience reading Braille ranged from less than one year to forty-seven years, with an average of thirteen years. Fifteen candidates had five or fewer years of experience.
  • Braille training: Twenty-five had taken college Braille courses; four others received training through the Library of Congress.
  • Transcribers: Only four indicated that they were transcribers.
  • Teachers: Thirty-six of the candidates had taught Braille. Seven had taught Braille teacher preparation courses.
  • Braille readers: Nine candidates indicated that they read Braille tactually.

Scoring

In late spring 2006, a panel of experts came together to score the pilot tests. Led by Mary Archer, six highly qualified volunteers from the National Braille Association worked on scoring the tests and provided feedback on the scoring protocols. The scoring team was charged with scoring sections 1, 2, and 3 of the pilot test for each of the forty-eight candidates. The fourth section, multiple choice, was machine scored. Each section of every candidate’s test was scored independently by two scorers. Once the independent scoring was complete, the two scorers reconciled differences between the errors they identified. Errors were identified according to the knowledge areas listed in the test blueprint (the blueprint was used in building the test to ensure that each of the appropriate knowledge areas was covered in the test content). The team reported a high degree of reliability between scorers. Discrepancies typically related to how an error was classified rather than a dispute over whether something was or was not an error. In both sections 1 and 2, errors resulted from mistakes made by the candidate in producing Braille. In section 3 (proofreading), candidates might be given demerits for missing embedded errors in the passages or for marking something as an error that is in fact correct.
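
The two-scorer comparison can be pictured with a small sketch. The example below is purely illustrative: the error positions and knowledge-area numbers are hypothetical, and it is not part of the actual scoring protocol.

    # Hypothetical sketch of the double-scoring comparison described above.
    # Each scorer tags the errors found in a candidate's passage with a
    # blueprint knowledge-area number; keys are character positions.
    scorer_a = {12: 18, 40: 23, 77: 7}   # position -> knowledge area
    scorer_b = {12: 18, 40: 30, 77: 7}   # agrees on the errors, differs on one classification

    same_errors = set(scorer_a) & set(scorer_b)
    classification_disputes = {pos for pos in same_errors if scorer_a[pos] != scorer_b[pos]}

    print(classification_disputes)   # {40}: error found by both, classified differently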

Content Validation

Once the scoring phase was complete, all of the scoring data was sent to the Human Resources Research Organization (HumRRO), the testing company that provided consultation and technical assistance throughout test development. HumRRO analyzed the scores and prepared for the validation of the test content. A “content panel” of Braille experts was brought together in Baltimore in early August. The panel included some members of the NLBCT Steering Committee as well as additional individuals viewed as “content” experts (i.e., having a high degree of competence in the Literary Braille Code). The content panel was tasked with finalizing the scoring protocols, setting the passing scores, and examining the content validity of the test. Each panel member assigned a content validity rating to each passage and multiple-choice item in order to rate the degree to which the knowledge tapped by the test question or passage was needed to competently read and write Braille. The level of agreement among the panel members was high. All passages were judged to have high content validity. A few multiple-choice items were dropped because of low content validity. In addition, a couple of changes were made to the passages because the relevant knowledge was judged to be too advanced.
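
As a rough illustration of how such ratings might be aggregated, the sketch below averages hypothetical panel ratings and flags items that fall below a cutoff. The rating scale, the numbers, and the cutoff are all invented for illustration; the actual panel used its own procedure.

    # Hypothetical sketch: average each item's content-validity ratings
    # across panel members and flag low-validity items for dropping.
    validity_ratings = {
        "mc_item_4": [5, 5, 4, 5],
        "mc_item_9": [2, 1, 2, 2],   # low validity
        "passage_2": [5, 4, 5, 5],
    }

    CUTOFF = 3.0   # invented threshold on a 1-5 scale
    dropped = [name for name, ratings in validity_ratings.items()
               if sum(ratings) / len(ratings) < CUTOFF]

    print(dropped)   # ['mc_item_9']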

Setting the Passing Scores

For the multiple choice questions, the panel used a rating system known as the Angoff method to evaluate each multiple choice item. For each item, panel members estimated the proportion of minimally competent candidates who would answer the item correctly. The passing score on the multiple-choice section is the sum of these item proportions. In addition, each question was given a content validity rating. For each passage, an acceptable number of examinee errors was established using the “bookmark” method. First, all of the completed tests for a specific section were put in a pile sorted by score. Then the panel members put a bookmark in the pile between two completed tests: the test that barely met the standard of competence and the test that barely missed it. This bookmark became the cut score. Throughout the process, panel members made their initial judgments without talking to other members. After a thorough discussion, panel members were allowed to revise their judgments. The passing scores are based on the mean Angoff ratings and bookmark placements among the panel members. The level of agreement among the panel members was high. The end result of this two-day effort was a set of multiple choice questions, passages to be transcribed, and proofreading passages that had been tested and validated.
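
To make the Angoff arithmetic concrete, here is a minimal sketch of the calculation described above: each panel member’s proportion estimates are averaged per item, and the item means are summed to produce the section’s passing score. The ratings are invented for illustration and are not the actual NLBCT values.

    # Minimal sketch of the Angoff calculation. Each list holds the panel
    # members' estimated proportions of minimally competent candidates who
    # would answer that item correctly (hypothetical numbers).
    item_ratings = {
        "item_1": [0.80, 0.75, 0.85],
        "item_2": [0.60, 0.65, 0.55],
        "item_3": [0.90, 0.95, 0.90],
    }

    # Average the panel's ratings for each item, then sum across items.
    item_means = {item: sum(r) / len(r) for item, r in item_ratings.items()}
    passing_score = sum(item_means.values())

    print(item_means)      # per-item mean Angoff ratings
    print(passing_score)   # passing score, about 2.32 correct answers out of 3 items here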

Examinee Score Reports

In early October 2006, the pilot test examinees received a Candidate Score Report indicating their performance on the NLBCT pilot. Individuals who passed the pilot test will receive an official certificate of competency. The score report includes pass/fail information for each section as well as for the overall test. Examinees need to pass all four sections of the test in order to receive an overall pass rating. If examinees fail just one section, the committee has determined that they will be allowed to retest on that section alone within a certain period of time. Any examinee who fails two or more sections of the test would need to retake the entire test. In addition, the report shows examinees the knowledge areas in which they made errors on each section. For example, if an examinee made two errors related to knowledge area 18 (Knowledge of One-Cell Whole-Word Contractions) on the Braillewriter section, their report would show two errors next to this knowledge area. The only difference in reporting was in the multiple choice section. In this section examinees were told what percentage of questions relating to a particular knowledge area they answered correctly as well as how many questions in their test form related to that area. Using the example from above, if there were three questions related to knowledge area 18 and the examinee got one wrong, the report would indicate a score of 67% (two of three correct) for that knowledge area. This information is designed to help examinees identify knowledge areas where they typically made mistakes.
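
The multiple choice reporting described above amounts to a simple percent-correct calculation per knowledge area. The sketch below reproduces that arithmetic; the function name and example counts are hypothetical and simply mirror the knowledge area 18 example.

    # Hypothetical sketch of per-knowledge-area reporting for the
    # multiple-choice section: percent correct for each area.
    def area_report(questions_by_area, correct_by_area):
        """Return percent correct for each knowledge area."""
        return {area: round(100 * correct_by_area.get(area, 0) / total)
                for area, total in questions_by_area.items()}

    # Three questions touched knowledge area 18; the examinee answered
    # two correctly, so the report shows 67% for that area.
    print(area_report({18: 3}, {18: 2}))   # {18: 67}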

Pilot Test Results

The results from the pilot test were used for three purposes: (1) to identify multiple-choice items and passages that should be dropped, (2) to help the panel make their Angoff and bookmark judgments, and (3) to examine how closely related the scores were between the different test sections. Statistics were computed for each multiple-choice item and each passage for these purposes. As expected, the item statistics suggested that some multiple-choice items should be dropped. All of the passages, however, appeared to be working very well. The scores on the different test sections were highly related. For example, a candidate who did well on the multiple-choice section tended to do well on the other three sections as well. The multiple-choice scores were most closely related to the proofreading scores, and the slate and stylus scores were most closely related to the Braillewriter scores. The correlations (which indicate the strengths of the relationships) between the sections ranged from .44 to .62. Of the forty-eight people who took the pilot test early in 2006, twenty-three (48%) passed the test. A number of other individuals failed only one section of the test.
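
For readers unfamiliar with correlation coefficients, the short sketch below shows how such section-to-section relationships can be computed. The scores are invented stand-ins, not actual pilot data; only the .44 to .62 range comes from the article.

    # Sketch of quantifying the relationship between section scores with
    # Pearson correlations (hypothetical scores for six imaginary candidates).
    from statistics import correlation  # Python 3.10+

    braillewriter = [88, 92, 75, 81, 95, 70]
    slate_stylus = [85, 90, 72, 78, 96, 68]
    proofreading = [80, 88, 70, 79, 90, 65]

    # Values near 1.0 mean candidates who did well on one section tended to
    # do well on the other; the pilot correlations fell between .44 and .62.
    print(correlation(braillewriter, slate_stylus))
    print(correlation(braillewriter, proofreading))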

Conclusions on Test Development

The NLBCT has traveled a long path to reach its final form. The many years of work contributed by dozens of individuals and organizations have now been brought to completion. Over the last two years, the National Federation of the Blind Jernigan Institute has been honored to play a significant leadership role in the critical testing and validation phase of the test. The results from the pilot test indicate that the test is a tough but fair representation of the knowledge and skills individuals need to be considered minimally competent in the Literary Braille Code. A strong foundation has been laid for building greater Braille competency across America and for developing a knowledge base about the methods and resources that assist individuals in mastering the code. We can be proud of our united achievement.

Where Do We Go from Here?

Now that we have a valid test that is ready for wide administration, the question that needs to be addressed is, “Where do we go from here?” A long-range plan for administering the test and for further development of test items needs to be formulated before a national rollout can occur. The analysis of the NFB Jernigan Institute is that the costs of administering the test are too high to allow the NLBCT to be fully self-supporting through fees paid by test candidates. Additional test development should be done in order to ensure the viability of the test long into the future. New test items should be developed and piloted along with the validated content in order to expand the pool of available test items. Furthermore, test results should be analyzed in order to provide the field with guidance on best practices for preparing individuals to achieve competency in the Braille code.

We have built a united front and a strong product that will help advance our common goal of greater competency in Braille among those planning to teach Braille to the blind. Can we now mobilize the united support to secure the resources required to turn the corner? At the National Federation of the Blind Jernigan Institute, we are prepared to advocate for the resources necessary to implement the NLBCT. It is our hope that others in the field of blindness will continue to support this united effort and help make the long-awaited dream of a national test of competency in Braille the reality that it now, for the first time, is poised to become.