On the heels of the Institute of Medicine’s somewhat contested report on the safety of health information technology (HIT)1,2, an international group of informatics experts warned that healthcare was entering a decade of danger3. They feared that the widespread deployment of HIT systems that are “less mature than we would like” would exceed developers’, managers’, and clinicians’ abilities to anticipate, understand, and cope with the potential consequences, and so would lead to a substantial increase in the harms associated with HIT until improvements in the quality of design, software, and implementation could catch up. In the meantime, I suspect that attempts to mitigate these risks will result in the decade of danger becoming known as the decade of the kludge.

The Oxford English Dictionary traces the first published use of kludge to a 1962 satirical essay on computing that defined it as “an ill-assorted collection of poorly-matching parts, forming a distressing whole”4. It is widespread in the vernacular of computer science and engineering, where it is used to describe patching over a problem or bug in an inelegant manner without fundamentally resolving it.

In this issue of Annals, Green et al5 report an approach to wrong-patient orders in computerized provider order entry (CPOE) systems that would be fair to call a kludge. Although it is difficult to estimate the volume of wrong-patient orders in EDs, they are widely thought to have increased following the introduction of CPOE systems, owing to the loss of subtle cues of identity that had been part of the paper chart: physical placement, handwriting, length, differences in grammar, and even coffee stains6. Pham et al reported that wrong-patient orders were 3-fold higher in EDs using CPOE compared to those using paper orders7. Although many wrong-patient orders are intercepted before being carried out, and others may be inconsequential, the potential for devastating harm is obvious.

Green et al’s intervention involved displaying a patient verification dialog screen that required active confirmation from the physician prior to moving on to the order placement screen. It was designed so that physicians could not “click ahead” in anticipation of the confirmation request, by means of a 2.5-second delay before any input other than cancelling the order session would be accepted. The design of the verification screen attempted to restore some of the subtle cues that paper charts afforded for distinguishing patients by displaying not only the name, age, and gender, but also other attributes such as chief complaint, bed location, length of stay, and recent medication orders: all cues to help physicians form a more accurate picture of the chart they were actually working in, as opposed to the one they had intended to work in. This was a good design decision, because users tend to respond to these sorts of prompts as if they were questions about intentions (writing orders) rather than about the object of those intentions (patient X)8. Making the object of intention more salient helps to avoid inadvertent confirmation, which is often, but not always, followed by 2-word exclamations such as “Oh, dear” or more scatological equivalents. This intervention was reasonably successful; the authors report a decrease in wrong-patient orders from 2.0 to 1.5 per thousand orders, a reduction sustained for 2 years after the intervention.
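To make the interaction pattern concrete, the sketch below mocks up a verification step of this general kind in Python. It is not the authors’ implementation: the PatientSummary fields, the console interface, and the verify_patient function are illustrative assumptions, with only the 2.5-second lockout and the always-available cancel option taken from the description above.

```python
import time
from dataclasses import dataclass


# Hypothetical summary of the identity cues the verification screen displays;
# the actual CPOE data model is not described in the paper.
@dataclass
class PatientSummary:
    name: str
    age: int
    gender: str
    chief_complaint: str
    bed_location: str
    length_of_stay: str
    recent_orders: list


LOCKOUT_SECONDS = 2.5  # delay reported by Green et al before confirmation is accepted


def verify_patient(summary: PatientSummary) -> bool:
    """Show identity cues and require an active confirmation.

    Returns True if the clinician confirms after the lockout period,
    False if the order session is cancelled (allowed at any time).
    """
    opened_at = time.monotonic()
    print(f"Confirm patient: {summary.name}, {summary.age} {summary.gender}")
    print(f"  Chief complaint: {summary.chief_complaint}")
    print(f"  Bed: {summary.bed_location}   Length of stay: {summary.length_of_stay}")
    print(f"  Recent orders: {', '.join(summary.recent_orders) or 'none'}")

    while True:
        choice = input("Type 'confirm' or 'cancel': ").strip().lower()
        if choice == "cancel":
            return False  # cancelling is accepted immediately, with no delay
        if choice == "confirm":
            if time.monotonic() - opened_at < LOCKOUT_SECONDS:
                # Ignore premature confirmations so clinicians cannot "click ahead".
                print("Please review the patient details before confirming.")
                continue
            return True
        print("Unrecognized response.")
```

The essential behavior is that a premature “confirm” is simply ignored rather than queued, so anticipatory clicking or typing cannot defeat the check, while cancelling the session remains available at any time.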

Given this success, why should we call it a kludge? First, it is certainly inelegant and intrusive. Emergency physicians already operate under a barrage of interruptions. Chisholm et al reported that they experience 2 to 3 interruptions per patient9, so adding a bit over 1 additional interruption per patient (albeit a shorter one), when only about 1 in 200 is helpful, is hardly a welcome development, even for a good cause. Although it is commonly thought that interruptions are less disruptive when the interruption and the interrupted task are closely related10,11, there is some experimental evidence suggesting that this holds only if the interruption is similar to the future actions held in working memory, and that similarity in content (but not in future steps to be taken) can be even more disruptive than entirely irrelevant interruptions12. Interruptions almost always have both positive and negative effects13,14, but the balance here is unknown; Green et al captured the beneficial effects of reducing wrong-patient orders, but did not attempt to identify any unintended consequences beyond the additional time required.

Second, the intervention here does not really solve the fundamental problem that current HIT designs make it too easy to select the wrong patient. There are many other tasks not addressed by this intervention in which wrong-patient selection creates risks: charting; reviewing laboratory results and imaging reports; arranging follow-up, admissions, or transfers; communicating with primary care or specialist colleagues; labeling specimens; preparing discharge instructions; and so on.

Third, this intervention is not likely to scale well. Implementing a similar solution for other potential wrong-patient events will exact a toll in increased interruptions, task complexity, time on task, user frustration, and sub rosa work-arounds15. The lack of scalability is a particular worry, since there is a strong tendency to add more and more tasks, checks, barriers, or procedures to clinical work to meet safety and quality goals, but hardly ever to take any away16. A back-of-the-envelope calculation based on Green et al’s data suggests that nationwide implementation of such a system would require almost 900,000 additional hours of emergency physician time per year, or more than 400 full-time equivalents of physicians doing nothing but right-patient checking, year in and year out. The impact of adding a few similar checks directed at note writing or report viewing is beyond contemplation, although the burden would be somewhat reduced by the time saved from not having to deal with the consequences of wrong-patient orders.
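For readers who want to see the structure of that back-of-the-envelope estimate, one way to set it up is:

\[
\text{added hours per year} \;=\; \frac{N_{\text{orders}} \times t_{\text{check}}}{3{,}600\ \text{s/h}},
\qquad
\text{FTEs} \;=\; \frac{\text{added hours per year}}{H_{\text{FTE}}}
\]

where \(N_{\text{orders}}\) is the annual number of CPOE order sessions nationwide, \(t_{\text{check}}\) is the extra time in seconds added per verification, and \(H_{\text{FTE}}\) is the number of clinical hours assumed in one full-time-equivalent year. These symbols are generic rather than taken from Green et al’s paper; the only figures carried over from the text are the roughly 900,000 hours and the more-than-400 full-time equivalents, which together imply an assumed FTE year on the order of 2,000 to 2,250 clinical hours.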

Fourth, even though the time consumed by each patient-verification prompt was low, waiting time and the number waiting are nonlinear functions of service time in queuing systems; when demand approaches service capacity, even small increases in average service time can produce dramatic increases in waiting times and queue lengths17.
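The textbook single-server (M/M/1) model, used here only as a simple stand-in for the more general treatment in Newell17, illustrates how sharp that nonlinearity can be:

\[
\rho = \lambda \bar{s}, \qquad L_q = \frac{\rho^{2}}{1-\rho}, \qquad W_q = \frac{L_q}{\lambda}
\]

where \(\lambda\) is the arrival rate, \(\bar{s}\) the mean service time, \(\rho\) the resulting utilization, \(L_q\) the mean number waiting, and \(W_q\) the mean wait (by Little’s law). If a department is already running at \(\rho = 0.90\), a 5% increase in mean service time raises utilization to about 0.945 and roughly doubles the queue, with \(L_q\) rising from about 8 to about 16; for the same arrival rate, the waiting time doubles with it. The specific utilization figures are illustrative assumptions, not data from the study.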

Finally, there is a general problem with kludges of all kinds. It is invariably more expensive, less effective, and more troublesome to address safety problems with after-the-fact, tacked-on ‘fixes’ than it is to design safety into the technology from the beginning18,19. For a variety of reasons, the HIT industry and its ostensible regulators have failed to address safety issues such as this in substantive ways, despite multiple calls to do so2,20-23.

Given that failure, it is unfair and misguided to heap criticism on kludges as solutions to safety problems, because the real problem (poor designs based on idealized models of work24, uninformed by careful ecological analyses of work as actually performed25) is beyond the remit of teams such as Green et al’s. The authors have done the best they could to address a serious problem with the means at their disposal, taking pains to make their intervention more effective and to minimize the burden it placed on clinicians. Given the exceedingly remote possibility of a fundamental redesign of HIT for safety and usability, and the general weakness of administrative controls and exhortations (eg, “Pay attention! Be careful! Don’t forget to remember! Remember not to forget!”) in managing these risks, what else could they do but implement a kludge to work around a designed-in problem?

None of these issues should be misunderstood as a Luddite wish to return to paper. Wrong-patient orders certainly occurred in paper ordering systems, as did illegible, misread, and outright wrong orders, the latter 3 perhaps more frequently than with CPOE. The potential for HIT to make clinical work safer and more effective is undeniable. But we should not let faith-based belief in future potential blind us to the realities of current underachievement. With a handful of notable exceptions, most current implementations of CPOE close off some of the old paths to failure but open up new ones26-28; in this case, for example, trading illegible orders for wrong-patient orders. Successful kludges such as the one reported here should be celebrated as necessary and useful work-arounds, but they should also be embarrassing reminders of how much more needs to be done to make HIT an effective assistant rather than a disruptive burden29.

Fundamentally, these issues arise because the normative notions of work inscribed in current HIT systems clash too strongly with the nature of clinical work in the ED6; the resulting tensions and paradoxes add their own risks to an already risky technological system30. Healthcare has only recently begun to raise these issues in an organized way; the AMA and 34 other medical organizations (including ACEP) recently asked the Office of the National Coordinator to modify the Meaningful Use program to formally address safety, security, and workflow issues31. While change may eventually come from this effort (which may be too little, too late), in the meantime we (and our patients) face a decade of danger, likely to be remembered as the decade of the kludge.

References

1.Committee on Patient Safety and Health Information Technology. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012, 211 pages.

2.Cook RI. Dissenting Statement: Health IT is a Class III Medical Device. In: Committee on Patient Safety and Health Information Technology, ed. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012: pp E1 - E4.

3.Coiera E, Aarts J, Kulikowski C. The dangerous decade. J Am Med Inform Assoc 2012;19(1):2-5.

4.Granholm JW. How to design a kludge. Datamation 1962;8(2):30 - 31.

5.Green RA, Hripcsak G, Salmasian J, et al. Intercepting wrong-patient orders in a computerized provider order entry system. Ann Emerg Med 2015;x(x):(online ahead of print).

6.Berg M. Health Information Management: Integrating Information Technology in Health Care Work. London, UK: Routledge; 2004, 256 pages.

7.Pham JC, Story JL, Hicks RW, et al. National study on the frequency, types, causes, and consequences of voluntarily reported emergency department medication errors. J Emerg Med 2011;40(5):485-492.

8.Shneiderman B. Designing the User Interface: Strategies for Effective Human-Computer Interaction. 5th ed. Reading, MA: Addison-Wesley; 2009, 624 pages.

9.Chisholm CD, Collison EK, Nelson DR, Cordell WH. Emergency department work place interruptions: are emergency physicians multi-tasking or interrupt driven? Acad Emerg Med 2000;7:1239-1243.

10.Edwards MB, Gronlund SD. Task interruption and its effect on memory. Memory 1998;6:665 - 687.

11.Speier C, Valacich JS, Vessey I. The Influence of Task Interruption on Individual Decision Making: An Information Overload Perspective. Decision Sciences 1999;30(2):337-360.

12.Gould AJJ. What Makes an Interruption Disruptive? Understanding the Effects of Interruption Relevance and Timing on Performance. London, UK: University College London; 2014.

13.Grundgeiger T, Sanderson P. Interruptions in healthcare: Theoretical views. International Journal of Medical Informatics 2009;78(5):293-307.

14.Westbrook JI. Interruptions and multi-tasking: moving the research agenda in new directions. BMJ Quality & Safety 2014;23(11):877-879.

15.Westbrook JI, Coiera E, Dunsmuir WTM, et al. The impact of interruptions on clinical task completion. Quality and Safety in Health Care 2010;19(4):284-289.

16.Coiera E. Why system inertia makes health reform so difficult. BMJ 2011;342:d3693.

17.Newell GF. Applications of Queueing Theory. 2nd ed. London, UK: Chapman and Hall; 1982, 303 pages.

18.Leveson NG. Safeware: System Safety and Computers. Boston: Addison-Wesley; 1995, 680 pages.

19.Storey N. Safety-Critical Computer Systems. Harlow, UK: Pearson Education Limited; 1996, 453 pages.

20.Jackson D, Thomas M, Millett LI, eds. Software for Dependable Systems: Sufficient Evidence? Washington, DC: National Academy Press; 2007, 148 pages.

21.International Standards Organization. Health informatics -- application of clinical risk management to the manufacture of health software. International Standards Organization; ISO/TS 29321:2008; 2008, 80 pages.

22.International Standards Organization. Health informatics -- guidance on the management of clinical risk relating to the deployment and use of health software systems. International Standards Organization; ISO/TS 29322:2008(E); 2008, 74 pages.

23.Wears RL, Leveson NG. "Safeware": safety-critical computing and healthcare information technology. In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches. AHRQ Publication No. 08-0034-4 ed. Rockville, MD: Agency for Healthcare Research and Quality; 2008: pp 1 - 10.

24.Coiera EW. Designing interactions. In: Berg M, ed. Health Information Management: Integrating Information Technology in Health Care Work. London, UK: Routledge; 2004: pp 101 - 123.

25.Vicente KJ. Cognitive Work Analysis: Towards Safe, Productive, and Healthy Computer-Based Work. Mahwah, NJ: Lawrence Erlbaum Associates; 1999, 398 pages.

26.Koppel R, Metlay JP, Cohen A, et al. Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors. JAMA 2005;293(10):1197-1203.

27.Schiff GD, Amato MG, Eguale T, et al. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems. BMJ Quality & Safety 2015:online ahead of print.

28.Ash JS, Sittig DF, Poon EG, et al. The Extent and Importance of Unintended Consequences Related to Computerized Provider Order Entry. J Am Med Inform Assoc 2007;14(4):415-423.

29.Klein G, Woods DD, Bradshaw JM, et al. Ten challenges for making automation a 'team player' in joint human-agent activity. IEEE Intelligent Systems 2004;19(6):91 - 95.

30.Greenhalgh T, Potts HW, Wong G, et al. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q 2009;87(4):729-788.

31.American Medical Association (and 34 other signatories). Letter to Karen DeSalvo, National Coordinator for Health Information Technology.
