Sound Information Handling:
Application to Errors in Medicine

James G. Williams, Ph. D.

jgwilliams@mindspring.com

 

Abstract

 

A mathematical theory of informatic soundness is applied to the leading categories of information-based errors in medicine to demonstrate ways of dramatically reducing these errors.  A drug-overdose case study is then presented to show seven ways the patient's death might have been averted.  Suggested roles and responsibilities for hospital personnel and patients are outlined, as are suggested future actions for medical informatics specialists.

Introduction

One hundred eighty thousand people a year die in US hospitals from errors in treatment.  Dr. Lucian Leape published this startling result in late 1994 in the Journal of the American Medical Association.  Other authors corroborate his findings, citing hundreds of thousands of patients each year in the US alone who endure extended hospital stays, increased treatment times, and death or disability because of errors in medical care.

 

Medical errors take their toll not only in human lives but also in dollars.  A more recent study (Bates 1995) conservatively estimates that a 700-bed hospital incurs $1.1M in direct costs each year from avoidable adverse drug events alone, events in which the wrong drug or wrong dosage was administered or the drug was administered incorrectly.  Since drug-related events account for fewer than 10% of adverse events (Leape 1991), the total direct cost of avoidable adverse events is roughly an order of magnitude higher; in a large hospital, direct costs could well exceed $20M each year.  In addition, there are large indirect costs to insurance companies, HMOs, patients, and their employers.

 

Technology has contributed to these problems.  Many errors and resulting adverse events are the direct result of increasing complexity.  As the practice of medicine has become more sophisticated, the risk of errors has also increased.

 

Fortunately,  technology can also facilitate a solution.  Those involved in medical informatics have the ability to lower the staggering number of deaths caused by errors.  With technology currently available, we can provide medical practitioners tools to prevent loss of life and attendant adverse events from avoidable errors.  We can provide them sound informatic systems.  We can work with hospital administration to educate the community about our initiatives to enhance our quality of care.  As informatics specialists we have the opportunity and the responsibility to do so.

 

Historical Perspective

Sound information-handling dates back to Aristotle, whose syllogisms provide examples of both sound and unsound reasoning.  It was Aristotle who noted that false conclusions can follow from logically valid reasoning applied to false premises.  A modern-day example might be a misdiagnosis on the basis of incorrect lab test results.

 

Major advances since Aristotle have occurred only recently.  The semantics of sound reasoning was clarified by Alfred Tarski's model theory (1936).  Widely differing methods of sound information-handling in automated systems have been identified by computer-security experts (e.g., Clark 1987) and modern logicians (e.g., Guttman 1994).

 

A general theory of informatic soundness emerged in 1995.  Presented in part in (Williams 1995), it can be applied both to individual, stand-alone systems and to large networked systems, up to the total information-handling system associated with a given patient or health-care enterprise.  Such systems usually include many computers in networks, intelligent medical devices, and human medical practitioners.  The theory describes how the validity and integrity of the data entered, stored, processed, and reported by the system can be definitively established and realistically maintained.

 

Overview

This paper describes how the theory can be applied to reduce errors in medicine dramatically.  The theory is comprehensive and capable of integrating all error-handling needs.  It is applicable to informatic systems of arbitrary size.  Already applied to a wide variety of errors in the airline and financial services industries, it has been found to be a useful tool for understanding what went wrong  and how to fix it.

 

We'll start out with an overview of system architecture, followed by some real-life examples of errors and descriptions of how they could be eliminated.  The final portion of the paper will deal with the all-important roles and responsibilities of users, without whose participation the successful implementation and application of sound information-handling would be impossible. 

 

System Architecture

Figure 1 outlines the system.  As data is entered, it must be certified as correct, meaning that it is accurate, timely, and conveys facts about the real world.  The basis for its acceptance as correct is stored for later use in the event that its correctness is challenged.  Additional mechanisms come into play if correctness is challenged, as we shall discuss later.

 

Figure 1.  A Structure for Sound Information Processing

 

All input data must satisfy one or more of the following criteria:

 

1.   The source of the input must be an appropriately qualified user, as defined in User Roles and Responsibilities below.  For example, a diagnostic radiologist is licensed and, therefore, assumed qualified to interpret patient x-rays.

 

2.   The input can be corroborated by another input from a different source.  For example, a second opinion is available from a consulting physician who is also licensed and perhaps board-certified in an appropriate specialty area.

 

3.   The input must pass system integrity-validation checks.  For example, the checks on a prescription must show the dosage to be consistent with existing drug protocols and with dosages for the same patient in previous treatment cycles. 

 

Data must also pass a check for stable form — information  must be expressed in such a way that its meaning does not change in transmission from one context to another.  “The patient in Room 321” is not of itself a stable form, for example, particularly if Jack Stemple has replaced diabetic Alexander Brown in Room 321 and someone's trying to give Jack insulin.
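The acceptance criteria above can be sketched in code.  The following Python fragment is illustrative only: the role list, field names, and check functions are invented for this example, and a real system would implement each check against institutional data.

```python
from dataclasses import dataclass, field

# Hypothetical role list; a real system would consult credentialing records.
QUALIFIED_ROLES = {"physician", "pharmacist", "nurse", "radiologist"}

@dataclass
class Input:
    value: str
    source_role: str                                      # who entered the datum
    corroborated_by: list = field(default_factory=list)   # independent second sources
    patient_id: str = ""                                  # stable identifier, not "Room 321"

def passes_integrity_checks(inp: Input) -> bool:
    # Placeholder for protocol and consistency checks (criterion 3).
    return bool(inp.value)

def has_stable_form(inp: Input) -> bool:
    # A datum tied only to a room number is not stable; require a patient ID.
    return bool(inp.patient_id)

def certify(inp: Input):
    """Accept an input only if it is in stable form and meets at least one
    acceptance criterion; return the basis for acceptance so it can be
    audited later if correctness is challenged."""
    if not has_stable_form(inp):
        return False, "rejected: unstable form (no patient identifier)"
    if inp.source_role in QUALIFIED_ROLES:
        return True, f"accepted: qualified source ({inp.source_role})"
    if inp.corroborated_by:
        return True, f"accepted: corroborated by {inp.corroborated_by}"
    if passes_integrity_checks(inp):
        return True, "accepted: passed integrity-validation checks"
    return False, "rejected: no acceptance criterion satisfied"
```

Note that the basis string returned with each decision is stored alongside the datum, matching the requirement above that the grounds for acceptance be available if correctness is later challenged.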

 

Examples of Errors

Leape and others suggest four major classes of medical errors that cause adverse events (Leape 1991).  These classes were derived from extensive analysis of empirical data. With the framework of sound information-handling in mind, let's look at these major classes of errors, some examples of each, and ways the errors might be eliminated. 

 

Procedures

Central venous catheter punctures pleura
Removal of obstruction punctures bowel

Preventive Medicine

Inadequate preparation before surgery
Failure to administer drug antidote

Diagnosis

Delay in notifying patient after positive lab test
Appendicitis symptoms, but no appendicitis

Drugs

Wrong dose, method, or drug
Inadequate follow-up

 

Drug Errors.  The proposed automated system can check prescriptions against patient conditions, known allergies, and relevant physical characteristics.  Drug protocols can also be checked for ambiguities.  The automated support for error handling can speed up detection of drug errors and help to identify their underlying causes.  It can also participate in countermeasures to suppress adverse drug reactions.
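As an illustration of the kinds of prescription checks described above, the following sketch flags allergy conflicts, dosage-ceiling violations, and sharp departures from a previous treatment cycle.  The drug table, ceiling value, and tolerance are invented for this example and are not actual clinical limits.

```python
# Hypothetical protocol table: drug name -> maximum daily dose in mg.
# The ceiling shown is invented for illustration, not a clinical value.
DRUG_PROTOCOLS = {
    "cyclophosphamide": 1000,
}

def check_prescription(drug, daily_dose_mg, patient_allergies,
                       previous_cycle_daily_mg=None, tolerance=0.25):
    """Return a list of warnings; an empty list means the order passed."""
    warnings = []
    if drug in patient_allergies:
        warnings.append(f"patient is allergic to {drug}")
    ceiling = DRUG_PROTOCOLS.get(drug)
    if ceiling is None:
        warnings.append(f"no protocol ceiling on file for {drug}")
    elif daily_dose_mg > ceiling:
        warnings.append(
            f"daily dose {daily_dose_mg} mg exceeds protocol ceiling {ceiling} mg")
    if previous_cycle_daily_mg:
        change = abs(daily_dose_mg - previous_cycle_daily_mg) / previous_cycle_daily_mg
        if change > tolerance:
            warnings.append("dose differs sharply from previous treatment cycle")
    return warnings
```

Surfacing warnings rather than silently blocking the order reflects the error-reporting roles discussed later: the system documents a potential error and leaves resolution to a qualified investigator.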

 

Diagnostic Errors.  The proposed system can perform integrity checks that flag unlikely diagnoses.  It can help doctors fulfill implied commitments, for example through timely and accurate delivery of lab results and notification when additional diagnostic procedures are needed.  The system can also handle diagnostic errors after their detection and suppress further use of the resulting misinformation.

 

Preventive Errors.  The proposed system can perform integrity checks on planned surgical procedures to ensure that prerequisite activities are performed (e.g., avoidance of solid foods for twelve hours before surgery, availability of needed lab results).  Not only can the system prompt for confirmation of  prerequisite activities, it can also actively participate in determining the causes of failures after they occur.

 

Procedure/Performance Errors.  In the short term the proposed automated system can provide support that consists primarily of accurate, timely error-handling.  Over a longer term, collected data can reduce further incidence of errors by identifying error-prone procedures that can then be targeted for improvement or replacement.

 

The drug, diagnostic, and preventive error categories together account for roughly one-half of all serious adverse events.  They can be addressed with existing technology and eliminated, and these changes can take place immediately.  We have the opportunity to make a profound difference in terms of both patient suffering and medical costs.

User Roles and Responsibilities

Important as system design is to success, the education and commitment of human beings are even more critical.  Table 1 shows what medical practitioners, administrators, and patients need to do.  All roles are critical for success.

 

Qualified User
Responsibilities:  Provide correct inputs within area of expertise
Stakeholders:  Patient, doctor, nurse, researcher, pharmacist

Error Reporter
Responsibilities:  Detect incorrect outputs
Stakeholders:  Patient, pharmacist, nurse, doctor, news reporter

Error Investigator
Responsibilities:  Determine causes of errors
Stakeholders:  Researcher, quality review board, medical examiner

Error-Tracking Administrator
Responsibilities:  Manage informatic roles; find errors in informatic systems
Stakeholders:  Hospital administrator, malpractice insurer

 

Table 1.  Roles and Responsibilities

 

 

Qualified Users provide credible inputs that are accurate and timely and that remain highly reliable after integrity validation, possibly by an automated system.  For example, although an MD's license to practice medicine is comprehensive, hospitals increasingly require certification by a specialty review board and evidence of continuing education before granting the doctor admitting privileges at the hospital.

 

Error Reporters identify and document potential errors.  Full error-reporting is essential.  If errors are not detected, they can easily propagate and compound one another with increasingly harmful results until finally detected and handled.  With proper informatic support, spurious error reports are harmless, imposing only additional processing overhead. 

 

Error Investigators determine the validity of the error reports they receive and provide corrective inputs to mitigate the effects of errors.  A finding by an error investigator must have greater credibility than information which is discredited by the error report and subsequent investigation.  Because error investigators may invalidate certified inputs, they also are reporters of certification errors. 

 

Error-Tracking Administrators make role assignments, ensuring that users are adequately qualified, that all errors are reported, and that error investigators successfully detect the causes of errors, including errors in the informatic systems themselves.

 

An Application of the Theory

Shortly after our first model of informatic soundness went to press, Betsy A. Lehman, Health Editor for the Boston Globe, died of a drug overdose at the Dana-Farber Cancer Institute.  The events surrounding her untimely death became the subject of a careful investigation and extensive press coverage (e.g., Kong 1995; Knox 1995a; Knox 1995b).  The reported events illustrate most of the key requirements from our theory.

 

1.   Stable form.  The drug manufacturer's treatment summary specified 4,000 mg in four days in a way that could have meant either 4g each day for four days or 4g total over a four-day treatment cycle.  The doctor who ordered the medication misinterpreted the manufacturer's intent.  Lack of stable form is a common source of drug-related errors; similar problems include sound-alike names and look-alike containers.

 

2.   Integrity-validation checks.  The amount prescribed for Ms. Lehman was inconsistent with what she had received in a previous treatment cycle.  This inconsistency was not checked, contrary to Dana-Farber policy.  At the time, there was no dosage ceiling at Dana-Farber for the drug.

 

3.   Higher credibility for error investigations.  The same drug error occurred in another patient at Dana-Farber at about the same time and was reported by a pharmacist.  The error report, investigated by the same doctor who was treating Betsy Lehman, was overridden.  Neither the doctor nor the pharmacist consulted the detailed protocol description for the drug, nor did they call the pharmaceutical company that had issued the ambiguous treatment-protocol summary to ask for clarification.

 

4.   Higher credibility for corroborated data.  Two other pharmacists corroborated the original error report.  These reports were also dropped in favor of the original erroneous interpretation of the ambiguous treatment summary.

 

5.   Basis for investigation.  After the first dose, Betsy Lehman herself reported that something was wrong relative to her previous experience.  She reported quite a different reaction to the chemotherapy.  This report was overridden by her attendants.  Error reports were not routinely logged.  There was no process by which such information could accumulate and provide the basis for a thorough investigation.

 

6.   Investigation of antecedent causes.  Lab results showed an abnormal spike in a metabolite of the administered drug.  This did not lead to discovery of the original antecedent error that caused the spike.

 

7.   Propagation of error retractions.  Six months later, the same semantic ambiguity in daily versus treatment-cycle doses killed a cancer patient at the University of Chicago Hospital.
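The daily-versus-cycle ambiguity at the heart of this case is exactly a failure of stable form.  A dose order represented with both the per-day amount and the cycle length explicit cannot be read two ways.  The following sketch illustrates the idea; the class and field names are invented for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DoseOrder:
    """A stable-form dose order: per-day amount and cycle length are both
    explicit, so a phrase like '4,000 mg in four days' cannot be
    interpreted two ways."""
    drug: str
    mg_per_day: float
    days: int

    @property
    def mg_total(self) -> float:
        return self.mg_per_day * self.days

# The manufacturer's intent, stated unambiguously: 1,000 mg/day for 4 days.
intended = DoseOrder("cyclophosphamide", mg_per_day=1000, days=4)

# The fatal misreading: 4,000 mg every day for 4 days.
misread = DoseOrder("cyclophosphamide", mg_per_day=4000, days=4)

assert intended.mg_total == 4000    # the intended cycle total
assert misread.mg_total == 16000    # four times the intended total
```

Because each field has a single meaning, an integrity check comparing `mg_total` against a protocol ceiling or a previous cycle operates on unambiguous data rather than on a phrase that must first be interpreted.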

 

Examples similar to the one cited above have driven the development of our methodology and the three primary objectives that it supports:

 

Correctness:

All certified inputs are correct.

Basis:

All inputs responsible for a given warranted output can be identified.

Error handling:

Incorrect or discredited outputs can be retracted.  Further use of incorrect or discredited inputs can be suppressed.

 

Achieving Informatic Soundness

Where do we go from here?  As medical informatics specialists we need to act both globally and locally. At the global level we can work to influence the development of standards.  The evolving HL7 standard, for example, needs to be subjected to a rigorous analysis of its ability to support informatic soundness.

 

In our own institutions we should initiate, educate, assess, plan, and implement error-reduction strategies.  The health-care industry is presently undergoing a massive automation effort aimed primarily at reducing costs; estimates of projected total expenditures run as high as $3T (Vendeland 1995).  We need to enhance our own understanding of error-handling and share it throughout our institutions, in applications ranging from purchasing decisions for IS installations to quality review boards.

 

The task will not be easy, but the rewards will be great and measurable, one life at a time.

   

Appendix A:  Relevant Empirical Data

Causes of Adverse Events

Technical errors in surgical procedures     35%
Inadequate preventive measures              22%
Diagnostic errors                           14%
Drug-related errors                          9%
Inadequate infrastructure                    2%
Other                                       17%

 

The table above is taken from Leape et al. (1991).

 

Appendix B: Sound Information Handling Objectives

The first correctness objective is to maintain correctness in the absence of introduced errors. As illustrated in the following diagram, the notion of correctness is slightly different for different kinds of information: 

  

 

 

The basis objective is to be able to justify each input and output on the basis of previously justified rules of acceptability. Thus, not all inputs can be accepted and made use of by the system, as suggested in the following diagram:

 

The error-handling objective pertains to handling once errors have been introduced.  As illustrated in the following diagram, errors arise and enter a system, where they may be falsely treated as correct and lead to additional errors that propagate out into the system's environment.  The error-handling objective is to discover such errors, issue revocations for propagated errors, and restore sound information-handling by eliminating the errors and all of their erroneous consequences from the system.
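The basis and error-handling objectives can be pictured together in a small sketch: each derived datum records the inputs it was based on, and when an input is discredited, every datum transitively derived from it is retracted.  All names in this fragment are illustrative, not part of the formal theory.

```python
from collections import defaultdict, deque

class SoundStore:
    """Toy store illustrating the basis and error-handling objectives."""

    def __init__(self):
        self.basis = {}                       # datum -> inputs it was derived from
        self.derived_from = defaultdict(set)  # input -> data derived from it
        self.retracted = set()

    def record(self, datum, inputs=()):
        """Certify a datum, recording the basis for any later audit."""
        self.basis[datum] = list(inputs)
        for inp in inputs:
            self.derived_from[inp].add(datum)

    def retract(self, datum):
        """Retract a discredited datum and, by walking the recorded basis
        links breadth-first, all of its erroneous consequences."""
        queue = deque([datum])
        while queue:
            d = queue.popleft()
            if d in self.retracted:
                continue
            self.retracted.add(d)
            queue.extend(self.derived_from[d])
        return self.retracted
```

In the Lehman case sketched in these terms, retracting the ambiguous treatment summary would have carried the retraction forward through the misread order to every dose derived from it, rather than leaving each consequence to be discovered separately.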

 

 

References

Bates, D.W., et al., July 1995, “Incidence of Adverse Drug Events and Potential Adverse Drug Events,” Journal of the American Medical Association, Vol. 274, No. 1, pp. 29-34.

 

Clark, D.D., and D.R. Wilson, April 1987, “A Comparison of Commercial and  Military Computer Security Policies,” Proceedings of the 1987 Symposium on  Security and Privacy, IEEE.

 

Guttman, J. D., and D.M. Johnson, 1994, “Three Applications of Formal Methods at MITRE,” in FME '94: Industrial Benefits of Formal Methods, edited by M. Naftalin, T. Denvir, and M. Bertran, Springer Lecture Notes in Computer Science, Vol. 873, pp. 55-65.

 

Knox, R.A., and D. Golden, May 28, 1995, “Dana-Farber turmoil seen,” Boston Sunday Globe, pp. 1&13.

 

Knox, R.A., December 26, 1995, “Overdoses still weigh heavy at Dana-Farber,” The Boston Globe, pp. 1&20.

 

Kong, D., March 25, 1995, “Safeguards failed at Dana-Farber,” The Boston Globe, pp. 1&5.

 

Leape, L.L., et al., February 1991, “The Nature of Adverse Events in Hospitalized Patients: Results of the Harvard Medical Practice Study,” New England Journal of Medicine, Vol. 324, No. 6, pp. 377-384.

 

Leape, L.L., December 1994, “Error in Medicine,” Journal of the American Medical Association, Vol. 272, No. 23, pp. 1851-1857.

 

Tarski, A., 1936, “The Concept of Truth in Formalized Languages,” in Logic, Semantics, Metamathematics, translated by J. H. Woodger, Oxford University Press, 1956, pp. 152-278.

 

Vendeland, A. J., April 10, 1995, “Medical Moon Shot,” Computerworld.

 

Williams, J. G., and L. J. LaPadula, 1995, “Modeling External Consistency of Automated Systems,” Journal of High Integrity Systems, Vol. 1, No. 3, pp. 249-267.

 

 

Acknowledgments

The author thanks Len LaPadula for many substantive discussions and Dr. Lucian Leape for valuable suggestions.

 

Biography

James Williams has extensive experience in creating and evaluating high-assurance systems.  His primary interests are in promoting sound information handling in health care.  Dr. Williams has contributed professionally in the area of program verification — developing and testing methodologies for ensuring that computer programs perform their intended functions.  He has published many papers on formal modeling, program verification, automated deduction, computer security, and various mathematical topics including logic, topology, category theory, and algebra.  A member of Computer Scientists for Social Responsibility, he is an avid programmer in his spare time.  He holds a Ph.D. in mathematics from the University of California at Berkeley and did post-graduate work in computer science at the University of Texas at Austin.

 


 

Published in Toward an Electronic Patient Record '96, Vol. 2, pp. 348-355, Medical Records Institute, May 1996.