In laboratory medicine, the unwanted outcome is generally defined as patient harm. This figure is similar to one that appears in EP23-A. The occurrence of a failure mode creates an out-of-control condition. Depending on the type, size and duration of the out-of-control condition, some number of incorrect patient results are produced.
An incorrect patient result is one that fails to meet the requirements for its intended medical use, generally defined as a result whose measurement error exceeds an allowable total error specification.5 How many of the incorrect patient results produced during an out-of-control state are actually reported depends on how and when the laboratory reports results. The likelihood that an incorrect result reported to a healthcare provider leads to an incorrect action, and the probability that the incorrect action causes patient harm, are largely outside the laboratory's control; they depend on the nature of the analyte and the characteristics of the patient population the laboratory serves.
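The sequence of events above is multiplicative: each stage scales down the number of cases that reach the next. A minimal sketch, using purely illustrative numbers (none of these probabilities come from the cited guidelines):

```python
# Illustrative sketch of the sequence-of-events model: the expected number of
# harmed patients is the product of the counts/probabilities at each stage.
# All numeric values below are assumptions chosen for illustration only.

def expected_harm(n_incorrect_reported: float,
                  p_incorrect_action: float,
                  p_harm_given_action: float) -> float:
    """Expected number of patients harmed, given the number of incorrect
    results reported and the downstream conditional probabilities."""
    return n_incorrect_reported * p_incorrect_action * p_harm_given_action

# e.g. 20 incorrect results reported, 25% lead to an incorrect clinical
# action, and 10% of those actions cause harm:
print(expected_harm(20, 0.25, 0.10))  # 0.5 expected harmed patients
```

Because the stages multiply, reducing the number of incorrect results reported (the stage the laboratory controls) reduces expected harm proportionally, whatever the downstream probabilities are.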
Fig. 1 Sequence of events leading to patient harm
The basics of measurement uncertainty
So what exactly is measurement uncertainty, and where does it come from?
Uncertainty exists because, no matter how carefully assays are controlled or instruments are maintained, there will always be variation in a measurement process. When running tests in a laboratory, there are many different variables that can influence instrument performance. Factors like sample storage and handling, environmental conditions, operator changes, and calibrator conditions can all affect assay results. These factors are all sources of uncertainty.
There are many sources of uncertainty, and while attempts should be made to control them when possible, assay results will inevitably still vary. This is unavoidable, as variation and error are inherently involved in any measuring process. Defining and calculating uncertainty ranges provides a useful context for understanding how much variation we are working with.
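When the individual uncertainty sources are independent, their standard uncertainties are conventionally combined by root-sum-of-squares and reported as an expanded uncertainty with a coverage factor (the GUM approach). A minimal sketch, with component values that are illustrative assumptions rather than real assay data:

```python
import math

# Combine independent standard uncertainty components by root-sum-of-squares
# and report an expanded uncertainty with coverage factor k = 2 (~95%
# coverage). Component values are illustrative assumptions only.

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

# Example components (e.g. in mmol/L): calibrator, repeatability, between-day
components = [0.10, 0.15, 0.08]
u_c = combined_standard_uncertainty(components)
U = 2 * u_c  # expanded uncertainty, k = 2
print(round(u_c, 3), round(U, 3))  # → 0.197 0.394
```

Note that the largest component dominates the combined value, which is why effort spent controlling the biggest uncertainty source pays off far more than polishing the small ones.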
Uncertainty sources exist in the pre- and post-analytical phases, but those sources are often difficult to identify and quantify. According to ISO 15189, uncertainty calculations need only account for uncertainty sources in the analytical phase, where the measurement actually occurs.
“The relevant uncertainty components are those associated with the actual measurement process, commencing with the presentation of the sample to the measurement procedure and ending with the output of the measured value.”
–ISO 15189
Identifying Potential Failure Modes
The first task is to map the total testing process and identify potential failure modes that could lead to patient harm. For each mode, the lab should estimate its rate of occurrence. While quantitative estimates of the expected failure rates are desirable, it is recognized that these may be difficult to obtain. As an alternative, a descriptive semiquantitative approach is often employed. An example given in EP23-A suggests a five-level categorization for rate of occurrence:
- Frequent = once per week
- Probable = once per month
- Occasional = once per year
- Remote = once every few years
- Improbable = once in the life of the measuring system
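If the lab wants to use these descriptive categories in quantitative risk calculations, each can be assigned an approximate numeric rate. The numeric equivalents below are rough assumptions for illustration, not values prescribed by EP23-A:

```python
# Approximate annual occurrence rates for the EP23-A-style descriptive
# categories. The numbers are illustrative assumptions; "Improbable" assumes
# a measuring-system lifetime of roughly ten years.
OCCURRENCE_PER_YEAR = {
    "Frequent":   52.0,   # ~once per week
    "Probable":   12.0,   # ~once per month
    "Occasional":  1.0,   # ~once per year
    "Remote":      0.25,  # ~once every few years
    "Improbable":  0.1,   # ~once in the life of the measuring system
}
```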
The laboratory may identify many potential failure modes in the total testing process that could lead to an out-of-control condition. The lab must then develop a strategy to control the number of incorrect patient results reported due to any of the potential failure modes.
Detecting Out-of-Control Conditions
It is advantageous to devise specific control procedures that address each potential failure mode at its point of occurrence. However, there will always be potential failure modes that are never identified or that cannot be adequately controlled at the point of failure. A lab should strive to minimize the number of out-of-control conditions created but plan for their eventual presence, as sooner or later something unexpected will happen that causes an out-of-control condition.
Statistical quality control based on the periodic measurement of stable QC materials is the approach that has been successfully employed for decades to detect out-of-control conditions. Defining a QC strategy based on the periodic measurement of stable QC materials involves answering three questions:
- when to schedule QC evaluations,
- how many QC samples to measure and
- what QC rule(s) to apply to the QC sample results to decide the in-control or out-of-control status of the testing process.6,7
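As an example of the third question, one widely used QC rule is the "1-3s" rule: flag an out-of-control condition when a QC result falls more than 3 SDs from the established mean. A minimal sketch, with mean and SD values that are illustrative assumptions:

```python
# Minimal sketch of the "1-3s" QC rule: declare the process out of control
# when any QC result falls more than 3 SDs from the established QC mean.
# The mean/SD values in the examples are illustrative assumptions.

def rule_1_3s(qc_results, mean, sd):
    """Return True (out of control) if any QC result exceeds mean ± 3 SD."""
    return any(abs(x - mean) > 3 * sd for x in qc_results)

print(rule_1_3s([101.2, 99.5], mean=100.0, sd=2.0))  # False: in control
print(rule_1_3s([107.1], mean=100.0, sd=2.0))        # True: 3.55 SD away
```

In practice, rules are often combined (e.g. adding a "2-2s" rule across consecutive QC samples) to trade off error detection against false rejection; the choice of rule is part of the QC strategy design.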
Given the answers to these questions, the performance characteristics of the QC strategy can be quantitatively assessed. Different outcome metrics can be computed, but the outcome metric that best fits into the overall model of the sequence of events that can lead to patient harm depicted in Fig. 1 is the expected number of incorrect patient results produced and reported due to an out-of-control condition of a given type and magnitude.5
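This outcome metric can be estimated by simulation: run patient results under a systematic shift of a given size, count those whose error exceeds the allowable total error, and stop when the QC rule detects the condition. The sketch below is a hedged illustration; the shift size, allowable total error, QC schedule, and detection rule are all assumptions, not values from the cited references:

```python
import random
import statistics

# Monte Carlo sketch of the outcome metric described above: the expected
# number of incorrect patient results produced during an out-of-control
# condition of a given magnitude. All parameters are illustrative assumptions.

def simulate_expected_incorrect(shift_sd=2.0, tea_sd=3.0,
                                patients_per_qc=50, trials=2000, seed=1):
    """Mean number of incorrect results (|error| > TEa, in SD units) produced
    from the onset of a systematic shift until a single-sample 1-3s QC rule
    detects it."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        incorrect = 0
        detected = False
        while not detected:
            # patient results between QC events carry the systematic shift
            for _ in range(patients_per_qc):
                if abs(rng.gauss(shift_sd, 1.0)) > tea_sd:
                    incorrect += 1
            # QC event: one QC sample, flagged if beyond 3 SD (1-3s rule)
            detected = abs(rng.gauss(shift_sd, 1.0)) > 3.0
        counts.append(incorrect)
    return statistics.mean(counts)

print(round(simulate_expected_incorrect()))  # typically ≈ 50 for these assumptions
```

Varying the QC frequency (`patients_per_qc`) or the rule in this kind of simulation shows directly how each design choice changes the expected number of incorrect results, which is what makes this metric useful for comparing QC strategies.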
Assessing Likelihood, Severity
Even though the lab has little control over the probabilities leading to patient harm after an incorrect result is reported, the lab should make its best attempt at estimating these probabilities based on the analyte, patient population and medical judgment. Likewise, the severity of harm to a patient resulting from an incorrect lab result will depend on the analyte and the patient population. The severity of harm requires assessment of the various ways the results may be used. If multiple scenarios leading to different degrees of severity are possible, the lab should consider the most likely and most harmful scenarios. EP23-A provides an example of a severity scale using five descriptive categories:
- Negligible = inconvenience or temporary discomfort
- Minor = temporary injury or impairment not requiring professional medical intervention
- Serious = injury or impairment requiring professional medical intervention
- Critical = permanent impairment or life-threatening injury
- Catastrophic = patient death
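A common way to use the occurrence and severity scales together is a risk-acceptability matrix: rare, low-severity failure modes are accepted, while frequent or high-severity ones demand additional controls. A minimal sketch; the scoring rule and threshold are illustrative assumptions, and each laboratory must set its own acceptability criteria:

```python
# Simple risk-acceptability matrix over the two five-level scales above.
# The acceptability criterion (index sum <= 3) is an illustrative assumption.

OCCURRENCE = ["Improbable", "Remote", "Occasional", "Probable", "Frequent"]
SEVERITY = ["Negligible", "Minor", "Serious", "Critical", "Catastrophic"]

def risk_acceptable(occurrence: str, severity: str) -> bool:
    """Example criterion: the sum of the two scale indices must stay small."""
    score = OCCURRENCE.index(occurrence) + SEVERITY.index(severity)
    return score <= 3  # assumed threshold for illustration

print(risk_acceptable("Remote", "Minor"))       # True: accept
print(risk_acceptable("Frequent", "Critical"))  # False: needs mitigation
```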
Risk Management, Statistical QC
Risk management activities identify failure modes and estimate the likelihood and severity of patient harm from an incorrect reported patient result (areas shaded in blue). Statistical QC planning and implementation control the number of incorrect patient results produced and reported in the event of an out-of-control condition (areas shaded in green). The two complement one another and in combination address all aspects of the sequence of events that can lead to patient harm.
Risk management should be used to minimize the number of out-of-control conditions occurring in the analysis process. Statistical QC should be used to mitigate the impact of the out-of-control conditions that inevitably arise. Protocols based on statistical QC can reduce the probability that an out-of-control condition will lead to an incorrect result being reported. The combination of risk management mitigation activities and QC practices based on statistical QC can significantly reduce the number of incorrect results reported.
Fig. 2: The complementary domains of risk management and statistical QC
In summary, recent guidelines such as EP23-A introduce risk management principles that may not be familiar to many in the laboratory community. In combination with statistical QC, risk management principles and activities can help the laboratory estimate and control the chance of failures in the laboratory leading to incorrect patient results that cause patient harm.
Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is Scientific Fellow; and Andy Quintenz is Global Scientific and Professional Affairs Manager, Bio-Rad.
1. CLSI. Laboratory Quality Control Based on Risk Management; Approved Guideline. CLSI document EP23-A. Wayne, PA: Clinical and Laboratory Standards Institute; 2011.
2. CLSI. Risk Management Techniques to Identify and Control Laboratory Error Sources; Approved Guideline - Second Edition. CLSI document EP18-A2. Wayne, PA: Clinical and Laboratory Standards Institute; 2009.
3. ISO. Medical devices - Application of risk management to medical devices. ISO 14971. Geneva, Switzerland: International Organization for Standardization; 2007.
4. ISO. Medical laboratories - Reduction of error through risk management and continual improvement. ISO 22367. Geneva, Switzerland: International Organization for Standardization; 2008.
5. Parvin CA, Yundt-Pacheco J, Williams M. The focus of laboratory quality control: Why QC strategies should be designed around the patient, not the instrument. ADVANCE for Administrators of the Laboratory 2011;20(3):48-9.
6. Parvin CA, Yundt-Pacheco J, Williams M. Designing a quality control strategy: In the modern laboratory three questions must be answered. ADVANCE for Administrators of the Laboratory 2011;20(5):53-4.
7. Parvin CA, Yundt-Pacheco J, Williams M. The frequency of quality control testing. QC testing by time or number of patient specimens and the implications for patient risk are explored. ADVANCE for Administrators of the Laboratory 2011;20(7):66-9.
Copyright 2015 Merion Matters. All rights reserved.