
Risk Management and Statistical QC

The combination can improve the overall quality of patient results

By Curtis A. Parvin, PhD, John Yundt-Pacheco and Andy Quintenz


The Clinical and Laboratory Standards Institute (CLSI) published a guideline titled "EP23-A: Laboratory Quality Control Based on Risk Management."1 It was the latest in a series of documents that provide risk management guidance to laboratories and medical device manufacturers.2-4 These efforts reflect an ongoing trend toward placing a greater focus on the patient throughout all areas of the healthcare enterprise, including the clinical laboratory.5

Risk management provides a formal approach to identify potential failure modes in the lab, rank those modes in terms of their risk, and establish policies and procedures to prevent or reduce (mitigate) the risks. The concept of risk has different definitions depending on the area of application. In the risk management arena, risk is a concept comprising two components:

  • the likelihood of occurrence of an unwanted outcome and
  • the severity of the unwanted outcome.

In laboratory medicine, the unwanted outcome is generally defined as patient harm. Figure 1 shows the sequence of events leading from a failure to patient harm; it is similar to a figure that appears in EP23-A. The occurrence of a failure mode creates an out-of-control condition. Depending on the type, size and duration of the out-of-control condition, some number of incorrect patient results are produced.

An incorrect patient result is one that fails to meet the requirements for its intended medical use. This is generally defined as a result with measurement error that exceeds an allowable total error specification.5 Some or all of the incorrect patient results produced during the out-of-control state will be reported, depending on how and when the laboratory reports results. The likelihood that an incorrect result reported to a healthcare provider will lead to an incorrect action, and the probability that the incorrect action causes patient harm, are largely outside the laboratory's control; they depend on the nature of the analyte and the characteristics of the patient population the lab serves.
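
The error model behind "incorrect result" can be made concrete. The sketch below is a simplified illustration rather than any guideline's method: it assumes normally distributed measurement error and expresses the allowable total error (TEa) and any out-of-control shift in multiples of the assay SD. The function name `p_incorrect` is invented for this example.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_incorrect(tea_sd, shift_sd=0.0):
    """Probability that a result's measurement error exceeds the
    allowable total error, with TEa and any systematic shift both
    expressed in multiples of the assay SD (shift_sd = 0 means the
    process is in control)."""
    return phi(-tea_sd - shift_sd) + (1.0 - phi(tea_sd - shift_sd))

# In control with TEa = 3 SD, very few results are incorrect.
p_in = p_incorrect(3.0)
# A 2 SD out-of-control shift makes incorrect results far more likely.
p_out = p_incorrect(3.0, shift_sd=2.0)
```

With these illustrative numbers, the in-control defect rate is about 0.3%, while the shifted process produces roughly 16% incorrect results, which is why detecting the shift quickly matters.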

Fig. 1 Sequence of events leading to patient harm


Identifying Potential Failure Modes

The first task is to map the total testing process and identify potential failure modes that could lead to patient harm. For each mode, the lab should estimate its rate of occurrence. While quantitative estimates of expected failure rates are desirable, they can be difficult to obtain, so a descriptive, semiquantitative approach is often employed instead. An example given in EP23-A suggests a five-level categorization for rate of occurrence:

  • Frequent = once per week
  • Probable = once per month
  • Occasional = once per year
  • Remote = once every few years
  • Improbable = once in the life of the measuring system
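
To make the semiquantitative scale usable in a script or spreadsheet, a lab could map its estimated failure rates onto these categories. The sketch below is illustrative; the numeric boundaries are assumptions chosen to match the descriptions above, not values from EP23-A.

```python
def occurrence_category(events_per_year):
    """Map an estimated failure rate (events per year) to the
    EP23-A style descriptive categories. The boundaries are
    illustrative assumptions, not part of the guideline."""
    if events_per_year >= 52:      # roughly once per week or more
        return "Frequent"
    if events_per_year >= 12:      # roughly once per month
        return "Probable"
    if events_per_year >= 1:       # roughly once per year
        return "Occasional"
    if events_per_year >= 0.2:     # roughly once every few years
        return "Remote"
    return "Improbable"            # once in the system's life
```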

The laboratory may identify many potential failure modes in the total testing process that could lead to an out-of-control condition. The lab must then develop a strategy to control the number of incorrect patient results reported due to any of the potential failure modes.

Detecting Out-of-Control Conditions

It is advantageous to devise specific control procedures that address each potential failure mode at the point where failure can occur. However, there will always be failure modes that are never identified or that cannot be adequately controlled at the point of failure. A lab should strive to minimize the number of out-of-control conditions it creates but plan for their eventual occurrence; sooner or later, something unexpected will cause an out-of-control condition.

Statistical quality control based on the periodic measurement of stable QC materials is the approach that has been successfully employed for decades to detect out-of-control conditions. Defining such a QC strategy involves answering three questions:

  1. when to schedule QC evaluations,
  2. how many QC samples to measure and
  3. what QC rule(s) to apply to the QC sample results to decide the in-control or out-of-control status of the testing process.6,7
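
As one concrete example of the third question, the sketch below applies two widely used Westgard-style rules (1_3s and 2_2s) to the z-scores of a run's QC sample results. It is a minimal illustration of one possible rule set, not a recommended strategy.

```python
def qc_status(z_scores):
    """Decide in-control vs. out-of-control from QC z-scores
    (QC result minus target, in SDs).
    1_3s rule: any single value beyond +/-3 SD -> reject.
    2_2s rule: two consecutive values beyond +/-2 SD on the
    same side of the target -> reject."""
    for z in z_scores:
        if abs(z) > 3.0:
            return "out-of-control"
    for a, b in zip(z_scores, z_scores[1:]):
        if (a > 2.0 and b > 2.0) or (a < -2.0 and b < -2.0):
            return "out-of-control"
    return "in-control"
```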

Given the answers to these questions, the performance characteristics of the QC strategy can be quantitatively assessed. Different outcome metrics can be computed, but the outcome metric that best fits into the overall model of the sequence of events that can lead to patient harm depicted in Fig. 1 is the expected number of incorrect patient results produced and reported due to an out-of-control condition of a given type and magnitude.5
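
Under strong simplifying assumptions, this outcome metric can be approximated in a few lines: the out-of-control condition begins just after a QC event, each subsequent QC event detects it independently with probability p_detect (so the expected number of runs until detection is 1/p_detect, a geometric distribution), and every result produced before detection is reported. The function below is an illustrative sketch with invented parameter names, not the published model.

```python
def expected_incorrect_reported(p_error, p_detect, patients_per_run):
    """Rough expected number of incorrect patient results reported
    during an out-of-control condition. p_error is the probability
    that a result produced in the out-of-control state is incorrect,
    p_detect is the per-QC-event detection probability, and
    patients_per_run is the number of patient results between QC
    events. Expected runs to detection is 1/p_detect (geometric)."""
    return p_error * patients_per_run / p_detect
```

For example, if 16% of results are incorrect during the condition, 100 patient results are reported between QC events, and each QC event has a 50% chance of detecting the condition, the expectation is about 32 incorrect reported results.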

Assessing Likelihood, Severity

Even though the lab has little control over the probabilities leading to patient harm after an incorrect result is reported, the lab should make its best attempt at estimating these probabilities based on the analyte, patient population and medical judgment. Likewise, the severity of harm to a patient resulting from an incorrect lab result will depend on the analyte and the patient population. The severity of harm requires assessment of the various ways the results may be used. If multiple scenarios leading to different degrees of severity are possible, the lab should consider the most likely and most harmful scenarios. EP23-A provides an example of a severity scale using five descriptive categories:

  • Negligible = inconvenience or temporary discomfort
  • Minor = temporary injury or impairment not requiring professional medical intervention
  • Serious = injury or impairment requiring professional medical intervention
  • Critical = permanent impairment or life-threatening injury
  • Catastrophic = patient death
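
Combining the occurrence and severity scales gives a simple risk ranking. The sketch below scores each axis by its position on the two five-level scales and applies an acceptability threshold; both the scoring and the threshold are invented for illustration and are not taken from EP23-A.

```python
OCCURRENCE = ["Improbable", "Remote", "Occasional", "Probable", "Frequent"]
SEVERITY = ["Negligible", "Minor", "Serious", "Critical", "Catastrophic"]

def risk_acceptable(occurrence, severity):
    """Toy risk-acceptability rule in the spirit of a risk matrix:
    the combined likelihood + severity score must stay below a
    threshold. Scoring and threshold are illustrative assumptions."""
    score = OCCURRENCE.index(occurrence) + SEVERITY.index(severity)
    return score < 5
```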

Risk Management, Statistical QC

Risk management activities that identify failure modes and estimate the likelihood and severity of patient harm from an incorrect reported patient result (areas shaded in blue) complement statistical QC planning and implementation, which controls the number of incorrect patient results produced and reported in the event of an out-of-control condition (areas shaded in green). In combination they address all aspects of the sequence of events that can lead to patient harm. Risk management should be used to minimize the number of out-of-control conditions occurring in the analysis process; statistical QC should be used to mitigate the impact of the out-of-control conditions that will inevitably arise. Protocols based on statistical QC reduce the probability that an out-of-control condition will lead to an incorrect result being reported, and the combination of risk management mitigation activities and statistical QC practices can significantly reduce the number of incorrect results reported.

Fig. 2: The complementary domains of risk management and statistical QC 

In summary, recent guidelines such as EP23-A introduce risk management principles that may not be familiar to many in the laboratory community. In combination with statistical QC, risk management principles and activities can help the laboratory estimate and control the chance that failures in the laboratory lead to incorrect patient results that cause patient harm.

Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is Scientific Fellow; and Andy Quintenz is Global Scientific and Professional Affairs Manager, Bio-Rad.

References

  1. CLSI. Laboratory Quality Control Based on Risk Management; Approved Guideline. CLSI document EP23-A. Wayne, PA: Clinical and Laboratory Standards Institute; 2011.
  2. CLSI. Risk Management Techniques to Identify and Control Laboratory Error Sources; Approved Guideline - Second Edition. CLSI document EP18-A2. Wayne, PA: Clinical and Laboratory Standards Institute; 2009.
  3. ISO. Medical devices - Application of risk management to medical devices. ISO 14971. Geneva, Switzerland: International Organization for Standardization; 2007.
  4. ISO. Medical laboratories - Reduction of error through risk management and continual improvement. ISO 22367. Geneva, Switzerland: International Organization for Standardization; 2008.
  5. Parvin CA, Yundt-Pacheco J, Williams M. The focus of laboratory quality control: Why QC strategies should be designed around the patient, not the instrument. ADVANCE for Administrators of the Laboratory 2011;20(3):48-9.
  6. Parvin CA, Yundt-Pacheco J, Williams M. Designing a quality control strategy: In the modern laboratory three questions must be answered. ADVANCE for Administrators of the Laboratory 2011;20(5):53-4.
  7. Parvin CA, Yundt-Pacheco J, Williams M. The frequency of quality control testing. QC testing by time or number of patient specimens and the implications for patient risk are explored. ADVANCE for Administrators of the Laboratory 2011;20(7):66-9.

Copyright 2015 Merion Matters. All rights reserved.
