This article explores how to assess analytical quality goals when the same analyte can be tested on multiple systems.
By Curtis A. Parvin, PhD, John Yundt-Pacheco and Max Williams
Quality Assurance Series
Editor's note: This is the debut of a multi-part quality assurance series. Future issues will address topics related to the evaluation, design and implementation of laboratory quality control strategies.
For a variety of reasons, including consolidation, automation and the rising sophistication of testing platforms, it is increasingly common for clinical diagnostic laboratories to have multiple instruments or analytical "modules" that evaluate the same analyte. For example, you may have serum glucose tested on three platforms: a single-module Brand X instrument and a different Brand X model that contains two analytical modules. In this context, an analytical module is an analytical subsystem that requires calibration and generally behaves as an independent test device with its own performance characteristics. Some of the larger test systems have six or more analytical subsystems in a single unit, each requiring calibration and each having its own imprecision and bias characteristics. In addition, track systems may connect multiple analyzers to form a large system in which each analyzer acts as an analytical subsystem.
Parallel test systems can produce high throughput and improved turnaround times, but they also create additional quality assurance issues. Your clients expect their results to be accurate and reliable. Dr. Jones doesn't specify which analyzer should perform Mr. Brown's glucose, nor do you want her to make such a request. Because each analytical module (or instrument) has its own imprecision and bias characteristics, a test result's uncertainty (range of test results that could be produced) will likely increase if a specimen can be evaluated on any one of multiple analytical modules. Regardless of which of the three Brand X modules performs the test, there must be confidence that an accurate result will be produced. An important quality management decision, therefore, is determining how much difference between analytical modules (or instruments) can be tolerated while maintaining an acceptable quality level.
Each time an analytical module produces a result, that result contains a certain amount of random measurement error. This random measurement error can make the result higher or lower than it should be, so that repeated analyses of the same specimen produce a spread of results. The variation between multiple evaluations of the same specimen is a function of the analytical imprecision of the module, which can be quantified by computing the module's coefficient of variation (CVa): the higher the CVa, the greater the range of results. If the range of serial evaluations is acceptable, the CVa of the module is acceptable. The amount of analytical imprecision that is acceptable can be expressed as part of a quality specification for the analyte.
Unfortunately, there is no universally accepted quality specification for most analytes. In 1999, a scientific conference was organized in Stockholm to evaluate how different quality specifications should be judged. Experts there concluded that quality specifications could be ranked in a hierarchy by preference and produced the following ranking from most to least preferable:1
- Quality specifications derived from specific clinical situations
- Quality specifications derived from general clinical situations
- Quality specifications derived from professional recommendations
- Quality specifications derived from regulatory agencies
- Quality specifications derived from "state-of-the-art" performance (as demonstrated by data from proficiency testing)
Within-individual Biological Variation
Most clinical laboratories evaluate specimens without knowing the specific clinical situation, so it's generally not feasible to set quality specifications based on specific clinical situations. It is feasible, however, to set quality specifications based on the general clinical situation. One approach is to use knowledge of the within-individual biological variation of the analyte in question to set an upper tolerance limit for allowable analytical imprecision.
The within-individual biological variation of an analyte reflects the range of values observed in a typical individual over time (CVw). A clinically significant change in an analyte is expected to be larger than the expected within-individual biological variation.2 If analytical imprecision is small relative to within-individual biological variation, it is unlikely that clinically significant shifts in an individual's results are due to random measurement error. A general rule of thumb is that analytical imprecision should be less than half the within-individual biological variation:
CVa < 0.5CVw
If analytical imprecision is less than half the within-individual biological variation, then random measurement error will increase the overall variability (biological variability plus analytical imprecision) in an individual's test results by less than 12%.3
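The less-than-12% figure follows from combining the independent biological and analytical variance components in quadrature. A quick Python check (an illustration, not part of the original article) shows the arithmetic:

```python
import math

cv_w = 5.7          # within-individual biological variation (%); glucose value cited later in the article
cv_a = 0.5 * cv_w   # analytical imprecision just meeting the CVa < 0.5*CVw goal

# Independent variation sources add as variances, so the total CV is the
# root sum of squares of the biological and analytical components.
cv_total = math.sqrt(cv_w**2 + cv_a**2)
increase = (cv_total / cv_w - 1) * 100   # percent increase over biology alone

print(f"Total CV: {cv_total:.2f}%  (increase of {increase:.1f}% over CVw)")
```

Because the increase factor is sqrt(1 + 0.25) ≈ 1.118 regardless of the analyte, the result (an 11.8% increase) holds for any CVw, not just glucose.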
A database of estimates for the components of biological variation has been constructed by Ricos et al.4 The 2010 edition of the database contains the CVw estimates for about 400 analytes that can be used to set limits for analytical imprecision (the database can be found in the QC documents section at www.qcnet.com).
If analytical imprecision should be less than half within-individual biological variation for a lab with a single instrument, what should it be when the lab has multiple analytical modules? Fortunately, the straightforward answer is: the same. If we view the multiple modules as part of an analytical system, the total imprecision of the analytical system (CVas) should be less than half the analyte's within-individual biological variation:
CVas < 0.5CVw
Each analytical module has its own characteristic imprecision. Additionally, there may be bias between analytical modules. The total imprecision (uncertainty) of the analytical system will be a function of the imprecision of the individual modules and the bias between them. The formulas for the total uncertainty of the system are:5

Mas = Σ fi*Mi

SDas = sqrt[Σ fi*(SDi^2 + (Mi - Mas)^2)]

CVas = 100*SDas/Mas

In these formulas, fi represents the fraction of patient samples tested on the ith analytical module of the system, Mi is the mean level, and SDi is the standard deviation for the ith module based on repeated testing of a specimen. Mi and SDi can be based on quality control data and are easily obtained from QC management software. Mas, SDas and CVas are the mean, standard deviation and coefficient of variation for the total analytical system.
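The weighted-mixture calculation for the total analytical system can be sketched in Python. The function below is a hypothetical helper (the name and interface are our own, not from the article) that combines per-module fractions, means and SDs:

```python
import math

def system_stats(f, M, SD):
    """Combine per-module testing fractions f, means M and standard
    deviations SD into the total analytical system mean, SD and CV
    using the mixture-distribution formulas."""
    m_as = sum(fi * mi for fi, mi in zip(f, M))
    # Total variance = weighted within-module variance plus the
    # between-module (bias) contribution.
    var_as = sum(fi * (sdi**2 + (mi - m_as)**2)
                 for fi, mi, sdi in zip(f, M, SD))
    sd_as = math.sqrt(var_as)
    cv_as = 100 * sd_as / m_as
    return m_as, sd_as, cv_as

# Sanity check: identical modules (no bias) reduce to a single module's CV.
m, sd, cv = system_stats([0.5, 0.5], [115.0, 115.0], [3.0, 3.0])
print(round(cv, 2))  # same as 100*3.0/115 for one module
```

With no bias between modules, the between-module term vanishes and the system CV equals the common module CV, which is the claim made in the next paragraph.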
If there is no bias between analytical modules (all modules have the same mean) and each module just meets the goal of CVi < 0.5CVw (CVi = 100*SDi/Mi), then the total analytical system will also just meet the goal.
On the other hand, if modules have different means (bias exists between modules), the imprecision of the individual modules must be substantially less than 0.5CVw for the total analytical system to meet the goal.
For example, assume a lab has three analytical modules that test glucose. The within-individual biological variation for glucose reported by Ricos is CVw = 5.7%. Therefore, the lab's goal is CVas < 0.5*5.7 = 2.85%. Each module performs approximately one-third of the glucose tests ordered on patient samples (f1 = f2 = f3 = 1/3). The mean glucose levels for the three modules based on repeated testing of a quality control material are 115 mg/dL, 114 mg/dL and 120 mg/dL, and the standard deviations are 3.0 mg/dL, 3.1 mg/dL and 3.1 mg/dL.
Each individual module meets the < 2.85% goal (100*3.0/115 = 2.6%, 100*3.1/114 = 2.7% and 100*3.1/120 = 2.6%). However, applying the formulas for the total analytical test system gives Mas = 116.3 mg/dL, SDas = 4.0 mg/dL and CVas = 3.5%, which does not meet the goal. The size of the bias between modules causes the total uncertainty of the test system to exceed the performance goal.
As a second example, assume the lab has three modules that test glucose, with two performing the majority of patient testing (f1 = 0.5, f2 = 0.4) and the third module used less frequently (f3 = 0.1). The mean glucose levels for the three modules are 115 mg/dL, 116 mg/dL and 112 mg/dL and the standard deviations are 3.0 mg/dL, 2.9 mg/dL and 3.1 mg/dL, giving individual coefficients of variation of 100*3.0/115 = 2.6%, 100*2.9/116 = 2.5% and 100*3.1/112 = 2.8%. In this case, Mas = 115.1 mg/dL, SDas = 3.2 mg/dL and CVas = 2.76%. Given the imprecision characteristics of the individual modules, the bias between modules is not so large as to cause the uncertainty of the total analytical test system to exceed the analytical quality goal. Therefore, serum glucose results generated in the laboratory (irrespective of the analytical module used) meet the performance goal.
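Both worked examples can be verified with a short, self-contained Python script (an illustrative sketch; the function name is our own):

```python
import math

def system_stats(f, M, SD):
    """Mixture-distribution mean, SD and CV for the total analytical system."""
    m_as = sum(fi * mi for fi, mi in zip(f, M))
    sd_as = math.sqrt(sum(fi * (sdi**2 + (mi - m_as)**2)
                          for fi, mi, sdi in zip(f, M, SD)))
    return m_as, sd_as, 100 * sd_as / m_as

goal = 0.5 * 5.7  # CVas goal of 2.85% for glucose

# Example 1: equal workload, one module biased high (120 vs 114-115 mg/dL)
m1, sd1, cv1 = system_stats([1/3, 1/3, 1/3], [115, 114, 120], [3.0, 3.1, 3.1])

# Example 2: unequal workload, smaller between-module bias
m2, sd2, cv2 = system_stats([0.5, 0.4, 0.1], [115, 116, 112], [3.0, 2.9, 3.1])

print(f"Example 1: Mas={m1:.1f}, SDas={sd1:.1f}, CVas={cv1:.2f}%, "
      f"meets goal: {cv1 < goal}")
print(f"Example 2: Mas={m2:.1f}, SDas={sd2:.1f}, CVas={cv2:.2f}%, "
      f"meets goal: {cv2 < goal}")
```

The script reproduces the article's figures: example 1 fails the 2.85% goal (CVas ≈ 3.5%) even though every module individually passes, while example 2 passes (CVas ≈ 2.76%).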
Assuring Analytical Quality
Adding analyzers and modules to gain throughput and efficiency requires vigilance: analytical quality that is acceptable on each instrument individually may no longer be acceptable when the instruments are considered as part of the total laboratory system. Careful selection of analytical quality goals, and ongoing management against them, will help 1) justify your purchase of that second Brand X instrument, 2) keep your clinician clients from questioning results and 3) allow you to keep up with your laboratory's workload demands.
Dr. Parvin is manager of Advanced Statistical Research; John Yundt-Pacheco is a Scientific Fellow; and Max Williams is Global Scientific and Professional Affairs Manager, Bio-Rad.
1. Kenny D, Fraser CG, Petersen PH, Kallner A. Consensus agreement. Scand J Clin Lab Invest 1999;59:585.
2. Fraser CG. Biological variation: From principles to practice. Washington DC: AACC Press, 2001:67-90.
3. Fraser CG. Biological variation: From principles to practice. Washington DC: AACC Press, 2001:55.
4. Ricos C, Alvarez V, Cava F, Garcia-Lario JV, Hernandez A, Jimenez CV, Minchinela J, Perich C, Simon M. Current databases on biological variation: Pros, cons and progress. Scand J Clin Lab Invest 1999;59:491-500.
5. Stuart A, Ord JK. Kendall's advanced theory of statistics, Vol 1 (5th edition). New York: Oxford University Press, 1987:171.
Copyright 2015 Merion Matters. All rights reserved.