The quality of public reporting of bloodstream infection rates among hospitals may be affected by variation in surveillance methods, according to a study in the November 10 issue of JAMA.

"Public reporting of hospital-specific infection rates is widely promoted as a means to improve patient safety. Central line [central venous catheter]-associated bloodstream infection (BSI) rates are considered a key patient safety measure because such infections are frequent, lead to poor patient outcomes, are costly to the medical system, and are preventable. Publishing infection rates on hospital report cards, which is increasingly required by regulatory agencies, is intended to facilitate interhospital comparisons that inform health care consumers and provide incentive for hospitals to prevent infections. Interhospital comparisons of infection rates, however, are valid only if the methods of surveillance are uniform and reliable across institutions," the authors write.

Michael Y. Lin, M.D., M.P.H., of Rush University Medical Center, Chicago, and colleagues conducted a study to assess institutional variation in performance of traditional central line-associated BSI surveillance. The study included 20 intensive care units among 4 medical centers (2004-2007). Unit-specific central line-associated BSI rates were calculated for 12-month periods. Infection preventionists (infection control practitioners), blinded to study participation, performed routine prospective surveillance using Centers for Disease Control and Prevention (CDC) definitions. A computer algorithm reference standard was applied retrospectively using criteria that adapted the same CDC surveillance definitions.
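
The article does not reproduce the computer algorithm itself; purely as an illustration of how CDC-style surveillance criteria might be encoded as explicit rules, the sketch below uses hypothetical field names and simplified thresholds that are assumptions for this example, not the study's actual logic.

```python
from dataclasses import dataclass

# Illustrative only: a simplified, hypothetical rule set loosely modeled on
# CDC-style central line-associated BSI criteria. Field names and thresholds
# are assumptions, not the algorithm used in the study.

@dataclass
class BloodCulture:
    organism: str
    is_common_commensal: bool   # e.g., coagulase-negative staphylococci
    line_days_at_draw: int      # days a central line was in place when the culture was drawn
    matching_cultures: int      # number of cultures growing the same organism
    other_site_source: bool     # organism attributable to an infection at another site

def is_central_line_associated_bsi(culture: BloodCulture) -> bool:
    """Return True if a positive blood culture meets these simplified CLABSI-like rules."""
    if culture.line_days_at_draw < 2:     # line must have been in place long enough
        return False
    if culture.other_site_source:         # infection secondary to another site does not count
        return False
    if culture.is_common_commensal:       # common commensals require repeated matching cultures
        return culture.matching_cultures >= 2
    return True
```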

Twenty ICUs in 4 medical centers contributed 41 twelve-month unit periods, representing 241,518 patient-days (total number of days beds were occupied by patients in the ICUs during the study period) and 165,963 central line-days (total number of days patients had a central line in place in the ICUs during the study period). Across all unit periods, the median (midpoint) infection preventionist-measured central line-associated BSI rate was 3.3 infections per 1,000 central line-days. The median rate determined by the computer algorithm was 9.0 per 1,000 central line-days.
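
These rates follow the standard convention of expressing infections per 1,000 central line-days, that is, the infection count divided by the number of central line-days and multiplied by 1,000. A minimal sketch of that arithmetic (the example values are placeholders, not data from the study):

```python
def bsi_rate_per_1000_line_days(infections: int, central_line_days: int) -> float:
    """Rate = infections / central line-days * 1,000."""
    return infections / central_line_days * 1000

# Placeholder example: 10 infections over 3,000 central line-days
print(bsi_rate_per_1000_line_days(10, 3000))  # about 3.3 per 1,000 central line-days
```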

When unit periods were analyzed in aggregate across medical centers, overall correlation between computer algorithm and infection preventionist rates was weak. When stratified by medical center, the researchers found that the point estimates of the correlations varied widely.
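
The article does not specify which correlation statistic was used; as one illustration of comparing algorithm-derived and preventionist-measured unit-period rates overall and stratified by medical center, a Spearman correlation could be computed as in the sketch below (the data frame contents and column names are invented for the example).

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical unit-period rates: infection preventionist-measured (ip_rate)
# and computer algorithm-derived (algo_rate), labeled by medical center.
df = pd.DataFrame({
    "center":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "ip_rate":   [2.4, 3.1, 2.0, 2.8, 3.3, 4.0, 5.1, 3.7],
    "algo_rate": [12.6, 10.2, 11.5, 13.0, 9.0, 8.1, 7.5, 8.8],
})

# Overall correlation across all unit periods
rho, p = spearmanr(df["ip_rate"], df["algo_rate"])
print(f"overall: rho={rho:.2f}, p={p:.3f}")

# Correlation stratified by medical center
for center, grp in df.groupby("center"):
    rho, p = spearmanr(grp["ip_rate"], grp["algo_rate"])
    print(f"center {center}: rho={rho:.2f}, p={p:.3f}")
```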

Additional analysis demonstrated significant variation among medical centers in the relationship between computer algorithm and expected infection preventionist rates. "The medical center that had the lowest rate by traditional surveillance (2.4 infections per 1,000 central line-days) had the highest rate by computer algorithm (12.6 infections per 1,000 central line-days)," the authors write.

"In this study, we found strong evidence of institutional variation in central line-associated BSI surveillance performance among medical centers. Inconsistent surveillance practice can have a significant effect on the relative ranking of hospitals, which threatens the validity of the metric used by both funding agencies and the public to compare hospitals. As central line-associated BSI rates gain visibility and importance—in the form of public report cards, infection reduction campaigns such as 'Getting to Zero,' and financial incentives for reducing rates by private insurers and the Centers for Medicare & Medicaid Services—we should seek and test surveillance measures that are as reliable and objective as possible."