Interobserver Agreement of Centers for Disease Control and Prevention Criteria for Classifying Infections in Critically Ill Patients.
Klouwenberg, Peter M. C. Klein MD, PharmD 1,2; Ong, David S. Y. MD, PharmD 1,2; Bos, Lieuwe D. J. MD 3; de Beer, Friso M. MD 3; van Hooijdonk, Roosmarijn T. M. MD 3; Huson, Mischa A. MD 3; Straat, Marleen MD 3; van Vught, Lonneke A. MD 4; Wieske, Luuk MD 3; Horn, Janneke MD, PhD 3; Schultz, Marcus J. MD, PhD 3; van der Poll, Tom MD, PhD 4; Bonten, Marc J. M. MD, PhD 1,2; Cremer, Olaf L. MD, PhD 5
Critical Care Medicine.
41(10):2373-2378, October 2013.
Objectives: Correct classification of the source of infection is important in observational and interventional studies of sepsis. Centers for Disease Control and Prevention criteria are most commonly used for this purpose, but the robustness of these definitions in critically ill patients is not known. We hypothesized that in a mixed ICU population, the performance of these criteria would be generally reduced and would vary among diagnostic subgroups.
Design: Prospective cohort.
Setting: Data were collected as part of a cohort of 1,214 critically ill patients admitted to two hospitals in The Netherlands between January 2011 and June 2011.
Patients: Eight observers assessed a random sample of 168 of 554 patients who had experienced at least one infectious episode in the ICU. Each patient was assessed by two randomly selected observers who independently scored the source of infection (by affected organ system or site), the plausibility of infection (rated as none, possible, probable, or definite), and the most likely causative pathogen. Assessments were based on a post hoc review of all available clinical, radiological, and microbiological evidence. The observed diagnostic agreement for source of infection was classified as partial (i.e., matching on organ system or site) or complete (i.e., matching on specific diagnostic terms), for plausibility as partial (2-point scale) or complete (4-point scale), and for causative pathogens as an approximate or exact pathogen match. Interobserver agreement was expressed as a concordant percentage and as a kappa statistic.
Measurements and Main Results: A total of 206 infectious episodes were observed. Agreement regarding the source of infection was 89% (183/206) and 69% (142/206) for a partial and complete diagnostic match, respectively. This resulted in a kappa of 0.85 (95% CI, 0.79-0.90). Agreement varied from 63% to 91% within major diagnostic categories and from 35% to 97% within specific diagnostic subgroups, with the lowest concordance observed in cases of ventilator-associated pneumonia. In the 142 episodes for which a complete match on source of infection was obtained, the interobserver agreement for plausibility of infection was 83% and 65% on a 2- and 4-point scale, respectively. For causative pathogen, agreement was 78% and 70% for an approximate and exact pathogen match, respectively.
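The agreement measures reported above (concordant percentage and Cohen's kappa) can be sketched as a minimal computation. The example below uses hypothetical source-of-infection labels for illustration, not study data; the label names and the two-rater setup are assumptions.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same episodes.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    Undefined when chance agreement equals 1 (a single shared category).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of episodes on which the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of six infectious episodes by two observers.
a = ["VAP", "UTI", "BSI", "VAP", "UTI", "BSI"]
b = ["VAP", "UTI", "BSI", "UTI", "UTI", "BSI"]
print(round(cohen_kappa(a, b), 3))  # → 0.75
```

Kappa corrects the raw concordant percentage for agreement expected by chance, which is why the study reports both: 89% raw agreement corresponds to a kappa of 0.85 given the observed label distribution.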
Conclusions: Interobserver agreement for classifying sources of infection using Centers for Disease Control and Prevention criteria was excellent overall. However, full concordance on all aspects of the diagnosis between independent observers was rare for some types of infection, in particular for ventilator-associated pneumonia.
© 2013 by the Society of Critical Care Medicine and Lippincott Williams & Wilkins