The authors found that all the scores demonstrate a degree of ethnic bias in their standardized mortality-ratio calibrations: each showed a consistent pattern of overpredicting mortality for African American and Hispanic patients compared with Asian and white patients. If these scores were used to triage individual patients, African American and Hispanic patients could be unfairly denied appropriate access to ICU beds, ventilators and other such resources.
“Extreme care must be taken in the application of current scoring systems for triage decisions in individual patients,” they write, “if they are to be used at all for these purposes in their present states.”
UVA’s Call to Action
The authors go on to call for improved scoring systems that are more useful for making treatment decisions for individual patients. These systems, they say, need to better reflect the communities ICUs serve and the patients they treat. “Our detection of inadvertent, but undeniable, bias in severity scores would seem to indicate that it is time to develop scoring systems that are more precise than the current one-size-fits-all systems,” they write. “Incorporating precise socioeconomic and geographical parameters, along with a set of specific biomarkers for a given disease, into future prediction models might make such models less biased and more robust.”
In addition to the potential effect on individual patients, the scoring systems may be giving a flawed impression of how well ICUs are serving their patients in general, the researchers say. An ICU with a predominantly Black patient population, for example, could appear to be outperforming expectations if fewer patients die than predicted. This could ultimately feed into incorrect assumptions about health disparities and undermine medical research.
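To illustrate the arithmetic behind that concern, here is a minimal sketch using the standardized mortality ratio (observed deaths divided by score-predicted deaths). The function name and all numbers are hypothetical, chosen only to show how an overpredicting score makes an ICU look better than it is.

```python
def smr(observed_deaths: int, predicted_deaths: float) -> float:
    """Standardized mortality ratio: observed deaths / deaths predicted
    by a severity score. Values below 1.0 suggest the ICU is
    'outperforming' the score's expectations."""
    return observed_deaths / predicted_deaths

# Hypothetical example: the score overpredicts mortality for a patient
# group, forecasting 25 deaths when 20 actually occur.
observed = 20          # deaths that actually occurred
score_prediction = 25  # inflated prediction from a biased score

ratio = smr(observed, score_prediction)
print(ratio)  # 0.8 → the ICU appears to beat expectations
```

With an unbiased prediction of 20, the ratio would be 1.0; the inflated denominator alone produces the illusion of superior performance described above.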
“This was not a case of bias in terms of the designs of the scoring systems. We do not know exactly why these findings occurred, but suspect they are a result of pre-existing socio-economic and long-term health care issues that are not taken into account by the score input values,” Stone said. “We don’t currently have the pre-admission data to analyze this hypothesis, but as medicine becomes increasingly digitized, such data will become increasingly available.
“The lesson is, don’t assume that tools like these are 100% objective and complete, because they only capture one small snapshot of an evolving and dynamic clinical and human situation.”
Findings Published
The researchers have published their findings in The Lancet Digital Health. The research team consisted of Rahuldeb Sarkar of King's College London; Christopher Martin of the UCL Institute for Health Informatics; Heather Mattie of Harvard University; Dr. Judy Wawira Gichoya of Emory University; Stone; and Dr. Leo Anthony Celi of the Massachusetts Institute of Technology and Harvard Medical School.
Sarkar disclosed that he has received writing fees for health care reports from Crystallise UK. Martin is a director of Crystallise UK.
The work was supported by the National Institutes of Health’s National Institute of Biomedical Imaging and Bioengineering, grant R01 EB017205.
To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.