AI Analysis of Labor, Delivery Notes Finds Racial Disparities in Biased Language

Veronica Barcelona, PhD, an assistant professor at Columbia Nursing, and her colleagues also found differences in how Hispanic and Asian/Pacific Islander (API) patients were described compared to White patients.

Black patients admitted to the hospital for labor and delivery are more likely to have stigmatizing language documented in their clinical notes than White patients, Columbia University School of Nursing researchers report in JAMA Network Open.

Clinicians’ documentation can both reflect bias and perpetuate it, the authors note, and may contribute to racial and ethnic disparities in health and health care. Barcelona and her colleagues used a form of artificial intelligence called natural language processing to analyze clinical notes for 18,646 patients admitted to two large hospitals for labor and birth in 2017-2019, identifying instances of both stigmatizing and positive language in their electronic health records.

Four categories of stigmatizing language were included: showing bias toward a patient’s marginalized language/identity; suggesting a patient was “difficult”; indicating unilateral/authoritarian clinical decisions; and questioning a patient’s credibility.

The researchers also considered two types of positive language: preferred/autonomy language, which portrays the patient as an active, decision-making participant in childbirth and presents the patient's viewpoint impartially; and power/privilege language, which notes markers of a patient's status or higher psychological or socioecological position.
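The category-based approach can be loosely pictured as tagging notes against per-category lexicons. The sketch below is a toy illustration only, not the study's method: the researchers used far more sophisticated natural language processing, and the category names and keyword lists here are invented for demonstration.

```python
# Toy keyword tagger illustrating category-based note labeling.
# NOT the study's actual NLP model; lexicons are hypothetical examples.
LEXICONS = {
    "difficult": {"refuses", "demanding", "uncooperative", "noncompliant"},
    "power_privilege": {"well-groomed", "professional", "pleasant"},
}

def tag_note(note: str) -> dict:
    """Return, for each category, whether the note contains any lexicon term."""
    words = {w.strip(".,;").lower() for w in note.split()}
    return {category: bool(words & terms) for category, terms in LEXICONS.items()}

example = "Patient refuses fetal monitoring; described as demanding by staff."
print(tag_note(example))
```

A real pipeline would also need negation handling, context windows, and clinician review of flagged terms, which simple keyword matching cannot provide.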

Language conveying bias was found for 49.3% of patients overall, and 54.9% of Black patients. The most common type of stigmatizing language, describing a patient as “difficult,” was seen in 28.6% of patients’ charts overall and 33% of Black patients’ charts.

Compared with White patients, Black patients were 22% more likely to have any type of stigmatizing language in their clinical notes. Black patients were also 19% more likely than White patients to have positive documentation in their charts.

Hispanic patients were 9% less likely to be documented as “difficult” patients than White patients and 15% less likely to have positive language overall. API patients were 28% less likely to have language in the marginalized language/identity category, and their charts were 31% less likely to include power/privilege language.

“These findings underscore the importance of implementing targeted interventions to mitigate biases in perinatal care and to foster documentation practices that are both equitable and culturally sensitive,” the authors conclude.

Barcelona’s Columbia Nursing co-authors include data manager Ismael Ibrahim Hulchafo, MD; doctoral student Sarah Harkins, BS; and Associate Professor Maxim Topaz, PhD.

The study was funded with a grant from the Columbia University Data Science Institute Seed Funds Program and a grant from the Gordon and Betty Moore Foundation.