Use of AI to fight COVID-19 risks harming ‘disadvantaged groups’, experts warn

16 March 2021
by David Leslie, Anjali Mazumder, Aidan Peppin, Maria K Wolters, Stephen Cave, Jess Whittlestone, Rune Nyrup, Seán Ó hÉigeartaigh, Rafael A Calvo

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

Two new papers from researchers at CSER and CFI caution against blinkered use of AI for data-gathering and medical decision-making as we fight to regain some normalcy in 2021.

“Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic,” said Dr Stephen Cave, Director of CFI and lead author of one of the articles. “The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology.”

In a further paper, co-authored by Dr Alexa Hagerty, researchers highlight potential consequences arising from the AI now making clinical choices at scale – predicting deterioration rates of patients who might need ventilation, for example – if it does so based on biased data. Datasets used to “train” and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of “lower socioeconomic status”. “COVID-19 has already had a disproportionate impact on vulnerable communities. We know these systems can discriminate, and any algorithmic bias in treating the disease could land a further brutal punch,” Hagerty said.

In December, protests ensued when Stanford Medical Centre’s algorithm prioritized staff working from home for vaccination over those on the Covid wards. “Algorithms are now used at a local, national and global scale to define vaccine allocation. In many cases, AI plays a central role in determining who is best placed to survive the pandemic,” said Hagerty. “In a health crisis of this magnitude, the stakes for fairness and equity are extremely high.”

Along with colleagues, Hagerty highlights the well-established “discrimination creep” found in AI that uses “natural language processing” technology to pick up symptom profiles from medical records – reflecting and exacerbating biases against minorities already present in the case notes. They point out that some hospitals already use these technologies to extract diagnostic information from a range of records, and some are now using this AI to identify symptoms of COVID-19 infection.

Similarly, the use of track-and-trace apps creates the potential for biased datasets. The researchers write that, in the UK, over 20% of those aged over 15 lack essential digital skills, and up to 10% of some population “sub-groups” don’t own smartphones. “Whether originating from medical records or everyday technologies, biased datasets applied in a one-size-fits-all manner to tackle COVID-19 could prove harmful for those already disadvantaged,” said Hagerty.

In the BMJ articles, the researchers point to examples such as the fact that a lack of data on skin colour makes it almost impossible for AI models to produce accurate large-scale estimates of blood-oxygen levels. Or how an algorithmic tool used by the US prison system to predict reoffending – and proven to be racially biased – has been repurposed to manage its COVID-19 infection risk.

The two papers – “Does ‘AI’ stand for augmenting inequality in the era of covid-19 healthcare?” and “Using AI ethically to tackle covid-19” – are published in the BMJ.

