Mission Statement

The aims and purposes of our group.

We document and analyse sources of health inequalities, and we create tools for the safe use of digital technology in healthcare settings and the effective communication of its risks. We aim to provide a method for evaluating conclusions drawn from large datasets and translating them to the individual patient level, thereby promoting equitable and person-focused healthcare practices.

Purpose

The focus of the team is the development, deployment and evaluation of digital solutions, including AI, within GSTT, to ensure maximal patient benefit and good value for money for the NHS. In our mission to deploy safe and well-tested AI, we have been discussing the topic of fairness since our inception in 2020.

The guidance published so far on bias mitigation in AI and on addressing health disparities through digital solutions lacks practical instruction. While we appreciate the importance of diversity in data, validation and developer teams, it is difficult to benchmark the point at which these aims can be considered achieved.

Moreover, we find that currently available fairness metrics often address individual attributes independently of healthcare context and without intersectional analysis, focusing on equality in sample numbers only. This partly stems from the intuition to address health disparities through the lens of the protected characteristics (Equality Act 2010). These characteristics do not account for societal factors (e.g. poverty, literacy), which play defining roles in healthcare outcomes and are the leading causes of health disparities globally.
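To illustrate the gap, the sketch below contrasts a marginal check (one attribute at a time) with an intersectional one. It is a minimal illustration, not our published method: the column names (sex, imd_quintile as a stand-in deprivation measure) and the synthetic data are assumptions in place of a real evaluation cohort.

```python
# Minimal sketch: marginal vs intersectional subgroup checks.
# Column names and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_recall(df: pd.DataFrame, by: list[str]) -> pd.DataFrame:
    """Recall (sensitivity) per subgroup, with sample counts."""
    rows = []
    for keys, grp in df.groupby(by):
        keys = keys if isinstance(keys, tuple) else (keys,)
        rows.append({**dict(zip(by, keys)),
                     "n": len(grp),
                     "recall": recall_score(grp["y_true"], grp["y_pred"],
                                            zero_division=0)})
    return pd.DataFrame(rows)

# Synthetic stand-in for a real evaluation cohort.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({"sex": rng.choice(["F", "M"], n),
                   "imd_quintile": rng.integers(1, 6, n),  # deprivation index
                   "y_true": rng.integers(0, 2, n),
                   "y_pred": rng.integers(0, 2, n)})

# Marginal view: each attribute on its own can look balanced...
print(subgroup_recall(df, ["sex"]))
print(subgroup_recall(df, ["imd_quintile"]))

# ...while the intersectional view exposes small, poorly served subgroups.
print(subgroup_recall(df, ["sex", "imd_quintile"]).sort_values("recall"))
```

In real evaluations it is the intersectional table where small-n subgroups, and the limits of a sample-numbers-only notion of equality, become visible.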

Guidance also champions transparency about training and testing data by encouraging publication of communication tools such as model cards, similar in nature to the information leaflets produced for medications. However, it does not address how this information should be interpreted by clinicians considering implementation of an AI product when a patient is not demographically represented in the model's data. The result is that clinical AI evaluations are difficult to interpret and difficult to communicate, increasing the potential for harm. We have faced this problem both when developing and when evaluating AI solutions for healthcare use in the NHS.
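As a sketch of what a clinically actionable model card could enable, the snippet below flags when an individual patient falls outside the demographic coverage the card reports. The card structure and field names are our own illustrative assumptions, not an established model-card schema.

```python
# Minimal sketch of a representation check against a model card's
# published training-cohort summary. All fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    age_range: tuple[int, int]                     # min/max age in training data
    ethnic_groups: set[str] = field(default_factory=set)

def representation_flags(card: ModelCard, patient: dict) -> list[str]:
    """Return human-readable warnings a clinician could act on."""
    flags = []
    lo, hi = card.age_range
    if not lo <= patient["age"] <= hi:
        flags.append(f"age {patient['age']} outside training range {lo}-{hi}")
    if patient["ethnicity"] not in card.ethnic_groups:
        flags.append(f"ethnic group '{patient['ethnicity']}' not in training data")
    return flags

card = ModelCard("chest-xray-triage", age_range=(18, 90),
                 ethnic_groups={"White", "Black", "Asian"})
print(representation_flags(card, {"age": 16, "ethnicity": "Mixed"}))
# ['age 16 outside training range 18-90', "ethnic group 'Mixed' not in training data"]
```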

Vision

We want to reduce health disparities by integrating digital tools into healthcare in a way that safeguards against discrimination and bias, whilst fostering the knowledge, awareness and processes needed to address and mitigate non-pathology factors influencing patient treatment. We want to alert our colleagues to non-disease factors they might be responding to or drawing unfounded conclusions from; this is an uncomfortable topic for many care providers, and we want to bring some of these stories to light. We want our colleagues to understand clearly what we mean when we speak of health disparities, injustice and health inequalities. And we want to create free, easy-to-use tools that allow cheap, quick and communicable evaluation of digital tools.

What are the end goals?

• Develop comprehensive methods to identify, test and address hidden sources of health disparities which are unrelated to the actual disease pathology.

• Eliminate preconceptions from patient interactions, facilitating holistic approaches to treatment that prioritise individual patient needs.

• Enable efficient processing of routine care data for better research, understanding and development, ultimately leading to improved patient care.

Digital tools are trained and evaluated on large datasets in a public-health-centred way. We want to refocus digital tools, and their development and evaluation, on person-focused medicine. When creating an AI tool, you need to:

• carry out a clinical risk evaluation and create a hazard log, which tells you how your care pathway may be affected;

• evaluate the AI product's technical performance, which tells you how good the AI is at the task it was set, on a cohort;

• evaluate the health economics, which tells you what savings could be made.

We are building a way to add an individual-level layer to this multi-layer evaluation approach: drawing conclusions from large datasets that apply to individuals. This is the missing ingredient that makes the AI evaluation process holistic.
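As a rough sketch of that individual-level layer, the snippet below recomputes a model's performance on the evaluation patients most similar to the patient at hand, rather than on the whole cohort. The features, the similarity measure and the nearest-neighbour choice are illustrative assumptions, not a validated method.

```python
# Minimal sketch: cohort-level vs individual-level evaluation.
# Synthetic data and nearest-neighbour similarity are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_accuracy(X_eval, y_true, y_pred, x_patient, k=100):
    """Model accuracy on the k evaluation patients nearest to x_patient."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_eval)
    _, idx = nn.kneighbors(x_patient.reshape(1, -1))
    neighbours = idx[0]
    return float(np.mean(y_true[neighbours] == y_pred[neighbours]))

rng = np.random.default_rng(0)
X_eval = rng.normal(size=(5000, 8))     # evaluation cohort features
y_true = rng.integers(0, 2, size=5000)  # ground-truth labels
y_pred = rng.integers(0, 2, size=5000)  # model predictions
x_patient = rng.normal(size=8)          # the individual patient

print(f"cohort accuracy: {np.mean(y_true == y_pred):.2f}")
print(f"local accuracy:  {local_accuracy(X_eval, y_true, y_pred, x_patient):.2f}")
```

The point of the sketch is the shift in the question being asked: from "how good is the AI on the cohort?" to "how good is the AI on patients like this one?".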