The pandemic has, to a large extent, exposed systematic biases in our healthcare system. Representation of minority ethnic groups in vaccine efficacy trials has been disproportionately low, and, in turn, minority groups show greater reluctance to get vaccinated. Studies show that this mistrust is deep-seated, extending even to minority healthcare workers, who show lower vaccination rates in both the UK and the US.
Yet researchers are optimistic that AI can be applied in meaningful ways to build trust among minority populations. In fact, the MHRA recently awarded £1.5 million to develop AI tools intended to do just that.
The most common reason for vaccine hesitancy among minority groups is concern about potential adverse effects. Rapid, transparent reporting on suspected adverse effects could therefore go a long way toward improving communication and addressing that hesitancy.
For example, apps such as the MHRA's Yellow Card and the CDC's V-safe allow users to self-report vaccine side effects. AI can then be used to sift this wealth of reports and distinguish genuine adverse effects from background noise. The resulting datasets and findings can be shared openly, keeping vaccine safety information transparent and accessible.
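To give a flavour of how such sifting works, one standard pharmacovigilance technique is disproportionality analysis: comparing how often an event is reported for one vaccine versus all other products. The sketch below computes the proportional reporting ratio (PRR) on made-up toy counts; it is purely illustrative and is not the actual pipeline behind Yellow Card or V-safe.

```python
# Illustrative sketch only: a basic disproportionality analysis using the
# proportional reporting ratio (PRR), a common pharmacovigilance
# signal-detection statistic. All counts below are invented toy numbers.

def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]

    a: reports of the suspected event for the vaccine of interest
    b: reports of all other events for that vaccine
    c: reports of the suspected event for all other products
    d: reports of all other events for all other products
    """
    return (a / (a + b)) / (c / (c + d))

# Toy example: 40 reports of an event out of 1,040 total reports for
# vaccine X, versus 200 out of 10,200 for everything else.
signal = prr(a=40, b=1000, c=200, d=10000)
print(round(signal, 2))  # → 1.96; a PRR well above 1 suggests a disproportionate signal
```

In practice a PRR is only a screening signal: flagged event-vaccine pairs still need clinical review before being treated as genuine adverse effects, which is exactly where AI triage plus human oversight comes in.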
Though experts have applauded efforts to apply AI to this issue, some concerns remain. Because these tools rely on limited self-reported data and electronic health records, transparent auditing of clinical AI systems is necessary to guard against bias in both the datasets and the algorithms.