Artificial intelligence has the potential to counter vaccine hesitancy and build trust in vaccines, but it must be deployed ethically and responsibly, argue Heidi Larson and Leesa Lin
Given the sluggish pace of traditional scientific approaches, artificial intelligence (AI), particularly generative AI, has emerged as a significant opportunity to tackle complex challenges in health, including public health.1 Against this backdrop, attention has turned to whether AI can help bolster public trust in vaccines and minimise vaccine hesitancy, which the World Health Organization has named one of the top 10 global health threats.2
Vaccine hesitancy is a state of indecision before accepting or refusing a vaccination.3 It is a dynamic, context specific challenge that varies across time, place, and vaccine type, and it is influenced by a range of factors, including sociocultural and political dynamics as well as individual and group psychology. Its multifaceted and shifting nature makes it a moving target: difficult to predict and harder still to tackle. Moreover, the surge of misinformation in public health, notably during crises such as the covid-19 pandemic, demands rapid, data driven responses.4
Traditional public health approaches often struggle to keep pace with the swift spread of misinformation. Despite initiatives to counter it through fact checking, misinformation retains substantial influence over people’s beliefs, trust, and decision making.5 6 …