Reimagining Legacy Animal Data Through the Lens of AI

By Yiguang Zhu | December 14, 2023

This August, members of our team attended the 12th World Congress on Alternatives and Animal Use in the Life Sciences (WC-12), a significant event dedicated to advancing the 3Rs principles (replacement, reduction, and refinement) in scientific research, industry, and regulatory practice. One standout session explored an innovative approach: leveraging artificial intelligence (AI) to unlock the potential of legacy animal data. The discussion in that session was both intriguing and promising, inspiring us to explore this intersection further.

Legacy animal data generally refers to the vast repositories of biological and toxicological data generated by past animal studies, encompassing experiments on prognostic and diagnostic biomarkers, drug efficacy and safety assessment, toxicological testing, and more. These datasets, accumulated over decades, hold invaluable insights for fields such as physiology, biomedicine, pathology, and pharmacology. While the term "legacy" might suggest something dated or obsolete, most of these datasets remain highly valuable, and because animal models are still widely used during the transition to alternative methods, the repository continues to grow.

While animal models were, and still are, the cornerstone of biomedical research, they come with inherent limitations. Chief among them: animals, even those genetically modified to mimic human responses, can never perfectly replicate human physiology because of interspecies differences. Countless drug candidates have shown promise in animal models only to fail in human clinical trials due to unforeseen side effects or lack of efficacy. Growing societal concern about animal welfare further strengthens the trend toward more human-relevant and humane research methodologies.

AI technology is transforming the landscape of biomedical research. Conventional in vivo research relied primarily on individual experiments and systematic reviews of confined scope. By leveraging AI's power to analyze large, complex datasets, researchers are driving a revolutionary shift in how legacy studies are utilized, reshaping models to be more predictive. AI has already been applied effectively in computational toxicology: automated read-across tools such as RASAR (read-across structure-activity relationship) achieve an accuracy in toxicity prediction across thousands of chemicals that significantly surpasses the reproducibility of the traditional animal tests themselves. Examples like RASAR vividly showcase how AI can transform biomedical research, moving from labor-intensive individual studies to comprehensive, data-driven analyses.
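To make the read-across idea concrete, here is a minimal sketch of similarity-based toxicity prediction, assuming the open-source RDKit library is available. This is not the RASAR tool itself: the compounds, labels, and the small k-nearest-neighbor vote are hypothetical placeholders, whereas real read-across operates over databases of many thousands of legacy records.

```python
# Illustrative read-across sketch: predict the toxicity of a query chemical
# from its most structurally similar neighbors in a legacy dataset.
# NOT the RASAR implementation; compounds and labels are hypothetical.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical legacy records: (SMILES, toxicity label from past animal studies)
LEGACY = [
    ("CCO", 0),                     # ethanol
    ("CC(=O)Oc1ccccc1C(=O)O", 0),   # aspirin
    ("c1ccc2ccccc2c1", 1),          # naphthalene
    ("ClC(Cl)(Cl)Cl", 1),           # carbon tetrachloride
]

def fingerprint(smiles):
    """Morgan (ECFP-like) bit fingerprint encoding the chemical structure."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

def read_across(query_smiles, k=3):
    """Similarity-weighted vote over the k most similar legacy chemicals."""
    query_fp = fingerprint(query_smiles)
    neighbors = sorted(
        ((DataStructs.TanimotoSimilarity(query_fp, fingerprint(s)), label)
         for s, label in LEGACY),
        reverse=True,
    )[:k]
    weight = sum(sim for sim, _ in neighbors)
    return sum(sim * label for sim, label in neighbors) / weight if weight else 0.5

print(f"Predicted toxicity score: {read_across('ClCCl'):.2f}")  # dichloromethane
```

The design point is that no new experiment is needed: the prediction is assembled entirely from structural similarity to chemicals whose outcomes are already recorded in legacy data.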

A key example from WC-12 is the development of an AI tool called AnimalGAN. AnimalGAN is a Generative Adversarial Network (GAN) model developed specifically as an alternative to traditional animal studies for clinical pathology assessment. It was trained on data from the Open Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System (Open TG-GATEs) database, comprising 38 clinical pathology measures from rat studies. This training enables the model to simulate clinical pathology responses for a wide range of drugs spanning different chemical structures, drug classes, and approval years. When validated on hepatotoxicity assessment, AnimalGAN outperformed traditional QSAR (quantitative structure-activity relationship) methods in predictive accuracy. These promising results show that AI simulation can, to a certain extent, mimic in vivo experiments with high fidelity. This example not only demonstrates the practical application of AI in toxicology but also highlights AI's potential to deliver deeper insights and more accurate predictions.
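To give a sense of how a GAN can stand in for an in vivo readout, the sketch below outlines a minimal conditional GAN in PyTorch in the spirit of AnimalGAN: a generator maps random noise plus a treatment condition (a chemical descriptor with dose and time) to a 38-dimensional vector of clinical pathology measures, while a discriminator learns to tell generated records from real ones. The layer sizes, descriptor dimension, and training loop are our own illustrative assumptions, not the published AnimalGAN architecture; only the 38 outputs come from the description above.

```python
# Minimal conditional GAN sketch in the spirit of AnimalGAN (illustrative only).
import torch
import torch.nn as nn

NOISE_DIM = 64     # latent noise size (assumed)
COND_DIM = 128     # chemical descriptor + dose/time condition (assumed)
N_MEASURES = 38    # clinical pathology endpoints, as in the training data

class Generator(nn.Module):
    """Maps noise + treatment condition to simulated clinical pathology values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_MEASURES),
        )

    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (measures, condition) pair looks like a real rat record."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_MEASURES + COND_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_measures, cond):
    """One adversarial update on a batch of real legacy records."""
    n = real_measures.size(0)
    fake = G(torch.randn(n, NOISE_DIM), cond)

    # Discriminator: real records labeled 1, generated records 0.
    d_loss = (bce(D(real_measures, cond), torch.ones(n, 1))
              + bce(D(fake.detach(), cond), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its output real.
    g_loss = bce(D(fake, cond), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Conditioning both networks on the treatment descriptor is what lets a trained generator answer "what would this compound at this dose do to a rat's blood chemistry?" without running a new animal study.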

Reflecting on the WC-12 session, we believe that the integration of AI with legacy animal data, as exemplified by innovations like AnimalGAN, marks a significant paradigm shift in biomedical research. These advancements are not only enhancing toxicological research but also enabling the development of more predictive and humane approaches. This AI-driven evolution promises a future in which virtual experiments, simulation, and computational modeling reduce and gradually replace animal testing, aligning with the 3Rs principles while improving accuracy. As we embrace this transformation, the focus must be on building reliable AI applications and addressing challenges around data accuracy, bias, and ethics. The synergy of AI and historical data opens new horizons in biomedical research and paves the way for a more ethical and efficient future.

The cover picture was generated by DALL·E.

The views expressed do not necessarily reflect the official policy or position of Johns Hopkins University or Johns Hopkins Bloomberg School of Public Health.
