Where Do New Approach Methodologies Belong in Toxicity Testing? Key Takeaways from ASCCT

By Breanne Kincaid | November 8, 2023

With over 350,000 discrete chemical compounds and mixtures in commerce globally – many of which have never been thoroughly assessed for their risk to human health – it is imperative that industries and regulators have access to fast, accurate, human-relevant toxicity testing methods. New Approach Methodologies, or NAMs, hold considerable promise in this space. NAMs are broadly defined as “any technology, methodology, approach, or combination thereof that can be used to provide information on chemical hazard and risk assessment that avoids the use of intact animals.” As these approaches mature, several questions have emerged about their actual role in toxicity testing for human risk assessment.

Two weeks ago, I attended the American Society for Cellular and Computational Toxicology (ASCCT) annual meeting, “Spotlighting NAMs: Elevating New Approaches in Risk Assessment,” with the goal of gaining clarity on a few key topics in this space. The meeting provided an excellent forum for presentations and conversations with attendees from diverse backgrounds: regulated industries, NAM development, advocacy organizations, and federal agencies. Stakeholders asked tough questions, experts provided valuable insight, and, as with all worthwhile discussions, more questions and complexities emerged with each response. Here are three key takeaways from the conference:

First, while “fit for purpose” and “context of use” were used interchangeably, there is a general preference for adopting the phrase “context of use” to reduce ambiguity and emphasize three criteria: (1) the NAM must answer a specific scientific question for a defined purpose (e.g., predicting whether a candidate pharmaceutical will cause drug-induced liver injury (DILI) in humans for an FDA investigational new drug (IND) preclinical data submission); (2) the specific toxicological effect or response the NAM elucidates must be both biologically relevant and sufficient to address the defined regulatory endpoint of interest; and (3) the NAM must have a defined chemical applicability domain, meaning the bounds of a test substance’s physicochemical properties within which the NAM can produce a valid, accurate signal are known and enumerated.
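To make criterion (3) concrete, here is a minimal sketch of what an applicability-domain check might look like in practice. The property names and bounds below are hypothetical illustrations, not drawn from any specific NAM:

```python
# Hypothetical applicability domain for an imagined NAM, expressed as
# allowed ranges of physicochemical properties. Values are illustrative only.
APPLICABILITY_DOMAIN = {
    "molecular_weight": (100.0, 600.0),   # g/mol
    "log_p": (-2.0, 5.0),                 # octanol-water partition coefficient
    "water_solubility": (0.001, 100.0),   # mg/mL
}

def in_applicability_domain(substance: dict) -> bool:
    """Return True if every property of the test substance falls within the
    NAM's enumerated bounds; outside those bounds, the NAM's signal cannot
    be considered valid."""
    return all(
        low <= substance[prop] <= high
        for prop, (low, high) in APPLICABILITY_DOMAIN.items()
    )

# A hypothetical test substance:
candidate = {"molecular_weight": 342.4, "log_p": 2.1, "water_solubility": 0.8}
print(in_applicability_domain(candidate))  # True -> the NAM's result is interpretable
```

The point of enumerating the domain this explicitly is that a regulator can tell, before looking at any assay output, whether the NAM was even a valid instrument for the substance in question.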

Speaking of validity and accuracy, a second major takeaway was the palpable frustration of NAM developers and regulated industries over the absence of defined metrics communicating the benchmarks of positive and negative predictivity, accuracy, reproducibility, or endpoint specificity a NAM must meet in order for regulators to be confident making decisions on the basis of its data. For example, the performance of Emulate’s liver chip has been successfully evaluated in partnership with FDA, under separate metrics, in several different studies – when will those efforts be sufficient for FDA’s stamp of approval? Currently, NAM end users – the drug developers, consumer product companies, and others who would like to use NAMs to support confidence in the safety profile of their products – run into the common sentiment that while NAMs won’t be rejected from a data submission package, the information they provide is considered too nascent to truly support claims of the absence of toxicity. Without an agreed-upon definition of what makes a test a good test, companies are disincentivized from relying too heavily on NAMs when generating required hazard documentation. While the US Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) released a draft report on the “Validation, Qualification, and Regulatory Acceptance of New Approach Methodologies” this summer, the committee has no regulatory authority, nor can it compel regulatory agencies to put its recommendations into practice. That is not to say no efforts have been made on this front: FDA pointed to its Drug Development Tool (DDT) qualification programs as a potential route for confident integration of a NAM into the drug approval data stream, although no NAM has successfully progressed through this program to date.
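For readers unfamiliar with the benchmarks at issue, the sketch below shows how positive and negative predictivity (along with sensitivity, specificity, and accuracy) fall out of the confusion matrix of a validation study. The counts are invented for illustration and do not describe any real NAM:

```python
# Confusion-matrix counts from a hypothetical NAM validation study in which
# each test chemical's NAM call is compared against a reference result.
tp, fp, tn, fn = 42, 6, 38, 9  # invented counts, for illustration only

sensitivity = tp / (tp + fn)            # fraction of true toxicants flagged
specificity = tn / (tn + fp)            # fraction of true non-toxicants cleared
ppv = tp / (tp + fp)                    # positive predictivity
npv = tn / (tn + fn)                    # negative predictivity
accuracy = (tp + tn) / (tp + fp + tn + fn)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"positive predictivity={ppv:.2f}, negative predictivity={npv:.2f}")
print(f"accuracy={accuracy:.2f}")
```

Computing these numbers is trivial; the unresolved question raised at the meeting is what thresholds they must clear before a regulator will treat the NAM’s output as decision-ready.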

The third major takeaway was a general lack of consensus among regulators about where NAM validation efforts should originate. Several FDA and EPA representatives highlighted the necessity of NAM developers working with agency offices from the inception of their test method to ensure it conforms to agency needs (even when those needs are opaque to the agency itself). Yet other presenters from the same agencies envisioned pharmaceutical companies championing their in-house NAMs, while still others explicitly cautioned against the bias introduced when product developers bear the responsibility of convincing the scientific community that their NAMs are sufficient.

In summary, the ASCCT annual meeting shed light on the evolving landscape of NAMs in regulatory toxicology and the challenges and opportunities they present. It was evident that there is room for federal agencies to communicate their qualification metrics in an unambiguous, unbiased manner. It was also clear that NAMs are already being incorporated into in-house, upstream screening programs and into agency investigations of existing toxicants. Despite some remaining ambiguities, the field is making significant strides toward the integration of NAMs for more effective and efficient toxicological assessments.

The views expressed do not necessarily reflect the official policy or position of Johns Hopkins University or Johns Hopkins Bloomberg School of Public Health.
