
An Alternative Point of View – A Safety Mindset

– Written by Anonymous, Safety Data Scientist

Explore different viewpoints with our PHUSE Blog series ‘An Alternative Point of View’. Hear from anonymous industry experts as they discuss controversial topics. Remember, these are opinions, not facts. While we value diverse perspectives and open dialogue, note that views expressed may not always be based on established facts. Join us for an engaging journey of exploration and discussion.


John Tukey, the father of data science, was the mentor of Joe Heyse. And for all too brief a time, Joe was my mentor. I think of him more as a statistical philosopher and of me as his pupil. He transformed the way I understand, share and deploy statistics – and data science. When he started mentoring me, he handed me the book Willful Ignorance (by Herbert Weisberg) and warned me that it was about how we’ve been making statistics worse, not better – not for everything, to be sure, but for practically everything. Joe and I went through the whole book together, chapter by chapter; those sessions were the most impactful teachings I have ever experienced. I will attempt to channel safety data science from John Tukey, through Joe Heyse, to you. To get us started, here’s a quote from the book:

“Tukey hoped to restore a balance between open-ended discovery of potentially important context-dependent information and mathematical techniques for sorting the signal from the noise. The ultimate decisions, he believed, were mainly in the province of those with substantive knowledge. The role of data analysis was to provide quantitative evidence and to assist, but not to replace, the substantive expert to sift through and interpret the evidence. By focusing exclusively on (decision rules), the researcher was effectively cutting the clinician out of the ‘judicial’ process of evaluation.”

An eminent statistician, of the stature of Ronald Fisher (the father of statistics), saying that the role of the statistician is to support – not displace – the subject matter expert in the decision-making process!

About 75 years ago, we began leveraging randomisation in clinical trials. Ever since, we’ve been developing the most powerful tests – for efficacy analyses. Studies are designed for efficacy. The study population is fairly homogeneous (and enriched) for efficacy. Generally, for a positive study, the p-value of the primary efficacy endpoint must be less than 0.05, and if the efficacy endpoint is examined more than once, or there is more than one efficacy endpoint, multiplicity adjustments are required. The rules for efficacy are very well known and understood; safety has been along for the ride – but safety assessments are different from efficacy analyses. There are often subgroups, dose response and other important clinical considerations for safety, and ignoring these clinical considerations results in mischaracterisation of the safety data. We can no longer simply repurpose efficacy analyses for safety assessments. We need to build safety assessments from the ground up – recognising that the framework for safety is different from the framework for efficacy.
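To make the efficacy-side rules concrete, here is a minimal sketch of the simplest multiplicity adjustment, Bonferroni. The endpoint count and p-values are made up for illustration; real programmes use more sophisticated procedures, but the logic – divide the overall alpha across the endpoints – is the same:

```python
# Illustrative sketch of a Bonferroni multiplicity adjustment for efficacy
# endpoints: each of k endpoints is tested at alpha / k, which keeps the
# overall (family-wise) false-positive rate at alpha.
def bonferroni_significant(p_values, alpha=0.05):
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Two hypothetical co-primary endpoints: each must beat 0.05 / 2 = 0.025.
print(bonferroni_significant([0.012, 0.030]))  # [True, False]
```

Note that the second endpoint, with p = 0.030, would have passed an unadjusted 0.05 bar – exactly the kind of well-understood, rule-driven decision the efficacy framework was built for.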

There’s a famous quote from John Tukey that puts this into perspective:

“Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.”

For our discussion, the ‘exact answer’ is an efficacy analysis and the ‘wrong question’ is a safety assessment.

Why is this? We’re dealing with different kinds of knowledge. For efficacy, we carefully learn about the endpoints over a series of studies until we have a high level of understanding – so much so that we can set a bar, and if we clear it, we have objective and convincing evidence that we not only understand the efficacy but have also demonstrated it. For safety, we have to consider everything. “We want to do it all, we just don’t want to do it all on everything” (quoting a personal conversation with Mac Gordon, J&J). So we keep a short list of safety topics of interest – of clinical interest. However, that list is typically not small, and in any case it’s a moving target – it can grow or shrink as the data accumulate. We need a scientific evaluation of the safety data to demonstrate safety. We need approximate answers to the right questions.

Fast forward 40 years and we’re still having the same conversations. For example, Robert O’Neill had this to say:

“Statistical methodology has not been developed for safety monitoring to match that for efficacy monitoring.”

The words sound different, but the meaning is the same. What has caused us to stop talking about the differences between safety assessments and efficacy analyses and to start reimagining how to evaluate programme-level safety data throughout a development programme? I believe that it was the FDA IND safety reporting final rule. Jacqueline Corrigan-Curay described the spirit of the final rule this way:

“The important thing is to have a thoughtful process; a system in place to look for clinically important imbalances, applying the best clinical and quantitative judgment, while maintaining trial integrity.”

To improve the overall quality of safety reporting and to comply with requirements for IND safety reports based on data in the aggregate, the sponsor should have in place a systematic approach for evaluating the accumulating safety data. From the beginning, sponsors have agreed with the spirit of the final rule; operationalising it, however, has been exceptionally challenging. The FDA identified three types of serious events that could require expedited reporting. Type A events are really bad and associated with drug exposure; one or two events would be informative. Type B events are bad and rare, but not necessarily associated with drug exposure; a small cluster would be informative. Both of these types could be handled quite well with traditional processes, such as medical safety review of individual cases and medical monitoring of individual studies. Type C events, however, can be anticipated to occur in the patient population regardless of drug exposure; an aggregate analysis would be needed to determine a reasonably possible causal association. If you think the FDA has manufactured its own problem here, note that the ICH also requires aggregate analyses: a considerable increase in the rate of an already recognised adverse drug reaction is reportable under ICH requirements. How to do this is not explained, but it would require an aggregate analysis. The ICH and the rest of the world have been avoiding this problem, I believe, because there has not been a solution.

What has sparked this culture change? In the words of Niels Bohr:

“Every great and deep difficulty bears in itself its own solution. It forces us to change our thinking in order to find it.”

Sponsors and the FDA have been working together to improve approaches for assessing the accumulating safety data, with more cross-functional teams for planning and coordinating programme-level safety assessments and more guidance on aggregate safety assessments. To meet the spirit of the IND safety reporting final rule, sponsors have developed processes and procedures to evaluate, assess and act on accumulating safety information during development on an ongoing basis. Although not as eloquent as Dr Corrigan-Curay’s definition for the spirit of the IND safety reporting final rule, here is a more operational definition: a scientific evaluation of accumulating programme-level safety information throughout product development, leveraging scientific expertise and medical judgement of multidisciplinary teams, requiring:

• a multidisciplinary approach

• assessments customised for the specific product

• quantitative frameworks for measuring evidence of association

• decisions that incorporate medical judgement.
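One hedged example of such a “quantitative framework for measuring evidence of association” – purely illustrative, with made-up counts and flat Beta(1, 1) priors of my own choosing – is the posterior probability that the adverse event rate on treatment exceeds the rate on control:

```python
# Posterior probability that the treatment-arm adverse event rate exceeds the
# control-arm rate, using independent Beta(1, 1) priors and Monte Carlo
# sampling. A high probability is evidence of association to be weighed
# alongside medical judgement, not a decision rule that replaces it.
import random

def prob_rate_higher(events_trt, n_trt, events_ctl, n_ctl,
                     draws=50_000, seed=42):
    rng = random.Random(seed)
    higher = 0
    for _ in range(draws):
        p_trt = rng.betavariate(1 + events_trt, 1 + n_trt - events_trt)
        p_ctl = rng.betavariate(1 + events_ctl, 1 + n_ctl - events_ctl)
        higher += p_trt > p_ctl
    return higher / draws

# Hypothetical pooled counts for one safety topic of interest.
print(prob_rate_higher(18, 600, 7, 590))
```

The output is a probability, not a verdict: it quantifies the evidence so that the multidisciplinary team can apply its medical judgement – the division of labour Tukey described.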

It’s one of the greatest challenges in the pharmaceutical industry, but at the same time, one of the greatest opportunities for growth and collaboration: as different region-specific regulatory initiatives go beyond ICH technical requirements, changes in emphasis develop. For post-market safety surveillance, the EMA/HMA have developed numerous and diverse good pharmacovigilance practices (GVP) modules that implement the 2010 EU PV legislation. Progress has been made in harmonisation across regions. For pre-market safety monitoring, the US FDA has placed responsibility squarely on sponsors to send a safety report only after they have judged there to be a reasonable possibility that the study drug caused the event. For other regional regulatory agencies (including in the EU), assessment of causality takes into account the opinion of the investigator as well as the sponsor. All SUSARs are required to be reported in an expedited manner. Consequently, it can be challenging to apply a single global approach to report handling. Regulators in different regions could be examining different SUSAR data as trials progress, which can introduce additional challenges.

Now that sponsors and the FDA have aligned on solutions for the final rule, other regulatory agencies – who have shared the same concerns as industry – will be realigning their expectations for safety reporting. The goal for safety is to identify, understand and manage risks so that we can deliver effective medicines to the right patients:

• Patient perspective: allow distinct patient populations to realise important health benefits of effective drugs

• Regulatory perspective: improve the way we identify patients at higher risk so that we can better position products in the marketplace

• Sponsor perspective: avoid premature termination of programmes that show promise, even in the face of important risks

Multidisciplinary sponsor teams and independent assessment entities working together can protect the safety of patients while also protecting the integrity of the data, allowing clinical trials to answer the important questions they were designed to answer. To achieve this, we need to stop adhering to an efficacy mindset for safety and embrace a safety mindset.