
PHUSE Data Transparency Autumn Event 2024 Summary

The Data Transparency Autumn Event 2024 was a resounding success, with 539 attendees joining us from across the industry.

We had the largest registration numbers ever received for a PHUSE Data Transparency event, and the level of engagement throughout all three days was incredible! Continue reading for reflections from our Data Transparency Leads.

For those who missed any sessions or would like to revisit key discussions, all recordings are now available on the PHUSE Advance Hub.

Day 1

– Jean-Marc Ferran, Qualiance

Day 1 of the PHUSE Data Transparency Autumn Event showcased cutting-edge developments in secure data sharing, anonymisation, and clinical trial transparency. The sessions focused on innovative approaches to balance privacy, data utility, and research advancement.

Amit Gautam (Abluva) introduced Purpose and Event-Based Access Control (PEBAC) as a transformative approach to data security. Traditional access control systems struggle with the growing complexity of data sharing and evolving cyber threats. PEBAC addresses these challenges by granting access based on specific purposes and triggering events, allowing for more dynamic, context-aware control. This method enhances both data protection and transparency, making it easier for organisations to share sensitive information securely while preventing breaches. Amit emphasised how PEBAC empowers businesses to enforce more granular access controls, offering a glimpse into the future of autonomous security.
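The core idea of purpose- and event-based access control can be illustrated with a minimal sketch. This is a hypothetical toy, not Abluva's implementation: the dataset name, purposes and event store below are all invented for illustration. A request is granted only when the declared purpose is permitted for the dataset and a triggering event has activated that purpose.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    dataset: str
    purpose: str

# Policy: which purposes each dataset may be accessed for (illustrative).
POLICY = {
    "trial_xyz": {"safety_review", "regulatory_submission"},
}

# Events that currently activate a purpose, e.g. a signal-detection alert.
ACTIVE_EVENTS = {
    ("trial_xyz", "safety_review"),
}

def is_allowed(req: AccessRequest) -> bool:
    # Both conditions must hold: purpose is permitted AND an event has triggered it.
    purpose_ok = req.purpose in POLICY.get(req.dataset, set())
    event_ok = (req.dataset, req.purpose) in ACTIVE_EVENTS
    return purpose_ok and event_ok

print(is_allowed(AccessRequest("ana", "trial_xyz", "safety_review")))       # True
print(is_allowed(AccessRequest("ana", "trial_xyz", "regulatory_submission")))  # False: no triggering event
```

The point of the second check is the "event-based" half of PEBAC: even a permitted purpose yields no access until context justifies it.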

Helen Spotswood (Roche) and Karolina Stępniak (AstraZeneca) discussed the complexities of rare disease data sharing. Rare disease trials are harder to anonymise due to the unique nature of the data, making controlled sharing more challenging. The presentation highlighted the work of the PHUSE Rare Disease/Small Population Data Sharing Working Group project, which aims to overcome barriers such as re-identification risks and privacy concerns. By collaborating on a white paper, the project hopes to create practical recommendations for more effective data sharing for rare diseases. Their work seeks to facilitate data sharing by promoting methods such as data minimisation and multimodal k-anonymity, while addressing privacy concerns.
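As background for readers unfamiliar with the k-anonymity concept mentioned above, a minimal sketch (not the Working Group's method) computes the k value of a dataset over a chosen set of quasi-identifiers: every combination of quasi-identifier values must be shared by at least k records. The records and field names below are invented for illustration.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the dataset's k value: the size of the smallest
    equivalence class over the given quasi-identifiers."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"age_band": "40-49", "sex": "F", "region": "EU"},
    {"age_band": "40-49", "sex": "F", "region": "EU"},
    {"age_band": "50-59", "sex": "M", "region": "US"},
    {"age_band": "50-59", "sex": "M", "region": "US"},
]

print(k_anonymity(records, ["age_band", "sex", "region"]))  # 2 -> dataset is 2-anonymous
```

In rare disease settings the equivalence classes are naturally tiny, which is exactly why achieving an acceptable k is so much harder than in large trials.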

Lisa Pilgram (University of Ottawa) shared insights from a clinical case study on anonymisation and the privacy–utility trade-off. Anonymising data is crucial for privacy, but it often comes at the cost of utility, impacting the reproducibility of scientific results. Using data from the German Chronic Kidney Disease study, Lisa compared two anonymisation strategies – generic and use case-specific – demonstrating that while tailored anonymisation improves data utility, it requires more resources. The study recommended that data controllers consider the impact on downstream analyses when anonymising data to maximise utility, and emphasised the importance of interpreting generic utility metrics with caution and of using more advanced methods such as estimate agreement.

Luk Arbuckle (Privacy Analytics) and Stephen Bamford (Johnson & Johnson) explored the challenges of diversity in clinical trial data and how to maintain transparency through statistical sharing. As clinical trials aim for more diverse participation, anonymising this data becomes more complex. Their session highlighted the transition from raw data sharing to statistical sharing – using synthetic datasets, aggregated counts, and advanced analytics (such as AI/ML). This method ensures clinical trial data remains both private and useful, and produces accurate, bias-mitigated statistics while safeguarding participant information. The presenters underscored the importance of generating ‘truthful statistics’, to maintain data integrity and enable meaningful insights.


Day 2

– Muhammad Oneeb Rehman Mian, Privacy Analytics

The second day featured four insightful presentations on the role of AI in clinical data protection and transparency. The first presentation, titled Disclosure Without Exposure – Upstream Strategies for Enabling AI to Effectively and Efficiently Protect Clinical Data, was delivered by Honz Slipka (Certara). Honz discussed strategies for proactively integrating data protection into the drug development process. He introduced the SMART approach, focusing on lean authoring, terminology harmonisation, and establishing sensitive data libraries. He emphasised that strategic medical authoring for regulatory transparency can help leverage AI-enabled software to standardise, automate and protect documents more accurately, consistently and efficiently.

Kathi Künnemann (Staburo) then spoke on Harnessing the Power of Artificial Intelligence (AI) in Accelerating Production of Clinical Documentation: A Case Study with Plain Language Summaries (PLS). Kathi explored the potential of AI to generate PLS and shared the findings from a case study. She described the immense potential of AI tools such as ChatGPT-4 to significantly reduce the time required to create PLS. Kathi did note that AI is not able to replace a medical writer's work completely, and human review is needed, particularly for addressing issues around the precision of the generated content. Overall, a combined AI and medical writer approach yielded PLS content that had the same degree of understandability as PLS content created by medical writers alone.

In the third presentation, Patricia Thaine (Private AI) presented on De-Identification of Medical Data for AI Training, Speeding Up Clinical Trials, and Enabling Healthcare Research. Patricia described the need for proper privacy-enabling practices for medical data, particularly for LLM applications, using techniques such as data minimisation, cleaning and sanitising the data, de-identification to remove sensitive information from outputs, and blocking personal and sensitive data from responses. Patricia advised attendees on what to look for in a de-identification solution (multimodal data privacy that is scalable, customisable and defensible) and spoke about third-party verification options such as model assessment and warranty.
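To make the de-identification step concrete, here is an illustrative toy: a rule-based redactor that masks a few common direct identifiers before free text is used for model training. All patterns and the sample note are invented for illustration; production solutions of the kind discussed in the talk use trained models across multiple modalities rather than a handful of regular expressions.

```python
import re

# Toy patterns for a few direct identifiers (illustrative, not exhaustive).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt contacted via jane.doe@example.com on 2024-03-15, tel 555-867-5309."
print(deidentify(note))
# -> "Pt contacted via [EMAIL] on [DATE], tel [PHONE]."
```

Replacing identifiers with labelled placeholders, rather than deleting them, preserves sentence structure and keeps the text usable for downstream training.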

The final presentation by Woo Song (Xogene), titled Leveraging AI Agentic Networks to Automate Transparency Activities, delved into the application of AI agents in automating tasks currently performed by humans in clinical trial transparency. Woo demonstrated how this novel approach can generate plain language protocols, summaries, and informed consent forms (ICFs) in a more robust manner, and at the same time reduce the workload for medical writers. Live demonstrations of AI-generated content using AI agentic networks provided valuable insights into the transformative potential of this approach in clinical trial transparency.


Day 3

– Devaki Thavarajah, Independent

Day 3 focused on the evolving methods and experiences in data anonymisation, clinical study reporting, and privacy. Speakers highlighted the challenges and innovations in maintaining data transparency while protecting participant privacy, with a special emphasis on lessons learned from real-world applications and industry-wide transformations.

Agnieszka Głowińska and Łukasz Szyszka (AstraZeneca) discussed the impact of aggregated-level data on the anonymisation of individual patient-level data (IPLD). While aggregated data is generally considered anonymous and not subject to further anonymisation, it can disrupt anonymisation processes when connected to IPLD. For example, summary demographic tables and subgrouped adverse event tables, if not properly secured, can reveal sensitive information, leading to potential re-identification of patients. The presentation stressed the importance of analysing entire clinical study reports (CSRs) to mitigate this risk and prevent inadvertent exposure of personal data through linkage between different data formats. The speakers raised awareness about the hidden risks of aggregated data, emphasising the need for thorough evaluation of tabular data to protect participant privacy effectively.

Alex Hughes (Roche) and Brent Caldwell (Novartis) reflected on a decade-long journey of evolving data sharing practices in the pharmaceutical industry. They traced the development of data transparency from the early days of ad hoc requests to today’s routine sharing of clinical trial data and documents for secondary research. The presentation highlighted the challenges faced by sponsors, such as regulatory hurdles and the complexities of multi-sponsor inquiries. The speakers also celebrated how these challenges have been overcome, leading to a more collaborative culture of data sharing across the industry. They explored the current landscape of data transparency, its ongoing implications and the likely future of data sharing in an increasingly regulated environment.

Cathal Gallagher and Laura Dodd (Instem) offered practical guidance on writing clinical study reports (CSRs) to facilitate easier anonymisation for data publication. They emphasised the importance of lean writing – including only essential information in the CSR and avoiding unnecessary details that belong in safety narratives. This not only helps regulatory reviewers focus on key study messages but also simplifies the anonymisation process for data publication. The presenters provided strategies for writing subject identifiers in text and tables, including using standardised terms, to ensure the CSRs are clear, concise and more readily anonymised. This approach aids in maintaining transparency while protecting sensitive data.

Véronique Poinsot (Sanofi) shared experiences in implementing the TransCelerate Privacy Methodology. Véronique discussed the operational aspects of adopting the methodology at scale, including decision-making processes, development of best practices, and communication with stakeholders. The presentation offered insights into the real-world challenges and successes of implementing a standardised privacy approach in a global organisation.


Mark your calendars for the 2025 Data Transparency Events!

● Data Transparency Winter Event: 4–6 February

● Data Transparency Autumn Event: 16–18 September

Sponsorship packages are now available, offering a prime opportunity to showcase your company to a targeted audience, gain valuable exposure and be at the forefront of crucial conversations shaping the future of data transparency.

Plus, the earlier you sign up, the more exposure you get!

For more information, view the prospectus and reach out to events@phuse.global.


Thank You to Our 2025 Sponsors Already Signed Up!
