Posts by Collection

Open Projects

Portfolio

Publications

Differential privacy in medical imaging applications

Published in Trustworthy AI in Medical Imaging. Academic Press, 2025

This work outlines the application of differential privacy in the medical domain, describing both opportunities and pitfalls.

Recommended citation: Kaiser, Johannes, Tamara Mueller, and Georgios Kaissis. "Differential privacy in medical imaging applications." Trustworthy AI in Medical Imaging. Academic Press, 2025. 411-424. https://www.sciencedirect.com/science/article/abs/pii/B9780443237614000328

Differentially Private Active Learning: Balancing Effective Data Selection and Privacy

Published in SatML, 2025

This paper explores and compares different approaches to active learning under differential privacy.

Recommended citation: Schwethelm, K., Kaiser, J., Kuntzer, J., Yiğitsoy, M., Rückert, D., & Kaissis, G. (2025, April). Differentially Private Active Learning: Balancing Effective Data Selection and Privacy. In 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 858-878). IEEE. https://arxiv.org/pdf/2410.00542

Visual Privacy Auditing with Diffusion Models

Published in TMLR, 2025

This paper explores a threat model in which the attacker has knowledge of the underlying data distribution.

Recommended citation: Schwethelm, K., Kaiser, J., Knolle, M., Lockfisch, S., Rueckert, D., & Ziller, A. (2024). Visual Privacy Auditing with Diffusion Models. arXiv preprint arXiv:2403.07588. https://arxiv.org/pdf/2403.07588

Talks

Talk on ‘Privacy-Fairness Tradeoff’ at Google DeepMind


As machine learning systems become increasingly embedded in decision-making processes, concerns about fairness and privacy are intensifying. While previous work has highlighted the inherent trade-offs between these goals, we challenge the assumption that fairness and differential privacy are necessarily at odds. We propose a novel perspective: differential privacy, when strategically applied with group-specific privacy budgets, can serve as a lever to enhance group fairness. Our approach empowers underrepresented or disadvantaged groups to contribute data with higher privacy budgets, improving their model performance without harming others. We empirically demonstrate a “rising tide” effect, where increased privacy budgets for specific groups benefit others through shared information. Furthermore, we show that fairness improvements can be achieved by selectively increasing privacy budgets for only the most informative group members. Our results on FairFace and CheXpert datasets reveal that sacrificing privacy in a controlled, group-aware manner can reduce bias and enhance fairness, offering a positive-sum alternative to traditional fairness–accuracy or privacy–utility trade-offs.

Teaching

Reconstruction attacks on Genetic Data

WS23/24 Practical: Applied Deep Learning in Medicine, TUM, 2024

This work shows that training data with higher spatial correlation, such as medical images, is more vulnerable to gray-box model inversion attacks than unstructured data like genetic profiles.

Rotation Equivariant ProtoPNets in Medicine

SS24 Practical: Applied Deep Learning in Medicine, TUM, 2024

ReProtoPNet is a novel AI model designed to enhance both the interpretability and rotation equivariance of machine learning systems, particularly for critical medical applications. This addresses a significant limitation of traditional deep learning models, which often function as “black-box” systems whose decision-making processes are opaque and difficult for healthcare professionals to trust.