Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

About me

Posts

On the Hypothesis Testing Perspective of DP

3 minute read

Published:

Differential privacy, in essence, tries to prevent an adversary from distinguishing whether a data point was present in the data underlying the output of a randomized mechanism. It therefore naturally aligns with a hypothesis testing interpretation: given an output of either $M(S)$ or $M(S')$,

$H_0$: the underlying data set is $S$
$H_1$: the underlying data set is $S'$

Differential privacy can then be measured in terms of the adversary's advantage in predictive performance on this hypothesis testing problem.
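One standard way to make this quantitative (a sketch of the well-known trade-off bound, stated here as background rather than as this post's own derivation): if $M$ is $(\varepsilon, \delta)$-differentially private, then any test with Type I error $\alpha$ and Type II error $\beta$ in the above problem must satisfy

$$1 - \beta \le e^{\varepsilon}\alpha + \delta \quad \text{and} \quad 1 - \alpha \le e^{\varepsilon}\beta + \delta,$$

so for small $\varepsilon$ and $\delta$, no adversary can do much better than random guessing.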

Bayesian view on DP

4 minute read

Published:

Often, it is assumed that the adversary has a balanced prior between these two hypotheses, i.e., the prior probability of a data point being in the data set is assumed to equal the probability of it not being in the data set. However, this potentially diverges from the real-world setting, in which an adversary queries large amounts of data knowing that only a few points will be contained in the data set on which the attacked mechanism bases its output.
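As a minimal illustration of why the prior matters (assuming the standard Bayesian reading of $\varepsilon$-DP; this need not match the post's exact derivation): if the adversary's prior probability of membership is $\pi$, then the $\varepsilon$-DP likelihood-ratio bound combined with Bayes' rule caps the posterior at

$$\Pr[\text{member} \mid o] = \frac{\pi \Pr[o \mid \text{member}]}{\pi \Pr[o \mid \text{member}] + (1-\pi)\Pr[o \mid \text{non-member}]} \le \frac{\pi e^{\varepsilon}}{\pi e^{\varepsilon} + 1 - \pi},$$

which stays close to $\pi$ when $\pi \ll 1$, even for moderate $\varepsilon$.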

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Open Projects

Portfolio

Publications

Differential privacy in medical imaging applications

Published in Trustworthy AI in Medical Imaging. Academic Press, 2025

This work outlines the application of differential privacy in the medical domain, describing opportunities and pitfalls.

Recommended citation: Kaiser, Johannes, Tamara Mueller, and Georgios Kaissis. "Differential privacy in medical imaging applications." Trustworthy AI in Medical Imaging. Academic Press, 2025. 411-424. https://www.sciencedirect.com/science/article/abs/pii/B9780443237614000328

Differentially Private Active Learning: Balancing Effective Data Selection and Privacy

Published in SatML, 2025

This paper explores and compares different approaches to active learning under differential privacy.

Recommended citation: Schwethelm, K., Kaiser, J., Kuntzer, J., Yiğitsoy, M., Rückert, D., & Kaissis, G. (2025, April). Differentially Private Active Learning: Balancing Effective Data Selection and Privacy. In 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 858-878). IEEE. https://arxiv.org/pdf/2410.00542

Visual Privacy Auditing with Diffusion Models

Published in TMLR, 2025

This paper explores a threat model in which the attacker has knowledge of the underlying data distribution.

Recommended citation: Schwethelm, K., Kaiser, J., Knolle, M., Lockfisch, S., Rueckert, D., & Ziller, A. (2024). Visual Privacy Auditing with Diffusion Models. arXiv preprint arXiv:2403.07588. https://arxiv.org/pdf/2403.07588

Talks

Talk on ‘Privacy-Fairness Tradeoff’ at Google DeepMind

Published:

As machine learning systems become increasingly embedded in decision-making processes, concerns about fairness and privacy are intensifying. While previous work has highlighted the inherent trade-offs between these goals, we challenge the assumption that fairness and differential privacy are necessarily at odds. We propose a novel perspective: differential privacy, when strategically applied with group-specific privacy budgets, can serve as a lever to enhance group fairness. Our approach empowers underrepresented or disadvantaged groups to contribute data with higher privacy budgets, improving their model performance without harming others. We empirically demonstrate a “rising tide” effect, where increased privacy budgets for specific groups benefit others through shared information. Furthermore, we show that fairness improvements can be achieved by selectively increasing privacy budgets for only the most informative group members. Our results on FairFace and CheXpert datasets reveal that sacrificing privacy in a controlled, group-aware manner can reduce bias and enhance fairness, offering a positive-sum alternative to traditional fairness–accuracy or privacy–utility trade-offs.
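As a rough illustration of the core idea (a minimal sketch using the classical Gaussian-mechanism calibration, not the implementation presented in the talk; the function name and the `group_epsilons` mapping are hypothetical):

```python
import math

def gaussian_noise_scale(epsilon: float, delta: float, sensitivity: float = 1.0) -> float:
    """Noise scale for the classical Gaussian mechanism:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon,
    valid for 0 < epsilon < 1."""
    assert 0.0 < epsilon < 1.0, "classical calibration assumes epsilon < 1"
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

# Hypothetical group-specific budgets: an underrepresented group opts in
# to a larger epsilon (weaker privacy), so its contributions are perturbed less.
group_epsilons = {"majority": 0.3, "underrepresented": 0.9}
delta = 1e-5

group_sigmas = {g: gaussian_noise_scale(eps, delta) for g, eps in group_epsilons.items()}
print(group_sigmas)  # smaller sigma for the group with the larger budget
```

Under this calibration, the group with the larger budget contributes less-noisy updates, which is one way the "rising tide" effect described above could be realized in training.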

Teaching

Reconstruction attacks on Genetic Data

WS23/24 Practical: Applied Deep Learning in Medicine, TUM, 2024

This work shows that training data with higher spatial correlation, such as medical images, is more vulnerable to gray-box model inversion attacks than unstructured data like genetic profiles.

Rotation Equivariant ProtoPNets in Medicine

SS24 Practical: Applied Deep Learning in Medicine, TUM, 2024

ReProtoPNet is a novel AI model specifically designed to enhance both the interpretability and rotation invariance of machine learning systems, particularly for crucial medical applications. This development addresses a significant limitation of traditional AI models, which often function as “black-box” systems, making their decision-making processes opaque and difficult for healthcare professionals to trust.