Published:
Differential privacy, in essence, tries to prevent an adversary from distinguishing whether a data point was present in the data underlying the output of a randomized mechanism. It therefore aligns naturally with a hypothesis-testing interpretation: given an output of either $M(S)$ or $M(S')$, the adversary tests <p align="center"> $H_0$: the underlying data set is $S$
$H_1$: the underlying data set is $S'$ </p> Differential privacy can then be measured in terms of the adversary's advantage in predictive performance on this hypothesis-testing problem.
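A standard result makes this advantage quantitative (notation introduced here, not in the post above: $\alpha$ is the attacker's Type I error and $\beta$ the Type II error): a mechanism $M$ is $(\varepsilon, \delta)$-differentially private exactly when every such test satisfies <p align="center"> $\alpha + e^{\varepsilon}\beta \ge 1 - \delta$ and $\beta + e^{\varepsilon}\alpha \ge 1 - \delta$ </p> For pure $\varepsilon$-DP and a balanced prior over $H_0$ and $H_1$, adding the two inequalities caps the attacker's advantage over random guessing at $(e^{\varepsilon}-1)/(e^{\varepsilon}+1)$.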
Published:
Often, the adversary is assumed to have a balanced prior over these two hypotheses, i.e., the prior probability of a data point being in the data set is assumed to equal the probability of it not being in the data set. However, this potentially diverges from realistic settings, in which an adversary queries the mechanism about a large number of candidate data points while knowing that only a few of them are actually contained in the data set on which the attacked mechanism bases its output.
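To see how much the prior matters, here is a minimal sketch (my own illustration with hypothetical attack error rates, not numbers taken from any of the posts or papers listed here): by Bayes' rule, the attacker's posterior belief that a queried point is a member collapses as the prior probability of membership shrinks.

```python
# Minimal sketch: how the attacker's posterior belief in "member" depends on the prior.
# The TPR/FPR values below are illustrative placeholders, not measured attack numbers.

def posterior_member(prior: float, tpr: float, fpr: float) -> float:
    """Bayes' rule: P(member | attack predicts 'member')."""
    evidence = prior * tpr + (1.0 - prior) * fpr
    return prior * tpr / evidence

tpr, fpr = 0.9, 0.1  # hypothetical attack operating point

for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior={prior:>6}: posterior={posterior_member(prior, tpr, fpr):.3f}")

# With a balanced prior (0.5) the posterior is 0.9, but at a prior of 0.001
# the same attack leaves the attacker only about 0.9% confident in membership.
```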
Published in Trustworthy AI in Medical Imaging. Academic Press, 2025
This work outlines the application of differential privacy in the medical domain, describing both opportunities and pitfalls.
Recommended citation: Kaiser, Johannes, Tamara Mueller, and Georgios Kaissis. "Differential privacy in medical imaging applications." Trustworthy AI in Medical Imaging. Academic Press, 2025. 411-424. https://www.sciencedirect.com/science/article/abs/pii/B9780443237614000328
Published in SatML, 2025
This paper explores and compares different approaches to active learning under differential privacy.
Recommended citation: Schwethelm, K., Kaiser, J., Kuntzer, J., Yiğitsoy, M., Rückert, D., & Kaissis, G. (2025, April). Differentially Private Active Learning: Balancing Effective Data Selection and Privacy. In 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 858-878). IEEE. https://arxiv.org/pdf/2410.00542
Published in TMLR, 2025
This paper explores a threat model in which the attacker has knowledge of the underlying data distribution.
Recommended citation: Schwethelm, K., Kaiser, J., Knolle, M., Lockfisch, S., Rueckert, D., & Ziller, A. (2024). Visual Privacy Auditing with Diffusion Models. arXiv preprint arXiv:2403.07588. https://arxiv.org/pdf/2403.07588
Published in ICLR, 2025
This paper introduces Laplace Sample Information (LSI), a novel measure of data informativeness grounded in information theory.
Recommended citation: Kaiser, J., Schwethelm, K., Rueckert, D., & Kaissis, G. Laplace Sample Information: Data Informativeness Through a Bayesian Lens. In The Thirteenth International Conference on Learning Representations. https://arxiv.org/pdf/2505.15303
Published:
As machine learning systems become increasingly embedded in decision-making processes, concerns about fairness and privacy are intensifying. While previous work has highlighted the inherent trade-offs between these goals, we challenge the assumption that fairness and differential privacy are necessarily at odds. We propose a novel perspective: differential privacy, when strategically applied with group-specific privacy budgets, can serve as a lever to enhance group fairness. Our approach empowers underrepresented or disadvantaged groups to contribute data with higher privacy budgets, improving their model performance without harming others. We empirically demonstrate a “rising tide” effect, where increased privacy budgets for specific groups benefit others through shared information. Furthermore, we show that fairness improvements can be achieved by selectively increasing privacy budgets for only the most informative group members. Our results on FairFace and CheXpert datasets reveal that sacrificing privacy in a controlled, group-aware manner can reduce bias and enhance fairness, offering a positive-sum alternative to traditional fairness–accuracy or privacy–utility trade-offs.
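The paper's actual method applies group-specific budgets during private model training; below is only a minimal toy sketch of my own (not the paper's code) of the underlying lever: if each group is assigned its own privacy budget via the Gaussian mechanism, granting a small or disadvantaged group a larger budget directly lowers the noise added to that group's statistics.

```python
# Toy sketch of group-specific privacy budgets (illustrative only; the paper's
# method applies per-group budgets during DP training, not to simple means).
import numpy as np

def gaussian_sigma(eps, delta, sensitivity):
    """Classical Gaussian-mechanism noise scale for (eps, delta)-DP (requires eps < 1)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

def dp_group_mean(values, eps, delta=1e-5, clip=1.0, rng=None):
    """Release a clipped per-group mean under a group-specific (eps, delta) budget."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, 0.0, clip)
    sensitivity = clip / len(values)  # replacing one record shifts the mean by at most this
    return float(clipped.mean() + rng.normal(0.0, gaussian_sigma(eps, delta, sensitivity)))

rng = np.random.default_rng(42)
majority = rng.uniform(size=5000)  # large, well-represented group
minority = rng.uniform(size=100)   # small, underrepresented group

# Hypothetical per-group budgets: the minority group opts into a larger eps,
# so its released statistic is less noisy despite its small sample size.
print("majority, eps=0.5:", dp_group_mean(majority, eps=0.5, rng=rng))
print("minority, eps=0.5:", dp_group_mean(minority, eps=0.5, rng=rng))
print("minority, eps=0.9:", dp_group_mean(minority, eps=0.9, rng=rng))
```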
WS23/24 Practical: Applied Deep Learning in Medicine, TUM, 2024
This work shows that training data with higher spatial correlation, such as medical images, is more vulnerable to gray-box model inversion attacks than unstructured data like genetic profiles.
SS24 Practical: Applied Deep Learning in Medicine, TUM, 2024
ReProtoPNet is a novel AI model specifically designed to enhance both the interpretability and rotation invariance of machine learning systems, particularly for crucial medical applications. This development addresses significant limitations of traditional AI models, which often function as “black-box” systems, making their decision-making processes opaque and difficult for healthcare professionals to trust.
Master Thesis, TUM, 2025
The increasing deployment of machine learning models in critical domains such as finance, criminal justice, and healthcare has raised concerns about the ethical implications of these systems.
WS24/25 Practical: Applied Deep Learning in Medicine, TUM, 2025
Computer-aided diagnosis systems are a powerful tool for supporting physicians in identifying diseases in medical images.