Computational Privacy Group

We are a young research group at Imperial College London studying the privacy risks arising from large-scale behavioral datasets. We develop attack models and design solutions to collect and use data safely.

Today, people leave digital breadcrumbs wherever they go and whatever they do, online and offline. This data dramatically increases our capacity to understand and affect the behavior of individuals and collectives; it has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioral datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.

Our projects have already demonstrated the limits of data anonymization (or de-identification) in effectively protecting individuals' privacy in Big Data and the risks of inference in behavioral datasets coming from mobile phones, and we have developed solutions that allow individuals and companies to share data safely. While technical in nature, our work has had significant public policy implications, cited for instance in reports by the United Nations, the FTC, and the European Commission, as well as in briefs to the U.S. Supreme Court.

Research Areas


Identification Learning

We develop statistical and machine learning techniques to uniquely identify individuals in large-scale behavioral datasets. These techniques show the limits of pseudonymization and anonymization in protecting people's privacy.
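One way to quantify this limit is "unicity": the fraction of individuals whose pseudonymized trace is uniquely pinned down by a handful of known data points. The sketch below is a minimal illustration on synthetic data; the dataset, the `unicity` function, and all parameters are assumptions for this example, not the group's actual code or results.

```python
import random

random.seed(0)

# Synthetic pseudonymized dataset: each "user" is a set of
# (location_id, hour) points, standing in for a mobility trace.
users = {
    f"user{i}": {(random.randrange(50), random.randrange(24)) for _ in range(30)}
    for i in range(200)
}

def unicity(users, p, trials=300):
    """Estimate the fraction of users uniquely re-identified by p
    randomly chosen points from their own trace."""
    unique = 0
    ids = list(users)
    for _ in range(trials):
        target = random.choice(ids)
        # An adversary learns p points about the target (e.g. from a receipt
        # or a tweet) and looks for traces consistent with all of them.
        points = random.sample(sorted(users[target]), p)
        matches = [u for u, trace in users.items()
                   if all(pt in trace for pt in points)]
        if matches == [target]:
            unique += 1
    return unique / trials

for p in (1, 2, 4):
    print(f"p={p} known points -> unicity ~ {unicity(users, p):.2f}")
```

Even on this toy data, unicity rises quickly with the number of known points, which is the core reason pseudonymization alone offers weak protection for sparse, high-dimensional behavioral data.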

Safe Data Sharing

We build privacy-preserving techniques to collect and use data responsibly. For instance, we are building the OPAL (Open Algorithms) platform with MIT to safely share location data, and openPDS to give individuals control over their data.

Societal Impact of AI

Modern privacy is not only about controlling access to information but also about controlling how that information is used, e.g., for insurance pricing or ad targeting. We study fairness in algorithmic decision-making and, more generally, the impact of AI on society.

News and Events


    Best paper award at SaTML 2025
    Apr 15, 2025

    We were lucky enough to win the Best Paper Award at SaTML 2025 with our Systematization of Knowledge (SoK) on Membership Inference Attacks (MIAs) against LLMs.

    Yves-Alexandre de Montjoye presenting at ELSA Workshop
    Mar 17, 2025

    Yves-Alexandre de Montjoye will be presenting at the ELSA Workshop on Privacy-Preserving Machine Learning taking place on 17-21 March, 2025. The workshop brings together researchers and practitioners to discuss recent developments in privacy-preserving machine learning techniques …

    Yves-Alexandre de Montjoye at Dagstuhl Seminar
    Mar 14, 2025

    Yves-Alexandre de Montjoye was invited to give a talk at the Dagstuhl Seminar "PETs and AI: Privacy Washing and the Need for a PETs Evaluation Framework."

    More News

Selected publications


The full list of our papers is available on Google Scholar.

Our Team


Contact


Email: X@Y where X=demontjoye, Y=imperial.ac.uk.
Administrator (if urgent): Amandeep Bahia, +44 20 7594 8612

We are located at the Data Science Institute in the William Penney Laboratory. The best entry point is via Exhibition Road, through the Business School (see map below). From there, take the stairs towards the outdoor court. Enter the outdoor corridor after the court and the institute will be on your right (please press the Data Science intercom button for access).

Please address mail to:
Department of Computing
Imperial College London
180 Queen's Gate
London SW7 2AZ