Today, people leave digital breadcrumbs wherever they go and in whatever they do, online and offline. This data dramatically increases our capacity to understand and affect the behavior of individuals and collectives, and has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioral datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.
Our projects have already demonstrated the limits of data anonymization (or de-identification) in effectively protecting individuals' privacy in Big Data, shown the risk of inference in behavioral datasets derived from mobile phones, and developed solutions that allow individuals and companies to share data safely. While technical in nature, our work has had significant public policy implications, for instance in reports by the United Nations, the FTC, and the European Commission, as well as in briefs to the U.S. Supreme Court.
We develop statistical and machine learning techniques to uniquely identify individuals in large-scale behavioral datasets. These techniques show the limits of pseudonymization and anonymization in protecting people's privacy.
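As a rough illustration of what "uniquely identifying" means here, the minimal Python sketch below (not the group's actual code) estimates the unicity of a pseudonymized dataset: the fraction of individuals who are pinned down by just a few spatio-temporal points from their trace. The function name, parameters, and toy data are hypothetical and for illustration only.

```python
# Minimal sketch of a unicity estimate for pseudonymized traces.
# Assumption: traces maps a pseudonym to a set of (place, hour) points.
import random

def unicity(traces, k=4, seed=0):
    """Return the fraction of pseudonyms uniquely matched by k known points."""
    rng = random.Random(seed)
    unique = 0
    for pid, points in traces.items():
        # Pretend an attacker learned k points about this person (auxiliary info).
        known = set(rng.sample(sorted(points), min(k, len(points))))
        # Count how many traces in the dataset contain all k known points.
        matches = sum(1 for other in traces.values() if known <= other)
        unique += (matches == 1)
    return unique / len(traces)

# Toy example: three pseudonymized users with four observed points each.
traces = {
    "u1": {("cafe", 8), ("office", 9), ("gym", 18), ("home", 22)},
    "u2": {("cafe", 8), ("office", 9), ("park", 18), ("home", 22)},
    "u3": {("station", 7), ("office", 9), ("gym", 18), ("home", 23)},
}
print(unicity(traces, k=2))  # fraction of users re-identified from 2 points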
Modern privacy is not only about controlling access to information but also about controlling how this information is used, e.g. for insurance pricing or ad targeting. We study fairness in algorithmic decision-making and, more generally, the impact of AI on society.
Ana-Maria Cretu and CPG alumnus Florimond Houssiau (currently a postdoc at The Alan Turing Institute) presented their paper “QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems” at the ACM CCS 2022 conference in Los Angeles.
In a new paper published in Science Advances, Arnaud J. Tournier and Yves-Alexandre de Montjoye propose an entropy-based profiling attack for location data, showing that much more auxiliary information than previously believed is available to re-identify individuals in …
In their new paper, Andrea Gadotti, Florimond Houssiau, Meenatchi Sundaram Muthu Selva Annamalai, and Yves-Alexandre de Montjoye investigate the practical guarantees of Apple’s implementation of local differential privacy in iOS and macOS. They propose a new type of attack, …
Yves-Alexandre is an Associate Professor at Imperial College London. He received his PhD from MIT before joining Harvard IQSS for his postdoc. He is currently a Special Adviser on AI and Data Protection to EC Justice Commissioner Reynders and a Parliament-appointed expert to the Belgian Data Protection Authority (APD-GBA).
Luc is a postdoctoral researcher studying the limits of privacy in the modern age. He received a PhD in applied mathematics from UCLouvain, and his work has challenged the technical and legal adequacy of current de-identification techniques to anonymize data.
Originally from Romania, Ana-Maria has a background in mathematics and computer science. Her research focuses on new machine learning-based privacy and security attacks against large-scale behavioral datasets, machine learning models, query-based systems, and client-side scanning systems.
Andrea received a BSc in mathematics and an MSc in mathematical logic from the University of Turin. His research interests include differential privacy, privacy attacks against systems processing personal data, and the design of privacy-preserving mechanisms.
Originally from Canada, Vince studied mathematics at the University of British Columbia for his Bachelor’s degree (BA with a major in Mathematics and a minor in Philosophy) and his Master’s degree (MSc in Mathematics). His research interests include privacy attacks against aggregate location data, synthetic data generation, AI fairness, and interpretable machine learning.
Florent received his BSc and MSc in theoretical computer science from ENS Lyon. He also holds a BSc in pure mathematics from University Lyon 1 and an engineering degree from Centrale Lyon. His research interests include privacy attacks against machine learning systems and generative systems such as GANs.
Originally from India, Shubham received a BTech in computer science and engineering from IIT Bombay. His research interests include fairness in machine learning systems, scalable privacy-preserving systems, and network security.
Originally from China, Yifeng obtained his bachelor’s degree in Automation from Tsinghua University. He then completed his master’s degree and spent one year working as a full-time research assistant in the United States. His research interests include privacy-preserving machine learning, synthetic data generation that protects personal anonymity, and attack methods against census datasets and generic ML models.
Originally from Belgium, Matthieu obtained his BSc in Mechanical Engineering from KU Leuven. He then spent four years in the US: two years of graduate study in Energy and Computational Science and two years working as a data scientist at a consulting firm. His research interests include privacy attacks against computer vision and language models, as well as bias and fairness in machine learning.
Bozhidar obtained his MSc in Data Science from the University of Ljubljana and his BSc in Computer Science from the Ss. Cyril and Methodius University in Skopje. His research interests include machine learning and privacy attacks against query-based systems.
Assistant (if urgent): Fay Miller, +44 20 7594 8612
We are located at the Data Science Institute in the William Penney Laboratory. The best entry point is via Exhibition Road, through the Business School (see map below). From there, just take the stairs towards the outdoor court. Enter the outdoor corridor after the court and the institute will be on your right (please press the Data Science intercom button for access).
Please address mail to:
Department of Computing
Imperial College London
180 Queen's Gate
London SW7 2AZ