Today people leave digital breadcrumbs wherever they go and whatever they do, online and offline. This data dramatically increases our capacity to understand and affect the behavior of individuals and collectives. It has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioral datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.
Our projects have already demonstrated the limits of data anonymization (or de-identification) in effectively protecting the privacy of individuals in Big Data, as well as the risk of inference in behavioral datasets coming from mobile phones, and we have developed solutions that allow individuals and companies to share data safely. While technical in nature, our work has had significant public policy implications, informing, for instance, reports by the United Nations, the FTC, and the European Commission, as well as briefs to the U.S. Supreme Court.
We develop statistical and machine learning techniques to uniquely identify individuals in large-scale behavioral datasets. These techniques show the limits of pseudonymization and anonymization in protecting people's privacy.
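A core metric behind this line of work is *unicity*: the fraction of individuals who are uniquely identified by a handful of points from their own trace. The sketch below is a minimal toy illustration on synthetic data, not one of the group's actual techniques; all names and parameters (`n_people`, `n_points`, `p`) are assumptions made up for the example.

```python
import random

# Hypothetical toy dataset: each "trace" is a set of (location, hour) points
# standing in for a pseudonymized behavioral record.
random.seed(0)
n_people, n_points, p = 200, 30, 4
traces = [
    {(random.randrange(50), random.randrange(24)) for _ in range(n_points)}
    for _ in range(n_people)
]

def unicity(traces, p):
    """Fraction of traces uniquely matched by p points of auxiliary knowledge."""
    unique = 0
    for trace in traces:
        # An attacker knows p points about the target (e.g. from social media).
        known = set(random.sample(sorted(trace), p))
        # Count how many pseudonymized traces are consistent with those points.
        matches = sum(1 for t in traces if known <= t)
        if matches == 1:
            unique += 1
    return unique / len(traces)

print(f"unicity with {p} known points: {unicity(traces, p):.2f}")
```

When unicity is high, knowing just a few outside points about someone suffices to single out their "anonymous" record, which is why pseudonymization alone offers weak protection.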
Modern privacy is not only about controlling access to information but also about controlling how that information is used, e.g. for insurance pricing or ad targeting. We study fairness in algorithmic decision-making and, more generally, the impact of AI on society.
The CPG attended the CNIL Privacy Research Day in Paris in June 2023. Ana-Maria Crețu presented her paper on automated privacy attacks (QuerySnout), Shubham Jain presented the group's two papers on perceptual hashing, and Florent Guépin presented his paper on correlation inference attacks.
Ana-Maria Crețu and CPG alumnus Florimond Houssiau (currently a postdoc at The Alan Turing Institute) presented their paper “QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems” at the ACM CCS 2022 conference in Los Angeles.
In a new paper published in Science Advances, Arnaud J. Tournier and Yves-Alexandre de Montjoye propose an entropy-based profiling attack for location data, showing that much more auxiliary information than previously believed is available to re-identify individuals in …
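The paper's method is not detailed here, so the snippet below is only a generic illustration of the underlying idea: the Shannon entropy of a person's location histogram measures how spread out, and hence how distinctive, their mobility profile is. The function name and sample data are made up for the example.

```python
import math
from collections import Counter

def location_entropy(visits):
    """Shannon entropy (bits) of a list of visited locations."""
    counts = Counter(visits)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A predictable person concentrates visits in few places (low entropy);
# a varied profile spreads across many places (high entropy) and is more
# distinctive, i.e. easier to match against auxiliary data.
homebody = ["home"] * 9 + ["work"]
traveler = ["home", "work", "gym", "cafe", "park", "airport"]
print(location_entropy(homebody))
print(location_entropy(traveler))
```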
Yves-Alexandre is an Associate Professor at Imperial College London. He received his PhD from MIT before joining Harvard IQSS for his postdoc. He is currently a Special Adviser on AI and Data Protection to EC Justice Commissioner Reynders and a Parliament-appointed expert to the Belgian Data Protection Authority (APD-GBA).
Florent received his BSc and MSc in theoretical computing from ENS Lyon. He also holds a BSc in pure maths from University Lyon 1 and an engineering degree from Centrale Lyon. His research interests include privacy attacks against machine learning systems and generative systems such as GANs.
Visiting PhD student
Yuhan obtained her BSc in computer science at Xiamen University in China. While pursuing her Ph.D. at Renmin University of China since 2019, she worked as a research intern for a year at DAMO Academy, Alibaba Group. As a one-year visiting student, her research interests include differential privacy and privacy attacks against statistical data analysis and ML systems.
Nataša obtained her BSc degree in Computer Science at the University of Novi Sad in Serbia. She then obtained her MSc degree from EPFL, where she conducted research on responsible AI in both academic and industry settings. Her research interests include explainable AI, algorithmic fairness, and privacy-preserving ML.
Originally from China, Yifeng obtained his BSc in Automation from Tsinghua University. After his master's degree, he spent one year working as a full-time research assistant in the US. His research interests include privacy-preserving ML, synthetic data generation that protects personal anonymity, and attack methods against census datasets and ML models.
Matthieu obtained his BSc in Mechanical Engineering from KU Leuven. He then spent four years in the US: two years of graduate study (Energy, Computer Science) and two years working as a data scientist at McKinsey & Company. His research interests include privacy attacks against (large) language models and ML systems.
Igor obtained his undergraduate degree in Computer Science in 2013 and has been working as a Software Engineer since then, most recently at Meta AI. His research interests include differential privacy and privacy attacks against ML systems.
Bozhidar obtained his MSc in Data Science at the University of Ljubljana and his BSc in Computer Science at the Ss. Cyril and Methodius University in Skopje. His research interests include machine learning and privacy attacks against query-based systems.
Administrator (if urgent): Amandeep Bahia, +44 20 7594 8612
We are located at the Data Science Institute in the William Penney Laboratory. The best entry point is via Exhibition Road, through the Business School (see map below). From there, take the stairs towards the outdoor court. Enter the outdoor corridor after the court and the institute will be on your right (please press the Data Science intercom button for access).
Please address mail to:
Department of Computing
Imperial College London
180 Queen's Gate
London SW7 2AZ