We are a young research group at Imperial College London studying the privacy risks arising from large-scale behavioral datasets. We develop attack models and design solutions to collect and use data safely.
Today, people leave digital breadcrumbs wherever they go and in whatever they do, online and offline. This data dramatically increases our capacity to understand and affect the behavior of individuals and collectives and has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioral datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.
Our projects have already demonstrated the limits of data anonymization (or de-identification) in effectively protecting the privacy of individuals in Big Data and the risk of inference in behavioral datasets coming from mobile phones, and have developed solutions that allow individuals and companies to share data safely. While technical in nature, our work has had significant public policy implications, for instance in reports by the United Nations, the FTC, and the European Commission, as well as in briefs to the U.S. Supreme Court.
We develop statistical and machine learning techniques to uniquely identify individuals in large-scale behavioral datasets. These techniques show the limits of pseudonymization and anonymization in protecting people's privacy.
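As a toy illustration of this kind of analysis (not our actual pipeline), the sketch below estimates the "unicity" of a pseudonymized location dataset: the fraction of users who are pinned down uniquely by a handful of known (place, time) points. The data is synthetic and the helper names (`make_toy_traces`, `unicity`) are hypothetical.

```python
import random

# Hypothetical toy data: user_id -> set of (antenna_id, hour) points,
# standing in for pseudonymized mobility records.
def make_toy_traces(n_users=1000, n_antennas=50, n_hours=24, points_per_user=30, seed=0):
    rng = random.Random(seed)
    return {
        user: {(rng.randrange(n_antennas), rng.randrange(n_hours))
               for _ in range(points_per_user)}
        for user in range(n_users)
    }

def unicity(traces, k=4, n_trials=500, seed=1):
    """Estimate the fraction of users uniquely re-identified by k random
    points drawn from their own trace (higher = weaker pseudonymization)."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(n_trials):
        target = rng.choice(users)
        known_points = rng.sample(sorted(traces[target]), k)
        # Count how many traces in the dataset contain all k known points.
        matches = sum(1 for trace in traces.values()
                      if all(p in trace for p in known_points))
        if matches == 1:
            unique += 1
    return unique / n_trials

if __name__ == "__main__":
    traces = make_toy_traces()
    print(f"Estimated unicity with 4 known points: {unicity(traces, k=4):.2f}")
```

The point of such a measurement is that a dataset can be stripped of names and still leave most of its users uniquely identifiable from a few auxiliary observations.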
Modern privacy is not only about controlling information but also about controlling how this information is used, e.g. for insurance pricing or ad targeting. We study fairness in algorithmic decision-making and, more generally, the impact of AI on society.
Matthieu Meeus and Igor Shilov presented their paper [Copyright Traps for Large Language Models](https://arxiv.org/pdf/2402.09363) at ICML 2024 in Vienna.
Nataša Krčo and Igor Shilov led a session exploring the robustness of modern data privacy systems. Learn more [here](https://www.imperial.ac.uk/news/254914/imperials-computational-privacy-group-lead-session/).
Dr Yves-Alexandre de Montjoye hosted a session on using technology to detect illegal content and assessing the robustness of modern data privacy mechanisms.
Email: X@Y, where X = demontjoye and Y = imperial.ac.uk.
Administrator (if urgent): Amandeep Bahia, +44 20 7594 8612
We are located at the Data Science Institute in the William Penney Laboratory. The best entry point is via Exhibition Road, through the Business School (see map below). From there, take the stairs towards the outdoor court. Enter the outdoor corridor after the court and the institute will be on your right (please press the Data Science intercom button for access).
Please address mail to:
Department of Computing
Imperial College London
180 Queen's Gate
London SW7 2AZ