Today, people leave digital breadcrumbs wherever they go and whatever they do, online and offline. This data dramatically increases our capacity to understand and affect the behaviour of individuals and collectives, and has been key to recent advances in AI, but it also raises fundamentally new privacy and fairness questions. The Computational Privacy Group aims to provide leadership, in the UK and beyond, in the safe, anonymous, and ethical use of large-scale behavioural datasets coming from Internet of Things (IoT) devices, mobile phones, credit cards, browsers, etc.
Our projects have already demonstrated the limits of data anonymization (or de-identification) in effectively protecting the privacy of individuals in Big Data, shown the risk of inference in behavioral datasets coming from mobile phones, and developed solutions that allow individuals and companies to share data safely. While technical in nature, our work has had significant public policy implications, for instance in reports of the United Nations, the FTC, and the European Commission, as well as in briefs to the U.S. Supreme Court.
We develop statistical and machine learning techniques to uniquely identify individuals in large-scale behavioral datasets. These techniques demonstrate the limits of pseudonymization and anonymization in protecting people's privacy.
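As a minimal illustration of this kind of analysis, the sketch below estimates the "unicity" of a toy pseudonymized dataset: the fraction of users whose trace is the only one containing a handful of their own data points. The dataset, function name, and parameters are all hypothetical, chosen only to show the idea; real analyses run on datasets with millions of traces.

```python
import random

def unicity(traces, p, trials=200, seed=0):
    """Estimate the fraction of users uniquely identified by p
    randomly chosen points from their own trace."""
    rng = random.Random(seed)
    users = [u for u, pts in traces.items() if len(pts) >= p]
    hits = 0
    for _ in range(trials):
        user = rng.choice(users)
        # An adversary learns p points about the target...
        sample = set(rng.sample(sorted(traces[user]), p))
        # ...and checks how many pseudonymized traces contain them all.
        matches = [u for u, pts in traces.items() if sample <= pts]
        if matches == [user]:
            hits += 1
    return hits / trials

# Toy dataset: pseudonym -> set of (location, hour) points.
traces = {
    "u1": {("A", 8), ("B", 9), ("C", 18)},
    "u2": {("A", 8), ("B", 9), ("D", 18)},
    "u3": {("E", 7), ("F", 12), ("G", 20)},
}
print(unicity(traces, p=2))
```

Even though no names appear in the data, a high unicity score means a few outside observations (e.g. two places someone was seen) suffice to single out their entire record.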
Modern privacy is not only about controlling access to information but also about controlling how that information is used, e.g. for insurance pricing or ad targeting. We study fairness in algorithmic decision-making and, more generally, the impact of AI on society.
We studied Diffix, a system developed and commercialized by Aircloak that anonymises data by adding noise to SQL queries sent by analysts. In a manuscript we just published on arXiv, we show that Diffix is vulnerable to a noise-exploitation attack. In short, our attack uses the …
Recent revelations about Cambridge Analytica show how vulnerable our privacy is to seemingly innocuous apps installed by our friends. Here we show how our privacy is affected by the people we interact with. Node-based intrusions are becoming one of the main threats to our privacy.
Artificial Intelligence (AI) has the potential to fundamentally change the way we work, live, and interact. There is, however, no general AI out there, and the accuracy of current machine learning models largely depends on the data on which they have been trained. For the coming …
Thibaut received his PhD in computational statistics from Oxford. Thibaut's research interests include the application of machine learning techniques to behavioral datasets for identification, as well as adversarial machine learning.
Josiane spent the last three years as a data analyst in industry after completing her BComm at McGill University. She is currently pursuing an MSc in Business Analytics at Imperial College London. Her research focuses on fairness and transparency in algorithmic decision-making.
Ali has an MSc in high-energy physics jointly from the University of Southampton and CERN. His research focuses on quantifying the privacy of human behavioral datasets, as well as studying the impact of AI and algorithmic decision-making on society.
Andrea received his MSc in mathematical logic from the University of Turin. His research interests include differential privacy, determining vulnerabilities in data-release systems, and designing privacy-preserving mechanisms.
Florimond received his MSc in applied mathematics from UCLouvain. His research interests include obfuscation and privacy risks in Web search data.
Axel holds master's degrees in computer science from Imperial College London and ENSEEIHT (France). He previously worked for Société Générale CIB. Axel leads the development of the OPAL project and is interested in privacy-preserving architectures.
Luc has an MSc in computer science from the École Normale Supérieure de Lyon. His research focuses on re-identification, privacy, and anonymity, as well as large-scale analysis of human behaviour.
Arnaud holds master's degrees in mathematics from Paris VI, Paris XI, and École Centrale Paris. His research interests include biometrics and profiling.
Assistant (if urgent): Fay Miller, +44 20 7594 8612
We are located in the Data Science Institute, in the William Penney Laboratory. The best entry point is via Exhibition Road, through the Business School (see map below). From there, just take the stairs towards the outdoor court. Enter the outdoor corridor after the court, and the institute will be on your right (please press the Data Science intercom button for access).
Please address mail to:
Department of Computing
Imperial College London
180 Queen's Gate
London SW7 2AZ