In their new paper, Andrea Gadotti, Florimond Houssiau, Meenatchi Sundaram Muthu Selva Annamalai, and Yves-Alexandre de Montjoye investigate the practical guarantees of Apple’s implementation of local differential privacy in iOS and macOS. They propose a new type of attack, called pool inference attacks, in which an adversary with access to a user’s obfuscated data defines pools of objects and exploits the user’s polarized behavior across multiple data collections to infer the user’s preferred pool. The results show that pool inference attacks are a concern for data protected by local differential privacy mechanisms with a large ε, such as Apple’s Count Mean Sketch mechanism, emphasizing the need for additional technical safeguards and for more research on how to apply local differential privacy across multiple collections.
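The intuition behind a pool inference attack can be conveyed with a minimal sketch. The toy mechanism below is *not* Apple’s Count Mean Sketch: it uses simple k-ary randomized response (a standard local DP mechanism), a made-up item universe, and two hypothetical pools, purely to illustrate how a large ε combined with repeated collections from a user with polarized behavior lets an observer guess the user’s preferred pool.

```python
import math
import random
from collections import Counter

# Hypothetical item universe and pools (assumptions for illustration only).
ITEMS = ["cat", "dog", "pizza", "sushi"]
POOLS = {"animals": {"cat", "dog"}, "food": {"pizza", "sushi"}}

def randomized_response(true_item, epsilon):
    """k-ary randomized response: report the true item with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other item.
    Each report individually satisfies eps-local differential privacy."""
    k = len(ITEMS)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_item
    return random.choice([i for i in ITEMS if i != true_item])

def infer_pool(reports):
    """Pool inference: guess the pool whose items appear most often
    across the user's obfuscated reports."""
    counts = Counter(reports)
    return max(POOLS, key=lambda pool: sum(counts[i] for i in POOLS[pool]))

if __name__ == "__main__":
    random.seed(0)
    # A user with polarized behavior: every true item comes from "food".
    true_items = [random.choice(["pizza", "sushi"]) for _ in range(100)]
    for eps in [0.5, 4.0]:  # small vs. large per-report privacy budget
        reports = [randomized_response(x, eps) for x in true_items]
        print(f"eps={eps}: inferred pool = {infer_pool(reports)}")
```

Even though each individual report is noisy, a large ε makes the true item dominate each report, so aggregating many collections reveals the preferred pool with high confidence; this is the polarized-behavior leakage the paper studies.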