Publicly releasing sensitive information is risky. In 1997, Latanya Sweeney showed that seemingly anonymous medical data could be linked to an actual person: using full date of birth, 5-digit ZIP code, and gender, she uncovered the health information of William Weld, the former governor of Massachusetts. In a new study, Sweeney analyzes the data available in the Personal Genome Project (PGP) and shows once again that many people can be re-identified by using date of birth, ZIP code, and gender when other data, such as a voter registration list, is available.
Sweeney’s work is important, but we don’t think it should be considered an indictment of de-identification. The cases so often cited as proof that de-identification doesn’t work – the AOL search data release, the Netflix Prize, the Weld example, and the PGP data – are all examples of barely or very poorly de-identified data. De-identification experts do NOT consider a publicly disclosed database containing full date of birth, 5-digit ZIP code, and gender to be de-identified. In fact, those three data points divide the US population into over 3 billion unique combinations: full date of birth alone divides a population into over 36 thousand groups, and ZIP codes further divide the US population into over 43 thousand groups. Publicly releasing a database with such a large number of unique combinations allows attackers to link in additional databases and gives them all the time in the world to examine the data. Thus, public disclosure greatly increases the risk of identifying individuals from a database.
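The arithmetic behind the "3 billion combinations" figure can be sketched in a few lines. The population and ZIP counts below are rough, assumed round numbers for illustration, not census data:

```python
# Back-of-the-envelope sketch of why {date of birth, ZIP, gender} is so
# identifying. All figures are rough assumptions for illustration.

US_POPULATION = 330_000_000   # assumed, approximate
BIRTH_DATES = 365 * 100       # ~36,500 distinct full dates of birth (~100 years)
ZIP_CODES = 43_000            # ~43,000 5-digit ZIP codes in use
GENDERS = 2

combinations = BIRTH_DATES * ZIP_CODES * GENDERS
print(f"possible combinations: {combinations:,}")  # ~3.1 billion

# Far fewer than one person per combination on average, so most
# combinations that actually occur are unique to a single individual.
people_per_combination = US_POPULATION / combinations
print(f"average people per combination: {people_per_combination:.3f}")
```

Since the average occupancy of each combination is well below one, a single match against an auxiliary database (like a voter roll) very often pins down one specific person.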
Sweeney’s study shows the importance of very strong de-identification practices when data is disclosed publicly. With public data, organizations should use very strong de-identification techniques, such as the Privacy Analytics Risk Assessment Tool developed by Dr. Khaled El Emam or differential privacy as proposed by Dr. Cynthia Dwork.
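To make the differential privacy reference concrete, here is a minimal sketch of its best-known building block, the Laplace mechanism, applied to a count query. The diagnosis scenario and the `dp_count` name are hypothetical examples, not part of Sweeney's or Dwork's published code:

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace
    mechanism. A counting query has sensitivity 1 (adding or removing one
    person changes it by at most 1), so noise is drawn from Laplace(0, 1/epsilon)."""
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverting its CDF; random.random() is in [0, 1).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical example: number of patients with a given diagnosis.
random.seed(0)  # seeded only so the sketch is reproducible
noisy = dp_count(true_count=128, epsilon=0.5)
print(noisy)
```

Smaller values of `epsilon` add more noise and give a stronger privacy guarantee at the cost of accuracy, which is exactly the utility/privacy trade-off discussed below.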
For nonpublic databases, however, strong de-identification techniques may not strike the right balance between data utility and privacy. When nonpublic databases are protected by both technical and administrative controls, reasonable de-identification techniques, as opposed to very strong ones, may be appropriate: attackers do not have unlimited time to attempt to break the technical de-identification protections, third-party data is not available for linkage, and legal commitments are in place. Data breaches can of course occur, but we need to recognize the very different status of protected versus unprotected data and should appreciate the range of protections that can support a de-identification promise.
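One common "reasonable" technique in this setting is generalizing the quasi-identifiers themselves, e.g. keeping only year of birth and a 3-digit ZIP prefix. This is an illustrative sketch only; the record layout and field names are hypothetical, and real deployments would pair generalization with the access controls described above:

```python
# Illustrative generalization of quasi-identifiers: coarsen full date of
# birth to birth year and a 5-digit ZIP to its 3-digit prefix, shrinking
# the number of distinct combinations by several orders of magnitude.

def generalize(record: dict) -> dict:
    return {
        "birth_year": record["date_of_birth"][:4],  # "1945-07-31" -> "1945"
        "zip3": record["zip"][:3],                  # "02138" -> "021"
        "gender": record["gender"],
    }

rec = {"date_of_birth": "1945-07-31", "zip": "02138", "gender": "M"}
print(generalize(rec))  # {'birth_year': '1945', 'zip3': '021', 'gender': 'M'}
```

Coarser buckets mean each combination covers many more people, which is what makes re-identification by linkage far harder even if the data were exposed.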
FPF staff are conducting research exploring the different risk profiles of nonpublic and publicly released databases, and the relevant best practices for “pretty good” de-identification of restricted databases. Please contact us if you are interested.