Privacy Papers 2019
The winners of the 2019 Privacy Papers for Policymakers (PPPM) Award are: Antidiscriminatory Privacy by Ignacio N. Cofone, McGill University Faculty of Law Abstract Law often blocks sensitive personal information to prevent discrimination. It does so, however, without a […]
Legislative Resources
Recent work California’s Prop 24, the “California Privacy Rights Act,” Passed. What’s Next? Comparing Privacy Laws: GDPR v. CCPA Off to the Races for Enforcement of California’s Privacy Law California Privacy Legislation: A Timeline of Key Events Comparing the Washington Privacy Act to GDPR, CCPA, and More Tech Talk with the Regulators – Understanding Anonymization […]
COPPA Workshop Takeaways
On Monday, the Federal Trade Commission (FTC) held a public workshop focused on potential updates to the Children’s Online Privacy Protection Act (COPPA) rule. The workshop follows a July 25, 2019 notice of rule review and call for public comments regarding COPPA rule reform. The comment period remains open until December 9th. Senior FTC officials […]
CCPA 2.0? A New California Ballot Initiative is Introduced
Introduction On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy […]
New White Paper Explores Privacy and Security Risk to Machine Learning Systems
FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms WASHINGTON, D.C. – September 20, 2019 – The Future of Privacy Forum (FPF) released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and […]
Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
Digital Deep Fakes
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.
What We're Reading: Europe
June 2019 A round-up of the most important developments in the EU Data Protection world Enforcement The Italian DPA levied a €2,000,000 fine against a telemarketing company and its call-center operations conducted by a de facto “sub-contractor” in Albania for creating contact lists, calling people, and sharing their telephone numbers with a third party (their client) […]
Ethical and Privacy Protective Academic Research and Corporate Data
Is edtech helping or hindering student education? What effect does social media have on elections? What types of user interfaces help users manage privacy settings? Can the data collected by wearables inform health care? In almost every area of science, academic researchers are seeking access to personal data held by companies to advance their work. […]