Increased Surveillance is Not an Effective Response to Mass Violence
By Sara Collins and Anisha Reddy
This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crimes, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part […]
COPPA Workshop Takeaways
On Monday, the Federal Trade Commission (FTC) held a public workshop focused on potential updates to the Children’s Online Privacy Protection Act (COPPA) rule. The workshop follows a July 25, 2019 notice of rule review and call for public comments regarding COPPA rule reform. The comment period remains open until December 9th. Senior FTC officials […]
CCPA 2.0? A New California Ballot Initiative is Introduced
Introduction On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy […]
FTC should investigate app developers banned by Facebook – Statement by Future of Privacy Forum CEO
Future of Privacy Forum Calls on FTC to Investigate Apps That Misused Consumer Data WASHINGTON, DC – September 20, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding Facebook’s announcement that it has banned 400 developers from its app store: The FTC should quickly act against many of these app developers, since […]
New White Paper Explores Privacy and Security Risk to Machine Learning Systems
FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms WASHINGTON, D.C. – September 20, 2019 – The Future of Privacy Forum (FPF) released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and […]
Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
What is 5G Cell Technology? How Will It Affect Me?
The leap from 3G to 4G technology brought with it faster data transfer speeds, which supported widespread adoption of data cloud and streaming services, video conferencing, and Internet of Things devices such as digital home assistants and smartwatches. 5G technology has the potential to enable another wave of smart devices: always connected and always communicating to provide faster, more personalized services.
10 Reasons Why the GDPR Is the Opposite of a ‘Notice and Consent’ Type of Law
The piece below was originally published on Medium. For a version with humorous images, head to the original post. A ‘notice and consent’ privacy law puts the entire burden of privacy protection on the individual and then doesn’t really give them any choice. The GDPR does the opposite of this. There is so much […]
10th Annual Privacy Papers for Policymakers – Send Us Your Work!
The 10th Annual Privacy Papers for Policymakers awards have been announced. Register here to attend the event on February 6, 2020. We will open the submissions process for next year’s awards in fall 2020. Have you conducted privacy-related research that policymakers should know about? If so, we can help you get it in front of […]
Digital Deep Fakes
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.