Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
Digital Deep Fakes
The media has recently labeled manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.
What We're Reading: Europe
June 2019 A round-up of the most important developments in the EU Data Protection world Enforcement The Italian DPA levied a €2,000,000 fine against a telemarketing company and its call-center operations conducted by a de facto “sub-contractor” in Albania for creating contact lists, calling people, and sharing their telephone numbers with a third party (their client) […]
Ethical and Privacy Protective Academic Research and Corporate Data
Is edtech helping or hindering student education? What effect does social media have on elections? What types of user interfaces help users manage privacy settings? Can the data collected by wearables inform health care? In almost every area of science, academic researchers are seeking access to personal data held by companies to advance their work. […]
NAI’s 2020 Code of Conduct Expands Self-Regulation for Ad Tech Providers
By Christy Harris, Stacey Gray, and Meredith Richards As debates over the shape of federal privacy legislation in the United States continue, online advertising remains a key focus of scrutiny in the US Congress, with its recent hearing on digital advertising and data privacy. Amidst these debates, the Network Advertising Initiative (NAI), the leading self-regulatory body […]
The Israel Tech Policy Institute: A Discussion with Limor Shmerling Magazanik
While Israel’s image as the “Start-up Nation” is well known in tech circles, the country has lacked a central organization capable of promoting the same level of thought leadership on tech policy and privacy issues. The launch of the Israel Tech Policy Institute (ITPI) in June 2018 ensured that this is no longer the […]
The Future of Ad Tech: A Discussion with FPF's Stacey Gray
Almost everyone has had a similar experience: visiting a website to shop for a product and then having an advertisement for that product “follow” them around the internet. Most free content today, from social media to news, is funded by ads. In order to deliver those ads and measure their effectiveness, companies today rely heavily […]
Privacy Book Club: Archive
The FPF Privacy Book Club provides members with the opportunity to read a wide range of books — privacy, data, ethics, academic works, and other important data-related issues — and have an open discussion of the selected literature. Archive: Previous Discussions Book Discussion 1: Privacy’s Blueprint: The Battle to Control the Design of New Technologies by Professor […]