FPF Archive

Ten Questions on AI Risk

June 12, 2020 | Brenda Leong

Artificial intelligence and machine learning (AI/ML) generate significant value when used responsibly – and are the subject of growing investment for exactly these reasons. But AI/ML can also amplify organizations’ exposure to potential vulnerabilities, ranging from fairness and security issues to regulatory fines and reputational harm.

Artificial Intelligence and the COVID-19 Pandemic

May 7, 2020 | Dr. Sara Jordan

By Brenda Leong and Dr. Sara Jordan. Machine learning-based technologies are playing a substantial role in the response to the COVID-19 pandemic. Experts are using machine learning to study the virus, test potential treatments, diagnose individuals, analyze the public health impacts, and more. Below, we describe some of the leading efforts and identify data protection […]

FPF Submits Written Statement to the U.S. House Committee on Financial Services Task Force on AI

February 13, 2020 | Marianne Varkiani

This week, Future of Privacy Forum (FPF) Senior Counsel and Director of AI & Ethics Brenda Leong submitted a written statement on the use of artificial intelligence and machine learning-based applications in financial products and services. Addressed to the House Committee on Financial Services Task Force on Artificial Intelligence, the statement explores how to protect […]

Takeaways from the Understanding Machine Learning Masterclass

January 24, 2020 | Rob van Eijk

Yesterday, the Future of Privacy Forum provided bespoke training on machine learning as a side event at the Computers, Privacy and Data Protection Conference (CPDP2020) in Brussels. The Understanding Machine Learning masterclass is a training aimed at policymakers, law scholars, social scientists, and others who want to more deeply understand the data-driven technologies that are front of mind for data protection […]

New White Paper Provides Guidance on Embedding Data Protection Principles in Machine Learning

December 19, 2019 | Marianne Varkiani

Immuta and the Future of Privacy Forum (FPF) today released a working white paper, Data Protection by Process: How to Operationalise Data Protection by Design for Machine Learning, that provides guidance on embedding data protection principles within the life cycle of a machine learning model. Data Protection by Design (DPbD) is a core data protection requirement […]

FPF Receives Grant To Design Ethical Review Process for Research Access to Corporate Data

October 15, 2019 | Brenda Leong

Future of Privacy Forum (FPF) has received a grant to create an independent party of experts for an ethical review process that can provide trusted vetting of corporate-academic research projects. FPF will establish a pool of respected reviewers to operate as a standalone, on-demand review board to evaluate research uses of personal data and create a set of transparent policies and processes to be applied to such reviews.

What is 5G Cell Technology? How Will It Affect Me?

September 17, 2019 | Brenda Leong

The leap from 3G to 4G technology brought with it faster data transfer speeds, which supported widespread adoption of data cloud and streaming services, video conferencing, and Internet of Things devices such as digital home assistants and smartwatches. 5G technology has the potential to enable another wave of smart devices: always connected and always communicating to provide faster, more personalized services.

Digital Deep Fakes

August 15, 2019 | Brenda Leong

The media has recently labeled manipulated videos of people "deepfakes," a portmanteau of "deep learning" and "fake," on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what are lumped together as deepfakes actually differ depending on the particular technology used. So while the example videos were all doctored in some way, they were not all altered using the same technological tools, and the risks they pose – particularly as to being identifiable as fake – may vary.

Understanding Artificial Intelligence and Machine Learning

May 20, 2019 | Brenda Leong

The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech; and Dr. Oliver Grau, Chair of ACM’s Europe Technology Policy Committee, Intel Automated Driving Group, […]