This paper outlines the spectrum of AI technology, from rules-based and symbolic AI to advanced, developing forms of neural networks, and seeks to put them in the context of other sciences and disciplines, as well as emphasize the importance of security, user interface, and other design factors.
WASHINGTON, D.C. – The Future of Privacy Forum (FPF) today released a new infographic, Youth Privacy and Data Protection 101, which provides an overview of the opportunities and risks for kids online, along with potential protection strategies. It also features young people’s voices from around the world on their preferences and attitudes toward privacy. “We […]
International data flows have been top of mind over the past year for digital rights advocates, companies, and regulators, particularly international transfers following the Schrems II judgment issued by the Court of Justice of the EU last July. As data protection authorities assess how to use technical safeguards and contractual measures to support data flows […]
UPDATED August 3, 2021. FPF released the white paper, The Spectrum of AI: Companion to the FPF AI Infographic, to provide additional detail and analysis supporting use of the infographic as an educational resource for policymakers and regulators. FPF has just completed its newest infographic educational tool, The Spectrum of Artificial Intelligence. AI is the […]
Strong Data Encryption Protects Everyone: FPF Infographic Details Crypto Benefits for Individuals, Enterprises, and Government Officials
Today, the Future of Privacy Forum released a new tool: the interactive visual guide “Strong Data Encryption Protects Everyone.” The infographic illustrates how strong encryption protects individuals, enterprises, and the government. FPF’s guide also highlights key risks that arise when crypto safeguards are undermined – risks that can expose sensitive health and financial records, undermine the security […]
How is location data generated from mobile devices, who gets access to it, and how? As debates continue over companies and public health authorities using device data to address the current global pandemic, it is more important than ever for policymakers and regulators to understand the practical basics of how mobile operating systems work, how […]
Personal data – used lawfully, fairly, and transparently – is central to helping organizations achieve their missions. Today, Boards of Directors, CEOs, policymakers, and others need to understand the wide range of data inputs, the broad scope of risks and benefits, and how privacy and ethics are at the center of an organization’s ability to fulfill […]
FPF Releases Understanding Facial Detection, Characterization, and Recognition Technologies and Privacy Principles for Facial Recognition Technology in Commercial Applications
These resources will help businesses and policymakers better understand and evaluate the growing use of face-based biometric technology systems when used for consumer applications. Facial recognition technology can help users organize and label photos, improve online services for visually impaired users, and help stores and stadiums better serve customers. At the same time, the technology often involves the collection and use of sensitive biometric data, requiring careful assessment of the data protection issues raised. Understanding the technology and building trust are necessary to maximize the benefits and minimize the risks.
The Best Practices provide a policy framework for the collection, protection, sharing, and use of Genetic Data generated by consumer genetic testing services. These services are commonly offered to consumers for testing and interpretation related to ancestry, health, nutrition, wellness, genetic relatedness, lifestyle, and other purposes.
Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also create valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks, and the fallibility of human decision-making.