FPF Release: The Privacy Expert’s Guide to AI And Machine Learning


Today, FPF announces the release of The Privacy Expert’s Guide to AI and Machine Learning. This guide explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services.

Contents Include:

  • Algorithms
  • AI: General and Narrow
  • Machine Learning: Supervised, Unsupervised, Reinforcement
    • Regression, Classification, Decision Trees, Clustering
  • Neural Networks
    • Deep Learning


Advanced algorithms, machine learning (ML), and artificial intelligence (AI) are appearing across digital and technology sectors from healthcare to financial institutions, and in contexts ranging from voice-activated digital assistants and traffic routing to identifying at-risk students and generating purchase recommendations on various online platforms. Embedded in new technologies like autonomous cars and smartphones to enable cutting-edge features, AI is equally being applied to established industries such as agriculture and telecom to increase accuracy and efficiency. Moving forward, machine learning is likely to be the foundation of many of the products and services in our daily lives, becoming unremarkable in much the same way that electricity faded from novelty to background during the industrialization of modern life 100 years ago.

Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.

For privacy experts, AI is more than just Big Data on a larger scale. Artificial Intelligence is differentiated by its interactive qualities – systems that collect new data in real time via sensory inputs (touchscreens, voice, video, or camera inputs) and adapt their responses and subsequent functions based on those inputs. The unique features of AI and ML include not just big data’s defining characteristic of tremendous amounts of data, but also the additional uses of that data and, most importantly, the multi-layered processing models developed to harness and operationalize it.

AI-driven applications offer beneficial services and research opportunities, but pose potential harms to individuals and groups when not implemented with a clear focus on protecting individual rights and personal information. The scope of impact of these systems makes it critical that associated privacy concerns are addressed early in the design cycle, as lock-in effects make it more difficult to modify harmful design choices later. The design must also include ongoing monitoring and review, since these systems are built to adapt and change over time. Rigorous privacy reviews must occur for existing systems as well, because design decisions entrenched in current systems shape future updates built upon those models.
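The adaptive quality described above – a system that updates its behavior as new inputs arrive, rather than being trained once on a fixed dataset – can be sketched with a minimal online-learning loop. This is a hypothetical illustration for non-programmers, not an excerpt from the guide; the data stream and learning rate are invented for the example.

```python
def update(weight, bias, x, y, lr=0.01):
    """One online gradient step for a simple linear model: y ≈ weight * x + bias.

    The model's parameters shift slightly toward whatever the newest
    observation suggests, so its future predictions depend on the data
    it has already seen.
    """
    prediction = weight * x + bias
    error = prediction - y
    weight -= lr * error * x   # adjust slope toward the new observation
    bias -= lr * error         # adjust intercept toward the new observation
    return weight, bias

# A stream of (input, observed outcome) pairs arriving over time,
# standing in for real-time sensory inputs.
stream = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]

weight, bias = 0.0, 0.0
for x, y in stream:
    weight, bias = update(weight, bias, x, y)  # the model adapts after every input
```

Because each update depends on the data already processed, two copies of the same system exposed to different users will diverge over time – which is precisely why the guide stresses ongoing monitoring rather than a one-time design review.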

As AI and ML programs are applied across new and existing industries, platforms, and applications, policymakers and corporate privacy officers will want to ensure that individuals are treated with respect and dignity, and retain the awareness, discretion, and control necessary to manage their own information.

Read the full guide here.