Video-Based Vehicle Safety Systems: Lessons Learned from Commercial Fleets
Recent years have seen increasing deployment of onboard video-based safety systems in vehicles. While video technologies already appear in passenger vehicles, these systems are even more prominent in the commercial fleet industry (e.g., trucking, delivery, and rental vehicles). Fleet operators have substantial regulatory, financial, and other incentives to implement safety programs, […]
Confidential Computing And Privacy: Policy Implications of Trusted Execution Environments
Confidential computing leverages two key technologies: trusted execution environments (TEEs) and attestation services. The technology allows organizations to restrict access to personal information, intellectual property, or other sensitive or high-risk data through a secure, hardware-based enclave, the TEE. Economic sectors that have led the way in adopting confidential computing include financial services, healthcare, and […]
Generative AI for Organizational Use: Internal Policy Considerations
The Future of Privacy Forum (FPF) Center for Artificial Intelligence released an updated version of its Generative AI internal compliance document, Generative AI for Organizational Use: Internal Policy Considerations, with new content addressing organizations’ ongoing responsibilities, specific concerns (e.g., high-risk uses), and lessons drawn from recent regulatory enforcement related to these technologies. In 2023, […]
Best Practices for AI and Workplace Assessment Technologies
The Future of Privacy Forum, along with leading hiring and employment software developers ADP, Indeed, LinkedIn, and Workday, released Best Practices for AI and Workplace Assessment Technologies. The guide makes key recommendations for organizations as they develop, deploy, or increasingly rely on artificial intelligence (AI) tools in their hiring and employment decisions. Organizations are incorporating […]
The Spectrum of Artificial Intelligence Report & Infographic
The Spectrum of Artificial Intelligence – Companion to the FPF AI Infographic was updated in June 2023 to account for the development and use of advanced generative AI tools. In December 2020, FPF published the Spectrum of Artificial Intelligence – An Infographic Tool, designed to visually display the variety and complexity of Artificial Intelligence […]
Warning Signs: The Future of Privacy and Security in an Age of Machine Learning Report
FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
Nothing to Hide: Tools for Talking (and Listening) About Data Privacy for Integrated Data Systems Report
Data-driven and evidence-based social policy innovation can help governments serve communities better, smarter, and faster. Integrated Data Systems (IDS) use data that government agencies routinely collect in the normal course of delivering public services to shape local policy and practice. They can use data to evaluate the effectiveness of new initiatives or bridge gaps between public services and community providers.
The Privacy Expert’s Guide to AI and Machine Learning Report
Today, FPF announces the release of The Privacy Expert’s Guide to AI and Machine Learning. The guide explains the technological basics of AI and ML systems at a level useful for non-programmers and addresses key privacy challenges associated with implementing new and existing ML-based products and services.
Communicating about Data Privacy and Security Report
The ADRF Network is an evolving grassroots effort among researchers and organizations seeking to collaborate on improving access to administrative data and promoting its ethical use in social science research. As a supporter of evidence-based policymaking and research, FPF has been an integral part of the Network since its launch and has chaired the Network’s Data Privacy and Security Working Group since November 2017.
Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models Report
Beyond Explainability aims to provide a template for effectively managing the risks of machine learning models in practice, giving lawyers, compliance personnel, data scientists, and engineers a shared framework to safely create, deploy, and maintain ML systems, and enabling effective communication among these distinct organizational perspectives.