Connecting Experts to Make Privacy-Enhancing Tech and AI Work for Everyone
The Future of Privacy Forum (FPF) launched its Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics on Tuesday, July 9th. The RCN supports the Biden-Harris Administration’s commitments to privacy, equity, and safety articulated in the administration’s Executive Order on Artificial Intelligence (AI).
Industry experts, policymakers, civil society, and academics met to discuss the possibilities afforded by Privacy Enhancing Technologies (PETs), the inherent regulatory challenges, and how PETs interact with rapidly developing AI systems. FPF experts led participants in a workshop-style virtual meeting to direct and inform the RCN’s next three years of work. Later that day, senior representatives from companies, government, civil society, and academia met at the Eisenhower Executive Office Building to discuss how PETs can be used ethically, equitably, and responsibly. Among the major themes:
- Privacy Enhancing Tech can support socially important data-driven research while protecting sensitive personal info;
- In some contexts, there are hard questions about how to implement PETs while preserving data that is crucial for assessing and combating bias, especially when it comes to AI decision-making systems;
- Greater clarity about how regulators apply data protection laws to information subjected to PETs safeguards could increase the use and effectiveness of Privacy Enhancing Tech;
- Analysis of existing PETs implementations can yield important insights into the opportunities and challenges of particular tech and approaches.
Virtual Kickoff
FPF hosted a Virtual Kickoff event where over 40 global experts helped shape the RCN’s work for the next three years. There were three main areas of discussion: First, how can we broadly define a PET while still having a clear scope? Second, what can we learn from the opportunities and challenges encountered by existing PETs implementations? Third, what are the most important requests for policymakers?
Here’s what the experts had to say:
Broadly Defining PETs
Deciding what is and isn’t a PET is essential for making any recommendations for their use, but forming a definitive list is inherently fraught with complexity and counterexamples. Some participants suggested that building a framework and a series of questions to ask about a given use case and its applied technology could be a helpful way forward. Participants also noted that usability is essential in defining a PET—without understanding and building for end users, we risk PETs losing their intended value. Relatedly, participants observed that this is sociotechnical work and emphasized the need to consider the human systems that attach to these technologies.
PETs Possibilities
Participants identified many areas of opportunity for PETs usage, such as the social sciences, medical research, credential verification, AI model training, behavioral advertising, and education. At the same time, there are several known issues, including the tradeoff between privacy and data utility, a lack of policy clarity and economic incentives to use PETs, computational overhead, ethical considerations, and, for some, a lack of trust in the technologies. Experts advised that for more people to use PETs, the tools must become more accessible and new users must receive additional training and support. Participants identified AI as a contributor to both the opportunities and the challenges, while agreeing that AI technologies will be a key part of the PETs landscape moving forward.
Policy Asks for Regulators
The most frequent request was for more regulatory clarity around PETs. For example, experts wanted to know what legal and technical obligations organizations have when using PETs, what regulators need to see to support the development of PETs as a mechanism for meeting data minimization and other requirements, and how de-identification and anonymization are legally defined when PETs are applied. While some suggested regulators needed specific use cases to make such determinations, others indicated that no one wants to “go first” and suggested that general use cases representing common PETs applications could be instructive. Regardless of how clarity is achieved, experts want lawmakers and regulators to provide specific measures by which organizations can comply with various legal regimes, accurately estimate risk, and make informed decisions about PETs deployment.
A White House Roundtable Event
The Roundtable meeting, hosted by the White House Office of Science and Technology Policy at the Eisenhower Executive Office Building’s ornate Secretary of War Suite, marked the beginning of a collaborative effort to advance Privacy Enhancing Technologies and their use in developing more ethical, fair, and representative AI. The meeting commenced with an overview of the project’s goals and alignment with the administration’s agenda of fairness, safety, and privacy protection. Hal Finkel, Program Manager for Computer Science and Advanced Scientific Computing Research at the Department of Energy, and Greg Hager, Head of the Directorate for Computer and Information Science and Engineering at the National Science Foundation, expressed their agencies’ commitment to ensuring technology benefits every member of the public, emphasizing the critical role of PETs in maintaining data privacy, especially in AI applications that require extensive data collection.
Participants discussed the global momentum behind PETs, driven by new data protection laws at the local through international levels. They highlighted the necessity of creating robust governance frameworks alongside technological innovations to ensure ethical use, and they articulated the complexities of studying AI’s societal impacts, particularly where vulnerable populations are involved.
Artificial Intelligence
The group also dove into some of the challenges and opportunities posed by foundation models: machine unlearning, balancing privacy with utility in personalized assistants, and identity/personhood verification. These issues underscore the need for advanced PETs that can adapt to evolving AI capabilities. Several people shared practical insights from the deployment of PETs in large-scale projects, such as the U.S. Census, conveying the importance of starting with a clear use case and giving PETs teams equal footing to succeed.
Specific opportunities for PETs in AI system testing were outlined, such as enabling organizations to disaggregate existing data internally and facilitating private measurement. Challenges included the need to relate metrics to life outcomes without extensive data sharing and understanding the impact of AI systems on individuals. Participants noted coordination challenges in setting up technical elements at this early stage and the gap from theory to practice.
Business Cases
Attendees also focused on the role of government in supporting business cases for PETs and the need for broader dissemination of PETs expertise beyond academia and big tech. Many people underscored the importance of public trust and consumer advocacy regarding PETs. As consumer sentiment shifts towards greater awareness of privacy issues, a unique opportunity exists to root efforts in democratic consensus and ensure that marginalized groups are adequately represented and protected.
The discussion also touched on the economic and practical feasibility of PETs, noting that deployment and operational costs can be prohibitive. Several people reaffirmed the need for public trust, highlighting that consumers are increasingly aware of what is at stake for their privacy and expect technologies to protect their data.
Supporting Additional Deployment
The meeting concluded with a focus on the FPF RCN’s future direction, emphasizing the need for ongoing collaboration to accelerate progress toward a privacy-preserving data-sharing and analytics ecosystem that advances democratic values. By bringing together a diverse group of experts, the RCN will foster convergence, address persistent differences, and support the broad deployment of PETs. Based on expert input such as this Roundtable, FPF will explore various mechanisms for deployment, including new technology, legal and regulatory frameworks, and standards and certifications, particularly in use cases that support privacy-preserving machine learning and the equitable use of AI by U.S. federal agencies.
As the meeting wrapped up, participants expressed optimism and a shared commitment to ongoing collaboration. The future of AI and privacy lies in the collective ability to innovate responsibly, govern wisely, and earn the public’s trust, paving the way for a new era of privacy-preserving technologies.
Next Steps for The RCN
FPF is gathering all of the participants’ feedback, suggestions, and ideas, and we’ll send out a roadmap for the first year shortly. The two main groups (Experts and Regulators) will meet regularly to provide substantive feedback on our progress. About 18 months after the RCN launch, we’ll bring both groups together in Washington, D.C., for an in-depth, in-person working session.
Want to Contribute?
If you’re a subject matter expert on PETs or use PETs and want to contribute to their future use and regulation, we want to hear from you!
Sign up here to be considered for the Expert or Regulator Sub-Groups. For questions about the RCN, email [email protected].
The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics is supported by the U.S. National Science Foundation under Award #2413978 and the U.S. Department of Energy, Office of Science under Award #DE-SC0024884.