FPF Celebrates 15 Years! Spring Social Marks Board Transition as Data Protection Leaders Toast to FPF’s Success
Leaders in Data Protection Take Center Stage at FPF’s Spring Social
The week started with FPF’s 15th Anniversary Spring Social, where FPF CEO Jules Polonetsky thanked FPF’s Board Chair and Founder Chris Wolf, who served for 15 years, and welcomed FPF’s new Board Chair, Alan Raul. Three leading data protection regulators lauded FPF’s effectiveness in supporting their work in the U.S. and globally. Remarks were delivered by Denise Wong (Deputy Commissioner at the Personal Data Protection Commission of Singapore), Wojciech Wiewiorowski (European Data Protection Supervisor), and Rebecca Kelly Slaughter (Commissioner at the Federal Trade Commission).
FPF’s Board, Advisory Board, and supporters were joined at the event by senior White House staff, leaders at the Federal Trade Commission, Commerce Department, House and Senate staff, state legislators and enforcement agency staff, and representatives of more than a dozen data protection authorities globally.
FPF Activities during IAPP GPS Engage Stakeholders, Launch India Focus, and Highlight Staff Experts
As in years past, FPF took part in the 2024 IAPP Global Privacy Summit in Washington, D.C., which brings together thousands of privacy professionals, including many of FPF’s closest stakeholders. FPF hosted a week of exciting events, and FPF experts participated in GPS panel sessions. And an ‘I Love Privacy’ thank you to those who visited FPF’s booth for our latest expert resources on everything from youth privacy to ad tech to AI!
FPF Hosts India Roundtable
FPF hosted an all-star group of thought leaders on India’s data protection and digital policy landscape at its Washington, D.C., office. The roundtable featured Rahul Matthan, Partner at Trilegal in Bengaluru; Monika Tomczak-Gorlikowska, Chief Privacy Officer at Prosus; and FPF’s Senior Fellow for India, Malavika Raghavan. This unique member event showcased our experts as they discussed in detail the state of play of digital policymaking in India, focusing on the next stages of implementation of the new Digital Personal Data Protection Act (DPDPA). FPF has expanded our on-the-ground work in India and is working with our APAC Council members to plan future activities.
FPF also hosted leadership breakfasts and lunches for our senior stakeholders, as well as informal discussions and receptions with our legislative and health teams.
FPF Experts at GPS Workshops and Panel Sessions
Meanwhile, FPF staff experts participated in nine GPS workshops and sessions, including:
FPF’s Keir Lamont, Tatiana Rice, and Jordan Francis hosted a workshop on ‘The State of U.S. Privacy Law,’ where expert panelists brought participants up to speed on the latest developments in U.S. state privacy law and identified nuances and differences among the recently passed data privacy laws.
Bailey Sanchez and Jim Siegl, CIPT, CIPM, participated in a workshop on ‘The State of Play: Compliance with Kids and Teens Privacy Law,’ where attendees learned about the core obligations of recently passed laws, tips for maintaining an advertising program that aligns with kids’ advertising requirements, an overview of age assurance, and the broader kids and teens privacy policy landscape.
Bailey Sanchez took part in ‘The State of Play: An Overview of Kids and Teens Privacy in the U.S.’ which covered recent and expected developments in children’s privacy and online safety legislation in the United States, including considerations that go into crafting legislation at the state level, what the FTC’s priorities are in this space, how companies are working with policymakers to pass legislation that will have positive outcomes for kids, and civil society’s role. Bailey was joined by Senator James Maroney from Connecticut’s 14th District and leading experts from the FTC, Google, and Hogan Lovells.
Stacey Gray moderated ‘How to Evaluate Novel Advertising Solutions With Privacy Enhancing Technologies,’ which featured an FPF discussion draft of a detailed rubric that experts, advocates, and policymakers can use to objectively compare different novel advertising systems, from browser-based APIs to data clean rooms. The expert panel explored the rubric’s elements and how privacy professionals can use it to conduct informed evaluations of emerging advertising systems.
Keir Lamont took part in the panel ‘Federal Privacy Legislation: Obstacles and Opportunities’ to discuss the state of legislative efforts in Congress, breaking down the substance of current proposals and the political and policy disagreements that have become barriers to enactment. Panelists also covered the need for federal privacy legislation, the relationship between current proposals and existing laws, the potential impact on emerging technologies such as artificial intelligence, and the debates over preemption, enforcement mechanisms, and other topics.
Zoe Strickland participated in ‘The Role that Consumer Consent Plays in the Future of Trusted Commerce,’ a discussion on the evolving landscape of consumer trust in commerce, specifically around the crucial theme of consumer consent and its implications not only for consumers but also for businesses and regulatory bodies.
Aaron Massey sat on an expert panel at ‘Mad Men and the Metaverse: Opportunities and Challenges in Immersive Advertising’ where panelists presented a provocative, future-oriented discussion of the possibilities of immersive advertising tempered by lessons learned from the past and the current advertising policy and compliance landscape.
FPF CEO Jules Polonetsky moderated ‘PETS: How Can We Drive Progress? National Strategies and Regulator Perspectives,’ which convened leaders of national PETs strategies, leading regulators, and experts to assess the barriers to and potential for rapid adoption of PETs across a range of use cases.
Malavika Raghavan participated in ‘The Monsoon Has Arrived. India’s New Digital Personal Data Protection Act (DPDPA),’ which focused on India’s new Digital Personal Data Protection Act, adopted in August 2023. Speakers discussed the status of the DPDPA’s implementation as of April 2024, how the new act impacts global privacy agendas, when it is expected to enter into full effect, and what the stages of operational implementation will be.
We hope you enjoyed this year’s IAPP Global Privacy Summit as much as we did! If you missed us at our booth, visit FPF.org for all our reports, publications, and infographics. Follow us on Twitter/X and LinkedIn, and subscribe to our newsletter for the latest.
Consumer Acceptance, Transparency, and Unique Privacy Considerations at the Forefront of FPF’s Discussion on Privacy and Vehicle Safety Systems
On March 21, the Future of Privacy Forum (FPF) hosted a conversation on “Driving the Conversation on Privacy and Vehicle Safety Systems” to discuss the future of certain technologies in vehicles. The panel discussion was moderated by Adonne Washington, FPF Policy Counsel for Data, Mobility, and Location, and included Hilary Cain (Senior Vice President for Policy at the Alliance for Automotive Innovation), Kristin Kingsley (Director of Program Development and Outreach at the Automotive Coalition for Traffic Safety), and William Wallace (Associate Director of Safety Policy at Consumer Reports). The event followed the launch of FPF’s new report, “Vehicle Safety Systems: Privacy Risks and Recommendations,” which focuses on Advanced Driver Assistance Systems (ADAS), Driver Monitoring Systems (DMS), and Impairment Detection Technologies.
FPF CEO Jules Polonetsky provided welcome remarks ahead of the panel discussion and highlighted how personal data and privacy are and will be implicated in new automotive technologies. In framing the discussion, he explained how many of these technologies, particularly impairment detection technologies, are so early in their development that there is more room for industry, civil society, regulators, policymakers, and other stakeholders to build out a consensus-driven framework that will guide their implementation and serve as a model for other emerging safety technologies.
Washington set the stage for FPF’s report and the panel discussion, noting the January 2024 Advance Notice of Proposed Rulemaking (ANPRM) on Advanced Impaired Driving Prevention Technologies from the National Highway Traffic Safety Administration (NHTSA). The ANPRM stems from the Bipartisan Infrastructure Law passed in 2021. FPF continues to investigate the privacy implications of impairment detection systems and other driver safety systems, such as those used for lane-keeping assist functions or to detect passengers in a vehicle. You can find FPF’s analysis and more on Data, Mobility, and Location on the website.
Most of the day’s discussion focused on the landscape of new driver safety automotive technologies, including driver impairment technologies such as Alcohol Detection Systems, and the challenge of gaining consumer acceptance of these technologies. Panelists highlighted the unique privacy challenges vehicles pose and the transparency and consent mechanisms needed to ensure individuals can exercise control over their data. Unlike more personal electronics, cars are typically owned by one person and operated by others. Cain noted that privacy is often considered in the context of the vehicle owner, but this model does not address the privacy of passengers or other drivers, including individuals who become owners later in the vehicle’s lifetime, such as through the secondary market.
Kingsley, who works with NHTSA through a public-private partnership on cooperative research to create the Driver Alcohol Detection System for Safety (DADSS), said that the project has been focused on privacy since its earliest stages, noting that “the only way we’re going to be able to deploy broadly is with consumer acceptance, and privacy is at the forefront of that.” Even though NHTSA has limited authority to regulate data privacy, she said that any NHTSA rule has to be practicable and is closely tied to considerations around individual privacy.
Cain underscored the importance of consumer trust in the technology rollout and highlighted the report’s findings that consumer trust in the automotive industry is still ahead of other industries. In light of subsequent news coverage about vehicle data collection practices, Cain affirmed the automotive industry’s commitment to the Alliance for Automotive Innovation’s 2014 Consumer Privacy Protection Principles, which require heightened protection for biometric data and driver behavior data, including information about driver impairment. Cain also noted that since the rollout of the Consumer Privacy Protection Principles, several states have enacted comprehensive state privacy laws and vehicle-specific privacy laws, which could be integrated into the principles. However, she said, “what we are begging for is a federal privacy law that would cover our industry along with every industry in the United States.”
All three panelists emphasized that the technologies to determine driver impairment are still nascent and unfamiliar to consumers. Wallace, discussing a camera-based approach to detecting driver impairment, noted that while consumers may generally like their vehicles, “the idea of having an in-cabin camera is very new, and we don’t fully know what the reaction will be.” However, he also argued that the potential to save ten thousand lives per year requires the industry to think about how best to implement the technology rather than whether to implement it at all. Wallace reiterated the importance of addressing automotive safety and consumer privacy, saying, “we are always looking at this through both lenses.”
To read the full report and the consumer survey findings, visit the FPF website, and be sure to watch Adonne Washington’s LinkedIn Live chat with CEO Jules Polonetsky on FPF’s YouTube Channel.
Alan Raul, Founder of Sidley Austin’s Privacy and Cybersecurity Law Practice, Elected FPF’s New Board President
FPF Founder and Board Chair Christopher Wolf steps down after 15 years of service
FPF is pleased to announce that Alan Raul, former Vice Chairman of the Privacy and Civil Liberties Oversight Board, has been elected to serve as President and Chair of the organization’s Board of Directors. Raul succeeds Christopher Wolf, FPF’s founder and founding Board President, who is stepping down after a foundational and impactful tenure spanning 15 years.
Wolf, a pioneer in Internet and privacy law, is Senior Counsel Emeritus of Hogan Lovells’ top-ranked Privacy and Cybersecurity practice. As a leading attorney with the firm, he co-founded the practice and led its development for over a decade, advising on and shaping thinking about Internet free speech, hate speech, and the parameters of government access to stored information. Wolf will continue as a member of FPF’s Board of Directors through the end of this year before stepping down.
“In 2008, when I founded the Future of Privacy Forum, our vision was that it would be a place where we could advance the responsible use of data while respecting individual privacy,” Wolf said. “We believed that if dedicated technologists, policymakers, industry groups, and advocates focused on advancing privacy in a manner that businesses can achieve, we could strike a balance between consumer privacy and personalization that enables greater innovation for all.”
FPF flourished under Wolf’s guidance, becoming instrumental in steering collaborative and innovative efforts to address the complexity of the data-driven world. The organization regularly publishes substantive policy papers and reports tracking and analyzing data protection developments in jurisdictions worldwide. Since launching, FPF has opened offices in Europe, Tel Aviv, and the Asia-Pacific region and convened numerous international events, including the Brussels Privacy Symposium, now in its 7th year, and the first annual Japan Privacy Symposium.
Wolf’s dedication has not only set a high benchmark for leadership but also helped regulators, policymakers, and staff at data protection authorities better understand the technologies at the forefront of data protection law. FPF will honor and celebrate Wolf’s contributions to the privacy sector and to FPF during his tenure at its 2024 Advisory Board Meeting’s Opening Night Reception on June 5.
“In my experience in leading privacy and cybersecurity law and research, I’ve come to recognize the qualities that make a dedicated privacy trailblazer,” Wolf said. “Alan Raul shares my commitment to fostering a thriving, diverse privacy landscape that advances responsible data practices and technological innovation. His values align with the needs of FPF, and I am confident he will work tirelessly with integrity and dedication to build on the successes of recent years and take on new challenges.”
Raul has served on FPF’s board for eight years and is the founder and, for 25 years, the leader of Sidley Austin LLP’s highly ranked Privacy and Cybersecurity Law practice. He is currently Senior Counsel at Sidley. Raul brings a breadth of knowledge in global data protection and compliance programs, cybersecurity, artificial intelligence, national security, and Internet law. He is also a member of the Technology Litigation Advisory Committee of the U.S. Chamber of Commerce Litigation Center and a Lecturer in Law at Harvard Law School, where he teaches Digital Governance and Cybersecurity.
“I’m thrilled to take on this role and continue working to advance responsible data practices and safeguard individual privacy rights,” Raul said. “By leveraging my experience in advising global compliance programs and navigating complex regulatory landscapes, I hope I can contribute meaningful insights to the Board of Directors and effectively guide the direction of FPF’s work as we continue to grow globally as well as meet the new challenges and opportunities in the era of Artificial Intelligence.”
Olivier Sylvain and George Little also join FPF’s Board of Directors as new members. Sylvain is a Professor of Law at Fordham University and a Senior Policy Research Fellow at Columbia University’s Knight First Amendment Institute, where his research has focused on information and communications law and policy. Sylvain served as Senior Advisor to the Chair of the Federal Trade Commission from 2021 to 2023. Little is a partner at the Brunswick Group specializing in crisis communications, cybersecurity, reputational, and public affairs matters. Little co-chairs the firm’s Global Cybersecurity, Data & Privacy Practice, drawing on his experience working at the highest levels of the national security and defense community and in the private sector.
Sylvain and Little join the ranks of recently named board members, including Tom Moore, recently retired as AT&T’s chief privacy officer; Jane Horvath, partner at Gibson, Dunn & Crutcher, LLP and former Chief Privacy Officer of Apple; and Theodore Christakis, Professor of International, European and Digital Law at University Grenoble Alpes (France), Director of the Centre for International Security and European Law (CESICE), and Director of Research for Europe with the Cross-Border Data Forum. FPF’s distinguished new Directors join other privacy luminaries on our Board of Directors, namely Anita Allen, Debra Berlyn, Danielle Citron, Mary Culnan, David Hoffman, Agnes Bundy Scanlan, and Dale Skivington.
“It’s been a pleasure getting to work with Chris Wolf and seeing the vision we had for FPF as a hub for privacy education and research develop over the years and grow into the leading institution it is today,” said Jules Polonetsky, CEO of FPF. “I am confident in Alan’s ability to lead the board to greater heights and continue informing the organization’s future work.”
Composed of leaders from industry, academia, and civil society, FPF’s Board of Directors ensures that FPF’s work is expert-driven and independent of any single stakeholder.
About Future of Privacy Forum (FPF)
The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify risks, and develop appropriate protections. FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington, D.C., Brussels, Singapore, and Tel Aviv. Follow FPF on X and LinkedIn.
Examining Novel Advertising Solutions: A Proposed Risk-Utility Framework
The digital advertising industry is in the midst of a sea change. Around the world, privacy regulators have become far more critical of mainstream advertising business models. Both lawmakers and enforcers of existing laws are now more focused on strengthening individual privacy rights and specifically preventing many of the harms associated with the use of personal information in advertising. Meanwhile, large platforms such as Apple, Google, and Microsoft have taken significant steps in recent years to limit access to advertising-related data about their users through efforts like App Tracking Transparency (ATT), Intelligent Tracking Prevention (ITP), and an ongoing process to deprecate third-party cookies in Google Chrome. Each change has ripple effects throughout the economy, changing the way advertisers do business and often impacting other social values.
In reaction to these regulatory and platform pressures, businesses are actively seeking new tools and solutions to maintain identity and addressability, or to provide greater privacy safeguards, ideally (in their view) doing so while sustaining as much business utility as possible. Many solutions involve privacy-enhancing technologies (PETs), while others involve a significant shift in business models, such as a return to contextual advertising, the use of solely first-party data, or a shift to client-side processing.
The goal of this Risk-Utility Framework and its associated Background (“Advertising in the Age of Data Protection”) is to provide a comprehensive rubric for navigating the many tradeoffs inherent in the evolving digital advertising landscape and the technology it is built upon. We do not assign values to each aspect of utility, risk, or social impact, but rather aim to holistically identify the many factors relevant for a policymaker or privacy leader to evaluate the impact of a given digital advertising proposal, solution, or system.
FPF Statement on Vice President Harris’ announcement of the OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence
Following the groundbreaking White House Executive Order on AI last fall, which outlined ambitious goals to promote the safe, secure, and trustworthy use and development of AI systems, Vice President Harris today announced the Office of Management and Budget’s publication of a binding memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” which reflects agencies’ diligent efforts toward achieving this objective. This commitment is further highlighted by the National Telecommunications and Information Administration (NTIA)’s publication earlier this week of the “Artificial Intelligence Accountability Policy Report,” which details mechanisms to support the creation and adoption of trustworthy AI.
Although the OMB memorandum primarily focuses on the government’s use of AI, its influence on the private sector will be significant. This is due not only to the requirements for U.S. government vendors and procurement, but also to how this framework will create broadly applicable norms and standards for conducting impact assessments, mitigating bias, providing rights to individuals affected by AI systems that impact their rights and safety, and assessing data quality and data privacy in these systems.
“This is a pivotal moment for the development of AI standards when the public sector has a crucial role to play in setting norms for the assessment and procurement of AI systems. We are particularly enthused by the renewed commitment to bring clarity to the development of AI in the public sector and its national utilization. At FPF, we eagerly anticipate contributing to this crucial work through our evidence-based research on Artificial Intelligence.”
– Anne J. Flanagan, FPF Vice President for Artificial Intelligence
Youth Privacy in Immersive Technologies: Regulatory Guidance, Lessons Learned, and Remaining Uncertainties
As young people adopt immersive technologies like extended reality (XR) and virtual world applications, companies are expanding their presence in digital spaces, launching brand experiences, advertisements, and digital products. While virtual worlds may in some ways resemble traditional social media and gaming experiences, they may also collect more data and raise potential manipulation risks, particularly for vulnerable and impressionable young people.
This policy brief analyzes recent regulatory and self-regulatory actions and guidance related to youth privacy, safety, and advertising in immersive spaces, pulling out key lessons for organizations building experiences in virtual worlds.
Recent FTC Enforcement Actions and Guidance
The Federal Trade Commission (FTC) has shown a strong interest in using its consumer protection authority to bring enforcement actions against a wide range of digital companies for alleged “unfair and deceptive” practices, rule violations, and other unlawful conduct. The Commission has also issued several policy statements and guidance documents relevant to organizations building immersive technologies, touching on issues such as biometric data and advertising to children. It is clear the agency is thinking seriously about how its authority could apply in emerging sectors like AI, and organizations working on immersive technologies should take heed. Lessons from recent FTC privacy cases and guidance include:
The FTC interprets the Children’s Online Privacy Protection Act (COPPA)’s definition of “personal information” broadly, including data types that immersive technologies commonly collect, like eye tracking.
Immersive application providers must comply with COPPA if their application is “directed to children” or if there is “actual knowledge” children are accessing it.
Organizations should provide privacy policies and notices in a format appropriate for and consistent with the design elements of immersive experiences.
Organizations should take additional steps to be transparent about advertising practices.
Self-Regulatory Cases and Safe Harbor Guidance
Self-regulatory bodies also have an essential role in ensuring privacy and safety in child-directed applications and providing guidance to companies operating in the space. For example, organizations designated as COPPA Safe Harbors can guide companies toward compliant, developmentally appropriate, and privacy-protecting practices. Lessons from recent self-regulatory cases and Safe Harbor guidance include:
Advertising disclosures in immersive environments should be designed to be as clear and conspicuous as possible and provided in an age-appropriate manner.
Platforms that allow advertisements to children should ensure that developers, brands, and content creators have the necessary tools and guidance to clearly and conspicuously disclose the presence of advertising to children.
Implementing privacy by design and by default demonstrates to regulatory and self-regulatory bodies that an organization takes youth privacy seriously.
Privacy and advertising practices for teens should take into account the unique considerations relevant to teen privacy and safety, compared to child and adult guidance.
Organizations with a robust privacy culture that demonstrate good faith efforts to follow the law are more likely to be given the benefit of the doubt.
Remaining Areas of Uncertainty
Because immersive technologies are relatively new and evolve rapidly, much of the existing regulatory and self-regulatory guidance is pulled from other contexts. Therefore, questions remain about how regulations apply in immersive environments and how to operationalize best practices. These questions include:
How age-appropriate design principles will best fit into an immersive technology context, such as how best to ensure strong default privacy settings for underage users; the best methods for clarity and transparency regarding data practices notices and advertising disclosures; and whether an immersive experience should require unique, additional safeguards.
What discerning data practices novel data collection and analysis methods in the immersive technology space will require for safeguarding and use, such as what kinds of inferences are appropriate to make from body-based data, or to what extent avatars not derived from a child’s data are considered personal information.
How immersive technology impacts children and teens; more research is needed to understand whether certain kinds of experiences and privacy practices are harmful to children and teens, whether there are unique risks to children’s privacy and mental health, and how organizations, parents, schools, and other stakeholders can address potential issues.
New Report Explores Privacy Implications of Driver Safety Systems
Report Offers Recommendations for Organizations Developing, Implementing, and Regulating Technologies
Today, the Future of Privacy Forum (FPF) is releasing a new report explaining how safeguarding driver privacy and data protection will be critical to ensuring widespread acceptance of new safety technology in vehicles. This report comes as the National Highway Traffic Safety Administration (NHTSA) is in the process of establishing new requirements for safety technology that vehicle manufacturers will soon integrate into vehicles of the future.
FPF’s report explores the privacy implications of vehicle safety systems – including Advanced Driver Assistance Systems (ADAS) and Driver Monitoring Systems (DMS) – and impairment detection technologies, which use automated technology to enhance vehicle safety. In addition to core recommendations for public and private entities developing, implementing, and regulating these technologies, the report includes insights from a survey conducted with the Automotive Coalition for Traffic Safety, which gauges individuals’ attitudes toward the use of Vehicle Safety Systems and explores how to prioritize privacy.
“Vehicle safety systems can save lives and reduce injuries–but only if people use them. Policy makers and auto manufacturers must consider the privacy and data protection implications for all drivers when incorporating new technology into vehicles to bolster driver trust and adoption.”
The 2021 Infrastructure Investment and Jobs Act requires NHTSA to establish a new Federal Motor Vehicle Safety Standard surrounding impaired driving technology. In response, the report identifies five core recommendations for organizations developing, implementing, and regulating these technologies:
Regulators, technology developers, and technology deployers should ensure that privacy is a foundational principle for any Vehicle Safety System and should implement appropriate legal, policy, and technical safeguards when personal information is implicated.
Technology developers and technology deployers should de-identify data collected by Vehicle Safety Systems as appropriate.
Impairment-detection systems should be accurate, should be tested for potential bias, and should not produce false-positive results more often for people from underrepresented, marginalized, and multimarginalized communities. Well-defined standards for consistent deployment and alignment across the industry may be beneficial.
Driver acceptance should be promoted through transparency about Vehicle Safety Systems functions and operations, as well as the handling of personal data.
Regulators, technology developers, and technology deployers should identify and mitigate, to the extent possible, potential future harms to drivers, especially to people from underrepresented, marginalized, and multimarginalized communities.
The survey results informed the recommendations. Key findings revealed that many individuals value advanced vehicle safety technologies but worry about privacy risks, the accuracy of the technology, cost, and data transfers to third parties. Additionally, individuals indicated that they generally trust carmakers’ data practices more than those of online companies and the government, but worry about vehicle systems that collect information about occupant behaviors. Individuals want to incorporate these technologies for safety but need privacy and data protection practices like disclosure limits, encryption, on-car storage, and de-identification to trust these systems.
“Ensuring privacy protections in vehicles is necessary. Privacy protections can’t be considered at the end of the process when developing technology and shouldn’t be considered in a vacuum, but rather privacy should be continually considered in regard not only to every stage of the development pipeline but also to any unique risks for marginalized or multimarginalized individuals and communities.”
The report examines the strategies needed to protect consumer privacy when technologies, especially those that detect impairment, are included in vehicles. Washington underscored that policy leaders, regulators, and automakers should use the published resources to better understand drivers’ knowledge of data collection and safety systems in and around new and advanced vehicles.
FPF will also host a panel discussion and reception on the report. Learn more about the event here.
Privacy and the Rise of “Neurorights” in Latin America
Authors: Beth Do, Maria Badillo, Randy Cantz, Jameson Spivack
“Neurorights,” a set of proposed rights that specifically protect mental freedom and privacy, have captured the interest of many governments, scholars, and advocates. Nowhere is that more apparent than in Latin America, where several countries are actively seeking to enshrine these rights in law, and some even in their Constitutions.
The rapid global proliferation of neurotechnology—devices that can access mental states by decoding and modulating neural activity—has generated a large amount of consumer neurodata (also known as neural, brain, or cerebral data; brain information; mental activity; etc.). As most existing privacy laws do not separately or explicitly regulate neurodata—even though such data is normally covered by the broad definitions of “personal data” in such legislation—several governments and international bodies have begun to develop specific legal protections for this type of personal data.
This analysis focuses on current legislative efforts in Chile, Mexico, and Brazil, which are indicative of how far the conversation in Latin America has progressed. Other jurisdictions, such as the United States, Israel, South Korea, and Europe, are also in the nascent stages of discussing protections for mental privacy. As neurotechnologies continue to evolve, industry and regulatory bodies alike should look to Latin America for developing trends and best practices.
1. What is neurotechnology?
Neurotechnology is an umbrella term for technologies that allow access to neurodata. Raw neurodata is collected from an individual’s central nervous system (the brain and spinal cord) and/or peripheral nervous system (the nerves outside the brain and spinal cord), including electrical activity between these systems. Neurotechnology includes both traditional techniques such as electroencephalography (EEG) testing and magnetic resonance imaging (MRI) scans, as well as new methods that can monitor or modulate brain activity.
Neurodata is valuable and uniquely sensitive, as it can reveal a person’s emotions, biases, and memories. For example, EEGs can measure inattention, as brainwaves can indicate whether someone’s mind is focused or wandering. With sufficient data collected over time, brainwave patterns may even be more uniquely identifying than fingerprints.
2. What are neurorights?
“Neurorights” have been formulated to encompass mental privacy, integrity, and liberty. They are not yet widely recognized at the national level or codified in an international human rights framework, and there is disagreement about their usefulness as a conceptual framework. Some prefer other terms such as “mental privacy” or “cognitive liberty;” others question the necessity of introducing new rights, asking whether current legal frameworks are sufficient or could be strengthened to account for them. Neurorights can be simplified into five fundamental rights:
Mental Privacy: Personal neurodata should be private, and should not be stored or sold without consent.
Personal Identity: Neurotechnology should not alter “mental integrity,” or an individual’s sense of self.
Free Will: Individuals should retain decision-making control, without unknown manipulation via neurotechnology.
Fair Access to Mental Augmentation: Cognitive enhancement neurotechnology should be accessible to everyone.
Protection from Bias: Neurotechnology algorithms should not discriminate.
3. The emergence of neurorights
Advances in neurotechnology, partly funded by large research programs such as the US-based Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, have spurred global interest in establishing legal safeguards for the brain and neurodata. In 2019, the Organisation for Economic Co-operation and Development (OECD) developed the first international standards to respond to neurotechnology’s ethical, legal, and social challenges. The OECD’s Recommendation on Responsible Innovation in Neurotechnology provides guiding principles to prioritize safety, inclusivity, collaboration, and trust in neurotechnology. In 2022, the UNESCO International Bioethics Committee issued a report on the ethical issues of neurotechnology and advocated for a comprehensive governance framework.
On a regional level, the Inter-American Juridical Committee of the Organization of American States (OAS) issued a Declaration on neuroscience and neurotechnologies and human rights in 2021. Two years later, the OAS followed up with a set of Principles to align international standards to national frameworks. In the same year, the Ibero-American Network of Data Protection Authorities (RIPD), the main forum for Spanish- and Portuguese-speaking data protection regulators, declared support for the OAS Declaration and Principles and announced the establishment of a working group on neurodata.
Perhaps the most consequential call for action was the 2022 Neurorights Model Law, drafted by the Latin American and Caribbean Parliament (Parlatino), a regional organization that promotes regional integration through legislative harmonization. The model law provides both structure and foundational concepts to regulate neurotechnology, including establishing an independent oversight authority and providing redress mechanisms.
Transnational stakeholders such as the OAS and Parlatino have played large roles in establishing Latin America as a leading player in the neurorights discussion. However, legislative initiatives at the domestic level may prove more influential, as their impact continues to reverberate in Latin America and beyond.
4. Chile: The first country to protect “mental integrity” in its Constitution
As a pioneer in the neuroprivacy movement, Chile was the first country to amend its Constitution to protect “mental integrity” and neurodata in 2021. Specifically, the provision states that “the law shall regulate the requirements, conditions, and restrictions for [neurodata], and shall especially protect brain activity, as well as the information derived from it.” Furthermore, scientific and technological developments are to be conducted with “respect for […] physical and mental integrity.”
Led by Senator Guido Girardi Lavín and several other legislators, the amendment centered on individual identity as an intrinsic value of human evolution and referred to physical and psychic integrity as its main elements. The legislators asserted that any technological development affecting mental integrity, as a fundamental right, should be authorized by law. Simultaneously, the same legislators introduced Bill 13.828-19, which aimed to further regulate neurotechnology by requiring consent for its use and establishing penalties for noncompliance.
In 2023, only two years after the country’s Constitution was amended, Chile’s Supreme Court became the first court to rule on a neuroprivacy case. The plaintiff, Senator Girardi, alleged that his brain data was insufficiently protected by US-based Emotiv’s “Insight” device, a headband that records detailed information about the brain’s electrical activity. The Court ultimately found that Emotiv violated Sen. Girardi’s constitutional rights to physical and psychological integrity as well as the right to privacy, setting aside Emotiv’s arguments that the harms were hypothetical. Citing both Chilean domestic law and international human rights law, the Court focused on the fact that Emotiv retained Sen. Girardi’s data for research purposes, even in anonymized form, without obtaining prior consent for this specific purpose. In addition to setting a precedent for neuroprivacy litigation, this case reflects the neurorights movement’s influence beyond the policy sphere.
5. Mexico: Proposed constitutional amendment for neuroprivacy rights
As of March 2024, there are two pending neuroprivacy bills that seek to amend Mexico’s Constitution. The first bill, proposed by Deputy María Eugenia Hernández Pérez, would include the right to individual identity, as well as physical and psychological integrity. The Chilean constitutional amendment’s influence is noticeable throughout the Mexican bill, including language requiring the State to respect mental privacy and integrity. Moreover, the proposal uses the same wording as Chile’s constitutional amendment and similarly spotlights the value of individual identity.
The proposal centers on human identity and its relation to technology, and not solely privacy and data protection, which are already recognized as two separate fundamental rights under Article 16 of Mexico’s Constitution. It includes broad legal safeguards to ensure the confidentiality of neurodata collection, informed consent before access, clear limits on neurotechnologies, and anti-discrimination measures. Moreover, the bill notes that while some local laws protect human rights and neurodata in the context of medical and scientific uses, there is a lack of regulation for non-medical uses.
The second Mexican bill, spearheaded by Senator Alejandra Lagunes Soto Ruiz, would amend Article 73 of the Constitution to provide congressional authorization to pass federal legislation related to artificial intelligence (AI), cybersecurity, and neurorights. Under this authority, Congress could safeguard mental privacy, cognitive autonomy, informed consent for the use of brain data, identity and self-expression, non-discrimination, and equal access to technology.
Both bills acknowledge that neuroprivacy is an emerging concept and focus on how neurotechnology could jeopardize fundamental rights. Although these bills approach the issue from different viewpoints, they both seek to protect personal data and build citizen trust. Additionally, in November 2023 the Mexican Data Protection Authority published a Digital Human Rights Charter that recognizes the five fundamental neurorights.
6. Brazil: Proposed constitutional amendment and neuroprivacy rights in privacy law
Several neuroprivacy initiatives have gained traction in Brazil. Bill 29/2023, introduced by Senator Randolph Frederich Rodrigues Alves in June 2023, seeks to amend the Brazilian Constitution to include protections for mental integrity and algorithmic transparency. In particular, the proposal highlights that recognizing “mental integrity” is essential to expand the “legal and normative understanding of human dignity in this new digital context” that protects both personal data and the “psychic and physical integrity of human beings.” The proposal was presented to the Senate in June 2023 and is pending until a Rapporteur is appointed to review the bill.1 Of note, the Brazilian Constitution was amended in February 2022 to include a right to the protection of personal data, distinct from the right to privacy.
Separately, Bill 522/2022, introduced by Deputy Carlos Henrique Gaguim in March 2022, would amend Brazil’s General Data Protection Law (LGPD) to regulate neurodata as a category of sensitive data. The bill would add a new section governing the processing of neurodata, emphasizing that the request for consent must “clearly and prominently indicate the possible physical, cognitive and emotional effects” of processing neurodata. Currently, Article 5 of the LGPD establishes racial and ethnic origin; religious, political, and philosophical affiliations; health and sexual life data; and genetic and biometric data as categories of sensitive data. However, the proposal highlights the need to include neurodata as a distinct category of sensitive data, not to be confused or associated with biometric data. The bill was approved by the Health Commission Rapporteur in October 2023 and awaits further consideration.
The neurorights discussion has also made its way into Brazil’s Federal Civil Code. In December 2023, the Sub-Committee on Digital Law of the Commission of Jurists, which is responsible for reviewing the Civil Code, submitted a report that seeks to recognize neuroprivacy under the LGPD. Independently, in December 2023, Rio Grande do Sul, Brazil’s fifth-largest state by population, amended its Constitution to include neurorights, specifying mental integrity as a constitutional principle.
7. Other regional initiatives
Similar legislative efforts are underway in the region, with some variations:
Costa Rica proposed amending the country’s data protection law to include a definition of biometric data which, in contrast to Brazil’s proposal, categorizes neurodata as biometric data.
Colombia is considering updating its data protection law to include a section specific to the processing of data through AI and neurotechnologies. The proposal sets out specific obligations for accessing and processing neurodata.
Argentina has two pending bills: Bill 2446/23 proposes the creation of a bicameral committee to develop a neurorights framework. Separately, another bill would amend the Federal Code of Civil Procedure to allow information from neurotechnologies that infer mental activity to be admissible as evidence.
Uruguay’s Parliament reported that elected officials have met with their Chilean counterparts to discuss neurorights. In February 2024, Deputy Rodrigo Goñi indicated that Parliament is considering regulating neurotechnologies and providing safeguards for brain integrity and neurodata.
As neurotechnology continues to advance, it raises key questions about how the data involved should be regulated. Latin America is at the forefront of that conversation and has paved the way in recognizing neuroprivacy, from Chile’s Constitution, to Mexico and Brazil’s pending legislation. Regional frameworks, such as the OAS Declaration and Principles, illustrate that neurorights are coalescing on the international level as well. The groundswell of legislative proposals and domestic laws demonstrates that the fight for neuroprivacy is here to stay—and for now, at least, Latin America is the place to watch.
1 According to the Brazilian Chamber of Deputies Internal Rules, Art. 56, committee bills and other proposals will be examined by a Rapporteur who must issue an opinion.
AI Audits, Equity Awareness in Data Privacy Methods, and Facial Recognition Technologies are Major Topics During This Year’s Privacy Papers for Policymakers Events
Author: Judy Wang, Communications Intern, FPF
The Future of Privacy Forum (FPF) hosted two engaging events honoring 2023’s must-read privacy scholarship at the 14th Annual Privacy Papers for Policymakers ceremonies.
On Tuesday, February 27, FPF hosted a Capitol Hill event featuring an opening keynote by U.S. Senator Peter Welch (D-VT) as well as facilitated discussions with the winning authors: Mislav Balunovic, Emily Black, Albert Fox Cahn, Brenda Leong, Hideyuki Matsumi, Claire McKay Bowen, Joshua Snoke, Daniel Solove, and Robin Staab. Experts from academia, industry, and government moderated these policy discussions, including Michael Akinwumi, Didier Barjon, Miranda Bogen, Edgar Rivas, and Alicia Solow-Niederman.
On Friday, March 1, FPF honored winners of internationally focused papers in a virtual conversation hosted by FPF Global Policy Manager Bianca-Ioana Marcu, with FPF CEO Jules Polonetsky providing opening remarks. Watch the virtual event here.
For the in-person event on Capitol Hill, Jordan Francis, FPF’s Elise Berkower Fellow, provided welcome remarks and emceed the night, thanking Alan Raul, FPF Board President, and Debra Berlyn, FPF Board Treasurer, for being present. Mr. Francis noted he was excited to present leading privacy research relevant to Congress, federal agencies, and international data protection authorities (DPAs).
FPF’s Jordan Francis
In his keynote, Senator Welch celebrated the importance of privacy and the pioneering work done by this year’s winners. He emphasized that privacy is a right that should be protected constitutionally and that researchers studying digital platforms are essential for understanding evolving technologies and their impacts on our privacy. He also told the authors that their scholarship is consistent with the pioneering work of Justice Louis Brandeis and Samuel Warren, stating that “the fundamental respect that they had then underlies the work that you do for American citizens today.” He concluded his remarks by highlighting the need for an agency devoted to protecting privacy and that the work done by the authors is providing that foundation.
Senator Peter Welch (D-VT)
Following Senator Welch’s keynote address, the event shifted to discussions between the winning authors and expert discussants. The 2023 PPPM Digest includes summaries of the papers and more information about the authors.
Professor Emily Black (Barnard College, Columbia University) kicked off the first discussion of the night with Michael Akinwumi (Chief Responsible AI Officer at the National Fair Housing Alliance) by talking about her paper, Less Discriminatory Algorithms, co-written with Logan Koepke (Upturn), Pauline Kim (Washington University School of Law), Solon Barocas (Microsoft Research), and Mingwei Hsu (Upturn). Their paper argues that entities that use algorithmic systems in traditional civil rights domains like housing, employment, and credit should have a duty to search for and implement less discriminatory algorithms (LDAs). During the conversation, Professor Black discussed model multiplicity and argued that businesses should have an onus to proactively search for less discriminatory alternatives. They also discussed reframing the industry approach, what regulatory guidance could look like, and how this aligns with President Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
Michael Akinwumi and Professor Emily Black
Next, Claire McKay Bowen (Urban Institute) and Joshua Snoke (RAND Corporation) discussed their paper, Do No Harm Guide: Applying Equity Awareness in Data Privacy Methods, with Miranda Bogen (Director, AI Governance Lab at the Center for Democracy & Technology). Their paper uses interviews with experts on privacy-preserving methods and data sharing to highlight equity-focused work in statistical data privacy. Their conversation explored questions such as “What are privacy-utility trade-offs?” and “What do we mean by data representation?”, and highlighted real-world examples of equity issues surrounding data access, such as informing prospective transgender students about campus demographics versus protecting current transgender students at law schools. They also touched on aspirational workflows, including tools and recommendations. Attendees asked questions regarding data cooperatives, census data, and more.
Miranda Bogen, Claire McKay Bowen, and Joshua Snoke
Brenda Leong (Luminos.Law) and Albert Fox Cahn (Surveillance Technology Oversight Project) discussed their paper AI Audits: Who, When, How…Or Even If? with Edgar Rivas (Senior Policy Advisor for U.S. Senator John Hickenlooper (D-CO)). Co-written with Evan Selinger (Rochester Institute of Technology), their paper explains why AI audits are often regarded as essential tools within an overall responsible governance system while also discussing why some civil rights experts are skeptical that audits can fully address all AI system risks. During the conversation, Ms. Leong stated that AI audits need to be developed and analyzed because they will be included in governance and legislation. Mr. Cahn raised important questions, such as whether we have the accountability necessary for AI audits already being deployed and whether audit elements voluntarily provided in the private sector can translate to public compliance. The co-authors also discussed New York City’s 2023 audit law (used as a case study in their paper), commenting that the law’s standards and broad application potentially open the door for discussion of key issues, including those relating to discriminatory models.
Brenda Leong and Albert Fox Cahn
During the next panel, Professor Daniel Solove (George Washington University Law School) discussed his paper Data Is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data with Didier Barjon (Legislative Assistant for U.S. Senate Majority Leader Charles Schumer (D-NY)). His paper argues that heightened protection for sensitive data does not work because the sensitive data categories are vague and lack a coherent theory for identifying them. In their discussion, Professor Solove noted that sensitive information can still be inferred from non-sensitive data, making it difficult to know which combinations become sensitive data and which do not. He then stated that to be effective, privacy law must focus on harm and risk rather than the nature of personal data: “Categories are not proxies—[we] need to do the hard work of figuring out the harm and risk around data.”
Didier Barjon and Professor Daniel Solove
Professor Solove and Mr. Barjon were then joined on stage by Hideyuki Matsumi (Vrije Universiteit Brussel) to discuss Professor Solove’s and Mr. Matsumi’s co-authored paper, The Prediction Society: Algorithms and the Problems of Forecasting the Future. Their paper raises concerns about the rise of algorithmic predictions and how they not only forecast the future but also have the power to create and control it. Mr. Barjon asked the authors about the “self-fulfilling prophecy” problem discussed in the paper, and Mr. Matsumi explained that this refers to the idea that people perform better if there’s a higher expectation to do so and vice versa. Therefore, even if an algorithmic prediction is inaccurate, individuals susceptible to or prone to believe the prediction will be impacted, and the prediction will be made true, leading to what the authors called a “doom cycle.” The authors advocated for a risk-based approach to predictions and stated that we should analyze and think deeply about predictions rather than ban them altogether.
Hideyuki Matsumi and Professor Daniel Solove
In the evening’s final presentation, Robin Staab and Mislav Balunovic (ETH Zurich SRI Lab) discussed their paper, Beyond Memorization: Violating Privacy Via Inference with Large Language Models, with Professor Alicia Solow-Niederman (George Washington University Law School). Their paper, co-written with Mark Vero and Professor Martin Vechev (ETH Zurich SRI Lab), examined the capabilities of pre-trained large language models (LLMs) to infer personal attributes of a person from text on the internet and raised concerns about the ineffectiveness of protecting user privacy from LLM inferences. Professor Solow-Niederman asked the authors about the provider intervention suggested in the paper that could potentially align models to be more privacy-protective. The authors noted that there are limitations to what providers can do and that there is a tradeoff between having better inferences across all areas or having limited inferences but better privacy. They also stated that we need to be aware that alignment is not the solution and that the way to move forward is for users to be aware that such inferences can happen and to have the tools to write text from which inferences cannot be made.
Professor Alicia Solow-Niederman, Robin Staab, and Mislav Balunovic
As panel discussions ended, FPF SVP for Policy John Verdi closed the event by thanking the audience, winning authors, judges, discussants, the FPF Events team, and FPF’s Jordan Francis for making the event happen.
John Verdi
Thank you to Senator Peter Welch and Honorary Co-Hosts Congresswoman Diana DeGette (D-CO-1) and Senator Ed Markey (D-MA), Co-Chairs of the Congressional Privacy Caucus. We would also like to thank our winning authors, expert discussants, those who submitted papers, and event attendees for their thought-provoking work and support.
At the virtual event later that week, the conversation turned to the internationally focused winning papers.
The first discussion was moderated by FPF Policy Counsel Maria Badillo with authors Luca Belli (Fundação Getulio Vargas (FGV) Law School) and Pablo Palazzi (Allende & Brea) on their paper, Towards a Latin American Model of Adequacy for the International Transfer of Personal Data, co-authored by Dr. Ana Brian Nougrères (University of Montevideo), Jonathan Mendoza Iserte (National Institute of Transparency, Access to Information and Personal Data Protection), and Nelson Remolina Angarita (Law School of the University of the Andes). The conversation focused on diverse mechanisms for data transfers, such as the adequacy system, and on the relevance and necessity of a regional model of adequacy, including the benefits of a Latin American model. The authors also dove into the role of the Ibero-American Data Protection Network.
Maria Badillo, Pablo A. Palazzi, and Professor Luca Belli
The second discussion of the event was led by FPF Senior Fellow and Considerati Managing Director Cornelia Kutterer with author Catherine Jasserand (University of Groningen) on her winning paper, Experiments with Facial Recognition Technologies in Public Spaces: In Search of an EU Governance Framework. Their conversation highlighted the experiments and trials discussed in the paper as well as the legality of facial recognition technologies under data protection law. The second portion of the discussion focused on the EU AI Act and how it relates to the relevance and applicability of the laws highlighted in the paper.
Cornelia Kutterer and Professor Catherine Jasserand
We hope to see you next year at the 15th Annual Privacy Papers for Policymakers!
FPF Statement on the adoption of the EU AI Act and New Resource Webpage
“Today the European Union adopted the EU AI Act at the end of a long and intense legislative process. At the Future of Privacy Forum we believe that multistakeholder global approaches and advancing common understanding in the area of AI governance are key to ensuring a future with safe and trustworthy AI, one that protects fundamental rights while promoting innovation to benefit society.
The EU AI Act is a comprehensive, binding law with broad extraterritorial effect, and it is therefore poised to play a crucial role in the global debate on AI regulation. We welcome the openness and foresight of the European Union’s lawmakers in adopting a definition of AI systems that is interoperable with the one proposed by the OECD.
At the same time, we acknowledge the long and complicated road ahead to make the provisions of the EU AI Act effective in practice. With personal data playing a key role in the development and deployment of AI systems, we at the Future of Privacy Forum are paying particular attention to how privacy and data protection norms around the world interact with AI governance frameworks such as the EU AI Act. We will continue to explore this complicated question with research, convenings, and evidence-based tools related to AI governance.”
Jules Polonetsky, CEO of the Future of Privacy Forum
For a list of existing FPF Resources on the EU AI Act, see our new dedicated webpage.