What They’re Saying: Stakeholders Warn Senate Surveillance Bill Could Harm Students, Communities

Parents, privacy advocates, education stakeholders, and members of the disability rights community are raising concerns about new Senate legislation that would mandate unproven student surveillance programs and encourage greater law enforcement intervention in classrooms in a misguided effort to improve school safety.

Last week, Senator John Cornyn (R-TX) introduced the RESPONSE Act, legislation that is intended to help reduce and prevent mass violence in communities. However, the bill includes a provision to dramatically expand the Children’s Internet Protection Act and would require almost every U.S. school to implement costly network monitoring technology and collect massive amounts of student data.

The legislation also requires the creation of school-based “Behavioral Intervention Teams” that will be strongly encouraged to refer concerning student behavior directly to law enforcement, rather than allowing educators who know students best to engage directly and address the issue internally. This provision would likely strengthen the “school-to-prison pipeline” and could be especially harmful for students of color and students with disabilities.

Take a look at What They’re Saying about the legislation: 

A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.

– The Guardian; “Republicans propose mass student surveillance plan to prevent shootings” 

Training behavioral assessment teams to default to the criminal process rather than school-based behavioral assessment and intervention would do little to address violence in schools and would likely foster rather than prevent a violent school environment … By making the criminal process the frontline for student discipline, this bill will only serve to increase the number of students of color and students with disabilities in the juvenile justice system.

– Coalition for Smart Safety; Letter to Senator John Cornyn 

Leslie Boggs, national PTA president, said in a statement that the organization has concerns with the bill as it is currently written. She said the PTA will work with Cornyn’s staff “to ensure evidence-based best practices for protecting students are used, the school to prison pipeline is not increased, students are not discouraged from seeking mental health and counseling support and that students’ online activities are not over monitored.”

– POLITICO; “Questions raised about school safety measures in anti-mass violence bill” 

Privacy experts and education groups, many of which have resisted similar efforts at the state level, say that level of social media and network surveillance can discourage children from speaking their minds online and could disproportionately result in punishment against children of color, who already face higher rates of punishment in school.

– The Hill; “Advocates warn kids’ privacy at risk in GOP gun violence bill” 

Generational gaps between adults and teens make for hefty communication barriers, and a private Facebook message that might read as “dangerous” to a grown law enforcement officer could easily just be two children goofing off… whenever they go online, students would be forced to think about what the government or their school would like and dislike, driving what Republicans so often claim to be against — mental conformity to institutional, government-driven norms. Students’ fears of being watched (and reported) would also inevitably widen the gap between government schools and their students. Surveillance accompanied by the threat of penalty would result in mass distrust from students toward the education system: a reinforced “us versus them” mentality between students and the adults in charge.

– Washington Examiner; “Sorry Republicans, but surveilling schoolchildren is an awful idea” 

Schools are already deploying significant digital surveillance systems in the name of safety…But critics say these surveillance systems vacuum up a huge and irrelevant stream of online data, can lead to false positives, and present huge problems for privacy.

– Education Week; “Senator’s Anti-Violence Bill Backs Active-Shooter Training, School Internet Monitoring” 

Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

– Future of Privacy Forum; “Increased Surveillance is Not an Effective Response to Mass Violence” 

Billed as a response to school shootings, [the RESPONSE Act] has, as critics noted, almost nothing to do with guns, and a great deal to do with increasing surveillance (as well as targeting those with mental health issues)…Not everyone will find this troubling… But if you want to erode civil liberties and traditions of privacy, it’s best to start with people who don’t have the political power to fight back. Children are ideal–not only can’t they fight back, but they will grow up thinking it’s perfectly normal to live under constant surveillance. For their own safety, of course.

– Forbes; “Is Big Brother Watching Your Child? Probably.” 

Rather than focusing on surveillance as a solution to school safety concerns, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.

To learn more about the Future of Privacy Forum’s student privacy project, visit http://www.ferpasherpa.org/.

Alexandra Sollberger

ICYMI: New Senate Legislation Mandates “Pervasive Surveillance” in Attempt to Improve School Safety

WASHINGTON, D.C. – Legislation introduced in the U.S. Senate this week is under scrutiny from privacy and disability rights advocates for provisions that would dramatically expand surveillance technologies in schools nationwide, despite a lack of evidence that these tools have any effect on preventing or predicting school violence.

According to The Guardian, “A new Republican bill that claims ‘to help prevent mass shootings’ includes no new gun control measures. Instead, Republican lawmakers are supporting a huge, federally mandated boost to America’s growing school surveillance industry… There is still no research evidence that demonstrates whether or not online monitoring of schoolchildren actually works to prevent violence.”

Future of Privacy Forum Senior Counsel and Director of Education Privacy Amelia Vance highlighted the challenges and unintended consequences that could result from the RESPONSE Act sponsored by Senator John Cornyn (R-TX):

Privacy advocates say pervasive surveillance is not appropriate for an educational setting, and that it may actually harm children, particularly students with disabilities and students of color, who are already disproportionately targeted with school disciplinary measures.

“You are forcing schools into a position where they would have to surveil by default,” said Amelia Vance, the director of education privacy at the Future of Privacy Forum.

“There’s a privacy debate to be had about whether surveillance is the right tactic to take in schools, whether it inhibits students’ trust in their schools and their ability to learn,” Vance said. But “the bottom line,” she said, is “we do not have evidence that violence prediction works”…

If Cornyn’s bill becomes law, “you’re going to force probably 10,000 districts to buy a new product that they’re going to have to implement”, she said.

That would mean redirecting public schools’ time and money away from strategies that are backed by evidence, such as supporting mental health and counseling services, and towards dealing with surveillance technologies, which often produce many false alarms, like alerts about essays on To Kill a Mockingbird.

Click here to read the article. To learn more about the Future of Privacy Forum, visit www.fpf.org.



Increased Surveillance is Not an Effective Response to Mass Violence

By Sara Collins and Anisha Reddy

This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crimes, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.” 

Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

More Untargeted Monitoring is not the Answer

About 95% of schools are required to create internet safety policies under CIPA (these requirements are tied to schools’ participation in the “E-rate” telecommunications discount program). CIPA requires safety policies to include technology that monitors, blocks, and filters students’ attempts to access inappropriate online content. CIPA generally imposes monitoring requirements regarding obscene content, child pornography, and content that is otherwise harmful to minors.

The RESPONSE Act would impose new obligations, requiring schools to infer whether a student’s internet use might indicate they are at risk of committing self-harm or extreme violence against others. However, there is little evidence that detecting or blocking this kind of content is technically possible, or that doing so would prevent physical harm. A report on school safety technology funded by the U.S. Department of Justice noted that violence prediction software is “immature technology.” Not only is the technology immature, but the FBI found that there is no one profile for a school shooter: scanning student activity to look for the next “school shooter” is unlikely to be effective.

By directing schools to implement “technology protection measure[s] that detect online activities of minors who are at risk of committing self-harm or extreme violence against others,” the RESPONSE Act would essentially require that all schools across the nation implement some form of comprehensive network or device monitoring technology to scan lawful content–a direct violation of local control and a serious invasion of students’ privacy. 

This broad language could encourage schools to collect as much information as possible about students, requiring already overwhelmed faculty and administrators to spend countless hours sifting through contextually harmless student data–hours that could be better spent engaging with students directly.

Additionally, this technology mandate could limit schools’ ability and desire to implement more thoughtful and effective programs and policies designed to improve school safety. Schools may assume that network monitoring technology is more effective than it actually is and redirect resources away from evidence-based school safety measures, such as holistic approaches to early intervention. Further, without more guidance, school administrators would be forced to make judgment calls that result in the over-monitoring of student online activity.

The cost associated with the implementation of these technologies goes beyond buying appropriate network monitoring software, which is a burden in and of itself. Schools⁠—which are under-resourced and under-staffed⁠—would experience difficulty devoting funds and staff time to monitoring these alerts, as well as developing policies for responses to those alerts. These burdens are further compounded in rural school districts that already receive less funding per student. 

False Alerts Unjustly Trap Students in the Threat Assessment Process

In some cases, network monitoring does not end when the school day ends. Schools often issue devices for students to take home or provide online accounts that students access from personal devices at home. Under the RESPONSE Act, these schools would be forced to monitor students constantly. If a school gets an alert during non-school hours, its default action may be to alert law enforcement. But sending law enforcement to conduct wellness checks is not a neutral action. These interactions can be traumatic for students and families, and can result in injury or false imprisonment. These harms are exacerbated when monitoring technology produces overwhelming numbers of false positives.

Even if content monitoring technology were effective, the belief that surveillance has no negative outcomes or consequences for students has created a pernicious narrative. Surveillance technologies, like device, network, or social media monitoring services, can harm students by stifling their creativity, individual growth, and speech. Constant surveillance also conditions students to expect and accept that authority figures, such as the government, will always monitor their activity. We also know that students of color and students with disabilities are disproportionately suspended, arrested, and expelled compared to white students and non-disabled students. The RESPONSE Act’s proposed new requirements would only serve to further exacerbate this disparity. 

Schools, educators, caregivers, and communities are in the best position to notice and address concerning student behavior. The Department of Education has several resources outlining effective disciplinary measures in schools, finding that “[e]vidence-based, multi-tiered behavioral frameworks . . . can help improve overall school climate and safety.”

Ultimately, requiring schools to spend money on ineffective technology would divert much-needed resources and staff from providing students with a safe learning environment. Rather than focusing on filtering content, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.

FPF Receives Grant To Design Ethical Review Process for Research Access to Corporate Data

One of the defining features of the data economy is that research is increasingly taking place outside of universities and traditional academic settings. With information becoming the raw material for the production of products and services, more organizations are exposed to and closely examining vast amounts of personal data about citizens, consumers, patients, and employees. This includes companies in industries ranging from technology and education to financial services and healthcare, as well as non-profit entities seeking to advance societal causes or other agenda-driven projects.

For research on data subject to the Common Rule, institutional review boards (IRBs) provide an essential ethical check on experimentation and research. However, much of the research relying on corporate data is beyond the scope of IRBs because the data was previously collected, the project or researcher is not federally funded, or the data comes from a public data set, among other reasons.

Future of Privacy Forum (FPF) has received a Schmidt Futures grant to create an independent party of experts for an ethical review process that can provide trusted vetting of corporate-academic research projects. FPF will establish a pool of respected reviewers to operate as a standalone, on-demand review board to evaluate research uses of personal data and create a set of transparent policies and processes to be applied to such reviews.

FPF will define the review structure, establish procedural guidelines, and articulate the substantive principles and requirements for governance. Other considerations to be addressed include companies’ common concerns about risk analysis, disclosure of intellectual property and trade secrets, and exposure to negative media and public reaction. Following this phase, members who can be available for reviews will be recruited from a range of backgrounds. The project will include input and review by government, civil society, industry and academic stakeholders.

Sara Jordan, who will be cooperating with FPF on this project, has proposed one model for addressing this challenge. Her paper, Designing an AI Research Review Committee, calls for a committee dedicated to the ethical oversight of AI research and gives serious consideration to how such an organization should be designed. The proposed design draws on the history and structure of existing research review committees, including IRBs, Institutional Animal Care and Use Committees (IACUCs), and Institutional Biosafety Committees (IBCs). It largely follows the IBC model, blended with features of human subject and animal care and use committees, in order to improve the implementation of risk-adjusted oversight mechanisms.

Another analysis and recommendation was published recently by the Northeastern University Ethics Institute and Accenture: Building Data and AI Ethics Committees. The paper argues that an ethics committee is a potentially valuable component of responsible collection, sharing, and use of data, machine learning, and AI within and between organizations. However, to be effective, such a committee must be thoughtfully designed, adequately resourced, clearly charged, sufficiently empowered, and appropriately situated within the organization.

European institutions are likewise grappling with these challenges through several recent AI guidance publications. For example, the Council of Europe has established an ad hoc committee on Artificial Intelligence, which will examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design, and application of artificial intelligence, grounded in the Council of Europe’s standards on human rights, democracy, and the rule of law.


The ethical framework applying to human subject research in the biomedical and behavioral research fields dates back to the Belmont Report. Issued in 1978 and adopted by the United States government in 1991 as the Common Rule, the Belmont principles were geared towards a paradigmatic controlled scientific experiment with a limited population of human subjects interacting directly with researchers and manifesting their informed consent. These days, researchers in academic institutions as well as private sector businesses not subject to the Common Rule seek to conduct analysis of a wide array of data sources, from massive commercial or government databases to individual tweets or Facebook postings publicly available online, with little or no opportunity to directly engage human subjects to obtain their consent or even inform them of research activities. Data analysis is now used in multiple contexts, such as combatting fraud in the payment card industry, reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K-12 schools, encouraging exercise and weight loss, and much more.

These data uses promise tremendous research opportunities and societal benefits but at the same time create new risks to privacy, fairness, due process and other civil liberties. Increasingly, researchers and corporate officers find themselves struggling to navigate unsettled social norms and make ethical choices for ways to use this data to achieve appropriate goals. The ethical dilemmas arising from data analysis may transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision making and filter bubbles.

In many cases, the scoping definitions of the Common Rule are strained by new data-focused research paradigms, which are often product-oriented and based on the analysis of preexisting datasets. For starters, it is not clear whether research of large datasets collected from public or semi-public sources even constitutes human subject research. “Human subject” is defined in the Common Rule as “a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information.” Yet, data driven research often leaves little or no footprint on individual subjects (“intervention or interaction”), such as in the case of automated testing for security flaws.

While obtaining individuals’ informed consent may be feasible in a controlled research setting involving a well-defined group of individuals, such as a clinical trial, it is untenable for researchers experimenting on a database that contains the footprints of millions, or indeed billions, of data subjects. In response to these developments, the Department of Homeland Security commissioned a series of workshops in 2011-2012, leading to the publication of the Menlo Report on Ethical Principles Guiding Information and Communication Technology Research. That report remains anchored in the Belmont Principles, which it interprets to adapt them to the domain of computer science and network engineering, in addition to introducing a fourth principle, respect for law and public interest, to reflect the “expansive and evolving yet often varied and discordant, legal controls relevant for communication privacy and information assurance.”

Ryan Calo foresaw the establishment of “Consumer Subject Review Boards” to address ethical questions about corporate data research. Calo suggested that organizations should “take a page from biomedical and behavioral science” and create small committees with diverse expertise that could operate according to predetermined principles for ethical use of data. No existing model maps directly onto the current challenges, however. The categorical, non-appealable decision making of an academic IRB, which is staffed by tenured professors to ensure independence, will be difficult to reproduce in a corporate setting. And corporations face legitimate concerns about sharing trade secrets and intellectual property with external stakeholders who may serve on IRBs.

FPF’s work on this grant will seek to demonstrate the composition and viability of one way to address these challenges.



COPPA Workshop Takeaways

On Monday, the Federal Trade Commission (FTC) held a public workshop focused on potential updates to the Children’s Online Privacy Protection Act (COPPA) rule. The workshop follows a July 25, 2019 notice of rule review and call for public comments regarding COPPA rule reform. The comment period remains open until December 9th. Senior FTC officials expect the process to result in changes to the COPPA rule. The workshop also follows the Commission’s high-profile settlement with YouTube regarding child-directed content.

Monday’s workshop was a key part of the Commission’s review; the day-long session featured panel discussions focused on the various questions raised regarding COPPA’s continued effectiveness as technology evolves. FPF’s Amelia Vance spoke on a panel focused on the intersection of issues related to children’s privacy and student privacy.

During the edtech focused panel, there was a consensus that schools should be able to use the Family Educational Rights and Privacy Act’s (FERPA) school official exception to provide consent on behalf of under-thirteen students under COPPA, rather than collect consent directly from parents. This allows schools to continue to exercise judgment over what technology is used, while also preserving the privacy protections of both COPPA and FERPA. Many speakers expressed that parents feel they have little transparency about the technology being used in their child’s school. The FTC may potentially require increased transparency or notice to assuage these worries.

We also noticed several recurring themes throughout the workshop:

  1. The tension between child-directed content and “child-attractive” or child-appropriate content and what that means under COPPA;

  2. The misconceptions surrounding the meaning of “actual knowledge” and COPPA’s “product improvement” exception; and

  3. A need to focus on frameworks and technology that allow children to safely be online.

The Tension Between “Child-Directed” Content and “Child-Attractive” or Child-Appropriate Content

Several questions were posed regarding the meaning of “child-directed” content:

Panelists cautioned that determining whether content is child-directed by focusing solely on audience makeup could create a moving target for creators; they would constantly have to monitor their audience to ensure they don’t cross the “child-directed” threshold. Without clear methods for monitoring when children are accessing general audience content, this tension could not only encourage additional data collection but also make it very difficult to create content for teenagers or “nostalgia” content for adults. Panelists noted that this tension extends beyond content creators to services originally intended for a general audience that unintentionally attract a child audience.

Harry Jho, a YouTube content creator, raised a concern that COPPA, as applied in the FTC’s YouTube settlement, will stifle creators’ ability to produce quality children’s online content. The settlement requires YouTube and creators to disable behaviorally targeted advertisements on child-directed content. Jho stated that he relies on behavioral advertising for the “lion’s share” of his revenue. Jho claimed that this settlement requirement will cause creators to suffer, and the quality of free children’s content on the internet to decline. Jho also articulated that there is confusion among creators about whether child-attractive or child-appropriate content will be considered “child-directed” under COPPA, resulting in less certainty than ever about whether COPPA applies to particular creators, channels, or videos.

Misconceptions: Actual Knowledge and Product Improvement

There was also significant confusion around the scope of COPPA and its definitions throughout the workshop. We heard many different opinions about the meaning of the actual knowledge standard, and the only point of agreement was that the YouTube settlement has contributed to the confusion. The FTC has said that having actual knowledge that there is child-directed content on your platform triggers COPPA. However, in the YouTube settlement, the FTC cited evidence showing that YouTube knew children were using the site, as well as pointing to channels that were obviously child-directed. Phyllis Marcus, a partner at Hunton Andrews Kurth, argued that the distinction between actual knowledge of child-directed content on a website versus actual knowledge that children are using a website seems to be collapsing. This shift, coupled with the confusion regarding the definition of “child-directed,” has caused significant uncertainty. Marcus believes that the use of the term “actual knowledge” in various other privacy regimes, such as the California Consumer Privacy Act, will also create substantial confusion for companies.

While discussing edtech, the question of whether product improvement remains acceptable under COPPA was raised.  Ariel Fox Johnson of Common Sense Media argued that product improvement is a commercial purpose under COPPA, full stop, and if schools are paying for a service, they should not also be “paying” with student data. FPF’s Amelia Vance argued that the product improvement exception is necessary to allow essential functions like security patches and authenticating users, so any changes should be carefully tailored.

Keeping Kids on the Internet 

A recurring discussion was that some strategies for COPPA compliance have the unintended consequence of keeping kids off the internet. Jo Pedder, Head of Regulatory Strategy at the United Kingdom Information Commissioner’s Office, discussed the UK’s implementation of the age-appropriate design code. The code’s goal is to empower kids on the internet while keeping them safe, rather than keeping them out of the digital world. Instead of a one-stop age-gate⁠—largely decried by panelists as an ineffective method of keeping kids safe from age-inappropriate content and data collection⁠—the design code requires entities to understand the age ranges of users and use these “age bands” to, for example, tailor privacy notices or settings.

Similarly, sites with a “mixed audience” under COPPA were heavily discussed, including whether age-gates can be effective in this space. Dona Fraser of the Children’s Advertising Review Unit pointed out that when kids see an age-gate, they see it as a requirement to lie about their age. Children want to use the internet, and they worry about what they are missing out on. When a mixed audience online service implements a holistic design approach by, for example, establishing a child-appropriate service by default, kids don’t feel like they are missing out on content and don’t have to lie.

Next Steps for the FTC

Several privacy advocates called for the Commission to exercise its 6(b) authority regarding COPPA-covered online services: under Section 6(b) of its enabling Act, the FTC has investigative authority to require reports providing “information about [an] entity’s ‘organization, business, conduct, practices, management, and relation to other corporations, partnerships, and individuals.’ 15 U.S.C. Sec. 46(b).” Panelists who brought up Section 6(b) raised concerns about the lack of insight into what information is being collected by websites and applications, especially in the education technology sector. Panelists also asked the FTC to study the effectiveness of age-gates, to examine whether behaviorally targeted ads actually command a higher market value than contextual advertisements, and to include the voices of the most important stakeholders, children themselves, in its analysis. Additionally, several panelists commented that the child privacy conversation needs to evolve beyond notice and consent, and urged the FTC to focus on creating requirements that provide privacy protections to children without adding notice or consent mechanisms that burden both parents and companies.

Many panelists also urged the FTC to engage in more enforcement actions. One panelist stated that more frequent enforcement actions would have a “tremendous effect” in rooting out bad actors and encouraging COPPA compliance.


FPF Expands Health Privacy Initiative

FPF is delighted to announce that Dr. Rachele Hendricks-Sturrup has joined the staff as health policy counsel, strengthening FPF’s commitment to supporting the data protection and ethics guidelines needed for health data. In this role, Rachele will work with stakeholders to advance opportunities for data to be used for research and real world evidence, improve patient care, and allow patients to access their medical records. She will also continue to develop FPF’s projects around genetic data, wearables, and machine learning with health data.

Rachele received a Doctor of Health Science degree in 2018, with a special focus on pharmacogenomics and precision medicine. Previously, she conducted health information privacy research within Harvard Pilgrim Health Care Institute’s Department of Population Medicine, where she was one of the first research fellows to focus jointly on issues and challenges at the forefront of precision medicine and health policy.

As a prominent academic, Rachele has written numerous influential publications on consumer privacy and non-discrimination. She recently wrote a piece examining how direct-to-consumer genetic testing companies engage health consumers in unprecedented ways and leverage genetic information to deepen their engagement with health companies. Many of her peer-reviewed manuscripts, including one relevant piece entitled “Direct-to-Consumer Genetic Testing Data Privacy: Key Concerns and Recommendations Based on Consumer Perspectives,” can be accessed via PubMed.

FPF Appoints Robbert van Eijk as Managing Director for Europe

FPF Expanding EU Programming

BRUSSELS – October 1, 2019 – The Future of Privacy Forum (FPF) today announced Robbert van Eijk as managing director for its operations in Europe. In this role, Eijk will implement FPF’s agenda in Europe, oversee its day-to-day operations, and manage relationships with stakeholders in industry, government, academia, and civil society.

“European data protection policies are driving privacy practices around the world,” said FPF CEO Jules Polonetsky. “As an established leader in the data protection field, Rob has technical and policy expertise that will be a tremendous asset as we provide on-the-ground guidance to European stakeholders navigating the dynamic data protection landscape.”

Prior to serving in this position, Eijk worked at the Dutch Data Protection Authority (DPA) for nearly 10 years, where he became an authority in the field of online privacy and data protection. He represented the Dutch DPA in international meetings and as a technical expert in court. He also represented the European Data Protection Authorities, assembled as the Article 29 Working Party, in the multi-stakeholder negotiations of the World Wide Web Consortium on Do Not Track. Eijk is a technologist with a PhD from Leiden Law School focusing on online advertising (real-time bidding).

Peter Swire, FPF Senior Fellow and Professor at the Georgia Institute of Technology, worked with Eijk on the World Wide Web Consortium (W3C) Do Not Track process, which involved more than 100 organizations, and found him to be uniquely constructive. “Rob’s combination of technical insight, policy savvy, and integrity as a person is outstanding,” said Swire. “Rob is an acclaimed expert in EU data protection and the technology of processing personal data, while also understanding perspectives from the United States and globally. He will be a great leader for FPF in Europe.”

Eijk started his professional career in the automotive industry as an onsite consultant specializing in dealer-network planning. Before joining the Dutch DPA, he founded a company focused on office automation for small enterprises. In 2008, he sold the company, BLAEU Business Intelligence BV, after running it successfully for nine years. Eijk expects to draw on this expertise in the European tech market, helping local startups, entrepreneurs, and technologists build the knowledge needed to navigate tech and innovation policy.

“The Future of Privacy Forum could not have made a better choice than appointing Robbert van Eijk as Director for its European operations. He is a brilliant expert in privacy and technology matters and has contributed enormously to the European and international debate on these issues,” said Alexander Dix, Former Chairman of the International Working Group on Data Protection in Telecommunications (Berlin Group).

Eijk will collaborate with FPF EU senior policy counsel Gabriela Zanfir-Fortuna to expand FPF programming that bridges the gap between European and U.S. privacy cultures and builds a common data protection language. Through its convenings and trainings, FPF helps regulators, policymakers, and staff at EU data protection authorities better understand the data-driven technologies at the forefront of data protection law and policy. Last year, FPF kicked off its Digital Data Flows Masterclass, a year-long educational program designed for exactly that audience.

“FPF has a great reputation in the EU for bringing diverse stakeholders together to develop practical policy approaches to emerging technologies,” said Eijk. “It’s exciting to be part of a talented team exploring best practices for data portability, user control, the ethical use of AI, data research, anonymization, and other issues critical to data protection and fundamental rights in Europe and around the world.”

On 19 November, FPF will host its third annual Brussels Privacy Symposium in partnership with the Brussels Privacy Hub of the Vrije Universiteit Brussel. Details about the event, Exploring the Intersection of Data Protection and Competition Law: The 2019 Brussels Privacy Symposium, can be found here.

Media Contact:

Tony Baker

Future of Privacy Forum

[email protected]


About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.