Rob van Eijk Discusses Trends in European Privacy Discussions
We’re talking to FPF senior policy experts about their work on important privacy issues. Today, Rob van Eijk, FPF’s Managing Director for Europe, is sharing his perspective on FPF’s EU work, differences between U.S. and EU privacy frameworks, and more.
Prior to serving in his position as Managing Director for Europe at FPF, Rob worked at the Dutch Data Protection Authority (DPA) for nearly 10 years. He represented the Dutch DPA in international meetings and as a technical expert in court. He also represented the European Data Protection Authorities, assembled as the Article 29 Working Party, in the multi-stakeholder negotiations of the World Wide Web Consortium on Do Not Track. Rob is a privacy technologist with a PhD from Leiden Law School focusing on online advertising, specifically real-time bidding.
Tell us about yourself – what led you to be involved in privacy and FPF?
I first got into privacy at the end of 2009, at a time when the retention period for data from Automatic Number Plate Recognition (ANPR) cameras in the Netherlands was being actively discussed. Back then, I was doing contract work in computer forensics and project management. This was about a year after I had sold my company, BLAEU Business Intelligence BV – where I took care of the computer operations of small and medium-sized enterprises – after running it successfully for nine years.
While working as a contractor, I found out that the Dutch Data Protection Authority was looking for a technologist. Within three weeks of applying, I was part of the “Internet Team,” which was charged with formal investigations into compliance with the Privacy Directive 95/46/EC in the private sector. At the Dutch DPA, my role was to lead on-site inspections, collect evidence, and explain technology to people in the organization. I had a lot of fun in that role and stayed at the Dutch DPA for nearly 10 years.
The Dutch DPA made it possible for me to do a doctoral thesis alongside my work. Eventually, after finishing my PhD research, Jules invited me to present the results in an FPF Masterclass. It turned out that Jules and FPF were looking for someone on the ground in Europe, which led to my current role as FPF’s Managing Director for Europe.
How would you describe your role at FPF?
I am managing director of FPF operations in Europe, where we’re working to build a data protection community. Most of the work today is to ensure that everyone – particularly the Data Protection Authorities, academics, and civil society groups – understands the added value of the neutral platform that the Future of Privacy Forum provides. FPF is a non-profit membership organization with many companies that have unique privacy questions, whether related to ad-tech or to new laws and technologies being developed.
Another aspect of my role is ensuring that we are a respected voice in the media. We do a lot of media outreach and engagement – for instance, addressing questions around the implications of RFID implants on what it really means to be human. I also explain to what extent there are implications of certain laws or technologies on the rights and freedoms of the individual, while being mindful of the different legislative frameworks under which companies operate. I see my role as one that is intended to guide both the Future of Privacy Forum and our member organizations through the most important privacy issues and questions, cooperating with the academics and regulators, and facilitating a neutral space for interesting, topical discussions.
The legal framework for privacy in Europe is different from the U.S. framework. Could you explain some of those differences and their impacts?
In Europe, we have a human rights-based data protection framework, whereas U.S. laws are based on the notion of informational privacy as defined by Westin. In the EU, privacy is considered a genuine human right, and that thinking trickles down to the way we talk about the concepts of freedom, privacy, security, and autonomy.
Informational privacy is focused on information control. In the EU, control is based, for example, around the protection of personal information and what is yours – protection of the integrity of your body, your phone, the integrity of the technology that’s in your connected car. There’s a lot of data being generated today and some of that data can be connected in such a way that it creates a comprehensive picture of your life, which needs protection. Thus, moving from the idea that privacy is a human right, we can create clear boundaries related to the context of information that must be protected, not just in principles like necessity or proportionality, but also in a societal context. The impact of certain types of contexts varies, as do the requirements for a legal basis: certain categories of data – like health data – require a high bar for informed consent, and other types of data – like biometric data – are prohibited from being processed, unless there is an exemption such as a clear law that enables certain processes.
What is the impact of different legal frameworks for a developing technology like artificial intelligence and machine learning?
AI and ML are interesting technological developments that show the implications of having different privacy frameworks in an interconnected world. Big questions around bias and discrimination in data via AI systems, and also the harms of these technologies, are top of mind in academic, policy, and economic discussions in Europe and reflect European priorities and thinking in each of those areas, but don’t necessarily reflect the thinking elsewhere. The outcomes of those discussions – both in Europe and elsewhere – will influence how AI/ML technology is developed and regulated in different parts of the world. It’s always important to keep in mind that technology is not developed in isolation here in Europe, so we are dependent on small groups of specialized companies that provide these technologies globally and must interact with a variety of jurisdictions and frameworks.
One of the consequences of having different legal frameworks around the world is the impact on innovation in certain markets. Different legal frameworks can create big complications in terms of compliance for companies and lead to different uses of technology. For instance, in the advertising space, we’ve seen that the information requirements have changed in relation to evolving privacy regulations in different regions, with different thresholds for consent. The consequence of those differences is that the EU experience of browsing websites is very different from the U.S. experience. In that way, it’s fascinating to see how the same advertising technology can lead to vastly different experiences. That is a concrete example of how technology shapes society based on cultural values around privacy.
The impact of different legal frameworks was placed squarely at the center of the privacy debate when the Court of Justice of the European Union handed down its judgment in the Schrems II case. An important question in this debate is: how do we strike the right balance between the security of a state and the protection of its citizens, the legitimate interest of companies and their customers in benefiting from big data, and the fundamental right to privacy and freedom of the individual?
You started as a tech expert and became a policy expert. What advantages does that give you? How is that helpful to the overall conversation about tech policy?
I studied electrical engineering because I really wanted to understand the world around us. Later, I got a master’s degree from the Leiden Institute of Advanced Computer Science. That technical background provided me with the ability to bridge hardware (understanding how information is collected at the hardware level) and software (specifically, how that information is translated to data by software), and then be able to follow the data flow to servers and platforms.
Being able to zoom in and out on the data helps me be clear about policy questions and hash out the real risks from a data privacy perspective. Then, once a consensus around the key risks and issues is developed, we can think about ways to mitigate those risks – not just in terms of minimization or prevention, but also with the understanding that certain risks can be positive and can create opportunities. From the policy perspective, it’s valuable to understand what a bottom-up, data-driven world actually means, what the data looks like, how software works, and how the components in software and hardware work.
What do you see happening next? Key topics you’ll be working on over the next year or so?
In Europe, we’re closely following the topics that are on the work program of the European Data Protection Board and the European Data Protection Supervisor in terms of guidance that they provide. We’re also closely following the 2020 work program of the European Commission.
One of those topics, cookies, is close to my heart. E-privacy has become an extremely important topic, as a number of different data-driven contexts are impacted by new privacy rules governing the use of connected technologies.
We also go beyond topics and issues that are connected to personal data – we address uses of machine-generated data that are not necessarily personal data in the legal sense. Therefore, it’s important that we track policy developments related to the free flow of non-personal data, which also happens to be an issue that is top of mind for the European Commission at the moment.
To learn more about FPF in Europe, please visit fpf.org/eu.