Key Findings From the Latest ‘Right To Be Forgotten’ Cases

Case C-136/17 GC et al v CNIL – right to be forgotten; lawful grounds for processing of sensitive data

Link to judgment: ex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=335023 

Main issue: 

Google rejected four unrelated erasure requests, each seeking to de-link news articles from its search results pages; some of the articles contained sensitive data. The CNIL upheld Google’s assessment, considering that the public’s right to information prevailed in all cases. The data subjects challenged the CNIL’s decision in court, which referred questions to the CJEU for a preliminary ruling. One key question was whether Google, as a controller and within the limits of its activity as a search engine, must comply with the prohibition on processing sensitive personal data, which admits only very limited exceptions. In other words, before displaying a search result leading to information containing sensitive data, must Google ensure that one of the exceptions under Article 9(2) applies? And should controllers be treated differently depending on the nature of the processing they engage in? Another question was whether information related to criminal investigations falls under the definition of information related to “offences” and “criminal convictions” in Article 10 GDPR, and is therefore subject to the processing restrictions it imposes. The Court made detailed findings about the content of Article 17 GDPR (the right to be forgotten) and about the exceptions to the prohibition on processing sensitive personal data.

Key findings: 

Case C-507/17 Google – global de-listing requests 

Link to judgment: ex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=1103956

Main issue 

Key points 

Relevant nuances 

CCPA 2.0? A New California Ballot Initiative is Introduced


On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy Rights and Enforcement Act of 2020) that will be voted on in the 2020 election; if passed, the initiative would substantially expand CCPA’s protections for consumers and obligations on businesses. While the new proposal preserves key aspects of the current CCPA statute, there are some notable additions and amendments.

Notable Provisions 

The California Privacy Rights and Enforcement Act of 2020 ballot initiative would:

Next Steps

According to the California Elections Code (ELECT CA ELEC § 9002), the California Attorney General will hold a 30-day review process and public comment period, followed by five additional days for proponents of the initiative to amend the proposal, prior to the initiative appearing on the ballot.

As stated above, this proposal takes an idiosyncratic approach to the legislative process for laws passed via a ballot initiative by allowing for amendments after it is signed by the governor, provided the amendments are “consistent with and further the purpose and intent” of the Act. This approach suggests a willingness to pass new amendments to help the law keep pace with emerging technology. The standard process for amending ballot initiatives requires a supermajority vote of the legislature.

Civic Data Privacy Leaders Convene at MetroLab Annual Summit

By Kelsey Finch, FPF Senior Counsel

The MetroLab Network’s Annual Summit brought together an inspired group of civic, academic, industry, and nonprofit leaders to discuss the most important issues in smart cities and civic innovation. For the third year in a row, FPF partnered with MetroLab Network to promote data privacy perspectives and to advance responsible data practices within smart and connected communities.

This year at the Summit, I moderated a roundtable discussion of privacy officials representing Pittsburgh, Seattle, Boulder, and more than a dozen other cities who have joined the Civic Data Privacy Leaders Network, an FPF-led initiative supported by the National Science Foundation. Network members joined summit participants from academia, industry, and civil society to share their most pressing questions, concerns, and smart city success stories with each other. The roundtable highlighted the common privacy challenges and opportunities faced by today’s local government privacy leaders and sparked new ideas for promoting fair and transparent data practices.

In this candid and collaborative atmosphere, some common priorities emerged:

FPF also previewed a working draft of its forthcoming Smart Cities & Communities Privacy Risk Assessment at the roundtable, intended to help smart and connected communities ask the right questions and reach for the right tools to ensure that they are collecting, using, and sharing personal data responsibly.

Other important, data-centric discussions during the event included Thursday’s Mobility Data Management, Analytics, and Privacy session – in which Network member Ginger Armbruster of Seattle and I participated – and sessions dedicated to a new Model Data Handling Policy for Cities from UMKC, data equity and responsible data science, micromobility services, and digital equity and community engagement. Univision ran a Spanish-language story on the event focused on how smart cities can ensure equitable treatment and access to resources for immigrants, which you can view here.

While the Civic Data Privacy Leaders roundtable—and the MetroLab Summit as a whole—underlined the significant challenges that communities around the world are facing as they explore new technologies and data uses, it also highlighted the potential for civic innovation to deliver more livable, equitable, and sustainable communities. By working together across sectoral and geographic boundaries, the event showcased how we can help city and community leaders strengthen their ability to collect, use, and share data in a responsible manner and promote the public’s trust in smart city technologies and in local government.

To learn more or join the Civic Data Privacy Leaders Network, a peer group for local government privacy leaders from more than 25 localities in the U.S. and abroad, please contact me at [email protected]

The Right to Be Forgotten: Future of Privacy Forum Statement on Decisions by European Court of Justice

WASHINGTON, DC – September 24, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding two European Court of Justice decisions announced today in its cases with Google:

Key decisions about the balance of privacy and free expression still remain to be settled by the European Court of Justice (ECJ). Although the ECJ’s two decisions generally support the rights of those searching the web to access links to information, both show the tremendous weight European law gives to privacy as a human right that is given the strongest consideration before it is limited. Even though the court found that European law does not mandate global delisting when the Right to Be Forgotten is asserted, it indicated that a data protection authority could seek global delisting if the privacy balance called for it in a specific circumstance.

The court also made clear that within Europe there can be national variances in how the Right to Be Forgotten can be applied, given differences in local law and culture.

In a second case also decided today, the court avoided banning in advance listing of results that include political, racial or other sensitive information. It did require heightened consideration for those results, to the extent that it even required that pages containing information about criminal histories include relevant context on the search page, when the affected party objects to the results.


Media Contact:

Tony Baker

Future of Privacy Forum

[email protected]


About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting

FTC should investigate app developers banned by Facebook – Statement by Future of Privacy Forum CEO

Future of Privacy Forum Calls on FTC to Investigate Apps That Misused Consumer Data

WASHINGTON, DC – September 20, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding Facebook’s announcement that it has banned 400 developers from its app store:

The FTC should quickly act against many of these app developers, since they share the blame with Facebook, and some could still be holding on to consumer data or continuing to sell it. If apps that misuse Facebook members’ data escape legal penalty, developers will get the message that there is no legal risk to improper data-sharing. Every company, and especially app developers, needs to understand that there are consequences for abusing consumer data. This situation demonstrates yet again that Congress should dramatically increase the human and technological resources available to the FTC and give it broader authority to levy civil penalties.

Media Contact:

Tony Baker

[email protected]


About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting

New White Paper Explores Privacy and Security Risk to Machine Learning Systems

FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms

WASHINGTON, D.C. – September 20, 2019 – The Future of Privacy Forum (FPF) released a white paper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and security risks, and explaining approaches to data protection.

“Machine learning is a powerful tool with many benefits for society,” said Brenda Leong, FPF Senior Counsel & Director, Artificial Intelligence and Ethics. “Its use will continue to grow, so it is important to explain the steps creators can take to limit the risk that data could be compromised or a system manipulated.”

The white paper presents a layered approach to data protection in machine learning, including recommending techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, access controls, monitoring, documentation, testing, and debugging.

“Privacy or security harms in machine learning do not necessarily require direct access to underlying data or source code,” said Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer. “We explore how creators of any machine learning system can limit the risk of unintended leakage of data or unauthorized manipulation.”

Co-authors of the paper are Leong, Burt, Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer, and Patrick Hall, Senior Director for Data Science Products.

The white paper released today builds on the analysis in Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, released by FPF and Immuta in June 2018.

Leong and Burt will discuss the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City during the “War Stories from the Front Lines of ML” panel at 1:15 p.m. on September 25, 2019, and the “Regulations and the Future of Data” panel at 2:05 p.m. on the same day.


Media Contacts:

Tony Baker

Future of Privacy Forum

[email protected]


Hadley Weinzierl


[email protected]


About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting

About Immuta

Immuta was founded in 2014 based on a mission within the U.S. Intelligence Community to build a platform that accelerates self-service access and control of sensitive data. Immuta’s award-winning Automated Data Governance software platform creates trust across security, legal, compliance, and business teams so they can work together to ensure timely access to critical business data with minimal risks. Its automated, scalable, no-code approach makes it easy for users to access the data they need, when they need it, while protecting sensitive data and ensuring their customers’ privacy. Immuta is headquartered in College Park, Maryland. Learn more at

Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems

By Brenda Leong, FPF Senior Counsel & Director, Artificial Intelligence and Ethics

Machine learning (ML) is a powerful tool, providing better health care, safer transportation, and greater efficiencies in manufacturing, retail, and online services. That’s why FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.

Today, FPF released a whitepaper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and security risks, and explaining approaches to data protection. Unlike traditional software, machine learning systems can suffer privacy or security harms even without direct access to the underlying data or source code.

The whitepaper presents a layered approach to data protection in machine learning, including recommending techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, access controls, monitoring, documentation, testing, and debugging.
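As a concrete illustration of one of the techniques listed above, here is a minimal noise-injection sketch. The function and parameter names are hypothetical, not drawn from the whitepaper, and real deployments use carefully calibrated differential-privacy mechanisms rather than this bare-bones version:

```python
import random

def noisy_count(true_count, epsilon=0.5):
    """Return a count with Laplace noise added, a basic noise-injection
    technique that limits what any single record can reveal."""
    # The difference of two exponentials with rate epsilon is Laplace
    # noise with scale 1/epsilon (a count query has sensitivity 1).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# A query interface or model trainer would release the noisy value
# instead of the exact one.
print(noisy_count(1000))
```

Smaller values of `epsilon` inject more noise and thus offer stronger protection at the cost of accuracy.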

My co-authors of the paper are Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer, Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer, and Patrick Hall, Senior Director for Data Science Products.

The whitepaper released today builds on the analysis in Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, released by FPF and Immuta in June 2018.

Andrew and I will discuss the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City during the “War Stories from the Front Lines of ML” panel at 1:15 p.m. on September 25, 2019, and the “Regulations and the Future of Data” panel at 2:05 p.m. on the same day. If you’ll be at the conference, we hope you’ll join us!


What is 5G Cell Technology?  How Will It Affect Me?

(Except where otherwise noted, this post describes service and availability within the United States.)

5G, the fifth generation of wireless network technology, is the newest way of connecting wireless devices to cellular networks. Each previous generation of wireless technology has revolutionized the way people communicate and socialize, and led to waves of novel products and services using the new capabilities. The leap from 3G to 4G technology brought with it faster data transfer speeds, which supported widespread adoption of data cloud and streaming services, video conferencing, and Internet of Things devices such as digital home assistants and smartwatches. 5G technology has the potential to enable another wave of smart devices: always connected and always communicating to provide faster, more personalized services. While these new products and services may create significant benefits for both businesses and consumers, connected devices can raise substantial privacy risks. Concerns about data collection, use, and sharing must be addressed. 

How Does 5G Technology Work?

5G uses very fast, short-range signals on an unused frequency band to send and receive more total data, within a dense network of specialized small cell sites. A small cell site refers to the area within which this short-range wireless signal can be received. In contrast, current 4G technology uses long-range signals transmitted across crowded low frequency radio waves to send and receive data over a broad network of larger cell sites. 5G operates on a different part of the spectrum, using bands with little interference, at a higher frequency. This can result in faster download speeds and more stable connections for wireless devices, but waves at these frequencies have a shorter range. The 2G, 3G, and 4G/LTE systems all used the same long-range band of airwaves, as does broadcast television. This shift to a different part of the spectrum promises better connectivity, but will require a larger number of cell sites.
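As a rough back-of-the-envelope illustration of the range tradeoff described above (the 700 MHz and 28 GHz figures are example frequencies chosen for contrast, not a statement about any particular carrier’s bands), higher frequencies mean shorter wavelengths and greater free-space path loss at the same distance:

```python
import math

C = 3e8  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency."""
    return C / freq_hz

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB:
    20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

lte, mmwave = 700e6, 28e9  # a low 4G band vs. a millimeter-wave 5G band
print(round(wavelength_m(lte) * 100, 1), "cm vs",
      round(wavelength_m(mmwave) * 1000, 1), "mm")
print(round(fspl_db(100, mmwave) - fspl_db(100, lte), 1),
      "dB extra loss at 100 m")
```

The roughly 32 dB of additional loss at the higher frequency is why millimeter-wave service needs a much denser grid of small cell sites.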

Current 4G networks provide service by broadcasting radio waves from large cell towers to extended areas. The capabilities of individual tower sites, the geographic features of covered areas, and the amount of available spectrum all affect the quality of service. These 4G/LTE services operate on a model of regional centralization: all service from large areas, including up to several states’ worth of traffic, is collected from broadly spaced cell towers, then aggregated into one central location before distribution (connection to the internet).

The initial deployment of 5G technology to cell towers will likely operate similarly, and many of the gains will not be immediately noticeable to users until the supporting small cell sites are in place. Some early use cases will likely be in stadiums, malls, or other large venues where small cell sites will be placed to create discrete, distributed systems serving those bounded locations in ways that will provide faster service.

5G service requires telecommunications providers to convert existing cell towers to accommodate the new technology and then incorporate a network of small cell sites, densely located for maximum coverage of service areas. Most carriers are currently upgrading the services available on 4G systems as an interim step before full deployment of 5G – the full extent of which will take several years. For example, 4G towers are taking advantage of “MIMO” (multi-input/multi-output) technologies, which enable simultaneous operation of 2 or 4 channels to receive or send signals. Also, some carriers are opting to push internet connectivity out to more local levels than the current regionalization model – either to a single point per state, per metro area, or even to individual cell towers in some dense urban areas. These changes produce a noticeable increase in speed for individual users within those service areas.

5G technology will use a different type of signal, called millimeter waves, that, while significantly faster, is more limited in range and will ultimately require many more total sites. Such small cell sites must be much closer to each other than existing cell tower networks in order for the millimeter waves to carry a signal successfully. Service on this millimeter-wave spectrum can only begin once the full infrastructure of small cell sites is in place. This transition is a massive process. In the U.S., it will likely take 2-3 years to upgrade all the existing cell towers – the initial level at which a carrier may claim to be providing 5G service. (AT&T, for example, has upgraded towers in over 20 metropolitan locations and is continually adding to this list.) Some other countries are ahead of the U.S. in completing this initial phase, but none has yet extensively deployed the network of small cell sites needed to bring the system to full capability.

Telecommunications companies are starting in the core of dense urban areas, then spreading to the suburbs and fringes of metropolitan areas, and ultimately will reach all towers. Setting up the small cell sites can happen simultaneously with the cell tower conversions, but will almost certainly take much longer to reach full coverage. Many rural areas and less densely populated cities are offering accelerated permit processes and other business incentives to entice carriers to prioritize upgrades in those locations.

How Will 5G Technology Change Wireless Services and Products?

When companies reach widespread 5G small cell coverage, the new technology will offer two major improvements: (1) increased signal coverage (reliability) and (2) significantly faster mobile speeds with lower latency, i.e. the lag time between a signal and a response. However, it’s important to note that no devices designed solely for 4G will work on 5G networks. 5G-enabled hardware will be required, and devices designed and sold during the transitional years will almost certainly have to include connectivity to both networks. Whereas current devices can switch between 4G and 3G connectivity based on which signal is available in any particular location, the devices only operate on one of these systems at a time. Dual-capability 4G/5G devices will be able to operate on 4G and 5G at the same time – thus ensuring minimal disruption to service as consumers transit between locations and service availability.

When fully operational, 5G networks promise to provide significantly faster speeds with lower latency compared to current 4G/LTE networks. While speeds vary based on a variety of factors, most 4G networks average 40 megabits per second (“Mbps”) download speeds, with the fastest local networks reaching up to 500 Mbps. Although recent tests demonstrate that 4G networks can provide speeds up to 2 gigabits per second (Gbps), or 2,000 Mbps, under some conditions, 5G networks promise still higher speeds and better connectivity. The early performance of 5G networks is likely to appear similar to current high-speed 4G networks, but 5G technology has been shown to reach speeds of up to 4.5 Gbps once small cells are fully deployed.
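To put these speed tiers in perspective, a quick calculation shows how download times shrink as throughput grows (the 5 GB file size is an arbitrary example chosen for illustration):

```python
def download_seconds(size_gb, speed_mbps):
    """Time to download a file: size in gigabytes, speed in megabits/second."""
    size_megabits = size_gb * 8000  # 1 GB = 8,000 megabits
    return size_megabits / speed_mbps

movie_gb = 5  # assumed file size for illustration
for label, mbps in [("typical 4G", 40), ("fast 4G", 500),
                    ("peak 4G", 2000), ("5G target", 4500)]:
    print(f"{label}: {download_seconds(movie_gb, mbps):.0f} s")
```

At typical 4G speeds the example file takes minutes; at the 5G target figure it takes seconds.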

5G networks also promise drastic decreases in the latency of wireless transmissions, reducing the amount of lag time between an interaction with the network and the network’s response. To an ordinary consumer, the difference in speed and latency may not seem noticeable during everyday transactions, but for high-speed, data-intensive computing services these differences can completely revolutionize an industry. For example, lower latency in video conferencing may enable better feedback in communications and fewer dropped calls. Just as the shift from 3G to 4G/LTE provided capabilities that generated unforeseen applications and uses, it is impossible to predict exactly what may become feasible on 5G, and what consumer-facing services that will enable.

5G is unlikely to be used for all wireless communications. Although some analysts talk of autonomous vehicles relying on 5G, no current developers of these cars are designing with the intention of using 5G; instead, carmakers are implementing different technologies that use a different part of the spectrum. Likewise, connected devices in homes (IoT and “smart home” technology) will primarily remain connected via WiFi. However, home-related devices such as power and water meters, and smart city sensors that rely on cell technology, will likely transition to 5G. The improvement here is not so much speed, as these devices use very little bandwidth, but capacity: where current networks support roughly 100,000 devices per square kilometer, 5G is designed to support up to a million devices per square kilometer.

5G Security

The transition to 5G networks, the growth of IoT networks, and the expansion of other advanced technologies raise questions about security safeguards, particularly with regard to access and authentication protocols. Some IoT devices employ lighter security measures in an effort to allow simple connection and communication; this is particularly common for IoT technologies that pose lower risks to users and networks. 5G technology can support more connected devices on cellular networks, which will increase the number of potential vulnerabilities, but 5G may include security improvements as well.

Work is under way to determine the best practices for securing 5G networks in various applications. Experts expect the faster speeds and higher capacity of these networks to allow for more sophisticated threat assessment measures and more secure authentication frameworks, both aspects of 5G that could improve network security. The 5G network represents a shift from a centralized system to distributed, virtual networks. The carriers developing these systems are embedding security functions at multiple stages of deployment, including virtual protections as well as those that are hardware-based. Additionally, some techniques that work for 4G networks can be translated to 5G networks, as advanced 4G networks follow many of the same technological principles as 5G. While 5G may exacerbate some existing risks or create new security challenges, the technology may also provide new, effective ways to secure data and devices through more sophisticated networks and algorithms.

Global Use of 5G

5G standards are largely stable and interoperable in the United States. However, stakeholders have not yet agreed on a single standard for worldwide interoperability of 5G networks, and the telecommunications industry is working to identify and reach agreement on standards for global 5G connectivity. The current global standards for cellular connectivity are set by the 3GPP (3rd Generation Partnership Project), a consensus-driven oversight body that includes partners from Asia, Europe, and North America. The standards-setting process is handled by working groups formed within larger technical specification bodies; both individual company capabilities and regional balancing are priorities. All countries and companies will need to adhere to an agreed-upon standard for the manufacture and operation of 5G networks, as non-conforming equipment will not be interoperable across networks. Because interoperability is so important, the transition to 5G has drawn a greater number of proposed standards from both operators and manufacturers than past transitions.

Privacy Impacts of 5G

Faster speeds and lower latency may mean better products and services, but the transition to 5G will likely create privacy risks associated with new devices, data collection, and use of personal information. 5G access will enable individuals to have more smart devices that can reliably connect and interact with online services; at the same time, the technology can also create more detailed personal data sets for device manufacturers and service providers. 

For video platforms, 5G promises to provide higher image quality with less lag time; this also means that facial recognition technology will have clearer images to analyze. Similarly, public video surveillance networks can transmit more detailed video of an individual’s activities and can be more easily analyzed by artificial intelligence systems to identify and track individuals in public. The ability to collect and share data more quickly is likely to result in more useful, personalized services; at the same time, increased data collection can increase users’ concerns about the creation of more detailed profiles on individuals. 

5G promises to be a revolutionary improvement in cellular network technology. Better connections for users and faster speeds with lower latency for data-intensive computing will likely lead to improvements in existing products and services, as well as the development and implementation of new technologies. However, these changes demand that individual privacy be consistently prioritized in the new context; 5G technology will not eliminate existing privacy and security challenges. Even as it delivers beneficial services, a faster, more efficiently connected digital world will continue to pose data risks and will require continued, meaningful privacy safeguards to ensure the appropriate handling and protection of personal information.

Authored by Daniel Neally and Brenda Leong

10 Reasons Why the GDPR Is the Opposite of a ‘Notice and Consent’ Type of Law

The below piece was originally published on Medium. For a version with humorous images, head to the original post.

A ‘notice and consent’ privacy law puts the entire burden of privacy protection on the person and then it doesn’t really give them any choice. The GDPR does the opposite of this.

There is so much misunderstanding about what the GDPR is and what the GDPR does, that most of what is out there at this point is more mythology than anything else.

For example, an article in Axios claimed over the weekend that ‘the notice and consent approach forms the backbone of the GDPR’. This claim is simply not true.

Understanding and correctly categorizing the regulatory framework of the GDPR is very important right now. Consider the US Senate’s hearing yesterday on ‘GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation’. If this law is to serve as a point of reference for future privacy legislation in the US (in the sense of deciding how close to or far from it the future US privacy framework should be), then one should understand the mechanisms that make the GDPR what it is.

A ‘notice and consent’ framework puts the entire burden of protecting privacy and ensuring fair use of personal data on the person concerned, who is asked to ‘agree’ to an endless text of ‘terms and conditions’ written in exemplary legalese, without any real choice other than ‘all or nothing’ (agree to all personal data collection and use, or don’t get access to the service or webpage). The GDPR is anything but a ‘notice and consent’ type of law.

There are many reasons why this is the case, and I could get lost in the minutiae. Instead, I’m listing 10 high-level reasons, explained in plain language, to the best of my knowledge:

1. Data Protection by Design and by Default is a legal obligation

All organizations, public or private, that touch personal data (“processing” in the GDPR means anything from collection to storage to profiling and creating inferences to whatever else can be done to personal data) are under an obligation to bake privacy into all technologies and/or processes they create and, very importantly, to set privacy-friendly options as the default. There are no exceptions to this obligation. Data Protection by Design and by Default (DPbD) must be implemented regardless of whether the personal data will be obtained based on an opt-in, an opt-out, or a legal obligation to collect the data. It doesn’t matter: all uses of personal data must be based on DPbD. Check out Article 25 GDPR.
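As a rough software analogy (the class and field names here are hypothetical, not drawn from the GDPR), “by default” means that a freshly created account or product starts in its most privacy-protective configuration, and anything more permissive requires an active choice by the person:

```python
from dataclasses import dataclass

# Hypothetical account-settings object: under data protection by default,
# every optional data use starts in its most privacy-protective state.
@dataclass
class PrivacySettings:
    analytics_tracking: bool = False   # off unless the user opts in
    ad_personalization: bool = False   # off unless the user opts in
    location_history: bool = False     # off unless the user opts in
    profile_public: bool = False       # profile is private by default
    data_retention_days: int = 30      # keep data no longer than needed

# A new user gets the most private configuration without doing anything.
settings = PrivacySettings()
print(settings)
```

A product built this way never depends on the user finding and flipping a switch to be protected; the protective state is the starting point.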

2. Data Protection Impact Assessments are mandatory for large scale and other complex processing

All organizations that engage in any sort of sensitive, complex, or large-scale data use must conduct a Data Protection Impact Assessment (DPIA) before proceeding. Think of the now-common Environmental Impact Assessment (EIA). The DPIA is just like an EIA, but instead of measuring the impact of a project on the environment, it measures the impact of a project using personal data on all the rights of the individuals concerned, from free speech, to privacy, to non-discrimination. Depending on the results of the DPIA, safeguards must be put in place to minimize the impact on rights, or the project can simply be stopped if there is no way to minimize the risks. Again, this happens regardless of the opt-ins, opt-outs, legal obligations, or other grounds relied on by organizations to collect and use the personal data. Check out Article 35 GDPR.

3. All processing of personal data must be fair

Absolutely all collection and uses of personal data must be fair and transparent, regardless of the ground for processing (opt-in, opt-out, legal obligation, etc.). This is the number one rule for processing personal data listed in the GDPR (check out Article 5(1)(a)), and breaching it is sanctioned with the higher tier of fines. In practice, this means several things, including that people should be able to expect that their personal data is collected, used, or shared in the way it actually is.

4. There must be a specific, well-defined reason for all collection and use of personal data

From the outset, and regardless of the justification relied upon by an organization to process personal data (opt-in, opt-out, fulfillment of a contract etc.), personal data, whether collected directly from individuals, observed or inferred, may only be collected for specified, explicit and legitimate purposes, and may only be processed for those purposes or for purposes compatible with them. This is the principle of purpose limitation. In practice, it means it is illegal to collect personal data ‘because maybe some day I will find something useful to do with it’. Non-compliance with the purpose limitation obligations also triggers the higher tier of fines.

5. Data grabs unrelated to the purpose of processing are illegal

Only personal data that are relevant and limited to what is necessary to achieve the specified purpose may be collected or otherwise processed. Casting a wide net to grab as much personal data as possible, even when it is not needed for the announced purpose, is unlawful and, again, sanctioned with the higher tier of fines. This rule applies to all processing of personal data, even to processing activities mandated by law, such as anti-money-laundering checks.

6. The individual can actually do things about how his or her personal data is handled

Individuals have well-defined rights that allow them to do many things to ensure their personal data are processed fairly and lawfully. They can obtain a copy of the personal data being processed (regardless of whether it is processed on the basis of consent, a legal obligation or any other ground), have unlawfully processed personal data erased, object to processing, even lawful processing, on grounds relating to their particular situation, and initiate Court proceedings against any unlawful processing, with the possibility of claiming moral or material damages.

7. State of the art security is an obligation

There is an obligation to ensure state-of-the-art security measures for all processing of personal data, with hefty fines for data breaches. Check out Article 32 GDPR.

8. There is someone in each organization engaging in complex processing whose job is to ensure personal data are processed fairly and lawfully

All organizations that engage in complex, sensitive or large-scale data collection and use (this covers all of Big Tech, but also many others) must appoint a Data Protection Officer (DPO), whose role as an independent adviser is well regulated and protected by the GDPR. Technically, the DPO is someone specialized or experienced in data protection law, or in applying it, who advises the highest level of management on how to collect and use personal data fairly and lawfully. Check out Articles 37, 38 and 39 GDPR.

9. Personal data is followed through the vendor maze

The GDPR provides solid guarantees on how personal data is managed throughout the chain of vendors and suppliers of an organization. In particular, all vendors that process personal data on behalf of an organization have to enter into detailed contractual agreements that hold them accountable for how they protect the personal data entrusted to them. Vendors also have some direct statutory obligations, such as keeping a Record of processing activities and appointing a Data Protection Officer.

10. All processing of personal data must be kept in a comprehensive and updated Record

All organizations that collect and use personal data in any way are under an obligation to keep track, in a Record, of all the personal data they collect and use: for what purpose, for how long, about what categories of individuals, with whom they share the data, and other details prescribed by Article 30 GDPR, regardless of whether they collect it on the basis of an opt-in, an opt-out, a legal obligation, contract fulfillment etc. The only organizations exempted from this obligation are those with fewer than 250 employees that only occasionally process personal data. Even those must still keep a Record if the occasional processing may result in a high risk to the rights of individuals or involves sensitive personal data.
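The content of one entry in such a Record can be pictured as a simple structure (the field names below are paraphrases of the items Article 30(1) GDPR lists, and the example values are entirely made up):

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecordEntry:
    """One entry in an Article 30 GDPR Record of processing activities.
    Fields paraphrase the items Article 30(1) requires."""
    purpose: str                          # why the data is processed
    data_subject_categories: list[str]    # e.g. customers, employees
    personal_data_categories: list[str]   # e.g. contact details, payment data
    recipient_categories: list[str]       # with whom the data is shared
    retention_period: str                 # how long the data is kept
    security_measures: str                # general description of safeguards

# A fictional example entry for a payroll process.
entry = ProcessingRecordEntry(
    purpose="payroll administration",
    data_subject_categories=["employees"],
    personal_data_categories=["names", "bank details"],
    recipient_categories=["tax authority", "payroll vendor"],
    retention_period="7 years after employment ends",
    security_measures="encryption at rest, role-based access",
)
print(entry.purpose)  # payroll administration
```

The Record must be kept up to date and made available to the supervisory authority on request, so in practice it lives in a maintained register rather than in code.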

10th Annual Privacy Papers for Policymakers – Send Us Your Work!

The 10th Annual Privacy Papers for Policymakers awards have been announced. Register here to attend the event on February 6, 2020. We will open the submissions process for next year’s awards in fall 2020.

Have you conducted privacy-related research that policymakers should know about? If so, we can help you get it in front of key government officials. The Future of Privacy Forum will return to Capitol Hill early next year (date TBD) for the 10th installment of our annual Privacy Papers for Policymakers (PPPM) awards program.

PPPM recognizes the year’s leading privacy research and analysis that has practical implications for policymakers in Congress, federal agencies, and data protection authorities internationally. Winners are selected by a diverse team of academics, advocates, and industry privacy professionals. Awarded articles are chosen both for their scholarly value and because they offer policymakers concrete solutions and practical insights into real-world challenges.

Submit Your Work!

We are currently accepting finished papers to be considered for next year’s awards. The deadline for regular submissions is October 4, 2019, and the deadline for student submissions is October 25, 2019. For more on submission guidelines, please review this page.

Last year’s winning papers examined topical privacy issues: sexual privacy, data subject rights, how local laws can fill gaps in state and federal laws, and more. PPPM honors papers from academics, practitioners, technologists and lawyers, which allows for a wide range of approaches to research and analysis.

For 10 years, the research compiled in PPPM has informed the policy debate in Congress, in the states, and around the world. By submitting your work for consideration, you are providing a valuable tool for legislators and staff considering the structure and elements of a national privacy framework.

Please send any questions about the program or submission process to [email protected]. We look forward to reading your work!