Google rejected four unrelated erasure requests, each seeking the de-listing of news articles from its search results pages, some of which contained sensitive data. The CNIL upheld Google’s assessment, considering that the public’s right to information prevailed in all cases. The data subjects challenged the CNIL’s decision in court, which referred questions to the CJEU for a preliminary ruling. One key question was whether Google, as a controller and within the limits of its activity as a search engine, has to comply with the prohibition on processing sensitive personal data, which admits only very limited exceptions. In other words, before displaying a search result leading to information containing sensitive data, must Google ensure that one of the exceptions under Article 9(2) applies? And should controllers be treated differently depending on the nature of the processing they engage in? Another question was whether information related to criminal investigations falls under the definition of information relating to “offences” and “criminal convictions” under Article 10 GDPR, and is thus subject to the processing restrictions it imposes. The Court made detailed findings about the content of Article 17 GDPR (the right to be forgotten) and about the exceptions to the prohibition on processing sensitive personal data.
Key findings:
The Court makes it clear that its findings are equally applicable to the former provisions of the Directive, as well as to the current provisions of the GDPR.
The Court reiterated that Google is a controller (#35, #36, #37) and found that the law doesn’t provide for a general derogation from the prohibition on processing sensitive data for processing conducted by an internet search engine. Therefore, that prohibition and the exceptions to it apply to search engines as well (#42, #43).
In fact, the sensitivity of personal data is what justifies the obligations being applicable to all controllers equally, with the Court stating that exempting search-engine-controllers from the stricter regime applied to processing sensitive data would run counter to the purpose of the provisions to ensure “enhanced protection” for such processing, which “because of the particular sensitivity of the data, is liable to constitute … a particularly serious interference with the fundamental rights to privacy and the protection of personal data” (#44).
This being said, the Court nonetheless acknowledged the practical difficulty of applying those restrictions a priori to hyperlinks that lead to webpages containing sensitive data. “The specific features of the processing carried out by the operator of a search engine in connection with the activity of the search engine … may have an effect on the extent of the operator’s responsibility and obligations under those provisions” (#45), the Court found.
The Court added that a search engine is responsible for this processing “not because personal data referred to in those provisions appear on a web page published by a third party but because of the referencing of that page and in particular the display of the link to that web page in the list of results presented to internet users following a search on the basis of an individual’s name” (#46).
As a consequence, the Court decided that the prohibition on processing sensitive data only kicks in for search engines “by reason of that referencing, and thus via a verification, under the supervision of the competent national authorities, on the basis of a request by the data subject” (#47). This means that Google doesn’t have to justify any of the exceptions that would apply to its processing of sensitive personal data in hyperlinks displayed as search results before it receives a request from the data subject.
So what happens after a data subject signals that a search result leads to content that includes sensitive data about them and asks for de-listing?
The Court makes a thorough analysis of Article 17 GDPR (the right to be forgotten), laying out its conditions of applicability as well as its exceptions. It highlights that the exercise of the freedom of expression and information is now expressly mentioned among the exceptions to the right to be forgotten, per Article 17(3) GDPR (#56, #57).
It concludes that the GDPR “expressly lays down the requirement to strike a balance between the fundamental rights to privacy and protection of personal data guaranteed by Articles 7 and 8 of the Charter, on the one hand, and the fundamental right of freedom of information guaranteed by Article 11 of the Charter, on the other” (#59).
The Court considers that the processing of sensitive data by a search engine can be justified by consent – Article 9(2)(a); if the data are manifestly made public by the data subject – Article 9(2)(e); or where the processing is necessary for reasons of substantial public interest – Article 9(2)(g), on the basis of EU or Member State law (#61).
The Court then analyzes how these three exceptions to the prohibition would apply to the processing of sensitive data by a search engine. Relevantly, the Court finds that “in practice, it is scarcely conceivable … that the operator of a search engine will seek the express consent of data subjects before processing personal data concerning them for the purposes of his referencing activity” (#62). The Court seems to recognize the practical impossibility for a search engine of obtaining consent for its referencing activity. It also points out that, in any case, a request to have data de-listed would amount to a withdrawal of consent.
The other possible exception (that the sensitive data have been manifestly made public) is intended to apply “both to the operator of the search engine and to the publisher of the web page concerned” (#63). The Court doesn’t further explain what “manifestly made public” means.
When these conditions are met, and provided that the other conditions of lawfulness in Article 5 GDPR are complied with (purpose limitation, data minimization, etc.), the processing of sensitive data is “compliant” (#64). The Court thus seems not to support the approach taken by the EDPB, under which a controller needs first to have a general lawful ground for processing under Article 6 GDPR, and the processing must then also fall under one of the exceptions in Article 9.
Even in the case of a compliant processing, the Court points out that data subjects can still object to that processing based on their particular situation, following Article 21 GDPR (#65).
Ultimately, the Court shows that when dealing with a de-listing request involving sensitive data, a search engine must ascertain “having regard to the reasons of substantial public interest” per Article 9(2)(g) GDPR whether including the link at issue in the search results “is necessary for exercising the right of freedom of information of internet users potentially interested in accessing that web page by means of such a search, a right protected by Article 11 of the Charter” (#66). It thus seems that the Court links the right to information to a “substantial public interest”.
Finally, the Court assesses whether information related to ongoing criminal proceedings amounts to data relating to “offences” and “criminal convictions”, thus falling under the restrictions of Article 10 GDPR. The Court then provides guidance as to whether links leading to information about investigations should be deleted at the data subject’s request, if the investigation found the person concerned not guilty.
The Court takes a broad approach and establishes that “information relating to the judicial investigation and the trial and, as the case may be, the ensuing conviction, is data relating to ‘offences’ and ‘criminal convictions’” pursuant to Article 10 GDPR, “regardless of whether or not, in the course of those legal proceedings, the offence for which the individual was prosecuted was shown to have been committed” (#72).
The Court then adds a couple of nuances concerning de-listing requests of such information.
First, it states that the fact that “the information in question has been disclosed to the public by the public authorities in compliance with the applicable national law” is an indication that the processing is appropriate (#73).
Second, the Court adds that “even initially lawful processing of accurate data may over time become incompatible with [the GDPR] where those data are no longer necessary in the light of the purposes for which they were collected or processed” (#74).
The Court provided some detailed guidance on the elements that need to be taken into account in the balancing of rights. Specifically, it made a reference to the jurisprudence of the European Court of Human Rights balancing Article 8 of the European Convention on Human Rights (privacy) and Article 10 of the Convention (freedom of expression) in cases where freedom of the press is at stake and highlighted that “account must be taken of the essential role played by the press in a democratic society, which includes reporting and commenting on legal proceedings. Moreover, to the media’s function of communicating such information and ideas there must be added the public’s right to receive them” (#76).
The Court went further and recalled ECtHR case-law stating that “the public had an interest not only in being informed about a topical event, but also in being able to conduct research into past events”.
However, as a last point, the Court acknowledged that the public’s interest as regards criminal proceedings is “varying in degree” and “possibly evolving over time according in particular to the circumstances of the case” (#76). This last point could justify, in limited cases, de-listing of links falling in this category.
The fact that the CJEU recalled ECtHR case-law under Article 8 of the Convention is significant: after the EU Charter of Fundamental Rights entered into force, the CJEU built its profile as a human rights court by developing its own jurisprudence under the Charter.
The search engine will then have to assess “in the light of all circumstances of the case” whether the data subject has the right to the information in question no longer being linked with his or her name by a list of results displayed following a search carried out on the basis of that name. The Court provides detailed guidance on the circumstances to take into account (#77):
“the nature and seriousness of the offence in question”
“the progress and the outcome of the proceedings”
“the time elapsed”
“the part played by the data subject in public life and his past conduct”
“the public’s interest at the time of the request”
“the content and form of the publication” and
“the consequences of publication for the data subject”
The last finding of the Court is perhaps also the most consequential: the Court found that even if a link is not de-listed following the data subject’s request, the search engine is in any case required to rank information relating to the outcome of the criminal case first on the search results page.
In the words of the Court, “the operator is in any event required, at the latest on the occasion of the request for de-referencing, to adjust the list of results in such a way that the overall picture it gives the internet user reflects the current legal position, which means in particular that links to web pages containing information on that point must appear in first place on the list” (#78).
In 2015, the CNIL served Google with formal notice that, following a successful de-listing request, it must remove the links from all of its search engine’s domain name extensions globally, not only from the versions with EU Member State extensions (#30). Google challenged the decision in court, and that court referred questions to the CJEU for a preliminary ruling on the scope of the right to erasure. What was at issue, therefore, was an automatic effect of successful de-listing requests: should a successful request automatically be applied globally?
Key points
The Court makes it clear that its interpretation concerns both the Directive and the GDPR (#41), so the effects of the judgment also apply to Article 17 GDPR.
The Court was deferential to legal systems outside the EU, emphasizing that “numerous third States” do not recognize a right to de-listing or take a different approach to that right (#60), and that the balance between privacy, data protection, and freedom of information “is likely to vary significantly around the world” (#60).
The Court found that “currently” there is no obligation under EU law to de-list search engine results globally following a successful de-listing request. There is however an obligation to de-list them throughout the EU and not only in the Member State where the request was made (#64, #66).
At the same time, the Court was also deferential to national Courts and to DPAs, explicitly allowing them to impose global de-listing orders. The Court “emphasized” that while EU law does not require search engines to automatically de-list results globally following a successful request, “it also does not prohibit such a practice” (#72).
Citing its Melloni and Fransson jurisprudence, the Court stated that “a supervisory or judicial authority of a Member State remains competent to weigh up, in the light of national standards of protection of fundamental rights a data subject’s right to privacy and the protection of personal data concerning him or her, on the one hand, and the right to freedom of information, on the other, and, after weighing those rights against each other, to order, where appropriate, the operator of that search engine to carry out a de-referencing concerning all versions of that search engine”.
Therefore, global de-listing orders are still possible in those Member States whose fundamental rights practice allows it (and to the extent that practice does not conflict with the EU Charter of Fundamental Rights, per Melloni and Fransson), following a case-by-case analysis.
Interestingly enough, the Court does not make any findings concerning Articles 7 and 8 Charter in this judgment, other than mentioning them in one paragraph which recalled the findings in the first Google right to be forgotten judgment.
Relevant nuances
The Court included two findings in its judgment that could justify a potential future legislative measure requiring successful erasure requests to automatically have global scope.
The Court states that the referencing of a link to information regarding a person whose “center of interests is situated in the Union” is likely to have “immediate and substantial effects on that person within the Union itself” (#57).
The Court then informs the EU legislature that due to the consideration above, it is competent “to lay down the obligation for a search engine operator to carry out, when granting a request for de-referencing made by such a person, a de-referencing on all the versions of its search engine” (#58).
In fact, the Court made a point of highlighting the current lack of a specific legal provision extending the scope of GDPR rights outside the EU. In the Court’s view, “it is in no way apparent” that the EU legislature “would…have chosen to confer a scope on the rights enshrined in those provisions which would go beyond the territory of the Member States and that it would have intended to impose on an operator which, like Google, falls within the scope of that directive or that regulation a de-referencing obligation which also concerns the national versions of its search engine that do not correspond to the Member States” (#62). Thus, the Court seems not to take into account the intention of the EU legislature to generally confer extraterritorial effects on the GDPR, which is shown by the inclusion of Article 3(2) in the GDPR.
The Court’s disregard of the potential extraterritorial scope of GDPR provisions has immediate consequences for the cooperation and consistency mechanism. In the following paragraph, the Court states that EU law does not currently provide for cooperation instruments and mechanisms at EDPB level as regards the scope of a de-referencing outside the Union (#63). Technically, this means that if one DPA grants a global de-listing request, that DPA does not have to coordinate with the other DPAs at EDPB level and can act on its own.
Further, the Court acknowledged that even at Union level there will be differences regarding the result of weighing up the interest of the public to access information and the rights to privacy and data protection, especially in the light of the GDPR allowing derogations at MS level for processing for journalistic purposes or artistic/literary expression (#67).
Where this occurs in the case of cross-border processing, the Court stated that the EDPB must reach consensus and a single decision that is binding on all DPAs and with which the controller must ensure compliance as regards processing across the Union (#68). Therefore, for divergent de-listing practices at Union level, the EDPB is competent to hear cases and cooperate toward a single decision providing certainty, whereas for divergent de-listing practices at global level the Court decided the EDPB is not competent to cooperate on cases.
CCPA 2.0? A New California Ballot Initiative is Introduced
Introduction
On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy Rights and Enforcement Act of 2020) that will be voted on in the 2020 election; if passed, the initiative would substantially expand CCPA’s protections for consumers and obligations on businesses. While the new proposal preserves key aspects of the current CCPA statute, there are some notable additions and amendments.
Notable Provisions
The California Privacy Rights and Enforcement Act of 2020 ballot initiative would:
Create the “California Privacy Protection Agency,” an independent executive agency tasked with protecting consumer privacy, ensuring that consumers are well-informed about their rights and obligations, promulgating regulations, and enforcing the law against businesses that violate consumer privacy rights. The initiative provides for a hand-off process from the California Attorney General to this new agency; the AG’s office is currently responsible for CCPA education, rulemaking, and enforcement activities.
Add a new category of personal information to the CCPA called “sensitive information,” which includes: precise geolocation information, social security number, passport number, customer’s account log-in, financial account, personal information revealing a consumer’s racial or ethnic origin, religion, union membership, or sexual orientation, among other categories.
Grant consumers new rights over “sensitive information,” such as the right to opt out, at any time, of a business disclosing or using sensitive personal information for advertising and marketing, or disclosing this information to a service provider or contractor for those purposes.
Businesses shall provide a separate link for users to exercise this opt-out right.
Businesses must obtain opt-in consent prior to the sale of a consumer’s sensitive personal information. A consumer who opted in to the sale of sensitive personal information can revoke this authorization at any time.
Create a new right to correct inaccurate personal information.
Require opt-in consent for the collection of personal information from children under 16, and increase penalties for children’s privacy violations.
Provide that a consumer may request that a business disclose personal information collected beyond the currently-required 12-month period, and the business must provide such information unless doing so would be unduly burdensome or involve a disproportionate amount of information.
Require that a business notify the consumer when it uses the consumer’s personal information to advance the business’s own political interests or to influence the outcome of an election.
Enact additional notice requirements for businesses, including but not limited to, specific requirements for “third parties.”
Amend the definition of a “business” to cover entities with 100,000 or more consumers or households, rather than the CCPA’s 50,000 or more consumers, households, or devices.
Amend the definition of “business purpose” to include new elements such as “non-personalized advertising” (not based on a profile or predictions derived from a consumer’s past behavior), provided the information is not disclosed to a third party, used to build a profile of the consumer, or used to alter the consumer’s experience with the business.
Amend the definition of “deidentified” to: “information that cannot reasonably be used to infer information about, or otherwise be linked to, an identifiable consumer,” if the business meets certain requirements. The Attorney General will provide additional regulations related to the definition of “deidentified.”
Define “household” as “a group, however identified, of consumers who cohabitate with one another at the same residential address and share access to common device(s) or service(s) provided by a business.”
Provide that, once approved by voters, the provisions of the ballot initiative may be amended by a statute passed by a majority of the members of the California State Legislature and signed by the governor, if the amendments are “consistent with and further the purpose and intent” of the Act.
Next Steps
According to the California Elections Code (Cal. Elec. Code § 9002), the California Attorney General will hold a 30-day review process and public comment period, followed by five additional days for proponents of the initiative to amend the proposal, prior to the initiative appearing on the ballot.
As stated above, this proposal provides an idiosyncratic approach to the legislative process for laws passed via a ballot initiative by allowing for amendments after it is signed by the governor if the amendments are “consistent with and further the purpose and intent” of the Act. This approach suggests a willingness to pass new amendments to help the law keep pace with emerging technology. The standard process for amending ballot initiatives requires a supermajority vote of the legislature.
Civic Data Privacy Leaders Convene at MetroLab Annual Summit
The MetroLab Network’s Annual Summit brought together an inspired group of civic, academic, industry, and nonprofit leaders to discuss the most important issues in smart cities and civic innovation. For the third year in a row, FPF partnered with MetroLab Network to promote data privacy perspectives and to advance responsible data practices within smart and connected communities.
This year at the Summit, I moderated a roundtable discussion of privacy officials representing Pittsburgh, Seattle, Boulder, and more than a dozen other cities that have joined the Civic Data Privacy Leaders Network, an FPF-led initiative supported by the National Science Foundation. Network members joined summit participants from academia, industry, and civil society to share their most pressing questions, concerns, and smart city success stories with each other. The roundtable highlighted the common privacy challenges and opportunities faced by today’s local government privacy leaders and sparked new ideas for promoting fair and transparent data practices.
In this candid and collaborative atmosphere, some common priorities emerged:
engaging and activating key stakeholders (including elected leaders, partner organizations, and diverse community members);
securing adequate resources and strengthening in-house privacy expertise;
increasing municipal access to cutting-edge privacy enhancing technologies like differential privacy or synthetic data;
integrating privacy considerations at each stage of the data and technology lifecycle (including procurement);
committing to clear, consistent privacy principles that reflect community values and priorities.
FPF also previewed a working draft of its forthcoming Smart Cities & Communities Privacy Risk Assessment at the roundtable, intended to help smart and connected communities ask the right questions and reach for the right tools to ensure that they are collecting, using, and sharing personal data responsibly.
Other important, data-centric discussions during the event included Thursday’s Mobility Data Management, Analytics, and Privacy session, in which Network member Ginger Armbruster of Seattle and I participated, and sessions dedicated to a new Model Data Handling Policy for Cities from UMKC, data equity and responsible data science, micromobility services, and digital equity and community engagement. Univision also ran a Spanish-language story on the event, focused on how smart cities can ensure equitable treatment and access to resources for immigrants.
While the Civic Data Privacy Leaders roundtable, and the MetroLab Summit as a whole, underlined the significant challenges that communities around the world face as they explore new technologies and data uses, it also highlighted the potential for civic innovation to deliver more livable, equitable, and sustainable communities. The event showcased how, by working together across sectoral and geographic boundaries, we can help city and community leaders strengthen their ability to collect, use, and share data responsibly and promote the public’s trust in smart city technologies and in local government.
To learn more or join the Civic Data Privacy Leaders Network, a peer group for local government privacy leaders from more than 25 localities in the U.S. and abroad, please contact me at [email protected].
The Right to Be Forgotten: Future of Privacy Forum Statement on Decisions by European Court of Justice
WASHINGTON, DC – September 24, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding two European Court of Justice decisions announced today in its cases with Google:
Key decisions about the balance of privacy and free expression still remain to be settled by the European Court of Justice (ECJ). Although the ECJ’s two decisions generally support the rights of those searching the web to access links to information, both show the tremendous weight European law gives to privacy as a human right that is given the strongest consideration before it is limited. Even though the court found that European law does not mandate global delisting when the Right to Be Forgotten is asserted, it indicated that a data protection authority could seek global delisting if the privacy balance called for it in a specific circumstance.
The court also made clear that within Europe there can be national variances in how the Right to Be Forgotten can be applied, given differences in local law and culture.
In a second case also decided today, the court declined to ban in advance the listing of results that include political, racial, or other sensitive information. It did require heightened consideration for those results, going so far as to require that, when the affected party objects to the results, pages containing information about criminal histories include relevant context on the search page.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
FTC should investigate app developers banned by Facebook – Statement by Future of Privacy Forum CEO
Future of Privacy Forum Calls on FTC to Investigate Apps That Misused Consumer Data
WASHINGTON, DC – September 20, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding Facebook’s announcement that it has banned 400 developers from its app store:
The FTC should quickly act against many of these app developers, since they share the blame with Facebook, and some could still be holding on to consumer data or continuing to sell it. If apps that misuse Facebook members’ data escape legal penalty, developers will get the message that there is no legal risk to improper data-sharing. Every company, and especially app developers, needs to understand that there are consequences for abusing consumer data. This situation demonstrates yet again that Congress should dramatically increase the human and technological resources available to the FTC and give it broader authority to levy civil penalties.
New White Paper Explores Privacy and Security Risk to Machine Learning Systems
FPF and Immuta Examine Approaches That Can Limit Informational or Behavioral Harms
“Machine learning is a powerful tool with many benefits for society,” said Brenda Leong, FPF Senior Counsel & Director, Artificial Intelligence and Ethics. “Its use will continue to grow, so it is important to explain the steps creators can take to limit the risk that data could be compromised or a system manipulated.”
The white paper presents a layered approach to data protection in machine learning, including recommending techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, access controls, monitoring, documentation, testing, and debugging.
“Privacy or security harms in machine learning do not necessarily require direct access to underlying data or source code,” said Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer. “We explore how creators of any machine learning system can limit the risk of unintended leakage of data or unauthorized manipulation.”
Co-authors of the paper are Leong, Burt, Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer, and Patrick Hall, H2O.ai Senior Director for Data Science Products.
Leong and Burt will discuss the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City during the “War Stories from the Front Lines of ML” panel at 1:15 p.m. on September 25, 2019, and the “Regulations and the Future of Data” panel at 2:05 p.m. on the same day.
About Immuta
Immuta was founded in 2014 based on a mission within the U.S. Intelligence Community to build a platform that accelerates self-service access and control of sensitive data. Immuta’s award-winning Automated Data Governance software platform creates trust across security, legal, compliance, and business teams so they can work together to ensure timely access to critical business data with minimal risks. Its automated, scalable, no-code approach makes it easy for users to access the data they need, when they need it, while protecting sensitive data and ensuring their customers’ privacy. Immuta is headquartered in College Park, Maryland. Learn more at www.immuta.com.
Warning Signs: Identifying Privacy and Security Risks to Machine Learning Systems
Machine learning (ML) is a powerful tool, providing better health care, safer transportation, and greater efficiencies in manufacturing, retail, and online services. That’s why FPF is working with Immuta and others to explain the steps machine learning creators can take to limit the risk that data could be compromised or a system manipulated.
Today, FPF released a whitepaper, WARNING SIGNS: The Future of Privacy and Security in an Age of Machine Learning, exploring how machine learning systems can be exposed to new privacy and security risks, and explaining approaches to data protection. Unlike with traditional software, privacy or security harms in machine learning systems do not necessarily require direct access to the underlying data or source code.
The whitepaper presents a layered approach to data protection in machine learning, including recommending techniques such as noise injection, inserting intermediaries between training data and the model, making machine learning mechanisms transparent, access controls, monitoring, documentation, testing, and debugging.
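One of the techniques the whitepaper names, noise injection, is often implemented with the Laplace mechanism from differential privacy. As an illustrative sketch (this specific function and its parameters are my own example, not code from the whitepaper), calibrated random noise can be added to a released statistic so that no single record's presence can be confidently inferred:

```python
import math
import random

def laplace_noise(value, sensitivity, epsilon):
    """Return `value` perturbed with Laplace noise scaled to
    sensitivity/epsilon -- the classic differential-privacy mechanism."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) random variable
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

# Releasing a count query: adding or removing one record changes a
# count by at most 1, so sensitivity = 1.
true_count = 1234
noisy_count = laplace_noise(true_count, sensitivity=1.0, epsilon=1.0)
```

Smaller values of `epsilon` add more noise and give stronger privacy; the released `noisy_count` stays useful in aggregate while masking individual contributions.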
My co-authors of the paper are Andrew Burt, Immuta Chief Privacy Officer and Legal Engineer, Sophie Stalla-Bourdillon, Immuta Senior Privacy Counsel and Legal Engineer, and Patrick Hall, H2O.ai Senior Director for Data Science Products.
Andrew and I will discuss the findings of the WARNING SIGNS whitepaper at the Strata Data Conference in New York City during the “War Stories from the Front Lines of ML” panel at 1:15 p.m. on September 25, 2019, and the “Regulations and the Future of Data” panel at 2:05 p.m. on the same day. If you’ll be at the conference, we hope you’ll join us!
What is 5G Cell Technology? How Will It Affect Me?
(Except where otherwise noted, this post describes service and availability within the United States.)
5G, the fifth generation of wireless network technology, is the newest way of connecting wireless devices to cellular networks. Each previous generation of wireless technology has revolutionized the way people communicate and socialize, and led to waves of novel products and services using the new capabilities. The leap from 3G to 4G technology brought with it faster data transfer speeds, which supported widespread adoption of data cloud and streaming services, video conferencing, and Internet of Things devices such as digital home assistants and smartwatches. 5G technology has the potential to enable another wave of smart devices: always connected and always communicating to provide faster, more personalized services. While these new products and services may create significant benefits for both businesses and consumers, connected devices can raise substantial privacy risks. Concerns about data collection, use, and sharing must be addressed.
How Does 5G Technology Work?
5G uses very fast, short-range signals on an unused frequency band to send and receive more total data, within a dense network of specialized small cell sites. A small cell site refers to the area within which this short-range wireless signal can be received. In contrast, current 4G technology uses long-range signals transmitted across crowded low frequency radio waves to send and receive data over a broad network of larger cell sites. 5G operates on a different part of the spectrum, using bands with little interference, at a higher frequency. This can result in faster download speeds and more stable connections for wireless devices, but waves at these frequencies have a shorter range. The 2G, 3G, and 4G/LTE systems all used the same long-range band of airwaves, as does broadcast television. This shift to a different part of the spectrum promises better connectivity, but will require a larger number of cell sites.
Current 4G networks provide service by broadcasting radio waves from large cell towers to extended areas. The capabilities of individual tower sites, the geographic features of covered areas, and the amount of available spectrum all impact the quality of service. These 4G/LTE services operate on a model of regional centralization – all service from large areas, including up to several states worth of traffic, is collected from broadly spaced cell towers, then aggregated into one central location before distribution (connection to the internet).
The initial deployment of 5G technology to cell towers will likely operate similarly, and many of the gains will not be immediately noticeable to users until the supporting small cell sites are in place. Some early use cases will likely be in stadiums, malls, or other large venues where small cell sites will be placed to create discrete, distributed systems serving those bounded locations in ways that will provide faster service.
5G service requires telecommunications providers to convert existing cell towers to accommodate the new technology and then incorporate a network of small cell sites, densely located for maximum coverage of service areas. Most carriers are currently upgrading the services available on 4G systems as an interim step before full deployment of 5G – the full extent of which will take several years. For example, 4G towers are taking advantage of “MIMO” (multi-input/multi-output) technologies, which enable simultaneous operation of 2 or 4 channels to receive or send signals. Also, some carriers are opting to push internet connectivity out to more local levels than the current regionalization model – either to a single point per state, per metro area, or even to individual cell towers in some dense urban areas. These changes produce a noticeable increase in speed for individual users within those service areas.
5G technology will use a different type of signal, called millimeter waves, that, while significantly faster, is more limited in range and will ultimately require many more total sites. Such small cell sites must be much closer to each other than existing cell tower networks in order for the millimeter waves to carry a signal successfully. Service on this millimeter wave spectrum can only happen once the full infrastructure of small cell sites is in place. This transition is a massive process. In the U.S., it will likely take 2-3 years to upgrade all the existing cell towers – the initial level at which a carrier may claim to be providing 5G-level service. (AT&T, for example, has currently upgraded towers in over 20 metropolitan locations and is continually adding to this list.) Some other countries are ahead of the U.S. in completing this initial phase, but none has yet extensively deployed the network of small cell sites needed to bring the system to full capability.
Telecommunications companies are starting in the core of dense, urban areas, then spreading to the suburbs and fringes of metropolitan areas, and will ultimately reach all towers. Setting up the small cell sites can be done simultaneously with the cell tower conversions, but will almost certainly take much longer to reach full coverage. Many rural areas and less densely populated cities are offering accelerated permit processes and other business incentives to entice carriers to prioritize upgrades in those locations.
How Will 5G Technology Change Wireless Services and Products?
When companies reach widespread 5G small cell coverage, the new technology will offer two major improvements: (1) increased signal coverage (reliability) and (2) significantly faster mobile speeds with lower latency, i.e., the lag time between a signal and a response. However, it’s important to note that no devices designed solely for 4G will work on 5G networks. 5G-enabled hardware will be required, and devices designed and sold during the transitional years will almost certainly have to include connectivity to both networks. Whereas current devices can switch between 4G and 3G connectivity based on which signal is available in any particular location, they only operate on one of these systems at a time. Dual-capability 4G/5G devices will be able to operate on 4G and 5G at the same time – ensuring minimal disruption to service as consumers move between locations and levels of service availability.
When fully operational, 5G networks promise to provide significantly faster speeds with lower latency compared to current 4G/LTE networks. While speeds vary based on a variety of factors, most 4G networks average 40 megabits per second (“Mbps”) download speeds, with the fastest local networks reaching up to 500 Mbps. Although recent tests demonstrate that 4G networks can provide speeds of up to 2 gigabits per second (Gbps), or 2,000 Mbps, under some conditions, 5G networks promise still higher speeds and better connectivity. The early performance of 5G networks is likely to appear similar to current high-speed 4G networks, but 5G technology has been shown to reach speeds of up to 4.5 Gbps once small cells are fully deployed.
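To make those throughput figures concrete, the arithmetic is simple: a 2 GB file is 16,000 megabits, so transfer time is just megabits divided by link speed in Mbps. A small sketch (the helper function is illustrative, ignoring protocol overhead and real-world congestion):

```python
def download_seconds(file_gigabytes, link_mbps):
    """Idealized transfer time: 1 GB = 8,000 megabits,
    so seconds = total megabits / link speed in Mbps."""
    return file_gigabytes * 8_000 / link_mbps

# Comparing the speeds cited above for a 2 GB file:
for label, mbps in [("typical 4G", 40), ("peak 4G", 2_000), ("projected 5G", 4_500)]:
    print(f"{label:>12}: {download_seconds(2, mbps):7.1f} s")
```

At the cited averages, the same 2 GB download drops from roughly 400 seconds on a typical 4G connection to a few seconds on a fully deployed 5G network.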
5G networks also promise drastic decreases in the latency of wireless transmissions, reducing the amount of lag time between an interaction with the network and the network’s response. To an ordinary consumer, the difference in speed and latency may not seem noticeable during everyday transactions, but for high-speed, data-intensive computing services these differences can completely revolutionize an industry. For example, lower latency in video conferencing may enable better feedback in communications and fewer dropped calls. Just as the shift from 3G to 4G/LTE provided capabilities that generated unforeseen applications and uses, it is impossible to predict exactly what may become feasible on 5G, and what consumer-facing services it will enable.
5G is unlikely to be used for all wireless communications. Although some analysts talk of autonomous vehicles relying on 5G, no current developers of these cars are designing with the intention to use 5G. Instead, carmakers are implementing different technologies that use a different part of the spectrum. Likewise, connected devices in homes (IoT and “smart home” technology) will primarily remain connected via WiFi systems. However, those home-related devices such as power and water meters, or smart city sensors, that rely on cell technology will likely transition to 5G. The improvement here will not be so much in speed, as these devices use very little bandwidth, but in quantity. Current systems support millions of devices per square kilometer. With 5G, billions of devices can be managed.
5G Security
The transition to 5G networks, the growth of IoT networks, and the expansion of other advanced technologies raise questions about security safeguards, particularly with regard to access and authentication protocols. Some IoT devices employ lighter security measures in an effort to allow simple connection and communication; this is particularly common for IoT technologies that pose lower risks to users and networks. 5G technology can support more connected devices on cellular networks, which will increase the number of potential vulnerabilities. But 5G technology may include security improvements as well.
Work is under way to determine best practices for securing 5G networks in various applications. Experts expect the faster speeds and higher capacity of these networks to allow for more sophisticated threat assessment measures and more secure authentication frameworks – both aspects of 5G that could improve network security. The nature of the 5G network is a shift from a centralized system to distributed, virtual networks. The carriers developing these systems are embedding security functions at multiple stages of deployment, including virtual protections as well as those that are hardware-based. Additionally, some techniques that work for 4G networks can be translated to 5G networks, as advanced 4G networks follow many of the same technological principles as 5G. While 5G may exacerbate some existing risks or create new security challenges, the technology may also provide new, effective ways to secure data and devices through more sophisticated networks and algorithms.
Global Use of 5G
5G standards are largely stable and interoperable in the United States. However, stakeholders have not yet agreed on a single standard for worldwide interoperability of 5G networks, and the telecommunications industry is working to identify and reach agreement on standards for global 5G connectivity. Global cellular connectivity standards are currently developed by the 3rd Generation Partnership Project (“3GPP”), a consensus-driven oversight body that includes partners from Asia, Europe, and North America. The standards-setting process is handled by working groups formed within larger technical specification bodies. Both individual company capabilities and regional balancing are priorities. All countries and companies will need to adhere to an agreed-upon standard for the manufacture and operation of 5G networks, as non-conforming equipment will not be interoperable across networks. The transition to a 5G network reflects a greater number of suggestions for standards, by both operators and manufacturers, than in the past, because of the obvious importance of interoperability.
Privacy Impacts of 5G
Faster speeds and lower latency may mean better products and services, but the transition to 5G will likely create privacy risks associated with new devices, data collection, and use of personal information. 5G access will enable individuals to have more smart devices that can reliably connect and interact with online services; at the same time, the technology can also create more detailed personal data sets for device manufacturers and service providers.
For video platforms, 5G promises to provide higher image quality with less lag time; this also means that facial recognition technology will have clearer images to analyze. Similarly, public video surveillance networks can transmit more detailed video of an individual’s activities and can be more easily analyzed by artificial intelligence systems to identify and track individuals in public. The ability to collect and share data more quickly is likely to result in more useful, personalized services; at the same time, increased data collection can increase users’ concerns about the creation of more detailed profiles on individuals.
5G promises to be a revolutionary improvement in cellular network technology. Better connections for users and faster speeds with lower latency for data intensive computing will likely lead to improvements in existing products and services, as well as the development and implementation of new technologies. However, such changes will include the need to consistently prioritize individual privacy in this new context; 5G technology will not eliminate existing privacy and security challenges. While including beneficial services, a faster, more efficiently connected digital world will continue to pose data risks and will require continued meaningful privacy safeguards to ensure appropriate handling and protection of personal information.
Authored by Daniel Neally and Brenda Leong
10 Reasons Why the GDPR Is the Opposite of a ‘Notice and Consent’ Type of Law
The below piece was originally published on Medium. For a version with humorous images, head to the original post.
A ‘notice and consent’ privacy law puts the entire burden of privacy protection on the individual and then doesn’t really give them any choice. The GDPR does the opposite.
There is so much misunderstanding about what the GDPR is and what the GDPR does, that most of what is out there at this point is more mythology than anything else.
Understanding and correctly categorizing the regulatory framework of the GDPR is actually very important now. Look at the US Senate’s hearing yesterday on ‘GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation’. If this law is considered a point of reference for future privacy legislation in the US – in the sense of deciding how close to or far from it the future US privacy framework should be – then one should understand the mechanisms that make the GDPR what it is.
A ‘notice and consent’ framework puts all the burden of protecting privacy and obtaining fair use of personal data on the person concerned, who is asked to ‘agree’ to an endless text of ‘terms and conditions’ written in exemplary legalese, without actually having any sort of choice other than ‘all or nothing’ (agree to all personal data collection and use or don’t obtain access to this service or webpage). The GDPR is anything but a ‘notice and consent’ type of law.
There are many reasons why this is the case, and I could go on and get lost in the minutiae. Instead, I’m listing 10 high-level reasons, explained in plain language, to the best of my knowledge:
1. Data Protection by Design and by Default is a legal obligation
All organizations, public or private, that touch personal data (“processing” in the GDPR means anything from collection to storage to profiling and creating inferences – anything you can think of that can be done to personal data) are under an obligation to bake privacy into all technologies and/or processes they create and, very importantly, to set privacy-friendly options as the default. There are no exceptions to this obligation. Data Protection by Design and by Default (DPbD) must be implemented regardless of whether the personal data will be obtained based on an opt-in, an opt-out, or a legal obligation to collect the data. It doesn’t matter. All uses of personal data must be based on DPbD. Check out Article 25 GDPR.
2. Data Protection Impact Assessments are mandatory for large scale and other complex processing
All organizations that engage in any sort of sensitive, complex, or large-scale data use must conduct a Data Protection Impact Assessment (DPIA) before proceeding. Think of the now-common Environmental Impact Assessments (EIAs). The DPIA is just like an EIA, but instead of measuring the impact of a project on the environment, it measures the impact of a project using personal data on all the rights of the individuals concerned, from free speech, to privacy, to non-discrimination. Depending on the results of the DPIA, safeguards must be put in place to minimize the impact on rights, or the project can simply be stopped if there is no way to minimize the risks. Again, this happens regardless of opt-ins, opt-outs, legal obligations, or other grounds relied on by organizations to collect and use the personal data. Check out Article 35 GDPR.
3. All processing of personal data must be fair
Absolutely all collection and uses of personal data must be fair and transparent, regardless of the ground for processing (opt-in, opt-out, legal obligation etc.). This is the Number 1 rule relating to processing of personal data listed in the GDPR (check out Article 5(1)(a)) and breaching it is sanctioned with the higher tier of fines. In practice, this means several things, including the fact that people should be expecting that their personal data is collected, used or shared in the way it is being collected, used or shared.
4. There must be a specific, well defined reason for all collection or uses of personal data
From the outset, and regardless of the justification relied upon by an organization to process personal data (opt-in, opt-out, fulfillment of contract etc.), the collection of that personal data, be it directly from individuals, observed or inferred, must be done only for specified, explicit and legitimate purposes and only processed either for those purposes, or for purposes compatible with them. This is the principle of purpose limitation. In practice, it means that it is illegal to collect personal data ‘because maybe some day I will find something useful to do with it’. Non-compliance with the purpose limitation obligations also triggers the higher level of fines.
5. Data grabs unrelated to the purpose of processing are illegal
Only those personal data that are relevant and limited to what is necessary to achieve the specified purpose can be collected or otherwise processed. Casting a net to grab as much personal data as possible, even if it is not needed for the purpose announced, is unlawful and, again, sanctioned with the higher tier of fines. This rule applies to all processing of personal data, even to those processing activities mandated by law, such as anti-money-laundering checks.
6. The person can actually do things related to how his or her personal data is handled
The individual has well-defined rights that allow him or her to do many things to ensure their personal data are processed fairly and lawfully, such as: obtaining a copy of the personal data being processed (regardless of whether the personal data is processed on the basis of consent, a legal obligation, or any other ground); having personal data that is not being processed lawfully erased; objecting to processing of personal data, even lawful processing, on grounds relating to his or her particular situation; or initiating court proceedings against any unlawful processing, with the possibility of claiming moral or material damages.
7. State of the art security is an obligation
There is an obligation to ensure state of the art security measures for all processing of personal data, with hefty fines for data breaches. Check out Article 32.
8. There is someone in each organization engaging in complex processing whose job is to ensure personal data are processed fairly and lawfully
All organizations that engage in complex or sensitive or large scale data collection and use (this covers all Big Tech, but also many others) must appoint a Data Protection Officer, whose job as an independent adviser is well regulated and protected by the GDPR. Technically, the DPO is someone specialized or experienced in data protection law or applying data protection law, who advises the highest level of management on how to fairly and lawfully collect and use personal data. Check out Articles 37, 38 and 39.
9. Personal data is followed through the vendor maze
The GDPR provides for solid guarantees on how personal data is managed by the chain of vendors and suppliers of an organization. In particular, all vendors that process personal data on behalf of an organization have to enter into detailed contractual agreements which hold them accountable for how they protect the personal data entrusted to them. Vendors also have some direct statutory obligations, such as keeping the Record of processing activities and appointing a Data Protection Officer.
10. All processing of personal data must be kept in a comprehensive and updated Record
All organizations that collect and use personal data in any way are under an obligation to keep track, in a Record, of all the personal data they collect and use: for what purpose, for how long, about what categories of individuals, with whom they share the data, and other details as prescribed by Article 30 GDPR – regardless of whether they collect it based on opt-in, opt-out, legal obligations, contract fulfillment, etc. The only organizations exempted from this obligation are those with fewer than 250 employees that only occasionally process personal data. Even those must still keep a Record if the occasional processing may result in a high risk to the rights of individuals or involves sensitive personal data.
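To see what such a Record entry might look like in practice, here is a minimal sketch as a data structure. The field names and the example values are my own illustration of the kinds of details Article 30 asks for, not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One illustrative entry in an Article 30-style record of
    processing activities. Field names are hypothetical."""
    purpose: str
    data_categories: list[str]
    data_subject_categories: list[str]
    recipients: list[str] = field(default_factory=list)
    retention_period: str = ""
    legal_basis: str = ""

# A hypothetical entry for routine payroll processing:
record = ProcessingRecord(
    purpose="payroll administration",
    data_categories=["name", "bank account", "salary"],
    data_subject_categories=["employees"],
    recipients=["payroll vendor"],
    retention_period="7 years after employment ends",
    legal_basis="legal obligation",
)
```

Keeping one such entry per processing activity, and updating it whenever purposes, recipients, or retention change, is the essence of the Record-keeping obligation described above.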
10th Annual Privacy Papers for Policymakers – Send Us Your Work!
The 10th Annual Privacy Papers for Policymakers awards have been announced. Register here to attend the event on February 6, 2020. We will open the submissions process for next year’s awards in fall 2020.
Have you conducted privacy-related research that policymakers should know about? If so, we can help you get it in front of key government officials. The Future of Privacy Forum will return to Capitol Hill early next year (date TBD) for the 10th installment of our annual Privacy Papers for Policymakers (PPPM) awards program.
PPPM recognizes the year’s leading privacy research and analysis that has practical implications for policymakers in Congress, federal agencies, and data protection authorities internationally. Winners are selected by a diverse team of academics, advocates, and industry privacy professionals. Awarded articles are chosen both for their scholarly value and because they offer policymakers concrete solutions and practical insights into real-world challenges.
Submit Your Work!
We are currently accepting finished papers to be considered for next year’s awards. The deadline for regular submissions is October 4, 2019, and the deadline for student submissions is October 25, 2019. For more on submission guidelines, please review this page.
Last year’s winning papers examined topical privacy issues: sexual privacy, data subject rights, how local laws can fill gaps in state and federal laws, and more. PPPM honors papers from academics, practitioners, technologists and lawyers, which allows for a wide range of approaches to research and analysis.
For 10 years, the research compiled in PPPM has informed the policy debate in Congress, in the states, and around the world. By submitting your work for consideration, you are providing a valuable tool for legislators and staff considering the structure and elements of a national privacy framework.
Please send any questions about the program or submission process to [email protected]. We look forward to reading your work!