COPYRIGHTS AND PRIVACY: What is the Irrevocable License and is it Really a Privacy Concern?

“By submitting a Posting, you hereby authorize [us] to use, and authorize others to use, your Postings and User Names in whole or in part, on a royalty-free basis, throughout the universe in perpetuity in any and all media, now known or hereafter devised, alone, or together or as part of other material of any kind or nature, including without limitation commercial use on and advertising and promotion of the Site. Without limiting the foregoing, [we] will have the right to use and change the Postings in any manner, either with or without the User Name, that [we] may determine.”

Scary sounding, isn’t it? There are so many words packed into two sentences, all of which seem to say “we can do whatever we want with your information forever!” But where did this specific “throughout the universe” license come from, typo and all? Was it Facebook? No. Google? No. Amazon? No. What about the controversial FaceApp? No, again. This specific license came from Nickelodeon’s terms of service.* Yes, the children’s media company that brought us SpongeBob SquarePants also brings us a license that seems to say they can access an individual’s data forever and do whatever they want with it. As do PBS Kids, the New York Times, Panda Express, Wells Fargo, UnitedHealth Group, Wikipedia, School Loop, the National Park Service, the International Association of Privacy Professionals (IAPP), and basically every other website that allows its users to interact, comment, or sign up.

Whenever a new app or technology is unveiled, or a new controversy raises privacy concerns, eventually someone will go through the company’s Terms of Service (TOS), find this sort of licensing clause, and cite its language to allege potential privacy abuses on a grand scale. (In 2017, an initial blog post raised concerns about Ancestry.com’s licensing language for DNA data, prompting a response from Ancestry and even a Snopes article clarifying the specifics. More recently, Wired expressed similar concerns in an article on FaceApp.) However, this language is not typically aimed at undermining privacy controls governing consumer data; instead, it establishes the copyright permissions and liabilities a company has in users’ posted content.

Due to the strength of copyright protections, and the harsh penalties against those who violate them, this language must be broad to comply with existing laws. Unfortunately, this makes the language easy to misunderstand, especially for consumers who are unfamiliar with the variety of legal requirements that may apply. The truth is, while the perpetual license language affects the copyright of content, any personal information a company possesses is equally controlled and limited by its legally binding Privacy Policies. In addition, there may be various state, national, and international laws that apply further restrictions.

The reasons for the development and inclusion of these clauses, and the privacy controversies the terms can trigger, tell an interesting tale about the intersection of data protection and intellectual property law.

First, why does the “perpetual license” language exist? Simply put, this is a copyright clause used to protect the company from being sued for copyright infringement. Copyright law exists to allow content creators to protect their works. Under copyright law, unlike some other forms of intellectual property, the content creator automatically gains exclusive rights immediately upon fixing their work in a “tangible medium,” i.e., making it exist in the world. These automatic rights include the right to reproduce or copy the work, to create derivative works (any work based on the original in any form of media or material), to distribute the work, and to publicly display or perform the work. Sound familiar? That is because these same rights are often listed using the same or similar language in perpetual licenses for digital products and services. All of these rights spring into existence the moment any person creates an original text message, sound recording, picture, drawing, or other work.

A user who posts content immediately holds copyright in the original content they provide, such as text, pictures, or other submitted material. Once the user clicks “post” or “submit” or “send,” the company or website or app receiving the copyrighted content needs to copy it onto its servers, transform it into different formats, then copy, distribute, and publicly display the original (copyrighted) post for other users to see or to provide the service the user originally requested. In essence, the website must take actions that are governed by copyright law to accomplish exactly what the user intended when sharing the work with the platform. Platforms must make sure that the user, who likely owns the copyright, has granted them the rights and permissions they need to provide the desired service.

While it is worth mentioning that the Digital Millennium Copyright Act (DMCA) does provide some exceptions to these infringement actions, these exceptions apply narrowly–to services that transmit or cache content, rather than display it. Companies that host and display content can rely on the DMCA to protect them from third-party claims–but they should still ensure they have permission from the poster to take all the actions needed to provide the service. Rather than risk copyright liability for displaying or altering content without a poster’s permission and expose themselves to statutory damages for copyright infringement, most websites prefer to get a license from their users that will be effective for as long as the business is active (perpetual), that will not require them to pay millions of users (royalty-free), that covers wherever other users may access the content (worldwide, or throughout the universe), and that cannot be rescinded by a consumer who could then turn around and sue the company (irrevocable and non-exclusive).

So even though the perpetual license language exists because a website needs permission to use any original work that a user provides, isn’t that still a privacy concern? The company now has the information. Doesn’t that license still mean they can use the information however they like, even if they don’t own all rights to it? Because copyright law is distinct from privacy law, the answers are “not exactly” and “no.”

The rights and requirements of the TOS are not independent of other contracts, terms, policies, and laws. As such, while the TOS legally creates a license to some uses of the information a user provides, the Privacy Policy limits what information can be collected, stored, used, and sold, by whom, and for what purposes. Both the TOS and the Privacy Policy bind the company, and the language of one does not negate the language of the other. Take, for example, Nickelodeon’s “throughout the universe” license quoted above. Nickelodeon also has a strong Privacy Policy that outlines, in very explicit terms, exactly what information they obtain and why (found here), exactly how and why they use the information they collect (found here), and exactly who they share information with and why (found here). Should Nickelodeon be found in breach of any of these specific terms in the Privacy Policy, they could be fined by regulatory agencies for deceptive trade practices–but not sued for statutory damages for copyright infringement.

This same overlapping of restraints also affects the interplay of the perpetual license and the various legal statutes that govern data practices. Nickelodeon has both a “throughout the universe” license and a comprehensive Privacy Policy, but is still further constrained by required compliance with the Children’s Online Privacy Protection Act (COPPA). 

Thus, the existence of this licensing language dealing with copyright law does not mean that every website and company instantly has full and complete ownership, access, and control over every bit of information that a user provides, always and forever. The license is a protective shield used to create a right for the company to use original material created by someone else – even if that is just the text of a user-posted comment – as well as to limit liability under intellectual property law. These rights do not typically limit additional liabilities under other statutes. 

Perpetual licenses, while they may sound scary, are necessary for a functioning internet and are often substantially limited by both Privacy Policies and statutes. Although this language can become an easy target for privacy-minded critiques, it is a product of intellectual property practices used to mitigate legal liabilities and has minimal impact on data collection, use, or privacy protections. The real privacy concerns come from weak or insufficient Privacy Policies that may not create sufficiently strong protections for user-provided data, including personal information well beyond that covered by copyright. It is a company’s Privacy Policy that determines what it can or cannot do with a user’s data, and that is where users should look for the details.

*FPF is not criticizing or critiquing Nickelodeon’s Terms of Service. Nickelodeon was chosen as a useful example to highlight the issue due to their unique position in media, strong privacy policy, and intersection with a federal privacy statute.

Co-Authored by Dan Neally, FPF Summer Intern, and Brenda Leong

FPF Letter to Senate on School Safety

This week, the Future of Privacy Forum (FPF) sent a letter to the Senate Homeland Security & Governmental Affairs Committee in advance of today’s hearing “Examining State and Federal Recommendations for Enhancing School Safety Against Targeted Violence.” FPF’s letter focused on three key points:

FPF invited the committee to seek answers about how privacy and equity guardrails are or are not being incorporated into state and local school safety initiatives. Prior to implementing school safety programs, officials ought to 1) find and analyze the best available evidence to inform policy; 2) perform privacy impact assessments, commonly used and established processes for ensuring the appropriate balance between the benefits and risks of data collection and use initiatives, particularly as they relate to already vulnerable communities; and 3) transparently engage with all stakeholders, including parents, students, and educators.

Read the full letter here.

Statement by FPF CEO Jules Polonetsky: Facebook Case Shows It Is Time to Give the FTC Enhanced Civil Penalty Authority

WASHINGTON – July 24, 2019 – Today, the Federal Trade Commission (FTC) announced an unprecedented settlement requiring Facebook to pay $5 billion in civil penalties and create new accountability and compliance mechanisms, and imposing additional injunctive relief. The settlement stems from violations of a 2012 order.

The $5 billion penalty is more than 15 times larger than the previous record penalty levied by the FTC for a privacy violation. It is one of the largest penalties issued by a US government agency in any context. The fine is more than twice the financial penalty that could be imposed by an EU regulator under the General Data Protection Regulation.

But today’s record settlement masks a major gap in the FTC’s enforcement authority – the Commission doesn’t typically have fining authority for privacy violations, unless it is enforcing an existing order (as with Facebook) or invoking specific statutes (such as the Children’s Online Privacy Protection Act).

In fact, in many privacy cases the FTC has trouble even getting refunds for consumers. That’s because many companies provide online products and services for free – so it’s difficult to prove a financial loss. In those privacy cases, the FTC should have fining authority; it would create effective, proportionate deterrence and ensure that bad actors are held accountable – even when they don’t charge consumers a fee for services.

The time has come to give the FTC civil penalty authority. Preferably, this would be accomplished by Congress as part of a comprehensive new national privacy law that also gives consumers meaningful control over how their information is used.

The FTC also needs more resources so it can conduct more privacy investigations faster, while maintaining a high level of technical and legal competence. Real oversight of the Facebook settlement will require FTC staff resources and time to be effective. That funding could be provided by Congress this year through the appropriations process.

If Congress wants stronger incentives for compliance and more responsive investigations, it needs to give the FTC civil penalty authority for privacy violations and more tech and investigative resources now. There is no reason to wait.

Media Contact:

Tony Baker

Future of Privacy Forum

[email protected]

(310) 593-3680

About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

The US, China, and the Risks of Cutting Global Data Flows

Peter Swire published an op-ed in the French newspaper Le Monde that discusses the Court of Justice of the European Union’s decision as to whether U.S. surveillance practices violate the fundamental rights of EU citizens under the GDPR. Swire argues that if the U.S. is deemed to be in violation, thereby causing transatlantic data flows to be blocked, then data flows between the EU and China should also be blocked.

An English translation of the piece is available here. The original piece is available in French here.

You can read an annotated bibliography for the piece here.

Peter is an FPF Senior Fellow and Elizabeth and Tommy Holder Chair and Professor of Law and Ethics at the Georgia Tech Scheller College of Business.

New Privacy Tech Industry Attracts Massive Funding

Privacy Tech Alliance connecting researchers and entrepreneurs to analysts, customers, VCs

WASHINGTON – July 11, 2019 – OneTrust’s announcement today of a $200 million Series A investment, which follows yesterday’s announcement by TrustArc of a $70 million Series D round, demonstrates the arrival of a new industry sector for privacy protection technologies.

“Investors have noticed that business is booming for companies in the privacy technology space,” said Jules Polonetsky, CEO of the Future of Privacy Forum and a co-founder of the Israel Tech Policy Institute. “Innovative technology must be part of the solution for companies and government agencies that want to use data and be sensitive to individual privacy.”

In addition to OneTrust and TrustArc, other privacy tech companies have received significant investments recently. Privitar announced a $40 million series B funding round in June and BigID raised a $30 million series B round last year.

The Israel Tech Policy Institute, in conjunction with the Future of Privacy Forum, launched the Privacy Tech Alliance to promote the market for privacy-protective technologies internationally, facilitate the development of new tech, and maximize value for innovators and investors. The global nature of privacy regulation – from the GDPR to the California Consumer Privacy Act – is spurring innovation, and a new industry sector is rising around technologies that help companies use data while protecting privacy, such as homomorphic encryption and de-identification.

“The Privacy Tech Alliance is supporting diverse companies bringing privacy-enhancing technology to market,” said Limor Shmerling Magazanik, Managing Director of the Israel Tech Policy Institute. “Many of these companies also offer compliance solutions to help their customers navigate an increasingly complex regulatory environment around privacy.”

OneTrust and TrustArc join eleven other leading global tech vendors on the Privacy Tech Alliance Advisory Board. Founding members of the Privacy Tech Alliance Board include Anonos, BigID, D-ID, Duality, Immuta, Nymity, OneTrust, Privacy Analytics, SAP, Truata, TrustArc, WireWheel, and ZL Tech.

For companies large and small, drafting policies and managing Excel spreadsheets no longer suffice to oversee complex global data operations. To scale data governance and privacy program management, companies in every sector of the economy must turn to privacy governance systems and tools. Such tools serve multiple governance needs, including data mapping, data protection impact assessments, consent and cookie management, data storage and retention, identity management and authentication, and more. In addition to privacy program management tools, researchers, scientists, and entrepreneurs are innovating privacy-enhancing technologies, including tools for de-identification, encryption, obfuscation, blockchain, and more.

This week’s notice by the UK Information Commissioner of its intention to fine Marriott Hotels and British Airways $130 million and $230 million respectively vividly illustrates the rising stakes for organizations that wrestle with an increasingly complex regulatory environment for privacy and data protection, including Europe’s GDPR and California’s CCPA.

Media Contacts:

Nat Wood

Future of Privacy Forum

[email protected]

410-507-7898

Tony Baker

Future of Privacy Forum

[email protected]

310-593-3680


About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

About the Israel Tech Policy Institute

Israel Tech Policy Institute is an incubator for tech policy leadership and scholarship, advancing ethical practices in support of emerging technologies. Learn more about ITPI by visiting www.techpolicy.org.il.

Education, Privacy, Disability Rights, and Civil Rights Groups Send Letter to Florida Governor About Discriminatory Student Database

WASHINGTON, DC – Today, the Future of Privacy Forum and 32 other education, disability rights, privacy, and civil rights organizations sent a letter to Florida Governor DeSantis, urging him to postpone the implementation of Florida’s proposed school safety database. FPF is deeply concerned that the program will be used to label students as threats based on data that has no documented link to violent behavior, such as data on disabilities or on students seeking mental health care. The signatories urged Governor DeSantis to immediately halt the state’s construction of this database and, instead, create a commission of parents, students, and experts on education, privacy, security, equity, disability rights, civil rights, and school safety to identify measures that have been demonstrated to effectively identify and mitigate school safety threats.

Education Week recently detailed the types of information to be collected in Florida’s planned database. The categories discussed included children who have been victims of bullying based on protected statuses such as race, religion, disability, and sexual orientation; children who have been treated for substance abuse or undergone involuntary psychiatric assessments; and children who have been in foster care, among others.

“Through policy, Florida is saying that students who have been bullied and harassed are threats, making it less likely that those students will report bullying and receive the help they need,” said Amelia Vance, Director of the Education Privacy Project at FPF. “It is especially troubling that the database has no retention or deletion requirements – meaning that Florida is creating a literal permanent record that could follow students around their whole life.”

The letter asks the Governor to pause the database’s implementation – due to launch August 1, 2019 – and create a commission of experts to determine whether a state database would actually help identify school safety threats without posing undue harm to students, and to identify the legal, ethical, privacy, and security parameters that should be an integral part of this database. If Governor DeSantis is not willing to do that, signatories requested that he require the state to provide public information about the database’s data governance, enumerate the data that will be included, share how parents can access and, if needed, contest the information and inferences about their child in the database, and publicly commit to abide by all federal and state privacy and non-discrimination laws.

Read the letter here.

 

The Future of Privacy Forum is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Media Contacts:

Tony Baker

[email protected]

310-593-3680

Nat Wood

[email protected]

410-507-7898


Sidewalk Labs Releases Detailed Plans for Collaboration with City of Toronto on Quayside Smart City Project, Including Proposed Privacy and Data Protection Framework

By: Suzie Allen

Experts Highlight Data Protection Safeguards, Opportunities, and Risks

“Master Innovation and Development Plan” will be Vetted by City Residents, Officials

Last week, Sidewalk Labs unveiled its proposed “Master Innovation and Development Plan” (MIDP) for Sidewalk Toronto, a project that would design a smart city district in Toronto’s Eastern Waterfront. The proposal will be considered by the government and other stakeholders in the coming months to determine whether to move forward with the project. This proposed public-private partnership between Sidewalk Labs and Waterfront Toronto seeks to promote affordability and sustainability while reducing climate impact and creating new mobility solutions, such as by prioritizing mass transit and pedestrians over vehicles. 

The MIDP as proposed contemplates substantial data collection and use; it also proposes a range of significant legal, technical, and policy controls to mitigate privacy risks and promote data protection. In the coming year, Toronto residents and officials will analyze the MIDP and work with Sidewalk Labs and Waterfront Toronto to identify aspects of the proposal that could be modified to promote benefits and reduce risks.

Privacy, Data Governance, and Transparency

The MIDP acknowledges that some of the urban data at the core of the Quayside effort will be personal and/or sensitive, and it proposes several key measures intended to mitigate the privacy risks. The MIDP contemplates both technical controls, such as hardware and software solutions that integrate privacy-protective data collection, use, and sharing into the development and operation of the Quayside site, and legal and organizational safeguards, such as consistent and transparent processes for using urban data and independent oversight. Key measures include:

FPF has previously reported on the importance of evaluating both privacy risks and data benefits in its practical guide Benefit-Risk Analysis for Big Data Projects and outlined the potential harms that can arise from automated decision-making in Unfairness By Algorithm: Distilling the Harms of Automated Decision-Making.

Since 2017, Sidewalk Labs has staked out an ambitious vision of the “city of tomorrow.” As Sidewalk Toronto would be fueled in significant part by data from and about Quayside’s residents and visitors, it is essential that clear and consistent standards for protecting personal data be built into the project from the outset. The MIDP sets out one of the most detailed urban data protection frameworks we have seen for any local development project and puts forward a model structure for municipal data. If the Sidewalk Labs proposal is ultimately approved, it could be the catalyst for similar projects throughout the world, making it imperative to keep privacy a priority. The MIDP describes an intriguing range of proposed organizational, technical, and legal safeguards, and it has set the stage for continued discussions with Torontonians and with stakeholders from government, industry, academia, and civil society about how to maximize the potential of urban innovation while minimizing risks to individuals and communities.

California’s AB-1395 Highlights the Challenges of Regulating Voice Recognition

Under the radar of ongoing debates over the California Consumer Privacy Act (CCPA), the California Senate Judiciary Committee will also soon be considering, at a July 9th hearing, an unusual sectoral privacy bill regulating “smart speakers.” AB-1395 would amend California’s existing laws to add new restrictions for “smart speaker devices,” defined as standalone devices “with an integrated virtual assistant connected to a cloud computing storage service that uses hands-free verbal activation.” Physical devices like the Amazon Echo, Google Home, Apple HomePod, and others (e.g. smart TVs or speakers produced by Sonos or JBL that have integrated Alexa or Google Assistant), would be included, although the bill exempts the same cloud-based voice services when they are integrated into cell phones, tablets, or connected vehicles. 

Although AB-1395 seeks to address legitimate consumer privacy concerns, its core provisions likely contain pitfalls. Nonetheless, it raises important questions about the best ways to regulate privacy in the context of “listening” devices.

First, it’s clear that speech-to-text recognition has made incredible strides in the past decade, due in large part to companies being able to train machine learning models on very large datasets of human speech. These models are not perfect–developers are still working to improve performance on heavy accents, unusual speech patterns, and non-English speech–but they have improved dramatically in recent years. Only a few years after the first voice assistants hit the market, speech recognition has become a common way of interacting with computers, and a game-changer for accessibility.

Notwithstanding these ground-breaking benefits, most people are justifiably wary of devices that seem to “listen,” “spy,” or retain or use data in unexpected ways. FPF explored these concerns in a 2016 White Paper, Always On: Privacy Implications of Microphone-Enabled Devices. We have also explored uses of voice recognition in Smart TVs. Sometimes privacy concerns are based on misunderstandings of how voice-activated technology works–for example, in an Infographic on Microphones in Internet of Things (IoT) Devices, we distinguished between “always on,” “voice-activated,” and “manually activated” devices, which operate and collect data differently. Other concerns are entirely valid, for example those raised by consumer privacy advocates regarding data retention defaults, the design of user choices, or possible future uses of data in unexpected ways.
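To make that distinction concrete, below is a minimal, hypothetical sketch (in Python) of how a “voice-activated” device can gate data collection behind local wake-word detection. Every name and parameter in it is an illustrative assumption rather than any vendor’s actual implementation; the point is simply that audio preceding the wake word lives only in a short, continuously overwritten on-device buffer and is never transmitted.

from collections import deque

# Hypothetical sketch of a "voice-activated" device: audio is heard
# continuously, but only a short on-device ring buffer exists until a
# wake word is detected locally; nothing is sent to the cloud before then.

BUFFER_FRAMES = 100  # assumed: roughly two seconds of audio frames

ring_buffer: deque = deque(maxlen=BUFFER_FRAMES)

def wake_word_detected(frame: bytes) -> bool:
    """Stand-in for an on-device keyword spotter (in practice a small
    neural network). Runs entirely locally, with no network access; it
    matches a literal marker here so the sketch is runnable."""
    return b"hey device" in frame

def send_to_cloud(frames: list) -> None:
    """Stand-in for the cloud speech-recognition request that happens
    only after activation."""
    print(f"transmitting {len(frames)} frames for cloud transcription")

def process_frame(frame: bytes) -> None:
    ring_buffer.append(frame)  # continuously overwritten on-device
    if wake_word_detected(frame):
        # Only now does audio leave the device: the buffered lead-in,
        # including the triggering frame, goes out for transcription.
        send_to_cloud(list(ring_buffer))
        ring_buffer.clear()

# Frames before the wake word never leave the device.
for f in [b"ambient noise", b"more ambient noise", b"hey device", b"turn on the lights"]:
    process_frame(f)

Under this framing, a “manually activated” device would replace the local keyword spotter with a physical button press, while an “always on” device would transmit or retain every frame with no local gate at all.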

These issues can and should be addressed through comprehensive privacy legislation. FPF supports a non-sectoral, comprehensive federal privacy law and, in its absence, has written in support of the California Consumer Privacy Act (CCPA), which creates baseline protections for Californians that apply across sectors and types of technology, including smart speakers. For example, many companies provide options for data deletion, and this will soon be mandated as a consumer right under the CCPA. Enshrining these and other privacy rights into law, if bolstered by ongoing rule-making and effective enforcement, allows the law to set clear limits across sectors and technologies while remaining flexible enough to adapt to evolving technology in the future. So-called “smart speakers” are a great example of this: five years ago they did not exist. Five years from now, the concept may already be antiquated, as cloud-based voice recognition transcends the physical boundaries of standalone devices and becomes increasingly integrated as a core feature of almost all new technology, e.g. connected cars, wearables, and outdoor smart city kiosks.

If California decides to address the narrow slice of “smart speakers,” we recommend that lawmakers take a close look at two core aspects of AB-1395 (as revised 06/26/2019) that could cause unintended consequences or not be as effective at addressing consumer privacy concerns as intended:

We hope consumer privacy will continue to be a core legislative priority in 2019 and 2020, as the United States draws closer to drafting and passing a baseline comprehensive privacy law. States that address these issues in the meantime should do so thoughtfully and with an eye towards effective regulation to address real privacy concerns while supporting the benefits of emerging technologies.