COPYRIGHTS AND PRIVACY: What is the Irrevocable License and is it Really a Privacy Concern?
“By submitting a Posting, you hereby authorize [us] to use, and authorize others to use, your Postings and User Names in whole or in part, on a royalty-free basis, throughout the universe in perpetuity in any and all media, now known or hereafter devised, alone, or together or as part of other material of any kind or nature, including without limitation commercial use on and advertising and promotion of the Site. Without limiting the foregoing, [we] will have the right to use and change the Postings in any manner, either with or without the User Name, that [we] may determine.”
Whenever a new app or technology is unveiled, or a new controversy raises privacy concerns, someone will eventually go through the company’s Terms of Service (TOS), find this sort of licensing clause, and cite this language to allege potential privacy abuses on a grand scale. (In 2017, an initial blog post raised concerns about Ancestry.com’s licensing language for DNA data, prompting a response from Ancestry and even a Snopes article clarifying the specifics; more recently, Wired expressed similar concerns in an article on FaceApp.) However, this language is not typically targeted at undermining privacy controls governing consumer data; instead, it establishes the copyright permissions and liabilities a company has in users’ posted content.
Due to the strength of copyright protections, and harsh penalties against those who violate them, this language must be broad to comply with existing laws. Unfortunately, this has the effect of making the language easy to misunderstand, especially for consumers who do not understand the variety of legal requirements that may apply. The truth is, while the perpetual license language affects the copyright of content, any personal information a company possesses is equally controlled and limited by its legally binding Privacy Policies. In addition, there may be various state, national, and international laws that apply further restrictions.
The reasons for the development and inclusion of these clauses, and the privacy controversies the terms can trigger, tell an interesting tale about the intersection of data protection and intellectual property law.
First, why does the “perpetual license” language exist? Simply put, this is a copyright clause used to protect the company from being sued for copyright infringement. Copyright law exists to allow content creators to protect their works. Under copyright law, unlike some other forms of intellectual property, the content creator automatically gains exclusive rights immediately upon creating their work in a “tangible medium,” i.e. making it exist in the world. These automatic rights include the right to reproduce or copy the work, to create derivative works (any work based on the original in any form of media or material), to distribute the work, and to publicly display or perform the work. Sound familiar? That is because these same rights are often listed using the same or similar language in perpetual licenses for digital products and services. All of these rights spring into existence the moment any person creates their own original text message, sound recording, picture, drawing, or other types of works.
A user who posts content immediately has copyright rights in original content they provide, such as text, pictures, or other submitted content. Once the user clicks “post” or “submit” or “send,” the company or website or app receiving the copyrighted content needs to copy it onto their servers, transform it into different mediums for their servers, then copy, distribute, and publicly display the original (copyrighted) post for other users to see or to provide the service the user originally requested. In essence, the website must take actions that are governed by copyright law to accomplish exactly what the user intended when sharing the work with the platform. Platforms must make sure that the user, who likely owns the copyright, has provided them with the rights and permissions they need to provide the desired service.
While it is worth mentioning that the Digital Millennium Copyright Act (DMCA) does provide some exceptions to these infringement actions, those exceptions apply narrowly–to services that transmit or cache content rather than display it. Companies that host and display content can rely on the DMCA to protect them from third-party claims–but they should still ensure they have permission from the poster themselves to take all the actions needed to provide the service. Rather than risk copyright liability for displaying or altering content without a poster’s permission and expose themselves to statutory damages for copyright infringement, most websites prefer to get a license from their users that will be effective for as long as the business is active (perpetual), that will not require them to pay millions of users (royalty-free), that applies wherever the content may be accessed (worldwide or throughout the universe), and that cannot be rescinded by a consumer who could then turn around and sue the company (irrevocable and non-exclusive).
So even though the perpetual license language exists because a website needs permission to use any original work that a user provides, isn’t that still a privacy concern? The company now has the information. Doesn’t that license still mean they can use the information however they like, even if they don’t own all rights to it? Because copyright law is distinct from privacy law, the answers are “not exactly,” and “no.”
The rights and requirements of the TOS are not independent of other contracts, terms, policies, and laws. As such, while the TOS legally creates a license to some uses of the information a user provides, the Privacy Policy limits what information can be collected, stored, used, and sold, by whom, and for what purposes. Both the TOS and the Privacy Policy bind the company, and the language of one does not override the language of the other. Take, for example, Nickelodeon’s “throughout the universe” license posted above. Nickelodeon also has a strong Privacy Policy that outlines, in very explicit terms, exactly what information it obtains and why (found here), exactly how and why it uses the information it collects (found here), and exactly who it shares information with and why (found here). Should Nickelodeon be found in breach of any of these specific terms in the Privacy Policy, it could be fined by regulatory agencies for deceptive trade practices–but not sued for statutory damages for copyright infringement.
This same overlapping of restraints also affects the interplay of the perpetual license and the various legal statutes that govern data practices. Nickelodeon has both a “throughout the universe” license and a comprehensive Privacy Policy, but is still further constrained by required compliance with the Children’s Online Privacy Protection Act (COPPA).
Thus, the existence of this licensing language dealing with copyright law does not mean that every website and company instantly has full and complete ownership, access, and control over every bit of information that a user provides, always and forever. The license is a protective shield used to create a right for the company to use original material created by someone else – even if that is just the text of a user-posted comment – as well as to limit liability under intellectual property law. These rights do not typically limit additional liabilities under other statutes.
Perpetual licenses, while they may sound scary, are necessary for a functioning internet and are often substantially limited by both Privacy Policies and statutes. Although this language can become an easy target for privacy-minded critiques, it is a product of intellectual property practices used to mitigate legal liabilities and has minimal impact on data collection, use, or privacy protections. The real privacy concerns come from weak or insufficient Privacy Policies that may not create sufficiently strong protections for user-provided data, including personal information well beyond that covered by copyright. A company’s Privacy Policy determines what it can or cannot do with a user’s data, and that is where users should look for those details.
*FPF is not criticizing or critiquing Nickelodeon’s Terms of Service. Nickelodeon was chosen as a useful example to highlight the issue due to their unique position in media, strong privacy policy, and intersection with a federal privacy statute.
Co-Authored by Dan Neally, FPF Summer Intern, and Brenda Leong
Students deserve safety measures that are evidence-based. Decisions about threats should be made by, among others, school administrators, counselors, and educators who understand students’ particular needs and circumstances. Non-evidence-based protocols are more likely to trigger false alarms, fail to identify actual threats, and increase the workload on already overburdened administrators—administrators who could otherwise be doing things that actually make schools safer. And there is a model for how to do this: Utah’s 2019 school safety law found ways to bake in evidence-based policies and privacy guardrails without hindering school safety.
Increased surveillance and data sharing without clear justification frequently overwhelms administrators with information, undermines effective learning environments, increases inequities, and can fail to promptly identify individuals who may pose genuine threats to school safety. In particular, overbroad school surveillance programs can place important data-driven school initiatives at risk: data collected to help ensure students are treated equitably under the Every Student Succeeds Act, for example, should not be repurposed in the name of school safety to harm or stigmatize those students.
Even when policies are evidence-based and don’t repurpose sensitive data in ways that break trust, without sufficient privacy and equity guardrails, certain information collected for school surveillance purposes will disadvantage particular minority groups. School safety policies must be created in an evidence-based way that avoids creating a disparate impact on vulnerable communities.
FPF invited the committee to seek answers about how privacy and equity guardrails are or are not being incorporated into state and local school safety initiatives. Prior to implementing school safety programs, officials ought to 1) find and analyze the best available evidence to inform policy; 2) perform privacy impact assessments–commonly used, established processes for ensuring the appropriate balance between the benefits and risks of data collection and use initiatives, particularly as they relate to already vulnerable communities; and 3) transparently engage with all stakeholders, including parents, students, and educators.
Statement by FPF CEO Jules Polonetsky: Facebook Case Shows It Is Time to Give the FTC Enhanced Civil Penalty Authority
WASHINGTON – July 24, 2019 – Today, the Federal Trade Commission (FTC) announced an unprecedented settlement requiring Facebook to pay $5 billion in civil penalties and create new accountability and compliance mechanisms, and imposing additional injunctive relief. The settlement stems from violations of a 2012 order.
The $5 billion penalty is more than 15 times larger than the previous record penalty levied by the FTC for a privacy violation. It is one of the largest penalties issued by a US government agency in any context. The fine is more than twice the financial penalty that could be imposed by an EU regulator under the General Data Protection Regulation.
But today’s record settlement masks a major gap in the FTC’s enforcement authority – the Commission doesn’t typically have fining authority for privacy violations, unless it is enforcing an existing order (as with Facebook) or invoking specific statutes (such as the Children’s Online Privacy Protection Act).
In fact, in many privacy cases the FTC has trouble even getting refunds for consumers. That’s because many companies provide online products and services for free – so it’s difficult to prove a financial loss. In those privacy cases, the FTC should have fining authority; it would create effective, proportionate deterrence and ensure that bad actors are held accountable – even when they don’t charge consumers a fee for services.
The time has come to give the FTC civil penalty authority. Preferably, this would be accomplished by Congress as part of a comprehensive new national privacy law that also gives consumers meaningful control over how their information is used.
The FTC also needs more resources so it can conduct more privacy investigations faster, while maintaining a high level of technical and legal competence. Real oversight of the Facebook settlement will require FTC staff resources and time to be effective. That funding could be provided by Congress this year through the appropriations process.
If Congress wants stronger incentives for compliance and more responsive investigations, it needs to give the FTC civil penalty authority for privacy violations and more tech and investigative resources now. There is no reason to wait.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
The US, China, and the Risks of Cutting Global Data Flows
Peter Swire published an op-ed in the French newspaper Le Monde that discusses the Court of Justice of the European Union’s decision as to whether U.S. surveillance practices violate the fundamental rights of EU citizens under the GDPR. Swire argues that if the U.S. is deemed to be in violation, thereby causing transatlantic data flows to be blocked, then data flows between the EU and China should also be blocked.
An English translation of the piece is available here. The original piece is available in French here.
You can read an annotated bibliography for the piece here.
Peter is an FPF Senior Fellow and Elizabeth and Tommy Holder Chair and Professor of Law and Ethics at the Georgia Tech Scheller College of Business.
New Privacy Tech Industry Attracts Massive Funding
Privacy Tech Alliance connecting researchers and entrepreneurs to analysts, customers, VCs
WASHINGTON – July 11, 2019 – OneTrust’s announcement today of a $200 million Series A investment, which follows yesterday’s announcement by TrustArc of a $70 million Series D round, demonstrates the arrival of a new industry sector for privacy protection technologies.
“Investors have noticed that business is booming for companies in the privacy technology space,” said Jules Polonetsky, CEO of the Future of Privacy Forum and a co-founder of the Israel Tech Policy Institute. “Innovative technology must be part of the solution for companies and government agencies that want to use data and be sensitive to individual privacy.”
The Israel Tech Policy Institute, in conjunction with the Future of Privacy Forum, launched the Privacy Tech Alliance to promote the market for privacy protective technologies internationally, facilitate the development of new tech, and maximize value for innovators and investors. The global nature of privacy regulation – from GDPR to the California Consumer Privacy Act – is spurring innovative technologies and a new industry sector is rising around technologies that help companies use data while protecting privacy, such as homomorphic encryption and de-identification.
“The Privacy Tech Alliance is supporting diverse companies bringing privacy-enhancing technology to market,” said Limor Shmerling Magazanik, Managing Director of the Israel Tech Policy Institute. “Many of these companies also offer compliance solutions to help their customers navigate an increasingly complex regulatory environment around privacy.”
OneTrust and TrustArc join eleven other leading global tech vendors on the Privacy Tech Alliance Advisory Board. Founding members of the Privacy Tech Alliance Board include Anonos, BigID, D-ID, Duality, Immuta, Nymity, OneTrust, Privacy Analytics, SAP, Truata, TrustArc, WireWheel, and ZL Tech.
For companies large and small, drafting policies and managing Excel sheets no longer suffice to oversee complex global data operations. To scale data governance and privacy program management, companies in every sector of the economy must turn to privacy governance systems and tools. Such tools serve multiple governance needs, including data mapping, data protection impact assessments, consent and cookie management, data storage and retention, identity management and authentication, and more. In addition to privacy program management tools, researchers, scientists, and entrepreneurs are innovating privacy enhancing technologies, including tools for de-identification, encryption, obfuscation, blockchain, and more.
This week’s notice by the UK Information Commissioner of its intention to fine Marriott Hotels and British Airways $130 million and $230 million respectively vividly illustrates the rising stakes for organizations that wrestle with an increasingly complex regulatory environment for privacy and data protection, including Europe’s GDPR and California’s CCPA.
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
About the Israel Tech Policy Institute
Israel Tech Policy Institute is an incubator for tech policy leadership and scholarship, advancing ethical practices in support of emerging technologies. Learn more about ITPI by visiting www.techpolicy.org.il.
Education, Privacy, Disability Rights, and Civil Rights Groups Send Letter to Florida Governor About Discriminatory Student Database
WASHINGTON, DC – Today, the Future of Privacy Forum and 32 other education, disability rights, privacy, and civil rights organizations sent a letter to Florida Governor DeSantis, urging him to postpone the implementation of Florida’s proposed school safety database. FPF is deeply concerned that the program will be used to label students as threats based on data that has no documented link to violent behavior, such as information about disabilities or mental health care. The signatories urged Governor DeSantis to immediately halt the state’s construction of this database and, instead, create a commission of parents, students, and experts on education, privacy, security, equity, disability rights, civil rights, and school safety to identify measures that have been demonstrated to effectively identify and mitigate school safety threats.
Education Week recently detailed the types of information to be collected in Florida’s planned database. The categories discussed included children who have been victims of bullying based on protected statuses such as race, religion, disability, and sexual orientation; children who have been treated for substance abuse or undergone involuntary psychiatric assessments; and children who have been in foster care, among others.
“Through policy, Florida is saying that students who have been bullied and harassed are threats, making it less likely that those students will report bullying and receive the help they need,” said Amelia Vance, Director of the Education Privacy Project at FPF. “It is especially troubling that the database has no retention or deletion requirements – meaning that Florida is creating a literal permanent record that could follow students around for their whole lives.”
The letter asks the Governor to pause the database’s implementation – due to be launched August 1, 2019 – and create a commission of experts to determine whether a state database would actually help to identify school safety threats and would not pose undue harm to students, and identify the legal, ethical, privacy, and security parameters that should be an integral part of this database. If Governor DeSantis is not willing to do that, signatories requested that he require the state to provide public information about the database’s data governance, enumerate the data that will be included, share how parents can access and, if needed, contest the information and inferences about their child in the database, and provide a public commitment to abide by all federal and state privacy and non-discrimination laws.
The Future of Privacy Forum is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
Florida Council of Administrators of Special Education
Florida League of Women Voters
Future of Privacy Forum
Intercultural Development Research Association
Learning Disabilities Association of America
Learning Disabilities Association of Florida
Mental Health America
Mental Health Association in Indian River County, Florida, a proud affiliate of Mental Health America
National Association of Councils on Developmental Disabilities
National Center for Learning Disabilities
National Center for Youth Law
The National Council on Independent Living
National Disability Rights Network
Public Advocacy for Kids
School Social Work Association of America
SPLC Action Fund
TASH
Sidewalk Labs Releases Detailed Plans for Collaboration with City of Toronto on Quayside Smart City Project, Including Proposed Privacy and Data Protection Framework
By: Suzie Allen
Experts Highlight Data Protection Safeguards, Opportunities, and Risks
“Master Innovation and Development Plan” will be Vetted by City Residents, Officials
Last week, Sidewalk Labs unveiled its proposed “Master Innovation and Development Plan” (MIDP) for Sidewalk Toronto, a project that would design a smart city district in Toronto’s Eastern Waterfront. The proposal will be considered by the government and other stakeholders in the coming months to determine whether to move forward with the project. This proposed public-private partnership between Sidewalk Labs and Waterfront Toronto seeks to promote affordability and sustainability while reducing climate impact and creating new mobility solutions, such as by prioritizing mass transit and pedestrians over vehicles.
The MIDP as proposed contemplates substantial data collection and use; it also proposes a range of significant legal, technical, and policy controls to mitigate privacy risks and promote data protection. In the coming year, Toronto residents and officials will analyze the MIDP and work with Sidewalk Labs and Waterfront Toronto to identify aspects of the proposal that could be modified to promote benefits and reduce risks.
Background
Quayside: The Quayside site in Toronto covers 12 acres of land that is primarily managed by Waterfront Toronto, a tri-government organization funded by the Government of Canada, the Province of Ontario, and the City of Toronto. Sidewalk Labs has proposed a development plan that includes elements of user-centric design and seeks to promote the health and well-being of residents. For example, Quayside’s streets will prioritize transit, cycling, and walking instead of a car-centered design and the city will have a thermal grid for fossil-free heating and cooling. The plan also articulates inclusiveness for indigenous populations, individuals with disabilities, and other members of the community as a goal of the design.
Scale: The Sidewalk Labs proposal includes the 12-acre Quayside site, as well as additional land on Toronto’s Eastern Waterfront, over an approximately 20-year period. Public engagement around the Quayside site and the development of the MIDP stretches back to November 2017, and has involved “dozens of meetings with local experts, non-profits, and community stakeholders; and the research, engineering, and design work of more than 100 local firms.”
Roles and Responsibilities: If the MIDP is approved, Sidewalk Labs would have three main roles in developing Quayside: 1) developing real estate and infrastructure systems through partnerships with local developers; 2) providing advisory, technical, and management services to the District Administrator; and 3) serving as a technical advisor, purchasing technology from or partnering with third parties rather than building the technology itself.
Process and next steps: Waterfront Toronto plans to consult with the public and receive feedback on the MIDP. Once this is complete, Waterfront Toronto will take the evaluation and make a recommendation to the Investment Real Estate and Quayside (IREQ) Committee, which will make a recommendation to the Waterfront Toronto Board of Directors. The Board will then decide whether, and how, to continue to the next phase by deciding to pursue some, all, or none of the elements of the MIDP.
Privacy, Data Governance, and Transparency
The MIDP acknowledges that some of the urban data at the core of the Quayside effort will be personal and/or sensitive, and proposes several key measures intended to mitigate the privacy risks. The MIDP contemplates both technical controls, such as employing hardware and software solutions that integrate privacy-protective data collection, use, and sharing into the development and operation of the Quayside site, and legal and organizational safeguards, such as establishing consistent and transparent processes for using urban data and independent oversight. Key measures include:
Responsible Data Use (RDU) Guidelines: The MIDP calls for the development of core, high-level principles for responsible data use that apply to all uses of personal data by Sidewalk Toronto projects. Sidewalk Labs proposed several potential starting points, including:
all technology involved in the Quayside project must have a beneficial purpose for residents;
projects will strive to minimize the amount of personal information collected and retained;
personal data that is collected will be de-identified by default and at the source — that is, on the device collecting the data — whenever possible;
data deemed to be non-personal or sufficiently de-identified will be made publicly accessible by default;
AI systems must address ethical and bias concerns; and
personal information will not be sold or used for advertising purposes without explicit consent.
Responsible Data Use Assessment: To support the implementation of the RDU Guidelines, the MIDP contemplates developing an RDU Assessment as a mechanism for public and private entities to weigh the data benefits and privacy risks of digital products and services prior to deployment. The Assessments would focus on transparency and on extending protections to diverse groups and communities, in order to ensure that a particular technology or algorithmic use case does not negatively impact individuals, groups, or communities due to biased decision-making.
Urban Data Trust: Finally, the MIDP would entrust oversight and accountability of the Responsible Data Use Guidelines and Assessments to an “Urban Data Trust.” This new non-profit entity would manage urban data and technologies independent of Sidewalk Labs and Waterfront Toronto, and would oversee day-to-day digital governance of Sidewalk Toronto projects. Sidewalk Labs states the data trust concept is intended to build on existing privacy laws while providing an additional protection and review before data-related measures are permitted to go into effect. The trust would also apply to third-party data collection and use.
Since 2017, Sidewalk Labs has staked out an ambitious vision of the “city of tomorrow.” As Sidewalk Toronto would be fueled in significant part by data from and about Quayside’s residents and visitors, it is essential that clear and consistent standards for protecting personal data be built into the project from the outset. The MIDP sets out one of the most detailed urban data protection frameworks we have seen for any local development project and puts forward a model structure for municipal data governance. If the Sidewalk Labs proposal is ultimately approved, it could be the catalyst for similar projects throughout the world, making it imperative to keep privacy a priority. The MIDP describes an intriguing range of proposed organizational, technical, and legal safeguards, and has set the stage for continued discussions with Torontonians and with stakeholders from government, industry, academia, and civil society about how to maximize the potential of urban innovation while minimizing risks to individuals and communities.
California’s AB-1395 Highlights the Challenges of Regulating Voice Recognition
Under the radar of ongoing debates over the California Consumer Privacy Act (CCPA), the California Senate Judiciary Committee will also soon be considering, at a July 9th hearing, an unusual sectoral privacy bill regulating “smart speakers.” AB-1395 would amend California’s existing laws to add new restrictions for “smart speaker devices,” defined as standalone devices “with an integrated virtual assistant connected to a cloud computing storage service that uses hands-free verbal activation.” Physical devices like the Amazon Echo, Google Home, Apple HomePod, and others (e.g. smart TVs or speakers produced by Sonos or JBL that have integrated Alexa or Google Assistant) would be included, although the bill exempts the same cloud-based voice services when they are integrated into cell phones, tablets, or connected vehicles.
Although AB-1395 seeks to address legitimate consumer privacy concerns, its core provisions likely contain pitfalls. Nonetheless, it raises important questions about the best ways to regulate privacy in the context of “listening” devices.
First, it’s clear that speech-to-text recognition has made incredible strides in the past decade, due in large part to companies being able to train machine learning models on very large datasets of human speech. These models are not perfect–developers are still working to improve performance on heavy accents, unusual speech patterns, and non-English speech–but they have improved dramatically in recent years. Only a few years after the first voice assistants hit the market, speech recognition has become a common way of interacting with computers, and a game-changer for accessibility.
Notwithstanding these ground-breaking benefits, most people are justifiably wary of devices that seem to “listen,” “spy,” or retain or use data in unexpected ways. FPF explored these concerns in a 2016 White Paper, Always On: Privacy Implications of Microphone-Enabled Devices. We have also explored uses of voice recognition in Smart TVs. Sometimes privacy concerns are based on misunderstandings of how voice-activated technology works–for example, we distinguished in an Infographic on Microphones in Internet of Things (IoT) Devices between “always on,” “voice-activated,” and “manually activated” devices, which operate and collect data differently. Other concerns are entirely valid–for example, those raised by consumer privacy advocates regarding data retention defaults, design of user choices, or possible future uses of data in unexpected ways.
These issues can and should be addressed through comprehensive privacy legislation. FPF supports a non-sectoral, comprehensive federal privacy law, and in its absence has written in support of the California Consumer Privacy Act (CCPA), which creates baseline protections for Californians that apply across sectors and types of technology, including smart speakers. For example, many companies provide options for data deletion, and this will soon be mandated as a consumer right under the CCPA. Enshrining these and other privacy rights into law, if bolstered by ongoing rule-making and effective enforcement, allows the law to set clear limits across sectors and technologies, while remaining flexible enough to adapt to evolving technology in the future. So-called “smart speakers” are a great example of this: five years ago they did not exist. Five years from now, the category may already be antiquated, as cloud-based voice recognition transcends the physical boundaries of standalone devices and becomes increasingly integrated as a core feature of almost all new technology, e.g. connected cars, wearables, and outdoor smart city kiosks.
If California decides to address the narrow slice of “smart speakers,” we recommend that they take a close look at two core aspects of AB-1395 (as revised 06/26/2019) that could cause unintended consequences, or not be as effective at addressing consumer privacy concerns as intended:
Sharing Data with Third Parties. Section 22948.20(b) appears to prohibit a company from sharing transcript data with third parties, even if a user affirmatively consents and requests such sharing. This might be a drafting error and thus an easy fix, but as currently written it would outlaw many common and beneficial features of smart speakers. Many household smart speakers or “voice assistants” (e.g. Amazon Echo, Google Home, and many others) serve as a “hub” or “portal” for connecting to a user’s other devices or services. For example, a user might use a voice assistant to: turn on or off the lights, adjust the air conditioning, add something to their calendar, order take-out food, or order a taxi or shared ride. All of these examples require sharing identifiable data (an interpretation of the user’s request, e.g. “turn on the lights”). In many circumstances, owners of these devices expect this kind of data sharing to occur at their request, and on their behalf (in other words, with meaningful consent).
Retention. Section 22948.21(a) requires separate, opt-in consent for retention of voice or transcript data, and requires that manufacturers provide a “basic,” retention-free version to customers who do not opt in. Access to large amounts of data has driven the rapid advancement of voice recognition in the last decade, and continues to drive product improvement–for example, as discussed above, in learning to recognize heavy accents, speech disorders, or non-English speech. However, consumer advocates are justified in their concerns about indefinite data retention as a “default,” particularly when users have limited ability to delete their data. One way to address this is through consumer deletion rights, which many leading companies provide and which are mandated by the California Consumer Privacy Act (CCPA). An even better, more nuanced approach might be to require or encourage companies to create meaningful, easier-to-use choices, such as automatic recurring deletion options (as Google recently introduced). Another common-sense privacy protection would be to require that data deletion be possible through a voice request. Unfortunately, AB-1395 does not take any of these approaches, but instead creates an “all or nothing” framework for data retention. Most consumers probably want something in between–the ability to get the benefits of voice personalization (for example, if they themselves have a strong accent or unusual speech pattern), and perhaps to support product improvement, but with easier, better, or more meaningful deletion options.
We hope consumer privacy will continue to be a core legislative priority in 2019 and 2020, as the United States draws closer to drafting and passing a baseline comprehensive privacy law. States that address these issues in the meantime should do so thoughtfully and with an eye towards effective regulation to address real privacy concerns while supporting the benefits of emerging technologies.