Call for Public Comments: Resources for Companies Sharing Personal Data with Academic Researchers

In June 2019, FPF launched the Corporate-Academic Data Stewardship Research Alliance, a peer-to-peer network of private companies that share the goal of facilitating privacy-protective data sharing between businesses and academic researchers. The Alliance has worked to support data sharing efforts, help address and mitigate challenges that create barriers to sharing, and promote responsible and privacy-protective practices that enable more data sharing between industry and academic researchers. 

FPF Senior Fellow Mike Hintze has been leading the project, working with 25 prominent organizations to develop usable, privacy-focused resources for companies that share personal data with academic researchers.

Today, FPF releases two resources in working form: a set of best practices and a set of contract guidelines for companies sharing personal data with academic researchers.

In the spirit of openness and collaboration, FPF invites public comments from companies, researchers, privacy experts, and all other interested individuals and stakeholders regarding these resources.

Following the comment period, FPF will publish revised versions of these two documents so that companies and researchers can use them to help facilitate responsible and privacy-protective data sharing.

How to Comment:

Please email your feedback to: [email protected]. Please submit your feedback no later than Thursday, April 30, 2020.

View the Best Practices

View the Contract Guidelines

FPF Submits Written Statement to the U.S. House Committee on Financial Services Task Force on AI

This week, Future of Privacy Forum (FPF) Senior Counsel and Director of AI & Ethics Brenda Leong submitted a written statement on the use of artificial intelligence and machine learning-based applications in financial products and services. Addressed to the House Committee on Financial Services Task Force on Artificial Intelligence, the statement explores how to protect AI and ML-based financial systems from the impact of undesired or unintended bias. 

“Financial services organizations have the responsibility, both legally and ethically, to treat their customers, whether other businesses or individuals, fairly and equally,” wrote Leong. “As more players in this industry employ AI systems in more use cases, it is incumbent on them to ensure that their algorithms are fair and explainable.”

In the statement, Leong describes beneficial ways that financial institutions are currently using AI, such as combating fraud or extending credit to traditionally underserved individuals. She also identifies factors that can present fairness and equity concerns unique to, or heightened by, processing within an AI or ML-based system, as well as the technical, policy, regulatory, and legislative actions that can help mitigate risk and bias from the use of these systems.

“While ML and AI are technologies thought of as completely ‘other’ from human thinking, they are so far still always based on algorithms and models created by people,” wrote Leong. “Thus, these algorithms are prone to incorporating the biases of their designers, as well as the biases of the systems they’re designed to serve, because the only data available to train them already reflects decades or even centuries of inequality.”

She urges legislators to exercise caution when considering new AI-specific legislation, since existing laws and regulations already apply, including legislation protecting consumers from unfair or misleading trade practices, labor and employment laws, applicable privacy laws, and the entire regulatory structure governing financial services.

The statement was submitted for the record of the Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services hearing held by the Task Force on Artificial Intelligence on February 12. 

Read the full statement here.

A New U.S. Model for Privacy? Comparing the Washington Privacy Act to GDPR, CCPA, and More

By Stacey Gray, Pollyanna Sanderson, and Katelyn Ringrose

Download a printable version of this report (pdf).

As Congress continues to work toward drafting and passing a comprehensive national privacy law, state legislators are not slowing down. In Washington State, a new comprehensive privacy law is moving quickly: last week, the Washington Privacy Act (SSB 6281) was voted out of the Washington Senate Ways & Means Committee, and appears likely to be voted on by the Senate. If approved, it will reach the House, which is currently considering (and amending) an almost identical companion bill. The deadline for the bill to be voted on by both Senate and House (including, if applicable, resolving any differences) is March 12, 2020.

FPF commented at a recent public hearing that, if passed into law, the Washington Privacy Act (as represented by Senator Carlyle’s SSB 6281) would be a significant achievement for U.S. privacy legislation. We have previously noted that the WPA would incorporate protections that go beyond those in the California Consumer Privacy Act, the only existing comprehensive consumer privacy law in the United States.

Is the Washington Privacy Act a good model for U.S. legislation? Lawmakers should consider:

FPF has created the following charts to compare key elements of the current California Consumer Privacy Act (CCPA); the upcoming 2020 California Ballot Initiative (CPRA); the EU General Data Protection Regulation (GDPR); the WPA of 2019 (Senate Bill 5376); and the WPA of 2020 (Substitute Senate Bill 6281). The charts compare seven key features of each law: (1) jurisdictional scope; (2) definitions and structure; (3) pseudonymous data; (4) individual rights; (5) obligations on companies; (6) facial recognition provisions; and (7) preemption and enforcement.

DOWNLOAD THE FULL ANALYSIS HERE.

1. JURISDICTIONAL SCOPE

The 2020 Washington Privacy Act (SSB 6281) would govern legal entities in Washington that collect data from Washington residents. Although narrower than the GDPR, the WPA has a significantly broader scope and territorial reach than the CCPA. Unlike the CCPA (which governs for-profit businesses), the WPA would also govern non-profit organizations, including public charities and foundations. In some cases, the WPA would even govern entities that do not “conduct business” in Washington, if they produce products or services “targeted to” residents of Washington.

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Who can exercise rights? | Natural persons (“data subjects”) | California residents | California residents | Washington residents | Washington residents |
| Who has obligations? | All govt and non-govt legal entities and individuals established in the EU or offering goods or services to EU residents | For-profit businesses that “[do] business in the State of California” and meet thresholds (below) | For-profit businesses that “[do] business in the State of California” and meet thresholds (below) | Non-govt legal entities that “conduct business in Washington or produce products or services that are intentionally targeted to residents of Washington” | Non-govt legal entities that “conduct business in Washington or produce products or services that are targeted to residents of Washington” |
| Thresholds | None. However, there is a limited small-business exemption for certain obligations (see e.g. Art. 30(5)) | $25 million annual revenue; or 50,000+ consumers; or 50% of annual revenue derived from selling consumers’ personal data | $25 million annual revenue; or 100,000+ consumers; or 50% of annual revenue derived from selling or sharing consumers’ personal data | 100,000+ consumers; or derives 50%+ annual revenue from the sale of personal data and processes or controls personal data of 25,000+ consumers | 100,000+ consumers during a calendar year; or derives 50%+ annual revenue from the sale of personal data and processes or controls personal data of 25,000+ consumers |


2. DEFINITIONS AND STRUCTURE

The 2020 Washington Privacy Act (SSB-6281) contains key terms and an overall structure that closely aligns with the GDPR. It would define personal data broadly as “any information that is linked or reasonably linkable to an identified or identifiable natural person.” This definition is in line with long-standing global norms; for example, personal data was defined similarly as early as 1981 in the text of Convention 108, the first binding international data protection agreement. The 2020 WPA also contains different obligations for “controllers” and “processors,” with narrow CCPA-like exemptions for “de-identified” data and “publicly available information.”

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Broad definition of covered data | Y | Y | Y | Y | Y |
| “Controllers” & “Processors” | Y | Y (“businesses” and “service providers”) | Y (“businesses” and “service providers”) | Y | Y |
| Excludes “de-identified” data | Y * | Y | Y | Y | Y |
| Excludes “publicly available information” | N | Y | Y | Y | Y |

* The GDPR defines personal data very broadly (Art. 4(1)). Its provisions do not apply to data which does not relate to an “identified or identifiable person” or to personal data “rendered anonymous in such a manner that the data subject is no longer identifiable” (Recital 26).


3. PSEUDONYMOUS DATA

The 2020 Washington Privacy Act treats “pseudonymous data” differently than other covered data. Under the WPA, pseudonymous data – data that “cannot be attributed to a specific consumer without the use of additional information” – is exempted from access, deletion, and correction rights, but not from opt-outs of sale, profiling, or targeted advertising. This provides flexibility for companies processing data that is less identifiable (and therefore harder to associate with individuals in order to fulfill their requests) but still carries some risks to privacy or autonomy. For example, pseudonyms are frequently used in large datasets to conduct scientific research (e.g., in a HIPAA Limited Dataset, John Doe = 5L7T LX619Z).
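The kind of keyed pseudonymization described above (a stable token that cannot be attributed to a specific consumer without additional information) can be illustrated with a minimal Python sketch; the key, names, and token length here are illustrative assumptions, not anything prescribed by the WPA, HIPAA, or the GDPR:

```python
import hmac
import hashlib

# Secret key held separately by the data controller (hypothetical value).
# Without it, a recipient of the pseudonymized dataset cannot re-link
# tokens to people, even by hashing guessed names.
SECRET_KEY = b"held-separately-by-the-controller"

def pseudonymize(identifier: str) -> str:
    """Map an identifier to a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened token, e.g. for a research dataset

token = pseudonymize("John Doe")
# The same person always maps to the same token, so records can still be
# joined across the dataset; only the key holder can re-identify anyone.
```

Using a keyed HMAC rather than a plain hash is the standard design choice here: with an unkeyed hash, anyone could rebuild the mapping by hashing a list of candidate names, which would defeat the “additional information” requirement in the WPA’s definition.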

In contrast, other U.S. laws do not explicitly codify different obligations for pseudonymous data. In practice, however, there is a growing consensus that U.S. privacy law will need to reflect the practical challenges of dealing with data that falls along a spectrum of identifiability. For example, in practice under the CCPA, individual rights to access, delete, or correct their data are almost always more limited for pseudonymous data due to the challenges with (1) linking the request to the data the company holds; and (2) verifying that the request is authentic and that disclosure or deletion is being conducted on behalf of the right person. (For more, see the California Attorney General’s ongoing CCPA rulemaking efforts).

In the EU, the GDPR explicitly recognizes that pseudonymization of personal data decreases risks to the rights and freedoms of individuals (see Recital 28). The GDPR also exempts controllers from complying with individual requests to exercise rights of access and deletion (erasure) when identification in a dataset would require the controller to acquire additional information, unless the individuals themselves provide additional information to help re-identification (see Article 11). Pseudonymization is also considered an important safeguard for the GDPR’s “privacy by design” requirements in Article 25 and for data security measures in Article 32.

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Recognizes pseudonymous data | Y * | Indirect ** | Indirect ** | N | Y |

* More precisely, the GDPR recognizes “pseudonymization” as a method to decrease privacy risks and comply with certain obligations (see description above).
** Indirectly, individual rights to access and delete pseudonymous data in California may be limited in practice due to challenges with verifying consumer requests (see description above).


4. INDIVIDUAL RIGHTS

The WPA would codify individual rights for residents of Washington that go beyond both CCPA and the bill introduced in Washington in 2019. For instance, it would offer consumers a right to correct inaccurate data and to exercise broader rights to opt out of not only “sale” but also “profiling” and “targeted advertising.” In comparison, the CCPA does not require an opt out of certain targeted advertising practices if they can be conducted without “selling” data (a limitation that would be eliminated in the Ballot Initiative). Last year’s Washington bill contained a right to object to processing for targeted advertising, but would have allowed other kinds of data processing if outweighed by the interests of the company. The WPA would also grant consumers additional protections by requiring companies to establish internal appeals processes, paralleling certain procedural elements of the GDPR.

Finally, the WPA would require opt-in consent for collection of “sensitive information.” This includes, for example, racial or ethnic origin, biometric data, sexual orientation, or mental or physical health condition or diagnosis. Heightened protection for sensitive data largely aligns with the Ballot Initiative and the GDPR, which requires either “explicit consent” or a very narrow and specifically prescribed justification to process “special categories of data” (see Article 9). Notably, the 2020 WPA also includes “specific geolocation” as a type of sensitive data that requires opt-in consent. This aligns with the Ballot Initiative, but in comparison, under the GDPR, geolocation data can be processed in some situations without consent where other strong safeguards apply (see e.g. guidance on location data collected through Wi-Fi Analytics). In other cases, EU privacy laws like the ePrivacy Directive may apply.

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Right to Access | Y | Y | Y | Y | Y |
| Right to Correct | Y | N | Y | Y | Y |
| Right to Delete | Y | Y | Y | Y | Y |
| Right to Portability | Y | Y | Y | Y | Y |
| Internal Appeals Processes | Y * | N | N | N | Y |
| Opt Out of “Sale” | Y ** | Y | Y | N | Y |
| Opt Out of “Targeted Advertising” | Y ** | N *** | Y | Y | Y |
| Opt Out of “Profiling” | N | N | N | N | Y |
| Opt-In Consent for Sensitive Data | Y | N | Y | N **** | Y |

* Companies engaged in high-volume or high-risk processing must appoint a Data Protection Officer (DPO) who handles requests, communications, and appeals (Articles 37, 38, and 39).
** An individual can object to any processing of their personal data conducted pursuant to certain lawful bases, at which point the controller may no longer process the data unless it demonstrates “compelling legitimate grounds” to override that person’s interests, rights, and freedoms (Article 21). If such processing is conducted with consent, the consent must be easy to withdraw at any time (Article 7.3). Finally, the GDPR includes an absolute right to object to “direct marketing” (Article 21.2).
*** The CCPA does not restrict targeted advertising if it can be conducted without “selling” data. In contrast, the Ballot Initiative contains a broader opt-out provision (of both “sale” and “sharing”) and specifically limits service providers from engaging in any “cross-context behavioral advertising.”
**** Except where consent could be used as a way for a company to engage in “high risk” processing, as determined by risk assessments (s8(3)).


5. OBLIGATIONS ON COMPANIES

After the CCPA passed in 2018, it was widely criticized by privacy advocates for placing most of the burden on consumers to exercise their rights, without containing strong restrictions on the collection or uses of data. The California Ballot Initiative would go significantly further than CCPA by incorporating additional consumer rights and restrictions on the collection and use of “sensitive data.” The WPA similarly places additional obligations on companies to act as responsible stewards of information, including mandated risk assessments for high-risk activities. Neither would go as far as the GDPR, which requires that companies have a “lawful basis” to collect data at all, where a “lawful basis” can include, for example, consent, fulfillment of a contract, or “legitimate interests” (for more, see FPF’s report: Deciphering Legitimate Interests).

Both the California Ballot Initiative and the 2020 Washington Privacy Act incorporate elements of data minimization, purpose limitation, and avoidance of “secondary uses.” Neither is as restrictive as the provisions in the GDPR’s Article 5. However, the Ballot Initiative would require that a business’s collection and use of data be “reasonably necessary and proportionate to achieve the purposes for which [it was] collected or processed . . . and not further processed in a manner that is incompatible with those purposes.” (1798.100c). In comparison, the 2020 Washington Privacy Act would require that data collection be “limited to what is reasonably necessary” as well as “adequate, relevant, and limited” in relation to “the purposes for which such data are processed, as disclosed to the consumer,” and prohibit further processing that is not “compatible with” those purposes (absent consent) (Section 8).

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Lawful Bases for Collection | Y | N | N | N | N |
| Privacy Policies | Y | Y | Y | Y | Y |
| Risk Assessments for High-Risk Activities | Y | N | N | Y | Y |
| Data Minimization | Y (strongest) | N | Y | N | Y |
| Purpose Limitation | Y (strongest) | N | Y | N | Y |
| Duty to Avoid Secondary Use | Y (strongest) | N | Y | N | Y |
| Security Requirements | Y | Y | Y | Y | Y |
| Non-Discrimination | Y (indirectly) * | Y | Y | N | Y |

* The GDPR does not include an explicit provision stating that a data subject must not be discriminated against on the basis of their choices to exercise rights. However, it is implicit from other principles of the GDPR that individuals must be protected from discrimination on these grounds (Article 5, Article 13, Article 22, and elements of “freely given” consent and fair processing).


6. FACIAL RECOGNITION PROVISIONS

The current version of the WPA contains special provisions for commercial uses of facial recognition technologies. Such provisions do not directly exist in the GDPR or other comprehensive privacy laws. However, other laws in the US and EU govern facial recognition technologies, whether as a category of “sensitive data” (e.g., the Ballot Initiative would require consent for uses of biometric data) or as sensitive data and automated profiling (under the GDPR).

Specifically, the GDPR regulates facial recognition technologies through several provisions. When facial recognition is used for identification purposes, “explicit consent” is required under Article 9, unless a narrow overriding justification applies, like a substantial public interest provided by law. In addition, GDPR imposes obligations for companies engaged in “solely automated decision-making and profiling” (Article 22), both of which can be part of real-world facial recognition use cases (see, e.g., EU guidance on collecting data through video).

Compared to the facial recognition provisions in 2019, the 2020 WPA provisions are significantly stronger. In 2019, the bill that passed the Washington Senate allowed for implied consent for facial recognition: “The placement of conspicuous notice in physical premises . . . [shall] constitute a consumer’s consent to the use of such facial recognition services . . .” (Section 14). In contrast, the 2020 version does not permit this – instead, it would require businesses to obtain affirmative opt-in consent from consumers prior to their enrollment in a facial recognition system (with narrow, limited exceptions). The 2020 WPA would also require covered entities to enable third-party auditing and to address identified inaccuracies related to bias and discrimination.

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Protections for Commercial Uses of Facial Recognition | Y (indirectly) | N | Indirectly | Y (limited) | Y (stronger) |


7. PREEMPTION AND ENFORCEMENT

The WPA aligns with other privacy laws in that it would preempt local regulations that govern the same data processing activities. As a result, it would likely preempt local regulations for commercial uses of data that fall within the same jurisdictional scope of the law, but might not preempt local regulations of government or municipal entities.

The current WPA would be enforced by the Washington Attorney General. Similarly, the CCPA is enforced by the California Attorney General, whose office is currently engaged in regulatory rulemaking (see California’s draft regulations). The CCPA does not allow for civil enforcement of most of the law’s provisions, but contains a limited private right of action for data breaches. The GDPR allows individuals to exercise rights to “individual redress,” and each EU Member State has its own well-funded Data Protection Authority.

| | EU GDPR | CCPA | CA Ballot Initiative | WPA 2019 | WPA 2020 |
|---|---|---|---|---|---|
| Preemption | Y | Y | Y | Y | Y |
| Enforcement by State AG or Government Body (DPA) | Y | Y | Y | Y | Y |
| Enforcement by Individuals | Y (mix of EU judicial rights & individual redress from regulators) | N (exception for security breaches) | N (exception for security breaches) | N | N * |

* Unlike the 2019 WPA, the 2020 WPA has been amended to clarify that it does not override the existing rights of Washington residents to bring actions under Washington State’s Consumer Protection Act (chapter 19.86 RCW) for conduct or behavior that would amount to an unfair or deceptive practice (Section 11). Similarly, residents of California (and many other states) have the ability to bring lawsuits to challenge privacy violations when they violate unfair and deceptive practices (UDAP) state laws.


CONCLUSION

The Senate sponsor of the 2020 WPA, Senator Reuven Carlyle, recently noted: “I don’t think that we’re ever going to be done dealing with the regulatory framework of consumer data and the issue of privacy. We’re living in a new era.” We agree. The United States needs a comprehensive, baseline federal privacy law to set uniform standards and create clarity for companies and strong rights for individuals. In the absence of such a law, the Washington Privacy Act could serve as a useful regulatory model for other states and for Congress that improves upon the CCPA, provides rights to Washington residents, and helps companies build effective data protection programs.


Did we miss anything? Let us know at [email protected] or [email protected] as we continue tracking state and federal developments in privacy legislation.

New “Privacy 101” Video Series Helps School District Leaders Protect Student Data

WASHINGTON, D.C. – In recognition of Safer Internet Day, the Future of Privacy Forum (FPF) today released a new “Student Privacy 101” video series that is designed to help school leaders better understand federal and state privacy laws and protect sensitive student data.

“As technology becomes increasingly prevalent in the classroom, faculty, administrators, and district leaders could benefit from a quick and easy guide to understanding how they can help reduce privacy risks and improve transparency around student data,” said FPF Director of Youth & Education Privacy Amelia Vance. “FPF’s new videos provide an animated overview of best practices and tips on how schools can protect student privacy.”

The “Student Privacy 101” video series includes:

Vance added, “Safer Internet Day reminds companies to examine how they can use technology more responsibly in support of a better Internet experience for everyone, with a special focus on advancing positive practices that protect children and young people under 18. As one of the nation’s leading think tanks focused on privacy issues, FPF is a proud supporter of Safer Internet Day and works to provide year-round resources that support a culture of data security.”

FPF also published a new blog post celebrating Safer Internet Day today with additional information and resources about how schools can protect children’s data privacy.

The video series is based on the Siegl Framework⁠—developed by Jim Siegl, the Technology Architect at Fairfax County Public Schools in Virginia⁠—which advises local education agencies to consider the Venn diagram of legal compliance, privacy and security risks, and perception risks when working on student privacy.

The series was created by Monica Bulger, David Sallay, and Amelia Vance, with the animation magic and brilliance of Thought Café.

To learn more about Safer Internet Day, visit www.saferinternetday.org. For more information about FPF’s student privacy work, visit studentprivacycompass.org.

# # #

Contact

Alexandra Sollberger

[email protected]

202-317-0774

About FPF

The Future of Privacy Forum (FPF) is a Washington, DC-based think tank that seeks to advance responsible data practices. The forum is led by internet privacy experts and includes an advisory board comprised of leading figures from industry, academia, law, and advocacy groups. For more information, visit www.fpf.org.

Privacy Papers 2019: Spotlight on the Winning Authors

FPF recently announced the winners of the 10th Annual Privacy Papers for Policymakers (PPPM) Award. This Award recognizes leading privacy scholarship that is relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and at data protection authorities abroad.

From the many nominated privacy-related papers published in the past year, Finalist Judges selected five winners, each first rated highly by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. Finalist Judges and Reviewers agreed that these papers demonstrate a thoughtful analysis of emerging issues and propose new means of analysis that can lead to real-world policy impact, making them “must-read” privacy scholarship for policymakers.


The winners of the 2019 PPPM Award are:

Antidiscriminatory Privacy

by Ignacio N. Cofone, McGill University Faculty of Law

Ignacio N. Cofone is an Assistant Professor at McGill University’s Faculty of Law, where he teaches about privacy law and artificial intelligence regulation, and an Affiliated Fellow at the Yale Law School Information Society Project. His research explores how law should adapt to technological and social change with a focus on information privacy and algorithmic decision-making. Before joining McGill, Ignacio was a research fellow at the NYU Information Law Institute, a resident fellow at the Yale Law School Information Society Project, and a legal advisor for the City of Buenos Aires. He obtained a joint PhD from Erasmus University Rotterdam and Hamburg University, where he was an Erasmus Mundus Fellow, and a JSD from Yale Law School. His full list of publications is available at www.ignaciocofone.com. He tweets from @IgnacioCofone.


Privacy’s Constitutional Moment and the Limits of Data Protection

by Woodrow Hartzog, Northeastern University, School of Law and Khoury College of Computer Sciences and Neil M. Richards, Washington University, School of Law and the Cordell Institute for Policy in Medicine & Law

Woodrow Hartzog is a Professor of Law and Computer Science at Northeastern University School of Law and the Khoury College of Computer Sciences. He is also a Resident Fellow at the Center for Law, Innovation and Creativity (CLIC) at Northeastern University, a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, a Non-resident Fellow at The Cordell Institute for Policy in Medicine & Law at Washington University, and an Affiliate Scholar at the Center for Internet and Society at Stanford Law School. His research on privacy, media, and robotics has been published in scholarly publications such as the Yale Law Journal, Columbia Law Review, and California Law Review and popular publications such as The New York Times, The Washington Post, and The Guardian. He has testified multiple times before Congress and has been quoted or referenced by numerous media outlets, including NPR, BBC, and The Wall Street Journal. He is the author of Privacy’s Blueprint: The Battle to Control the Design of New Technologies, published in 2018 by Harvard University Press. His book with Daniel Solove, Breached!: Why Data Security Law Fails and How to Improve It, is under contract with Oxford University Press.

Neil M. Richards is one of the world’s leading experts in privacy law, information law, and freedom of expression. He writes, teaches, and lectures about the regulation of the technologies powered by human information that are revolutionizing our society. Professor Richards holds the Koch Distinguished Professorship in Law at Washington University School of Law, where he co-directs the Cordell Institute for Policy in Medicine & Law. He is also an affiliate scholar with the Stanford Center for Internet and Society and the Yale Information Society Project, a Fellow at the Center for Democracy and Technology, and a consultant and expert in privacy cases. Professor Richards serves on the board of the Future of Privacy Forum and is a member of the American Law Institute. Professor Richards graduated in 1997 with graduate degrees in law and history from the University of Virginia, and served as a law clerk to both Chief Justice William H. Rehnquist of the United States Supreme Court and Judge Paul V. Niemeyer of the United States Court of Appeals for the Fourth Circuit. Professor Richards is the author of Intellectual Privacy (Oxford Press 2015). His many scholarly and popular writings on privacy and civil liberties have appeared in a wide variety of media, from the Harvard Law Review and the Yale Law Journal to The Guardian, WIRED, and Slate. His next book, Why Privacy Matters, will be published by Oxford Press in 2020. Professor Richards regularly speaks about privacy, big data, technology, and civil liberties throughout the world, and also appears frequently in the media. At Washington University, he teaches courses on privacy, technology, free speech, and constitutional law, and is a past winner of the Washington University School of Law’s Professor of the Year award. He was born in England, educated in the United States, and lives with his family in St. Louis. He is an avid cyclist and a lifelong supporter of Liverpool Football Club.


Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations

by Margot E. Kaminski, University of Colorado Law and Gianclaudio Malgieri, Vrije Universiteit Brussel (VUB) – Faculty of Law

Margot E. Kaminski is an Associate Professor at the University of Colorado Law and the Director of the Privacy Initiative at Silicon Flatirons. She specializes in the law of new technologies, focusing on information governance, privacy, and freedom of expression. Recently, her work has examined autonomous systems, including AI, robots, and drones (UAS). In 2018, she researched comparative and transatlantic approaches to sensor privacy in the Netherlands and Italy as a recipient of the Fulbright-Schuman Innovation Grant. Her academic work has been published in UCLA Law Review, Minnesota Law Review, Boston University Law Review, and Southern California Law Review, among others, and she frequently writes for the popular press. Prior to joining Colorado Law, Margot was an Assistant Professor at the Ohio State University Moritz College of Law (2014-2017), and served for three years as the Executive Director of the Information Society Project at Yale Law School, where she remains an affiliated fellow. She is a co-founder of the Media Freedom and Information Access (MFIA) Clinic at Yale Law School. She served as a law clerk to the Honorable Andrew J. Kleinfeld of the Ninth Circuit Court of Appeals in Fairbanks, Alaska.

Gianclaudio Malgieri is a doctoral researcher at the “Law, Science, Technology and Society” center of Vrije Universiteit Brussel, an attorney in law, and Training Coordinator of the Brussels Privacy Hub. He is a Work Package Leader of the EU Panelfit research project, which addresses legal and ethical issues of data processing in the research sector. He is also an external expert for the EU Commission on the ethics and data protection assessment of EU research proposals. He has authored more than 40 publications in leading international law reviews and is deputy editor of Computer Law & Security Review (Elsevier). He lectures on Data Protection Law and Intellectual Property in undergraduate and professional courses at the University of Pisa, Sant’Anna School of Advanced Studies, and Strasbourg University. He earned an LLM with honours from the University of Pisa and a JD with honours from Sant’Anna School of Advanced Studies of Pisa (Italy). He was a visiting researcher at Oxford University, the London School of Economics, the World Trade Institute of the University of Bern, and the École Normale Supérieure de Paris.


Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites

by Arunesh Mathur, Princeton University; Gunes Acar, Princeton University; Michael Friedman, Princeton University; Elena Lucherini, Princeton University; Jonathan Mayer, Princeton University; Marshini Chetty, University of Chicago; and Arvind Narayanan, Princeton University  

Arunesh Mathur is a graduate student in the department of computer science at Princeton University, where he is affiliated with the Center for Information Technology Policy (CITP). Mathur’s research examines how technical systems interface with and impact society in negative ways. His current research focus is dark patterns: empirically studying how commercial, political, and other powerful actors employ user interface design principles to exploit individuals, markets, and democracy. His research has won multiple awards including the best paper awards at ACM CSCW 2018 and USENIX SOUPS 2019.

Gunes Acar is a FWO postdoctoral fellow at KU Leuven’s COSIC research group. His research interests involve web tracking measurement, anonymous communications, and IoT privacy and security. Gunes obtained his PhD at KU Leuven in 2017, and was a postdoctoral researcher between 2017 and 2019 at Princeton University’s Center for Information Technology Policy.

Michael Friedman is a Technical Program Manager at Google. His work focuses on monitoring compliance with privacy regulations and certifications. Michael is broadly interested in the privacy implications of information technology and the enforcement of privacy standards. He earned his Bachelor’s degree in Computer Science at Princeton University with a concentration in societal implications of information technology. While there, he conducted research on the effectiveness of technology privacy policies, with a focus on children’s data privacy, and collaborated on this research on dark patterns.

Elena Lucherini is a second-year Ph.D. student at the Center for Information Technology Policy at Princeton University. Her advisor is Arvind Narayanan. Lucherini received her bachelor’s degree from the University of Pisa and her master’s from the University of Pisa and the Sant’Anna School of Advanced Studies.

Jonathan Mayer is an Assistant Professor at Princeton University, where he holds appointments in the Department of Computer Science and the Woodrow Wilson School of Public and International Affairs. Before joining the Princeton faculty, he served as the technology law and policy advisor to United States Senator Kamala Harris and as the Chief Technologist of the Federal Communications Commission Enforcement Bureau. Professor Mayer’s research centers on the intersection of technology and law, with emphasis on national security, criminal procedure, and consumer privacy. He is both a computer scientist and a lawyer, and he holds a Ph.D. in computer science from Stanford University and a J.D. from Stanford Law School.

Marshini Chetty is an assistant professor in the Department of Computer Science at the University of Chicago. She specializes in human-computer interaction, usable privacy and security, and ubiquitous computing. Marshini designs, implements, and evaluates technologies that help users manage different aspects of Internet use, from privacy and security to performance and costs. She often works in resource-constrained settings and uses her work to help inform Internet policy. She has a Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, USA, and a master’s and bachelor’s in Computer Science from the University of Cape Town, South Africa. In her former roles, Marshini was on the faculty in the Computer Science Department at Princeton University and the College of Information Studies at the University of Maryland, College Park. Her work has won best paper awards at SOUPS, CHI, and CSCW and has been funded by the National Science Foundation, the National Security Agency, Intel, Microsoft, Facebook, and multiple Google Faculty Research Awards.

Arvind Narayanan is an Associate Professor of Computer Science at Princeton. He leads the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. Narayanan is the lead author of a textbook on Bitcoin and cryptocurrency technologies which has been used in over 150 courses around the world. His doctoral research showed the fundamental limits of de-identification, for which he received the Privacy Enhancing Technologies Award. His 2017 paper in Science was among the first to show how machine learning reflects cultural stereotypes, including racial and gender biases. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).


The Many Revolutions of Carpenter

by Paul Ohm, Georgetown University Law Center

Paul Ohm is a Professor of Law and the Associate Dean for Academic Affairs at the Georgetown University Law Center, where he also serves as a faculty director for the Center on Privacy & Technology and the Institute for Technology Law & Policy. His writing and teaching focuses on information privacy, computer crime law, intellectual property, and criminal procedure. A computer programmer and computer scientist as well as a lawyer, Professor Ohm tries to build new interdisciplinary bridges between law and computer science, and much of his scholarship focuses on how evolving technology disrupts individual privacy.

Professor Ohm began his academic career on the faculty of the University of Colorado Law School, where he also served as Associate Dean and Faculty Director for the Silicon Flatirons Center. From 2012 to 2013, Professor Ohm served as Senior Policy Advisor to the Federal Trade Commission. Before becoming a professor, he worked as an Honors Program trial attorney in the U.S. Department of Justice’s Computer Crime and Intellectual Property Section and a law clerk to Judge Betty Fletcher of the United States Court of Appeals for the Ninth Circuit and Judge Mariana Pfaelzer of the United States District Court for the Central District of California. He is a graduate of the UCLA School of Law.


The Finalist Judges also selected three papers for Honorable Mention on the basis of their uniformly strong reviews from the Advisory Board.

The 2019 PPPM Honorable Mentions are:

Additionally, the 2019 Student Paper award goes to:


The winning authors have been invited to join FPF and Honorary Co-Hosts Senator Edward J. Markey and Congresswoman Diana DeGette to present their work at the U.S. Senate to policymakers, academics, and industry privacy professionals. This annual event will be held on February 6, 2020. FPF will subsequently publish a printed digest of summaries of the winning papers for distribution to policymakers, privacy professionals, and the public. RSVP here to join us.

Award-Winning Paper: "Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites"

For the tenth year, FPF’s annual Privacy Papers for Policymakers program is presenting award-winning research to lawmakers and regulators. Among the papers to be honored at an event at the Hart Senate Office Building on February 6, 2020 is Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites by Arunesh Mathur, Gunes Acar, Michael Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. Mathur and his co-authors present an analysis of deceptive user interface designs across 11,000 shopping websites to create a taxonomy of “dark pattern” characteristics that harm user decision-making.


Dark patterns are user interface design choices that benefit an online service by coercing or deceiving users into making decisions that, if fully informed and able to select alternatives, they might not make. In Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites, Arunesh Mathur and his co-authors present a new, large-scale analysis of the presence of dark patterns across 11,000 shopping websites, informing our understanding of the prevalence of these patterns and their influence on users. Mathur observes that “at best, dark patterns annoy and frustrate users. At worst, they can mislead and deceive users, e.g., by causing financial loss, tricking users into giving up vast amounts of personal data, or inducing compulsive and addictive behavior in adults and children.” In the context of shopping websites, dark patterns can trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in “concrete financial loss.”

The authors contribute a taxonomy that offers precise terminology to characterize how each type of dark pattern functions and exploits users’ cognitive biases. The authors identify five distinct types of dark patterns: asymmetric, covert, deceptive, information-hiding, and restrictive. Of these, the authors state that “the majority of [dark patterns] are covert, deceptive, and information hiding in nature.” Additionally, the authors observe the effects of user interface design that employs the anchoring effect, the bandwagon effect, the default effect, the framing effect, the scarcity bias, and the sunk cost fallacy to manipulate users’ decision-making abilities.

Through their analysis, the authors discover 1,818 dark pattern instances, representing multiple types and categories, across 53K product pages from 11K shopping websites. Interestingly, the authors observe that “shopping websites that were more popular, according to Alexa rankings, were more likely to feature dark patterns.” Based on their findings, the authors suggest that future study should focus on empirical evaluations of the effects of dark patterns on user behavior in order to develop better countermeasures and ensure that users can enjoy a fair and transparent shopping experience.

If you’re interested in learning more about how dark patterns in user interface design influence users’ behavior, you’ll want to read Mathur’s paper.


The Privacy Papers for Policymakers project’s goal is to put diverse academic perspectives in front of policymakers to inform the development of privacy legislation. You can view all of this year’s award-winning papers on the FPF website.

CPDP2020 Panel: The Future Is Now: Autonomous Vehicles, Trolley Problem(s) and How to Deal with Them

Last week, FPF brought together a panel of technology, legal, regulatory, and business voices to discuss “The Future is Now: Autonomous Vehicles, Trolley Problem(s) and How to Deal with Them” at the 13th annual Computers, Privacy, and Data Protection conference.

The premise of the panel was that autonomous and highly automated vehicles are likely to be the first product to bring AI to the masses in a life-changing way. They rely on AI for a variety of uses, from mapping, perception, and prediction to self-driving technologies. Their promise is great: increasing the safety and convenience of our cities and roads. But so are the challenges that come with them, from resolving life-and-death questions to putting in place a framework that protects the fundamental rights of drivers, passengers, and everyone physically around them. The panel of experts discussed connected and automated vehicle technology, law, and policy, offering an EU-US comparative perspective on essential questions. The panel was moderated by Trevor Hughes (CEO, IAPP), and the panelists were Sophie Nerbonne (Director, CNIL), Andreea Lisievici (Head of Global Data Protection Office, Volvo Cars), Mikko Niva (Group Privacy Officer, Vodafone), and Chelsey Colbert (Policy Counsel, Mobility and Location, FPF).

The speakers answered many questions, including: How much data and what type of data runs through all systems of an autonomous vehicle? What are the benefits of autonomous vehicles and what are the risks to individual rights? How can they be balanced? They also discussed the infamous thought experiment “the Trolley Problem” and its application to connected and automated vehicles in the real world. 

Andreea Lisievici (Head of Global Data Protection Office, Volvo Cars) opened the panel by demystifying what we mean when we talk about “connected and autonomous cars.” She gave an overview of the levels of autonomy in vehicles: Level 0 – no automation; Level 1 – driver assistance; Level 2 – partial driving automation; Level 3 – conditional driving automation; Level 4 – high driving automation; and Level 5 – full driving automation. Commercial vehicles currently on the market are considered level 2 (or “2+” or 3), while some companies conducting AV testing are reportedly at level 4.

Mikko Niva (Group Privacy Officer, Vodafone) commented on the vast ecosystem of parties in the connected and automated car ecosystem. 

Sophie Nerbonne (Director, CNIL) reminded everyone that most of the data in this complex ecosystem is personal data. She recounted that when the CNIL began working with French OEMs a couple of years ago, they weren’t fully aware of how much “technical data” was in fact personal data. 

Indeed, the CAV ecosystem is vast and interconnected; we must think beyond the individual car and consider the broader ecosystem, which will include city infrastructure such as streetlights, as well as pedestrians, other vehicles, and even other objects such as delivery robots. These “V2X” (vehicle-to-everything) technologies, which include V2V (vehicle-to-vehicle), V2I (vehicle-to-infrastructure), and V2P (vehicle-to-pedestrian), bring in parties such as car manufacturers, telecom providers, third-party apps and services, and local governments. This ecosystem presents challenges and opportunities not just for personal car ownership, but also for rental companies, rideshare and ride-hailing companies, delivery robots, micro-mobility such as scooters, and modes of transportation and freight delivery that operate underground and in the air.

The majority of this information is likely to be personal data, or capable of being linked to a person, and there are many players and data flows for organizations to consider, including drivers, passengers, pedestrians, and employees. [See FPF’s infographic on data and the connected car.] Data protection impact assessments are an important tool available to organizations, and the speakers agreed that while privacy and ethics by design are important, operationalizing them can be a challenge. Entities must look beyond legal obligations and consider how they will earn and maintain consumer trust.

As for the Trolley Problem, the speakers agreed that… it is not the right problem, since it does not ask the right question. Real-life scenarios in which connected and autonomous vehicles need to “make decisions” involve many more parameters and many more options than the Trolley Problem proposes. Watch the full recording of the panel by following this link.

Data and the Connected Car Infographic

ICYMI: FPF Webinar Examines Policies to Protect Child Privacy Online

As policymakers worldwide reexamine how to more effectively protect children’s privacy online without imposing broad age restrictions across the internet, the Future of Privacy Forum (FPF) recently hosted a webinar to assess diverse approaches to addressing child privacy concerns. The webinar also explored how these policies can help address the many potential risks children face online, including oversharing, identity theft, threats to physical safety, and exposure to inappropriate content.

“The majority of child privacy laws and proposals are focused on limiting commercialization,” said FPF Director of Youth & Education Privacy Amelia Vance. “This includes preventing targeted or behavioral advertising to children, limiting or eliminating the ability to sell or share children’s data, or other protections aimed at limiting children’s exposure to marketing and protecting data from being used in inappropriate ways by companies.”

In addition to child privacy proposals from the European Union, United Kingdom, South Korea, and California, FPF experts highlighted the federal child privacy law in the U.S., the Children’s Online Privacy Protection Act (COPPA), and several of its key limitations.

While two new federal proposals and California’s new consumer privacy law extend the age of COPPA protections to 16, most children in the U.S. are covered only until age 13. Additionally, the ability of the Federal Trade Commission (which is currently reviewing COPPA) to effectively interpret and enforce the law’s standards for determining whether a business has ‘actual knowledge’ that a specific user of its website is a child, or is providing services that are ‘directed’ at children, has varied considerably over the statute’s more than 20-year history.

“How ‘actual knowledge’ is defined has really changed over time,” Vance said. “We saw in the recent YouTube settlement, for example, the FTC noting that YouTube was telling potential advertisers that there were children on the platform that they could reach.”

FPF’s recent comments to the FTC in response to its ongoing review of COPPA also underscore the need for guidance on the law’s “actual knowledge” definition, as well as for the agency to modernize its policies related to voice-enabled technologies and provide greater alignment with the primary federal student privacy law, FERPA.

When it comes to developing child privacy legislation, Vance cautioned that unintended consequences are “incredibly easy to occur.” To mitigate this risk, Vance advised policymakers to be as intentional and clear as possible, and to get input from those on the ground, including parents, teachers, school superintendents, attorneys, and children and teens themselves.

“When looking at child privacy, it is important to be focused and ask, ‘what are you trying to regulate?’” Vance noted. “Being specific about what potential risks or harms you are trying to mitigate or prevent lends itself to a more targeted bill and one that is more likely to achieve whatever that end goal is.”

Finally, policymakers may need to acknowledge that children today are growing up in a vastly different world. “Look broadly to the stakeholders who you are talking to because schools and homes are very different from how we all grew up as children,” Vance advised. “You want to make sure you’re not limiting some aspect of the digital world that can be important.”

“It’s worth noting that all of this is up for discussion right now,” Vance ultimately concluded. “This is very much an evolving space in the U.S.”

Click here to watch the full webinar, part of FPF’s ongoing Privacy Legislation Series, which to date has also covered preemption, commercial research, and defining covered data. Access the slide deck from the presentation and additional recommended materials on child privacy here.

To learn more about the Future of Privacy Forum, visit www.fpf.org and subscribe to FPF’s student privacy newsletter.

Contact:

[email protected]

 

Award-Winning Papers: "Antidiscriminatory Privacy" and "Algorithmic Impact Assessments under the GDPR"

For the tenth year, FPF’s annual Privacy Papers for Policymakers program is presenting award-winning research representing a diversity of perspectives to lawmakers and regulators. Among the papers to be honored at an event at the Hart Senate Office Building on February 6, 2020 are two papers broadly addressing the impact of algorithms on transparency and fairness: Antidiscriminatory Privacy by Ignacio N. Cofone and Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations by Margot E. Kaminski and Gianclaudio Malgieri. Cofone assesses how privacy rules can both facilitate and protect against discriminatory behavior, while Kaminski and Malgieri discuss how impact assessments serve to link the individual and systemic regulatory subsystems within the European General Data Protection Regulation (GDPR).


Law often blocks sensitive personal information to prevent discrimination. In Antidiscriminatory Privacy, Ignacio Cofone, assistant professor at McGill University Faculty of Law, presents a framework for reducing discrimination against minorities. To build this framework, Cofone explored two case studies that “illustrate when rules that regulate the flow of personal information (privacy rules) are compatible with antidiscrimination efforts and when they are not.” Through an analysis of anonymous orchestra auditions and the “Ban the Box” initiative, Cofone reveals how blocking an information flow can be successful at combatting discrimination in some cases, but not all of them. Cofone states that “privacy can protect against discrimination as well as enable a discriminatory dynamic.” He notes that certain data points may serve as proxies for categories that the law aims to protect, arguing that information about certain proxies must be blocked when employing privacy rules to fight discrimination. In the case of the “Ban the Box” initiative, for example, when employers were prohibited from asking about an applicant’s criminal history, they were more likely to discriminate against black applicants they thought might have criminal histories. Cofone found that in the “Ban the Box” case, applicants’ race became a proxy for criminal history, fostering discriminatory behavior. Cofone’s analysis offers a framework for determining the effectiveness of antidiscrimination measures based on information restrictions, including questions to consider to identify proxies for protected information.

In Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations, authors Margot Kaminski of Colorado Law School and Gianclaudio Malgieri of Vrije Universiteit Brussel propose that impact assessments should link the GDPR’s dual methods of regulating algorithmic decision-making by providing systemic governance and also safeguarding individual privacy rights. The authors state that, in the context of decision-making algorithms, the GDPR’s existing Data Protection Impact Assessment (DPIA) should serve as an Algorithmic Impact Assessment that addresses problems of algorithmic discrimination, bias, and unfairness. Beyond serving as a tool in the GDPR’s systemic governance regime, the authors state that the DPIA should serve as an element of the GDPR’s protection of individual rights, connecting the two regulatory subsystems that underlie the GDPR. The way the DPIA links these two subsystems, the authors note, mandates the creation of “multi-layered explanations” of algorithmic decision-making, targeted at audiences ranging from oversight bodies and auditors to individuals. Privacy professionals will benefit from the authors’ suggestions for improving Algorithmic Impact Assessments in the GDPR context, which call for expanding the right to explanation into a “whole web of explanations…of differing degrees of breadth, depth, and technological complexity.”

If you’re interested in learning more about the relationship between privacy and discrimination, you’ll want to read the full papers from Cofone, and from Kaminski and Malgieri.


The Privacy Papers for Policymakers project’s goal is to put diverse academic perspectives in front of policymakers to inform the development of privacy legislation. You can view all of this year’s award-winning papers on the FPF website.

ICYMI: Future of Privacy Forum Highlights Potential “Unintended Consequences” of Child Privacy Policies at TechFreedom Event

The Future of Privacy Forum (FPF) recently joined top YouTube creators, FTC Commissioner Noah Phillips, and privacy experts from Google, the Georgetown Institute for Public Representation, and others on Capitol Hill for TechFreedom’s event, Will Kids’ Privacy Break the Internet? The COPPA Rule. FPF Director of Youth & Education Privacy Amelia Vance participated in an expert panel discussion about the Federal Trade Commission’s (FTC) ongoing review of the Children’s Online Privacy Protection Act (COPPA).

The event focused on the controversy surrounding the FTC’s September 2019 settlement with YouTube over COPPA, and YouTube’s response in November announcing changes regarding “child-directed” content. To help dispel the resulting confusion among creators, FPF published a “mythbusters” blog post addressing common misperceptions, including the belief that creators could “stop COPPA” by filing comments with the FTC. The FTC received more than 175,000 comments – including from FPF – as part of the agency’s ongoing review of COPPA. FPF’s comments urged the FTC to modernize COPPA in three key areas: policies related to voice-enabled technologies, guidance on COPPA’s “actual knowledge” definition, and greater alignment with the primary federal student privacy law, FERPA.

As YouTube content creators face this new uncertainty, Vance emphasized the importance of keeping the conversation focused on the facts and potential solutions. “I think the key here is to provide as much insight about what laws creators, in particular, have to follow,” Vance said. “We should be talking about how to make it more practical for the people who have to actually implement these new provisions.” However, she noted that, at the end of the day, child privacy in the U.S. may be “out of the FTC’s hands,” since rulemaking will take a significant amount of time and both Congress and state legislatures have indicated that they are eager to legislate on child privacy.

Vance, who spoke at the FTC’s COPPA workshops both in late 2019 and in 2017, reminded the audience that a lot has changed since the FTC’s last review of COPPA in 2013. Europe and California have both passed significant new consumer privacy laws with child privacy protections, and some European countries are considering even higher protections for children. Additionally, two new federal proposals call for extending the age of COPPA protections to 16, and one of those bills also includes an update of COPPA’s “actual knowledge” definition, a key enforcement mechanism.

Additionally, Vance cautioned against legislators or regulators expanding child privacy through “opt-in” parental consent, citing the example of students in Louisiana who, under a strict opt-in regime, missed out on the state’s scholarship program because they couldn’t get parental sign-off.

“It’s really important to remember that there are unintended consequences here,” Vance noted. “Where we’re going with privacy protections is an underlying framework of protections that would apply across the board, and not protections that parents have to consent to. Exactly what the boundaries of that…remains to be seen.”

Click here to watch the full TechFreedom event, read FPF’s comments to the FTC about COPPA here, and access additional FPF child privacy resources here.

To learn more about the Future of Privacy Forum, visit www.fpf.org and subscribe to FPF’s student privacy newsletter.

Contact:

[email protected]