Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade

Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent.

Read the White Paper


Child Privacy Protections Compared: California Consumer Privacy Act v. Proposed Washington Privacy Act

By Anisha Reddy, Tyler Park, and Amelia Vance

As legislatures consider enacting broad consumer privacy legislation, officials must consider whether, and how, to address children’s and teens’ privacy. The leading models for addressing consumer privacy contain language addressing child privacy that differs in significant ways. Many states have introduced legislation that mirrors the framework of the California Consumer Privacy Act (CCPA). The proposed Washington Privacy Act (SB 6281) has also emerged as an influential framework. CCPA and SB 6281 differ in many respects, including with regard to child privacy. As described below, the frameworks take different approaches to the age of youth protected, the statutory knowledge standards, and the consumer rights granted.

As FPF previously wrote, SB 6281 would create a comprehensive data protection framework for Washington residents that includes both individual rights and obligations on data “controllers,” (both for-profit businesses and nonprofits) that go beyond the rights and obligations in CCPA. A bill similar to SB 6281 failed to pass the Washington legislature in 2019, but SB 6281 is an influential model for states considering alternatives to California’s approach to consumer privacy legislation. 

Both CCPA and SB 6281’s approaches to child privacy build on the federal Children’s Online Privacy Protection Act (COPPA), which requires “operators of commercial websites and online services directed to children under 13 or knowingly collecting personal information from children under 13 to obtain verifiable parental consent prior to the collection, use, or disclosure of children’s personal information.” A chart with a full comparison of the relevant language in COPPA, CCPA, and SB 6281 is below. 

CCPA adds new consumer rights for children and also extends child privacy protections to teens. SB 6281 would add new consumer rights for children, such as data portability, but would not extend child-centric protections to teens. The approaches differ in how they craft protections for children: CCPA contains specific requirements regarding the sale of children’s data, while SB 6281 would place children’s data in a larger category of “sensitive data” that would enjoy heightened protection – the category would also include types of data not related to age such as biometrics. 

CCPA and SB 6281 differ in three key ways: 

Age of youth who are protected: CCPA extends child privacy protections to youth under age 16, while SB 6281 would provide heightened protections for children under 13. Though SB 6281’s approach mirrors COPPA’s age threshold, CCPA’s expansion of special protections to teens is likely to become more common. Most consumer privacy bills introduced since CCPA was passed in 2018 would also extend protections to teens. Creating special protections for teens’ data is consistent with international trends – Europe’s GDPR sets age 16 as the threshold for special privacy protections, but permits member states to reduce the age as low as 13; many countries have chosen to retain age 16 as the age of consent for data processing. Though SB 6281 would not apply age-based protections to teens’ data, it would apply strong protections to sensitive data for all consumers, not just young people. Therefore, teens using online services in Washington would still experience a meaningful increase in their privacy rights and protections.

Knowledge Standards: CCPA and SB 6281 contain different “knowledge standards” – they have different thresholds for determining when a regulated entity “knows” a user is a child. SB 6281 would categorize the “personal data from a known child” as “sensitive data,” which would require parent permission before a collector could process the child’s data. What constitutes a “known child” in this context is not clear. 

In contrast, CCPA adopts the “actual knowledge” standard from COPPA and adds that a business that willfully disregards a consumer’s age has actual knowledge of the consumer’s age. Entities are subject to COPPA if they have “actual knowledge that they are collecting, using, or disclosing personal information from children under 13.” While many experts have traditionally advised companies that this standard applied only if the company knew a specific child was using its platform, the FTC’s recent YouTube settlement has raised questions about whether more generalized knowledge that children are using an entity’s service could be interpreted as falling under the actual knowledge standard – for example, the FTC’s complaint noted that YouTube enticed brands to market on YouTube by highlighting that children were using the service.

Consumer rights granted: While CCPA creates protections for children and teens that relate to the sale of their data, SB 6281 would require parental consent before a controller may “process,” meaning perform “any operation” on, data of a known child. Under CCPA, children’s data cannot be sold unless parents (if a child is under 13) or teens (ages 13–15) opt in to sale. CCPA also provides new protections to all consumers regardless of age: they are given the right to request deletion of personal information, the right to know how their information is used, and businesses may not discriminate against consumers for exercising CCPA rights. While CCPA’s protections for children currently apply only to the sale of data, a ballot initiative to amend CCPA would expand the legal protections to cover the “sharing” of children’s data as well; the initiative will be voted on in November 2020.

SB 6281 would require collectors to obtain opt-in consent from parents before taking a much wider variety of actions than those covered by the child privacy provisions in CCPA. Collectors would not be permitted to process “sensitive data,” a category that includes data from a known child, without obtaining consent from the child’s parent or guardian. “Sensitive data” also includes non-age-related types of data such as religious beliefs, mental or physical health information, sexual orientation, unique biometric or genetic identifiers, and specific geolocation data. While CCPA’s child-specific protections only address sale of children’s data, SB 6281 would govern “any operation or set of operations which are performed on personal data or on sets of personal data, whether or not by automated means, such as the collection, use, storage, disclosure, analysis, deletion, or modification of personal data.” This scope of protections for children is wider than the protections included in both CCPA and COPPA.

Preemption? It is important to note that COPPA preempts some child privacy laws, preventing states from enacting requirements that conflict with COPPA provisions. Privacy expert Peter Swire has written that COPPA preempts states’ attempts to regulate activities covered by COPPA. Though the scope of COPPA preemption has not been decided by courts, the Federal Trade Commission, which enforces COPPA, wrote in an amicus brief that it believes Congress did not intend for COPPA to displace state laws that create additional protections for teens. Even if a court finds that COPPA preempts some or all of the child privacy protections in CCPA or SB 6281 (if it is enacted), both frameworks are nevertheless influential as Congress considers how to craft a comprehensive federal privacy law or update COPPA.

 

Washington state and California share a commitment to youth privacy, but the CCPA and SB 6281 approaches diverge in notable ways that could eventually create headaches for businesses attempting to comply with differing U.S. and international standards. We’ve seen this in student privacy, where edtech companies need to examine more than 100 state and federal student privacy laws to determine their legal obligations. Moving ahead in 2020, we expect to see other states introduce bills based on CCPA and SB 6281 that include additional protections for children, as well as standalone state and federal bills governing child privacy.

 

Child Privacy Protections in COPPA, CCPA, and SB 6281

Age
- COPPA: Applies to children under age 13.
- CCPA: Applies to children under age 16.
- SB 6281: Would apply to children under age 13.

Who Can Consent to Data Collection/Use?
- COPPA: Parents or guardians.
- CCPA: Parents or guardians (when the child is under age 13) or the teen (when they are age 13 or older).
- SB 6281: Parents or guardians.

Information Covered
- COPPA: Personal information from a child collected or maintained by operators of commercial websites and online services directed to children, or by operators with actual knowledge that they are collecting, using, or disclosing children’s data.
- CCPA: Personal information of consumers if the business has actual knowledge that the consumer is less than 16 years of age.
- SB 6281: The personal data from a known child.

Rights and Protections
- COPPA: Parental consent must be obtained before data is collected. Parents also have rights to access and delete their child’s information. Operators must also have a privacy policy; retain information only as long as necessary to fulfill the purpose for which it was collected; and maintain the confidentiality, security, and integrity of information.
- CCPA: Children under 13 (through a parent) and 13–15-year-olds must opt in before their data may be sold; consumers of all ages have rights to access, delete, and opt out of the sale of their data, to be informed of collection, and to not be discriminated against for exercising CCPA rights.
- SB 6281: Parental consent would need to be obtained before processing data from a “known child.” Consumers of all ages would have rights to access, correct, delete, and port their data, and to opt out of data processing.

How the Law/Bill Incorporates Child Privacy
- COPPA: Entirely focused on protections for children under 13.
- CCPA: Sale of data is opt-in rather than opt-out for children under 16; all other protections apply to all consumers, including children.
- SB 6281: “Personal data from a known child” would be a type of “sensitive data,” and “sensitive data” would require opt-in consent before processing.

Knowledge Standard
- COPPA: Applies to operators whose products are “directed to children” or that have “actual knowledge” they are collecting data from a child.
- CCPA: Applies to businesses with “actual knowledge” that a consumer is under 16; willful disregard constitutes actual knowledge.
- SB 6281: Would apply to personal data from a “known child,” with “child” defined as under 13. “Known” is not defined.

Exceptions
- COPPA: Information used for internal operations is exempt from the consent requirement.
- CCPA: None applicable.
- SB 6281: Collectors in compliance with COPPA’s verifiable parental consent mechanisms, and personal data regulated by FERPA.

 

Takeaways from the Understanding Machine Learning Masterclass

Yesterday, the Future of Privacy Forum provided bespoke training on machine learning as a side event during the Computers, Privacy and Data Protection Conference (CPDP2020) in Brussels. The Understanding Machine Learning masterclass is a training aimed at policymakers, legal scholars, social scientists, and others who want to understand more deeply the data-driven technologies that are front of mind in data protection discussions. The training drew strong interest from academics, civil society, and key policy staff in Brussels.

The expert speakers consisted of:

The speakers opened the black box of machine learning step by step. A key question the speakers answered was: How do you get from mathematical regression analysis to (unsupervised) learning and end up with a neural network?

The presentations shed light on the black box by bringing the details of the technology to an audience without an in-depth computer science background. Starting with a primer on the basics of the field, the speakers examined issues of particular consequence to policymakers such as transparency, fairness, bias, and discrimination.
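As a rough illustration of that progression (our own sketch, not taken from the masterclass materials), linear regression can be written as a model with no hidden layer trained by gradient descent; inserting a hidden layer with a nonlinearity into the same training loop yields a minimal neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + 1 plus a little noise.
X = rng.uniform(-1, 1, (200, 1))
y = 3 * X + 1 + 0.05 * rng.normal(size=(200, 1))

# Step 1: linear regression, viewed as a "network" with no hidden
# layer, trained by gradient descent on squared error.
w, b = np.zeros((1, 1)), np.zeros(1)
for _ in range(500):
    err = (X @ w + b) - y           # prediction error
    w -= 0.1 * X.T @ err / len(X)   # gradient step on the weight
    b -= 0.1 * err.mean(axis=0)     # gradient step on the bias

# Step 2: insert a hidden layer with a nonlinearity; the same loop,
# with the error backpropagated through both layers, is now a small
# neural network.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden representation
    grad = ((h @ W2 + b2) - y) / len(X)      # output-layer error
    gh = (grad @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= 0.5 * h.T @ grad
    b2 -= 0.5 * grad.sum(axis=0)
    W1 -= 0.5 * X.T @ gh
    b1 -= 0.5 * gh.sum(axis=0)

print(w[0, 0], b[0])  # the regression recovers roughly 3 and 1
```

The point of the sketch is that the conceptual distance from regression to a neural network is small: the loss and the update rule stay the same, and only the function being fit grows more expressive.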

The slides are available for download here. Attendees also received a copy of FPF’s Privacy Expert’s Guide to Artificial Intelligence and Machine Learning, a guide that explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services. Learn more about FPF’s presence in Europe.

A Privacy Playbook for Connected Car Data

Drivers and passengers expect cars to be safe, comfortable, and trustworthy. Individuals often consider the details of their travels—and the vehicles that take them between their home, the office, a hospital, their place of worship, or their child’s school—to be sensitive, personal data.

The newest cars contain numerous sensors, from cameras and GPS to accelerometers and event data recorders. Carmakers, rideshare services, tech companies, and others are increasingly using data about cars to reduce emissions, manage traffic, avoid crashes, and more. The benefits of connected vehicles for individuals, communities, and society are clear. So are the privacy risks posed by increased collection, use, and sharing of personal information about drivers, passengers, cyclists, and pedestrians.

It is crucial that companies, advocates, academics, technical experts, and policymakers craft creative solutions that promote the benefits of connected vehicles while mitigating the privacy risks. Global legal frameworks have a role to play in assuring meaningful data protection and promoting trust, as do voluntary, enforceable codes of conduct and technical standards.

However, it is plain that entities must look beyond legal obligations and consider how they will earn and maintain consumer trust. With this white paper, Otonomo has taken an important step to advance the dialogue on connected car data privacy.

Originally released in October 2019, Otonomo’s Privacy Playbook for Connected Car Data presents nine plays for putting privacy at the center of your data business practices.

Read the White Paper

Award-Winning Paper: "Privacy Attitudes of Smart Speaker Users"

For the tenth year, FPF’s annual Privacy Papers for Policymakers program is presenting to lawmakers and regulators award-winning research representing a diversity of perspectives, including those from students and academics. Among the papers to be honored at an event at the Hart Senate Office Building on February 6, 2020 is Privacy Attitudes of Smart Speaker Users by Nathan Malkin, a PhD student in computer science at the University of California, Berkeley, and his coauthors. The study surveys privacy attitudes of smart speaker users and presents an evaluation of users’ comprehension and use of existing privacy settings and controls.


In Privacy Attitudes of Smart Speaker Users, study authors Nathan Malkin, Joe Deatrick, Allen Tong, Primal Wijesekera, Serge Egelman, and David Wagner surveyed 116 owners of Amazon and Google smart speakers.

In an effort to understand whether smart speaker users are making informed decisions about the privacy consequences and controls offered by smart speakers, the authors used recordings of real interactions between the study participants and their devices. The authors found that “almost half did not know that their recordings were being permanently stored and that they could review them; only a quarter reported reviewing interactions, and very few had ever deleted any.” The authors found that the way smart speakers default to permanent storage of interactions with users places an “undue burden” on the user, and “is almost certain to result in most interactions going unreviewed.” However, the authors observe that, after the conclusion of the study, both Google and Amazon updated their voice assistants to allow for automatic data deletion after three or 18 months (Google) or the deletion of a day’s worth of recorded interactions (Amazon).

While more than 71% of smart speaker users had not raised privacy concerns related to their device in the past, the authors are careful to state that people are not “apathetic” about their privacy. Instead, people’s acceptance of smart speakers is tied closely to what is happening with their data, as well as the specific subjects in the speaker’s recordings. According to the study, more than 72% of users found recordings reviewed by a computer to be acceptable, while users were more likely to view human review of recordings as unacceptable. Additionally, users found certain interactions with smart speakers more sensitive than others. Recordings of children, financial information, sexual or medical topics, locations, and personally identifying information were viewed as particularly sensitive.

The authors conclude that privacy controls on smart speakers are underutilized. Based on the views of study participants, the authors suggest that voice assistants adopt shorter retention periods, despite the fact that users did not feel that their stored recordings presented a grave privacy danger. The authors may have revealed an important insight about individual perspectives on privacy: “people seem more protective of the privacy of others” than their own privacy.

If you’re interested in reading more about the attitudes of smart speaker users toward privacy, you’ll want to check out the full paper.


The Privacy Papers for Policymakers project’s goal is to put diverse academic perspectives in front of policymakers to inform the development of privacy legislation. You can view all of this year’s award-winning papers on the FPF website.

The Future Is Now: FPF at CPDP2020

Computers, Privacy and Data Protection (CPDP) Conference 2020 commences next week in Brussels, bringing together academics, data protection authorities, policymakers, data scientists, and civil society to network, exchange ideas, and talk over the latest trends. Check out the panels and events FPF will be participating in below. 

Algorithmic Regulation of Transportation

Wednesday, January 22 at 11:45, Petite Halle

We are bringing together experts across the privacy, mobility, and civic space to discuss the challenges of transforming—and enforcing—transportation regulations through the use of code and algorithms. This panel aims to build upon the issue as framed by the ITIF report released earlier this year, which introduced multiple potential frameworks for integrating automated enforcement mechanisms in the transportation industry. At CPDP, we hope to reexamine this issue with the specific lens of privacy and data protection and ultimately, identify concrete steps cities and mobility operators can take to share data responsibly. Specific questions we hope to address in this panel:

The speakers are Simon Hania, Uber; Ger Baron, City of Amsterdam; Karen Vancluysen, Polis; and Kara Selke, Streetlight Data. The panel is moderated by Rob van Eijk, FPF. 


The Future Is Now: Autonomous Vehicles, Trolley Problem(s) and How to Deal with Them 

Wednesday, January 22 at 14:15, Petite Halle

Autonomous and highly automated vehicles are likely to be the first products that bring AI to the masses in a life-changing way. They rely on AI for a variety of uses, from mapping, perception, and prediction to self-driving technologies. Their promise is great: increasing the safety and convenience of our cities and roads. But so are the challenges that come with them, from resolving life-and-death questions to putting in place a framework that protects the fundamental rights of drivers, passengers, and everyone physically around them. This panel proposes an EU-US comparative perspective to discuss essential questions. Are existing legal frameworks well-equipped to deal with these challenges? How much data, and what types of data, run through the systems of an autonomous vehicle? What rights are affected? What ethical considerations might play into decision-making algorithms around accidents?

Moderated by IAPP’s Trevor Hughes, the panel includes speakers Sophie Nerbonne, Director of Economic Co-Regulation, CNIL; Andreea Lisievici, Volvo Cars; Chelsey Colbert, FPF; and Mikko Niva, Vodafone. 


Turning the Tables: Academics in the Hot Seat

Wednesday, January 22 at 16:00, Grande Halle 

In numerous privacy and data protection conferences and workshops, academics moderate discussions between policymakers, regulators, and industry players. Academics are tough inquisitors and harsh critics, pointing out the shortcomings of legislation, the slow turn of the wheels of justice, the practical challenges of enforcement, and the tangled web of interests of businesses. In this session we turn the tables. Helen Dixon, Data Protection Commissioner for Ireland, will be asking the questions. The academics will be in the hot seat, providing direct and complete answers. Are their theories sound and coherent? Do they influence the world outside the ivory tower? Did their writings withstand the test of time?

Speakers include Franziska Boehm, Karlsruhe Institute of Technology; Neil Richards, Washington University School of Law; Omer Tene, IAPP; Gabriela Zanfir-Fortuna, Future of Privacy Forum. The session will be moderated by Helen Dixon, Data Protection Commissioner for Ireland. 


SIDE EVENT

Masterclass: Understanding Machine Learning

Thursday, January 23 from 16:00-18:00, Area 42, 46 Rue des Palais, 1030 Bruxelles, Belgium

This Masterclass is aimed at policymakers, legal scholars, social scientists, and others who want to understand more deeply the data-driven technologies that are front of mind in data protection discussions. Structured as an interactive lesson, the session will feature technology experts presenting a training focused on Artificial Intelligence and Machine Learning.

Attendees will be provided with a copy of “The Privacy Expert’s Guide to Machine Learning” and will join leading machine learning experts for a presentation geared at bringing the details of the technology to an audience without an in-depth computer science background. In addition to a primer on the basics of the field, issues of particular consequence to policymakers, such as fairness, bias, and data minimization, will be examined.

Expert Speakers: 

RSVP for the event here.


TECH POLICY HAPPY HOUR

Friday, January 24 from 17:00-20:00, Ginette Bar 

Join us at Ginette Bar in Place du Luxembourg for an evening of drinks and networking. No RSVP needed. 


ADDITIONAL INFORMATION

Please view the entire program for additional details.

If you would like to discuss FPF’s expanding activities in Europe, please contact us at [email protected].

FPF Welcomes New Staff to Focus on Artificial Intelligence and Mobility

FPF is pleased to announce the addition of two new members to its team: Dr. Sara Jordan and Chelsey Colbert.

As individuals’ personal data is increasingly used by algorithmic systems that employ machine learning and artificial intelligence technologies, the benefits to consumers, businesses and society are evident, but so are the privacy risks. In her role as policy counsel, Sara will lead FPF’s efforts to create an ethical review process that can provide trusted vetting of research projects. She will also support the organization’s work on artificial intelligence and machine learning and emerging ethical questions relating to privacy and data protection. 

Her portfolio includes the privacy implications of data sharing, data and AI review boards, privacy analysis of AI and machine learning (AI/ML) technologies, and analysis of the ethics challenges of AI/ML. Sara is an active member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Prior to joining FPF, Sara was a faculty member in the Center for Public Administration and Policy at Virginia Tech (2014–2020) and in the Department of Politics and Public Administration at the University of Hong Kong (2007–2013).

Similarly, individuals’ geolocation data is increasingly collected, used, and shared as part of businesses ranging from connected cars and scooter rentals to mobile apps and online advertising. While FPF recognizes the benefits of geolocation data that lead to the development of safer vehicles and more personalized services, we acknowledge the privacy risks associated with sensitive data use. Chelsey will serve as policy counsel, leading FPF’s portfolio on mobility and location data, including connected cars, autonomous vehicles, ride-sharing, micro-mobility, drones, and robotics. 

Prior to FPF, Chelsey was an associate at an international business law firm in Canada and was seconded as in-house privacy and data governance counsel to Sidewalk Labs, an Alphabet company that designs and builds urban innovations to help cities meet their biggest challenges. Chelsey holds a J.D. with a major in technology law and policy from the University of Ottawa.

FPF to Present First-Ever Research Data Stewardship Award

Nominations Requested by March 12, 2020

Today, the Future of Privacy Forum (FPF) is announcing a first-of-its-kind award recognizing privacy protective research collaboration between a company and academic researchers. When privately held data is responsibly shared with academic researchers, it can support significant progress in medicine, public health, education, social science, and other fields.

With this in mind, FPF is requesting nominations for its Award for Research Data Stewardship. The goal is to promote the safe use and transfer of privately held company data to academic institutions for study and analysis. The award is supported by the Alfred P. Sloan Foundation, a not-for-profit grantmaking institution that supports high-quality, impartial scientific research and institutions.

“Increasingly, the challenges facing our society – health, transportation, education – are being addressed by independent research on consumer data collected by private companies,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “This award recognizes projects that minimize potential privacy risks while helping academics access corporate data for research that benefits society.”

Academics and their corporate partners are invited to nominate a successful data-sharing project that reflects privacy protective approaches to data protection and ethical data sharing. Nominations will be reviewed and selected by an Award Committee comprised of representatives from FPF, leading foundations, academics, and industry leaders. Nominated projects will be judged based on several factors, including their adherence to privacy protection in the sharing process, the quality of the data handling process, and the company’s commitment to supporting the academic research. The award winner will be notified by Monday, March 16, 2020.

FPF will present the award at an April 6, 2020 gala in Washington, DC to a member of the academic research team and a senior executive at the company that provided the data. Applicants should apply by filling out the corporate and academic nomination forms by Thursday, March 12, 2020. Self-nominations and nominations from the public are welcome. Read more about the award, event, and more in the call for nominations, and email Kelsey Finch, FPF Senior Counsel, at [email protected] with any questions.

FPF Director of AI & Ethics Testifies Before Congress on Facial Recognition

WASHINGTON, D.C. – In a hearing today before the House Committee on Oversight and Reform, Future of Privacy Forum (FPF) Senior Counsel and Director of AI and Ethics Brenda Leong testified on the privacy and ethical implications of the commercial use of facial recognition technology.

“Technology has only accelerated the practice of identification and tracking of people’s movements, whether by governments, commercial businesses, or some combination thereof, leading to the real concerns about an ultimate state of ubiquitous surveillance,” wrote Leong. “How our society faces these challenges will determine how we move further into the conveniences of a digital world, while continuing to embrace our fundamental ideals of personal liberty and freedom.”

In her testimony, Leong emphasized that “not every camera-based system is a facial recognition system,” and that the term “facial recognition” is often used broadly and confusingly in reference to other image-based technologies that do not necessarily involve individual identification.

“Understanding how particular image-analysis technology systems work is a critical foundation for effectively understanding and evaluating the risks of facial recognition,” Leong noted in her written testimony. To help educate policymakers, consumers, and others about the varying levels of facial image software and associated benefits and risks, and privacy implications of each, FPF created the infographic, Understanding Facial Detection, Characterization, and Recognition Technologies.

Leong outlined a set of privacy principles created by FPF that should be considered as the foundation of any facial recognition-specific legislation, writing that “consent remains the critical factor, and should be tiered based on the level of personal identification collected or linked, and the associated increasing risk levels.” Leong highlighted that the default standard for consent should be an “opt-in” or “affirmative consent” model consistent with existing FTC guidelines.

As educational institutions across the country, including colleges and public school districts, consider the use of facial recognition technology on campus, Leong pointed to guidance in the privacy principles that calls for policymakers to: “Give special consideration to the age, sophistication, or degree of vulnerability of those individuals, such as children, in light of the purposes for which facial recognition technology is used, including whether additional levels of transparency, choice, and data security are required.” She also testified that “there is no good justification for the use of facial recognition in a K-12 school.”

In 2019, FPF held a webinar about facial recognition in schools and wrote to the New York State Legislature in support of a well-crafted moratorium on facial recognition systems for security uses in public schools, while cautioning against overly broad bans or language that might have unintended consequences on other security programs.

In her written testimony, Leong cited controversial developments surrounding the implementation of passports and the requirement that they include a photo, resistance to calls for a federally issued national ID card, and REAL ID requirements for state licensing as precedent for policymakers seeking to balance individual rights and freedoms with efficiencies and security.

“These historical discussions reflect the ongoing need to determine the appropriate balance of technological, legal and policy standards and protections, along with the underlying threshold question of whether some systems are simply too high risk to implement regardless of perceived benefits,” wrote Leong.

To read Leong’s written testimony, click here. For an archived livestream of the committee hearing, visit https://oversight.house.gov/. To learn more about the Future of Privacy Forum, visit www.fpf.org, and explore FPF’s facial recognition and biometrics work, including relevant resources, recommended reading, and other materials.

CONTACT: [email protected]

Award-Winning Paper: "The Many Revolutions of Carpenter"

For the tenth year, FPF’s annual Privacy Papers for Policymakers program is presenting to lawmakers and regulators award-winning research representing a diversity of perspectives. Among the papers to be honored at an event at the Hart Senate Office Building on February 6, 2020 is The Many Revolutions of Carpenter by Paul Ohm of Georgetown University Law Center. The paper’s detailed assessment of the 2018 Supreme Court opinion in Carpenter v. United States is an essential read for those interested in the changing conception of privacy in the criminal justice system.


The Supreme Court’s 2018 majority opinion in Carpenter v. United States, the author argues, is the most important Fourth Amendment opinion in decades. The opinion requires the police to obtain a warrant to access an individual’s historical whereabouts from the records of a cell phone provider.

Ohm states that Carpenter represents a new approach to the “reasonable expectation of privacy” test: “Until now, the Supreme Court has tended to pay more attention to the nature of the police intrusion required to obtain information than to the nature of the information obtained.” In Carpenter, the justices argued that individuals have a “reasonable expectation of privacy in the whole of their physical movements,” suggesting that data tracking those movements should be considered private and subject to warrant requirements.

Ohm notes that the Carpenter opinion serves as the death of the “third party doctrine” – an idea that holds that information a person voluntarily discloses to a third party is not protected by a reasonable expectation of privacy. The justices write: “the fact that the Government obtained the information from a third party does not overcome Carpenter’s claim to Fourth Amendment protection.” Ohm points out that the justices focused on the nature of the information rather than the structure of the database or its relation to the individual, likely ensuring that this opinion will apply to other massive collections of historical geolocation information.

Finally, Carpenter creates a previously unrecognized rule of “technological equivalence.” Ohm explains: “If a technology, or a near-future improvement, gives police the power to gather information that is the ‘modern-day equivalent’ of activity that has been held to be a Fourth Amendment search, the use of that technology is also a search.” The justices acknowledge that information technology is exceptional – different in kind, not merely in degree, from what has come before.

If you’re interested in reading more about how Carpenter v. United States represents an inflection point in Fourth Amendment court cases concerning privacy, you’ll want to check out the full paper.


The Privacy Papers for Policymakers project’s goal is to put diverse academic perspectives in front of policymakers to inform the development of privacy legislation. You can view all of this year’s award-winning papers on the FPF website.