Exploring Consumer Attitudes About Privacy

We’ve been taking a look at recent studies about consumer attitudes regarding data use and privacy.

A new study, “Privacy Front and Center,” from Consumer Reports’ Digital Lab, with support from Omidyar Network, found that American consumers are increasingly concerned about privacy and data security when purchasing new products and services, which may be a competitive advantage for companies that raise the bar for privacy. A majority of smart product owners (62%) worry about potential loss of privacy when buying such devices for their home or family, according to CR’s February 2020 nationally representative survey. The study also found that privacy- and security-conscious consumers appear to include more men and people of color.

Additional findings from Consumer Reports:

The Cisco 2020 Consumer Privacy Survey, “Protecting Data Privacy to Maintain Digital Trust,” found that protecting data privacy remains important to consumers during the pandemic. In fact, one-third of consumers are “Privacy Actives” who have stopped doing business with organizations over their data privacy practices. The survey also found that residents of all 12 countries in the study view their privacy laws very favorably and want more transparency about how their data is being used.

Cisco also found:

In addition, Deloitte recently released its 2020 Digital Consumer Trends survey, which focused on the growth in smart device use and data in the United Kingdom. It found that UK consumers have become less concerned about the use of their data: in 2018, 47% of respondents said they were ‘very concerned’, a figure that has since fallen by roughly half, to 24%.

FPF Welcomes New Members to the Youth & Education Privacy Team

We are thrilled to announce two new members of FPF’s Youth & Education Privacy team. The new staff members – Karsen Bailey and Bailey Sanchez – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.

You can read more about Karsen and Bailey below. Please join us in welcoming them to the team!

 

Karsen Bailey

Karsen Bailey is a Project Assistant for the Youth and Education Privacy team at the Future of Privacy Forum, where he assists with the planning and execution of the team’s projects. He will help manage scheduling for the team, assist with the organization and release of resources and expert insights, and manage the Student Privacy Compass website. Prior to joining FPF, he was a Legislative Intern in the office of Senator Bob Casey, working on agriculture, environment, and energy issues, and later served in the same office as an Executive Scheduling Assistant, providing administrative and scheduling support to the Senator’s team.

Karsen is a 2020 graduate of the University of Mississippi, where he completed his B.A. cum laude in Economics and Political Science with a minor in International Studies.

“I’m so excited to join a team of strong, committed people who are doing amazing work supporting students and teachers in the classroom while ensuring that their privacy is protected.”

 

Bailey Sanchez

Bailey Sanchez is a Policy Fellow with the Youth and Education Privacy team, primarily providing legal research and analysis for FPF’s resources and long-term projects. Bailey will also oversee FPF’s Train the Trainer program. Prior to joining FPF, Bailey was a legal extern at the International Association of Privacy Professionals, where she worked closely with IAPP’s General Counsel/Data Protection Officer and Research Director to produce research and support the legal team. Before law school, she also worked as an Attractions Coordinator at Disney.

Bailey is a 2020 graduate of the University of New Hampshire School of Law. While at UNH Law, she was a research assistant to Professor Alexandra Roberts, a legal writing teaching assistant to Professors Jennifer Davis and Rachel Goldwasser, and president of the Civic Engagement Society. She also spent a year as a student attorney in UNH Law’s Intellectual Property & Transaction Clinic. Bailey earned her Bachelor’s Degree from the University of Central Florida where she majored in Political Science and minored in Mass Communication.

“I am excited about joining FPF’s Youth & Education Privacy team during such a unique moment in time for student privacy. I’m looking forward to being a resource to stakeholders as they navigate new and existing student privacy concerns.”

Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out Student Privacy Compass, the education privacy resource center website.

FPF Best Practices and Contract Guidelines Help Companies Share Data with Academic Researchers

Does your company have data that could help academic researchers unravel the mysteries of human health, behavior, education, or other areas of study? Data held by private organizations has the potential to lead to scientific insights that can benefit society and improve lives – if it can be accessed in a responsible manner that respects personal privacy.

To that end, FPF has published a list of best practices for companies that are considering sharing personal data with academic researchers. The Best Practices for Sharing Data with Academic Researchers were developed by FPF’s Corporate Academic Data Stewardship Research Alliance, a group of more than two dozen companies and organizations, with the support of the Alfred P. Sloan Foundation.

The best practices favor academic independence and freedom over tightly controlled research and encourage broad publication and dissemination of research results while protecting the privacy of individual research subjects. Specific best practices include having a written data sharing agreement, practicing data minimization, and developing a common understanding of relevant de-identification techniques, among many others.
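
To make two of these practices concrete, here is a minimal sketch (our illustration, not language from the best practices document; the field names and helper are hypothetical) of how a company might pseudonymize direct identifiers and coarsen or drop unneeded fields before sharing records with an academic partner.

```typescript
// Hypothetical pre-sharing step: replace the direct identifier with a keyed
// pseudonym and apply data minimization before records leave the company.
import { createHmac } from "node:crypto";

interface CustomerRecord {
  email: string;
  name: string;
  zipCode: string;
  purchaseCategory: string;
}

interface SharedRecord {
  subjectId: string;        // keyed pseudonym in place of email and name
  zip3: string;             // coarsened geography (first three ZIP digits)
  purchaseCategory: string;
}

function prepareForSharing(record: CustomerRecord, secretKey: string): SharedRecord {
  // An HMAC keyed with a secret the company retains lets it link a subject's
  // records consistently while preventing researchers from reversing the ID.
  const subjectId = createHmac("sha256", secretKey)
    .update(record.email.toLowerCase())
    .digest("hex");
  return {
    subjectId,
    zip3: record.zipCode.slice(0, 3),
    purchaseCategory: record.purchaseCategory,
  };
}
```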

In addition, FPF published Contract Guidelines for Data Sharing Agreements Between Companies and Academic Researchers. The guidelines cover best practices and provide sample language for contracts between a company supplying data and one or more researchers using that data for academic or scientific research purposes.

Supporting ethical data sharing to enable academic researchers to use data from corporations is a priority for FPF. In addition to creating the data stewardship alliance and publishing the best practices, FPF recently awarded the first Award for Research Data Stewardship and is creating Ethical Data Sharing Review Committees to provide independent review and oversight. You can learn more about these initiatives at FPF.org/data.

Event Recap: Panel at the Annual Privacy Forum 2020

Authors: Hunter Dorwart and Rob van Eijk


To track and to get tracked: new innovative methods and advancements

On September 30, 2020, the Future of Privacy Forum participated in a panel at the Annual Privacy Forum 2020 (APF-2020). The event is organized annually by the European Union Agency for Cybersecurity (ENISA), the Directorate-General for Communications Networks, Content and Technology (DG CONNECT), and the Católica University of Portugal, Lisbon School of Law.

 

FPF’s Rob van Eijk contributed to a panel on tracking and tracing, To track and to get tracked: new innovative methods and advancements, alongside Marit Hansen (State Data Protection Commissioner of Land Schleswig-Holstein), Fernando Silva (Banco de Portugal, DPO cabinet), and Prokopios Drogkaris (ENISA, moderator).

 

The Pros and Cons of Existing Legal Provisions Against Tracking 

The process of online tracking has evolved alongside technology and, as a result, has become more ubiquitous and interconnected. Today, tracking through applications and even IoT devices captures an ever-wider range of user behavior. In some instances, users may even request such tracking-based services or treat them as the default option.

 

The proliferation of tracking tools across the technological ecosystem raises a basic question: how should tracking be defined? Marit Hansen defined tracking as the act of following something or someone; when done over the Internet, it involves analyzing behavior by making inferences about a user’s personal interests, information, and preferences.

 

Hansen also pointed out that there are many different tracking techniques and mechanisms, e.g., monitoring web traffic through cookies, tracking across devices, using the Media Access Control (MAC) address of a mobile phone to pinpoint geolocation, and utilizing Wi-Fi Access Points in public areas like airports or train stations. Such complexity has raised profound questions about how regulators can effectively limit the negative impacts of tracking without undermining further innovation or preventing the necessary use of these technologies.

 

According to Hansen, the implementation of the General Data Protection Regulation (GDPR) and ePrivacy Directive (2002/58/EC, amended by 2009/136/EC) has not solved the problem, as it has pushed industry to rely on cookie banners to obtain consent, which often overwhelm or confuse consumers (see 2019 CJEU C-673/17). The purpose of a cookie banner is to inform users and obtain their consent for services that are not strictly necessary. Users may often customize the use of cookies from a particular website through a dropdown menu. The problem, however, is that a user must choose from a list of cookie categories such as essential, functionality, social media, targeted, and advertising cookies. These choices create confusion and make it difficult to determine which cookies are strictly necessary for the website to function.
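
As a rough illustration of how those categories play out in practice, the sketch below (our own hypothetical example, not code discussed on the panel) writes non-essential cookies only for the categories a user has accepted in a banner.

```typescript
// Hypothetical consent-gated cookie helper (browser-side TypeScript).
type CookieCategory = "essential" | "functionality" | "social" | "targeted" | "advertising";

interface ConsentState {
  granted: Set<CookieCategory>; // categories the user has opted into
}

function setCookieIfConsented(
  name: string,
  value: string,
  category: CookieCategory,
  consent: ConsentState,
): void {
  // Strictly necessary cookies need no consent; everything else is opt-in.
  if (category !== "essential" && !consent.granted.has(category)) {
    return;
  }
  document.cookie = `${name}=${encodeURIComponent(value)}; Path=/; Secure; SameSite=Lax`;
}

// Until the user accepts "advertising", only the essential session cookie is written.
const consent: ConsentState = { granted: new Set<CookieCategory>(["essential"]) };
setCookieIfConsented("session_id", "abc123", "essential", consent);  // written
setCookieIfConsented("ad_id", "xyz789", "advertising", consent);     // skipped
```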

 

Recent Technological Changes in the Industry Can Complement Legal Instruments

In addition, any evaluation of existing legal provisions to mitigate harmful tracking must also take into account the way big industry players are addressing the concerns. Rob van Eijk gave an overview of how tracking technologies work behind the scenes of an online advertisement in order to make sense of the latest technological advancements in the industry (Figure 1).

Figure 1. A graphical representation of the dataflow behind the scenes of an online advertisement.

 

With respect to web browsing and cookies, the major browsers differ in how they restrict tracking, and those differences illustrate how the industry is changing. For example, while both Firefox and Safari restrict the use of third-party cookies, Firefox does so through Enhanced Tracking Protection (ETP) while Safari uses Intelligent Tracking Prevention (ITP). In addition, Google announced that Chrome will follow Firefox and Safari in blocking third-party cookies, but will not implement the change until 2022.

 

Van Eijk also identified four technical approaches that, in his view, indicate where the ecosystem is headed in a post-Do Not Track (DNT) world.

 

  1. Stricter browser security settings, e.g., SameSite cookies and Strict Site Isolation. We note that the implementation of these settings differs by browser.
  2. Google’s Federated Learning of Cohorts (FLoC), which aims to embed privacy by keeping the relevant computation inside the browser itself: FLoC uses machine learning in the browser to group people into audience segments.
  3. Apple’s IDFA and App Tracking Transparency Framework, which requires developers in iOS 14 to offer Privacy Nutrition Labels and obtain consent prior to tracking users across apps and websites.
  4. The Global Privacy Control (GPC), which allows users to signal a request that their data not be sold or shared, a preference recognized under US state privacy law such as the CCPA. A minimal sketch of two of these signals follows this list.
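
To illustrate two of these signals, here is a brief sketch based on the public SameSite and Global Privacy Control proposals (our own example, not code from the panel).

```typescript
// Server side: a SameSite=Strict cookie is only sent on same-site requests,
// which blunts its usefulness for cross-site tracking by default.
const setCookieHeader = "sid=abc123; Path=/; Secure; HttpOnly; SameSite=Strict";

// Client side: the Global Privacy Control proposal exposes the user's
// preference as navigator.globalPrivacyControl (and as a Sec-GPC: 1 header).
function userSentGpcSignal(): boolean {
  // Intersection type because globalPrivacyControl is not yet in standard DOM typings.
  const nav = navigator as Navigator & { globalPrivacyControl?: boolean };
  return nav.globalPrivacyControl === true;
}

if (userSentGpcSignal()) {
  // Where the law applies, treat the signal as a do-not-sell/do-not-share request.
  console.log("GPC detected: suppress sale or sharing of this user's data.");
}
```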

 

From the discussion, it became clear that the practice of tracking is changing across the entire ecosystem, not just within browsers but also across apps and through different policy initiatives. Such developments require policymakers and industry leaders to reevaluate their approach to traditional notice and choice frameworks because of the interconnected nature of technology and data sharing. Van Eijk suggested that relying solely on consent may not be the solution because it puts the burden on the user to avoid tracking. Rather, developers should embed safety and privacy within the configurability and design of the tools themselves. Such privacy by default should work in tandem with legal instruments.

 

Tracking Versus Tracing – Lessons Learned from Covid-19

Questions around tracking and tracing of data have become increasingly important in the context of Covid-19. Hansen stressed that tracing does not automatically equal tracking, as tracing tools are not always used to identify and track specific users. Governments around the world have utilized tracing techniques to mitigate and control the spread of the virus. While these techniques overlap substantially with tracking methods, tracing doesn’t “follow” users but merely allows individuals to be notified that they may have been exposed to Covid-19 based on their proximity to exposed areas or people.

 

Fernando Silva pointed out that the debate about tracking and tracing is not new, and that the challenges have evolved with technological advancements. Silva highlighted how more traditional methods of mapping created space for new tools such as Bluetooth, intelligent video analytics (IVA), RF and ultrasound tracking, biometric detection, and IPv6 fingerprinting. While each of these technologies offered safer methods for tracing, companies quickly began configuring them for tracking purposes.

 

A similar lesson has emerged from Covid-19 contact tracing apps. There is a growing concern that companies and governments will continue to use the pervasiveness and necessity of these apps to extend their tracking capabilities and create an environment more conducive to ubiquitous surveillance. Silva stressed that the current landscape has produced a variety of privacy risks and negative externalities, including the persistence of covert tracking, the diminished ability to exercise data subject rights, and the growing presence of function creep (the use of technologies for purposes contrary to their original intent).

 

Building Trust Through Verification and Design

Silva noted that at the heart of the debate around tracing lies an imbalance of power between individual users and the owners and operators of technology. As dependence on critical technologies grows, there is a real risk that users will lose even more control over how companies harvest their data. Indeed, without transparency and verification, new technologies can enable covert tracking and reinforce the imbalance of power by excluding individuals from services if they do not accept a more privacy-intrusive default setting.

 

In closing, Silva suggested that independent verification of these technologies could help engender trust and credibility in how governments and companies use them to combat Covid-19. Such verification could provide a layer of transparency around new technologies as well as push big industry players to take privacy considerations into account when designing these technologies in the first place.

 

The lessons learned not only from the pandemic but also years of legal and technical advancement have revealed a gap between the growing sophistication of tracking technologies and the regulatory environments that aim to protect individuals from intrusive and pervasive harm. Overcoming this gap requires embedding privacy into the configuration and design of technologies themselves and not letting new degrees of surveillance become the norm.

 

To learn more about FPF in Europe, please visit fpf.org/eu.

24 Organizations Release Principles for Protecting Student Data Privacy and Equity in the Pandemic

The Future of Privacy Forum (FPF) and 23 other education, healthcare, disability rights, data protection, and civil liberties organizations today released Education During a Pandemic: Principles for Student Data Privacy and Equity (available here). The Principles offer 10 guiding recommendations for schools as they rely on new technologies and data to facilitate remote, in-person, or hybrid learning models during the COVID-19 pandemic. 

Signatories, including National PTA, National Education Association, Southern Poverty Law Center, the National Association of School Psychologists, and the National Center for Learning Disabilities, initially sought to address challenges posed by school closures in the spring, from how to fairly assess student attendance to closing the digital divide and protecting virtual classrooms from unwelcome interruptions. However, when it became clear the pandemic would also dramatically reshape the 2020-21 school year, the group developed its 10 recommendations to help guide schools as they navigate an unprecedented and evolving situation.

“The pandemic is not over, and the challenges facing K-12 schools aren’t, either. We have a long way to go, and the success of data and technology-driven efforts to educate students during this time depends on trust and ensuring adequate privacy and equity safeguards are in place to protect students and their families,” said Amelia Vance, FPF’s Director of Youth and Education Privacy. “These 10 principles provide an excellent roadmap for schools to build and maintain trust with students and families, and ultimately create a supportive, safe, and inclusive learning environment for all students during this unprecedented time.”

While many schools spent the summer preparing for the return of students in person, a surge of cases in late July and early August forced many schools to alter their plans, sometimes just days before school started, leading to what The New York Times called a “lost summer” of opportunity to fix online learning. School districts are using data to inform their decisions to reopen campuses, and some schools, facing the prospect of a winter surge in cases, are making plans to revert to online learning if needed.  

The principles also raise key considerations for schools to ensure that all students are appropriately provided for during the pandemic. “This is an extraordinarily challenging time and it is more important than ever to ensure that schools guard against unfounded assumptions about students with disabilities that lead to segregation and unequal education,” said Jennifer Mathis, signatory Bazelon Center for Mental Health Law’s Deputy Legal Director and Director of Policy & Legal Advocacy. 

Highlights of The Principles for Student Data Privacy and Equity (available here) include:

 

See the full list of 10 principles and signatories here.

 

The release of the Principles for Student Data Privacy and Equity furthers FPF’s commitment to providing new and timely resources for educators navigating the unprecedented student privacy challenges posed by the COVID-19 pandemic.  Over the summer, FPF launched its “Privacy and Pandemics” professional development series for educators, also accessible through YouTube.  FPF also partnered with the National Center For Learning Disabilities (NCLD) to develop Student Privacy and Special Education: An Educator’s Guide During and After COVID-19 and is maintaining a comprehensive list of student privacy and COVID-19 resources on its student privacy-focused website, Student Privacy Compass.

To learn more about the Future of Privacy Forum, visit www.fpf.org

# # #

Contact: [email protected]

About FPF

The Future of Privacy Forum (FPF) is a nonprofit organization focused on how emerging technologies affect consumer privacy. FPF’s Youth & Education Privacy program seeks to protect child and student privacy while allowing for data and technology use that can help young people learn, grow, develop, and succeed. FPF works with stakeholders from practitioners to policymakers, providing technical assistance, resources, trend analysis, and training. The Youth & Education Privacy team runs Student Privacy Compass, the one-stop-shop resource site on all things related to student privacy. For more information, visit www.fpf.org

FPF Submits Feedback and Comments on UNICEF’s Draft Policy Guidance on AI for Children

Last week, FPF submitted feedback and comments to the United Nations Children’s Fund (UNICEF) on the Draft Policy Guidance on Artificial Intelligence (AI) for Children, which seeks “to promote children’s rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine children’s rights.” 

The draft policy guidance outlines nine requirements for child-centered AI, including to:

  1. Support children’s development and well-being;
  2. Ensure inclusion of and for children;
  3. Prioritize fairness and non-discrimination for children;
  4. Protect children’s data and privacy;
  5. Ensure safety for children;
  6. Provide transparency, explainability, and accountability for children;
  7. Empower governments and businesses with knowledge of AI and children’s rights;
  8. Prepare children for present and future developments in AI; and
  9. Create an enabling environment.

In the feedback and comments, FPF encouraged UNICEF to adopt an approach that accounts for the diversity of childhood experiences across countries and contexts. The feedback highlighted the need to address the specific and unique challenges children from marginalized groups face, particularly as AI may create or exacerbate prejudice, inequities, and harm for children from these communities. FPF also identified opportunities for the guidance to include strategies, tools, and resources to instruct stakeholders on ways to operationalize the requirements. Finally, the comments recommended a greater emphasis on acknowledging children as active participants in developing AI systems and their uses, and the importance of empowering children with digital literacy and citizenship skills.

Earlier in October, FPF also submitted comments to the United Nations Office of the High Commissioner for Human Rights Special Rapporteur on the right to privacy to inform the Special Rapporteur’s upcoming report on the privacy rights of children. FPF will continue to provide expertise and insight on child and student privacy, AI, and ethics to agencies, governments, and corporations to promote the best interests of children. 

Learn more about FPF’s US and international work on youth privacy here.

A Look Back at the Role of Law and the Right To Privacy in LGBTQ+ History

By Katelyn Ringrose, Christopher Wolf Diversity Law Fellow at the Future of Privacy Forum, and Christopher Wood, Executive Director of LGBT Tech, with thanks to Connor Colson, FPF Policy Intern. 

LGBTQ+ rights are, and have always been, linked with privacy. Over the years, privacy-invasive laws, practices, and norms have been used to oppress LGBTQ+ individuals by criminalizing and stigmatizing individuals on the basis of their sexual behavior, sexuality, and gender expression.  

In honor of October as LGBTQ+ History Month, FPF and LGBT Tech explore three of the most significant privacy invasions impacting the LGBTQ+ community in modern U.S. history: anti-sodomy laws; the “Lavender Scare” beginning in the 1950s; and privacy invasions during the HIV/AIDS epidemic. These examples, along with many more, will be analyzed in FPF and LGBT Tech’s upcoming white paper on the sensitivity of data concerning a person’s gender identity, sexual orientation, and sex life. 

1. Anti-Sodomy Laws and Sexual Privacy // 

U.S. anti-sodomy laws have been systematically utilized to oppress individuals through incarceration, denial of employment, and public shaming. In all American colonies the punishment for sodomy was death, a punishment that remained on the books in some states into the 19th century. In the early 20th century, sodomy was a felony in every state.

Anti-sodomy laws allowed law enforcement and communities to violate individual privacy by reporting suspected sexual activity. This practice of community and law enforcement invading the privacy of civilians continued well into this century, when the Supreme Court ruled the remaining state anti-sodomy laws unconstitutional in Lawrence v. Texas. In that 2003 case, Justice Anthony Kennedy refuted arguments that anti-sodomy laws protect against unwanted sexual activity:

The case does involve two adults who, with full and mutual consent from each other, engaged in sexual practices common to a homosexual lifestyle. The petitioners are entitled to respect for their private lives. The State cannot demean their existence or control their destiny by making their private sexual conduct a crime. Their right to liberty under the Due Process Clause gives them the full right to engage in their conduct without intervention of the government.

— Justice Kennedy, delivering the majority opinion in Lawrence v. Texas

More recently, scholars have begun to consider anti-sodomy laws in the larger context of “sexual privacy,” a distinct privacy interest that serves as a cornerstone for sexual autonomy, consent, human dignity and intimacy. 

Although Lawrence invalidated state anti-sodomy laws, those laws still remain on the books in many states. For example, as recently as 2011 to 2014, 12 men in East Baton Rouge Parish, Louisiana, were arrested for “crimes against nature.” Similarly, the government continues to regularly reveal, or “out,” information concerning people’s sexuality, gender identity, and HIV status through legal regimes, including, for example, state laws requiring people to use bathrooms corresponding to their sex assigned at birth, denial of healthcare services to transgender individuals, and mandatory disclosure requirements to obtain government services, including government-issued identification.

In addition to issues of government intrusion, corporate or commercial collection of data can lead to many of the same harms mentioned above, including the perpetuation of existing bias and the encoding of discrimination within systems of power. Moving forward, it is important to understand the practices of the past in order to better understand potential harms posed by the collection, use, and sharing of commercial data.

2. The Lavender Scare and the Role of Employment Protection //

Beginning in the 1950s, the U.S. federal government began surveilling and systematically purging LGBTQ+ employees from the civil workforce in what became known as the “Lavender Scare.” In 1953, President Eisenhower declared in an Executive Order that federal employees, as a matter of national security, should be investigated for “sexual perversion” and “mental illness.”

Over the next four decades, the resulting investigations led to more than ten thousand civil servants losing their jobs because of their sexual orientation. This movement made it largely impossible for federal employees to publicly identify as LGBTQ+. In fact, the stigma was so strong that federal employees were fired simply for “guilt by association” because they had known someone who was accused of being LGBTQ+.

The Lavender Scare was also the beginning of an intense fifty-year period of government surveillance of LGBTQ+ individuals spearheaded by the FBI. The majority of the FBI’s documents from its Sex Deviant Program have been destroyed, but those that remain show the extent of government spying. The FBI recruited informants within early LGBTQ+ rights organizations, photographed and tracked protestors to get them removed from their federal jobs, and regularly outed them. Former Secretary of State John Kerry, in a formal apology issued in 2015, acknowledged the practice:

In the past — as far back as the 1940s, but continuing for decades — the Department of State was among many public and private employers that discriminated against employees and job applicants on the basis of perceived sexual orientation, forcing some employees to resign or refusing to hire certain applicants in the first place. These actions were wrong then, just as they would be wrong today.

On behalf of the Department, I apologize to those who were impacted by the practices of the past and reaffirm the Department’s steadfast commitment to diversity and inclusion for all our employees, including members of the LGBTI community.

— Former Secretary of State John Kerry, on behalf of the Department of State

While the Lavender Scare and its associated practices within the federal government did not largely end until the 1990s, issues associated with government surveillance and over-policing still plague LGBTQ+ communities today. And it wasn’t until 2020 that the Supreme Court clarified, in Bostock v. Clayton County, that Title VII of the Civil Rights Act bans employment discrimination on the basis of sexual orientation and gender identity.

3. The HIV/AIDS Epidemic and the Importance of Medical Privacy //

Public health infrastructure has historically excluded acknowledgement of LGBTQ+ individuals, even perpetuating harm through underinvestment in healthcare. When the Human Immunodeficiency Virus (HIV) and the associated Acquired Immunodeficiency Syndrome (AIDS) came into the American consciousness in the early 1980s, most hospitals saw homosexuality as an illness, and care was laced with stigma.

HIV/AIDS, initially called GRID or Gay-Related Immuno-Deficiency Syndrome, was accompanied by a range of required HIV/AIDS disclosures. As a result of these disclosures, gay and bisexual men were fired from their jobs, kicked out of housing, and even refused treatment because of their potential or actual HIV status. In addition, they were often denied health insurance and, in order to pay for health coverage, were exploited by viatical settlement companies into selling their life insurance policies for pennies on the dollar.

The high rate of stigmatization led many individuals to avoid testing or treatment, often out of fear that their employer would learn about their LGBTQ+ status when visits or medications were billed to the employer’s insurance. Healthcare providers themselves perpetuated this stigma by refusing to treat HIV-positive patients. This stigma has led a significant portion of men who have sex with men to withhold their sexual orientation from their doctors, resulting in a lack of tailored care. These issues were only compounded by existing racism and inequities—with HIV prevention programs reaching Black communities at a slower pace than programs aimed at white gay and bisexual men, despite HIV/AIDS impacting a higher proportion of the Black population than other races and ethnicities.

A lack of medical privacy and inadequate anti-discrimination protections continue to impact the LGBTQ+ community. A recently issued rule from the Department of Health and Human Services (“HHS”) would eliminate federal protections for LGBTQ+ individuals in accessing health insurance, particularly transgender individuals who are already mistreated and neglected through the denial of equal coverage and care. 

At the same time, governments, physicians, and researchers use personal data to provide HIV/AIDS services, monitor healthcare efforts, and advance research that benefits LGBTQ+ communities. In these circumstances, the balance between public health and individual privacy is difficult to strike — at least partially due to the deep distrust that developed during the height of the HIV/AIDS epidemic.

Looking Forward //

Lessons learned from the past about privacy and LGBTQ+ history can, and should, continue to shape conversations today. For example, during the COVID era, we can apply lessons learned from the HIV/AIDS epidemic to examine issues around required medical disclosures for COVID-19. As we contemplate issues from the implementation of digital contact tracing to mandatory medical disclosures for individuals who have tested positive for COVID-19, we must understand that the collection of medical data, at least for the LGBTQ+ community, is an issue deeply rooted in history, laced with stigma, and marked by a lack of legal protection.

Today, connected devices and services are empowering members of the LGBTQ+ community to participate more fully online. Data regarding an individual’s sexual orientation, gender identity, or details about their sex life can be important to the provision of social and healthcare services, public health, and medical research. However, data pertaining to an individual’s gender identity, sexual orientation, and sex life can be incredibly sensitive—and the collection, use, and sharing of this data can raise unique privacy risks and challenges. Conversations around LGBTQ+ data privacy must take into account the harms of the past.

FPF and LGBT Tech continue to research LGBTQ+ Privacy. Please contact either Katelyn Ringrose at [email protected] or Chris Wood at [email protected] with any questions, comments, or to get involved. 

Dr. Lauren Gardner, Creator of Johns Hopkins University’s COVID-19 Dashboard, and UC Berkeley Data Analytics Researcher Dr. Katherine Yelick to Keynote FPF Conference on Privacy & Pandemics

Workshop Explores the Value and Limits of Data and Technology in the Context of the COVID-19 Pandemic

Today, the Future of Privacy Forum announced a pair of distinguished professors and researchers will be the keynote speakers for a two-day virtual workshop exploring the value and limits of data and technology in the context of a global crisis. The workshop, “Privacy & Pandemics: Responsible Uses of Technology and Health Data During Times of Crisis – An International Tech and Data Conference,” convened by FPF with the Duke Sanford School of Public Policy, Dublin City University, and Intel Corporation, will take place virtually from 10:00AM to 2:00PM EST on October 27th and 28th.

Dr. Lauren Gardner, whose team at the Johns Hopkins University Center for Systems Science and Engineering (CSSE) is responsible for aggregating the most comprehensive publicly available data set on the coronavirus pandemic, will deliver the day one keynote.

“We’re living through a moment when the stakes for the technology and data protection community are high because lives depend on the responsible use of public health data,” said FPF CEO Jules Polonetsky. “Dr. Gardner has been making high-stakes decisions to balance privacy and access to data throughout the pandemic. Privacy practitioners and public health officials can learn a lot from her experience with the COVID-19 Dashboard.”

The keynote speaker on day two of the workshop will be Dr. Katherine Yelick, the Robert S. Pepper Distinguished Professor of Electrical Engineering and Computer Sciences and the Associate Dean for Research in the Division of Computing, Data Science and Society at UC Berkeley. She is also the Senior Advisor on Computing at Lawrence Berkeley National Laboratory and leads the ExaBiome project on scalable tools for analyzing microbial data. Dr. Yelick will address the challenges of COVID data analytics, specifically the many different types of data involved, the numerous barriers to accessing that data, the variable quality of COVID data, and the challenges to fast and accurate analysis.

“The COVID-19 pandemic has elevated issues of data access and responsible data use, including respect for privacy, as it relates to epidemiology and public health surveillance,” said Dr. Sara Jordan, FPF Policy Counsel, Artificial Intelligence and Ethics. “Dr. Yelick offers a thoughtful analysis of the unique challenges of the COVID-19 response related to data access, sharing, protection, and use.”

Participants in the workshop include international leaders from academia and industry. The workshop begins with a session focused on the challenges associated with the urgent efforts to assemble, collect, manage, and transfer volumes of data from a variety of disparate sources to track the spread of COVID-19. The second session will assess the proliferation of technologies to facilitate contact tracing, exposure notification, thermal scanning, and isolation following an infection, attempting to balance the epidemiological benefits of these technologies against the privacy concerns they raise.

On day two of the workshop, the third session will feature experts discussing what we can learn from this pandemic for the future of law, regulatory authority, and social norms. The workshop will conclude with a session that considers the future direction of privacy law, technology, and research when interfacing with the scientific community. Following the workshop, a report will be prepared and used by the National Science Foundation to help set the direction for its Convergence Accelerator 2021 Workshops, speeding the transition of convergence research into practice to address grand challenges of national importance.

The event will take place on October 27th and 28th and is hosted in collaboration with the National Science Foundation, Duke Sanford School of Public Policy, SFI ADAPT Research Centre, Dublin City University, Intel Corporation, OneTrust, and the Israel Tech Policy Institute. To register and learn more about the event, go to the FPF website.

The Federal Trade Commission’s Updates to the COPPA FAQs

In July, the Federal Trade Commission (FTC) announced changes to update and streamline its Children’s Online Privacy Protection Act (COPPA) Frequently Asked Questions (FAQs). The COPPA FAQs supplement the COPPA Rule by providing plain-language guidance and examples of COPPA compliance. Although the Commission stated that the revisions “don’t raise new policy issues,” companies collecting or managing data from children under 13 should be aware of several significant changes and clarifications to the FAQs. The revised FAQs:

The changes to the FAQs, discussed in detail below, do not impact the Commission’s ongoing review of the COPPA Rule, for which The Future of Privacy Forum (FPF) provided comments in 2019. 

COPPA and Schools

The Commission made two significant changes to the “COPPA and Schools” portion of the FAQs: one regarding consent under COPPA and the other relating to other federal education laws. 

Consent in the School Environment

The requirements for collecting verifiable parental consent (VPC) have long been a challenge for companies that partner with schools or provide educational services intended for classroom use. The Commission’s updates to the FAQs clarify that companies contracting with schools must not state in their “Terms of Service or anywhere else” that schools are responsible for complying with COPPA because it is “the responsibility of the Operator [to comply] with the Rule.” The FAQs further clarify that companies must provide schools with “the same type of direct notice regarding its practices as to the collection, use, or disclosure of personal information from children as it would otherwise provide to the parent.” This change indicates that while the actual notice provided to schools can differ from that given to parents, it must be direct and must convey the same information as notice provided to parents.

Federal Student Privacy Laws

The updated FAQs provide more detailed descriptions of operators’ and schools’ obligations under applicable laws, including the Family Educational Rights and Privacy Act (FERPA), the Individuals with Disabilities Education Act (IDEA), and the Protection of Pupil Rights Amendment (PPRA). The previous FAQs did not mention IDEA and included only PPRA and FERPA as additional legal considerations. 

The Commission also updated language throughout the section on COPPA in schools to refer to operators’ and schools’ obligations under federal education laws. One notable addition is the following: “The school’s agreement with a third party operator must also be reviewed under the school official exception or other applicable exception under FERPA.” This language indicates that COPPA alone does not provide a basis for collecting student data; operators may obtain COPPA-required consent from schools and teachers only in the context of agreements subject to FERPA.

Response to the 2019 YouTube Settlement

The FAQs now include the Commission’s interpretations of COPPA that were dispositive in the landmark settlement with YouTube. The YouTube complaint alleged that YouTube maintained numerous child-directed channels and used persistent identifiers to serve targeted advertising on these channels. The settlement findings noted that the collection and use of persistent identifiers for targeted advertising from viewers of such channels violated COPPA because YouTube had actual knowledge that many of its channels were clearly child-directed, but did not obtain parental consent to collect, use, and disclose children’s personal information. In light of this, the updated FAQs include guidance for determining whether sites are directed to children, as well as guidance for mixed audience websites.

Directed to Children

The updated FAQs 1) address when COPPA deems content creators to be operators subject to the law; and 2) include four specific factors to help operators determine whether videos posted on their websites are directed to children. These four new factors specific to video build on the 10 factors listed by the Commission for determining whether sites are directed to children. This update largely incorporates a standalone blog that the Commission published to help content creators analyze whether their content is directed to children. The FAQs urge content creators to consider whether their content is directed to children because, in light of the YouTube Settlement, content creators may be considered operators (and thus subject to COPPA) if their sites collect personal information such as persistent identifiers from children. 

Mixed Audience

The FAQs provide deeper insight into how operators may determine whether their websites or services are directed to children, a mixed audience, or a general audience. The FAQs distinguish these three categories by clarifying that “the ‘mixed audience’ category is a subset of the ‘directed to children’ category, and a general audience site does not become ‘mixed audience’ just because some children use the site or service.” The Commission clarified that when operators’ sites or services target children under 13 but they are not the primary audience, operators can take advantage of the mixed audience exception.

If operators serve a mixed audience, they can establish age screens to ensure that they do not collect personal information from users under age 13 or to ensure they collect verifiable parental consent for those users. The FAQs also add details about how operators may appropriately establish age screens in the context of a mixed audience site or app. The Commission clarified that knowledge-based questions alone, such as a difficult math problem, are insufficient to screen children but that knowledge-based problems can be used “in addition to asking the age of the user.” The Commission also restated its longstanding position that companies must establish methods to prevent children from back-buttoning to enter a new age at an age gate, using technical means such as cookies.
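
As a purely illustrative example (ours, not the Commission’s), a neutral age screen for a mixed-audience site might record an under-13 answer in a session cookie so that back-buttoning and re-entering an older age does not bypass the gate.

```typescript
// Hypothetical neutral age gate for a mixed-audience site (browser-side sketch).
const AGE_GATE_COOKIE = "age_gate_under13";

function hasFailedAgeGate(): boolean {
  return document.cookie.split("; ").some((c) => c === `${AGE_GATE_COOKIE}=1`);
}

function submitAge(ageInput: string): "allow" | "parental_consent_required" {
  if (hasFailedAgeGate()) {
    // A previous under-13 answer was recorded: keep routing to the consent flow.
    return "parental_consent_required";
  }
  const age = Number.parseInt(ageInput, 10);
  if (Number.isNaN(age) || age < 13) {
    // Remember the answer for this session so back-buttoning cannot bypass the gate.
    document.cookie = `${AGE_GATE_COOKIE}=1; Path=/; Secure; SameSite=Strict`;
    return "parental_consent_required";
  }
  return "allow";
}
```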

IoT Devices and the Non-Enforcement Policy Regarding Voice Recordings

The updated FAQs include Internet of Things (IoT) devices—specifically, connected toys, smart speakers, and voice assistants—as commercial services subject to COPPA. The new FAQ F.6 incorporates the Commission’s 2017 Enforcement Policy Statement Regarding the Applicability of the COPPA Rule to the Collection and Use of Voice Recordings, which aligns with FPF’s 2016 recommendations regarding connected toys and voice recordings. The FAQs now state that the FTC will not enforce the prior parental consent requirement when operators 1) collect an audio file of a child’s voice for the purpose of fulfilling a request or conducting an internet search; and 2) maintain that file only “for the brief time necessary for that purpose.”

This policy applies as long as operators provide clear notice of their data collection, use, and deletion policies; do not request personal information via voice; use the audio file solely to fulfill the user’s request; and delete the file upon request fulfillment. In addition to this new section, the FAQs clarify that COPPA applies to connected toys, IoT devices, smart speakers, and voice recordings of children.
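
As a minimal sketch of that pattern (our illustration, not text from the FAQs; the transcription and fulfillment hooks are hypothetical placeholders), an operator would hold a child’s voice recording only long enough to fulfill the request and then discard it.

```typescript
// Hypothetical request handler: transcribe, fulfill, and immediately discard
// the audio, rather than logging, persisting, or reusing the recording.
async function handleVoiceRequest(
  audio: ArrayBuffer,
  transcribe: (audio: ArrayBuffer) => Promise<string>, // placeholder speech-to-text hook
  fulfill: (query: string) => Promise<void>,           // e.g., run the requested search
): Promise<void> {
  let recording: ArrayBuffer | null = audio;
  try {
    const query = await transcribe(recording);
    await fulfill(query);
  } finally {
    // Retain the audio only for the brief time necessary to fulfill the request.
    recording = null;
  }
}
```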

Additional Changes

The Commission made several other notable changes to the FAQs, including adding new methods for obtaining verifiable parental consent; clarifying that wireless network information is subject to COPPA; adding examples of “internal operations;” and removing guidance regarding the transition from the old COPPA rule. 

New methods for obtaining parental consent 

The FAQs highlight two new methods of obtaining parental consent. Operators may require “a parent to answer a series of knowledge-based challenge questions that would be difficult for someone other than the parent to answer.” Or, operators may compare and verify a parent’s photo identification with a photo submitted by the parent through facial recognition technology, as long as the FTC pre-approves the mechanism deployed for either option.

Wireless network information is subject to COPPA

The FAQs add the Commission’s finding that wireless network information used to infer the precise location of a child is personal information covered by COPPA and thus requires notice and parental consent prior to collection, per the 2016 InMobi settlement.

Further examples of internal operations 

The FAQs update the Commission’s definition of activities that support internal operations to include “activities necessary for the site or service to maintain or analyze its functioning,” specifically listing “intellectual property protection, payment and delivery functions, spam protection, optimization, statistical reporting, and debugging,” as such activities. The FAQs further remind operators that behavioral advertising and amassing profiles are not internal operations, consistent with settlements dating back to 2015.

Removing guidance relating to the transition from the old COPPA Rule

Throughout the FAQs, the Commission has removed language that distinguished the “old” and “new” COPPA rules. The Commission said that because the current regulations have been in place for seven years, it removed language regarding the transition between the two rules.

References:

This blog was authored by Anisha Reddy, Casey Waughn, and Tyler Park.

FPF, Highmark Health, and CMU Host Wired for Health: 2020 — Examining Biometric Technologies in the Age of COVID-19

On Thursday, October 8th, Highmark Health, Carnegie Mellon University’s CyLab Security and Privacy Institute, and Future of Privacy Forum hosted a virtual symposium—taking an in-depth look at the role of biometrics and privacy in the COVID-19 era. 

During this virtual symposium, expert discussants and presenters examined the impact of biometrics and privacy in the ongoing fight against the novel coronavirus. The world has changed. Today, keeping COVID-19 at bay is a top priority across our communities, throughout our country, and around the world. 

The full recording of the virtual symposium is available here: 

https://www.youtube.com/watch?v=rrJsjBERsUE

Presentations focused on emerging technology, covering advanced facial recognition systems, temperature scanning, and respiratory disease detection. Presenters included Dr. Anil Singh of the Allegheny Health Network and Satya Venneti, CTO and Co-founder of Telling.ai; Dr. Marios Savvides of the CyLab Biometrics Center at Carnegie Mellon University; and Dr. Yang Cai of the Visual Intelligence Studio at Carnegie Mellon University. 

The expert panel analyzed the privacy impacts of certain technologies, including the deployment of voice recognition and temperature sensing to identify disease symptoms associated with COVID-19. Panelists noted the inherent tensions within certain biometrics technologies, including the ethical and social dilemmas raised by public and private use. 

Panel discussants included: Dr. Lorrie Faith Cranor, Director and Bosch Distinguished Professor of the CyLab Security and Privacy Institute at Carnegie Mellon University; Dr. Lisa Martinelli, Chief Privacy and Data Ethics Officer, Highmark Health; Dr. Rachele Hendricks-Sturrup, Health Policy Counsel, Future of Privacy Forum; Kirk Nahra, Partner, WilmerHale; and Jules Polonetsky, CEO, Future of Privacy Forum.

We look forward to continuing this important conversation, with an in-person conference — Wired for Health 2021 — later next year.