Increased Surveillance is Not an Effective Response to Mass Violence

By Sara Collins and Anisha Reddy

This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crimes, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.” 

Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

More Untargeted Monitoring is Not the Answer

About 95% of schools are required to create internet safety policies under CIPA (these requirements are tied to schools’ participation in the “E-rate” telecommunications discount program). CIPA requires safety policies to include technology that monitors, blocks, and filters students’ attempts to access inappropriate online content. CIPA generally imposes monitoring requirements regarding obscene content, child pornography, and content that is otherwise harmful to minors.

The RESPONSE Act would impose new obligations, requiring schools to infer whether a student’s internet use might indicate they are at risk of committing self-harm or extreme violence against others. However, there is little evidence that detecting or blocking this kind of content is technically feasible, let alone that doing so would prevent physical harm. A report on school safety technology funded by the U.S. Department of Justice noted that violence prediction software is “immature technology.” Not only is the technology immature, the FBI found that there is no one profile for a school shooter: scanning student activity to look for the next “school shooter” is unlikely to be effective. 

By directing schools to implement “technology protection measure[s] that detect online activities of minors who are at risk of committing self-harm or extreme violence against others,” the RESPONSE Act would essentially require that all schools across the nation implement some form of comprehensive network or device monitoring technology to scan lawful content–a direct violation of local control and a serious invasion of students’ privacy. 

This broad language could encourage schools to collect as much information as possible about students, requiring already overwhelmed faculty and administrators to spend countless hours sifting through contextually harmless student data–hours that could be better spent engaging with students directly.

Additionally, this technology mandate could limit schools’ ability and desire to implement more thoughtful and effective programs and policies designed to improve school safety. Schools may assume that network monitoring technology is more effective than it actually is, and redirect resources away from evidence-based school safety measures, such as holistic approaches to early intervention. Further, without more guidance, school administrators would be forced to make judgment calls that result in the over-monitoring of student online activity.

The cost associated with the implementation of these technologies goes beyond buying appropriate network monitoring software, which is a burden in and of itself. Schools⁠—which are under-resourced and under-staffed⁠—would experience difficulty devoting funds and staff time to monitoring these alerts, as well as developing policies for responses to those alerts. These burdens are further compounded in rural school districts that already receive less funding per student. 

False Alerts Unjustly Trap Students in the Threat Assessment Process

In some cases, network monitoring does not end when the school day ends. Schools often issue devices for students to take home or provide online accounts that students access from home. Under the RESPONSE Act, these schools would be forced to monitor students constantly. If a school gets an alert during non-school hours, its default action may be to alert law enforcement. But sending law enforcement to conduct wellness checks is not a neutral action. These interactions can be traumatic for students and families, and can result in injury or false imprisonment. These harms are exacerbated when monitoring technology produces overwhelming numbers of false positives. 

Even if content monitoring technology were effective, the belief that surveillance carries no negative outcomes or consequences for students is a pernicious narrative. Surveillance technologies, like device, network, or social media monitoring services, can harm students by stifling their creativity, individual growth, and speech. Constant surveillance also conditions students to expect and accept that authority figures, such as the government, will always monitor their activity. We also know that students of color and students with disabilities are disproportionately suspended, arrested, and expelled compared to white students and non-disabled students. The RESPONSE Act’s proposed new requirements would only exacerbate this disparity. 

Schools, educators, caregivers, and communities are in the best position to notice and address concerning student behavior. The Department of Education has several resources outlining effective disciplinary measures in schools, finding that “[e]vidence-based, multi-tiered behavioral frameworks . . . can help improve overall school climate and safety.”

Ultimately, requiring schools to spend money on ineffective technology would divert much-needed resources and staff from providing students with a safe learning environment. Rather than focusing on filtering content, schools should emphasize the importance of safe and responsible internet use and use school safety funding on evidence-based solutions. By doing so, administrators can create a school community built on trust rather than suspicion.

FPF Receives Grant To Design Ethical Review Process for Research Access to Corporate Data

One of the defining features of the data economy is that research is increasingly taking place outside of universities and traditional academic settings. With information becoming the raw material for production of products and services, more organizations are exposed to and closely examining vast amounts of personal data about citizens, consumers, patients, and employees. This includes companies in industries ranging from technology and education to financial services and healthcare, as well as non-profit entities pursuing societal causes or other agenda-driven projects.

For research on data subject to the Common Rule, institutional review boards (IRBs) provide an essential ethical check on experimentation and research. However, much of the research relying on corporate data is beyond the scope of IRBs, because the data was previously collected, the project or researcher is not federally funded, or the data comes from a public data set, among other reasons.

Future of Privacy Forum (FPF) has received a Schmidt Futures grant to create an independent party of experts for an ethical review process that can provide trusted vetting of corporate-academic research projects. FPF will establish a pool of respected reviewers to operate as a standalone, on-demand review board to evaluate research uses of personal data and create a set of transparent policies and processes to be applied to such reviews.

FPF will define the review structure, establish procedural guidelines, and articulate the substantive principles and requirements for governance. Other considerations to be addressed include companies’ common concerns about risk analysis, disclosure of intellectual property and trade secrets, and exposure to negative media and public reaction. Following this phase, members who can be available for reviews will be recruited from a range of backgrounds. The project will include input and review by government, civil society, industry and academic stakeholders.

Sara Jordan, who will be cooperating with FPF on this project, has proposed one model for addressing this challenge. Her paper, Designing an AI Research Review Committee, calls for a committee dedicated to ethical oversight of AI research and gives serious consideration to the design of such an organization. The proposed design draws on the history and structure of existing research review committees such as IRBs, Institutional Animal Care and Use Committees (IACUCs), and Institutional Biosafety Committees (IBCs). It follows the IBC model but blends in features from human subject and animal care and use committees to improve implementation of risk-adjusted oversight mechanisms.

Another analysis and recommendation was published recently by the Northeastern University Ethics Institute and Accenture: Building Data and AI Ethics Committees. The paper argues that an ethics committee can be a valuable component of responsible collection, sharing, and use of data, machine learning, and AI within and between organizations. To be effective, however, such a committee must be thoughtfully designed, adequately resourced, clearly charged, sufficiently empowered, and appropriately situated within the organization.

European institutions are likewise grappling with these challenges through several recent AI guidance publications. The Council of Europe, for example, has established an ad hoc committee on Artificial Intelligence, which will examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design, and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy, and the rule of law.

BACKGROUND

The ethical framework applying to human subject research in the biomedical and behavioral research fields dates back to the Belmont Report. Drafted in 1976 and adopted by the United States government in 1991 as the Common Rule, the Belmont principles were geared toward a paradigmatic controlled scientific experiment with a limited population of human subjects interacting directly with researchers and manifesting their informed consent. Today, researchers in academic institutions, as well as in private sector businesses not subject to the Common Rule, seek to analyze a wide array of data sources, from massive commercial or government databases to individual tweets or Facebook postings publicly available online, with little or no opportunity to directly engage human subjects to obtain their consent or even inform them of research activities. Data analysis is now used in multiple contexts, such as combatting fraud in the payment card industry, reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K-12 schools, encouraging exercise and weight loss, and much more.

These data uses promise tremendous research opportunities and societal benefits but at the same time create new risks to privacy, fairness, due process and other civil liberties. Increasingly, researchers and corporate officers find themselves struggling to navigate unsettled social norms and make ethical choices for ways to use this data to achieve appropriate goals. The ethical dilemmas arising from data analysis may transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision making and filter bubbles.

In many cases, the scoping definitions of the Common Rule are strained by new data-focused research paradigms, which are often product-oriented and based on the analysis of preexisting datasets. For starters, it is not clear whether research of large datasets collected from public or semi-public sources even constitutes human subject research. “Human subject” is defined in the Common Rule as “a living individual about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information.” Yet, data driven research often leaves little or no footprint on individual subjects (“intervention or interaction”), such as in the case of automated testing for security flaws.

While obtaining individuals’ informed consent may be feasible in a controlled research setting involving a well-defined group of individuals, such as a clinical trial, it is untenable for researchers experimenting on a database that contains the footprints of millions, or indeed billions, of data subjects. In response to these developments, the Department of Homeland Security commissioned a series of workshops in 2011-2012, leading to the publication of the Menlo Report on Ethical Principles Guiding Information and Communication Technology Research. That report remains anchored in the Belmont Principles, which it interprets to adapt them to the domain of computer science and network engineering, in addition to introducing a fourth principle, respect for law and public interest, to reflect the “expansive and evolving yet often varied and discordant, legal controls relevant for communication privacy and information assurance.”

Ryan Calo foresaw the establishment of “Consumer Subject Review Boards” to address ethical questions about corporate data research. Calo suggested that organizations should “take a page from biomedical and behavioral science” and create small committees with diverse expertise that could operate according to predetermined principles for ethical use of data. No existing model maps directly onto the current challenges, however. The categorical, non-appealable decision-making of an academic IRB, which is staffed by tenured professors to ensure independence, will be difficult to reproduce in a corporate setting. And corporations face legitimate concerns about sharing trade secrets and intellectual property with external stakeholders who may serve on IRBs.

FPF’s work on this grant will seek to demonstrate the composition and viability of one way to address these challenges.


COPPA Workshop Takeaways

On Monday, the Federal Trade Commission (FTC) held a public workshop focused on potential updates to the Children’s Online Privacy Protection Act (COPPA) rule. The workshop follows a July 25, 2019 notice of rule review and call for public comments regarding COPPA rule reform. The comment period remains open until December 9th. Senior FTC officials expect the process to result in changes to the COPPA rule. The workshop also follows the Commission’s high-profile settlement with YouTube regarding child-directed content.

Monday’s workshop was a key part of the Commission’s review; the day-long session featured panel discussions focused on the various questions raised regarding COPPA’s continued effectiveness as technology evolves. FPF’s Amelia Vance spoke on a panel focused on the intersection of issues related to children’s privacy and student privacy.

During the edtech-focused panel, there was consensus that schools should be able to use the Family Educational Rights and Privacy Act’s (FERPA) school official exception to provide consent on behalf of under-thirteen students under COPPA, rather than collecting consent directly from parents. This allows schools to continue to exercise judgment over what technology is used, while preserving the privacy protections of both COPPA and FERPA. Many speakers noted that parents feel they have little transparency into the technology being used in their child’s school. The FTC may require increased transparency or notice to assuage these worries.

We also noticed several recurring themes throughout the workshop:

  1. The tension between child-directed content and “child-attractive” or child-appropriate content and what that means under COPPA;

  2. The misconceptions surrounding the meaning of “actual knowledge” and COPPA’s “product improvement” exception; and

  3. A need to focus on frameworks and technology that allow children to safely be online.

The Tension Between “Child-Directed” Content and “Child-Attractive” or Child-Appropriate Content

Several questions were posed regarding the meaning of “child-directed content.”

Panelists cautioned that determining whether content is child-directed by focusing solely on audience makeup could create a moving target for creators; they would constantly have to monitor their audience to ensure they don’t cross the “child-directed” threshold. Without clear methods for detecting when children are accessing general audience content, this tension could not only encourage additional data collection but also make it very difficult to create content for teenagers or “nostalgia” content for adults. Panelists noted that this tension extends beyond content creators to services originally intended for a general audience that unintentionally attract a child audience.

Harry Jho, a YouTube content creator, raised a concern that COPPA, as applied in the FTC’s YouTube settlement, will stifle creators’ ability to produce quality children’s online content. The settlement requires YouTube and creators to disable behaviorally targeted advertisements on child-directed content. Jho stated that he relies on behavioral advertising for the “lion’s share” of his revenue. Jho claimed that this settlement requirement will cause creators to suffer, and the quality of free children’s content on the internet to decline. Jho also articulated that there is confusion among creators about whether child-attractive or child-appropriate content will be considered “child-directed” under COPPA, resulting in less certainty than ever about whether COPPA applies to particular creators, channels, or videos.

Misconceptions: Actual Knowledge and Product Improvement

There was also significant confusion around the scope of COPPA and its definitions throughout the workshop. We heard many different opinions about the meaning of the actual knowledge standard, and the only point of agreement was that the YouTube settlement has contributed to the confusion. The FTC has said that having actual knowledge that there is child-directed content on your platform triggers COPPA. However, in the YouTube settlement, the FTC cited evidence that showed YouTube had knowledge that children were using the site, as well as pointing to channels that were obviously child-directed. Phyllis Marcus, a partner at Hunton Andrews Kurth, argued that the distinction between actual knowledge of child-directed content on a website versus actual knowledge that children are using a website seems to be collapsing. This shift, coupled with the confusion regarding the definition of “child-directed,” has caused significant uncertainty. Marcus believes that the use of the term “actual knowledge” in various other privacy regimes, such as the California Consumer Privacy Act, will also create substantial confusion for companies.

While discussing edtech, panelists raised the question of whether product improvement remains acceptable under COPPA. Ariel Fox Johnson of Common Sense Media argued that product improvement is a commercial purpose under COPPA, full stop, and that if schools are paying for a service, they should not also be “paying” with student data. FPF’s Amelia Vance argued that the product improvement exception is necessary to allow essential functions like security patches and authenticating users, so any changes should be carefully tailored.

Keeping Kids on the Internet 

A recurring theme was that some strategies for COPPA compliance have the unintended consequence of keeping kids off the internet. Jo Pedder, Head of Regulatory Strategy at the United Kingdom Information Commissioner’s Office, discussed the UK’s implementation of the age-appropriate design code. The code’s goal is to empower kids on the internet while keeping them safe, rather than keeping them out of the digital world. Instead of a one-stop age gate⁠—largely decried by panelists as an ineffective method of keeping kids safe from age-inappropriate content and data collection⁠—the design code requires entities to understand the age ranges of their users and use these “age bands” to, for example, tailor privacy notices or settings.
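As a rough sketch of how the “age band” approach might work in practice (the band names, cutoffs, and settings below are illustrative assumptions, not the ICO’s actual categories), a service could map a user’s age range to progressively more protective defaults:

```python
# Hypothetical sketch of "age bands": the names, cutoffs, and settings
# below are illustrative assumptions, not the ICO's actual categories.

def age_band(age: int) -> str:
    """Map an age to a coarse band used to tailor notices and settings."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 5:
        return "pre-literate"
    if age <= 12:
        return "child"
    if age <= 17:
        return "teen"
    return "adult"

def default_settings(age: int) -> dict:
    """Privacy-protective defaults tighten for younger bands."""
    band = age_band(age)
    return {
        "behavioral_ads": band == "adult",  # off for all minors
        "geolocation": False,               # off by default for everyone
        "notice_style": "standard" if band == "adult" else "plain-language",
    }
```

The point of the design is that protections flow from the service’s understanding of its audience, rather than from a single age gate that children are incentivized to lie their way past.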

Similarly, sites with a “mixed audience” under COPPA were heavily discussed, including whether age gates can be effective in this space. Dona Fraser of the Children’s Advertising Review Unit pointed out that when kids see an age gate, they treat it as a requirement to lie about their age. Children want to use the internet, and they worry about what they are missing out on. When a mixed-audience online service takes a holistic design approach, such as establishing a child-appropriate service by default, kids don’t feel like they are missing out on content and don’t have to lie.

Next Steps for the FTC

Several privacy advocates called for the Commission to exercise its 6(b) authority regarding COPPA-covered online services: under Section 6(b) of its enabling Act, the FTC has investigative authority to require reports providing “information about [an] entity’s ‘organization, business, conduct, practices, management, and relation to other corporations, partnerships, and individuals.’ 15 U.S.C. Sec. 46(b).” Panelists who raised Section 6(b) cited the lack of insight into what information is being collected by websites and applications, especially in the education technology sector. Panelists also asked the FTC to study the effectiveness of age gates, examine whether behaviorally targeted ads actually command a higher market value than contextual advertisements, and include the voices of the most important stakeholders–children–in its analysis. Several panelists further commented that the child privacy conversation needs to evolve beyond notice and consent, and urged the FTC to focus on creating requirements that provide privacy protections to children without adding notice or consent mechanisms that burden both parents and companies.

Many panelists also urged the FTC to engage in more enforcement actions. One panelist stated that more frequent enforcement actions would have a “tremendous effect” in rooting out bad actors and encouraging COPPA compliance.

For additional reading on the workshop, see these articles:

https://iapp.org/news/a/ftc-workshop-aims-to-inform-potential-coppa-updates/

https://www.edsurge.com/news/2019-10-08-the-ftc-has-its-sights-on-coppa-and-edtech-providers-should-take-notice

FPF Expands Health Privacy Initiative

FPF is delighted to announce that Dr. Rachele Hendricks-Sturrup has joined the staff as health policy counsel, strengthening FPF’s commitment to supporting the data protection and ethics guidelines needed for health data. In this role, Rachele will work with stakeholders to advance opportunities for data to be used for research and real world evidence, improve patient care, and allow patients to access their medical records. She will also continue to develop FPF’s projects around genetic data, wearables, and machine learning with health data.

Rachele received a Doctor of Health Science degree in 2018, with a special focus on pharmacogenomics and precision medicine. Previously, she conducted health information privacy-related research within Harvard Pilgrim Health Care Institute’s Department of Population Medicine, where she was one of the first research fellows to have a combined focus on addressing issues and challenges at the forefront of precision medicine and health policy.

As a prominent academic, Rachele has written numerous influential publications on consumer privacy and non-discrimination. She recently wrote a piece that looks at how direct-to-consumer genetic testing companies engage health consumers in unprecedented ways and leverage genetic information to further engage health companies. Many of her peer-reviewed manuscripts, including one relevant piece entitled “Direct-to-Consumer Genetic Testing Data Privacy: Key Concerns and Recommendations Based on Consumer Perspectives,” can be accessed via PubMed.

FPF Appoints Robbert van Eijk as Managing Director for Europe

FPF Expanding EU Programming

BRUSSELS – October 1, 2019 – The Future of Privacy Forum (FPF) today announced Robbert van Eijk as managing director for its operations in Europe. In this role, Eijk will implement FPF’s agenda in Europe, oversee its day-to-day operations, and manage relationships with stakeholders in industry, government, academia, and civil society.

“European data protection policies are driving privacy practices around the world,” said FPF CEO Jules Polonetsky. “As an established leader in the data protection field, Rob has technical and policy expertise that will be a tremendous asset as we provide on-the-ground guidance to European stakeholders navigating the dynamic data protection landscape.”

Prior to serving in this position, Eijk worked at the Dutch Data Protection Authority (DPA) for nearly 10 years, becoming an authority in the field of online privacy and data protection. He represented the Dutch DPA in international meetings and served as a technical expert in court. He also represented the European Data Protection Authorities, assembled as the Article 29 Working Party, in the World Wide Web Consortium’s multi-stakeholder negotiations on Do Not Track. Eijk is a technologist with a PhD from Leiden Law School focusing on online advertising (real-time bidding).

Peter Swire, FPF Senior Fellow and Professor at the Georgia Institute of Technology, worked with Eijk on the World Wide Web Consortium (W3C) Do Not Track process, which involved more than 100 organizations, and found him to be uniquely constructive. “Rob’s combination of technical insight, policy savvy, and integrity as a person is outstanding,” said Swire. “Rob is an acclaimed expert in EU data protection and the technology of processing personal data, while also understanding perspectives from the United States and globally. He will be a great leader for FPF in Europe.”

Eijk started his professional career in the automotive industry. As an onsite consultant, he specialized in dealer-network planning. Before joining the Dutch DPA, he founded a company with a focus on office automation for small-sized enterprises. In 2008, he sold the company, BLAEU Business Intelligence BV, after he had run it successfully for nine years. Eijk expects to deploy this expertise in the European tech market, helping local startups, entrepreneurs and technologists establish the knowledge and expertise needed to navigate tech and innovation policy.

“The Future of Privacy Forum could not have made a better choice than appointing Robbert van Eijk as Director for its European operations. He is a brilliant expert in privacy and technology matters and has contributed enormously to the European and international debate on these issues,” said Alexander Dix, Former Chairman of the International Working Group on Data Protection in Telecommunications (Berlin Group).

Eijk will collaborate with FPF EU senior policy counsel Gabriela Zanfir-Fortuna to expand FPF programming to bridge the gap between European and U.S. privacy cultures and build a common data protection language. Through its convenings and trainings, FPF helps regulators, policymakers, and staff at EU data protection authorities better understand the technologies at the forefront of data protection law. Last year, FPF kicked off its Digital Data Flows Masterclass, a year-long educational program designed for regulators, policymakers, and staff seeking to better understand the data-driven technologies at the forefront of data protection law and policy.

“FPF has a great reputation in the EU for bringing diverse stakeholders together to develop practical policy approaches to emerging technologies,” said Eijk. “It’s exciting to be part of a talented team exploring best practices for data portability, user control, the ethical use of AI, data research, anonymization, and other issues critical to data protection and fundamental rights in Europe and around the world.”

On 19 November, FPF will host its third annual Brussels Privacy Symposium in partnership with the Brussels Privacy Hub of the Vrije Universiteit Brussel. Details about the event, Exploring the Intersection of Data Protection and Competition Law: The 2019 Brussels Privacy Symposium, can be found here.

Media Contact:

Tony Baker

Future of Privacy Forum

[email protected]

202-759-0811

About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Key Findings From the Latest ‘Right To Be Forgotten’ Cases

Case C-136/17 GC et al v CNIL – right to be forgotten; lawful grounds for processing of sensitive data

Link to judgment: http://curia.europa.eu/juris/document/document.jsf?text=&docid=218106&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=335023

Main issue: 

Four unrelated erasure requests, all seeking to de-link news articles (some containing sensitive data) from Google search results pages, were rejected by Google. The CNIL upheld Google’s assessment, considering that the public’s right to information prevailed in all cases. The data subjects challenged the CNIL’s decision in court, which referred questions to the CJEU for a preliminary ruling. One key question was whether Google, as a controller and within the limits of its activity as a search engine, must comply with the prohibition on processing sensitive personal data, which has very limited exceptions. In other words, before displaying a search result leading to information containing sensitive data, must Google ensure that one of the exceptions under Article 9(2) applies? And should controllers be treated differently depending on the nature of the processing they engage in? Another question was whether information related to criminal investigations falls under the definition of information related to “offences” and “criminal convictions” under Article 10 GDPR, and is thus subject to the processing restrictions it imposes. The Court made detailed findings about the content of Article 17 GDPR (the right to be forgotten) and about the exceptions to the prohibition on processing sensitive personal data.

Key findings: 


Case C-507/17 Google – global de-listing requests 

Link to judgment: http://curia.europa.eu/juris/document/document.jsf?text=&docid=218105&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=1103956

Main issue 

Key points 

Relevant nuances 

CCPA 2.0? A New California Ballot Initiative is Introduced

Introduction

On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy Rights and Enforcement Act of 2020) that will be voted on in the 2020 election; if passed, the initiative would substantially expand CCPA’s protections for consumers and obligations on businesses. While the new proposal preserves key aspects of the current CCPA statute, there are some notable additions and amendments.

Notable Provisions 

The California Privacy Rights and Enforcement Act of 2020 ballot initiative would:

Next Steps

According to the California Elections Code (Cal. Elec. Code § 9002), the California Attorney General will hold a 30-day review process and public comment period, followed by five additional days for proponents of the initiative to amend the proposal, prior to the initiative appearing on the ballot.

As stated above, this proposal takes an idiosyncratic approach to the legislative process for laws passed via a ballot initiative, allowing amendments after it is signed by the governor if those amendments are “consistent with and further the purpose and intent” of the Act. This approach suggests a willingness to pass new amendments to help the law keep pace with emerging technology; by contrast, the standard process for amending ballot initiatives requires a supermajority vote of the legislature.

Civic Data Privacy Leaders Convene at MetroLab Annual Summit

By Kelsey Finch, FPF Senior Counsel

The MetroLab Network’s Annual Summit brought together an inspired group of civic, academic, industry, and nonprofit leaders to discuss the most important issues in smart cities and civic innovation. For the third year in a row, FPF partnered with MetroLab Network to promote data privacy perspectives and to advance responsible data practices within smart and connected communities.

This year at the Summit, I moderated a roundtable discussion of privacy officials representing Pittsburgh, Seattle, Boulder, and more than a dozen other cities who have joined the Civic Data Privacy Leaders Network, an FPF-led initiative supported by the National Science Foundation. Network members joined summit participants from academia, industry, and civil society to share their most pressing questions, concerns, and smart city success stories with each other. The roundtable highlighted the common privacy challenges and opportunities faced by today’s local government privacy leaders and sparked new ideas for promoting fair and transparent data practices.

In this candid and collaborative atmosphere, some common priorities emerged:

FPF also previewed a working draft of its forthcoming Smart Cities & Communities Privacy Risk Assessment at the roundtable, intended to help smart and connected communities ask the right questions and reach for the right tools to ensure that they are collecting, using, and sharing personal data responsibly.

Other important, data-centric discussions during the event included Thursday’s Mobility Data Management, Analytics, and Privacy session—in which Network member Ginger Armbruster of Seattle and I participated—and sessions dedicated to a new Model Data Handling Policy for Cities from UMKC, data equity and responsible data science, micromobility services, and digital equity and community engagement. Univision ran a Spanish-language story on the event focused on how smart cities can ensure equitable treatment and access to resources for immigrants.

While the Civic Data Privacy Leaders roundtable—and the MetroLab Summit as a whole—underlined the significant challenges that communities around the world face as they explore new technologies and data uses, it also highlighted the potential for civic innovation to deliver more livable, equitable, and sustainable communities. The event showcased how, by working together across sectoral and geographic boundaries, we can help city and community leaders strengthen their ability to collect, use, and share data responsibly and promote the public’s trust in smart city technologies and in local government.

To learn more or join the Civic Data Privacy Leaders Network, a peer group for local government privacy leaders from more than 25 localities in the U.S. and abroad, please contact me at [email protected].

The Right to Be Forgotten: Future of Privacy Forum Statement on Decisions by European Court of Justice

WASHINGTON, DC – September 24, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding two European Court of Justice decisions announced today in its cases with Google:

Key decisions about the balance of privacy and free expression remain to be settled by the European Court of Justice (ECJ). Although the ECJ’s two decisions generally support the rights of those searching the web to access links to information, both show the tremendous weight European law gives to privacy as a human right, which receives the strongest consideration before it is limited. Even though the court found that European law does not mandate global delisting when the Right to Be Forgotten is asserted, it indicated that a data protection authority could seek global delisting if the privacy balance called for it in a specific circumstance.

The court also made clear that within Europe there can be national variances in how the Right to Be Forgotten can be applied, given differences in local law and culture.

In a second case also decided today, the court declined to ban in advance the listing of results that include political, racial, or other sensitive information. It did require heightened consideration for such results, even requiring that, when the affected party objects, pages containing information about criminal histories include relevant context on the search page.

 

Media Contact:

Tony Baker

Future of Privacy Forum

[email protected]

202-759-0811

About the Future of Privacy Forum

Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

FTC should investigate app developers banned by Facebook – Statement by Future of Privacy Forum CEO

Future of Privacy Forum Calls on FTC to Investigate Apps That Misused Consumer Data

WASHINGTON, DC – September 20, 2019 – Statement by Future of Privacy Forum CEO Jules Polonetsky regarding Facebook’s announcement that it has banned 400 developers from its app store:

The FTC should quickly act against many of these app developers, since they share the blame with Facebook, and some could still be holding on to consumer data or continuing to sell it. If apps that misuse Facebook members’ data escape legal penalty, developers will get the message that there is no legal risk to improper data-sharing. Every company, and especially app developers, needs to understand that there are consequences for abusing consumer data. This situation demonstrates yet again that Congress should dramatically increase the human and technological resources available to the FTC and give it broader authority to levy civil penalties.

Media Contact:

Tony Baker

[email protected]

310-593-3680