New Privacy Tech Alliance Promotes Innovative Privacy Technologies
TEL AVIV, ISRAEL – June 25, 2019 – The Future of Privacy Forum and the Israel Tech Policy Institute are launching the Privacy Tech Alliance during CyberWeek 2019, to promote the market for privacy-protective technologies internationally, facilitate the development of new technologies, and maximize value for innovators and investors.
“As the data ecosystem and regulatory requirements grow more complex, companies need technical solutions from innovators in this emerging sector,” said Jules Polonetsky, CEO of the Future of Privacy Forum and a co-founder of the Israel Tech Policy Institute. “Our goal is to encourage the social benefits of technology that allows for data-driven insights while minimizing privacy risks.”
The Privacy Tech Alliance brings together innovative startups and academics in the privacy space with companies and government agencies that need solutions and investors who see the potential upside. Startups and academic researchers are joining leading Chief Privacy Officers (CPOs) and venture capitalists to:
Define the sector and explore the products and services privacy and tech leaders need
Promote the adoption of privacy technologies by the government and the private sector
Support research and connect researchers and startups with partners and funders
Foster relationships between companies and prospective customers
“The global nature of privacy regulation means there is a growing market for privacy-protecting technologies,” said Limor Shmerling Magazanik, Managing Director of the Israel Tech Policy Institute. “Companies around the world are eager for tech-based solutions to help them comply with the EU’s General Data Protection Regulation, the California Consumer Privacy Act, and state and national rules modeled upon them.”
A recent wave of investments indicates funders see promise in privacy tech. Companies involved with the Privacy Tech Alliance in the U.S., EU, and Israel provide privacy-enhancing technology tools and privacy program management solutions, including de-identification, secure communications, homomorphic encryption, active monitoring, and data mapping and discovery.
The Privacy Tech Alliance launch will be held at 5:00 Tel Aviv time on June 25 at Camilo – The Green House, George Waze 24, Tel Aviv Jaffa, Israel 6997714. Speakers at the launch event will include:
Jules Polonetsky, CEO of the Future of Privacy Forum and co-founder of the Israel Tech Policy Institute
Omer Tene, Vice President of Research for the International Association of Privacy Professionals and a co-founder of the Israel Tech Policy Institute
Limor Shmerling Magazanik, Managing Director of the Israel Tech Policy Institute and a Senior Fellow at the Future of Privacy Forum. Ms. Magazanik will manage the Privacy Tech Alliance, which will be overseen by an Advisory Board of industry leaders and researchers.
David Hoffman, Associate General Counsel of Security Policy and Global Privacy Officer, Intel
Daniel Goroff, Vice President and Program Director, Alfred P. Sloan Foundation
Rami Kalish, General Managing Partner & Co-Founder, Pitango Venture Capital
Dr. Yair Rotstein, Executive Director of the US-Israel Binational Science Foundation
Anna Pouliou, Head of Privacy, Chanel
Lindsey Finch, Executive Vice President, Global Privacy & Product Legal at Salesforce
Alisa Bergman, Vice President and Chief Privacy Officer, Adobe Systems
Mike Yeh, Assistant General Counsel Corporate, External and Legal Affairs, Middle East and Africa, Microsoft
Florian Schaub, Assistant Professor, School of Information, University of Michigan
Companies that have joined the Privacy Tech Alliance Advisory Board include Anonos, BigID, Duality, D-ID, Immuta, Nymity, OneTrust, Privacy Analytics, Truata, TrustArc, and WireWheel.
Click here to view an archived broadcast of the event.
To learn about Privacy Technologies, visit the Resources page.
About the Future of Privacy Forum
Future of Privacy Forum is a global non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.
About the Israel Tech Policy Institute
Israel Tech Policy Institute is an incubator for tech policy leadership and scholarship, advancing ethical practices in support of emerging technologies. Learn more about ITPI by visiting www.techpolicy.org.il.
FPF Letter to NY State Legislature
On Friday, June 14, FPF submitted a letter to the New York State Assembly and Senate supporting a well-crafted moratorium on facial recognition systems for security uses in public schools. FPF also cautioned against overly broad bans or language that might have unintended consequences for other security programs, including some that may involve biometric technology. Specifically, FPF recommended:
A targeted moratorium specifically focused on pausing the use of facial recognition systems for security purposes at public school facilities, rather than banning the use of all biometric technology prior to July 2022;
Permitting the continued operation of existing biometrics systems that do not rely on facial recognition, such as fingerprint and palm-print systems, and requiring a review of these systems; and
Analysis and reporting regarding the risks and benefits of biometric technology in schools. The report should include recommendations concerning both 1) the appropriate notice regarding the use of facial recognition systems; and 2) the appropriate level of consent applicable to such systems, if facial recognition technology is approved for future use.
FPF supports a moratorium to allow time for comprehensive study of the impact of facial recognition systems on school campuses. Our analysis of the risks and benefits of facial recognition systems suggests that an evidence-based review of widespread use of these systems in schools will likely find that the systems do not offer sufficient benefits when used for security purposes at public schools (as FPF Senior Counsel Brenda Leong discusses in this video). Although the desire to provide the highest levels of security and protection for students and school personnel is well-intentioned, it is unclear that facial recognition systems will actually make schools safer. Particularly in light of the costs of purchase, implementation, training, and maintenance, we believe the study is unlikely to find sufficient value or benefit in these systems to justify their risks and privacy impacts.
Schools may also face backlash from parents and staff who don’t want to be involved in such a system. For example, some parents who volunteer at school may wish to opt out of having their biometric information collected and stored. Although privacy best practices would require provision of an alternate method, any barrier to entry may decrease people’s willingness to volunteer or come to the school at all. For similar reasons, employees may also resist. Schools would thus incur additional costs to create alternatives for individuals who do not want to take part in a facial recognition system.
While FPF supports a moratorium on this technology, some provisions of the draft New York law contain broad language that may lead to unintended consequences. Facial recognition systems used for campus security triggered the immediate concerns, and those systems should be the moratorium’s target. Schools may implement facial categorization technologies in other ways that, if banned outright, would prevent or compromise current services to students. For example, schools may currently use biometric software that does not identify individuals but measures facial expressions, voice, or gait in order to help students in special education, occupational therapy, and physical therapy programs. If the ban applies broadly to all biometrics in all cases, it could unintentionally eliminate these services and programs.
Likewise, some school systems in New York have already purchased and implemented biometric systems based on fingerprints and palm prints for lunch-line efficiencies, attendance reporting, and other administrative functions. These systems are widespread throughout the country and have not typically presented high-risk factors for student privacy. Allowing these school districts to continue using these systems would prevent unnecessary costs of reverting to less-reliable technology, unless or until any risks are identified. Excessively broad language concerning biometric collection or use might even compromise the current practice of collecting the fingerprints of staff and other employees at public schools in order to run background checks, an outcome that would actually decrease student safety.
Instituting a moratorium on facial recognition technology in schools, while permitting continued operation of other existing biometric programs, would mitigate privacy risks and create time for the state to review the risks and benefits of biometric programs for students, teachers, parents, and others. The study should, of course, consider all aspects of biometrics use and make appropriate recommendations. By allowing existing programs to continue in the interim, schools could gradually make necessary changes without negatively impacting students or services.
Finally, if the study does find appropriate uses or justifications for facial recognition systems, we recommend that the current requirement to provide appropriate notice to those affected be expanded to require appropriate consent by school employees, students, visitors, and others who might be impacted. Establishing an express consent requirement and/or options to opt out are important for protecting individual privacy.
The full FPF letter to members of the State Senate can be read here, and to members of the State Assembly here.
Ethical and Privacy Protective Academic Research and Corporate Data
Is edtech helping or hindering student education? What effect does social media have on elections? What types of user interfaces help users manage privacy settings? Can the data collected by wearables inform health care? In almost every area of science, academic researchers are seeking access to personal data held by companies to advance their work.
Data held by companies has the potential to unlock new scientific insights that benefit society and expand human knowledge. When responsibly shared with academic researchers, this data can support progress in medicine and public health, education, social science, and many other fields.
But the data researchers need is often unavailable due to a range of barriers, including the need to protect privacy, address commercial concerns, maintain ethical standards, and comply with legal obligations.
To help companies tackle these challenges, the Future of Privacy Forum has launched the Corporate-Academic Data Stewardship Research Alliance, a peer-to-peer network of private companies that share the goal of facilitating privacy-protective data sharing between businesses and academic researchers.
The Alliance will support data sharing efforts under way, help address and mitigate the challenges that create barriers to sharing, and promote practices that encourage more data sharing between industry and academic researchers. So far, more than 25 prominent companies are participating in the Alliance’s activities.
In its initial work, the Alliance has identified a number of existing barriers to data sharing and has begun to address potential solutions that support compliance with legal, policy and ethical concerns.
Alliance participants agree on the need for a common understanding of the legal landscape with regard to sharing personal information with researchers. In response, the Alliance is producing an overview of how the use and sharing of personal information for research purposes is treated in key privacy laws, as well as a paper that analyzes the legal landscape and argues that lawmakers should continue to make allowances for scientific research when drafting future privacy laws.
The Alliance has also begun work on establishing a set of best practices for sharing data for research purposes. Those best practices include data security, de-identification, vendor management, due diligence, training and education, and more. This work will likely result in a guidance document or an industry Code of Conduct.
A major barrier to data sharing identified by the participants is the lack of contractual uniformity. Research institutions, some of which are subject to state procurement rules, may require their own contractual terms. This creates scaling issues, in which the company must negotiate with each institution separately. Additionally, companies want to ensure that the contracts include provisions that address and reduce the risks (privacy, security, etc.) inherent in data sharing. Going forward, the Alliance will gather or develop model contractual terms or template agreements that all parties can agree to, with the goal of easing the negotiation process and ensuring that appropriate protections for all parties, including data subjects, are included in the written agreements.
The lack of access to an Institutional Review Board (IRB) or ethics review board is another roadblock for companies. Some companies have expressed a preference for an independent third party that could review a range of privacy and ethical issues that go beyond what a traditional IRB might address. The Alliance will support efforts to develop effective options for independent review of data sharing and the related research purposes.
To encourage privacy protective data sharing for scientific research, the Alliance will create a new Award for Leadership in Data Stewardship and Achievement in Academic Research.
The Alliance welcomes industry participants to join our monthly calls and contribute to our work. If you are interested in learning more about the Alliance, please contact FPF Senior Fellow Mike Hintze at [email protected].
NAI’s 2020 Code of Conduct Expands Self-Regulation for Ad Tech Providers
As debates over the shape of federal privacy legislation in the United States continue, online advertising remains a key focus of scrutiny in the US Congress, as demonstrated by its recent hearing on digital advertising and data privacy. Amidst these debates, the Network Advertising Initiative (NAI), the leading self-regulatory body for online advertising technology (ad tech) providers, announced a major update to its Code of Conduct on May 14, 2019. The revised 2020 Code of Conduct officially expands the scope of the NAI Code to a broader range of products and technologies in the online advertising industry and strengthens existing requirements, a crucial step given the recent attention to privacy in ad tech. The NAI is a non-profit, self-regulatory association responsible for creating and enforcing third-party advertising standards for data collection and use in online and mobile advertising.
According to the NAI, the 2020 Code of Conduct is the largest overhaul of its self-regulatory requirements since the Code was originally released in 2000. The 2020 Code now includes digital advertising practices such as the use of offline data for tailored advertising, and incorporates sensor technology and real-time uses of location data. The Code update also aims to “future-proof” Tailored Advertising by covering any use of previously collected user data to target advertising across websites and apps.
Key takeaways from the 2020 Code of Conduct:
Inclusion of “Audience-Matched Advertising” and TV Data. One of the largest updates to the NAI Code is that NAI has expanded the scope of its coverage to include “Audience-Matched Advertising” and “Viewed Content Advertising,” which, along with more traditional online and mobile advertising and cross-device linking, are collectively termed “Tailored Advertising.” Audience-Matched Advertising refers to “using data linked, or previously linked, to personally-identified information for the purpose of tailoring advertising . . .” In essence, this refers to supplementing a target audience using data or inferences that were originally tied to identified individuals (via a name or email address, for example), often from offline sources, such as loyalty programs, retailers, or public records.
Viewed Content Advertising refers to data collected from viewed video content, for example from Smart TVs. This inclusion reflects a broader debate over regulating data privacy issues involving Smart TVs, which increasingly collect data on consumer activity and viewing preferences for digital marketing purposes. For more, see FPF’s 2018 description of Smart TV data collection practices, Seeing the Big Picture on Smart TVs and Smart Home Tech.
Sensitive Health Data. The NAI has long required opt-in consent for the use of data about sensitive health conditions, which includes a fact-specific determination of the seriousness or sensitivity of the condition. Under the NAI’s commentary, sensitive conditions include, for example: drug addiction; sexually transmitted diseases; mental health conditions; pregnancy termination (but not pregnancy); cancers; and — new to the 2020 Code — “all conditions predominantly affecting or associated with children that are not treated by over-the-counter medications.” It does not include less serious health conditions, such as allergies or cold and flu, or wellness interests, such as vitamins and supplements. Other updates include: (1) an exemption for sensitive interest segments for fundraising and non-profit uses (such as reaching people likely to donate to specific health causes, as long as they are not inferred to have the condition); (2) an exemption for targeting to medical professionals; and (3) an explicit requirement of opt-in consent to target users at sensitive locations, such as abortion clinics or LGBT clubs, using precise location data.
“Sensor Information.” The NAI has added this term and a requirement for opt-in consent to access “information from a camera, microphone, or any sensor on a user’s device that may collect biometric data.” In commentary, NAI notes that NAI members seeking opt-in consent should ensure “just-in-time notice, such as through an interstitial page, prior to the use of platform-provided consent mechanisms.” Sensor information does not include information such as barometric pressure or accelerometer data that is used to determine the status of the device, and here we note a possible future privacy concern: the collection of multiple points of non-sensitive sensor data for “behavioral biometrics,” or a means of identifying a user or device based on holistic information about how users physically interact with the device.
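Returning to the opt-in requirement itself: for a concrete sense of what a platform-provided consent mechanism looks like on the web, the sketch below uses the browser’s standard getUserMedia API, which forces an explicit permission prompt before any camera or microphone data can be read. The function name and error handling are illustrative, not drawn from the NAI Code.

```typescript
// Illustrative sketch: a web page cannot read camera or microphone data
// until the user grants the browser's explicit permission prompt.
// Only getUserMedia is a standard API here; the rest is hypothetical.
async function requestSensorAccess(): Promise<MediaStream | null> {
  try {
    // Calling getUserMedia triggers the platform's just-in-time consent UI.
    return await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: true,
    });
  } catch {
    // The user declined (or no device is available): collect nothing.
    return null;
  }
}
```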
Precise Location Information. The 2020 Code extends an opt-in consent requirement to real-time uses of precise location data. Previously, consent was required before precise location data could be collected or shared for tailored advertising; however, real-time uses (such as targeting an advertisement to a geo-fenced zone) were considered “contextual” and not covered by the Code. The Code now includes such real-time uses in its overarching requirement of opt-in consent for precise location data. At the same time, the requirements have been relaxed slightly to permit NAI members to rely on reasonable assurances that a partner application or website has obtained consent on behalf of the member.
Transparency for Political Targeting. The 2020 Code now requires NAI members that use interest segments based on political categories (e.g. “Republican” or “Pro-Choice”) to disclose these political segments on their websites. While this update is timely in light of federal efforts to address accountability in political advertising, such as the 2017 Honest Ads Act, it is limited insofar as it does not address a wide variety of political advertising practices, such as proxy segments (targeting political content to “gun owners,” or “vegetarians”), custom segments (combinations of attributes), or generic segments that may nonetheless be used for express political advocacy. The NAI Code update is intended to complement the Digital Advertising Alliance’s (DAA) recently released guidelines for a “Political Ad” icon which is intended to provide users with information about political advertisers directly from the ads.
Expanded Prohibitions on Secondary Uses of Data. NAI members have long been prohibited from using advertising data for employment, credit, health care, and insurance eligibility. Under the 2020 Code, members are also explicitly prohibited from using such data for tenancy eligibility and education admissions. The 2020 Code further clarifies that such data should not be used for any non-marketing purposes, even if not specifically enumerated.
Age Restrictions for Tailored Advertising. The minimum age at which NAI members may specifically target advertising to users without obtaining parental consent was raised from 13 to 16. This change is occurring in the midst of discussions in DC about creating additional privacy protections for children and teenagers. For example, Senator Markey recently introduced legislation to update the Children’s Online Privacy Protection Act (COPPA) by prohibiting internet companies from collecting personal and location information from anyone 13 to 15 years old without the user’s consent.
Use of Personally-Identified Information (PII). NAI uses the term “personally-identified” to refer specifically to data linked to a person’s identity (usually a name, email address, or phone number), such as data appended from offline sources. The 2020 Code clarifies that members who use PII or hashed PII for Tailored Advertising must now provide a PII-based opt-out mechanism, on both the member’s website and the NAI website, that applies to the NAI member’s use of that PII across all browsers, applications, and devices. Members retaining PII for behavioral advertising must also provide users with reasonable access and an option to request that the member permanently delete the user’s PII. These changes parallel the access, deletion, and opt-out requirements of the California Consumer Privacy Act (CCPA), which takes effect in 2020. Although possible amendments to the CCPA are still pending, the law is anticipated to have a broad effect on the ad tech industry.
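As a rough illustration of the “hashed PII” the Code refers to, the sketch below normalizes an email address and derives a SHA-256 digest: an opaque token that lets parties match records, or honor an opt-out request, without exchanging the raw address. The normalization shown (trim and lowercase) is an assumption for illustration, not an NAI requirement.

```typescript
import { createHash } from "crypto";

// Hedged sketch of hashed PII: the same normalized email always yields the
// same digest, so two parties can match records (or honor an opt-out list)
// without sharing the raw address. The normalization rule is illustrative.
function hashEmail(email: string): string {
  const normalized = email.trim().toLowerCase();
  return createHash("sha256").update(normalized).digest("hex");
}

// Identical input yields an identical token on both sides:
console.log(
  hashEmail("Jane.Doe@Example.com") === hashEmail("jane.doe@example.com")
); // true
```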
The revised NAI Code of Conduct will come into effect in January 2020. With advertising technologies continuing to advance, federal regulators as well as state and local legislators are paying greater attention than ever before to digital advertising practices. In these debates, self-regulatory efforts play a unique role, and organizations such as the NAI have the ability not only to go beyond existing laws in addressing consumer privacy concerns, but also to help shape evolving legislative efforts. We are glad to see the NAI taking major steps in the right direction to remain on the front lines of protecting consumer privacy and promoting responsible business practices.
CCPA Amendment Update June 2019 – Twelve Bills Survive Assembly and Move to the Senate
Privacy professionals seeking clarity on compliance with the California Consumer Privacy Act (CCPA) are monitoring numerous amendment bills introduced in the California State Assembly and the California State Senate. Twelve bills garnered the votes needed to pass the Assembly and moved to the Senate for further revision and voting. The Assembly’s calendar prohibits consideration of new bills this session, which means that any amendments enacted prior to the CCPA’s 2020 effective date will be based on these bills. California’s complex legislative process makes tracking amendments yeoman’s work.
This post provides clarity to an otherwise murky process by: 1) presenting an overview of the California state legislative process; 2) identifying a CCPA timeline and key deadlines; 3) analyzing the CCPA amendments that recently passed the Assembly along with noteworthy bills that failed in the Senate; and 4) outlining likely next steps for amendment efforts prior to the law’s effective date.
Legislative Process
Four moving parts of the California State Legislature impact the content and status of CCPA amendment bills: the California State Assembly, California State Senate, California Governor, and California Attorney General’s Office.
The Legislature consists of both the California State Assembly (lower house) and the California State Senate (upper house). While bills may be introduced in either house, amendments introduced in the Assembly must first pass the Assembly floor with a majority vote before moving to the Senate for further amendment and votes.
Bills that pass the Senate are sent to the Governor to be either signed into law or vetoed. In addition to any legislative amendments, the California Attorney General’s Office can clarify or modify CCPA requirements by promulgating regulations under its CCPA rulemaking authority. As part of the preliminary rulemaking process, the California AG has held seven public forums and concluded the comment period. As mentioned during the January 14th public forum, the Attorney General is likely to issue draft regulations this fall, followed by a formal notice and comment period.
Timeline and Key Deadlines
The bills that passed the Assembly have moved to the Senate for the committee process, which will continue over the coming months.
CCPA Amendment Bills
Below is a summary of the key provisions of each CCPA amendment bill that passed the Assembly and moved to the Senate for further consideration:
– Exempts vehicle information shared between a new auto dealer and a vehicle manufacturer when information is shared or retained pursuant to, or in anticipation of, a vehicle repair relating to warranty work or recall.
– Defines a “data broker” and requires data brokers to register with and provide certain information to the Attorney General, and failure to register may lead to liability (civil penalties, fees and costs).
– Requires businesses that use facial recognition technology to disclose that usage with a clear and conspicuous physical sign at the entrance of every location where the technology is used.
– Under the non-discrimination provision, allows differential treatment of a consumer who has exercised CCPA rights if the differential treatment is reasonably related to value provided to the business by the consumer’s information;
– Requires businesses to disclose in their privacy policies consumers’ right to request specific pieces of information and the categories of information collected by the business.
– Allows businesses to provide personal information to a government agency solely for the purpose of carrying out a government program;
– Allows businesses to sell the personal information of consumers who have opted out of sale solely for the purpose of detecting security incidents or preventing fraud or illegal activity.
– Requires businesses to make available a toll-free number or a physical address and email address for submitting requests;
– A business that exclusively operates online is only required to provide an email address for requests.
Below is an analysis of two noteworthy bills that failed in the Senate, which contain top-of-mind concerns for privacy professionals. Although these particular bills will not move forward this year, they are noteworthy because the senators who authored the bills will now vote on, and potentially revise, bills passed by the Assembly.
Private right of action
– Authored by Senator Hannah-Beth Jackson, the bill would have expanded the private right of action to privacy violations, including statutory damage class actions, and eliminated the 30-day right to cure.
Exemption of targeted advertising, definition of sale
– Authored by Senator Stern, the bill would have allowed a carve-out for certain targeted advertisements from selling requirements and narrowed the definition of “sale.”
Next Steps and What to Expect
Attention is shifting to the bills that have passed the Assembly and are under consideration by the Senate. While it is impossible to predict with accuracy which amendments will pass the Senate, it is important to note that the Senate has traditionally exhibited a different sensibility than the Assembly, and Senators such as Sen. Hannah-Beth Jackson (author of SB-561) and Sen. Henry Stern (author of SB-753) might attempt to rewrite amendments, or vote against amendments, that do not align with the ideals championed by their failed bills.
Bills approved by the Senate will then cross the Governor’s desk to be either signed into law or vetoed in mid-October. We expect the California AG’s office will clarify ambiguities via its CCPA rulemaking authority in the fall. With all of these elements at play, it is safe to say that a dynamic amendment process will continue over the next several months.
Privacy professionals would be prudent to assume that the core provisions of the CCPA will remain intact. And although some of the amendments under consideration could have significant ramifications if signed into law, companies should stay focused on the core obligations of the CCPA in order to comply by the 2020 effective date.
FPF Welcomes the 2019 Class of Policy Fellows
FPF is pleased to announce the selection of its 2019 Policy Fellows: Katelyn Ringrose, Charlotte Kress, and Anisha Reddy. Working at FPF for one- or two-year terms, Fellows are key members of the FPF policy team. Fellows focus on consumer and commercial privacy issues, from technology-specific areas such as drones, wearables, connected cars, and student privacy, to general data management and privacy issues related to ethics, deidentification, algorithms, and the Internet of Things. Their roles may include filing comments on proposed regulatory actions, researching and analyzing US and European privacy issues, developing industry best practices or standards, and tracking consumer privacy legislation.
The Christopher Wolf Diversity Law Fellow is Katelyn Ringrose. This two-year fellowship is distinguished by its commitment to bring diverse perspectives to FPF’s work on contemporary privacy issues. Ringrose comes to this position as a recent graduate of the University of Notre Dame Law School, where she was the Faculty Awardee for Excellence in Advanced Legal Research and the President of the LGBT Law Forum. She published law review articles on facial recognition and body-worn cameras, government surveillance, and data collection in schools, among other topics.
While in law school, Ringrose founded Impowerus, an online company connecting juvenile immigrants to pro bono legal aid that was chosen as one of the top eight student tech startups in the U.S. by Inc. Magazine. She interned for the U.S. Department of Justice, the Washington State Attorney General’s Office, the Public Defender’s Office of the Juvenile Justice Center in South Bend, IN, and the Hawaii State Supreme Court. Prior to attending law school, she was a teacher in Tacoma, WA.
“Katelyn brings experience as an entrepreneur, an academic, and a public servant to her fellowship at FPF,” said Christopher Wolf, FPF Founder and Board President. “We’re excited that she will be sharing her unique perspective and extraordinary talents with our staff, supporters, and partners.”
Charlotte Kress is the Elise Berkower Memorial Fellow. This Fellowship is dedicated to the memory of Elise Berkower, a leader in the consumer privacy field, who was known for her ability to identify and nurture young lawyers, with a focus on consumer protection and business ethics. Kress graduated from The George Washington University Law School, where she was a member of the Federal Circuit Bar Journal and the Society for European Law Students and volunteered with the International Privacy and Security Forum. A fluent German speaker, Kress undertook post-graduate studies at Heidelberg University in Germany after her graduation from Bucknell University. Kress served as a Legal Intern at the Office of General Counsel for the District of Columbia Housing Authority, and at DLA Piper and Sedgwick law firms.
“Charlotte’s energy and enthusiasm for legal scholarship, collaboration, and ethical negotiation makes her a fitting recipient of the Elise Berkower Memorial Fellowship,” said Howard Berkower, Partner at McCarter & English, and brother of Elise Berkower. “Her international experience, drive, and commitment to the continued development of technology and privacy laws will be a great asset for FPF.”
FPF gratefully acknowledges the Nielsen Foundation, the Berkower family, IAPP and friends of Elise as founding sponsors of the Elise Berkower Memorial Fellowship.
Anisha Reddy is the inaugural FPF Education Privacy Fellow. This newly created position will focus on expanding FPF’s library of resources on student privacy issues, as well as tracking student- and child-specific privacy legislation at the state and federal levels. Reddy will also help process applicants to the Student Privacy Pledge. Reddy will set a high bar for this role. At Penn State’s Dickinson Law, she was honored with the University’s 2017-2018 Montgomery and MacRae Award for Excellence in Administrative Law. She served as Executive Editor for Digital Media of the Dickinson Law Review, President of the Asian Pacific Law Students Association, and Vice President of the Women’s Law Caucus. Reddy was a Certified Legal Intern for the Children’s Advocacy Clinic in Carlisle, PA, where she represented children involved in civil court actions such as adoption, domestic violence, and custody matters. She previously interned at the Governor of Pennsylvania’s Office of General Counsel, at Udacity in Mountain View, CA, and at Blockchain, Inc. in New York, NY.
“We are thrilled to welcome Anisha to the FPF Education Privacy team,” said Amelia Vance, Senior Policy Counsel and Director of the FPF Education Policy Project. “The breadth and variety of her work experience and legal scholarship will be an asset when engaging with the variety of stakeholders in the education community.”
FPF is looking forward to welcoming Ringrose, Kress, and Reddy to the team in September. These three talented young lawyers will continue a strong tradition of FPF Policy Fellows who make exceptional contributions to the privacy community during their fellowships and throughout their careers.
FPF is seeking to continue to support the next generation of privacy leaders through our fellowships. Please click on the button to make your contribution in support of the Elise Berkower Memorial Fellowship and the Christopher Wolf Diversity Law Fellowship.
FPF and IAF Release "A Taxonomy of Definitions for the Health Data Ecosystem"
Healthcare technologies are rapidly evolving, producing new data sources, data types, and data uses, which precipitate more rapid and complex data sharing. Novel technologies—such as artificial intelligence tools and new internet of things (IoT) devices and services—are providing benefits to patients, doctors, and researchers. Data-driven products and services are deepening patients’ and consumers’ engagement and helping to improve health outcomes. Understanding the evolving health data ecosystem presents new challenges for policymakers and industry, and there is an increasing need to better understand and document the stakeholders, the emerging data types, and their uses.
The Future of Privacy Forum (FPF) and the Information Accountability Foundation (IAF) partnered to form the FPF-IAF Joint Health Initiative in 2018. Today, the Initiative is releasing A Taxonomy of Definitions for the Health Data Ecosystem; the publication is intended to enable a more nuanced, accurate, and common understanding of the current state of the health data ecosystem. The Taxonomy outlines the established and emerging language of the health data ecosystem and includes definitions of:
The stakeholders currently involved in the health data ecosystem and examples of each;
The common and emerging data types that are being collected, used, and shared across the health data ecosystem;
The purposes for which data types are used in the health data ecosystem; and
The types of actions that are now being performed and which we anticipate will be performed on datasets as the ecosystem evolves and expands.
This report is an educational resource that will enable a deeper understanding of the current landscape of stakeholders and data types. We hope it will be valuable as a common reference language for the evolving health ecosystem. This is particularly important as organizations take on more data governance projects, take inventory of the data flowing into and out of their organizations, and participate in complex data exchanges. We intend the Taxonomy to be used to create consistent data collection and use models across the healthcare ecosystem. Establishing common, shared terminology is particularly useful as state privacy laws and pending Congressional proposals seek to codify a comprehensive consumer privacy framework in the United States; these proposals often include provisions that would require organizations to undertake data mapping and inventory activities.
For additional information about this Taxonomy or the Joint Health Initiative work between the Future of Privacy Forum and Information Accountability Foundation, please contact Stan Crosley ([email protected]) and John Verdi ([email protected]).
As Legislators Debate Ad Tech, Browsers and Operating Systems Announce New Technical Controls
Congress continues to hold data privacy hearings, including yesterday’s hearing, “Understanding the Digital Advertising Ecosystem and the Impact of Data Privacy and Competition Policy.” The continued debate over ad tech practices is reaching a crescendo, making the case for quick action on a comprehensive federal privacy law that can set parameters for how personal data is collected, used, and shared, for ad tech and for the many ways data is used by companies of every sort. But any law would do well to incentivize technical solutions to privacy challenges, rather than rely solely on legal commitments, as FPF CEO Jules Polonetsky argued in his recent testimony before the Senate Commerce Committee. There is no question that well-crafted laws and rules can give consumers important rights and provide companies with clarity about their obligations. It is also important to recognize that technical solutions continue to play an integral role in improving consumers’ privacy. As policymakers craft new privacy protections in law, they should be mindful that both legal and technical safeguards are necessary to ensure strong consumer protections.
We have seen many examples of technological solutions bolstering or otherwise supplementing legal protections.
In 2003, Congress passed CAN-SPAM – a law designed to combat unsolicited junk email. Although CAN-SPAM established important legal obligations for bulk email senders (e.g., requiring email marketers to offer recipients a clear and conspicuous means to opt out), unsolicited and fraudulent marketing remained common. Consumers grew frustrated, and many inboxes continued to be flooded with spam. The battle against spam made major advances only when major email providers increased the effectiveness of a range of technical solutions for accurate and pervasive anti-spam filtering.
The Telephone Consumer Protection Act (TCPA) was enacted in 1991 – the law imposes legal obligations regarding unsolicited phone calls, faxes, and text messages. But the TCPA did not eliminate unsolicited calls, junk faxes, or spam text messages. In fact, consumers are increasingly targeted with fraudulent robocalls placed by automated dialers. The proliferation of robocalls has spurred the development and adoption of new technologies to identify and prevent intrusive marketing behavior. Mobile carriers and other communications providers offer technical solutions to combat such calls (including AT&T’s Call Protect, T-Mobile’s Scam Block, and Verizon’s Call Filter), and there is real progress on an industry-developed standard to combat unlawful communications that rely on caller ID spoofing. The TCPA is a useful enforcement tool, but until the technology matures, intrusive robocalls will persist. Similarly, to make progress online, we will need strong federal legislation and continued progress by companies in advancing the effectiveness of current privacy tools and developing new ones.
Recent announcements by leading companies are a welcome step in this direction. Some of the most recently announced privacy updates from Google, Apple, and Mozilla include:
Google Privacy Updates
At Google’s annual I/O developer conference, the company announced updates to its products and services, including changes to the Chrome browser, that will provide users with enhanced privacy controls.
Modified Cookie Controls. At I/O, Google announced upcoming changes to the way cookies are handled in the Chrome browser. Rather than only having binary controls that depend on the user to clear cookies, the updates will require web developers to specify when cookies are allowed to work across websites. This will allow Chrome to automatically determine the appropriate treatment for each cookie consistent with users’ browser settings. These changes are designed to allow sites to retain personalized settings, such as login information and site preferences, while preventing cross-site tracking of users who explicitly opt out.
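The mechanism underlying this change is the SameSite cookie attribute: developers declare whether each cookie may be sent in cross-site contexts. Below is a minimal sketch, assuming an Express-style Node server with illustrative cookie names, of the distinction Chrome will rely on.

```typescript
import express from "express";

const app = express();

app.get("/login", (_req, res) => {
  // First-party session cookie: "lax" keeps it out of most cross-site
  // requests, and is the default Chrome will assume when no SameSite
  // attribute is declared.
  res.cookie("session", "abc123", {
    sameSite: "lax",
    secure: true,
    httpOnly: true,
  });

  // A cookie intended to work across websites must say so explicitly;
  // Chrome also requires the Secure attribute alongside SameSite=None.
  res.cookie("ad_pref", "xyz789", { sameSite: "none", secure: true });

  res.send("cookies set");
});

app.listen(3000);
```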
Reducing Device Fingerprinting. Device fingerprinting is the process of gathering data about device characteristics in order to generate a unique “fingerprint,” allowing websites to repeatedly recognize that device over time even when a site is prevented from setting cookies. In part because browsers are increasingly blocking third-party cookies, fingerprinting is becoming more common, even though fingerprinting methods typically lack transparency and adequate user controls. According to Google, forthcoming updates to the Chrome browser are intended to “aggressively restrict fingerprinting” on the web. This follows similar updates to Mozilla’s Firefox and Apple’s Safari announced in 2018.
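For readers unfamiliar with the technique, here is a simplified sketch of how a fingerprinting script might derive a stable identifier from ordinary browser properties, with no cookie involved. The particular traits and hashing scheme are illustrative choices, not a description of any vendor’s actual method.

```typescript
// Simplified illustration of device fingerprinting: combine observable
// browser/device traits and hash them into one opaque, fairly stable token.
// All APIs used are standard browser APIs; the trait selection is arbitrary.
async function deviceFingerprint(): Promise<string> {
  const traits = [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
  ].join("|");

  // Hash the concatenated traits so a site can store a single identifier.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(traits)
  );
  return Array.from(new Uint8Array(digest))
    .map((byte) => byte.toString(16).padStart(2, "0"))
    .join("");
}
```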
Auto-Delete Options for Location History and Search. Google Location History is an account-level setting that maintains a history of a user’s location – data that is typically generated by the user’s mobile device. When enabled by the user, this information may be used to provide personalized maps, recommendations, real-time traffic updates, and behavioral advertising. Google Web & App Activity is an account-level setting that saves users’ searches and activities within Google products and services, and may be used to give users more personalized search results and content recommendations. Google plans to expand the existing deletion controls for these settings. Currently, users have the option to turn off Location History and Web & App Activity altogether, and can choose to either bulk delete their entire histories, or manually delete individual data points. According to Google, forthcoming updates will allow users to exercise more nuanced control, allowing users to configure their account activity settings to auto-delete search and location data on an ongoing basis, every 3 or 18 months.
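The rolling deletion described above can be pictured with a toy sketch like the following, in which any stored activity older than the user’s chosen window is purged each time a retention job runs. The types and function names are hypothetical stand-ins, not Google’s implementation.

```typescript
// Toy model of rolling auto-delete: records older than the selected window
// (3 or 18 months, per the announced options) are dropped on each run.
// All names here are hypothetical.
interface ActivityRecord {
  timestamp: Date;
  payload: string;
}

function applyAutoDelete(
  history: ActivityRecord[],
  retentionMonths: 3 | 18
): ActivityRecord[] {
  const cutoff = new Date();
  cutoff.setMonth(cutoff.getMonth() - retentionMonths);
  return history.filter((record) => record.timestamp >= cutoff);
}
```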
Incognito Mode Expansion. The Chrome browser’s Incognito Mode employs technical safeguards to ensure that the browser does not save browsing history, cookies, information entered in web site forms, or related data. Incognito Mode limits the risk that other users who share a device might be able to access an individual’s browsing history. In 2019, this feature will be available in a broader range of Google services (now available in YouTube, and soon to come to Google Search and Google Maps). Although Incognito Mode does not address all privacy concerns, it allows users to browse the web and interact with Google services without their data being linked to activities performed outside of the Incognito Mode experience or their Google accounts. (Google Safety and Security Blog).
Apple Privacy Updates
Apple recently announced version 2.2 of Intelligent Tracking Prevention (ITP), the anti-tracking feature of its Safari browser.
Limiting Workarounds. While Safari blocks third-party cookies by default, ITP goes a step further by limiting a workaround called link decoration – a technique in which web sites insert user attributes into the URL of a clicked link, allowing third parties to track users across sites using cookies set in a first-party context. A previous version of ITP automatically capped the expiration of such cookies at 7 days, and the ITP 2.2 update changes this default to one day. This update coincides with the release of iOS 12.3 (and Safari on macOS).
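To make link decoration concrete, the snippet below builds an outbound URL carrying a per-user click identifier, the kind of parameter ITP’s shortened cookie lifetime is aimed at. The parameter name and value are hypothetical.

```typescript
// Hypothetical "decorated" outbound link: the referring site appends a
// per-user identifier that the destination page could copy into a
// first-party cookie, recreating cross-site tracking without third-party
// cookies. The parameter name and value are illustrative.
const outbound = new URL("https://news.example/article");
outbound.searchParams.set("click_id", "user-7f3a9c2e");

console.log(outbound.toString());
// => https://news.example/article?click_id=user-7f3a9c2e
// Under ITP 2.2, a first-party cookie set via script on a page reached
// through such a link expires after one day rather than seven.
```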
Privacy Preserving Ad Click Attribution for the Web. In order to facilitate ad click reporting, a process that has traditionally relied on third-party cookies, which are increasingly affected by cookie-blocking technologies, Apple has proposed a browser-based technology that enables ad click attribution in a manner that offers greater privacy protection to users than traditional methods. This solution is currently available as an experimental feature in Safari Technology Preview and is being proposed as a W3C standard that would be available in other browsers.
We anticipate additional privacy announcements at Apple’s upcoming WWDC conference in early June.
Mozilla Privacy Updates
Mozilla has long been a leader in developing and implementing technical privacy solutions. Mozilla’s Firefox was the first browser to implement Do Not Track and one of the first browsers to block third-party cookies by default. Mozilla has released several technologies and policies this year to strengthen user privacy protections.
Firefox Anti-Tracking Policy. Mozilla’s Security/Anti-Tracking Policy defines tracking techniques that Mozilla believes browsers should block by default (e.g., tracking cookies, URL-based tracking, and device fingerprinting). In its newest Firefox Quantum browser, Mozilla released a capability that allows users to specifically block Fingerprinters and Cryptominers for improved privacy, security, and performance.
Firefox Add-On Policy (Effective June 10, 2019). Browser add-ons enable users to modify and personalize their web experience. Mozilla’s new policies for add-ons include disclosure requirements for data collection, storage, and sharing of user data. For example, companies must disclose when and why cookies are used, as well as provide an opportunity for users to refuse them. Violations of the updated add-on policy may result in the rejection of add-ons and/or the blocking or deletion of developers’ accounts.
Funding Research for Super Privacy Browsing Mode. Mozilla issued a call for research proposals that would develop additional privacy safeguards for the Firefox browser. Mozilla will fund research to solve “inefficiencies currently present in Tor so as to make the protocol optimal to deploy at scale” and technologies that might support “a Super Private Browsing (SPB) mode” for users.
Conclusion
It will take a combination of solutions to address consumer privacy issues. As Congress debates federal privacy legislation, policymakers should bear in mind the importance of technical safeguards in protecting consumers’ data. Lawmakers should consider ways that a baseline, comprehensive privacy law could create incentives for organizations to develop and implement technical safeguards that align with consumers’ privacy expectations.
Understanding Artificial Intelligence and Machine Learning
The opening session of FPF’s Digital Data Flows Masterclass provided an educational overview of Artificial Intelligence and Machine Learning – featuring Dr. Swati Gupta, Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech; and Dr. Oliver Grau, Chair of ACM’s Europe Technology Policy Committee, Intel Automated Driving Group, and University of Surrey. To learn more about the Basics of AI/ML and how Bias and Fairness impact these systems, watch the class video here.
Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.
In conjunction with this class, FPF released The Privacy Expert’s Guide to AI and Machine Learning. Covering much of the course content, this guide explains the technological basics of AI and ML systems at a level of understanding useful for non-programmers, and addresses certain privacy challenges associated with the implementation of new and existing ML-based products and services.
The Digital Data Flows Masterclass is a year-long educational program designed for regulators, policymakers, and staff seeking to better understand the data-driven technologies at the forefront of data protection law & policy. The program features experts on machine learning, biometrics, connected cars, facial recognition, online advertising, encryption, and other emerging technologies.
Personal Data and the Organization: Stewardship and Strategy
Personal data – used lawfully, fairly, and transparently – is central to helping organizations achieve their missions. Today, Boards of Directors, CEOs, policymakers, and others need to understand the wide range of data inputs, the broad scope of risks and benefits, and how privacy and ethics are at the center of an organization’s ability to fulfill its leaders’ vision. Traditionally, privacy was considered a legal and compliance matter, but now it is a fundamental concern because of emerging issues such as advertising practices, content standards, global data flows, concerns about civil rights, law enforcement cooperation, ethics, community engagement, research standards, and more. With these trends in mind, FPF has created an infographic that shows:
The complexities of how organizations collect and use data
The risks involved
How principled data stewardship supports the goals of innovation, growth, brand development, and social responsibility.
We hope you will find this useful in communicating with colleagues, students, leaders, and policymakers.