Change Could Soon Be Coming to the FTC, the Lead U.S. Agency on Privacy
The U.S. Presidential election is almost upon us, and it could have a big impact on the future of the Federal Trade Commission (FTC), the de facto national privacy regulator and law enforcer. The FTC lineup has been steady since 2018, but that could soon change – no matter who wins the election.
Prior to the appointment of the five current Commissioners, the FTC had only two serving. This happened because new Commissioners were not appointed by the President and confirmed by the Senate as they finished their terms and departed. Though all five current FTC Commissioners were appointed in 2018, their terms in office end years apart.
FTC Commissioners’ Terms
Commissioners serve seven-year terms, with appointment and expiration dates set on a staggered schedule. The FTC Act has been interpreted to mean that Commissioners’ seven-year terms run “with the seat,” so that the term expires on the scheduled date, regardless of when the Commissioner was appointed, confirmed, and sworn in. If a Commissioner’s replacement is not appointed at the end of their term, they may stay on until their replacement is seated. This is currently the case with Commissioner Chopra, whose term ended in September 2019.
Commissioners can be reappointed, and they sometimes leave before the end of their terms.
Commission Chairs often leave when there is a change in Presidential Administration. If FTC Chairman Joe Simons were to step down, his seat could be filled by a newly appointed Chair, or a new non-Chair Commissioner could be appointed and a sitting Commissioner elevated to Chair.
Nominating New Commissioners
Two of the five Commissioners must not be from the President’s political party. It’s typical for the Administration and Senate leaders to agree to “pair” appointees from each party when there is more than one vacancy in order to ease Senate confirmation. That would be unlikely in a new Democratic administration because there would not be a Republican vacancy unless more than one sitting Republican vacated their seats. But “pairing” can happen across agencies, with Senate leaders of each party agreeing to move the nominations they support as part of complicated bipartisan agreements.
Although the current Commissioners reflect a range of ideological perspectives, the agency has generally been fortunate to be led by appointees recognized for their professionalism, integrity, policy smarts, and ability to collaborate across party lines – traits that will be valued in their eventual successors as well.
A Privacy Playbook for Connected Car Data
Drivers and passengers expect cars to be safe, comfortable, and trustworthy. Individuals often consider the details of their travels—and the vehicles that take them between their home, the office, a hospital, their place of worship, or their child’s school—to be sensitive, personal data.
The newest cars contain numerous sensors, from cameras and GPS to accelerometers and event data recorders. Carmakers, rideshare services, tech companies, and others are increasingly using data about cars to reduce emissions, manage traffic, avoid crashes, and more. The benefits of connected vehicles for individuals, communities, and society are clear. So are the privacy risks posed by increased collection, use, and sharing of personal information about drivers, passengers, cyclists, and pedestrians.
It is crucial that companies, advocates, academics, technical experts, and policymakers craft creative solutions that promote the benefits of connected vehicles while mitigating the privacy risks. Global legal frameworks have a role to play in assuring meaningful data protection and promoting trust, as do voluntary, enforceable codes of conduct and technical standards.
However, it is plain that entities must look beyond legal obligations and consider how they will earn and maintain consumer trust. With this white paper, Otonomo has taken an important step to advance the dialogue on connected car data privacy.
Originally released in October 2019, Otonomo’s Privacy Playbook for Connected Car Data presents nine plays for putting privacy at the center of your data business practices.
Artificial Intelligence, Machine Learning, and Ethical Applications
FPF and IAF to Host Event at 39th International Conference of Data Protection and Privacy Commissioners Discussing Key Technologies and Impacts for Privacy, Data Protection, and Responsible Information Practices
Technologists have long used algorithms to manipulate data. Programmers can create software that analyzes information based on rules and logic, performing tasks that range from ranking web sites for a search engine to identifying which photos include images of the same individual. Typically, software performs this analysis based on criteria selected and prioritized by human engineers and data scientists. Recent advances in machine learning and artificial intelligence support the creation of algorithmic software that can, with limited or no human intervention, internally modify its processing and criteria based on data.
Machine learning techniques can help hone algorithmic analysis and improve results. However, reduced human direction means that AI can do unexpected things. It also means that data protection safeguards should ensure that algorithmic decisions are lawful and ethical – a challenge when specific algorithmic criteria may be opaque or not practical to analyze. Increasingly, technologists and policymakers are grappling with hard questions about how machine learning works, how AI technologies can ethically interact with individuals, and how human biases might be reduced or amplified by algorithms that employ logic but lack human intuition.
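The shift from hand-selected criteria to criteria derived from data can be sketched in a few lines of Python. This is an invented toy example – the field names, the data, and the midpoint "learning" rule are assumptions for illustration, not any real system:

```python
def rule_based_flag(transaction):
    # Criterion selected and prioritized by a human engineer.
    return transaction["amount"] > 1000

def learn_threshold(examples):
    # A minimal "learned" criterion: instead of hard-coding the cutoff,
    # derive it from labeled data as the midpoint between the largest
    # negative example and the smallest positive example.
    negatives = [amt for amt, label in examples if not label]
    positives = [amt for amt, label in examples if label]
    return (max(negatives) + min(positives)) / 2

examples = [(200, False), (450, False), (900, True), (1500, True)]
threshold = learn_threshold(examples)  # the cutoff now comes from the data
```

Even in this toy, the learned cutoff (675) differs from the engineer's guess (1000) – a small instance of the larger point that data-driven criteria can diverge from what their designers anticipated.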
On September 25, 2017, FPF and IAF will bring together technologists, policymakers, and privacy experts to discuss:
How machine learning and artificial intelligence work;
How these emerging technologies can support better outcomes for users of online services, patients with mental health conditions, and systems designed to combat bias;
The challenges and implications raised by machine learning and artificial intelligence in the context of efforts to support legal, fair, and just outcomes for individuals; and
How these emerging technologies can be ethically employed, particularly in circumstances when artificial intelligence is used to interact with people or make decisions that impact individuals.
Presenters include:
Rich Caruana, Senior Researcher, Microsoft
Stan Crosley, IAF Senior Strategist
Andy Chun, Associate Professor, Department of Computer Science, and Former Chief Information Officer, City University of Hong Kong
Yeung Zee Kin – Deputy Data Protection Commissioner, Singapore
Sheila Colclasure, Chief Privacy Officer and Global Executive for Privacy and Public Policy, Acxiom
Peter Cullen, IAF Executive Strategist
John Verdi, FPF Vice President of Policy
The event will be held from 3:30pm – 5:00pm (15:30 – 17:00) in Kowloon Room II (M/F) of the conference venue in Hong Kong. Registration is not required. For more information, please contact John Verdi at [email protected] or Peter Cullen at [email protected]. Please also look out for other side events from our colleagues at IAPP, Nymity, and OneTrust.
Study: EU-US Privacy Shield Essential to Leading European Companies
New FPF Study Documents More Than 100 European Companies Participating in the Privacy Shield Program
From Major Employers such as Ingersoll-Rand and Lidl Stiftung to Leading Technology Firms like Telefónica, RELX, and TE Connectivity, European Companies Depend on the EU-US Agreement
EU Firms are Signing up for Privacy Shield at a High Rate – the One-Year-Old Privacy Shield Program Includes a Larger Percentage of EU Companies than the Predecessor Safe Harbor Program
Termination of the Privacy Shield Program Could Inhibit European Employment – Nearly One Third of Privacy Shield Companies Rely on the Framework to Transfer HR Information of European Staff
The Future of Privacy Forum conducted a study of the companies enrolled in the EU-US Privacy Shield program and determined that 114 European-headquartered companies are active Privacy Shield participants. These European companies rely on the program to transfer data to their US subsidiaries or to essential vendors that support their business needs.
FPF staff also found that EU companies comprise 5.2% of all Privacy Shield companies, an increase over the 3.5% of all companies that were based in Europe under the previous EU-US Safe Harbor Program in 2014. Though the Privacy Shield program is only a year old, it already includes more than 2,000 participants and typically grows by several members each week.
Leading EU companies that rely on Privacy Shield include:
• ABB, Swiss electrical equipment company
• CNH Industrial America, Dutch capital goods company
• Ingersoll-Rand, Irish globally diversified industrial company
• Lidl Stiftung, German grocery market chain
• NCS Pearson, British education assessment and publishing company
• Reckitt Benckiser, British consumer goods company
• RELX, British and Dutch information and analytics company
• TE Connectivity, Swiss connectivity and sensor company
• Telefónica, Spanish mobile network provider
With the first annual review of the Privacy Shield framework underway, it is important for parties on both sides of the Atlantic to recognize the program’s benefits to the US and to Europe. Although no system is perfect, there is substantial value for many stakeholders, including leading European companies, in maintaining Privacy Shield protections for companies and consumers in both the United States and Europe.
FPF research also determined that over 700 companies, nearly a third of the total number analyzed, use Privacy Shield to process their human resources data. Inhibiting the flow of HR data between the US and EU could mean delays for EU citizens receiving their paychecks, or a decline in global hiring by US companies. Therefore, employees stand to lose if the Privacy Shield were terminated or materially altered.
The research identified 114 Privacy Shield companies headquartered or co-headquartered in Europe. This is a conservative estimate of companies that would be impacted by cancelation of the Privacy Shield framework – FPF staff did not include global companies that have major European offices but are headquartered elsewhere. The 114 companies include some of Europe’s largest and most innovative employers, doing business across a wide range of industries and countries. EU-headquartered firms and major EU offices of global firms depend on the Privacy Shield program so that their related US entities can effectively exchange data for research, to improve products, to pay employees and to serve customers. These companies would be severely burdened and disadvantaged by termination of the Privacy Shield program. Given the importance of this mechanism to companies and consumers on both sides of the Atlantic, FPF recommends that the Privacy Shield arrangement be preserved.
Methodology:
• FPF staff recorded a list of 2,188 active Privacy Shield companies as of July 2017 from https://www.privacyshield.gov.
• FPF staff performed a web search for each current company by name, checking the location of the company’s headquarters on a combination of public databases such as LinkedIn, CrunchBase, Bloomberg, and the company’s own website.
• A company that listed its headquarters in an EU member state or in Switzerland was counted as a match; companies that merely had a prominent EU office or were founded in an EU member state were not counted.
• 114 total EU-headquartered companies were identified using this method.
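The tallying step described above can be sketched roughly as follows. This is a purely illustrative mock-up – the company names, headquarters, and abbreviated country list are invented placeholders; the real data came from privacyshield.gov and manual research:

```python
# Minimal sketch of counting EU/Swiss-headquartered participants.
# All entries below are invented for illustration.

EU_AND_CH = {"DE", "FR", "ES", "NL", "IE", "GB", "CH"}  # abbreviated subset

companies = [
    {"name": "Example GmbH", "hq_country": "DE"},
    {"name": "Example Inc.", "hq_country": "US"},
    {"name": "Example SA", "hq_country": "ES"},
]

# Count only companies whose headquarters fall in the EU or Switzerland,
# mirroring the "headquarters, not prominent office" rule above.
eu_headquartered = [c for c in companies if c["hq_country"] in EU_AND_CH]
share = len(eu_headquartered) / len(companies)  # in the study: 114 / 2,188 ≈ 5.2%
```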
Advancing Knowledge Regarding Practical Solutions for De-Identification of Personal Data: A Call for Papers
De-identification of personal information plays a central role in current privacy policy, law, and practice. Yet there are deep disagreements about the efficacy of de-identification to mitigate privacy risks. Some critics argue that it is impossible to eliminate privacy harms from publicly released data using de-identification because other available data sets will allow attackers to identify individuals through linkage attacks. Defenders of de-identification counter that despite the theoretical and demonstrated ability to mount such attacks, the likelihood of re-identification for most data sets remains minimal. As a practical matter, they argue most data sets remain securely de-identified based on established techniques.
There is not agreement regarding the technical questions underlying the de-identification debate, nor is there consensus over how best to advance the discussion about the benefits and limits of de-identification. The growing use of open data holds great promise for individuals and society, but also brings risk. And the need for sound principles governing data release has never been greater.
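The linkage attack at the heart of this debate can be illustrated with a small, entirely invented example: two data sets that each look harmless on their own can re-identify individuals when joined on shared quasi-identifiers such as ZIP code, birth date, and sex:

```python
# All records below are fictional; the join keys echo the classic
# quasi-identifier triple (ZIP, date of birth, sex).

deidentified_health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "dob": "1982-03-12", "sex": "M", "diagnosis": "flu"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1982-03-12", "sex": "M"},
]

def linkage_attack(released, auxiliary, keys=("zip", "dob", "sex")):
    # Join the "anonymous" release against auxiliary data on the
    # quasi-identifiers, recovering name-to-diagnosis pairs.
    matches = []
    for r in released:
        for a in auxiliary:
            if all(r[k] == a[k] for k in keys):
                matches.append({"name": a["name"], "diagnosis": r["diagnosis"]})
    return matches

reidentified = linkage_attack(deidentified_health, public_voter_roll)
```

Defenders of de-identification would note that real attacks are harder than this sketch suggests: the auxiliary data must exist, be accessible, and match cleanly.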
To help address these challenges, the Brussels Privacy Symposium, a joint program of FPF and the Brussels Privacy Hub of the Vrije Universiteit Brussel (Free University of Brussels or VUB), is pleased to announce an academic workshop and call for papers on Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization. Abstracts are due August 1, 2016, with full papers to follow on October 1, 2016. Selected papers will be considered for publication in a special symposium issue of International Data Privacy Law, a law journal published by Oxford University Press. In addition, authors will be invited to present at a workshop, also titled Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization, in Brussels on November 8, 2016.
Authors from multiple disciplines including law, computer science, statistics, engineering, social science, ethics and business are invited to submit papers for presentation at a full-day program to take place in Brussels on November 8, 2016.
Submissions must be 2,500 to 3,500 words with minimal footnotes and in a readable style accessible to a wide academic audience. Abstracts must be submitted no later than August 1, 2016, at 11:59 PM ET, to [email protected]. Papers must be submitted no later than October 1, 2016, at 11:59 PM ET, to [email protected]. Publication decisions and workshop invitations will be sent in October.
Protecting the Privacy of Customers of Broadband and Other Telecommunications Services
The Future of Privacy Forum filed comments with the Federal Communications Commission (FCC) in response to the FCC’s proposed rules regarding the privacy and data practices of Internet Service Providers (ISPs). The FCC’s March 31, 2016 Notice of Proposed Rulemaking (NPRM or Notice) seeks to regulate ISPs’ data practices pursuant to Section 222 of the Communications Act – a sector-specific statute that includes detailed requirements applying to telecommunications services, but not to other services offered by broadband providers or to online services operating at the edge of the network (e.g., web sites).
The FCC’s notice states that responsible data practices protect important consumer interests. FPF wholeheartedly agrees. Because de-identification of personal data plays a key role in protecting consumers’ privacy, one portion of our comments seeks to ensure that the final FCC rules are consistent with the leading current thinking and practices regarding de-identification.
The FCC’s proposed rules erroneously treat data as either fully de-identified or fully identifiable. FPF’s comments urge the FCC to issue a rule recognizing that de-identification is not a black-and-white binary, but rather that data exists on a spectrum of identifiability. FPF’s comments take particular note of the Federal Trade Commission’s (FTC) extensive guidance regarding de-identification. According to the FTC, data are not “reasonably linkable” to individual identity to the extent that a company: (1) takes reasonable measures to ensure that the data are de-identified; (2) publicly commits not to try to re-identify the data; and (3) contractually prohibits downstream recipients from trying to re-identify the data. Industry self-regulatory guidelines use similar approaches. The FTC and self-regulatory frameworks recognize that data is not simply “personal” or “non-personal.” Instead, it falls on a spectrum: with each step toward “very highly aggregated,” both the utility of the data and the risk of re-identification are reduced.
FPF’s comments argue that the proposed FCC rules reflect a rigid binary understanding of personal information that does not align with the spectrum of intermediate stages that exist between explicitly personal and wholly anonymous information. As a result, the FCC rules are simultaneously too narrow and too broad, both excluding and including data uses that should be permitted subject to reasonable controls and safeguards. In independent comments, FTC staff agree, stating “the [FCC’s] proposal to include any data that is ‘linkable’ could unnecessarily limit the use of data that does not pose a risk to consumers. While almost any piece of data could be linked to a consumer, it is appropriate to consider whether such a link is practical or likely in light of current technology. FTC staff thus recommends that the definition of PII only include information that is ‘reasonably’ linkable to an individual.”
FPF therefore proposes an alternative approach, which recognizes that non-aggregate data can be de-identified in a manner that makes it not reasonably linkable to a specific individual. This approach is consistent with leading government and industry guidelines with respect to de-identified data, including key work by the Federal Trade Commission, and is illustrated by FPF’s Visual Guide to Practical De-Identification.
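The spectrum-of-identifiability idea can be illustrated with a short, hypothetical sketch in which each generalization step makes a record harder to link back to one person. The field names, values, and step choices are invented for illustration and are not drawn from FPF’s guide:

```python
def generalize(record, step):
    # step 0: raw record; step 1: ZIP truncated; step 2: age bucketed;
    # step 3: ZIP dropped entirely, leaving only coarse attributes.
    out = dict(record)  # copy so the raw record is untouched
    if step >= 1:
        out["zip"] = out["zip"][:3] + "**"
    if step >= 2:
        out["age"] = f"{(out['age'] // 10) * 10}s"
    if step >= 3:
        out.pop("zip")
    return out

raw = {"zip": "02138", "age": 74, "usage_gb": 12}
partly = generalize(raw, 2)  # → {'zip': '021**', 'age': '70s', 'usage_gb': 12}
```

Each step trades away some utility (a marketer can no longer target one block) in exchange for lower re-identification risk – the same trade-off the spectrum framing asks regulators to recognize.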
The Benefits, Challenges, and Potential Roles for the Government in Fostering the Advancement of the Internet of Things
Yesterday, the Future of Privacy Forum filed comments with the National Telecommunications and Information Administration (NTIA) in response to NTIA’s inquiry into the Internet of Things (IoT). NTIA asked policy experts and other stakeholders to identify key issues affecting deployment of the IoT – a broad category of devices, appliances, and objects that can be connected via the Internet. The Internet of Things has been a focus of FPF’s work since our founding in 2008, and FPF recognizes the enormous potential benefits to consumers and to society of the inter-connected applications offered through the Internet of Things.
FPF’s comments, “The Benefits, Challenges, and Potential Roles for the Government in Fostering the Advancement of the Internet of Things,” describe both those benefits and the privacy and security challenges presented by IoT technologies. FPF urges NTIA to promote the use of IoT data in ways that will benefit disadvantaged populations and promote inclusion. Our comments highlight IoT technologies that offer direct, meaningful benefits for individuals who are elderly, infirm, visually impaired, deaf, living with chronic health conditions, suffering from mobility-related disabilities, or economically disadvantaged. Today, IoT technologies are improving the day-to-day quality of life of traditionally underserved groups:
Sensors can alert relatives when a family member fails to take medicine, eat, or return home from a walk.
IoT devices in hospitals can track when patients get in and out of bed, help prevent falls, monitor clinical roundups to ensure that clinicians check in on patients at least once per hour, and revolutionize the protocol for preventing and treating painful pressure ulcers.
Wearable video cameras can translate text to audio in real time, providing crucial assistance to the visually impaired.
Smart home technologies allow users to control things in their homes that may be physically difficult to reach, such as lights, door locks, or security systems.
M2M technology, integrated with new payment platforms, is expanding access to credit by enabling two new payment methods: pay-as-you-go asset financing, which allows consumers to pay for products over time, and prepaid plans, under which consumers pay for services on an as-needed basis.
Emerging IoT technologies promise to broaden inclusiveness for traditionally underserved groups in the immediate future. Common sense privacy protections can build trust in IoT technologies and help ensure that consumers enjoy the full benefits of IoT sensors and devices.