What does privacy mean to you?

Future of Privacy Forum’s CEO, Jules Polonetsky, spoke with the Goethe-Institut about what privacy means to him. Jules discussed his experience leading FPF and the varied circumstances he encounters in that role. You can watch the full clip below.

[Embedded video]

The Goethe-Institut is the Federal Republic of Germany’s cultural institute, active worldwide. It promotes the study of German abroad and encourages international cultural exchange. Learn more by visiting www.goethe.de.

Privacy Engineering Research and the GDPR: A Trans-Atlantic Initiative

Workshop Description

As stated in the US National Privacy Research Strategy, a multidisciplinary research approach to understanding privacy is needed, and deploying new privacy-aware approaches may require changes in existing technology systems, business processes, regulations, and laws. When the EU’s GDPR becomes fully applicable on 25 May 2018, many data protection requirements will be seen in a new perspective. Among other aspects, “data protection by design and by default” will become an explicit legal obligation. Organizations that process personal data will have to apply privacy engineering so that their systems implement data protection principles and integrate the necessary safeguards.

With this event, we aim to determine the relevant state of the art in privacy engineering; in particular, we will focus on those areas where the “art” needs to be developed further. The goal of this trans-Atlantic initiative is to identify the open research and development tasks needed to fully achieve the GDPR’s ambitions. See the full agenda here.

When: Friday, November 10, 2017 from 9:30 – 17:00

Where: University of Leuven, Parthenonzaal (Mgr. Sencie Instituut MSI 1 03.18) Erasmusplein 2, 3000 Leuven, Belgium

Click here to find more detailed information about how to get to the University of Leuven and other practical matters.

Featured Speakers and Panelists

  • Giovanni Buttarelli, European Data Protection Supervisor
  • Wojciech Wiewiorowski, Assistant European Data Protection Supervisor
  • Norman Sadeh, Professor of Computer Science and Co-Director, Privacy Engineering Program, Carnegie Mellon University (CMU)
  • Claudia Diaz, Professor at the COSIC research group of the Department of Electrical Engineering (ESAT), KU Leuven
  • Josep Domingo-Ferrer, Professor of Computer Science, Chairholder of the UNESCO Chair in Data Privacy, and ICREA-Acadèmia Researcher, Universitat Rovira i Virgili
  • Jaap-Henk Hoepman, Professor in the Digital Security group at the Institute for Computing and Information Sciences and Scientific Director of the Privacy & Identity Lab, Radboud University Nijmegen
  • Naomi Lefkovitz, Senior Privacy Policy Advisor in the Information Technology Lab at the National Institute of Standards and Technology, U.S. Department of Commerce
  • Simon Hania, Vice President Privacy & Security / Corporate Privacy Officer, TomTom

Call for Participation

This workshop aims to bring together researchers working at the forefront of adapting data processing strategies and developing privacy engineering, along with practitioners applying these technologies. The workshop will include a keynote presentation and a panel discussion reviewing the current landscape, followed by breakout sessions addressing relevant themes such as (but not limited to) those mentioned below. To encourage a diverse set of attendees, we ask those interested in attending the workshop to provide a one-page submission that includes the following information (no specific format required):

Biosketch. One to two paragraphs introducing who you are and your background, including whether you come from industry, the public sector, academia, or other (please specify).

Theme. The theme that most strongly resonates with you, why, and your stated position on it. It may be related to a project you are already working on, a project you recently concluded, or a project you intend to start. Please explain in what sense you are well positioned to help address this theme.

Influence. Please describe how you would be influential within your community, for example by disseminating the workshop outcomes after the workshop concludes, implementing certain measures, etc.

Please send submissions to [email protected]. Submissions are now due 5 October 2017. Submissions will be reviewed by the workshop program organizers: Jules Polonetsky, CEO, Future of Privacy Forum; Achim Klabunde, IPEN/European Data Protection Supervisor; Bettina Berendt, Professor in the Computer Science Department at the University of Leuven/DTAI; and Norman Sadeh, Professor of Computer Science and Co-Director, Privacy Engineering Program, Carnegie Mellon University (CMU). Acceptance will be based on the goal of ensuring a diverse set of attendees and on relevance to the workshop themes and goals. Participation is free of charge.

Possible Workshop Themes to be discussed include, but are not limited to:

You can find information about accommodation at http://www.visitleuven.be/en/stayingover. For other questions regarding the venue, please contact the local organiser, Bettina Berendt, at [email protected]

This workshop is supported in part by the National Science Foundation under Grant No. 1654085.

FPF Statement on GAO Release of Vehicle Data Privacy Report

A new report released today by the United States Government Accountability Office reviews consumer privacy issues related to connected vehicles. The report examines the use, types, and sharing of vehicle data; surveys automakers to understand how their privacy policies align with privacy best practices; consults experts in the field to understand the issues at play in this space; and examines related Federal efforts. The report paints a picture of an industry on the brink of significant change, with companies and Federal agencies actively adapting to these new realities.

The report’s core recommendation is that the National Highway Traffic Safety Administration better define its roles and responsibilities related to connected vehicle data privacy. The Department of Transportation agreed to provide a detailed response within 60 days.

In a key conclusion, GAO explains that most of the automakers surveyed (representing 90 percent of the U.S. auto market) do limit data collection, use, and sharing in accordance with privacy best practices. Despite these positive practices, their privacy notices are written in legalese that makes them difficult for consumers to understand, they do not specify data sharing and use practices, and they offer consumers only limited individualized controls.

“We applaud GAO for their thoughtful survey of this space, and FPF was pleased to have served as a selected subject matter expert for the report,” said Lauren Smith, FPF Policy Counsel. She continued, “GAO did a laudable job describing the landscape of connected cars and privacy, tracking progress and practices in the industry, highlighting areas for improvement, and calling for clarity in Federal agency roles.”

The report highlights several FPF projects, including our consumer guide to data in the connected car, and our paper describing the spectrum of de-identification.

Just as the GAO report supports greater understanding of data in connected vehicles, our recently released infographic, “Data and the Connected Car – Version 1.0,” maps the data elements in the connected car.

Smart Cities Need Smart Privacy Protections: FPF seeks public comments on proposed Open Data Risk Assessment for the City of Seattle

Action: Proposed draft report on conducting an Open Data Risk Assessment for the City of Seattle

Published: August 18, 2017

Comments due: October 2, 2017

The Future of Privacy Forum (FPF) requests feedback from the public on the proposed City of Seattle Open Data Risk Assessment. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked FPF with creating and deploying an initial privacy risk assessment methodology for open data.

This Draft Report provides tools and guidance to the City of Seattle and other municipalities navigating the complex policy, operational, technical, organizational, and ethical standards that support privacy-protective open data programs. In the spirit of openness and collaboration, FPF invites public comments from the Seattle community, privacy and open data experts, and all other interested individuals and stakeholders regarding this proposed framework and methodology for assessing the privacy risks of a municipal open data program.

Following this period of public comment, a Final Report will assess the City of Seattle as a model municipality and provide detailed recommendations to help the Seattle Open Data program identify and address key privacy, ethical, and equity risks, in light of the city’s current policies and practices.

How to Comment:

Please note that the comment period closed on October 2, 2017.

All timely and responsive public comments will be considered and will be made available on a publicly accessible FPF or City of Seattle website after the final report is published. Because comments will be made public to the extent practical, they should not include any sensitive or confidential information.


Location Controls in iOS 11 Highlight the Role of Platforms

~ ~ ~ ~

Author’s Note 9/8/17: Following the release of additional versions of iOS 11, it appears that the “Blue Bar” notification discussed in the second part (below) will no longer be a mandatory feature for app developers. Questions? Email [email protected].

~ ~ ~ ~

From Pokémon Go, to the geo-targeting of abortion clinics, to state legislative efforts, the last year has seen significant attention paid to the many ways our apps use and often share location data. In the midst of this heightened awareness of geo-location privacy, iPhone users and app developers may notice a difference this fall, when Apple releases iOS 11 updates that increase users’ control over how their geo-location may be collected and used. The changes highlight the ongoing importance, and the legal implications, of platform settings for consumer privacy.

At Apple’s 2017 Worldwide Developer Conference, Apple announced a variety of privacy updates to iOS 11 that may have far-reaching impact when they become broadly available in the fall. Apple has made two significant changes related to how apps can access users’ location: (1) the “Only When In Use” setting will now be required; and (2) the prominent Blue Bar notification, which indicates when persistent, real-time location is being collected, will now appear more often.

Requiring the “Only When In Use” Authorization Setting for Location Services

In iOS 11, mobile apps seeking to access a user’s location will now be required to offer the intermediate authorization setting “Only When in Use.” Previously, location-based apps had the ability to bypass this option and present the user with only two choices: Always or Never. In iOS 11, this binary consent flow will no longer be possible: if an app wants to request access to location “Always,” it must also provide the user with the option of accessing location “Only When in Use.” (Of course, if “Always” is not needed, it is still possible to provide the two lesser options, Never or Only When in Use).
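
For developers, the change is largely a matter of requesting authorization correctly. What follows is a minimal sketch (our own illustration, not Apple sample code) of how an iOS 11 app might request “Always” access under the new rules; the class and method names are our own, and it assumes the app’s Info.plist declares both the NSLocationWhenInUseUsageDescription key and the new combined NSLocationAlwaysAndWhenInUseUsageDescription key.

import CoreLocation

// Minimal sketch of requesting "Always" authorization under iOS 11.
// Assumes Info.plist contains BOTH usage strings:
//   NSLocationWhenInUseUsageDescription
//   NSLocationAlwaysAndWhenInUseUsageDescription
final class LocationPermissions: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func requestAlwaysAccess() {
        // Even when the app asks for "Always," iOS 11 presents
        // "Only When in Use" as an option the user can choose instead.
        manager.requestAlwaysAuthorization()
    }

    func locationManager(_ manager: CLLocationManager,
                         didChangeAuthorization status: CLAuthorizationStatus) {
        switch status {
        case .authorizedAlways:
            print("Background location services available")
        case .authorizedWhenInUse:
            print("Location available only while the app is in use")
        default:
            print("Location not authorized")
        }
    }
}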

Apple reports that about 21% of location-based apps are currently offering their users only those two choices: Always or Never. At the June developer conference, Apple technologist Katie Skinner reasoned that this is likely due in part to developer confusion. While this is probably true, it is also the case that some apps might be seeking Always authorization in order to monetize the additional location history data for location-based marketing, interest-based advertisements, and other data analysis.

“Why do apps ever need ‘Always’ authorization for location?” This is a question we often hear from privacy-minded observers, and the answer lies in the added functionalities that “Always” authorization enables. As shown in the chart below, with “Always” authorization, apps can make use of a number of lower-energy background monitoring services that permit them to incorporate contextual triggers based on movement or specific geographical areas (“geo-fences”).

For example, a retail app might offer discounts or points when the user enters a store; an airline app might launch a boarding pass when the user enters the airport; or a smart home app might send a parent a notification when their child enters or leaves a location, such as home or school. These sorts of location-based actions could not occur if the user were required to have each app open and running in the foreground for it to access location.

[Chart: background location services available with “Always” authorization]
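
To make the geo-fencing use case concrete, here is a brief sketch of the lower-energy region-monitoring service that “Always” authorization enables; it assumes an app that already holds “Always” authorization, and the store coordinates and identifier are hypothetical.

import CoreLocation

// Sketch of geo-fence monitoring, one of the background services
// enabled by "Always" authorization. Coordinates and the region
// identifier are hypothetical.
func monitorStoreEntry(using manager: CLLocationManager) {
    let storeCenter = CLLocationCoordinate2D(latitude: 47.6097,
                                             longitude: -122.3331)
    let region = CLCircularRegion(center: storeCenter,
                                  radius: 100, // meters
                                  identifier: "downtown-store")
    region.notifyOnEntry = true
    region.notifyOnExit = false
    // The app is woken and told about the entry event via its
    // CLLocationManagerDelegate (locationManager(_:didEnterRegion:)),
    // even if it is not running in the foreground.
    manager.startMonitoring(for: region)
}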

Although there are legitimate reasons to request “Always” authorization, apps do not always provide good explanations for why they are asking for it. Because Apple’s update ensures users will always have the option of choosing “Only When In Use,” developers that want “Always” access to location will now have a greater incentive to justify their request. In a small way, this shifts some of the burden from the iPhone user (of understanding apps’ data practices) to the app developer (of explaining them), which is a consumer-friendly step.

Expanding the “Blue Bar” Notification to Match User Expectations

Another significant update to iOS 11 is that the “Blue Bar” notification will now appear any time an app has requested “Continuous Background Location” and the user subsequently navigates away from the app.

Currently, this Blue Bar does not appear for apps with authorization to collect location “Always.” In theory, users who have agreed to share location “Always” do not need (or necessarily want) this notification for most of the typical, approved uses of background location described above, such as receiving reminders when they enter a geo-fence. But do users expect that an app might also be receiving high-frequency, high-accuracy, battery-intensive location data? For those apps, and only when they are collecting “Continuous Background Location,” the Blue Bar will now appear.

Importantly, most methods of accessing location using the “Always” permission will not trigger the Blue Bar, presumably because they are expected when a user approves “Always” authorization. These include visit monitoring, significant change monitoring, and geo-fencing. (For more on these background location services, read Apple’s Developer Documentation, “Getting the User’s Location” (here) or the Location & Maps Programming Guide (here).) Only the “Continuous Background Location” service will be affected, i.e., the persistent, real-time collection of location after a user leaves an app. Compared to the other services, it is the most power-intensive, but it delivers the most accurate and immediate data.
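
For contrast with the geo-fence sketch above, here is a rough sketch of what requesting the “Continuous Background Location” service looks like. It assumes the app declares the “location” background mode in its Info.plist, and it uses the showsBackgroundLocationIndicator flag that iOS 11 adds, which, per the author’s note above, later iOS 11 builds appear to have made developer-controlled rather than mandatory.

import CoreLocation

// Rough sketch of "Continuous Background Location," the one service
// that triggers the Blue Bar. Assumes UIBackgroundModes in Info.plist
// includes "location".
func startContinuousTracking(using manager: CLLocationManager) {
    manager.desiredAccuracy = kCLLocationAccuracyBest
    manager.allowsBackgroundLocationUpdates = true
    // New in iOS 11; controls the Blue Bar-style indicator. Per the
    // author's note above, later iOS 11 builds left this to the
    // developer rather than forcing it on.
    manager.showsBackgroundLocationIndicator = true
    // High-frequency, high-accuracy, battery-intensive updates that
    // continue after the user leaves the app.
    manager.startUpdatingLocation()
}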


Generally speaking, the only apps that need this particular high-energy location service are the ones providing real-time navigation or mapping a route (e.g., a Runkeeper or a MapMyRun). As Apple tells developers: “Use this service only when you really need it, such as when offering navigation instructions or when recording a user’s path on a hike.” For these kinds of apps, the Blue Bar is more than just a notification; it is also a feature: it makes it easy to return to the app after leaving it to take a call or do something else on the phone.

For these reasons, industry sources are noting that most location-based mobile marketing is unlikely to be affected. We agree, although it is worth noting that there is almost certainly some small number of actors in the mobile marketing space whose overzealous collection of location data will be halted by this OS upgrade. Until now, the only incentives not to use this feature to collect unnecessary data have been that it depletes the user’s battery significantly and that it occasionally causes the OS to generate notifications to users.

Operating Systems Play a Key Role in Shaping Expectations

Understanding the details of iOS and Android upgrades is important not only because of their direct effect on consumer privacy, but because platforms can have a powerful effect on user expectations: they build trust by aligning privacy controls with what users expect, and they shape what those expectations become. When those consumer expectations are particularly strong and clearly expressed, we have seen in recent years that the Federal Trade Commission (FTC) is willing to step in and take action against companies that violate them.

In 2016, mobile ad network InMobi settled with the FTC for ignoring the mobile operating system settings of users who disabled apps’ location access and inferring those users’ location anyway through other means. As we discussed at the time, the ad network was able to infer location by detecting the users’ proximity to known Wi-Fi access points. Aside from the practice being a violation of the OS’s Terms of Service, the FTC considered it a “deceptive business practice,” in part because the company had misrepresented the practice to its app partners. Under its Section 5 authority, the FTC can take action against companies that deviate from their stated policies in a way that is considered deceptive.

Since the case settled, many within industry have wondered whether the FTC could have brought it if the ad network had accurately disclosed in a privacy policy that it was ignoring the mobile operating system’s privacy settings. We suspect that this would not have been sufficient. As we have seen from state legislative efforts and cases involving law enforcement access to location history, reasonable people have strong feelings and expectations around the disclosure of geo-location. As a result, platform and operating system settings for controlling the disclosure of this type of information are likely to demand a high degree of respect, because of the expectations users have for how these settings work.

In addition to setting user expectations, recent research demonstrates that operating systems influence how app developers view privacy. In May 2017, Katie Shilton and Daniel Greene at the University of Maryland analyzed hundreds of discussions in iOS and Android developer forums and concluded that platforms act not just as intermediaries but as regulators that define what privacy means:

Mobile platforms are not just passive intermediaries supporting other technologies. Rather, platforms govern design by promoting particular ways of doing privacy, training [developers] on those practices, and (to varying degrees) rewarding or punishing them based on their performance. . . . “Privacy by design” is thus perhaps better termed, at least in the mobile development space, “privacy by permitted design” or “privacy by platform.”

– Daniel Greene & Katie Shilton, “Platform privacies: Governance, collaboration, and the different meanings of ‘privacy’ in iOS and Android development” (read the full article here)

As consumers increasingly interact with platforms controlling data from their children’s toys, their connected cars, and their homes, it is important that privacy settings evolve and become increasingly nuanced. We look forward to seeing Apple and other platforms continue to refine privacy settings to ensure they match consumer expectations.


Resources:

For more, we recommend watching two sessions from Apple technologists at WWDC 2017, “Privacy and Your Apps,” and “What’s New in Location Technologies” (find all WWDC 2017 videos here).

The Future of Microphones in Connected Devices


Today, FPF released a new Infographic: Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices (read the Press Release here). From Amazon Echos to Smart TVs, we are seeing more home devices integrate microphones, often to provide a voice user interface powered by cloud-based speech recognition.

Last year, we wrote about the “voice first revolution” in a paper entitled “Always On: Privacy Implications of Microphone-Enabled Devices.” This paper drew early distinctions between different types of consumer devices and provided initial best practices for companies to design their devices and policies in a way that builds trust and understanding. Since then, microphones in home devices — and increasingly, in city sensors and other out-of-home systems — have continued to generate privacy concerns. This has been particularly notable in the world of children’s toys, where the sensitivity of the underlying data invites heightened scrutiny (leading the Federal Trade Commission to update its guidance and clarify that the Children’s Online Privacy Protection Act applies to data collected from toys). Meanwhile, voice-first user interfaces are becoming more ubiquitous, and may one day represent the “normal,” default method of interacting with many online services and connected devices, from our cars to our home security systems.

As policymakers consider the existing legal protections and future direction for the Internet of Things, it is important to first understand the wide range of ways these devices can operate. In this infographic, we propose that regulators and advocates thinking about microphone-enabled devices ask three questions: (1) how is the device activated; (2) what kind of data is transmitted; and, on the basis of those two answers, (3) what legal protections may already be in place (or not yet in place)?

#1. Activation

In this section, we distinguish between Manual, Always Ready (i.e., speech-activated), and Always On devices. Always Ready devices often have familiar “wake phrases” (e.g., “Hey Siri”). Careful readers will notice that the term “Always Ready” applies broadly to devices that buffer and re-record locally (for the Amazon Echo, roughly every 1-3 seconds) and transmit data only when they detect a sound pattern. Sometimes that pattern is a specific phrase (“Alexa”), sometimes it is customizable (e.g., Moto Voice lets you record your own launch phrase), and sometimes it need not be a phrase at all; for example, a home security camera might begin recording when it detects any noise. Overall, Always Ready devices have real benefits, and (if designed with the right safeguards) can be more privacy protective than devices designed to be on and transmitting 100% of the time. A conceptual sketch of this pattern follows.
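
The sketch below is purely illustrative (the buffer size, detector, and transmit function are hypothetical placeholders, not any vendor’s implementation), but it captures the privacy-relevant property: audio is continuously overwritten locally, and nothing is transmitted until an on-device detector matches.

// Conceptual sketch of the "Always Ready" pattern: audio is buffered
// and overwritten locally, and nothing leaves the device until an
// on-device detector matches the wake pattern. All names and sizes
// here are hypothetical placeholders.
struct AlwaysReadyMicrophone {
    private var ringBuffer: [Float] = []
    private let bufferLimit = 16_000 * 2  // ~2 seconds at 16 kHz

    mutating func ingest(_ samples: [Float]) {
        ringBuffer.append(contentsOf: samples)
        // Older audio is discarded locally, never stored or sent.
        if ringBuffer.count > bufferLimit {
            ringBuffer.removeFirst(ringBuffer.count - bufferLimit)
        }
        if detectsWakePattern(in: ringBuffer) {
            // Transmission to the cloud begins only at this point.
            transmit(ringBuffer)
        }
    }

    private func detectsWakePattern(in samples: [Float]) -> Bool {
        // Placeholder for an on-device keyword or sound spotter.
        return false
    }

    private func transmit(_ samples: [Float]) {
        // Placeholder for the network call a real device would make.
    }
}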

#2. Data Transmitted

In this section, we demonstrate the variety of data that can be transmitted via microphones. If a device is designed to enable speech-to-text translation, for example, it will probably need to transmit data from within the normal range of human hearing — which, depending on the sensitivity, might include background noises like traffic or dogs barking. Other devices might be designed to detect sound in specialized ranges, and still others might not require audio to be transmitted at all. With the help of efficient local processing, we may begin to see more devices that operate 100% locally and only transmit data about what they detect. For example, a city sensor might alert law enforcement when a “gunshot” pattern is detected.
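
As a hedged illustration of that last design, the sketch below shows a device that keeps all audio on-device and transmits only a small, derived event record; the event fields and the endpoint are hypothetical placeholders.

import Foundation

// Sketch of a device that processes audio entirely locally and
// transmits only what it detects. The fields and endpoint are
// hypothetical.
struct DetectedEvent: Codable {
    let label: String       // e.g., "gunshot"
    let confidence: Double  // detector confidence, 0.0-1.0
    let timestamp: Date
}

func report(_ event: DetectedEvent, to endpoint: URL) {
    // Only this small, derived record leaves the device;
    // the raw audio never does.
    guard let payload = try? JSONEncoder().encode(event) else { return }
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.httpBody = payload
    URLSession.shared.dataTask(with: request).resume()
}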

#3. What Are the Existing Legal Protections?

In this section, we identify the federal and state laws in the United States that may be leveraged to protect consumers from unexpected or unfair collection of data using microphones. Although not all laws will apply in all cases, it is important to note that certain sectoral laws (e.g., HIPAA) are likely to apply regardless of whether the same kind of data is collected through writing or through voice. In other instances, the broad terms of state anti-surveillance statutes and privacy torts may apply. Finally, we outline a few considerations for companies seeking to innovate, noting that privacy safeguards must be twofold: technical and policy-driven.


New Infographic: Understanding Uses of Microphones in Internet of Things (IoT) Devices


FOR IMMEDIATE RELEASE                          

August 10, 2017

Contact:

Stacey Gray, Policy Counsel, [email protected]

Melanie Bates, Director of Communications, [email protected]

New Infographic: Understanding Uses of Microphones in Internet of Things (IoT) Devices

Washington, DC – Today, the Future of Privacy Forum released an infographic, “Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices.” To enable the benefits of new voice-based services while protecting data privacy, the infographic explains the range of possible uses of microphones in connected devices. Microphones & the Internet of Things describes for consumers, in an easily digestible format: 1) how microphones are used in home devices; 2) the different ways these devices can be activated (“Manual,” “Always Ready,” or “Always On”); 3) the types of data that can be transmitted; and 4) current U.S. legal protections and best practices.

“Voice is an increasingly useful interface to engage with technology,” said Stacey Gray, FPF Policy Counsel. “Consider the Amazon Echo, which is activated by a spoken command (“Alexa”), or Apple’s personal assistant Siri (“Hey, Siri”), or the Smart TVs that are incorporating voice interactions. But consumers don’t always understand when and in what ways these devices are actually collecting information, leading to legitimate concerns that companies can address through transparency and strong privacy safeguards.”

By their method of activation, consumer devices can be categorized as manual, always ready, or always on. In the past, most recording devices could be considered on or off. Many new voice-based home personal assistants today are “always ready” because they do not begin transmitting data off-site until they detect a wake phrase. The infographic FPF released today describes these categories:

  1. manual (requiring a press of a button or other intentional physical action);
  2. always ready (requiring a spoken “gate phrase”); and
  3. always on (the device transmits data 100% of the time, with further processing occurring externally).

Microphones & the Internet of Things also explains that microphone-enabled devices do not all transmit the same kinds of audio data, and some transmit no audio data at all. Some devices, such as smart speakers, can use microphones to do things like calibrate sound to the shape of a room for better music quality.

The publication of Microphones & the Internet of Things follows last year’s FPF paper, Always On: Privacy Implications of Microphone-Enabled Devices. The paper identifies emerging practices by which manufacturers and developers can alleviate privacy concerns and build consumer trust in the ways that data is collected, stored, and analyzed. Today’s release also coincides with the 2017 National Conference of State Legislatures Legislative Summit, where Gray spoke on a panel titled “The Future of Artificial Intelligence and Voice Recognition Technology.”

“Information networks and devices that make up the ‘Internet of Things’ promise great benefits for individuals and society,” Gray said. “However, if we do not have the right guiding principles or necessary privacy safeguards, consumers will lose trust in the evolving technologies. We need to address security and privacy issues to allow the Internet of Things to achieve its full potential.”

### 

The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Congratulations to Former FPF Advisory Board Member, Jim Byrne, the new General Counsel of the Department of Veterans Affairs

Former FPF Advisory Board Member (and former IAPP Board Chairman) James Byrne was confirmed on Friday by the United States Senate to be General Counsel of the Department of Veterans Affairs.

Jim has had a great career as a privacy leader at Lockheed Martin, and is a great choice to help address the many challenges facing the VA and our veterans. We have appreciated his advice, support and friendship and look forward to his success. We wish him good luck!

Jim most recently served as Associate General Counsel and Chief Privacy Officer at Lockheed Martin Corporation where he was also the company’s lead cyber and counterintelligence attorney.

Prior to joining Lockheed Martin, Mr. Byrne served as the career Senior Executive Service Deputy Special Counsel with the Office of the United States Special Counsel, and both General Counsel and Assistant Inspector General for Investigations with the Office of the Special Inspector General for Iraq Reconstruction.

Jim has over 20 years of experience in the public sector, including service as a deployed Marine infantry officer and a U.S. Department of Justice (DOJ) international narcotics prosecutor. For the past ten years, he has volunteered on the Executive Board of Give an Hour, a non-profit organization that has developed national networks of volunteer professionals capable of providing complimentary and confidential mental health services in response to both acute and chronic conditions that arise within our society, beginning with the mental health needs of post-9/11 veterans, servicemembers, and their families. Mr. Byrne is a Distinguished Graduate of the U.S. Naval Academy, where he received an engineering degree and ultimately held the top leadership position of Brigade Commander. He earned his J.D. from Stetson University College of Law in St. Petersburg, Florida, and started his legal career as a judicial law clerk to the Honorable Malcolm J. Howard, U.S. District Court, Eastern District of North Carolina.