Protecting privacy and promoting inclusion with the 'Internet of Things'

To technologists and innovators, the “Internet of Things” (IoT) represents a world of exciting new benefits that will solve important technical and social problems. To critics, IoT represents a world of pervasive surveillance, with toys that spy on kids and microphone-enabled devices recording and retaining our most personal data. As a think tank focused on helping chief privacy officers of companies large and small navigate privacy challenges, and on advocating for ethical data practices in support of emerging technologies, we believe both sides are right. From traffic management to healthcare improvements, a wide range of benefits will be derived from the information networks created by the IoT. There is the potential to improve personal and public safety, increase consumer convenience, provide environmental benefits and promote business innovation. But without the right guiding principles and necessary privacy safeguards, consumers will lose trust in these evolving technologies. We need to address security and privacy issues to ensure that the IoT achieves its full potential.

Recognizing this need, Samsung recently hosted a conference bringing together leaders from both government and industry to discuss the future of IoT. In his opening remarks, Oh-Hyun Kwon, vice chairman and CEO of Samsung Electronics, emphasized that the conversation around the possibilities of IoT should shift from focusing on smart homes, offices and factories to smart communities, smart nations and a smarter world with better living standards for everyone, everywhere. In comments we filed recently for input into a new Department of Commerce green paper on shaping the future of IoT, we discussed ways IoT technologies are improving the day-to-day quality of life for people with low income, people with disabilities and traditionally underserved populations, among others. For example:

It is important that we do not lose sight of the broad hope that IoT technology will not simply be more gadgets for the affluent, but also a platform for improving quality of life for the traditionally underserved. As government policymakers and regulators examine, understand and embrace emerging IoT technologies, they must encourage strategies that benefit everyone while at the same time applying commonsense privacy protections that build trust in IoT technologies and help ensure that consumers enjoy the full benefits of IoT sensors and devices.

This piece was originally published on June 29, 2016, 11:16 AM ET, by The Hill. 

W&L Law Review Publishes First-ever Disclosure of Facebook Internal Review Process

Today, Washington and Lee University (W&L) published a piece about a recent study titled “Evolving the IRB: Building Robust Review for Industry Research.” The study was authored by Molly Jackman and Lauri Kanerva of Facebook.

W&L explains:

“According to the authors, companies increasingly conduct research in order to decide what products to build and to improve customers’ experience with those products. But they say that existing ethical guidelines for research do not always completely address the considerations that industry researchers face, and they argue that companies should develop principles and practices that take into account the values set out in law and ethics. In Facebook’s case, this means maintaining a standing committee of five employees, including experts in law, ethics, communications, and policy to vet research proposals and identify ethical concerns.”

Read the full piece on W&L’s website.

FTC Settles with Major Ad Platform for Deceptive Location Tracking via Wi-Fi

The FTC announced a settlement today with InMobi, a major advertising platform provider, for engaging in deceptive location tracking practices. As explained below, InMobi used alternative methods to collect location data from users, even after the users had chosen not to share their location in apps via Location Services. But InMobi’s major mistake—misrepresenting the fact that they were collecting location data anyway via Wi-Fi networks—is one that many companies need to pay close attention to. There are many ways that location is collected about mobile devices, and describing the options correctly can be difficult, especially if a partner’s practices are not transparent.

As the Future of Privacy Forum staff have explained in filings to the FTC and FCC as well as in a 2015 report on cross-device tracking, there are many ways that consumer devices are tracked. In this summary, we explain what exactly was happening, and how controls over various methods of location sharing are often misunderstood.

InMobi collected location information regardless of the common Location Services permission provided by users

InMobi provides a mobile advertising platform; by partnering with InMobi and integrating their software development kit (SDK), app developers can monetize their apps through targeted advertisements, and advertisers can target consumers via any apps which have integrated InMobi’s SDK.

Much of this advertising is geo-targeted—InMobi gives advertisers the ability to target ads based on the user’s precise location, as well as the pattern of locations where the user had been over the previous two months. On iOS and Android phones, the operating system requires that an app ask your permission before it shares your location via Location Services, so consumers (and app developers) assumed that this geo-targeting was based on opt-in consent. And in fact, InMobi told developers that geo-targeted ads were based on opt-in consent. So far, so good.

The big problem—and the reason they were just penalized by the FTC—was that prior to December 2015, even when a user had declined to provide their location by selecting “no” in response to the app’s request (or turning it off manually in the phone Settings), InMobi used an alternative method to infer their location via the information collected about the Wi-Fi network to which the user was connected and/or the Wi-Fi networks in range of the device. By compiling this information into a geo-location database (along with the more detailed information from users who had opted in to Location Services), InMobi could match Wi-Fi networks to specific locations. Despite this practice, the company told developers that geo-targeting was only available if users gave their permission via Location Services, thus opening the door to an injunction and civil penalties from the FTC for deceptive practices.

No Surprise: Many Alternative Methods for Location Targeting Exist

InMobi’s behavior was deceptive because they misrepresented their data collection—that is, they told the public and app developers that geo-targeting required opt-in consent, stating that they would respect users’ choices, and then didn’t respect those choices. However, the use of Wi-Fi itself to infer location is not new, and comes as no surprise. We have explained, in filings to the FTC and FCC, as well as in a report on cross-device tracking, that it is easier than ever to gather location data through smartphones using a variety of methods.

Of the methods by which apps can determine location, users are often most familiar with Location Services, the service controlled by the mobile operating system (OS). This is the primary way apps request location permission, and it’s usually optimal because it aggregates data from different sources—including GPS, cellular triangulation, nearby Wi-Fi signals, and Bluetooth positioning—to pinpoint the device’s location more accurately than any individual system.

But most users are not familiar with the range of other methods that can be used to determine a device’s location. These include cell tower location (cell towers broadcast unique Cell IDs, which are compiled in publicly available databases); carrier triangulation (uniquely, mobile ISPs can analyze signals from multiple surrounding cell towers); Wi-Fi networks (as explained below); beacons (small radio transmitters that broadcast one-way Bluetooth signals to apps that can receive them to infer proximity); and mobile location analytics (passive detection of devices’ Wi-Fi MAC addresses or Bluetooth addresses to determine things like airport or retail traffic).
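As a small illustration of the beacon technique mentioned in the list above, here is a minimal Python sketch that converts a beacon’s received Bluetooth signal strength (RSSI) into a rough distance estimate using the standard log-distance path-loss model. The constants are illustrative assumptions, not values from any particular beacon vendor or SDK.

```python
# Sketch: estimating rough proximity to a Bluetooth beacon from received
# signal strength (RSSI), using the log-distance path-loss model.
# The constants below are illustrative assumptions, not vendor values.

TX_POWER_DBM = -59.0   # assumed RSSI measured at 1 meter from the beacon
PATH_LOSS_EXP = 2.0    # path-loss exponent: ~2.0 in free space, higher indoors

def estimate_distance_m(rssi_dbm: float) -> float:
    """Approximate distance (meters) implied by a received signal strength."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

for rssi in (-59, -70, -80):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")
```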

How does an app determine location through Wi-Fi?

Even without access to Location Services, apps can infer geo-location through the device’s routine scanning for nearby Wi-Fi networks. Large databases exist of the unique identifiers (MAC addresses and SSIDs) of wireless routers and their known locations, and these databases are continuously updated in a variety of ways, including by the mobile operating system itself (with permission during set-up). An early report, in 2014, specifically documented InMobi’s use of local and previously logged Wi-Fi networks to capture location information.

Mobile devices can infer geo-location by scanning for the MAC addresses and SSIDs of nearby publicly broadcasted Wi-Fi access points.
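To make the mechanics concrete, here is a minimal Python sketch of this kind of lookup. It assumes a toy, in-memory database mapping access points’ MAC addresses (BSSIDs) to known coordinates (real services maintain databases of hundreds of millions of access points) and estimates the device’s position as a signal-weighted average of the matching entries. All identifiers and coordinates below are fabricated.

```python
# Sketch: inferring a device's location from the Wi-Fi access points it can
# see, by looking up each access point's MAC address (BSSID) in a database of
# known locations and averaging the hits, weighted by signal strength.
# Every database entry below is fabricated for illustration.

AP_DATABASE = {
    "a4:2b:8c:00:11:22": (38.9072, -77.0369),  # hypothetical AP near Washington, DC
    "f0:9f:c2:33:44:55": (38.9080, -77.0375),
    "3c:84:6a:66:77:88": (38.9065, -77.0360),
}

def estimate_location(scan_results):
    """scan_results: list of (bssid, rssi_dbm) pairs from a device's Wi-Fi scan."""
    weighted_lat = weighted_lon = total_weight = 0.0
    for bssid, rssi in scan_results:
        if bssid not in AP_DATABASE:
            continue  # unknown access point; contributes nothing
        lat, lon = AP_DATABASE[bssid]
        weight = 1.0 / max(1.0, abs(rssi))  # stronger signal -> larger weight
        weighted_lat += lat * weight
        weighted_lon += lon * weight
        total_weight += weight
    if total_weight == 0:
        return None  # no known networks in range
    return (weighted_lat / total_weight, weighted_lon / total_weight)

print(estimate_location([("a4:2b:8c:00:11:22", -48), ("f0:9f:c2:33:44:55", -70)]))
```

Note that nothing in this sketch touches the operating system’s Location Services permission; it requires only the results of a Wi-Fi scan, which is why controls focused solely on Location Services did not stop this form of tracking.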

Deception via Misrepresentation to App Developers

It’s particularly interesting to note that in addition to making certain public statements in their own marketing campaigns, InMobi also misrepresented their geo-targeting practices in their statements to app developers. The FTC’s Complaint focuses on the fact that the InMobi SDK integration guides for Android and iOS developers contained inaccurate statements. As a result, the FTC states:

“. . . numerous application developers that have integrated InMobi SDK have represented to consumers in their privacy policies that consumers have the ability to control the collection and use of location information through their applications, including through the device location settings. These application developers had no reason to know that Defendant tracked the consumer’s location and served geo-targeted ads regardless of the consumer’s location settings.” FTC Complaint para. 37 (emphases added).

This focus is especially interesting in light of the fact that most FTC enforcement actions for deceptive business practices focus on misrepresentations in a company’s own privacy policy.

Going Forward

Apps can provide great value to consumers by using location for a wide range of services, and geo-targeted ads can often provide useful, relevant information. But app developers need to understand and demand transparency from their partners so that they can be accurate and honest with their consumers.

Read the FTC’s Complaint and Settlement here.

Read FPF’s Cross-Device Tracking report here.

For media inquiries, contact:

Melanie Bates

[email protected]

The FAA released rules for the operation of commercial drones

This morning, the Federal Aviation Administration (FAA) released the highly anticipated rules governing the operation of small UAS (sUAS) for commercial purposes.  The new rules are scheduled to take effect in late August – until that time, commercial operators may continue to operate under Section 333 exemptions.  As expected, Part 107 generally follows the proposed rules that were contained in the Notice of Proposed Rulemaking (NPRM) that was issued by the FAA in February 2015.

One of the most significant changes for industry is that commercial operations that fit within the framework of Part 107 will no longer require approval by exemption, which has typically taken months to secure.  Undoubtedly, the new framework will mean increased efficiency for commercial operators who will also not be required to secure airworthiness certification for their sUAS.

Read the full story by Diana Marina Cooper in Robohub.

Samsung Electronics Vice Chairman Traveling to the United States for Internet of Things – Transforming the Future Conference

* * * MEDIA ALERT & INTERVIEW AVAILABILITY * * *
Samsung Electronics Vice Chairman O.H. Kwon to Host June 21st Inaugural Internet of Things – Transforming the Future Conference in Washington, DC

Vice Chairman Traveling to the United States to Make Important Announcements About Samsung’s Vision for the Internet of Things

WHAT:

On June 21, 2016, Samsung will host its inaugural “Internet of Things – Transforming the Future” conference at the Washington Post, during which the company will lay out its vision for a human-centered approach to the Internet of Things (IoT) that focuses on the outcomes the technology will create for people and societies across the globe. As IoT is the connective tissue that brings the physical and digital world together into a system for smarter living, Samsung is hosting the event and convening leaders from government and industry to ensure that the rapid expansion of IoT benefits everyone, everywhere through the transformative power of innovation. The Washington, DC event will be the beginning of an international series of dialogues on IoT.

WHEN:

Tuesday, June 21, 2016

8:00 AM – 11:30 AM

WHERE:

The Washington Post Building

1301 K Street NW

Washington, DC 20071

WHO:

Doug Davis, Senior Vice President and General Manager, Internet of Things Group, Intel Corporation

Dean Garfield, President and CEO, Information Technology Industry Council (ITI)

John Godfrey, Senior Vice President for Government Affairs, Samsung Electronics America

Congressman Darrell Issa (R-CA)

Wonkyong Kim, Executive Vice President for Government Affairs, Samsung Electronics America

Oh-Hyun Kwon, Vice Chairman and CEO, Samsung Electronics

Jules Polonetsky, CEO, Future of Privacy Forum

Curtis Sasaki, Vice President of Ecosystems and IoT General Manager, Samsung Strategy and Innovation Center (SSIC) and more.

MEDIA RSVP:

https://iotvisionfortomorrow.splashthat.com/?em=537

Join the conversation on Twitter at #VisionForTech. Stay informed: @SamsungDC

CONTACT:

For interview requests or a live-stream link for reporters unable to travel to Washington, DC, please contact Alex Mitchell, 202-772-6965, [email protected]

From accessibility and aging in place to smart cities and global transportation, IoT has the power and potential to truly transform the world in which we live. The half-day Samsung event will bring together leading consumer, industry and government stakeholders to discuss the many ways IoT can benefit society, and tackle the challenges that remain.  Following the Vice Chairman’s keynote speech, the conference will feature multiple panels engaging in a cross-sector dialogue about IoT technology and the path forward.  The keynote and panel discussions are OPEN PRESS for credentialed media.

###

Examining Ethics, Privacy, and Research Reviews

Today, the Future of Privacy Forum (FPF) and the Ohio State University’s Program on Data and Governance are holding a discussion of ethics, privacy and practical research reviews in corporate settings. This timely event, which follows the White House’s call to develop strong data ethics frameworks, convenes corporate and academic leaders to discuss how to integrate ethical and privacy considerations into innovative data projects and research.  The roundtable is an important extension of FPF’s December 2015 workshop, “Beyond IRBs: Designing Ethical Review Processes for Big Data Research,” supported by the National Science Foundation and the Alfred P. Sloan Foundation.

A major focus of the roundtable is a new paper by Facebook’s Molly Jackman and Lauri Kanerva entitled “Evolving the IRB: Building Robust Review for Industry Research.” The paper provides a detailed overview of the company’s research review process. Informed by consultations with a wide range of experts, it details the specific steps Facebook takes to review its internal research and is an important step forward for corporate research ethics.

“Developing meaningful processes and standards for ethical reviews of data research is one of the critical challenges companies face today,” said Jules Polonetsky, CEO, Future of Privacy Forum. “Socially valuable advances will only be feasible if trustworthy paths are established for academic and corporate researchers alike. Kanerva and Jackman’s paper documenting the Facebook research process provides researchers with a valuable model for serious evaluation of the benefits and risks of new projects.”


PAPERS

Read Facebook’s paper

Read Facebook’s blog

Read Jules Polonetsky’s and Dennis Hirsch’s joint op-ed in Re/code

Read Beyond IRBs: Designing Ethical Review Processes for Big Data

View event photos

Advancing Knowledge Regarding Practical Solutions for De-Identification of Personal Data: A Call for Papers

De-identification of personal information plays a central role in current privacy policy, law, and practice.  Yet there are deep disagreements about the efficacy of de-identification to mitigate privacy risks.  Some critics argue that it is impossible to eliminate privacy harms from publicly released data using de-identification because other available data sets will allow attackers to identify individuals through linkage attacks. Defenders of de-identification counter that despite the theoretical and demonstrated ability to mount such attacks, the likelihood of re-identification for most data sets remains minimal. As a practical matter, they argue most data sets remain securely de-identified based on established techniques.
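To make the critics’ linkage-attack concern concrete, here is a toy Python sketch: a “de-identified” dataset that retains quasi-identifiers (ZIP code, birth date, sex) is joined against a public record that contains names, re-identifying an individual. All of the data is fabricated; the sketch simply illustrates the mechanism under discussion.

```python
# Sketch: a toy linkage attack. A "de-identified" dataset that keeps
# quasi-identifiers (ZIP code, birth date, sex) is joined against a public
# record containing names, re-identifying the individual.
# All data below is fabricated.

deidentified = [
    {"zip": "20001", "dob": "1960-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "20002", "dob": "1985-02-14", "sex": "M", "diagnosis": "asthma"},
]

public_records = [
    {"name": "Jane Doe", "zip": "20001", "dob": "1960-07-31", "sex": "F"},
]

QUASI_IDS = ("zip", "dob", "sex")

for row in deidentified:
    for person in public_records:
        if all(row[k] == person[k] for k in QUASI_IDS):
            print(f"Re-identified {person['name']}: {row['diagnosis']}")
```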

There is no agreement regarding the technical questions underlying the de-identification debate, nor is there consensus over how best to advance the discussion about the benefits and limits of de-identification.  The growing use of open data holds great promise for individuals and society, but also brings risk. And the need for sound principles governing data release has never been greater.

To help address these challenges, the Brussels Privacy Symposium, a joint program of FPF and the Brussels Privacy Hub of the Vrije Universiteit Brussel (Free University of Brussels or VUB), is pleased to announce an academic workshop and call for papers on Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization. Abstracts are due August 1, 2016, with full papers to follow on October 1, 2016. Selected papers will be considered for publication in a special symposium issue of International Data Privacy Law, a law journal published by Oxford University Press. In addition, authors will be invited to present at the workshop, titled Identifiability: Policy and Practical Solutions for Anonymization and Pseudonymization, in Brussels on November 8.

Authors from multiple disciplines including law, computer science, statistics, engineering, social science, ethics and business are invited to submit papers for presentation at a full-day program to take place in Brussels on November 8, 2016.

Submissions must be 2,500 to 3,500 words with minimal footnotes and in a readable style accessible to a wide academic audience. Abstracts must be submitted no later than August 1, 2016, at 11:59 PM ET, to [email protected]. Papers must be submitted no later than October 1, 2016, at 11:59 PM ET, to [email protected].  Publication decisions and workshop invitations will be sent in October.

Full Details Regarding the Call for Papers

Protecting the Privacy of Customers of Broadband and Other Telecommunications Services

The Future of Privacy Forum filed comments with the Federal Communications Commission (FCC) in response to the FCC’s proposed rules regarding the privacy and data practices of Internet Service Providers (ISPs).  The FCC’s March 31, 2016 Notice of Proposed Rulemaking (NPRM or Notice) seeks to regulate ISPs’ data practices pursuant to Section 222 of the Communications Act – a sector-specific statute that includes detailed requirements that apply to telecommunications services, but does not apply to other services offered by broadband providers or to online services operating at the edge of the network (e.g., websites).

The FCC’s notice states that responsible data practices protect important consumer interests. FPF wholeheartedly agrees. Because de-identification of personal data plays a key role in protecting consumers’ privacy, one portion of our comments seeks to ensure that the final FCC rules are consistent with the leading current thinking and practices regarding de-identification.

The FCC’s proposed rules erroneously treat data as either fully de-identified or fully identifiable. FPF’s comments urge the FCC to issue a rule recognizing that de-identification is not a black-and-white binary, but rather that data exists on a spectrum of identifiability.  FPF’s comments take particular note of the Federal Trade Commission’s (FTC) extensive guidance regarding de-identification.  According to the FTC, data are not “reasonably linkable” to individual identity to the extent that a company: (1) takes reasonable measures to ensure that the data are de-identified; (2) publicly commits not to try to re-identify the data; and (3) contractually prohibits downstream recipients from trying to re-identify the data. Industry self-regulatory guidelines use similar approaches.  The FTC and self-regulatory frameworks recognize that data is not either “personal” or “non-personal.” Instead, it falls on a spectrum: with each step towards “very highly aggregated,” both the utility of the data and the risk of re-identification are reduced.

FPF’s comments argue that the proposed FCC rules reflect a rigid binary understanding of personal information that does not align with the spectrum of intermediate stages that exist between explicitly personal and wholly anonymous information. As a result, the FCC rules are simultaneously too narrow and too broad, both excluding and including data uses that should be permitted subject to reasonable controls and safeguards.  In independent comments, FTC staff agree, stating “the [FCC’s] proposal to include any data that is ‘linkable’ could unnecessarily limit the use of data that does not pose a risk to consumers.  While almost any piece of data could be linked to a consumer, it is appropriate to consider whether such a link is practical or likely in light of current technology.  FTC staff thus recommends that the definition of PII only include information that is ‘reasonably’ linkable to an individual.”

FPF therefore proposes an alternative approach, which recognizes that non-aggregate data can be de-identified in a manner that makes it not reasonably linkable to a specific individual. This approach is consistent with leading government and industry guidelines with respect to de-identified data, including key work by the Federal Trade Commission, and is illustrated by FPF’s Visual Guide to Practical De-Identification.
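As one concrete illustration of an intermediate point on that spectrum, the short Python sketch below (our own example, not drawn from the FPF guide) replaces a direct identifier with a keyed pseudonym using an HMAC: records remain linkable to one another for analysis, but mapping a pseudonym back to an identifier requires a secret key that can be held separately and rotated. Under the FTC test described above, a technical measure like this would still need to be paired with a public commitment not to re-identify and contractual controls on downstream recipients.

```python
# Sketch: keyed pseudonymization as one intermediate point on the
# identifiability spectrum. A direct identifier is replaced with an
# HMAC-SHA256 pseudonym: records stay linkable to each other for analysis,
# but reversing a pseudonym requires the secret key. Illustrative only.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-separately"  # assumption: kept outside the dataset

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to a pseudonym under the secret key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "alice@example.com", "visits": 12}
safer_record = {"user": pseudonymize(record["email"]), "visits": record["visits"]}
print(safer_record)  # the email no longer appears; the pseudonym is stable across records
```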

Read the FCC’s Proposed Rules.

Read FPF’s Comments.

June 22nd Webinar: PII, Cookies, and De-Identification – Accounting for Shades of Grey

PII, Cookies, and De-Identification – Accounting for Shades of Grey

Broadcast Date: June 22, 2016

Time: 1:00 – 2:30 pm ET

IAPP members: $159

Nonmembers: $179

Are tracking cookies personally identifiable information (PII)? What about IP addresses, MAC addresses, or mobile advertising identifiers — are they personal data, or can they be described as anonymous? EU laws, as well as HIPAA and COPPA in the U.S., have labeled these identifiers personal in many cases. Yet in most privacy policies, it remains widespread practice to describe these kinds of data points as “non-personal” or “anonymous.”

Despite a broad consensus around the need for and value of de-identification, one of the biggest challenges in the privacy profession remains how to determine when data is, or is not, de-identified. Join us for this in-depth discussion on how and when privacy professionals, industry groups, and regulators around the world have tackled this thorny question.

The panel will also explain and examine a recent effort by the Future of Privacy Forum to categorize data into a spectrum of identifiability, and make time to take your questions. You’ll hear us discuss how to create incentives for organizations to avoid explicit identification and to deploy elaborate safeguards and controls, while at the same time allowing data sets to maintain their utility.

What you’ll take away:

•   A better understanding of current definitions of PII in varying environments and for different usages.

•   Guidance on developing policy to effectively de-identify data sets, and under what circumstances.

•   How current regulatory regimes are defining PII, and how that may change moving forward.

•   An understanding of the Future of Privacy Forum’s new data categorization tool, and how you could use it within your organization.

Moderator:

Kelsey Finch, CIPP/US, Policy Counsel, Future of Privacy Forum

Panelists:

Projjol Banerjea, CIPT, Founder & Chief Product Officer, Zeotap

Leigh Freund, President and CEO, Network Advertising Initiative

Peter Lefkowitz, CIPP/US, Chief Privacy Officer, GE Digital

Eligible CPEs: CIPM, CIPP/C, CIPP/E, CIPP/G, CIPP/US and CIPT (1.50 CPE)

REGISTER HERE

Purpose or Interest: that is the question!

We are pleased to present this guest post from Prof. Lokke Moerel, a leading EU privacy lawyer.  We think her blog and paper are fascinating and important contributions to the current discussion of key privacy topics, including big data, the Internet of Things, and EU data protection laws. 

Let us imagine a mobile phone application that traces your movements and phone calls in order to inform you whether you are likely to catch influenza. The app can even tell you which friends you should avoid in order to minimize your risk of catching the flu – even if those friends have not yet been affected by it themselves (this is not fiction; see the research of MIT Professor Alex Pentland). Would you install this application on your smartphone as soon as you had the chance? Then imagine the use of a similar app by the World Health Organization, in order to protect public health during pandemics. Here are two applications that both collect and process personal data for the same purpose: the monitoring and personalized prediction of health and illness. But the sentiments that these two applications give rise to are likely very different.

When we pause to reflect on this, the conclusion is that it is not so much the purposes for which personal data might be used that are the primary consideration here, but rather the interests that are served by the use of the data collected. And yet, both the current and the upcoming EU data protection regime are based primarily on the purpose for which data are collected and processed, while the interests served play a much more subordinate role. This raises the question of whether this legal regime can be effective and can be considered legitimate as we move into a future whereby society is driven by data.

In my paper with Prof. Corien Prins, “Privacy for the homo digitalis: Proposal for a new regulatory framework for data protection in the light of Big Data and the Internet of Things,” we analyze innovations in data processing resulting from developments such as big data and the Internet of Things, and discuss why these developments undermine the effectiveness and legitimacy of the current as well as the upcoming EU data protection regime, focusing on the private sector. The paper includes a detailed analysis of key data processing principles used in the European data protection regime (purpose limitation, informational self-determination and data quality) and argues that, due to social trends and technological developments, the principle of purpose limitation should be abandoned as a separate criterion. Other principles (such as consent and the performance of an agreement) should likewise no longer be recognised as grounds that, on their own, legitimize data processing.

Instead, we propose a single test: whether there is a legitimate interest for the whole life cycle of personal data processing (collection, use, further use and destruction of data). We argue that such a test will provide a more effective data protection regime, with more legitimacy than the assessment under the existing legal regime, which is based primarily on the purposes for which data may be collected and further used. The test has been drafted in such a way that it enables companies to comply with the new requirements under the upcoming EU General Data Protection Regulation, which will become effective in 2018. We conclude our analysis with proposals to increase the effectiveness of enforcement of the data protection rules.

Read the paper summary

Read the full paper