Facebook Gives Users More Control With New “Facebook Login”

Today at f8, Facebook announced a new version of “Facebook Login,” the system that lets Facebook users log into third-party apps and sites with their Facebook account. The new Facebook Login includes a number of new and improved privacy controls that will help users manage what information gets shared with these third-party apps.

First, the new Facebook Login offers line-by-line control, allowing users to pick and choose what information apps will be able to get from their Facebook profile. For example, if a user wants to share their email address with an app, but not their birthday, they can make that choice before using the app. The new Facebook Login will also prevent apps from posting to Facebook without permission.
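For developers, this line-by-line choice surfaces through the permissions an app requests at login. As a rough illustration, here is a minimal sketch using the Facebook Android SDK; the class and method names (CallbackManager, LoginManager, logInWithReadPermissions) reflect our understanding of that SDK, so treat the details as assumptions rather than official guidance.

    import java.util.Arrays;

    import com.facebook.CallbackManager;
    import com.facebook.FacebookCallback;
    import com.facebook.FacebookException;
    import com.facebook.login.LoginManager;
    import com.facebook.login.LoginResult;

    public class SelectiveLogin {

        // Illustrative helper: request only the data the app actually needs.
        public static void login(android.app.Activity activity,
                                 CallbackManager callbackManager) {
            LoginManager loginManager = LoginManager.getInstance();
            loginManager.registerCallback(callbackManager,
                    new FacebookCallback<LoginResult>() {
                @Override
                public void onSuccess(LoginResult result) {
                    // The user may have approved some permissions and declined
                    // others, so check what was actually granted before use.
                    System.out.println("Granted: "
                            + result.getRecentlyGrantedPermissions());
                    System.out.println("Declined: "
                            + result.getRecentlyDeniedPermissions());
                }

                @Override
                public void onCancel() {
                    // The user backed out of the login dialog entirely.
                }

                @Override
                public void onError(FacebookException e) {
                    e.printStackTrace();
                }
            });

            // Ask for the user's email but deliberately omit user_birthday,
            // mirroring the email-but-not-birthday choice described above.
            loginManager.logInWithReadPermissions(activity,
                    Arrays.asList("public_profile", "email"));
        }
    }

Because the user can decline individual permissions in the login dialog, an app should always branch on the granted set rather than assume every requested permission came back.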

Second, the new Facebook Login has a new feature called “Anonymous Login.” This feature does what its name suggests: it provides an easy way for people to log into an app without sharing any of their personal information from Facebook. Users can still log into third-party apps using their Facebook credentials (obviating the need to remember additional usernames and passwords), but no personal information from their Facebook profile will be shared. People can decide later whether they want to share any additional information, once they understand more about the app.

Third, Facebook is now providing users with a centralized App Control Panel. This dashboard will let users see a list of all the apps they use, manage specific permissions for each app, or remove apps entirely. The control panel, along with all of the features above, will be available on both desktop and mobile platforms.

“Facebook’s improvements to its Login system are a great addition that will give users even more control over how their information is shared with third parties,” said FPF Executive Director and Co-Chair Jules Polonetsky. “Studies have shown that some users have avoided using social log-ins because they weren’t sure what data would be shared. Facebook’s new changes should make users more comfortable using social logins.”

Press Release: EU-US Safe Harbor Essential To Leading European Companies

NEW FPF STUDY DOCUMENTS OVER 150 EUROPEAN COMPANIES PARTICIPATING IN THE US-EU SAFE HARBOR PROGRAM. FROM MAJOR EMPLOYERS SUCH AS ALCATEL LUCENT, ADIDAS, BMW, NOKIA TO FAST-GROWING START-UPS LIKE APP DEVELOPER MIND CANDY, EUROPEAN COMPANIES DEPEND ON EU-US AGREEMENT

_____________________________________________________________________________________

The Future of Privacy Forum has conducted a study of the US-EU Safe Harbor program run by the United States Department of Commerce and has documented that more than 150 European companies are active Safe Harbor participants.

Recently, some European policymakers have called for an end to the Safe Harbor program, while others have called for the program to be improved.  FPF believes that simply terminating the program would have negative consequences for data protection and for companies and consumers not only in the United States, but in Europe as well.  FPF has previously noted the consequences of termination for those European employees who rely on the Safe Harbor program for the processing of their human resources data.¹ FPF’s new study reveals that termination would adversely impact many leading European companies as well.  To date, 152² active Safe Harbor member companies are headquartered or co-headquartered in European countries.  These companies span a wide range of industries and countries and include some of Europe’s largest employers, such as Alcatel Lucent, Adidas, BMW, and Nokia, as well as fast-growing start-ups like app developer Mind Candy.

These and other participating European companies depend on the Safe Harbor program so that their US subsidiaries can effectively use data to conduct research, improve products, pay employees, and serve customers. These companies would therefore be severely burdened and disadvantaged by termination of the program.  FPF agrees that the Safe Harbor needs improvement in a number of key areas and has detailed its recommendations in a recently released report.³  Given the importance of this mechanism to companies and consumers on both sides of the Atlantic, FPF recommends that the Safe Harbor arrangement be preserved and improved.

Methodology:

For the full list of European companies in the Safe Harbor program, or to schedule an interview with Christopher Wolf or Jules Polonetsky, email [email protected].

_____________________________________________________________________________________

ABOUT FUTURE OF PRIVACY FORUM

Future of Privacy Forum (FPF) is a Washington, DC-based think tank that seeks to advance responsible data practices. FPF is led by Internet privacy experts Jules Polonetsky and Christopher Wolf and includes an advisory board composed of leading figures from industry, academia, law, and advocacy groups.



1 http://fpf.org/2013/12/20/the-libe-committee-wants-to-suspend-the-safe-harbor-along-with-thousands-of-eu-employee-salaries/

2 The survey does not include those global companies with EU offices and as a result is a conservative estimate of impacted European companies.

3 For a comprehensive list of FPF’s recommendations, see The US-EU Safe Harbor: An Analysis of the Framework’s Effectiveness in Protecting Personal Privacy, available at http://fpf.org/wp-content/uploads/FPF-Safe-Harbor-Report.pdf

The Need for Privacy and Technology in our Schools: Rethinking Privacy in Education

Last Thursday, Jules Polonetsky participated in a Congressional E-Learning Caucus Briefing on “Data Privacy in Education” on Capitol Hill. Moderating the discussion was Intel’s David Hoffman, who today summarized his thoughts on the event:

The demise of inBloom and many of the findings of the Pew research point to a need for continued dialogue on the issue of education privacy. . . . While transparency and parent engagement are critical, we need to supplement them with a better understanding of how organizations should use student data. We ask all of you to join us in that effort, and help us describe what ‘the appropriate and accountable’ use of data means in education.

His complete thoughts are available at Policy@Intel.

FPFcast: Can Intellectual Property Law Inform Privacy? A discussion with Eric Goldman

April 25, 2014

In this podcast, FPF Legal and Policy Fellow Joe Newman talks with Eric Goldman, Professor at Santa Clara University School of Law, Director of the High Tech Law Institute, and a new member of FPF’s Advisory Board. Professor Goldman discusses the problems that arise when trying to incorporate property law concepts into privacy debates.

It’s no surprise that personal information today is a hot commodity, bought and sold by thousands of entities daily. Companies like Covata and Personal tell consumers they can “Own their data,” suggesting that personal information be characterized as property. Goldman discusses the history of privacy thinking with respect to the idea of “data” as “property” and the general reluctance of privacy thinkers to export intellectual property law concepts to cover privacy. General cynicism about the effectiveness of IP law in the 1990s made many fear that once data was “propertized,” companies “would just find easier ways of grabbing that data,” at the expense of consumers. “People’s norms about property get really weird,” Goldman notes. “A lot of the time the label ‘property’ actually distorts the conversation in fundamental and often unhelpful ways.”

 

New Mobile Tracking Dos and Don’ts from Apple

We wrote in the past about how Apple was addressing privacy concerns about mobile tracking by restricting the identifiers that mobile developers can use to track devices. Apple announced in 2011 that, moving forward, developers would be permitted to track an iOS device only via Apple’s new Advertising Identifier (IDFA).  Although this identifier was specifically designated for advertising purposes, some companies assumed it could be used for analytics as well.

However, in February, reports began to surface indicating that the App Store was rejecting new apps that used the IDFA for analytics but did not host ads. This raised the concern that the IDFA could no longer be used for analytics at all.

In its new iTunes Connect module for developers, Apple explains how the IDFA can and cannot be used within apps distributed on the App Store.

Developers must now specifically indicate as part of their app submission to Apple whether they use the IDFA to serve ads within an app, as well as whether they attribute app installation or other actions within the app to a previously served advertisement. Thus, Apple is permitting the use of the IDFA for serving ads and tracking conversion events.  Other limited uses of the IDFA may yet be permissible, as Apple suggests that developers contact them if they believe they have another acceptable use for the identifier.

As we have mentioned before, the IDFA is subject to a user-controlled privacy setting labeled “Limit Ad Tracking,” found under “Settings –> Privacy –> Advertising –> Limit Ad Tracking” in iOS 7 (in iOS 6, the setting is at “General –> About –> Advertising –> Limit Ad Tracking”).

The new language also clarifies that an application, as well as any third party that interfaces with the application, is subject to the new rules. More discussion of the new iTunes Connect module can be found at TechCrunch.

Google Updates Developer Program Policies with New Rules for Ads

Google has recently updated the set of rules that developers must follow when distributing apps on the Google Play store. The updated rules are designed in part to guide developers in promoting their apps. The rules prohibit apps from promoting themselves through deceptive ads (for instance, by simulating a Google service or app notification), misleading install tactics, or unsolicited SMS messages. Google has also expanded its restrictions on acceptable app behavior, curbing “erotic” content, links to malicious software, and apps that alter a device’s browser settings or bookmarks.

Google’s new policy also increases transparency surrounding in-app purchases: “If your product description on Google Play refers to in-app features to which a specific or additional charge applies, your description must clearly notify users that payment is required to access those features.” These changes should help to boost confidence in the ever-expanding mobile app ecosystem.

Finally, while it is not a new addition to the policy, developers should be aware of the approaching August 1st deadline for moving app tracking from the “Android ID” to the new “Android Advertising Identifier.” After the deadline, apps may track using only the Advertising ID and may not link it to any persistent identifier, such as a MAC address, without the explicit consent of the user. For more information on Google’s developer policy, visit the Android Police site.
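For developers making the switch, retrieving the new identifier goes through Google Play services. The sketch below is a minimal example based on our reading of the AdvertisingIdClient API; error handling is simplified.

    import android.content.Context;

    import com.google.android.gms.ads.identifier.AdvertisingIdClient;
    import com.google.android.gms.ads.identifier.AdvertisingIdClient.Info;

    public class AdvertisingId {

        // Must be called off the main thread: getAdvertisingIdInfo() performs
        // blocking IPC to Google Play services.
        public static String getAdvertisingId(Context context) throws Exception {
            Info info = AdvertisingIdClient.getAdvertisingIdInfo(context);
            if (info.isLimitAdTrackingEnabled()) {
                // The user opted out of interest-based ads; do not track.
                return null;
            }
            return info.getId();
        }
    }

Unlike the Android ID, which survives everything short of a factory reset, the Advertising ID can be reset by the user at any time, which is the point of the migration.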

Comments for the White House "Big Data Review"

This afternoon, FPF submitted comments to help inform the White House Office of Science and Technology Policy’s “Big Data Review.” Announced in January, the White House Big Data Review has been a helpful exercise in scoping out how big data is changing our society.  Through public workshops at MIT, NYU, and Berkeley, the review has solicited thought leadership from a wide array of academics and researchers. Moving forward, FPF believes there is much that can be done to promote innovation in a way that advances privacy.

We advanced the following recommendations for the OSTP Big Data Review report:

1) Embrace a flexible application of Fair Information Practice Principles (FIPPs). Traditional FIPPs have guided privacy policy nationally and around the globe for more than 40 years, and the White House Consumer Privacy Bill of Rights is the most recent effort to carry these principles forward into a world of big data. FPF supports continued reliance on the FIPPs and believes they remain flexible enough to address many of the challenges posed by big data when applied in a practical, use-based manner. Our comments recommend a nuanced approach to their applicability that accounts for modern-day technical realities.

2) Promote the benefits of big data in society. Researchers, academics, and industry have demonstrated how big data can be useful in driving economic growth, advancing public safety and health, and improving our schools. Yet, privacy advocates and the public appear skeptical of these benefits in the face of certain outlier uses. More work is needed to understand the ways big data is already improving society and making businesses more efficient and innovative. This report should highlight the importance of big data’s benefits and identify additional opportunities to promote positive uses of big data.

3) Support efforts to advance practical de-identification, including policy and technological solutions. While the Federal Trade Commission (FTC) has acknowledged that data that is effectively de-identified poses no significant privacy risk, there remains considerable debate over what effective de-identification requires. FPF believes that technical anonymization measures are only one component of effective de-identification. A broader understanding that takes into account administrative and legal safeguards, as well as whether data is public or non-public, should inform conversations about effective de-identification procedures.

4) Encourage additional work to frame context and promote enhanced transparency. The context in which data is collected and used is an important part of understanding individuals’ expectations, and context is a key principle in both the Consumer Privacy Bill of Rights and the FTC Privacy Framework. Respect for context is an increasingly important privacy principle, yet more work by academics, industry, and policymakers is needed on how to properly frame and define this principle. The Department of Commerce-led Internet Policy Task Force (IPTF) should continue its work convening stakeholders and hold programs that could help frame context in an age of big data. At the same time, enhanced transparency is another important tool for promoting public trust in big data. In particular, FPF has called for more transparency surrounding the high-level decisional criteria that organizations may use to make decisions about individuals.

5) Encourage efforts to promote accountability by organizations working with big data. Data privacy frameworks increasingly rely on organizational accountability to ensure responsible data stewardship. In the context of big data, FPF supports the further development of the concept of internal review boards that could help companies weigh the benefits and risks of data uses. In conjunction with the evolving role of the privacy professional, accountability measures can be put in place to ensure big data projects take privacy considerations into account.

6) Promote government leadership on big data through its own procedures and practices. The federal government is one of the largest producers and users of data, and, as a result, the government may inform industry practice and help demonstrate the value of data through its own uses of big data across and among agencies. The Federal Chief Information Officer (CIO) Council is particularly well-positioned to ensure the federal government can maximize the potential of big data with an eye toward privacy protection.

7) Promote global efforts to facilitate interoperability. Recent privacy developments in the Asia Pacific region and the European Union have given new life to constructive collaboration on the cross-jurisdictional issues presented by big data. FPF urges the government to actively promote and maintain existing frameworks that facilitate interoperability, including the US-EU Safe Harbor and the Asia Pacific Economic Cooperation’s (APEC) Cross Border Privacy Rules (CBPR) System.

Big data presents many benefits and potential risks. A thoughtful, balanced analysis of the value choices now at hand is essential. The Administration’s efforts to convene thought leaders have produced many fruitful conversations, and more are needed. At the same time, it will be essential that the Administration provide transparency and a clear plan of action to all stakeholders moving forward. These broad next steps are suggested as a helpful beginning to the work that needs to be done.

Big data offers the United States a great opportunity to provide global leadership on promoting innovation – and protecting privacy. It also presents a challenge, but we have the privacy principles and frameworks needed to thoughtfully address that task.

Chris Wolf Does a "Soap Box" Presentation on Big Data

Tomorrow, Berkeley will host a workshop on Big Data: Values and Governance, the third in the White House’s public events on big data and privacy. Ahead of that discussion, Chris Wolf today previewed his “soap box” presentation on big data.  He suggested a few high-level recommendations, including the need to “fully take stock of the benefits in performing a cost-benefit analysis” around big data.

“It is hard to do a cost-benefit analysis if we are only talking about the costs. Researchers, academics, and industry are using big data to deliver big benefits. We need to understand and promote those benefits so that we can more reasonably evaluate whether and how to address the risks that may arise,” he said.

MAC Addresses and De-Identification

Location analytics companies log the hashed MAC addresses of mobile devices in range of their sensors at airports, malls, retail locations, stadiums, and other venues. They do so primarily to create statistical reports that provide useful aggregated information, such as average wait times in line, store “hot spots,” and the percentage of devices that never make it into a zone that includes a checkout register. FPF worked with the leading companies providing these services to create an enforceable Mobile Location Code of Conduct that restricts discriminatory uses of data, creates a central opt-out, promotes in-store notice, and provides other protections. We filed comments last week with the FTC describing the program in detail.

The only data transmitted by mobile devices that most location companies can log is the MAC address – the Wi-Fi or Bluetooth identifier devices broadcast when Wi-Fi or Bluetooth is turned on. The privacy debate around the use of this technology and the Code has centered on the sensitivity of logging and maintaining hashed MAC addresses, and hinges on whether a MAC address should be considered personal information.

Is a MAC address personal information? Well, it is linked to individual consumer devices as a consistent Wi-Fi or Bluetooth identifier. If enough data is linked to any consistent identifier over time, it is in the realm of technical possibility that the identity of a user can be ascertained. If there were a commercially available database of MAC addresses, it is possible that such a database could be used to identify users. We are not aware of any such MAC address look-up database, but we do recognize that the data collected is linked to a specific device. For this reason, the Code of Conduct treats hashed MAC addresses associated with unique devices as something in between fully anonymized data and explicitly personal data. This reflects the view that Professor Daniel Solove posited effectively when he argued that PII exists not as a binary, but on a spectrum, with no risk of identification at one end and individual identification at the other. In many real-world instances of data collection, the privacy standards in place reflect where the data lies on this spectrum; they consist not only of technical measures to protect the data, but also internal security and administrative controls, as well as enforceable legal commitments. In the case of Mobile Location Analytics, many companies are confident that by hashing MAC addresses, keeping them under administrative and security controls, and publicly committing not to attempt to identify users, they have adequately de-identified the data they log.
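To make the hashing step concrete, here is a minimal sketch of one way a salted hash of a MAC address might be computed. The use of SHA-256 and a company-held secret salt are our own illustrative choices, not a scheme prescribed by the Code. The salt matters because the space of possible MAC addresses (2^48) is small enough that an unsalted hash could be reversed by brute force.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class MacHashing {

        // Hashes a normalized MAC address together with a company-held secret
        // salt. Without the salt, an attacker could precompute hashes for all
        // possible MAC addresses and build a reverse look-up table.
        public static String hashMac(String macAddress, byte[] secretSalt)
                throws NoSuchAlgorithmException {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            digest.update(secretSalt);
            digest.update(macAddress.toLowerCase()
                    .getBytes(StandardCharsets.UTF_8));

            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }
    }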

However, it is important to understand that the Code does NOT take the position that hashing MAC addresses amounts to a de-identification process that fully resolves privacy concerns. According to the Code, data is considered fully “de-identified” only where it may not reasonably be used to infer information about, or otherwise be linked to, a particular consumer, computer, or other device. To qualify as de-identified under the Code, a company must take measures such as aggregating data, adding noise to data, or statistical sampling. These are considered reasonable measures that de-identify data under the Code, as long as an MLA company also publicly commits not to try to re-identify the data and contractually prohibits downstream recipients from trying to re-identify it. To ensure transparency, any company that de-identifies data in this way must describe how it does so in its privacy policy.
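As one concrete illustration of the “adding noise” measure, the sketch below perturbs an aggregate device count with Laplace noise, in the style of differential privacy. The mechanism and the epsilon parameter are our own illustrative assumptions; the Code itself does not prescribe any particular noise technique.

    import java.util.Random;

    public class NoisyCount {

        private static final Random RNG = new Random();

        // Draws a sample from a Laplace(0, scale) distribution using inverse
        // transform sampling.
        static double laplaceNoise(double scale) {
            double u = RNG.nextDouble() - 0.5; // uniform in [-0.5, 0.5)
            return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
        }

        // Releases a device count with noise calibrated so that the presence
        // or absence of any single device is statistically masked. A smaller
        // epsilon means more noise and stronger privacy.
        public static long noisyDeviceCount(long trueCount, double epsilon) {
            return Math.round(trueCount + laplaceNoise(1.0 / epsilon));
        }
    }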

Because most of the companies involved in mobile location analytics do link hashed MAC addresses to individual devices, the data they collect to track devices over time does not qualify as strictly “de-identified” under the Code, and so it is not exempt from the Code. Rather, the companies collect and use what the Code terms “de-personalized” data.* De-personalized data is defined in the Code as data that can be linked to a particular device, but cannot reasonably be linked to a particular consumer. Companies using de-personalized data must:

    1. take measures to ensure that the data cannot reasonably be linked to an individual (for instance, hashing a MAC address or deleting personally identifiable fields);
    2. publicly commit to maintain the data as de-personalized; and
    3. contractually prohibit downstream recipients from attempting to use the data to identify a particular individual.

Companies that merely hash MAC addresses are thus fully subject to the Code’s requirements, including signage, consumer choice, and non-discrimination.

Different kinds of data on the PII/non-PII spectrum — given the inherent risks and benefits of each — merit careful consideration of the combination of reasonable technical measures, administrative controls, and legal commitments that would be most suitable. After all, if “completely unidentifiable by any technical means, no matter how complex or unlikely” were the standard for the use of any data in the science and business worlds, much valuable research and commerce would come to an end. The MLA Code represents a pragmatic view that allows vendors to provide a service that is useful for businesses and consumers, while applying responsible privacy standards.

* Suggestions for a better term than “de-personalized” are welcomed. We considered “pseudonymized” but found the term awkward.

Jules' thoughts on Facebook's "Privacy Dinosaur"

Jules’ new article on LinkedIn discusses Facebook’s recent efforts to remind its users to adjust the sharing settings on their posts. A pop-up notice with an illustrated dinosaur occasionally appears on a user’s home page when the user posts something publicly, reminding the user of the option to limit the post to a smaller circle of friends if desired.

At the Washington Post Live All Things Connected Forum this week, FPF called for efforts to “poke and provoke” designers to get in the game on designing for usable privacy. The Facebook “Privacy Dinosaur” is just one example of how creative designers and technologists can advance transparency and privacy in a way everyday users can appreciate.