Tracking Do Not Track: New Ad Network Data Shows That 8 Percent Of Users Have DNT On

Getting Ready For Tracking Transparency Law to Kick In

Starting in 2014, California’s new law AB 370 requires all websites that collect personally identifying information to disclose in their privacy policies how they respond to browser Do Not Track signals.  FPF has launched AllAboutDNT as a resource for companies preparing to make a statement about DNT and as a destination where companies can point consumers for more information.  The site includes instructions for activating the DNT header on a variety of devices, as well as a list of companies that have publicly committed to honoring DNT.
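
Mechanically, the browser DNT signal is just an HTTP request header (`DNT: 1`). As a rough illustration of what a site deciding how to "respond to" the signal has to check, here is a minimal sketch; the helper name and dict-based header handling are our own, not any particular framework’s API:

```python
# Illustrative sketch only: detecting the browser Do Not Track signal.
# The function name and plain-dict headers are assumptions for clarity.

def dnt_enabled(headers):
    """Return True when the request carries a DNT:1 header.

    Browsers that support Do Not Track add "DNT: 1" to every request
    when the user turns the preference on; any other value, or the
    header's absence, means no opt-out preference was expressed.
    """
    return headers.get("DNT", "").strip() == "1"
```

A server honoring DNT could branch on a check like this, for example to skip setting tracking cookies or to serve non-targeted ads for that request.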

We are also releasing interesting data we recently received from Chitika, an online advertising network that honors browser DNT requests.  Chitika reports that its ad network delivers over four billion targeted ads each month across a network of more than 300,000 sites.  A sample of Chitika’s data shows that over 8 percent of users across all browsers currently transmit a DNT signal indicating a preference not to be tracked.


Browser        Share of sample   DNT:1 signal ON
Chrome         22%                2.06%
Safari         13%                5.86%
Firefox        12%                7.35%
IE 6            6%                0.00%
IE 8           13%                0.27%
IE 9            5%                8.82%
IE 10           8%               69.14%
Android         8%                0.00%
other          12%                1.97%
Grand Total   100%                8.39%

This data is likely consistent with what an average ad network would see daily with respect to user implementation of DNT.  However, it does not reflect what percentage of users have actually chosen to turn DNT on or off; determining that number is more complicated, because the above statistics encompass browsers and versions for which DNT is unavailable, as well as browsers that have DNT on by default.  For instance, these numbers include users of IE 10, for which “Express” installations turn DNT on by default.  (Although 69% of IE 10 users have DNT on, IE 10 users make up only 8% of the sample.)  It’s also interesting that almost 31% of IE 10 users do not have DNT:1 on, which suggests that a surprising number have expressly adjusted the setting to allow tracking.  Additionally, the actual Firefox adoption rate of DNT is likely higher than 7.35%, because 10% of the Firefox data set uses Firefox 3, which does not have a built-in DNT feature.
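
As a quick sanity check on the figures above, the grand total is approximately the share-weighted average of the per-browser DNT rates; because the shares in the table are rounded to whole percentages, the result lands slightly below the reported 8.39%:

```python
# Per-browser (share of sample, DNT:1 rate %) pairs, from the table above.
sample = {
    "Chrome":  (0.22, 2.06),
    "Safari":  (0.13, 5.86),
    "Firefox": (0.12, 7.35),
    "IE 6":    (0.06, 0.00),
    "IE 8":    (0.13, 0.27),
    "IE 9":    (0.05, 8.82),
    "IE 10":   (0.08, 69.14),
    "Android": (0.08, 0.00),
    "other":   (0.12, 1.97),
}

# Weighted average: each browser's DNT rate weighted by its sample share.
overall = sum(share * rate for share, rate in sample.values())
print(f"{overall:.2f}%")  # prints 8.34%, close to the reported 8.39%
```

The small gap also shows how heavily the total leans on IE 10: its 8% share at a 69.14% rate contributes about 5.5 of the 8.4 points.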

For detailed statistics broken down by browser version, please download this Excel file.

Testimony on Privacy Policies before the California State Assembly

This morning, Jules Polonetsky, FPF’s Executive Director, will be speaking before the California State Assembly Joint Committee Hearing on Digital Privacy on the question of whether privacy policies adequately protect consumer privacy.  Jules’ testimony will note that “[p]rivacy policies are not useful for many consumers, but are essential accountability mechanisms. Consumers need to be able to rely on the design and user interface of a service to quickly grasp how data is being used.”

Jules will discuss a variety of different mechanisms that organizations can implement both to protect consumers and to offer them value for their data.  FPF has proposed several ideas for places to start, such as (1) more transparency of algorithms, (2) treating data use like a feature, (3) advances in de-identification, (4) serious self-regulation, and (5) effective privacy professionals.  Policymakers need to encourage creative approaches to addressing privacy challenges.

Read Jules’ full testimony here.

The US-EU Safe Harbor: An Analysis of the Framework's Effectiveness in Protecting Personal Privacy

This morning, the Future of Privacy Forum (FPF) released our report on the effectiveness of the U.S.-EU Safe Harbor program.  Our analysis, which we first announced in August, responds to recent recommendations by the European Commission and suggests a number of areas where the framework can be further strengthened.

An overview of the key findings and recommendations in the report is listed below:

Findings

    1. Suspending the Safe Harbor’s protections would weaken personal privacy protections for EU citizens.  Under the Safe Harbor, the FTC has the capacity to enforce against US companies on behalf of EU citizens, simplifying complex jurisdictional issues.  The Safe Harbor program also results in stronger investigatory and monitoring powers for the FTC.
    2. Alternatives to the Safe Harbor program as a mechanism of compliance with the EU Data Directive may not be feasible for all companies.  These alternative mechanisms, including express consent, model contracts, and binding corporate rules, are either too inflexible or too difficult to implement at scale for the wide variety of companies that rely on the Safe Harbor, and they provide regulators with less transparency about data flows.
    3. Eliminating the Safe Harbor will not prevent the NSA from accessing EU citizens’ data.  The global economy, and particularly the transatlantic economy, will continue to rely on international data transfers, and when US-based companies are presented with a valid legal order from the US government for information, companies will be compelled to provide access to that data regardless of their membership in the Safe Harbor.
    4. Restricting the ease of data flows between the EU and US could have an extremely harmful effect on the trans-Atlantic economy.

Recommendations

 

With these reforms, as well as continued vigilance by regulators and compliance bodies, the Safe Harbor will become even more effective in safeguarding citizens’ commercial privacy rights.  FPF hopes this report will help advance constructive dialog about the Safe Harbor framework moving forward.

The full report is available to read here.  

Future of Privacy Forum Releases Report on the Effectiveness of the US-EU Safe Harbor Privacy Framework

For immediate release, December 11, 2013

Report Responds to EU Concerns, Finds the Safe Harbor Program Has Been Effective but Calls for Improvements to Strengthen Trans-Atlantic Privacy Protections

Washington, D.C. December 11, 2013 – The Future of Privacy Forum (FPF), a think tank that seeks to advance responsible data practices, released a report today detailing the effectiveness of the Safe Harbor agreement in protecting personal privacy.  It finds that the Safe Harbor largely has been successful in maintaining strong personal privacy protections for European citizens while allowing the free flow of data between the EU and US.  The report also cautions against the precipitous termination of the Safe Harbor, which has become a cornerstone of trans-Atlantic data transfers, and instead suggests a number of areas where the framework can be strengthened.

Christopher Wolf, Founder and Co-Chair of FPF, who is speaking in Brussels at privacy events this week, said: “This report shows that the Safe Harbor still is our best bet for protecting people’s data in a global economy.  By requiring companies to make commitments that can be enforced by the US Federal Trade Commission, EU citizens gain privacy protections in ways not possible without the Safe Harbor agreement.  We should continue to look for common-sense solutions to improve the agreement without upsetting the balance that has been the driver of the Safe Harbor’s success.”

Jules Polonetsky, Executive Director and Co-Chair of FPF said: “FPF has conducted an in-depth study of the Safe Harbor framework and its alternatives and the results are clear: the Safe Harbor framework is uniquely capable of harmonizing US and EU privacy concerns while encouraging trans-Atlantic data transfers.  Case studies, compliance interviews, and enforcement actions all show that the Safe Harbor is effectively enforced and that participants take heed of Safe Harbor responsibilities.  While improvements to the Safe Harbor can and should be made, our focus needs to remain on growing the program and covering more individuals and businesses with these privacy safeguards.”

To read the full report, click here.

An overview of the key findings and recommendations in the report is listed below.

Findings

Recommendations

 

With these reforms, as well as continued vigilance by regulators and compliance bodies, the Safe Harbor will become even more effective in safeguarding citizens’ commercial privacy rights.

For any questions, or to schedule an interview with Christopher Wolf or Jules Polonetsky, email: [email protected]

European Commission's Safe Harbor Report Released

This morning, the European Commission released its long-awaited report evaluating the US-EU Safe Harbor. The Commission proposed a series of recommendations “to restore trust in data flows between the EU and the U.S.”

The Future of Privacy Forum is currently preparing an in-depth report on the Safe Harbor that will address the concerns presented by the European Commission.  However, FPF believes the European Commission’s criticism is largely misplaced.  Christopher Wolf, FPF Co-Chair, suggests that the Commission’s analysis “does not reveal any significant deficiencies . . . It is clear that the main area of concern is national security access to data, which is not what the Safe Harbor was intended to or can address.”

 

FPF Responds to European Commission Report on US-EU Safe Harbor and Finds Criticisms Misplaced

 For immediate release, November 27, 2013

Future of Privacy Forum Responds to European Commission Report on US-EU Safe Harbor and Finds Criticisms Misplaced

Washington, D.C. November 27, 2013 – The Future of Privacy Forum (FPF), a think tank focused on advancing personal privacy, responded to the European Commission’s statements on the US-EU Safe Harbor program, including a threat that the EU will terminate the agreement in mid-2014 unless certain conditions are met.  FPF cautioned against the precipitous termination of a program that has demonstrable benefits for the protection of personal data transferred from the EU to the US.  FPF is preparing an in-depth report on the Safe Harbor that will address the concerns presented by the European Commission.

Christopher Wolf, Founder and Co-Chair of FPF said:  “Regrettably, officials in the EU have conflated the issues raised by the recent NSA revelations with the issue of whether the Safe Harbor provides effective protection for the personal data of EU citizens.  The issue of national security access to personal data needs to be addressed separately, for example in government talks focused on that issue, not through threats to terminate a demonstrably effective framework for protecting privacy in commercial cross-border flows of personal data.  On first reading, the European Commission’s analysis of the operation of the Safe Harbor does not reveal any significant deficiencies, certainly not enough to call for termination of the cross-border data transfer arrangement.  It is clear that the main area of concern is national security access to data, which is not what the Safe Harbor was intended to or can address.”

Jules Polonetsky, Director and Co-Chair of FPF added:  “The Safe Harbor doesn’t protect against law enforcement or intelligence access, but it provides EU citizens with significant protections against consumer privacy abuses which the Commission risks undermining.  A forthcoming report by the Future of Privacy Forum will examine the efficacy of the Safe Harbor, including what it would mean for the privacy of European citizen data if the Safe Harbor were to be terminated.”

FPF recommends that rather than suspending the Safe Harbor, European and American policymakers should work together to strengthen the program and continue to address questions of national security in a separate context.

To schedule an interview with Christopher Wolf or Jules Polonetsky, email: [email protected]

A New Privacy Paradigm for the “Internet of Things”

Today the FTC is hosting a workshop on the Internet of Things, which will feature many great panelists, including FPF Co-Chairman Christopher Wolf.  Chris and FPF Executive Director Jules Polonetsky have also released a whitepaper today arguing for a new privacy paradigm in this highly connected world.

The whitepaper argues that current implementations of Fair Information Practice Principles (FIPPs) are becoming outdated in the world of the Internet of Things, where nearly every device or appliance will be connected to the internet and collecting data about consumers.  Attempting to provide meaningful “notice” in a world of billions of connected devices is not feasible when many devices lack meaningful user interfaces or screens, and relying on consumers to read thousands of privacy policies will lead many to simply “give up” on their privacy. Similarly, the FIPPs’ strict usage limitations may thwart technological progress, because many socially valuable uses of data are not discovered until the data is already collected.  The challenge, then, is to allow practices that will support progress, while providing appropriate controls over those practices that should be forestalled or constrained by appropriate consent.

To that end, the paper proposes the following principles:

Use anonymized data when practical.  Anonymizing personal information decreases the risk that personally identifiable information will be used for unauthorized, malicious, or otherwise harmful purposes.  Although there is always some risk of re-identification, when data sets are anonymized and stored properly, re-identification is no easy task.

Respect the context in which personally identifiable information is collected.  Managing consumer expectations is a good first step; however, respect for context should not focus solely on what individuals “reasonably” expect.  There may be unexpected new uses that turn out to be valuable societal advances or important new ways to use a product or service.  Rigidly and narrowly specifying context could trap knowledge that is available and critical to progress. Finding a balance may require more sophisticated privacy impact assessments that can analyze the impact of risks or harms and assess the potential benefits for individuals and society.

Be transparent about data use.  Organizations making decisions that affect individuals should, whenever feasible, disclose the high-level criteria used in making those decisions.  This will help ensure that factors such as a user’s ethnicity, sexual orientation, or political preferences do not enter into a company’s determinations when they would be irrelevant or unduly discriminatory.

Automate accountability mechanisms.  Automated tools could monitor data usage and determine whether uses comply with machine-readable policies.

Develop Codes of Conduct.  Self-regulatory codes of conduct will be the most effective means to honor these preferences and others in the rapidly evolving landscape of the Internet of Things.  Codes of conduct could establish frameworks that enable individuals to associate usage preferences with their connected devices.

Provide individuals with reasonable access to personally identifiable information.  This will likely enhance consumer engagement with and support of the Internet of Things.
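
To make the automated-accountability principle above concrete, here is a toy sketch of what a machine-readable policy check might look like: a policy maps data categories to permitted uses, and an automated mechanism flags any use that falls outside the policy. The policy format, category names, and use names are invented for illustration; the whitepaper does not propose a specific format.

```python
# Toy machine-readable policy: permitted uses per data category.
# Categories and use names are hypothetical examples.
POLICY = {
    "location":   {"navigation", "traffic_analytics"},
    "heart_rate": {"fitness_feedback"},
}

def compliant(category, use):
    """Return True if `use` of `category` data is allowed by the policy."""
    return use in POLICY.get(category, set())

# An automated accountability mechanism could log or block violations:
attempted_uses = [("location", "navigation"), ("heart_rate", "ad_targeting")]
violations = [
    (cat, use) for cat, use in attempted_uses if not compliant(cat, use)
]
```

In this sketch, the ad-targeting use of heart-rate data would be flagged as a violation, while the navigation use of location data would pass; a real mechanism would evaluate such rules continuously against actual data flows.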

Latanya Sweeney and Andrea Matwyshyn to Join FTC

Today, the FTC announced the appointment of Latanya Sweeney to serve as the agency’s Chief Technologist and Andrea Matwyshyn as a Senior Policy Advisor on privacy and data security issues.

Dr. Sweeney is well-known for her hardline criticism of “anonymity” in publicly released datasets. In 1997, Sweeney used seemingly anonymous medical data to demonstrate that she could identify sensitive health information about William Weld, then governor of Massachusetts.

Her studies have been widely celebrated in the privacy community, and her efforts have cast scrutiny on the state of de-identification best practices. As the founder and director of Harvard’s Data Privacy Lab, Dr. Sweeney continues to work to develop better, more complex algorithmic solutions to protect individual privacy. More recently, Sweeney has suggested that Google search results may demonstrate “racial bias in society.”  In that study, she found that searches for names typically associated with minorities are more likely to generate advertising related to criminal activity.

Dr. Matwyshyn is an assistant professor in Legal Studies and Business Ethics at the Wharton School, where she focuses on technology innovation and its legal implications for corporate information security and consumer privacy.  She’s written widely about “hackers” and their relationship to the law.  Many of her works also focus on machine-human convergence and what sorts of educational efforts and legal rules are needed to encourage technological entrepreneurship.  She will join the FTC’s Office of Policy Planning in December to advise on privacy and data security policy.

Reuters Talks Tracking in Brick-and-Mortar Retail

Reuters today published an article discussing the ways in which brick-and-mortar retailers are using increasingly sophisticated technology to “catch up” to online retail. Along with the benefits — more efficient stores, targeted discounts — the article raises privacy concerns about the tracking of customer behavior in the offline world. We think, when it comes to addressing these concerns, that our Mobile Location Analytics Code of Conduct is a step in the right direction. Notice, transparency, and customer consent have been, and will continue to be, important principles as we continue to collaborate with companies to ensure responsible data practices in the retail space.

Link: Big Retailer is watching you: stores seek to match online savvy (via Reuters)

Protecting Privacy and People Using Airbnb to Go on Vacation

Last month, Attorney General Schneiderman made waves when he subpoenaed data on 15,000 New York City-based users of Airbnb, the service best known for allowing people to rent out their spare bedrooms or their homes while on vacation. The Attorney General is seeking to identify local landlords who are using Airbnb’s service to regularly rent vacant apartments as illegal hotels without paying the appropriate taxes. Certainly, Mr. Schneiderman has an obligation to uphold the law, and he is within his rights to crack down on illegal hotels and tax evaders. He also has every right to require the assistance of Airbnb in that pursuit, but the Attorney General should be mindful of the potential harms his overbroad subpoena may inflict on both Internet commerce and individual liberty.

At the Future of Privacy Forum, we were surprised by the breadth of Mr. Schneiderman’s request for information. The demand isn’t just for the small number of users who might be abusing the system, but for information revealing the vacation habits of thousands of New Yorkers. The subpoena demands residents’ names and contact information, dates of guest stays, rates charged, and any communications between users and Airbnb about tax issues. That’s a lot of very personal information to be placing into government hands, particularly where, as here, there is no clear evidence of any user wrongdoing.

Airbnb has challenged the subpoena, arguing that it is overbroad and that the Attorney General is essentially embarking on an investigative “fishing expedition.” On Friday, the company received additional support from a number of technology organizations. The Internet Association filed an amicus brief in support of quashing the state’s subpoena. Its brief argues that innovation and technological disruption can put regulators on the defensive, leading them to collect as much information as possible without coming to grips with the state of the law itself. CDT and EFF also filed a brief that highlights both the overreaching nature of the subpoena and the need for courts to carefully review government information requests about large numbers of Internet users.

We agree.  We don’t think that companies get a pass on the law just because they do business via the internet. But regulators need to understand that investigative efforts targeting companies that hold user data must be appropriately narrow and focused.  Wide grabs of consumer data by well-meaning regulators can have a serious impact on consumer privacy. Internet companies, ranging from Airbnb to giants like Google, rely on user trust to deliver their services.  Every day, we hand over copious amounts of personal data in exchange for services because we trust companies to protect our information. Part of that unspoken agreement is that user information will be reasonably protected from unwarranted government intrusions. In the past, when the federal government subpoenaed Amazon.com for information on the reading habits of its customers, the judge criticized the request as “Orwellian.” He wrote, “If word were to spread over the Net – and it would – that the FBI and the IRS had demanded and received Amazon’s list of customers and their personal purchases, the chilling effect on expressive e-commerce would frost keyboards across America.”

In addition to undermining internet commerce, these types of data requests can also have a chilling effect on individual behavior. Already, people have removed their apartments from Airbnb’s listings for fear of having the Attorney General knock on their door. This hurts not only Airbnb, but also individuals looking for a temporary bed in New York City, a notoriously expensive destination. Of course, protective orders can be put in place to limit Mr. Schneiderman’s use of Airbnb’s data, but breaches and leaks happen and privacy risks remain.

If the Attorney General’s Office truly wants to capture the likely abusers of Airbnb’s service, there are any number of categories of information it could seek that would more narrowly target New Yorkers running de facto illegal hotels. For example, individuals who use Airbnb as part of a rental business can make tens of thousands of dollars per year. Mr. Schneiderman could seek information about users who have made more than a certain amount per year from renting their apartments. Or he could look at high-frequency users, subpoenaing information about users who have rented out their apartments an unusually large number of times in the last year. These categories of information would help the Attorney General zero in on problematic users without invading the privacy of thousands of New Yorkers or chilling an innovative business model that benefits consumers.