Use of Limit Ad Tracking Drops as Ad Blocking Grows
Behind the scenes in the escalating war between ad-blocking consumers on one side and advertisers and ad-supported publishers on the other, use of one privacy tool has decreased. Mobile marketing firm Tune reports that, even as ad-blocker downloads rise, use of the limit-ad-tracking feature available on iOS and Android devices has actually dropped.
Use of the limit-ad-tracking setting fell to 16.7% of devices in February from 22% in August 2015, according to Tune, which observed 1.3 billion mobile app installs by about 150 million people over seven months, from August 2015 to February 2016.
This is the first published data measuring use of the limit-ad-tracking feature, said Jules Polonetsky, CEO of the Future of Privacy Forum. Limit ad tracking, he continued, “despite being this central privacy control, really doesn’t get a lot of debate or discussion.”
We are pleased to announce that John Verdi will be joining FPF as our new Vice President of Policy beginning May 23, 2016. John will be responsible for furthering our efforts to advance the FPF agenda on big data, wearables, connected cars, smart cities and ethics, among other privacy-related matters.
John joins FPF after serving as Director of Privacy Initiatives at the National Telecommunications and Information Administration. John led NTIA’s privacy multistakeholder process, and his work touched on unmanned aircraft systems, facial recognition technology, and mobile apps. Prior to NTIA, he was General Counsel for the Electronic Privacy Information Center. John earned his J.D. from Harvard Law School in 2002 and his B.A. in Philosophy, Politics, and Law from SUNY-Binghamton in 1998.
Department of Commerce Director of Privacy Initiatives Joins the Future of Privacy Forum
FOR IMMEDIATE RELEASE
May 9, 2016
Contact: Melanie Bates, Director of Communications, [email protected]
DEPARTMENT OF COMMERCE DIRECTOR OF PRIVACY INITIATIVES
JOINS THE FUTURE OF PRIVACY FORUM
Washington, DC – Today, the Future of Privacy Forum (FPF) announced that John Verdi will join the organization as Vice President of Policy to lead the development of its rapidly growing privacy policy portfolio. Verdi will be responsible for furthering FPF’s efforts to advance responsible privacy practices.
“It has been a great privilege to work with industry, advocates, academics, and policymakers on codes and best practices for new technology,” Verdi said. “I look forward to collaborating with FPF’s diverse group of stakeholders to help shape policies that support beneficial uses of data, while ensuring privacy protections.”
Verdi joins FPF after serving as Director of Privacy Initiatives at the National Telecommunications and Information Administration (NTIA). NTIA, located within the United States Department of Commerce, is the principal advisor to the President on telecommunications and information policy issues. Verdi’s work focused on digital privacy and security issues. He led NTIA’s privacy multistakeholder process, and his work touched on unmanned aircraft systems, facial recognition technology, and mobile apps. Prior to NTIA, Verdi was General Counsel for the Electronic Privacy Information Center, where he supervised the organization’s litigation program, pursued federal lawsuits regarding privacy issues, and authored Supreme Court briefs.
“John’s unique blend of diplomacy and intellect has helped him navigate among stakeholders with a range of privacy viewpoints during his time at Commerce,” said Jules Polonetsky, CEO, FPF. “We look forward to his leadership advancing the FPF agenda on big data, wearables, connected cars, smart cities and ethics, among other privacy-related matters.”
“John’s legal skills, personality, and passion for consumer protection are the key combination needed to help tackle the challenges of advancing responsible practices in this complex and nuanced privacy debate,” said Christopher Wolf, Board President, FPF. “With the addition of John, FPF is proud to expand its team of thought leaders.”
Verdi earned his J.D. from Harvard Law School in 2002 and his B.A. in Philosophy, Politics, and Law from SUNY-Binghamton in 1998.
###
The Future of Privacy Forum is a Washington, DC based think tank that seeks to advance responsible data practices. Learn more about FPF’s work by visiting www.fpf.org.
Clear Channel Faces New Questions Over 'Spying Billboards'
Earlier this year, Clear Channel unveiled a new outdoor advertising initiative that involves telling marketers whether their stores are visited by consumers who have viewed ads on billboards.
Jules Polonetsky, CEO of the think tank Future of Privacy Forum, tells MediaPost that Clear Channel would do a service to consumers by providing more information about how its program works. “It may not be deceptive, but it’s certainly not transparent — which is driving the concerns,” he says.
White House Releases Report on Big Data and Discrimination
Building on previous publications from the President’s Big Data Working Group in 2014 and 2015, the White House has released a new report analyzing the confluence of “Big Data” and civil rights. The report strongly supported the potential Big Data holds, lauding the benefits society stands to reap from proper data analysis. This enthusiasm was tempered by an awareness that using technology can exacerbate inequities; without proper caution going forward, the blind application of technology may serve to amplify preexisting biases.
The report divides its hopes and fears over big data into two categories: problems with algorithmic inputs and the inner workings of the algorithms themselves. The first category comprises fears over the integrity of underlying data sets, cautioning that data sets that are incomplete, outdated or unrepresentative may perpetuate their errors or biases when used as algorithmic grist. The second category addresses opaque and increasingly complex algorithms that reject users or narrow their options based on personal data. These categories are illustrated by case studies in credit access, education, employment and criminal justice.
For direction, the report seeks further conversation between market participants, academia, the government, and the public on data ethics. Specifically, the report emphasizes due process in data-based decisions, including allowing users to correct data and implementing an appeals process for those affected by these decisions. The administration hopes that the end result of this discussion will be a set of algorithmic best practices that is transparent and equitable.
FPF has been an early and eager participant in this discussion and was pleased to see the report’s appreciation for the potential of Big Data. In dealing with the risks of discrimination posed by the realization of Big Data’s potential, FPF sees a strong data ethics framework as a necessary and effective addition to the raw potential of technology. Read about FPF’s ethics work for an understanding of the latest scholarship in this promising area.
Radio Interview – Lauren Smith, FPF Policy Counsel, Discusses the "Textalyzer"
Today, Lauren Smith, FPF Policy Counsel, joined The Takeaway to discuss the legal issues behind the “Textalyzer,” a technology that can tap into a driver’s phone, and whether or not it is the best deterrent to prevent texting and driving.
Challenges with the Implementation of a Right to be Forgotten in Canada
Today, Eloïse Gratton, Partner and National Co-Leader, Privacy and Data Security Practice Group, Borden Ladner Gervais LLP, and Jules Polonetsky, CEO, Future of Privacy Forum, filed a joint submission paper with the Office of the Privacy Commissioner of Canada (OPC) as part of its consultation and call for essays on online reputation, which ends today (April 28, 2016). The OPC has recently chosen reputation and privacy as one of its priorities for the next five years and is currently focusing its attention on the reputational risks stemming from the vast amount of personal information posted online and on existing and potential mechanisms for managing those risks. In January 2016, the OPC published a discussion paper, entitled “Online Reputation, What are they saying about me?” in which it asks whether a right to be forgotten can find application in the Canadian context and, if so, how.
The Future of Privacy Forum and EY Examine Speech Recognition and Smart Devices in New Paper
FOR IMMEDIATE RELEASE
April 28, 2016
Contact: Melanie Bates, Director of Communications, [email protected]
THE FUTURE OF PRIVACY FORUM AND EY
EXAMINE SPEECH RECOGNITION AND SMART DEVICES IN NEW PAPER
Washington, DC – Today, the Future of Privacy Forum (FPF), in collaboration with Ernst & Young LLP, released Always On: Privacy Implications of Microphone-Enabled Devices, a new paper that explores how speech recognition technology fits into a broader scheme of “always listening” technologies. The paper identifies emerging practices by which manufacturers and developers can alleviate privacy concerns and build consumer trust in the ways that data is collected, stored, and analyzed.
Is your smart TV listening to your conversations? Are your children’s toys spying on your family? These types of questions are increasingly raised as the next generation of internet-connected devices enters the market.
“While we work to ensure that the appropriate privacy protections are in place, it is important to remember that the benefits of speech recognition are undeniable,” said Jules Polonetsky, CEO, FPF. “Hands-free control of technology improves the lives of people with physical disabilities, makes healthcare and other professional services more efficient through accurate voice dictation, enhances automobile safety, and makes everyday tasks more convenient.”
“Increasingly ‘smart’ devices challenge the product development lifecycle,” said Sagi Leizerov, an Executive Director with Ernst & Young LLP and EY’s Global Privacy leader. “The implications of new features, how those features should be made known to impacted individuals, the decision of what the default setting should be and what privacy controls should be provided, are at the heart of building trust when adopting additional ‘Internet of Things’ solutions in the daily lives of consumers.”
FPF and EY conclude that the colloquial term “always on” is often not an effective way to describe the range of technologies that use audio and video recording hardware. Instead, three general categories of microphone-enabled devices are proposed:
(1) manually activated (requiring a press of a button, a flip of a switch, or other intentional physical action);
(2) speech activated (requiring a spoken “wake phrase”); and
(3) always on devices (devices, such as home security cameras, that are designed to constantly transmit data, including devices that “buffer” to allow the user to capture only the most recent period of time).
“Ultimately, companies should keep consumers’ expectations in mind when designing the default frameworks of a device,” said Stacey Gray, Legal and Policy Fellow, FPF. “Our expectations will evolve more quickly in some areas than others, and so the manufacturers of devices that are introducing microphones for the first time—like televisions and toys—should go the extra distance to provide additional transparency and in many cases greater levels of control and choice.”
###
The Future of Privacy Forum (FPF) is a Washington, DC based think tank that seeks to advance responsible data practices. FPF includes an advisory board comprised of leading figures from industry, academia, law and advocacy groups. Learn more about FPF’s work by visiting www.fpf.org.
Always on: Privacy Implications of Microphone-Enabled Devices
Is your smart TV listening to your conversations? Are your children’s toys spying on your family?
These questions are being raised as the next generation of Internet-connected devices enters the market. Such devices, often dubbed “always on,” include televisions, cars, toys and home personal assistants, many of which now include microphones and speech-recognition capabilities.
Voice is an increasingly useful interface to engage with our devices. Consider the Amazon Echo, activated by the spoken command “Alexa”; Mattel’s Hello Barbie; or Apple’s familiar personal assistant Siri, activated by “Hey, Siri.” The growing prevalence of voice as the primary way to interact with devices enables companies to collect, store and analyze increasing amounts of personal data. But consumers don’t always understand when and in what ways these devices are actually collecting information.
FPF Testifies at NHTSA Meeting on Autonomous Vehicles
Lauren Smith, FPF Policy Counsel, testified today at the National Highway Traffic Safety Administration’s (NHTSA) second public meeting on autonomous vehicles. The NHTSA is seeking input on planned guidelines for the safe deployment and operation of automated vehicles.
Lauren’s testimony focused on the benefits of autonomous vehicles and the importance of proper data management. “The benefits for facilitating the deployment of autonomous vehicles are so compelling that policymakers should be doing all they can to smooth and speed the way for these technologies to improve as quickly as possible,” Lauren stated. “Applying current best practices around data privacy, paired with existing federal enforcement mechanisms, should facilitate, not stall, this opportunity.”