This year, the Future of Privacy Forum celebrates its tenth anniversary as a catalyst for privacy leadership and scholarship. In recognition of this milestone, we will host an anniversary celebration on April 30 and release a report on rising privacy issues. We are also publishing a series of blog posts over the next several weeks in which our policy experts share their thoughts on FPF’s work over the past decade, the current privacy landscape, and their vision for the future of privacy.
We will also be asking you to share your thoughts with us: What do you expect the next ten years of privacy to look like?
Read the first post below, a Q&A with Jules Polonetsky, and sign up for our mailing list to receive updates when new posts are published.
Q&A: Jules Polonetsky on the Future of Privacy
Jules Polonetsky has been CEO of the Future of Privacy Forum since its founding ten years ago. He is uniquely suited to bring together privacy experts from industry, consumer advocacy, and government. Before joining FPF, Jules was Chief Privacy Officer at AOL and DoubleClick, New York City Consumer Affairs Commissioner, a New York state legislator, a congressional staffer, and an attorney.
As FPF observes its 10th anniversary, how would you describe the idea behind the Future of Privacy Forum?
Our goal at the Future of Privacy Forum is to provide a roadmap for how our world can experience the benefits of data in a way that is ethical and moral and that preserves our individual sense of self and autonomy.
There are so many areas where data holds the opportunity to improve our health, safety, and happiness, but each of those opportunities is also a source of real risk. We may come up with new medical advances by studying electronic health records, but we need to do that in a manner that respects individual privacy. We want a world that is safer from threats like terrorism, but we don’t want the government monitoring every email and phone call. We believe privacy protections can be integrated with responsible data uses that improve our lives.
So we convene experts from business, government, academia, and civil society to get the best thinking and promote insightful research. We’ve also spurred industry to take actions with real-world impact to protect consumer privacy. For example, the 300 companies that have taken the Student Privacy Pledge commit to legally enforceable obligations, such as not selling students’ personal information. Likewise, companies that support FPF’s Privacy Best Practices for Consumer Genetic Testing Services agree to a set of standards for the collection and use of genetic data, such as not sharing individual genetic data without express permission.
And we’re not just influencing industry practices. Our Open Data Risk Assessment, used in Seattle and other cities, helps government officials navigate the complex policy, technical, organizational, and ethical standards that support privacy-protective open data programs.
How do you balance enthusiasm about innovative technology with an awareness that there can be pitfalls?
As a think tank director working with companies, advocates, governments, and foundations, I’m excited about the latest breakthroughs in technology. I’m also aware of the serious consequences if we don’t put the right policies, structures, and laws in place to make sure society benefits in a way that lifts us all up and takes us in a positive direction.
At FPF, we try to be at the center of the world of privacy. We work with companies to make sure when they use data, they are doing it in a responsible way. We also work with academics and civil society folks who worry that the government or companies could take us down an Orwellian path.
What are you looking for in potential privacy legislation from this Congress?
The White House, Congress, industry and civil society are increasingly in agreement about the need for comprehensive federal privacy legislation, so I hope a productive bill can be passed and signed into law. There are a few things I’ll be looking for in new legislation.
First, the rules should be technology neutral and should cover every type of company and business model. Data is collected across platforms, from the web and mobile apps to wearables, smart home devices, and phone and facial tracking, and one clear set of rules is easiest for consumers to understand.
Legislation should also recognize that data flows internationally and should move us closer to interoperability with Europe and other countries, while preserving room for new ideas and entrepreneurship.
Another foundational principle is fairness, which means using data only as people would reasonably expect and not using it in ways that have unjustified adverse effects. This concept is very similar to how the current FTC Act authorizes legal action against deceptive or unfair practices.
Also, any legislation should account for the increasingly sophisticated privacy tools that are emerging, including differential privacy, which quantifies privacy risk; homomorphic encryption, which enables analysis of data while it remains encrypted; and the many new compliance tools that are helping companies better manage data. A law that will stand the test of time and successfully protect privacy rights while enabling valuable uses of data should include mechanisms to incentivize new privacy-friendly technology.
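An illustrative aside: for readers curious what a tool like differential privacy looks like in practice, below is a minimal sketch of the Laplace mechanism, a basic building block of many differentially private systems. The data set, query, and epsilon value here are hypothetical, and real deployments rely on vetted libraries rather than hand-rolled code like this.

    import random

    def laplace_noise(scale):
        # Laplace(0, scale) noise, generated as the difference of two
        # independent exponential draws with mean equal to `scale`.
        rate = 1.0 / scale
        return random.expovariate(rate) - random.expovariate(rate)

    def private_count(records, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing one
        # person changes the true count by at most 1, so Laplace noise
        # with scale 1/epsilon yields epsilon-differential privacy.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical example: a noisy count of survey respondents over 40,
    # released with a privacy budget of epsilon = 0.5.
    ages = [23, 45, 31, 62, 38, 54, 29, 41]
    print(private_count(ages, lambda age: age > 40, epsilon=0.5))

Smaller epsilon values mean more noise and stronger privacy; tuning that tradeoff between protection and utility is exactly the kind of balance discussed throughout this Q&A.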
What is a cutting-edge privacy issue that you think will grow in importance over the next ten years?
The world of artificial intelligence and machine learning is fascinating and important. We’re just beginning to scratch the surface of the opportunities, but we’re also starting to understand what a dangerous and dark path the misuse of these technologies could lead us down. There is a risk of being overly optimistic and not recognizing that we could take ourselves to a point of no return.
I hope that in ten years – when FPF celebrates its 20th anniversary – we won’t be fixing ineffective rules and laws from this era. For example, the GDPR, if interpreted narrowly, could pose challenges for AI and machine learning, since it specifies that personal data must be collected only for a specified purpose and must be deleted or minimized when no longer needed for that purpose. Yet many machine learning processes today require large, representative data sets to power new models, to ensure accuracy, and to avoid bias. As European regulators seek to advance trusted AI, striking the right balance here will be essential.
And as Congress considers a U.S. privacy framework, we will be advising policymakers on ways that uses of data for machine learning and other innovations can be supported when responsible safeguards are in place.
I’m looking forward to seeing the work of FPF’s AI and Machine Learning Working Group – and all of our program areas – evolve in the coming years!
Check back in the coming weeks for more discussions with FPF policy experts, and sign up for our mailing list to stay informed about important privacy issues.