Consumer Genetic Testing: A Q&A with Carson Martinez

Carson Martinez is FPF’s Health Policy Fellow. She works on privacy challenges surrounding health data, particularly where it is not covered by HIPAA, as is the case with consumer-facing genetics companies, wearables, mobile health and wellness applications, and connected medical devices. Carson also leads the FPF Genetics Working Group and Health Working Group.

How did you come to work on consumer genetic testing issues at FPF?

During my time as a Master’s student studying Bioethics and Science Policy at Duke University, I focused on the ethical and policy challenges of technological innovations in healthcare. At Duke, I had the pleasure of taking an Information Privacy Law class with David Hoffman, Associate General Counsel and Global Privacy Officer at Intel Corporation, who introduced me to the pioneering discussions surrounding data privacy. I ended up writing my Master’s thesis at Intel on how government entities and cloud service providers can take steps to promote use, enhance trust, and foster innovation in cloud storage technologies for medical imaging data.

David, who is also on FPF’s Advisory Board, introduced me to Jules Polonetsky and John Verdi. FPF had already worked with industry to create best practices around wearables, and they wanted to expand FPF’s healthcare work.

As the only Policy Fellow at FPF without a law degree, I come at privacy from a unique perspective. My experience with bioethics gives me a good understanding of the research world and the important balance between making data available to advance scientific fields and protecting patient privacy. I work on challenges related to technologies that are outpacing our health privacy laws, like HIPAA, and on how best to protect this sensitive data in the absence of specific guidelines or regulations. That means working with stakeholders to develop best practices and helping companies follow them.

What are some of the privacy challenges around consumer genetic tests?

As the price of consumer genetic tests continues to drop, they have become very popular purchases and gifts. Millions of people have used consumer genetic tests to learn about their heritage, identify risk for future medical conditions, and connect with family members. Unlike other personal data, genetic data may implicate future generations and have cultural significance for particular groups. This uniquely sensitive data deserves a high level of privacy protection.

Beginning in 2017, we led a process to develop privacy best practices for the consumer genetic testing industry. Stakeholders who participated in that process included the leading consumer genetic testing companies – some of whom originally approached FPF about the project – as well as experts on the science from the National Society of Genetic Counselors and the American Society of Human Genetics, and advocates from groups like Consumers Union.

What did the stakeholders agree should be in the best practices?

The best practices establish standards for genetic data generated in the consumer context that require:

Recently, FamilyTreeDNA’s president apologized to customers for not disclosing an agreement with the FBI that allows agents to test DNA samples and access consumer genetic data without a warrant. That agreement is out of step with the best practices, and we have removed FamilyTreeDNA as a supporter of them.

What new privacy issues could arise around consumer genetic tests?

The science of genetics is still evolving. Someday, we may have access to additional insights from genetic data that we can’t see today. We don’t yet know about many health conditions that may have a genetic component.

In the future, more people will take consumer genetic tests, and the tests will offer more extensive analytics. More companies will seek FDA approval to validate the efficacy and safety of tests that identify markers for health issues. With more people participating in testing, the ability to identify individuals who have not taken tests also will increase. All of that points to the need for a big push on consumer education.

What do you foresee as rising health privacy issues, beyond genetic data?

Looking beyond genetic information, to health data broadly, I expect to see a focus on the Internet of Health Things, fueled by tremendous growth in telehealth, including services tied to wearable or implantable monitoring devices. Those devices could transmit information to doctors, insurers, or employers. As more data is generated, privacy and security concerns may grow as well.

Another rising issue is the interoperability of data. If data is more portable, it can be more easily analyzed. Hopefully, consumer access and the development of third-party APIs to facilitate consumer-directed exchanges will empower people to take control of their own health and biological information and enhance interoperability.

In the medical world, there are more and more opportunities to opt-in to data sharing. Increasingly, I think we will see the development and application of strong privacy engineering solutions to protect sensitive health data and promote sharing for research, such as secure multi-party computation and differential privacy.
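To make the idea concrete, here is a minimal, hypothetical sketch of differential privacy’s core mechanism – releasing an aggregate statistic with calibrated noise so that no single participant’s inclusion can be inferred from the output. The function, the epsilon value, and the example data are invented for illustration; real deployments require much more careful parameter and privacy-budget choices.

```python
import numpy as np

def laplace_count(records, epsilon=0.5, sensitivity=1.0):
    """Release a count under epsilon-differential privacy.

    Adding or removing one person changes a count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon masks whether any
    single individual is included in the result.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(records) + noise

# Hypothetical example: share how many study participants carry a genetic
# marker without revealing whether any particular participant is counted.
participants_with_marker = ["p01", "p07", "p19", "p23"]
print(round(laplace_count(participants_with_marker), 1))
```

Secure multi-party computation is a complementary technique: several parties jointly compute a result, such as an aggregate statistic, without any of them seeing the others’ raw data.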

Many companies with health data are implementing ethical review processes for their research, which is a positive development. Consumer participation in research should be voluntary, informed, and follow established ethical standards.


FPF will be holding its 10th Anniversary Celebration on April 30th in Washington, DC. Join us to look back on the last decade of privacy and for a glimpse of what will be ahead.

Ticket and registration information for the 10th Anniversary Celebration can be found here.

Artificial Intelligence: Privacy Promise or Peril?

Advanced algorithms, machine learning (ML), and artificial intelligence (AI) are appearing across digital and technology sectors from healthcare to financial services, in contexts ranging from voice-activated digital assistants and traffic routing to identifying at-risk students and generating purchase recommendations on online platforms. Embedded in new technologies like autonomous cars and smartphones to enable cutting-edge features, AI is equally being applied to established industries such as agriculture and telecom to increase accuracy and efficiency. Machine learning is already becoming the foundation of many of the products and services in our daily lives, receding into the background much as electricity faded from novelty to infrastructure during the industrialization of modern life 100 years ago.

Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.

Along with the benefits of the increased use of artificial intelligence and machine learning models underlying new technology, we have also seen public examples of the ways these algorithms can reflect some of the most glaring biases in society. From chatbots that “learn” to be racist to policing algorithms with questionable results and cameras that do not recognize people of certain races, the past few years have shown that AI is not immune to problems of discrimination and bias. AI, however, also has many potential benefits, including promising applications for the disability community and more accurate diagnoses and other improvements to healthcare. The incredible potential of AI makes it important to address concerns around its implementation in order to ensure consumer trust and safety. The problems of bias and fairness in ML systems are a key challenge to achieving that trust. The issue is complex – fairness is not a fixed concept, and what is “fair” by one measure might not be equitable by another. While many industry leaders have identified controlling bias as a goal in their published AI policies, there is no consensus on exactly how this can be achieved.

In one of the most notable cases of apparent AI bias, ProPublica published a report claiming that an algorithm designed to predict the likelihood that a defendant would reoffend displayed racial bias. The algorithm assigned each defendant a score from 1 to 10, offered as an assessment of the risk that the defendant would go on to reoffend. That number was then often used as a factor in determining eligibility for bail. Notably, “race” was not among the inputs used in determining the risk level. However, ProPublica found that among defendants who did not go on to reoffend, black defendants were more than twice as likely as white defendants to have received a mid- or high-risk score. ProPublica correctly highlighted the unfairness of such disparate outcomes, but the question of whether the scores were simply racially biased turns out to be more complicated.

The algorithm had been calibrated to ensure that a given risk score “meant the same thing” from one defendant to another. Thus, among the defendants who were given a score of 7, 60% of white defendants and 61% of black defendants went on to reoffend – a statistically similar outcome. However, designing the program to achieve that form of equity (a “7” means roughly a 60% chance of reoffending, across the board) forced a distribution across the low, mid, and high-risk categories that resulted in more black defendants receiving higher scores. There is no mathematical way to equalize both of these measures at the same time within the same model; data scientists have shown that multiple measures of “fairness” may be impossible to achieve simultaneously.
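A small synthetic simulation (invented numbers, not the actual COMPAS data) makes the tension concrete: if two groups receive equally well-calibrated scores but their score distributions differ, their false-positive rates will differ as well.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(score_probs, n=100_000):
    """Draw risk scores 1-10, then outcomes such that a score s means an
    s/10 chance of reoffending -- i.e., the scores are perfectly calibrated."""
    scores = rng.choice(np.arange(1, 11), size=n, p=score_probs)
    reoffended = rng.random(n) < scores / 10
    return scores, reoffended

# Hypothetical score distributions: group B simply skews toward higher scores.
group_a = simulate_group(np.array([.18, .16, .14, .12, .10, .08, .07, .06, .05, .04]))
group_b = simulate_group(np.array([.04, .05, .06, .07, .08, .10, .12, .14, .16, .18]))

for name, (scores, reoffended) in [("A", group_a), ("B", group_b)]:
    calibration_at_7 = reoffended[scores == 7].mean()         # ~0.7 in both groups
    false_positive_rate = (scores[~reoffended] >= 7).mean()   # differs by group
    print(f"group {name}: P(reoffend | score 7) = {calibration_at_7:.2f}, "
          f"false-positive rate = {false_positive_rate:.2f}")
```

Both groups show essentially the same calibration at each score level, yet the group whose scores skew high ends up with a markedly higher false-positive rate – the same shape of disparity ProPublica reported.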

Just as importantly, how the humans within the system use these scores is impossible to quantify. There is no way to ensure that the score for one defendant will be factored in by the judge in the same way as the score for another. Because of this tension, it is important that AI and ML designers and providers are transparent about their interpretation of fairness – what factors are considered, how they are weighted, and how they interact – and that they sufficiently educate their customers about what their technology does and does not do. This is especially important when operating in sensitive fields such as the criminal justice system, financial services, or other applications with legally significant impacts on individuals.

However, even companies whose systems operate outside such highly charged environments must remain cognizant of the potential for bias and discrimination. In 2016, the “first international beauty contest judged by machines” premiered. The program was supposed to select, from more than 6,000 entries, the faces that “most closely resembled human beauty.” It overwhelmingly selected white faces. This is almost certainly because the training and test data sets included more white faces than others, or because those datasets more often associated images of white faces with “beauty” or “beautiful” in some context. The algorithmic model thus “learned” that one of the factors contributing to the conclusion “beautiful” was “whiteness.”

Many types of machine learning, including deep learning, mean that the exact process by which an algorithm arrives at a recommendation is ultimately unclear, even to its programmers. It is therefore all the more important to be able to evaluate outcomes objectively, testing for patterns or trends that reveal an undesirable bias. There is no such thing as a system without bias. Instead, a commitment to fairness means designing a system that can be evaluated for illegal, discriminatory, or simply undesirable outcomes. Algorithms trained on data from historically human systems will mirror some level of human bias – the goal should be to establish baseline practices for how to manage and mitigate this risk.

The most basic requirement is ensuring that the data sets a system is trained and tested on are appropriately representative. The chief science officer of the AI beauty contest mentioned above confirmed that one of the problems was that the algorithm was not trained on a sufficient sample of non-white faces. In a training landscape where one race is more highly correlated with the idea of “beauty,” the algorithm will reflect that bias in its outputs. (For example, facial recognition systems developed in Asia tend to distinguish Asian faces better than white ones, while the opposite is true for systems developed in the United States.)
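One simple starting point is to compare how often each group appears in a training set against the population the system is meant to serve. The groups, counts, and reference shares below are invented for illustration.

```python
from collections import Counter

def representation_gap(train_groups, reference_shares):
    """Difference between each group's share of the training set and its
    expected share of the population; large gaps flag skewed training data."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - expected, 3)
        for group, expected in reference_shares.items()
    }

# Hypothetical face-image training set vs. the population it will serve.
train_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(representation_gap(train_groups, {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}))
# -> group_a over-represented (+0.2); group_b and group_c under-represented (-0.1 each)
```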

Similarly, in law enforcement, training datasets are likely to reflect the historic, disproportionate incarceration of non-white populations and will produce outputs that reflect those systemic biases. However, identifying the potential flaws in datasets can be difficult – there are biases less obvious than those tied to race, gender, or other high-visibility factors. Unconscious or unintended bias can be present in subtler ways, so AI/ML developers must have processes in place to preempt, prevent, or correct such occurrences.

One strategy responds to research showing that the diversity of the people behind an algorithm can make a significant difference. Studies have shown that the racial and cultural diversity of the creators of facial recognition software influences the accuracy of the system, which implies that who trains the systems is an important consideration. By promoting diversity within their workforces, companies are also more likely to increase the accuracy and value of their systems.

Finally, there are statistical tools – additional mathematical models – that can be used to systematically evaluate a program’s outputs as a way of measuring the validity of its recommendations. These auditing programs leverage more math to evaluate the existing process in ways that exceed what human evaluators might be able to identify.
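As a sketch of what such an audit might look like in practice – assuming nothing more than a table of group labels, true outcomes, and model predictions, all made up here – the snippet below reports per-group selection rates, error rates, and a disparate-impact ratio.

```python
import pandas as pd

def audit_outcomes(df, group_col, pred_col, label_col):
    """Summarize a model's outputs by group so disparities can be reviewed
    without needing visibility into how the model works internally."""
    rows = []
    for group, g in df.groupby(group_col):
        positives = g[g[label_col] == 1]
        negatives = g[g[label_col] == 0]
        rows.append({
            group_col: group,
            "selection_rate": g[pred_col].mean(),
            "true_positive_rate": positives[pred_col].mean() if len(positives) else float("nan"),
            "false_positive_rate": negatives[pred_col].mean() if len(negatives) else float("nan"),
        })
    report = pd.DataFrame(rows).set_index(group_col)
    # "Four-fifths rule" style check: flag groups selected at well below
    # the rate of the most-selected group.
    report["disparate_impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    return report

# Hypothetical predictions from an upstream model of any kind:
data = pd.DataFrame({
    "group": ["a"] * 6 + ["b"] * 6,
    "label": [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "pred":  [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0],
})
print(audit_outcomes(data, "group", "pred", "label"))
```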

Companies – both those that develop these technologies and the customers that implement them in different areas – have a responsibility to use all the tools in their power to address bias in their machine learning models. From policy requirements to development guidance, hiring diversity, and sufficient training, they must be able to assure their customers that the products and services based on ML models are sufficiently equitable for their particular application.

The unique features of AI and ML include not just big data’s defining characteristic of tremendous amounts of data, but the additional uses of that data and, most importantly, the multi-layered processing models developed to harness and operationalize it. AI-driven applications offer beneficial services and research opportunities, but they pose potential harms to individuals and groups when not implemented with a clear focus on controlling for, and managing, bias. The scope of these systems’ impact means it is critical that associated concerns are addressed early in the design cycle, because lock-in effects make it more difficult to modify harmful design choices later. Designs must also include long-term monitoring and review functions, since these systems are built to adapt over time. As AI and ML programs are applied across new and existing industries, platforms, and applications, policymakers and corporate privacy officers will want to ensure that the programs they design and implement provide the full benefits of this advancing technology while controlling for, and avoiding, the negative impacts of unfair outputs, with the ultimate goal that all individuals are treated with respect and dignity.

By Maria Nava and Brenda Leong

Smart Communities: A Conversation with Kelsey Finch

One of FPF Policy Counsel Kelsey Finch’s areas of focus is Smart Communities, a field that draws from many of FPF’s issue areas. From her Seattle office, she has the opportunity to do hands-on work with cities in the Pacific Northwest. Last year, she worked with city officials on Seattle’s first Open Data Risk Assessment, which she hopes will be a model for other municipalities seeking to develop a thorough, transparent data policy that maximizes the utility of public data to residents while minimizing privacy risks to individuals and following the highest ethical standards.

A new generation of tech-savvy city leaders is increasingly turning to better uses of data to solve problems of equity, responsiveness, traffic, and environmental protection in their communities. For example, neighborhoods with limited public transportation options could benefit from a more integrated mobility ecosystem, while better use of data could help with crime detection and efficient deployment of resources. This “Cities as Platforms” approach holds exciting possibilities, as long as municipal governments are guided by sound privacy practices.

Kelsey Finch recently discussed her work on Smart Communities and how we can expect cities to be further transformed by data collection and sharing.

How did you come to work on Smart Communities for FPF?

I had been working on Smart Cities as a Westin Fellow at the International Association of Privacy Professionals, where I collaborated with [Current FPF Senior Fellow and Israel Tech Policy Institute co-Founder] Omer Tene on a research paper dealing with privacy protections in a “hyperconnected” town. When I joined FPF, it was a natural fit to continue our collaboration. I love working in this area because it pulls together so many different threads in privacy debates: location data, sharing data for research purposes, mobility, ethics and algorithms, you name it!

What kind of work has FPF done in this area?

We’ve done everything from speaking engagements, academic papers, convenings of stakeholders – that’s chief privacy officers, city policymakers, community organizers, tech contractors, and the like – to interactive infographics that show all the various ways data is collected and integrated throughout a city. You really can’t think of any of these technologies as being siloed from other sites of data collection; it’s all connected.

I’ve also had the chance to work directly with cities like Seattle to consult on their open data programs, ensuring that they make data as transparent and accessible as possible without sacrificing privacy or ethics. We also help train them in using data portals to evaluate the performance of city policies. Those portals help cities measure things like emergency response time, congestion, and more. Our hope is that by optimizing open data programs in flagship cities, we can create models other communities can follow.

Additionally, I’ve filed comments and provided advice on a range of initiatives, including Chicago’s “Array of Things”, the New York City Mayor’s Office of Information Privacy, the City of Portland’s privacy principles, and the Networking and Information Technology Research and Development (NITRD) Program’s Draft Smart Cities and Communities Federal Strategic Plan. We’re also launching an initiative of our own, Smart Privacy for Smart Cities, which is supported by the National Science Foundation through a nationally competitive grant process. Over this two-year effort, FPF and our partners will host workshops, develop privacy impact assessments, and create a Privacy Leaders Network that helps local government officials navigate emerging privacy issues, connect with their peers, and promote responsible data practices.

What are some of the current debates in the world of Smart Cities?

One of the most important conversations in the smart cities space today is about community engagement and participation in decisions about data and privacy. Privacy is usually thought of as an individual preference, but cities and communities need to make collective decisions – so how do we ensure that individuals’ voices will be heard? We should strive to give individuals choices about how their data is collected and used whenever possible, but when that isn’t possible, what democratic processes do we set up to ensure the legitimacy of these systems? And how do we make sure new technologies and data tools do not advantage one population at the expense of another? This is all relatively new, but we’re seeing a lot of innovation in local privacy governance. For example, Oakland has established a permanent Privacy Advisory Commission to evaluate surveillance technologies, Seattle relied on its Community Technology Advisory Board when developing Privacy Principles, and New York City has created a task force on automated decision-making systems. FPF has also recently published a toolkit to guide state and local officials through privacy stakeholder engagement and communications.

Another big discussion in smart cities is about how to protect personal privacy without losing out on the promise of data- and evidence-based policymaking. Cities often promote Open Data programs, for example, in order to make their activities more transparent and accountable to the public. But a lot of the data that cities might want to share through open data is sensitive and can create serious privacy risks. Part of my job, as the leader of both our smart cities work and our de-identification work, is to help cities evaluate these kinds of risks and get access to new tools so they can serve the public interest in an ethical, privacy-conscious way.
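One of the simplest checks of this kind is k-anonymity: in a proposed open-data release, how small is the smallest group of records that share the same quasi-identifiers? The dataset and column names below are hypothetical.

```python
import pandas as pd

def k_anonymity(df, quasi_identifiers):
    """Smallest group size over the quasi-identifier combination; records in
    small groups are the easiest to re-identify in an open data release."""
    return df.groupby(quasi_identifiers).size().min()

# Hypothetical open-data extract where ZIP code, age band, and gender
# together act as quasi-identifiers.
rides = pd.DataFrame({
    "zip": ["98101", "98101", "98101", "98122", "98122"],
    "age_band": ["30-39", "30-39", "30-39", "20-29", "20-29"],
    "gender": ["F", "F", "F", "M", "M"],
    "trip_minutes": [12, 9, 15, 22, 7],
})
print(k_anonymity(rides, ["zip", "age_band", "gender"]))  # 2 -- low k, risky to release as-is
```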

An equally important but less visible issue is ensuring that local governments start investing in and developing sustainable privacy programs – including Chief Privacy Officers, citywide privacy awareness and training efforts, and routine privacy and surveillance impact assessments. Many cities already operate on a shoestring budget, so it’s especially important that we help officials make the most of their resources and see privacy as a competitive advantage. While public-private partnerships with technology providers and consultants can bring immediate privacy expertise to data-driven projects, it’s important for cities to start building those muscles as well.

What can we expect to see from Smart Cities in the future?

I would say more of what we’re already talking about; we’re still so early in the development of formalized privacy programs for local governments. Over the next few years, I hope to see smaller communities learning from today’s leaders and talking with their own communities about what kinds of privacy safeguards and data uses are right for them. Privacy concerns exist no matter how big or small a community is, and the challenge is creating tools that can work for any community. It will also be interesting to see how cities decide to invest in privacy resources and personnel.

Beyond the particulars of data collection, though, we really need to step back and consider the broader philosophical and political implications of smart communities. What does data justice and equity look like within a city? Which privacy harms should communities be protecting themselves against? Which secondary uses are appropriate for civic data? Should other jurisdictions and branches of government have access to city data, and be able to perform their own analyses? The answers will likely differ from city to city. In order to get to that point, though, we need to continue laying a foundation of responsible data practices and meaningful public engagement in order to earn the public’s trust in smart city technologies and in their local government.

FPF will be holding its 10th Anniversary Celebration on April 30th in Washington, DC! Ticket and registration information can be found here.

Future of Privacy Forum is Turning 10!

On April 30, 2019 from 6:00 PM – 8:00 PM, we will host a 10th Anniversary Celebration in Washington D.C. — and you’re invited! We are delighted to announce that at the 10th Anniversary Celebration we will present the following awards:

Helen Dixon

Data Protection Commissioner, Ireland

Distinguished Public Service

J. Trevor Hughes

President & Chief Executive Officer, IAPP (International Association of Privacy Professionals)

Community Builder

Dale Skivington

Privacy Consultant and Adjunct Professor of Law, University of Colorado Law School

Former Chief Privacy Officer, Dell Inc. and Eastman Kodak Company

Career Achievement

Peter Swire

Elizabeth and Tommy Holder Chair of Law and Ethics, Scheller College of Business, Georgia Institute of Technology

Outstanding Academic Scholarship

Please note that this is a private event. For additional information and ticket options, please click here.

Additionally, FPF would like to thank the Leadership Sponsors who make this event possible:

Date and Time

Tue, April 30, 2019

6:00 PM – 8:00 PM EDT

Location

The Line

1770 Euclid St NW

Washington, DC 20009

Interested in attending?

Tickets and registration information can be found here. Individual tickets are available for purchase for non-members. Proceeds from ticket sales benefit FPF’s Scholarship Fund, which supports the Elise Berkower Memorial Fellowship and the Christopher Wolf Diversity Fellowship. Can’t attend? Show your support with a monetary contribution of your choice.

Interested in sponsoring?

Sponsorship opportunities are available and may be found here. For additional sponsorship opportunities for the 10th Anniversary Celebration, contact Barbara Kelly, Leadership Director, at [email protected].

FamilyTreeDNA Agreement with FBI Creates Privacy Risks

Company’s Deal with Law Enforcement Surprises Consumers and Is Out-of-Step with Industry Norms and Best Practices 

By John Verdi and Carson Martinez

Last week, FamilyTreeDNA announced an agreement with the FBI to allow agents to test DNA samples from crime scenes, develop genetic profiles, and identify familial matches. This agreement marks the first time a prominent private company has agreed to voluntarily provide law enforcement with routine access to customers’ data. Genetic data, properly obtained and analyzed, can help law enforcement solve crimes and improve public safety. However, unfettered law enforcement access to genetic information on commercial services presents substantial privacy risks.

The FamilyTreeDNA agreement is outside industry norms and inconsistent with consumer expectations. FamilyTreeDNA should terminate the company’s agreement with the FBI and take steps to ensure that law enforcement does not access users’ data without appropriate legal process.

Leading genetic testing companies do not turn over consumer data to the government upon request. They require legal process such as a warrant before allowing law enforcement to access genetic data. Constitutional and statutory warrant requirements are longstanding mechanisms that support important values – they can help police solve crimes and protect individuals’ privacy. Warrants are issued based on evidence, typically target a specific individual, and allow a neutral judge to determine whether there is probable cause to suspect that a particular individual is linked to a crime. FBI genetic searches should be predicated on probable cause and conducted pursuant to appropriate process.

FamilyTreeDNA’s agreement is out of step with consumer expectations. Leading genetic testing companies understand that when users send in their DNA to learn more about their health or heritage, they do not expect their genetic data to become part of an FBI genetic lineup. FamilyTreeDNA users have not received meaningful notice or an opportunity to opt in to or out of these searches. If this agreement remains in place and valid legal process is not obtained before genetic data is provided to the FBI, individuals may be erroneously swept up in investigations simply because their DNA was found near a crime scene or at a location where a victim or suspect lived or worked. Genetic profiles turned over to the FBI may also be covertly reused by the FBI on other commercial sites.

Furthermore, FamilyTreeDNA’s agreement conflicts with the Privacy Best Practices for Consumer Genetic Testing Services that FPF published last year. At the time, FamilyTreeDNA announced their support of the Best Practices as a clear articulation of how firms should protect consumers’ privacy. The Best Practices state that genetic data should not be disclosed to or made accessible to third parties, in particular to government agencies, except as required by law or with the separate express consent of the person concerned. The Best Practices also require that companies only process DNA samples and genetic data uploaded by the relevant individual, or with that individual’s permission. These are strong protections for sensitive genetic data.

The approach the FBI would use – sending DNA samples from a crime scene to FamilyTreeDNA for testing and analysis in order to identify individuals – would occur without a warrant. In light of the new agreement, FamilyTreeDNA has been removed as a supporter of the Best Practices.

Law enforcement should obtain a warrant before seeking disclosure of genetic data from companies, and companies should demand valid legal process before disclosing genetic data. Companies should only process DNA samples and genetic information uploaded with an individual’s permission. That way, genetic data can be used to identify suspects and victims – and consumer privacy can be respected.

AI and Machine Learning: Perspectives with FPF’s Brenda Leong

As we prepare to toast our 10th anniversary, we’re hearing from FPF policy experts about important privacy issues. Today, Brenda Leong, FPF Senior Counsel and Director of Strategy, is sharing her perspective on AI and machine learning. Brenda also manages the FPF portfolio on biometrics, particularly facial recognition, and oversees strategic planning for the organization.
Tell us what you think the next 10 years of AI, machine learning and privacy will bring.
Our 10th anniversary celebration will be on April 30. RSVP here.

How did you come to join the Future of Privacy Forum and work on AI and machine learning privacy issues?

My first career was in the Air Force, and my last two assignments before I retired were at the Pentagon and the State Department. I learned that I really enjoy working on policy, and I decided to explore new policy areas after I retired from the military. I went to law school at George Mason University, where I became very interested in telecom issues and privacy law.

People kept telling me, if you want to work in privacy in Washington, DC, you need to meet Jules Polonetsky. So I went to a policy event and cornered Jules. That led to an FPF policy fellowship and I’ve been at FPF ever since – almost five years.

About a year after I joined FPF, Jules – who is an expert prognosticator – suggested we should learn more about AI because it was becoming a focus of the tech industry, incorporated into autonomous vehicles, facial recognition, advertising tech and a lot of other areas. I jumped at the chance and I’ve been working on AI and machine learning issues ever since.

What’s the difference between AI and machine learning?

That’s a good question, and something we explored in The Privacy Expert’s Guide to AI and Machine Learning, which FPF released last October. Most of what has been implemented is machine learning – algorithms that can evaluate their own output and make adjustments to their code without human involvement. Machine learning is used in image recognition, facial recognition, sensory inputs for autonomous vehicles, and many other tasks.
I like the definition of artificial intelligence by Stuart Russell, who wrote one of the key textbooks in this space:

“An entity is intelligent to the extent that it does the right thing, meaning that its actions are expected to achieve its objectives… This notion of doing the right thing is the key unifying principle of AI. When we break this principle down and look deeply at what is required to do the right thing in the real world, we realize that a successful AI system needs some key abilities, including perception, vision, speech recognition, and action.”

There aren’t yet many real-world applications for classic AI that meet that definition. Real-time language translation and email spam blocking come to mind. By the way, Russell’s quote is from Architects of Intelligence: The truth about AI from the people building it by Martin Ford – the current FPF Privacy Book Club selection. Anyone can join the book club and participate in our discussion on February 27.

What are some of the privacy issues around machine learning?

Some machine learning requires almost unimaginable amounts of data – millions of records. Traditional privacy practices emphasize data minimization, where you only collect the data you need for one purpose and keep it only as long as necessary for that purpose. However, data minimization is tough to reconcile with machine learning that needs lots of data, sometimes personal data.

There also can be issues of bias and fairness. Some people are concerned about what a company might do with a profile about them. Even without my name, machine learning can help a company come up with things about me that I don’t know it knows, or even things I don’t know about myself. An example would be an analysis of shopping data about people with similar profiles – that may be very accurate at predicting my preferences and behavior.

If a machine learning program is trained on existing data sets, it can amplify biases that were in the original human-selected data. In that situation, the algorithm has to be adjusted to detect and correct for bias in the data set. In that way, better math is part of the solution. Computer scientists tell us no system is without bias. The point is to understand what biases you have chosen, what priorities are built into the system, and whether that will give you the results you want.

When we’re talking about race or ethnicity, some people ask, “can’t you just take that data out?” but it’s not that easy because many other data fields tend to correlate with race. You need to understand the bias in the data and adjust for it.
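A toy simulation – entirely synthetic data, not any company’s actual system – shows why simply dropping the race field doesn’t work when another field is highly correlated with it:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic data: `race` is excluded from the model's inputs, but `zip_code`
# matches it 90% of the time, so it acts as a proxy.
race = rng.integers(0, 2, size=n)
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)
income = rng.normal(50 + 10 * race, 5)  # historical disparity baked into the data

# A "race-blind" score built only from the remaining fields...
score = 0.5 * income + 5 * zip_code

# ...still produces systematically different outcomes by race.
print("mean score, group 0:", round(score[race == 0].mean(), 1))
print("mean score, group 1:", round(score[race == 1].mean(), 1))
```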

Another set of concerns is about transparency. People want to understand how their data is being used – that’s a key privacy practice. But that can be difficult if the algorithm can change itself. In machine learning, the program is constantly evolving, which makes it challenging to pick out a moment and determine why the program generated a specific result at that time. So traditional transparency analysis, which traces the steps of data use precisely, is hard to do with machine learning. There are ways to analyze it using math and statistics, but they can be tough to understand, which limits transparency.
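One example of those mathematical approaches is permutation importance – a post-hoc check that treats the model as a black box and measures how much its accuracy drops when each input is shuffled. The model and data below are invented stand-ins, not any specific production system.

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_importance(predict, X, y, n_repeats=20):
    """Accuracy drop when each input column is shuffled: bigger drops mean the
    model leans more on that feature, even if its internals are opaque."""
    baseline = np.mean(predict(X) == y)
    drops = []
    for col in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, col] = X_perm[rng.permutation(len(X_perm)), col]
            scores.append(np.mean(predict(X_perm) == y))
        drops.append(round(baseline - np.mean(scores), 3))
    return drops

# Invented "opaque" model and data: only feature 0 really drives the output.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def opaque_model(data):
    return (data[:, 0] + 0.01 * data[:, 1] > 0).astype(int)

print(permutation_importance(opaque_model, X, y))  # feature 0 dominates
```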

What has FPF’s AI and Machine Learning Working Group been up to?

The working group brings together FPF members to stay abreast of how AI and machine learning are being used, learn from outside experts, and review and contribute to FPF documents.

We often have speakers come in to talk about AI and where it is headed. For example, we recently had a computer scientist come in and talk about AI and bias. Our presentations and discussions help the legal and policy people – who tend to be involved with FPF – better understand the technology and how it is being used so they are well-informed in discussions in their companies about products and services that their designers are building.

We also get input from working group members on our publications, like The Privacy Expert’s Guide to AI and Machine Learning, Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models and our publications about facial recognition. The AI and Machine Learning Working Group members have tremendous expertise. It’s great to learn from them and share their perspective with our broader membership and the public.

IoT Devices Should Deal with Privacy Impacts for People with Disabilities

FOR IMMEDIATE RELEASE

January 31, 2019

IoT Devices Should Deal with Privacy Impacts for People with Disabilities

FPF Recommends Approaches to Incorporate Privacy, Accessibility by Design

WASHINGTON, DC – The Future of Privacy Forum today released The Internet of Things (IoT) and People with Disabilities: Exploring the Benefits, Challenges, and Privacy Tensions. The paper explores the nuances of privacy considerations for people with disabilities using IoT services and provides recommendations to address those considerations, including transparency, individual control, respect for context, focused collection, and security.

“Internet of Things devices in homes, cars and on our bodies can improve the quality of life for people with disabilities—if they are designed to be accessible and account for the sensitive nature of the data they collect,” said Jules Polonetsky, CEO of the Future of Privacy Forum. “We expect this first-of-its-kind paper to inspire collaboration among advocates, academia, government, and industry to ‘bake in’ privacy and accessibility from the start of the design process.”

“Data-driven innovation has created new tools that can improve disabled people’s safety, mobility, and independence, leading to enhanced privacy,” said Henry Claypool, Policy Director of the Community Living Policy Center at the University of California, San Francisco, Technology Consultant to the American Association of People with Disabilities and FPF Senior Fellow. “However, companies and advocates should recognize that the IoT can bring unique privacy considerations.”

FPF recommends that companies and policymakers take the following steps to improve the experiences of people with disabilities when they use IoT-enabled devices and to respect their privacy:

  1. Prioritize inclusive design. Accessibility and the privacy of people with disabilities should not be an afterthought for the IoT and new technology developers—people with disabilities should be included in the design of IoT technologies. The appropriate timing for integrating accessibility is during the earliest possible stage of design.
  2. Promote research. In order to successfully build the IoT with universal or accessible design, research—both qualitative and quantitative—is needed to understand how people with disabilities utilize the IoT and feel about the current privacy landscape of the IoT.
  3. Privacy by Design approaches should consider people with disabilities. Companies should take into account the sensitive nature of the data collected from the IoT used by people with disabilities and address those considerations in the design of IoT products.
  4. Foster cross-sector collaborations. Advocates, academia, government, and industry should work together to develop IoT solutions that meet the needs of people with disabilities.
  5. Enhance awareness of data risks and benefits. Policymakers should consider not only the potential enhanced risks that people with disabilities face when using the IoT, but also the enhanced autonomy that these very same technologies provide. Members of the disability community should consider becoming engaged in policy processes and voicing their views on the privacy challenges that they face when using IoT devices and services.

IoT devices and services are empowering people with disabilities to participate more fully and autonomously in everyday life by reducing some needs for human intermediaries or accommodations. In addition to the potential benefits of IoT devices and services for people with disabilities, unique privacy risks and challenges can be raised by the collection, use, and sharing of user data. Depending on the circumstances, privacy can be enhanced or diminished by IoT technologies, creating potential tensions between privacy gains and losses.

FPF received support for the paper from the Comcast Innovation Fund and consulted with the American Association of People with Disabilities (AAPD) Technology Forum.

FPF and Comcast Innovation Fund host event today in Washington, DC

Today, Thursday, January 31, 2019, 4:30-5:30pm ET, FPF and the Comcast Innovation Fund are hosting an event about the IoT and people with disabilities at the XFINITY Store in Chinatown, 715 7th St. NW, Washington, DC 20001. Remarks and a panel discussion will be followed by audience Q&A, refreshments and networking. The remarks and panel discussion will be streamed via Facebook Live at https://www.facebook.com/FutureofPrivacy/.

###
The Future of Privacy Forum is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting www.fpf.org.

Media Contact:

Nat Wood

[email protected]

410-507-7898

FPF Report: IoT Devices Should Deal with Privacy Impacts for People with Disabilities

FPF has released The Internet of Things (IoT) and People with Disabilities: Exploring the Benefits, Challenges, and Privacy Tensions. The paper explores the nuances of privacy considerations for people with disabilities using IoT services and provides recommendations to address those considerations, including transparency, individual control, respect for context, focused collection, and security.

IoT devices and services are empowering people with disabilities to participate more fully and autonomously in everyday life by reducing some needs for human intermediaries or accommodations. In addition to the potential benefits of IoT devices and services for people with disabilities, unique privacy risks and challenges can be raised by the collection, use, and sharing of user data. Depending on the circumstances, privacy can be enhanced or diminished by IoT technologies, creating potential tensions between privacy gains and losses.

FPF recommends that companies and policymakers take the following steps to improve the experiences of people with disabilities when they use IoT-enabled devices and to respect their privacy:

  1. Prioritize inclusive design. Accessibility and the privacy of people with disabilities should not be an afterthought for the IoT and new technology developers—people with disabilities should be included in the design of IoT technologies. The appropriate timing for integrating accessibility is during the earliest possible stage of design.
  2. Promote research. In order to successfully build the IoT with universal or accessible design, research—both qualitative and quantitative—is needed to understand how people with disabilities utilize the IoT and feel about the current privacy landscape of the IoT.
  3. Privacy by Design approaches should consider people with disabilities. Companies should take into account the sensitive nature of the data collected from the IoT used by people with disabilities and address those considerations in the design of IoT products.
  4. Foster cross-sector collaborations. Advocates, academia, government, and industry should work together to develop IoT solutions that meet the needs of people with disabilities.
  5. Enhance awareness of data risks and benefits. Policymakers should consider not only the potential enhanced risks that people with disabilities face when using the IoT, but also the enhanced autonomy that these very same technologies provide. Members of the disability community should consider becoming engaged in policy processes and voicing their views on the privacy challenges that they face when using IoT devices and services.

FPF received support for the paper from the Comcast Innovation Fund and consulted with the American Association of People with Disabilities (AAPD) Technology Forum.

FPF and Comcast Innovation Fund host event today in Washington, DC

Today, Thursday, January 31, 2019, 4:30-5:30pm ET, FPF and the Comcast Innovation Fund are hosting an event about the IoT and people with disabilities at the XFINITY Store in Chinatown, 715 7th St. NW, Washington, DC 20001. Remarks and a panel discussion will be followed by audience Q&A, refreshments and networking. The remarks and panel discussion will be streamed via Facebook Live at https://www.facebook.com/FutureofPrivacy/.

FPF's John Verdi on Privacy Papers for Policymakers

In recognition of the Future of Privacy Forum’s 10th anniversary, FPF policy experts are sharing their thoughts on FPF’s work over the past decade, the current privacy landscape, and their vision of the future of privacy. This week, FPF Vice President for Policy John Verdi discusses the Privacy Papers for Policymakers project, which began in 2010. To read previous installments in this series, click here.

What do you expect the next 10 years of privacy to look like? Share your thoughts by clicking here.


Q&A: John Verdi on Privacy Papers for Policymakers

FPF’s Privacy Papers for Policymakers program brings expertise from academic, tech, and policy circles to Members of Congress, leaders from executive agencies, and their staffs to better inform policy approaches to thorny data protection issues. The event highlights the year’s most influential, practical academic work and connects academics with thought leaders from government, industry, and the advocacy community. Awarded articles are chosen both for their scholarly value and because they offer policymakers concrete solutions and practical insights into real-world challenges. Winners are selected by a diverse team of academics, advocates, and industry privacy professionals from FPF’s Advisory Board. Honorary Co-Hosts Senator Edward J. Markey and Congresswoman Diana DeGette will host FPF and this year’s winning authors as they present their work in the Russell Senate Office Building at 5:30 pm on February 6, 2019. The event is free, open to the general public, and widely attended. A reception will follow. To RSVP, please visit privacypapersforpolicymakers.eventbrite.com.

How have the award-winning privacy papers changed over the last nine years?

The first award winners tended to deal with what we now consider broad topics in corporate privacy practices – the role of Chief Privacy Officers and federal law enforcers, how to value privacy, regulatory innovation, and so on. While those issues are still quite pertinent, recent award winners have been more likely to examine specific aspects of privacy, particular technical advances, or policies at the local, national, or international levels. A few examples from this year’s winners:

What’s been consistent from the beginning?

For nine years, the research and analysis into consumer beliefs, corporate practices, technological solutions, and legal theory compiled in FPF’s Privacy Papers for Policymakers has informed the policy debate in Congress, in the states, and around the world. The papers are a valuable tool for legislators and staff considering the structure and elements of a national privacy framework.

Many of the papers have dealt with calls for an effective national privacy law in the US. In fact, here is the first line of the first Privacy Paper for Policymakers recognized by the Future of Privacy Forum in 2010, “Privacy on the Books and on the Ground” by Kenneth Bamberger and Deirdre Mulligan:

“U.S. privacy law is under attack. Scholars and advocates criticize it as weak, incomplete and confusing, and argue that it fails to empower individuals to control the use of their personal information…”

They continued, “as Congress and the Obama Administration consider privacy reform, they encounter a drumbeat of arguments favoring the elimination of legal ambiguity by adoption of omnibus privacy statutes, the EU’s approach.” If you substitute “Trump” for “Obama” you could write the same words today; you would also find experts making many of the same arguments against imposing the top-down, prescriptive aspects of the EU’s approach in the US.

What are some of the research techniques authors use?

We’ve honored papers from academics, practitioners, technologists and lawyers, which means we’ve seen a wide range of approaches to research and analysis.

Some survey the privacy landscape and make recommendations based on the real-world practices they discover. For Shattering One-Way Mirrors: Data Subject Access Rights in Practice, Jef Ausloos (Postdoctoral Researcher, University of Amsterdam’s Institute for Information Law) and Pierre Dewitte (Researcher, KU Leuven Centre for IT & IP Law) contacted sixty information service providers and requested access to data. They concluded that data access rights in the EU are largely underused and not properly accommodated. Their research not only uncovered what they called an “often-flagrant lack of awareness, organization, motivation, and harmonization,” but also identified concrete suggestions aimed at data controllers, such as relatively easy fixes in privacy policies and access rights templates.

For Designing Without Privacy, Ari Ezra Waldman (Professor of Law and Founding Director, Innovation Center for Law and Technology at New York Law School) conducted an ethnographic study of how, if at all, people designing technology products think about privacy, integrate privacy into their work, and consider user needs in the design process. His paper references and expands upon the work of Kenneth Bamberger and Deirdre Mulligan – work that FPF recognized as one of our first award winners in 2010. Professor Waldman looks at how CPOs’ robust privacy norms can best be diffused throughout tech companies and the industry as a whole.

What’s next for Privacy Papers for Policymakers?

The Privacy Papers for Policymakers program will continue to highlight top scholarship, promote pragmatic solutions to privacy challenges, and generate thoughtful dialogue in Washington DC. We look forward to promoting constructive approaches to data protection around legislative drafting tables and in corporate boardrooms.

Most immediately, we are looking forward to a fantastic event honoring this year’s winning authors. On February 6, 2019, FPF and Honorary Co-Hosts Senator Edward J. Markey and Congresswoman Diana DeGette will host the winning authors as they present their work in the Russell Senate Office Building at 5:30 pm. The event is free, open to the general public, and widely attended. To RSVP, please visit privacypapersforpolicymakers.eventbrite.com.

FPF's Amelia Vance on the Future of Student Privacy

Amelia Vance, Policy Counsel and Director of the FPF Education Privacy Project, is one of the foremost experts in the nation on education privacy. She has a knack for making complex regulations and technical trends accessible to individuals who are not lawyers or computer scientists, but who care deeply about student privacy – school administrators, parents, students, and others. This skill is very much in demand; whether testifying before Congress or sharing her expertise at conferences across the country, Amelia has been a valuable resource for anyone who wants to understand how federal and state laws impact data practices in the classroom. In this installment of our 10th anniversary series, Amelia discusses FPF’s work in education, the current student privacy landscape, and the debates of the future.

Why do you like working on student privacy?

Student privacy is a microcosm of every privacy issue out there, except we’re talking about kids, which makes things so much more sensitive. FPF works on algorithms and ethics, health privacy, research privacy, IoT, online trackers – all of which are part of the student privacy landscape. And because children are recognized – both legally and developmentally – as especially vulnerable, any privacy discussions must be nuanced and thoughtful, because getting it wrong means you can end up derailing a child’s future. Student privacy requires not only legal expertise, but also the ability to put yourself in the shoes of a parent, a teacher, an edtech company, or other stakeholders so you can figure out their concerns and how best to respond to them. Framing the conversation correctly is just as important as getting the policies right.

What sort of work has FPF done on education privacy over the past 10 years? What challenges have arisen during that time?

FPF kicked off its education privacy program in 2014 with the launch of the Student Privacy Pledge. That year, over 100 student privacy bills were introduced in 39 states; the prior year, only one student privacy law had passed. This flurry of legislation highlighted a major gap in law and resources on this issue; the federal student privacy law, FERPA, was passed in 1974, and if you get 15 FERPA experts in a room, they’ll come up with 16 interpretations of any provision. The new state laws were creating mandates for states, districts, and companies without providing the resources and training necessary to implement the laws with fidelity. FPF decided to step in and worked with partners like the Data Quality Campaign, the Software and Information Industry Association (SIIA), ConnectSafely, and the National PTA to create actionable resources for different audiences.

The Student Privacy Pledge has been one of our most successful projects. Co-founded with SIIA, the Pledge is a Federal Trade Commission-enforceable code of conduct for edtech vendors. Now with nearly 330 companies as signatories, the Pledge was designed to both raise awareness of best practices and facilitate their implementation.

We are also particularly proud of FERPAǀSherpa, the student privacy resources website. Whether you’re a student, parent, educator, administrator, policymaker, or higher education staffer, we hope to make the vast student privacy landscape more understandable. The explosion in state privacy laws has made this a challenge, but it’s one we embrace. My primary goal is that everything we release be useful and move the student privacy conversation forward.

What should our readers know about the current student privacy landscape?

39 states and DC have passed 125 new laws since 2013, so right now most stakeholders are focused on implementation and seeing how these laws play out on the ground. There are now 450+ resources on student privacy to help stakeholders on the issue, but few state legislatures provide funding and training to districts and state education agencies to implement student privacy best practices. We have also seen many unintended consequences play out over the past few years. It is unfortunately easy to mess this up – for example, a complete ban on selling student data can result in banning school pictures! One state’s law made parents opt into almost all data sharing and caused some schools to stop announcing football players’ names, hanging student artwork in the hallways, and even referring some students to the state scholarship fund. It’s easy to get lost in sensationalism and misunderstandings when discussing issues that affect children; our work injects nuance and informed analysis into the public debate.

What about the next 10 years? What privacy challenges can we expect to emerge in this space?

Now that most of the new student privacy laws have been in place for a couple years, we are likely to start seeing enforcement actions – which, in some states, could mean jail time. There are fewer big student privacy bills being introduced in states at this point, but we’re seeing more legislation with idiosyncratic student privacy requirements that could trip up schools or edtech companies. We also see legislation that should have privacy requirements but doesn’t – that’s a big issue with school safety legislation! We’re also likely to see a re-write of FERPA pass in Congress at some point in the next five years. Finally, as we’ve seen over the past year, the privacy conversation has now spread past education into the general news; this means edtech companies will have to attempt to reconcile the burgeoning universe of consumer privacy law with parallel developments in education. There will likely be times when legal obligations conflict, and it will be interesting to see how well legislators take the lessons learned from student privacy laws to avoid unintended consequences in general consumer privacy laws.