Analysis of personal data can be used to improve services, advance research, and combat discrimination. However, such analysis can also raise valid concerns about differential treatment of individuals or harmful impacts on vulnerable communities. These concerns can be amplified when automated decision-making uses sensitive data (such as race, gender, or familial status), impacts protected classes, or affects individuals’ eligibility for housing, employment, or other core services. When seeking to identify harms, it is important to appreciate the context of interactions between individuals, companies, and governments—including the benefits provided by automated decision-making frameworks and the fallibility of human decision-making.
Today, the Future of Privacy Forum (FPF) and the Ohio State University’s Program on Data and Governance are holding a discussion of ethics, privacy, and practical research reviews in corporate settings. This timely event, which follows the White House’s call to develop strong data ethics frameworks, convenes corporate and academic leaders to discuss how to integrate ethical and privacy considerations into innovative data projects and research.
Please join the Future of Privacy Forum (FPF) and the Ohio State University’s Program on Data and Governance in Washington, DC, on Tuesday, June 14, 2016, for a discussion of ethics, privacy and practical research reviews in corporate settings.
As the volume of consumer data grows, an increasing number of decisions previously made by humans are now made by algorithms. Many thought leaders have called for algorithmic transparency to ensure that these decisions aren’t leading to unfair or discriminatory outcomes, but algorithmic transparency is tricky to implement. Last December, FTC Commissioner Julie Brill acknowledged […]
Few would deny that technology and social media are changing the way we interact. People today can stay in touch with friends on Facebook, share vacation photos on Instagram, follow trends on Twitter, grow their networks on LinkedIn, and explore communities on Reddit. And people are staying connected wherever they go. The Pew Research Center […]
For all the hype surrounding Big Data, discussions about it often still devolve into debates over buzzwords and concepts like business intelligence, data analytics, and machine learning. Hidden in each of these terms are important privacy and ethical considerations. A recent article by Kirsten Martin in MIS Quarterly Executive attempts to bring these considerations to the surface by moving past framing […]
Today, FPF filed an additional set of comments in the wake of the FTC’s fall workshop, Big Data: A Tool for Inclusion or Exclusion? The comments focus on some of the challenges around defining what exactly “Big Data” is, and the increasing need for a firmer ethical framework for conversations about data use. […]
Yesterday, as he accepted the IAPP Privacy Vanguard award, Intel’s David Hoffman made a “data innovation pledge” that he would work only to promote ethical and innovative uses of data. As someone who only relatively recently entered the privacy world by diving headfirst into the sea of challenges surrounding big data, I think an affirmative […]