MIT Sloan Management Review Highlights "Big Data for All" Scholarship

In a post for the MIT Sloan Management Review, Renee Boucher Ferguson considers the privacy costs involved with the use of data analytics to transform customer service.  The capability of projects such as IBM’s Watson “to determine consumer wants and desires, even (sometimes!) before customers themselves do so” requires a new approach to privacy, and Ms. Boucher Ferguson turns to Jules Polonetsky and Omer Tene’s “Big Data for All: Privacy and User Control in the Age of Analytics” for a legal framework suited to these challenges.

PBS NewsHour: Jules Polonetsky on Benefits and Privacy Trade-Offs of 'Big Data'

Watch Era of Online Sharing Offers ‘Big Data,’ Privacy Trade-Offs on PBS. See more from PBS NewsHour.

Finding a Balance Between Privacy and Progress: Jules Polonetsky at TEDxMidAtlantic2012

Seeking Submissions for Privacy Papers for Policy Makers 2013

FPF is pleased to invite privacy scholars, professionals, and others with an interest in privacy issues to submit papers to be considered for inclusion in FPF’s annual edition of “Privacy Papers for Policy Makers.”

The purpose of Privacy Papers for Policy Makers is to present policy makers with highlights of important research and analytical work on a variety of privacy topics.  Specifically, we wish to showcase papers that analyze cutting-edge privacy issues and propose either achievable short-term solutions or new means of analysis that could lead to solutions.

Academics, privacy advocates and Chief Privacy Officers on FPF’s Advisory Board will review the submitted papers to determine which papers are best suited and most useful for policy makers in Congress, at federal agencies and for distribution to data protection authorities internationally.  Selected papers will be presented at an event with privacy leaders in the Fall, and will be included in a printed digest that will be distributed to policy makers.

Entries may link to a published paper or to a draft paper that has a publication date.  FPF will work with authors of the selected papers to develop a digest.

Our deadline for submissions is July 19, 2013.  Please include the author’s full name, phone number, current postal address, and e-mail address.

Please send submissions via e-mail to [email protected] with the subject line “Privacy Papers for Policy Makers 2013,” or send by mail to:

Future of Privacy Forum

919 18th Street NW, Suite 901

Washington, DC  20006

Click here to view prior editions of “Privacy Papers for Policy Makers.”  We look forward to your submissions.

Looking at Privacy Protections for Facial Recognition

On Sunday, Google announced that it would not allow facial recognition applications on Google Glass until “strong privacy protections” were in place. But this announcement raises the very question: what sort of privacy protections can actually be put in place for this sort of technology?

Thus far, concerns about facial recognition technology have appeared within the context of “tagging” images on Facebook or how it might be used to transform marketing, but these interactions are largely between users and service providers. Facial recognition on the scale offered by wearable technology such as Google Glass can change how we navigate the outside world. As one commenter put it, notice and consent mechanisms can protect Glass users, but they do nothing for the people a user points the device at.

Many suggestions have focused on sending signals to the outside world that Glass is at work, such as blinking lights or other audio or visual cues. This is similar to efforts such as requiring cameras to go “click” whenever a photo is taken in order to make surreptitious photography more difficult. However, these sorts of mechanisms place the responsibility on non-users to constantly be aware of their surroundings lest they be recognized without their approval.

In its report last year on best practices for facial recognition technology, the FTC specifically addressed scenarios where companies use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, pointing to mobile apps that could permit users to surreptitiously discover information about people on the street. Noting “the significant privacy and safety risks that such an app would raise,” the FTC suggested that “only consumers who have affirmatively chosen to participate in such a system should be identified.”

As a practical matter, for now, facial recognition on Glass could be tied to a user’s social network. Information that a user has access to about people out in the world would reflect information shared on that social network. Though a heads-up display could be permitted to recognize only “friends,” it seems inevitable that this technology will creep beyond this sort of artificial barrier. Drawing the line will be incredibly difficult. For example, what reason would there be to exclude professional email contacts or prominent public figures from being identified?  With some work, almost anyone who has set foot in a public space can be visually identified. Facial recognition on wearable devices simply lowers this already-diminishing bar. Empowering the general public to affirmatively choose to participate in broad-based, public facial recognition on the scale offered by wearable technologies poses a tremendous challenge to many of our traditional privacy protection tools.

Stopping the collection of this information may prove impossible. Even as Google has pledged to limit facial recognition abilities on Glass, Lambda Labs, which provides facial recognition services, has indicated that facial recognition is “a core feature” of wearable technology and that “Google will allow it or be replaced with something that does.” A comprehensive opt-out program is one potential solution, but such a system could create further privacy problems by requiring the collection of facial information in order for the application to “know” to ignore that face in the future. Another option could be for other wearable tech to send signals not to identify an individual’s face, creating a Google Glass duel of sorts.

However, the challenge of stopping or restricting facial data collection suggests that a focus on regulating potential uses could be more productive. We could attempt to draw distinctions among the purposes facial recognition is used to accomplish: is it being used to assist or augment the user’s memory? For example, using facial recognition technology to help recall a distant, long-absent relative could be distinguished from using additional data sources to learn about a stranger as you sit across from them at a table. Further, facial recognition applications could provide information based on contextual cues, such as identifying managers and staff at a restaurant while ignoring other patrons. In the end, applications will need to specifically enumerate how they will use the facial data they are collecting.

Both software developers and device manufacturers need to think creatively about how to establish guidelines around facial recognition technology. The alternative is a complete loss of anonymity in public, or a complete transformation of the public sphere into a place where individuals must cover up, lower their gazes, and avert their eyes—all actions that seem contrary to Google Glass’ effort to present individuals with new ways to experience our world.