Android M and Privacy: Giving Users Control over App Permissions

Android M promises to deliver several new user-control features built to advance transparency, choice, and predictability. The new App Permissions system allows users to grant permissions specific to each app and device feature. The granular system requires apps to request user permissions individually as features are needed, as opposed to the former all-or-nothing prompt at install. Once an app is installed, users can modify its access to device features at any time.

Android System Settings

App Permissions simplifies device-feature access while providing greater user control.

Android M’s App Permissions model creates eight controllable device-feature groups: Calendar, Camera, Contacts, Location, Microphone, Phone, SMS, and Sensors. Access to each of these features may be selectively denied at the user’s discretion throughout the lifecycle of the app. Lower-risk permissions, such as access to the alarm clock and the internet, are automatically granted to a requesting app at install. Users can still review these permissions prior to installing, but the current preview build does not display them after installation.

How to adjust your apps’ permissions.

The Android M Preview build provides users with two methods of accessing and changing permissions for the eight permission groups. First, users can review all permissions that an app has sought by selecting Settings => Apps => [The App] => Permissions. Second, users can view all apps that have sought permissions in each of the eight feature categories by selecting Settings => Apps => [3-Dot Menu] => Advanced => App permissions. Whether a user is concerned with the permissions of a specific app or with a particular device feature, the two methods give a quick and clear means to modify either. Furthermore, users will still have separate access to location requests made by apps via the Settings page.

Users can limit access to the features they want.

Google will no longer allow developers to present users with an all-or-nothing list of permissions. This addresses the problem of apps seeking permission for device features unnecessary to their operation, which forced users to accept undesired permissions in order to install an app at all. With individual-feature choices, users can opt in to only the permissions – and the associated functionality – that they desire. And the full list of permissions that each app seeks will still be available to users before downloading.

Developers are encouraged to show users the value of granting an app permissions.

Once an app is installed, users will be able to modify their permission preferences at any time. A permission may be granted initially, but failing to give users an immediate return on investment for their data will lead many to adjust their settings accordingly. At-will modification also gives users a workaround for use-specific access: for rarely needed features, users can allow access to a device feature only when they actually need it.
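Because access can be revoked at any moment, apps targeting the new model should verify a permission at the point of use rather than assume an earlier grant still holds. Below is a minimal, hedged sketch of that pattern using the Android M preview runtime-permission calls; the voice-note feature and its methods are hypothetical.

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;

public class VoiceNoteActivity extends Activity {
    private static final int REQUEST_RECORD_AUDIO = 1;

    // Check at the moment of use: the user may have revoked Microphone
    // access in Settings since the last time this feature ran.
    private void startVoiceNote() {
        if (checkSelfPermission(Manifest.permission.RECORD_AUDIO)
                == PackageManager.PERMISSION_GRANTED) {
            recordVoiceNote();  // hypothetical app-specific method
        } else {
            // Ask only when the feature is actually invoked.
            requestPermissions(new String[]{Manifest.permission.RECORD_AUDIO},
                    REQUEST_RECORD_AUDIO);
        }
    }

    private void recordVoiceNote() { /* hypothetical recording logic */ }
}
```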

Developers are rewarded for disclosing the purpose of the app’s feature request.

The Android M permissions model incentivizes developers to explain their reasons for requesting permission to use features. When an app seeks permission to use a feature for the first time, users are prompted with a choice to allow or deny access. If the user denies access, the developer gets an opportunity to explain the reasons for seeking it. On the second request for access, the user is additionally offered the choice “never ask again.” Because users can opt out of repeated requests, Android M keeps apps from irritating users into submission. Developers must convince users of the permission’s necessity or lose access to the feature.
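As a rough illustration of this flow under the Android M preview APIs, the sketch below checks for a permission, shows a brief explanation when the system indicates a rationale is warranted, and then asks again. The contacts-import feature and its helper methods are hypothetical, and a production app would present the rationale in proper UI rather than a toast.

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.widget.Toast;

public class ContactsImportActivity extends Activity {
    private static final int REQUEST_READ_CONTACTS = 42;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        askForContactsAccess();
    }

    private void askForContactsAccess() {
        if (checkSelfPermission(Manifest.permission.READ_CONTACTS)
                == PackageManager.PERMISSION_GRANTED) {
            importContacts();  // hypothetical app-specific method
            return;
        }
        if (shouldShowRequestPermissionRationale(Manifest.permission.READ_CONTACTS)) {
            // The user denied the request once: this is the developer's chance
            // to explain why the permission is needed before asking again.
            Toast.makeText(this,
                    "Contacts access lets you invite friends directly.",
                    Toast.LENGTH_LONG).show();
        }
        requestPermissions(new String[]{Manifest.permission.READ_CONTACTS},
                REQUEST_READ_CONTACTS);
    }

    @Override
    public void onRequestPermissionsResult(int requestCode,
            String[] permissions, int[] grantResults) {
        if (requestCode == REQUEST_READ_CONTACTS
                && grantResults.length > 0
                && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            importContacts();
        }
        // If the user chose "never ask again," later requestPermissions()
        // calls are denied immediately without showing a dialog.
    }

    private void importContacts() { /* hypothetical app-specific work */ }
}
```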

Android Permissions Never Ask Again

Android M respects user decisions by making it harder for apps to hassle users into changing their settings.

Once the user blocks future requests, the app is prohibited from linking the user directly to its permission settings. While this may make adjusting the permission settings slightly harder, Google took this affirmative step to keep apps from repeatedly questioning the user’s privacy decisions.

Apps should not request permission for one-off features.

Often, apps only need to use a device feature sparingly. But the all-or-nothing model did little to deter developers from seeking unlimited access for these one-off features, and users had to weigh the disproportionate access being granted against the entire value of the app. Now, not only can users disable access when these one-off features are not in use, but Google provides a simple solution for developers seeking this type of access. Instead of requesting permission for the app to use a feature, the developer can direct the user to an app that already has permission and retrieve the needed information from there. This method gives the user peace of mind and promotes transparency and trust in the app developer, because users know that the app does not seek unlimited access to features that are rarely used.
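A hedged sketch of that delegation pattern: instead of requesting the Contacts permission, the app hands the interaction to the system contact picker via an intent and receives back only the single contact the user chose. The invite feature and request code here are illustrative.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.provider.ContactsContract;

public class InviteActivity extends Activity {
    private static final int PICK_CONTACT = 7;

    // Launch the system contact picker instead of requesting READ_CONTACTS.
    // The picker runs with its own permissions; this app never gains
    // blanket access to the user's contact list.
    private void pickContactToInvite() {
        Intent pick = new Intent(Intent.ACTION_PICK,
                ContactsContract.Contacts.CONTENT_URI);
        startActivityForResult(pick, PICK_CONTACT);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == PICK_CONTACT && resultCode == RESULT_OK && data != null) {
            Uri contact = data.getData();  // URI for just the chosen contact
            sendInvite(contact);           // hypothetical app-specific method
        }
    }

    private void sendInvite(Uri contact) { /* hypothetical app-specific work */ }
}
```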

Android Legacy Permissions

Users can opt out of specific permissions for older apps.

Despite early reports to the contrary, App Permissions will give users the same access and choice over device features for apps built on older Android platforms. Because these apps lack the framework to handle granular permissions, Google has chosen to return an empty data set to blocked requests. Thus, an app seeking contacts from a user who has denied this permission will simply display that the user has no contacts. While not the cleanest method for handling a denial, retroactively applying granular permissions will encourage developers to embrace the new selective-privacy system.
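To make the legacy behavior concrete, here is a hedged sketch of an ordinary pre-M contacts query. The class is illustrative and assumes a legacy app that already declares the Contacts permission in its manifest; per the behavior described above, a revoked permission simply yields zero rows rather than an exception.

```java
import android.content.ContentResolver;
import android.database.Cursor;
import android.provider.ContactsContract;
import android.util.Log;

// Ordinary pre-M contacts query. If the user has revoked Contacts access
// for this legacy app, the system returns an empty result set rather than
// throwing a SecurityException, so the app behaves as though the user
// simply has no contacts.
public final class ContactLister {
    public static int countContacts(ContentResolver resolver) {
        Cursor cursor = resolver.query(
                ContactsContract.Contacts.CONTENT_URI,
                new String[]{ContactsContract.Contacts.DISPLAY_NAME},
                null, null, null);
        if (cursor == null) {
            return 0;
        }
        try {
            int count = cursor.getCount();  // 0 when access has been revoked
            Log.d("ContactLister", "Found " + count + " contacts");
            return count;
        } finally {
            cursor.close();
        }
    }
}
```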

Summary

In line with Google’s recently announced redesign of their accounts-page privacy settings, Android M creates a simpler and more transparent interface for user control of private information. The feature-specific opt-in approach of the new App Permissions model will provide users with greater transparency, protection, and control over their personal information.

For iOS app permissions, see our post, iOS 8 and Privacy: Major New Privacy Features. Information for developers is also available at our Application Privacy hub.

Framing the "Big Data Industry"

For all its hype, discussions about Big Data often still devolve into debates about buzzwords and concepts like business intelligence, data analytics, and machine learning. Hidden in each of these terms are important privacy and ethical considerations. A recent article by Kirsten Martin in MIS Quarterly Executive attempts to bring these considerations to the surface by moving past framing Big Data as merely some business asset or computational technique. Instead, Martin suggests analyzing risks and rewards at a macro-level by looking at the entire Big Data ecosystem, which she terms the Big Data Industry (BDI).

Yes, her paper still largely focuses on the negative impacts of Big Data, but instead of a general sense of doom-and-gloom, her focus is on a systemic analysis of where the data industry faces specific challenges. Though the article is peppered with examples of privacy-invading headlines, like Target’s purported ability to predict pregnancy, her framing is particularly helpful because it largely divorces the “risks” posed by Big Data from individualized company practices, anecdotes, and hypotheticals. Instead, she describes the entire Big Data information supply chain from upstream data sources to downstream data uses. Consumer-facing firms, tracking companies, and data aggregators — or data brokers — work together to exchange information and add more value to different data sources.

Martin breaks down the different negative effects that can impact individuals at different points in the supply chain. She highlights some of the existing concerns around downstream uses of Big Data. For example, she notes that both incorrect and correct inferences about individuals could limit individuals’ opportunities, encourage consumer manipulation, and ultimately be viewed as disrespectful of individual concerns. While these sorts of Big Data harms have long been debated, Martin places them on a spectrum alongside concerns raised by upstream suppliers of data, including poor data quality, biases in the data, and privacy issues in the collection and sharing of information. Analogizing to how food providers have become responsible for everything from labor conditions to how products are farmed, she argues that Big Data Industry players, by choosing and creating supply chains, similarly become “responsible for the conduct and treatment of users throughout the chain.”

By looking at Big Data as one complete supply chain, Martin appears to believe it will be easier for members of the Big Data Industry to identify and monitor economic and ethical issues with the supply chain. Yet problems also exist across this nascent industry. Even if we can effectively understand data supply chains, Martin is perhaps more concerned with the systemic issues she sees in the BDI. Specifically, the norms and practices currently being established throughout the entire data supply chain give rise to “everyone does it” ethical questions, and the BDI, in particular, poses two pivotal ethical considerations.

First, data supply chains may create negative externalities, especially in aggregate. Air pollution, for example, can become a generalized societal problem through global warming, and the harm from actions across the manufacturing industry can be considerably greater than the pollution caused by any individual company. Martin posits the Big Data Industry presents a similar dynamic, wherein every member that captures, aggregates, or uses information creates costs to society in the form of surveillance. By contributing to a “larger system of surveillance” and by frequently remaining invisible and out-of-sight to individuals, the BDI may be generating an informational power imbalance. Perhaps because individual companies that are part of the BDI fail to see themselves as part of a larger data ecosystem, few companies have been put in a position to take account of — or even to consider — that their data practices may give rise to such a negative externality.

Second, the Big Data Industry may foster “destructive demand” for consumer-facing companies to collect and sell increasing amounts of consumer data with lower standards. According to Martin, demand can become destructive (1) when a primary market that promises a customer-facing relationship becomes a front for a secondary market, (2) when the standards and quality of the secondary market are lower than those of the primary market, and (3) when those consumer-facing companies have limited accountability to consumers for their transactions and dealings in the secondary market. Martin sees a cautionary tale for the BDI in the recent mortgage crisis and the role that mortgage-backed securities played in warping the financial industry. She warns that problems are inevitable as the buying and selling of consumer data becomes more important than “selling an application or providing a service.” Invoking the specter of the data broker boogeyman, Martin argues that consumer-facing organizations lack accountability for their activities in the secondary data market, particularly so long as consumers remain in the dark as to what is going on behind the scenes in the greater BDI.

So how can the Big Data Industry address these concerns? She places much of her faith in the hope that organizations like the Census Bureau that have “unique influence” as well as “providers of key products within the Big Data Industry, such as Palantir, Microsoft, SAP, IBM” can help shape sustainable industry practices moving forward. These practices would embody a number of different solutions under the rubrics of data stewardship, data integrity, and data due process. Many of the proposals under the first two amount to endorsing additional transparency mechanisms. For example, publicly linking companies through a larger supply chain could create “a vested interest in ensuring others in the chain uphold data stewardship and data due process practices.”

Data due process, on the other hand, would help firms to “internalize the cost of surveillance.” Additional internal oversight and due process procedures would, according to Martin, “increase the cost of holding individualized yet comprehensive data and internalize the cost of contributing to surveillance.” As to what these mechanisms could look like, Martin points to ideas like consumer subject review boards, an idea first popularized at a Future of Privacy Forum event two years ago and one we have continued to expand upon. The call for data integrity professionals mirrors the notion of “algorithmists” who could monitor not just the quality of upstream data sources but also downstream data uses. (As an aside, she chastises business schools, which, even as they race to train Big Data professionals, do not require business students to take courses in ethics.) Effective ethical reviews would require such professionals, who could potentially mitigate some of the risks inherent in the data supply chain.

While Martin’s proposals are not a panacea, industry and regulators alike should take her suggestions seriously. Her framing of a greater Big Data Industry provides a path forward for companies — and regulators and watchdogs — to better target their efforts to promote public trust in Big Data. She has identified places in the information supply chain where certain industry segments may need to get “more skin in the game” so to speak. And, at the very least, Martin has moved Big Data from amorphous buzzword to a full-fledged ecosystem with some shape to it.

-Joseph Jerome, Policy Counsel

FPF Welcomes Senior Fellows Evan Selinger and Danielle Citron

LEADING PRIVACY SCHOLARS JOIN FUTURE OF PRIVACY FORUM AS SENIOR FELLOWS FOR ACADEMIC YEAR

Professors Evan Selinger and Danielle Citron Bring Privacy and Ethics Expertise to Help Advance Responsible Data Practices

WASHINGTON, D.C. – Thursday, June 18, 2015 – Today, the Future of Privacy Forum (FPF) announced that two academic leaders have joined the group’s ranks for the 2015-2016 academic year.

Danielle Citron, the Lois K. Macht Research Professor and Professor of Law at the University of Maryland’s Francis King Carey School of Law, and Evan Selinger, Professor of Philosophy at the Rochester Institute of Technology (RIT) and Fellow at the Institute for Ethics and Emerging Technologies, will contribute to FPF’s scholarship and policy leadership on a number of core privacy issues. The two academics will be spending their university sabbatical years working with FPF staff, junior fellows and Advisory Board members.

Citron’s role at FPF will focus on exploring the crucial role that state Attorneys General play in privacy policy development through the enforcement of state and federal laws and other strategies.

Selinger will help FPF and its members understand the critical role that ethics plays in a number of privacy initiatives, as well as explore the impact of “social listening,” in which organizations in the public and private sectors analyze and use social media data for a variety of purposes.

“As the use and collection of data creates benefits and challenges across every sector of society, the critical and creative thinking of top scholars is essential to charting the path forward,” said Jules Polonetsky, Co-Chair and Executive Director, FPF. “We are all thrilled to have the academic caliber and high reputation of leaders such as Evan and Danielle as part of our team, and believe their views and voices will add extremely valuable dimensions to our mission and future success.”

Citron’s work has focused on information privacy, cyber law, automated systems, and privacy in civil rights. She is the author of Hate Crimes in Cyberspace, published by Harvard University Press. Citron’s law review articles have appeared in many of the nation’s leading law journals, and her other writing has been featured in numerous leading media publications, such as TIME, The Atlantic, Forbes, and The New York Times. She also serves on a range of academic, civic, and advocacy boards and task forces. Earlier in her career, she was an Adjunct Associate Professor at Fordham University School of Law – her alma mater; a law clerk at the U.S. District Court for the Southern District of New York; and an associate at the law firm Willkie Farr & Gallagher.

Selinger, in addition to his current position at RIT, is affiliated with the school’s Center for Media, Arts, Games, Interaction and Creativity (MAGIC).  His work has included expansive research and insights addressing ethics in technology, science, sustainability, and the law. His service includes a large number of academic appointments as well as visiting affiliations in the U.S. and abroad. Selinger is a prolific academic author, editor, and reviewer, and has written extensively for a variety of newspapers, magazines, and blogs, including Wired, Slate, The Wall Street Journal, The Atlantic, Forbes, Christian Science Monitor, and The Huffington Post. Selinger has a Ph.D., M.A., and B.A. in Philosophy.

About Future of Privacy Forum

The Future of Privacy Forum (FPF) is a Washington, DC-based think tank that seeks to advance responsible data practices. The forum is led by Internet privacy experts Jules Polonetsky and Christopher Wolf and includes an advisory board comprised of leading figures from industry, academia, law, and advocacy groups. For more information, visit fpf.org.

Media Contact

Nicholas Graham, for Future of Privacy Forum

[email protected]

571-291-2967

Top Carnegie Mellon privacy researchers preview new work


On July 9, 2015, FPF will host an in-person discussion with privacy researchers from Carnegie Mellon University about some of their current privacy projects. Topics to be presented and discussed include:


Carnegie Mellon University researchers presenting will include:

Lujo Bauer, Associate Research Professor, CyLab and ECE. Professor Bauer teaches classes on Secure Software Systems and Information Security and Privacy. His research focuses on different aspects of computer security, particularly on building usable access-control systems with sound theoretical underpinnings and on narrowing the gap between a formal model and a usable system.

Travis Breaux, Assistant Professor of Computer Science. Professor Breaux’s research addresses how to ensure that information systems comply with policies, law, and social norms. To improve software quality and reliability, he tackles the challenges of aligning regulations and policies with software specifications. Professor Breaux has also taught several courses, including Software Engineering and Engineering Privacy.

Lorrie Cranor, Professor, Computer Science and Engineering & Public Policy; Director, CyLab Usable Privacy and Security Laboratory; and Co-director, MSIT-Privacy Engineering master’s program. Professor Cranor focuses her research on usable privacy and security, technology, and public policy. She has also authored several books and many publications.

Jason Hong, Associate Professor, School of Computer Science, Human Computer Interaction Institute. Professor Hong’s research lies at the intersection of human-computer interaction, security and privacy, and systems. More specifically, he studies how rich sensor data can be used to improve lives, how to improve everyday privacy in smart environments, and how crowdsourcing can be used to improve privacy and security. His most recent paper addresses The Role of Social Influence in Security Feature Adoption.

Norman Sadeh, Professor in the School of Computer Science. He is director of CMU’s Mobile Commerce Laboratory and its e-Supply Chain Management Laboratory, co-founder of the School’s PhD Program in Computation, Organizations and Society, and co-director of the MSIT Program in Privacy Engineering. He also co-founded and directs the MBA track in Technology Leadership launched jointly by the Tepper School of Business and the School of Computer Science in 2005. Over the past dozen years, Norman’s primary research has focused on mobile and pervasive computing, cybersecurity, online privacy, user-oriented machine learning, and semantic web technologies, with a particular emphasis on mobile and social networking.

The conversation will run from 9:00 am to 10:30 am (with a light breakfast beginning at 8:30 am) at Comcast NBC Universal, 300 New Jersey Avenue NW, 7th floor, Washington, D.C. 20001.

You can find the invitation and registration details here. Seats are limited, so do not miss out!

Student Privacy Pledge Hits 150!

Ed Tech Student Privacy Pledge reaches 150 signatories

The Future of Privacy Forum (FPF) and the Software & Information Industry Association (SIIA) are pleased to announce that the Student Privacy Pledge has reached the milestone of 150 education technology company signatories. The Pledge represents industry’s public commitment to the responsible handling of student data and provides accountability for signatory school service providers. The result is a bolstering of the public trust necessary for continued technology access for school operations and student learning – technology that is critical to the nation’s continued educational and economic competitiveness.

The K-12 Student Privacy Pledge was introduced by FPF and SIIA in October 2014 with 14 original signatories and took effect in January 2015 as a legally enforceable agreement for companies that provide services to schools. The twelve specific commitments in the Pledge detail ongoing industry practices that both meet the demands of families and schools and track key federal and state laws. By signing the Pledge, school service providers clearly articulate their adherence to these practices to schools and parents regarding the collection, use, maintenance, and retention of student data.

Signatories of the Student Privacy Pledge promise to:

The Pledge adds to an existing framework of student data protections, which also includes existing laws, contracts, and company privacy policies. A company’s security and other commitments made under the Student Privacy Pledge are legally enforceable under Section 5 of the Federal Trade Commission Act.

“Although many states have passed student privacy laws, the Pledge creates an instantly enforceable and nationally applicable commitment to student data privacy. The Pledge provides a baseline for communication between companies and end users that builds trust,” said Kobie Pruitt, education policy manager, FPF.

“The Pledge demonstrates the industry’s commitment to these strict legally enforceable best practices for data security and the safeguarding of student privacy,” said Mark Schneiderman, senior director of education policy, SIIA.

FPF and SIIA are proud to facilitate the efforts of education technology companies to take leadership in protecting student privacy by signing the Student Privacy Pledge. We look forward to a continual increase in the number of companies joining this effort and agreeing to be held publicly accountable to the student information safeguards embodied in the Pledge.

NOTE: This blog was cross-posted at both http://fpf.org/category/blog/ and http://blog.siia.net/.

Welcome Kobie Pruitt

FPF would like to welcome the newest member of our team, Kobie Pruitt. Kobie is our new Education Policy Manager and will be taking over the management of FPF’s Education and Student Privacy programs and events. Kobie comes to us from the Hill, where he was a legislative assistant to Congresswoman Kaptur (OH-09) and included education in his portfolio of responsibilities. He is also an adjunct professor for both undergraduate and MBA classes at the University of the Potomac, and has previously taught as a guest teacher on government, politics, and social studies for 9-12 graders in New York state. He earned his J.D. from the University of Maryland School of Law.

We are delighted to have Kobie on the FPF team and know he will be an asset to our work on student data privacy issues. Among other education projects, Kobie will take over the maintenance and promotion of the Student Privacy Pledge (www.studentprivacypledge.org), work on evaluation and publication of the results of our recent Parent Survey on Technology and Student Data in Public Schools, and continue distribution and awareness of our just-published Parent Guide to Student Privacy Rights. Further, Kobie will lead the FPF programs supported by the Bill & Melinda Gates Foundation, including scholarship outreach to high school students for a student-privacy-themed video competition and the National Symposium for Student Privacy Issues scheduled for this September.

Kobie can be reached at [email protected] for further information about FPF’s Education and Student Privacy related projects and events.

Customer Privacy and the National Labor Relations Act

Last month, an Administrative Law Judge for the National Labor Relations Board ruled that Macy’s employee handbook contained overly broad confidential information policies. The decision continues efforts by the NLRB to police employer confidentiality policies, but it also demonstrates how industry efforts to protect privacy can inadvertently run afoul of Section 7 of the NLRA, which gives employees various rights to organize and engage in collective bargaining.

In this instance, Macy’s employee handbook repeatedly emphasized the importance of taking the privacy of fellow employees and customers seriously. It explained that the company holds “personal data of its present and former associates, customers and vendors,” and that the company is committed to using this information in a way that respects privacy. As a result, Macy’s required all employees with access to this personal data to protect it against unauthorized use, disclosure, or access. The ALJ found that the handbook’s repeated invocations about protecting information would make employees reasonably believe they were restricted from discussing certain terms and conditions of their employment, which runs contrary to the NLRA.

Yet whenever personal information is shared, it raises privacy issues. Cases like this pose a clash between competing values, and moving forward, it will be important for the NLRB to recognize that protecting employee organizing rights may come at a cost to their privacy expectations, and vice versa. As this case demonstrates, employee rights can also challenge the privacy rights of customers and vendors.

What makes this decision so problematic from a privacy perspective is that the ALJ also found Macy’s “restrictions on the use of information regarding customers and vendors” violated the NLRA. In certain circumstances, that may be true, but here, the ALJ largely relied on a single NLRB decision, Trinity Protection Services, Inc., 357 NLRB No. 117 (2011), which found that “employees’ concerted communications regarding matters affecting their employment with their employer’s customers or with other third parties, such as governmental agencies, are protected by Section 7 and, with some exceptions not applicable here, cannot lawfully be banned.”

Trinity was a very different case. It involved a dispute between a security contractor and newly hired guards. During their training, the guards believed Trinity Protection Services used inadequate security measures and threatened to notify the Department of Homeland Security, which had contracted Trinity, about the inadequacies. Trinity told the guards that any information they received was confidential and that any disclosure would violate a non-disclosure agreement. This sort of policy was found unlawful because it “inhibit[ed] employees from bringing work-related complaints to, and seeking redress from, entities other than the Respondent.”

And this case could also be distinguished on the grounds that it extends Trinity’s protection of communications to a whole host of private customer information. The ALJ does concede that his concerns about Macy’s restrictions on use of customer information should apply “to a lesser extent,” but this case demonstrates how legitimate efforts to protect privacy can push up against labor laws.

As David Katz explains, the decision should “concern employers, as the provisions attacked here are likely quite similar to other employers’ policies. Employers should be mindful of the Board’s recent crusade against overbroad handbook provisions, and should review their policies—including those not typically associated with NLRB scrutiny (such as confidentiality and privacy policies)—with an eye towards the Board’s recent rulings.”

Beyond the burdens this decision could place on companies, it also highlights the need for the NLRB and its ALJs to be aware of the privacy implications of their decisions. Certain types of information sharing may indeed be necessary to protect employee rights, but let’s be clear: companies need to stress to their employees the importance of taking privacy seriously. With new data breaches making headlines daily, it’s important to remember that human error can be the biggest threat to data privacy and security. For that matter, employee snooping can also present serious privacy concerns.

The NLRB’s priorities are no doubt to protect workers, but it is important to consider how privacy of customers, and indeed other employees, can be impacted by the disclosure and sharing of information. We would encourage the NLRB as it attempts to rein in employer policies to consult with privacy experts and understand more deeply the need for clear employee rules and guidelines around personal information.

Joseph Jerome, Policy Counsel

User Reputation: Building Trust and Addressing Privacy Issues in the Sharing Economy

Ahead of the Federal Trade Commission’s June 9 workshop on the sharing economy, “User Reputation: Building Trust and Addressing Privacy Issues in the Sharing Economy” by Joseph Jerome, Benedicte Dambrine, and Ben Ambrose discusses the reputational, trust and privacy challenges users and providers face concerning the management and accuracy of shared information.

Sharing economy services – such as Uber, Airbnb, Etsy, and TaskRabbit, among others – rely heavily on online and mobile platforms for transactions and the peer-to-peer sharing of critical, ‘reputational’ information. This includes data regarding recommendations, ratings, profile access, review challenges, account deletion, and more. The FPF survey provides an overview of how reputation-building and trust are frequently essential assets to a successful peer-to-peer exchange, and how ratings, peer reviews, and user comments serve as core functions of such services. It examines the commonly used mechanisms to build reputation, as well as issues surrounding identity and anonymity, and the role of social network integration.

The report surveys a number of market leaders in the sharing economy sectors of transportation (Lyft, Sidecar, Uber), hospitality (Airbnb, HomeAway, Couchsurfing), retail goods (Etsy, NeighborGoods, eBay), and general services (TaskRabbit, Instacart, Handy) to review how these platforms implement access and correction capabilities. Brands were surveyed to see how they implement access rights, correction and response mechanisms, and whether they provide clear guidance for deleting account information.

User Reputation is available to read here.

Future of Privacy Forum Releases New Survey on Privacy and Trust Issues in the "Sharing Economy"

FUTURE OF PRIVACY FORUM RELEASES NEW SURVEY ON PRIVACY AND TRUST ISSUES IN THE “SHARING ECONOMY”

Whitepaper Examines Benefits and Challenges of Reputation Management in Peer-to-Peer Services and Provides an Overview of Market Leaders in Key Sharing Economy Sectors

WASHINGTON, D.C. – Monday, June 8, 2015 – As peer-to-peer services comprising the “Sharing Economy” continue to gain wide acceptance with U.S. consumers, the Future of Privacy Forum (FPF) today released a timely whitepaper that focuses on the reputational, trust and privacy challenges users and providers face concerning the management and accuracy of shared information.

Released in advance of a June 9 workshop focused on the sharing economy, the FPF paper, titled “User Reputation: Building Trust and Addressing Privacy Issues in the Sharing Economy,” comes at a time when the sharing economy, especially in the hospitality and transportation sectors, is growing in popularity at breakneck speed. The total value of global sharing economy transactions was estimated at $26 billion in 2013, and the sector is projected to generate as much as $110 billion in coming years.

At the same time, consumers are recognizing the benefits of shared services: a recent study notes 86 percent of adults in the U.S. believe such services make life more affordable, while 83 percent believe they make life more convenient and efficient.

Sharing economy services – such as Uber, Airbnb, Etsy, and TaskRabbit, among others – rely heavily on online and mobile platforms for transactions and the peer-to-peer sharing of critical, ‘reputational’ information. This includes data regarding recommendations, ratings, profile access, review challenges, account deletion, and more. How access to and control of this data is managed by sharing economy brands and services is essential to building user trust, and has important privacy implications as well.

“Uber’s new option that provides riders with access to their ratings is an important step forward,” said Jules Polonetsky, FPF’s Executive Director. “If consumer access to services is dependent on ratings and reviews, consumers need transparency into their scores and into how these systems work.”

The FPF survey provides an overview of how reputation-building and trust are frequently essential assets to a successful peer-to-peer exchange, and how ratings, peer reviews, and user comments serve as core functions of such services. It examines the commonly used mechanisms to build reputation, as well as issues surrounding identity and anonymity, and the role of social network integration.

The highlight of the group’s study is a section entitled, “Maintaining Reputation: Privacy Challenges of Rating Systems.” How sharing economy and peer-to-peer platforms are implementing Fair Information Practices concerning user-generated data, especially access and correction capabilities for users and providers, has tangible privacy implications.

As a result, the FPF paper surveyed a number of market leaders in the sharing economy sectors of transportation (Lyft, Sidecar, Uber), hospitality (Airbnb, HomeAway, Couchsurfing), retail goods (Etsy, NeighborGoods, eBay), and general services (TaskRabbit, Instacart, Handy) to review how these platforms implement access and correction capabilities. Brands were surveyed to see how they implement access rights, correction and response mechanisms, and whether they provide clear guidance for deleting account information.

The report concludes with a call to action for many companies in the sharing economy marketplace, encouraging them to strive to provide more guidance to users about reputation and to be more transparent about access and control over information. Such moves will not only amplify consumer trust, but also help ensure fair treatment of consumers.

This is especially important for the future growth of the sharing economy sector, as the report notes:

“While platforms need to have good and reliable reputational systems in place in order to create trust between users, they will also have to ensure their users trust them. It is very likely that…users will rely on the platform’s reputation, in addition to user reputation alone.”

The survey was authored by FPF staffers Joseph Jerome, Benedicte Dambrine, and Ben Ambrose.

About Future of Privacy Forum

The Future of Privacy Forum (FPF) is a Washington, DC-based think tank that seeks to advance responsible data practices. The forum is led by Internet privacy experts Jules Polonetsky and Christopher Wolf and includes an advisory board comprised of leading figures from industry, academia, law, and advocacy groups. For more information, visit fpf.org.

Media Contact

Nicholas Graham, for Future of Privacy Forum

[email protected]

571-291-2967

Balancing Free Expression and Social Media Monitoring

Last week, central Florida’s largest school district announced that it would begin monitoring a number of social media sites for posts “that may impact students and staff.” As more and more school districts look to social media to monitor and track students, big privacy questions arise. Certainly, many schools have reacted to school shootings, student suicides, and bullying concerns by connecting with social-media-monitoring companies to help them identify problems for which school personnel, parents, or even law enforcement may need to take action. In fact, when tragedies have taken place, the first reaction has often been to scour social media to see whether there were clues that should have led to action or intervention.

Parents appear to have largely accepted this general practice, but the limits of schools’ tracking and monitoring of their students remain unclear. As Jules and I explored in an op-ed for Education Week last month, we don’t yet know how to strike the right balance between monitoring and tracking students and allowing individuals to vent, blow off steam, and otherwise freely express themselves online without feeling surveilled.

While this story deals largely with monitoring students, the public at large has contradictory and conflicting views about social listening. According to a 2013 Netbase study, 51% of consumers want to be able to talk about companies without those companies listening, but 58% want companies to respond to complaints and 64% want companies to respond when spoken to directly. To avoid the notorious “creepy” label, schools — and indeed any organization — ought to be open and transparent about why they’re listening and what they’re listening for.

We hope to explore this issue further, and we welcome any thoughts and feedback from anyone out there . . . listening.

-Joseph Jerome, Policy Counsel