The Top 10 (& Federal Actions): Student Privacy News (November 2017-February 2018)

The Future of Privacy Forum tracks student privacy news very closely and shares relevant news stories with our newsletter subscribers. Approximately every month, we post “The Top 10,” a blog post highlighting our top student privacy stories. This blog is cross-posted at studentprivacycompass.org.

The Top 10: Federal Edition

We had so many major student privacy news stories over the past two months that we decided to split our usual Top 10 into federal and non-federal editions. Scroll down to see the non-federal Top 10 stories.

  1. President Trump’s budget proposes $3 million in funding for USED’s Privacy Technical Assistance Center (PTAC), “which serves as a valuable resource center to State and local educational agencies, the postsecondary community, and other parties engaged in building and using education data systems on issues related to privacy, security, and confidentiality of student records” (page 49). The budget also proposes to eliminate federal funding for Statewide Longitudinal Data Systems and Regional Educational Laboratories (page 51). The USED budget request documentation also notes that “One of the significant challenges that ED will work to address is [ensuring] that the large number of ED applications & IT systems currently externally hosted at contractor managed sites across the country are fully compliant w/ Federal & ED cybersecurity and privacy policy and guidelines” (h/t Doug Levin).
  2. In January, the Department of Education (USED) released its first FERPA complaint findings letter to mention an ed tech company. FPF released a blog analyzing the decision, and Jim Siegl posted some very interesting thoughts about the decision at his blog. The two key takeaways:
    • Requiring a parent/eligible student to accept a third party’s Terms of Use when they sign up for a school service constitutes a forced waiver of rights under FERPA if those Terms of Use are not compliant with FERPA’s school official requirements.
    • When a school requires that an ed tech service be used as a condition of enrollment, that service must either comply with FERPA’s school official exception requirements or parents must be given the right to opt out of its use.
  3. In addition to the Agora letter, USED also released a findings letter that clarifies how video recordings of multiple students should be treated under FERPA (see a write-up of the findings letter from Pullman & Comley).
  4. The FTC and USED held a “Student Privacy and Ed Tech” workshop on December 1st that examined how the FTC’s “Rule implementing the Children’s Online Privacy Protection Act (COPPA) applies to schools and intersects with the Family Educational Rights and Privacy Act (FERPA),” administered by USED. You can watch a recording of the workshop panels (at the bottom under “Video”), read write-ups via EdWeek and THE Journal, or read the comments filed before the workshop.
  5. On January 30th, the House Education and the Workforce Committee held a hearing, “Protecting Privacy, Promoting Policy: Evidence-Based Policymaking and the Future of Education” (watch the recording or read the EdWeek summary). The hearing was very similar to previous hearings in both 2016 and 2017 about protecting student privacy while allowing for ed research. It “remains up for debate whether increased protections for student data should come from updating the law, bolstering parental consent, improving technology or shaking up education policy research,” via Politico. Meanwhile, EdWeek reports that “The Tricky Dance of Researchers and Educators Gets Even More Complex,” in part due to privacy concerns.
  6. After the cyber threats made to schools last fall, the “FBI is asking school districts to make cyber security a priority” and the FBI and USED issued a “warning about cyber-extortion schemes focused on public schools.” This notice shared that the DarkOverlord hackers were responsible for “‘at least 69 intrusions into schools and other businesses, the attempted sale of over 100 million records containing personally identifiable information (PII), and the release of over 200,000 records including the PII of over 7,000 students due to nonpayment of ransoms.’” Meanwhile, “Cyberattacks Increasingly Target Student Data” (GovTech) and EdWeek reports that “Schools Struggle to Keep Pace With Hackings, Other Cyber Threats.” IT leaders are “likely underestimating the dangers they face;” only 15% of school technology leaders say “they have implemented a cybersecurity plan in their own district” according to a recent CoSN survey. FPF published an op-ed in The Hill with Bill Fitzgerald of Common Sense Media noting that “Securing student data is a challenge that requires sufficient funding,” not more legislation.
  7. The USED Inspector General is currently reviewing “whether [the] Office of the Chief Privacy Officer effectively oversees and enforces compliance with selected provisions of [FERPA] and [PPRA]” (page 12).
  8. The “Student Right to Know Before You Go Act” was introduced in the Senate. The bill “makes data available to prospective college students about schools’ graduation rates, debt levels, how much graduates can expect to earn and other critical education and workforce-related measures of success,” while protecting privacy by “requiring the use of privacy-enhancing technologies that encrypt and protect the data that are used to produce this consumer information for students and families” – namely secure multi-party computation (see the sketch after this list). The ACLU seems to support the bill.
  9. The legislation to reauthorize the Higher Education Act, known as the PROSPER Act, has been introduced. It maintains the ban on a federal student unit record, but does allow for a feasibility study to investigate whether to expand the National Student Clearinghouse to set up a third-party data system to analyze student outcomes (via Inside Higher Ed). PoliticoPro reports: “Push for better student data runs into privacy worries in higher education bill.”
  10. “The Education Department isn’t doing enough to protect student data collected as part of financial aid programs, according to an audit from the Government Accountability Office,” via Politico.
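As an illustrative aside on item 8: below is a minimal Python sketch of the additive secret-sharing idea that underlies secure multi-party computation. Each record is split into random shares held by different parties, so no single party sees any individual value, yet the parties can jointly compute an aggregate. The earnings figures and party count are hypothetical, and real MPC protocols are far more involved than this.

```python
# A minimal sketch of additive secret sharing, the core idea behind the
# secure multi-party computation mentioned in item 8. Each record is split
# into random shares held by different parties; any share alone is random
# noise, yet the parties can jointly compute an aggregate.
import random

MODULUS = 2**61 - 1  # all arithmetic is done modulo a large prime

def split_into_shares(value, num_parties):
    """Split one record into num_parties random shares that sum to value."""
    shares = [random.randrange(MODULUS) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

earnings = [48000, 52000, 61000, 39000]  # hypothetical per-graduate earnings
num_parties = 3

# Distribute one share of every record to each party.
party_inputs = [[] for _ in range(num_parties)]
for value in earnings:
    for party, share in zip(party_inputs, split_into_shares(value, num_parties)):
        party.append(share)

# Each party sums only its own shares; combining the subtotals reveals the
# aggregate (and nothing else about any individual record).
subtotals = [sum(shares) % MODULUS for shares in party_inputs]
average = (sum(subtotals) % MODULUS) / len(earnings)
print(average)  # -> 50000.0
```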

 

The Top 10

  1. Former tech company employees are “forming a coalition to fight what they built.” Among other actions (including a livestreamed event on February 7th), Common Sense Media and the coalition plan “an anti-tech addiction lobbying effort and an ad campaign at [the] 55,000 public schools in the United States [that currently use the CSM Digital Citizenship Curriculum]… It will be aimed at educating students, parents and teachers about the dangers of technology, including the depression that can come from heavy use of social media” (via NYTimes). The focus on children in schools is because they “had little agency over whether they opted into or out of a technology platform because of pressure from both peers and educators handing out assignments.” In the meantime, there were several articles over the past two months on technology and children: The Wall Street Journal reported on “New guidance on children and technology makes the distinction between passive exposure and active play and learning with screens;” The Atlantic asked “Should Children Form Emotional Bonds With Robots?”; and a couple Apple investors “called on the tech giant to take more steps to curb the ill effects of smartphones.” Axios reports that “The internet was created by adults for adults, but it has seen a sharp uptick in kid users over the past 10 years… Silicon Valley has bet its future on younger users, but has come under fire recently for building products that critics say aren’t safe for children.”
  2. Awesome new resource: the Utah State Board of Education has released video that can be used to train administrators and educators on FERPA exceptions.
  3. Jim Siegl wrote a great blog about “Privacy Differences between Consumer Gmail and G Suite for Education,” and just released a creative commons document that districts can use to inform their community about the district’s G Suite for Education choices and provide information on the difference between Google’s consumer products and G Suite. In related news, some new features for G Suite for Education will “come with a fee.”
  4. Amazon Web Services released a whitepaper on “FERPA Compliance on AWS.”
  5. More news on privacy and personalized learning: EdWeek reports that “States [are taking] Steps to Fuel Personalized Learning,” and PoliticoPro notes that “A dozen states try out ‘personalized learning,’ but questions persist.” Inside Philanthropy asks “Is Personalized Learning the Next Big Thing in K-12 Philanthropy?” and Getting Smart discusses how “Chan Zuckerberg Backs Personalized Learning R&D Agenda” (read the blog on their approach to personalized learning). In the meantime, we have continued to see pushback against personalized learning over the past two months: Diane Ravitch, in an op-ed about “increasing misuse of technology in schools,” writes that “‘Personalized learning,’ …[is a] euphemism for computer adaptive instruction…parents want their children taught by a human being, not a computer.” The 74 shares “5 Ways to Talk (and Think) About Personalized Learning,” EdWeek reports that “Two Districts Roll Back Summit Personalized Learning Platform,” and A Long View on Education discusses “The Propaganda behind Personalised Learning.” But The Atlantic reports on “The Futile Resistance Against Classroom Tech,” noting that ed tech “is here to stay. It’s up to teachers, then, to build networks of learning, solidarity, mutual respect, and even trust.”
  6. Doug Levin has published the full results of his look at the security and privacy of state and (some) district public-facing websites. The primary reported finding is that “State and District Education Websites Fail to Disclose Ad Trackers” (see Doug’s Twitter thread of findings here and coverage from EdSurge). Another security researcher reported their findings that “Website operators are in the dark about privacy violations by third-party scripts,” including at least one ed tech provider described in the article.
  7. Common Sense Media announced that it will be releasing simpler “ratings” for ed tech apps on privacy in early 2018, based in large part on its previous evaluations. They explain more details about the new “roll-up score” apps will receive here. In the meantime, they released a blog on how “it’s often necessary to look in multiple places to find” the privacy policies needed to evaluate software.
  8. A very interesting lawsuit: an autistic, nonverbal student wants “his school to let him wear a device that records his day, but the district says that would violate the privacy of other students.” In related news, a Virginia mom was “charged for putting recording device in daughter’s backpack to catch bullies.”
  9. Virginia has introduced a few bills after a progressive political group “filed FOIA requests [last fall] seeking the publicly available student directories to get student cell phone numbers at every one of Virginia’s 39 public colleges.” The first bill would simply “remove student cell phone numbers and email addresses from publicly available directories,” but was “scrapped” by the House committee (however, its counterpart in the Senate did pass). The second bill, which has passed the House, is much broader, and could cause unintended consequences in schools: it would change how FERPA’s directory information exception currently works to require an opt-in from students for sharing of their directory information instead of an opt-out (this article provides a good overview of the issue and bills). Virginia Lawyers Weekly advocates for a “scalpel” (the first bill) instead of a “sledgehammer” (the second bill).
  10. Audrey Watters, the “Cassandra of Ed Tech,” has published her end-of-year top ed-tech trends articles. I highly recommend reading them; while you might not agree with her perspective, her trend articles provide a fantastic overview of the major student privacy and ed tech stories from 2017. The most relevant to student privacy: “The Weaponization of Education Data,” “Robots Are Coming For Your Children,” and “Education Technology and the New Behaviorism.”

Image: “School days” by Rachel is licensed under CC BY-NC-SA 2.0.

FPF Welcomes New Senior Fellow

FPF is pleased to welcome Stanley W. Crosley as a senior fellow. Stanley has over 20 years of applied experience in law, data governance, and data strategy across a broad swath of the economy, including inside multinational corporations, academia, large law firm and boutique practices, not-for-profit advocacy organizations, and governmental agencies. He is Co-Director of the Indiana University Center for Law, Ethics, and Applied Research in Health Information (CLEAR), Counsel to the law firm of Drinker Biddle & Reath in Washington, DC, and Principal of Crosley Law Offices, LLC. Stan is also a Senior Strategist at the Information Accountability Foundation, and at the Future of Privacy Forum he leads health policy efforts. He is an Adjunct Professor of Law at the Maurer School of Law, Indiana University, where he lectures on global privacy and data protection, health privacy, and data strategy.

Stan served as the 2014-2016 Co-Chair of the federal Privacy and Security Workgroup for the Health and Human Services Office of the National Coordinator for Health Information Technology (HHS/ONCHIT), providing guidance on health privacy and data security to HHS and the White House. He also serves on the board of The Privacy Projects and as Co-Chair of the Research Committee for C-Change. Stan is the former Chief Privacy Officer of Eli Lilly and Company and a former board member of the International Association of Privacy Professionals (IAPP), the International Pharmaceutical Privacy Consortium, and Indiana Health Informatics Technology, Inc. He served on the IOM HIPAA and Research Workgroup that delivered the first report on the research implications of HIPAA, and on the Brookings Institution workgroup providing guidance on the FDA’s Sentinel project.

In his global legal practice, and in his activity as a speaker and lecturer, Stan provides thought leadership and practical guidance on topics ranging from the full spectrum of health data issues, applied and innovative technology, and privacy to data strategy and applied data ethics. He works with a network of regulators from the US, the EU, and every region of the world, as well as peers in multinational companies and start-ups in the technology, biopharma, medical device, biotech, communications, social media, and health provider industries.

Please join us in welcoming Stanley to the team!

Consumer Reports Publishes Initial Findings for Privacy and Security of Smart TVs

Today, Consumer Reports released their initial findings on the privacy and security aspects of Smart TVs. Applying their Digital Standard (developed with Ranking Digital Rights and other partner organizations), Consumer Reports identified a range of important privacy aspects and potential security vulnerabilities in Smart TVs from five leading manufacturers (Sony, Samsung, LG, TCL, and Vizio).

As we noted last week in our discussion of Smart TVs, it can be challenging for even well-informed buyers to locate and fully understand the data policies of their TVs. This is complicated by the fact that data from internet-connected TVs may be collected and processed by multiple entities (manufacturers, operating system providers, and third-party apps such as Netflix and Hulu). In addition, the market for TV data is still relatively nascent, although growing rapidly. As buyers become attuned to privacy and security features of all connected devices — including the broader Internet of Things (TVs, smartphones, smart homes, and connected cars) — we expect that the market for secure, privacy-conscious consumer technology will improve and grow.

In response, Roku has expressed its disagreement with Consumer Reports’ characterization of a potential security vulnerability. Meanwhile, Consumer Reports will be conducting a live Q&A Privacy Hour this evening (8-9pm ET / 5-6pm PT) about digital privacy, smart TVs and toys, and best practices.

Looking Ahead

We expect Consumer Reports to continue studying and publishing findings on the privacy and security features of connected devices.

Overall, we commend Consumer Reports on their important work. As internet-connected home devices continue to proliferate, these initial findings represent an important milestone in making privacy and security features accessible to consumers, researchers, and advocates.

FPF Welcomes New Interns

Lin-hsiu Huang

Lin is our Communication/Design intern for the Spring Semester. Lin is originally from Kaohsiung, Taiwan, and she is pursuing a dual degree in BFA Art/Design and BS Mathematics at Morehead State University. She has presented her advocacy research and productions – documentaries and music videos – at over a dozen conferences (e.g. Posters on the Hill, the Appalachian Studies Conference, the Southern Regional Honors Council Conference). Lin works with Melanie Bates, the Director of Communications, and she will be focusing on various design and re-branding tasks such as the monthly newsletter, the Annual Report, and one-pagers, as well as updating FPF’s privacy calendar and monitoring its website analytics.

Esther Lim

Esther is studying for her Master of Laws in Global Health Law at the Georgetown University Law Center, where she previously served as a Global Health student fellow and a research assistant at the O’Neill Institute for National and Global Health Law. She earned her Bachelor of Laws degree from University College London. She previously worked as a Health Policy Analyst in Singapore’s Ministry of Health, focusing on regulatory policy and health information.

Seeing the Big Picture on Smart TVs and Smart Home Tech

CES 2018 brought to light many exciting advancements in consumer technologies. Without a doubt, Smart TVs, Smart Homes, and voice assistants were dominant: LG has a TV that rolls up like a poster; Philips introduced a Google Assistant-enabled TV designed for the kitchen; and Samsung revealed its new line of refrigerators, TVs, and other home devices powered by Bixby, its intelligent voice assistant. More than ever before, companies are emphasizing “seamless connectivity” between TVs and other connected home devices. In other words, users will be able to instruct their TV to dim the lights, display footage from their home security camera, show who is standing at the front door, or even see what’s inside the fridge – all features envisioning the TV as the command center of the ideal, futuristic Smart Home.

In the midst of this ongoing explosion in “intelligent” consumer electronics, we continue to see concern about TVs that “listen,” TVs that allegedly “spy,” and more recently, mobile apps that can “hear” what might be playing on the TV or happening in users’ living rooms. It can be challenging to distinguish accurate reports from inaccurate ones (for example, we wrote in 2015 about Samsung TVs and the confusion that stemmed from a privacy policy that was misinterpreted by many journalists).[1]

Nonetheless, Smart TVs are raising serious data privacy questions that apply broadly to all Smart Home devices – for example, how long should a manufacturer be responsible for installing software updates to keep an Internet-connected TV secure? Do buyers fully understand what kinds of data their TV manufacturer is collecting, and how to control it? How much information should be presented on the box before purchase? It is critical to identify solutions that maximize consumer benefits while avoiding privacy risks and harms.

A Deep Dive into Leading Smart TVs

In order to better understand the privacy and security issues raised by Smart TVs, we recently had the opportunity to informally review the policies and user interfaces of 2017 models from three leading manufacturers: Sony (which uses the Android TV interface), LG, and Samsung.[2] We aimed to learn more about the privacy and security aspects of leading Smart TVs.

Overall, Smart TV data practices vary considerably. Consumer choices are not always easy to exercise, and there remains a great need for transparency and consensus around how TV data should be used. Advertising using Smart TV data, for example, is a nascent but rapidly growing industry, and many TV buyers are not yet aware of the extent to which their other activities (online and offline) may be synced with their TV viewing information in a way that informs and drives advertising. Security is also a critical aspect — in today’s TVs, software updates are not necessarily automatic, or guaranteed to continue, and it can be difficult for even a well-informed person to make a purchasing decision on the basis of a company’s security practices. Although Smart TVs promise great benefits to consumers, there is clearly more work to be done to build consensus around privacy and security.

 

Skip ahead to:

What Makes a TV “Smart?”

Smart TVs Vary in their Privacy Settings and Features

Advertising using Smart TV Data is a Nascent, but Growing Industry

Conclusions

 

What Makes a TV “Smart?”

Smart TVs – or, as they are often promoted, “intelligent TVs” – are TVs that connect to the Internet, allowing users to access streaming video services (such as Netflix or Hulu) as well as other online media and entertainment, such as music, on-demand video, and web browsers. Many Smart TVs have their own app stores, making them more similar to large-screen computers than traditional displays. Many are now integrating connectivity with other Smart Home technologies like lights, baby monitors, or kitchen appliances.

In addition to the wide variety of new entertainment options, a less appreciated benefit of Smart TVs is the ability to generate accurate, reliable TV viewing measurement data. Historically, TV viewing was difficult to measure, resulting in efforts by companies such as Nielsen to encourage families to voluntarily track their TV viewing habits. The inevitable result, as discussed by several speakers in the Federal Trade Commission’s recent workshop, was that only the most popular and mainstream TV viewing would typically be measured accurately.

In the last ten years, as TVs and streaming media have become more sophisticated, it has become possible to measure less popular or even obscure content. The ability to know what kinds of content people are actually interested in, even if it isn’t mainstream, has allowed for greater investment in content that previously would have been too risky – for example (as discussed by Samba TV’s Ashwin Navin at the FTC’s Smart TV workshop), Arrested Development (canceled and then re-launched by Netflix), or The Mindy Project (picked up by Hulu).

Nonetheless, the same data collection that allows for accurate TV viewing measurement often creates concerns around individual privacy. For example, individual data can be used to create detailed profiles based on viewing habits, sometimes in expected ways (e.g. Netflix suggestions), and sometimes in unexpected ways.

Smart TVs Vary in their Privacy Settings and Features

Are all Smart TVs the same with respect to their data practices? In many ways, they are not. The TVs we unboxed and set up had significant differences in privacy features, including: whether relevant policies are easily available; whether and how users are prompted to consent to data collection and uses; whether users can delete their personal data; and whether software updates are installed automatically. Notably, some TV manufacturers run software from other companies (e.g. Sony TVs that run the Android TV platform), while other manufacturers operate their own platforms and actively collect data for advertising and other purposes. Hardware manufacturers are responsible for many of the things buyers care about, like screen size, picture quality, and durability, but when it comes to data privacy, buyers should also think about the operating system and apps.

There are also some issues applicable to all modern TVs that deserve greater attention – specifically, the fact that digital advertising using TV data is a rapidly growing industry that has not yet developed consensus around privacy norms. These important questions about data privacy have broader implications for other connected devices in the Internet of Things (IoT) and the Smart Home.

Key privacy and security issues:

Relevant privacy policies are not always available.

A minimum standard for any connected device is the existence of a relevant, accurate, and, as far as possible, easy to comprehend privacy policy. Despite longstanding critiques of privacy policies – i.e. that many or most people do not read them – written policies nonetheless play a crucial role in U.S. privacy law. In addition to providing the basis for enforcement by the Federal Trade Commission if companies don’t keep their promises, they provide information to researchers, tech journalists, and privacy advocates, who routinely analyze and compare practices.

All Smart TVs that we reviewed provided users with access to privacy policies within the TV settings menu (although they varied somewhat in ease of navigation and text size). Some Smart TV manufacturers also provide access to their policies outside of the TV itself. For example, Samsung, Vizio, and Google (whose Android TV software powers devices offered by several manufacturers) make policies available online. In contrast, 2017 LG TVs provide a detailed privacy policy on the device, but it does not appear to be available on LG’s website or outside of the TV interface. No major manufacturers provide comprehensive privacy statements on TV packaging, which can make it challenging for prospective buyers in physical retail stores to compare privacy policies across brands.

Unsurprisingly, the TV manufacturer with the most readily available and clear Privacy Policy is most likely Vizio, which is also the only TV manufacturer to date (that we know of) whose TVs have been the subject of regulatory investigations, including a $2.2M settlement with the U.S. Federal Trade Commission and New Jersey Attorney General. Smart TV manufacturers would do well to follow the example of increased transparency and consumer education, and leaders in this space could go much further and present information “on the box” (as we called for in our connected toys report) so that prospective buyers can make well-informed decisions.

Automated Content Recognition (ACR) is a common feature of Smart TVs.

All Smart TVs that we reviewed – and probably, nearly all modern Smart TVs – are equipped with automated content recognition (ACR) technology. ACR is usually built into the TV software but is also present in many third-party apps. An early example of ACR technology is Shazam, the popular music recognition app that is now available on many leading TVs.

Generally, ACR technologies use one of two methods: fingerprinting or watermarking. The most common method, audio/video-based fingerprinting, relies on periodically extracting a “fingerprint” of unique characteristics of the content being watched, and sending it to a third party matching service to identify the content. Watermarking, in contrast, relies on the content creator to embed a unique “overlay” or “watermark” (often imperceptible) into the audio or video file so that it can be recognized again in the future.
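To make the fingerprinting approach concrete, here is a toy Python sketch. The single-peak feature, function names, and matching database are simplifications of ours, not how any production ACR system works:

```python
# A toy sketch of ACR-style audio fingerprinting, assuming a raw mono signal
# held in a NumPy array. Real systems use robust constellations of spectral
# peaks and fuzzy matching; this shows only the basic extract-and-match idea.
import numpy as np

SAMPLE_RATE = 16000
WINDOW = 4096

def fingerprint(signal):
    """Reduce an audio signal to a compact, hashable sequence of features."""
    peaks = []
    for start in range(0, len(signal) - WINDOW, WINDOW):
        spectrum = np.abs(np.fft.rfft(signal[start:start + WINDOW]))
        # One strongest frequency bin per window -- a crude stand-in for the
        # full set of spectral peaks a production system would extract.
        peaks.append(int(np.argmax(spectrum)))
    return tuple(peaks)

# The "matching service": a database mapping known fingerprints to content.
known_content = {}

def register(title, signal):
    known_content[fingerprint(signal)] = title

def identify(signal):
    # Real matching tolerates noise and partial clips; exact lookup suffices here.
    return known_content.get(fingerprint(signal), "unknown")

# Usage: register a synthetic 440 Hz "program," then identify a replay of it.
t = np.arange(0, 2.0, 1 / SAMPLE_RATE)
program = np.sin(2 * np.pi * 440 * t)
register("Example Program", program)
print(identify(program))  # -> Example Program
```

Note that the fingerprint adds nothing to the signal itself; all the work happens on the matching side, which is why this method requires an external database.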

Fingerprinting vs. Watermarking
Fingerprinting:

  • Audio or video-based
  • Extracts a set of unique properties from an audio or video signal (does not add any information)
  • Requires comparison with an external “matching” database to identify the content

Common Uses:

  • TV viewing measurement;
  • TV interactivity features (e.g. in-show trivia, live polls, or identifying songs or actors’ names during a show);
  • behavioral advertising;
  • detecting copyright infringement
Watermarking:

  • Audio or video-based
  • Adds a unique “overlay” or “watermark” of data (either perceptible or imperceptible), embedding it within an audio or video signal
  • Requires knowing what you’re looking for, i.e. it allows content creators to identify their own content

Common Uses:

  • tracking individually owned content;
  • identifying content creators;
  • tracing proprietary media (anti-piracy), e.g. in cinema distribution
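For contrast, here is a similarly toy sketch of the watermarking approach, using naive least-significant-bit embedding in 16-bit audio samples. Production watermarks are designed to survive compression and re-recording; this fragile version, with a made-up 8-bit “creator ID,” only illustrates the embed-then-detect idea:

```python
# A toy sketch of watermarking via least-significant-bit embedding in 16-bit
# audio samples. Unlike fingerprinting, this adds information to the signal,
# so the creator can later recognize their own content without a matching
# database -- they only need to know what mark to look for.
import numpy as np

def embed_watermark(samples, mark_bits):
    """Overwrite the LSB of the first len(mark_bits) samples with the mark."""
    marked = samples.copy()
    for i, bit in enumerate(mark_bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, length):
    """Read the mark back out of the first `length` samples."""
    return [int(s) & 1 for s in samples[:length]]

# Usage: embed a hypothetical creator ID into a synthetic tone, then recover it.
tone = (np.sin(2 * np.pi * 440 * np.arange(0, 1.0, 1 / 16000)) * 20000).astype(np.int16)
creator_id = [1, 0, 1, 1, 0, 0, 1, 0]
marked_tone = embed_watermark(tone, creator_id)
print(extract_watermark(marked_tone, len(creator_id)) == creator_id)  # -> True
```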

Most Smart TVs provide users with notice of ACR data collection through on-screen notices, but the policies typically describe the collection of “Viewing Information” or “Viewing History,” providing little detail about what the ACR technology collects or how it works. Examples of how ACR-enabled viewing data is described in on-screen policies are on file with the author.

Some manufacturers also describe ACR data collection in privacy policies that users can access online; the most comprehensive description of ACR technology and its related uses appears in Vizio’s Privacy Policy, which contains a Viewing Data Supplement. The data practices and privacy disclosures of Vizio Smart TVs have been the subject of substantial public scrutiny, regulatory investigations, and a settlement with the U.S. Federal Trade Commission and New Jersey Attorney General.

TVs vary in how they obtain consent to collect and use ACR data.

A key principle of U.S. privacy law is that technology providers should ask users for their consent prior to the collection or use of their personal information, especially sensitive information such as granular TV viewing data. However, what this consent should look like, and how to structure users’ choices, is a frequent source of debate.

Occasionally, offering choices is not practical, because data might be necessary for a device to work as intended – for example, any Internet-connected device necessarily sends an IP address and MAC address in order to connect to a network. In contrast, automated content recognition data (“ACR Data”), while it may enable certain benefits, is not necessary for the TV to function. Furthermore, ACR involves the collection of sensitive information about everything that viewers are watching and when. As a result, it is appropriate for TV manufacturers to offer robust choices around the collection of this kind of data.

In the TVs that we “unboxed,” notice and consent for the collection of ACR data varied. For example, Samsung TVs ask users to opt in to optional data collection during set-up. In contrast, the LG TV presented a basic privacy statement during set-up and then asked for additional permissions later, when we tried to use the specific features that required them. In general, these sorts of “just in time” notices reflect a more privacy-conscious design. Although in some ways it makes sense to place all the information “up front,” the set-up process is also a time when users are eager to get the device running, and not necessarily well-positioned to distinguish between routine terms of service (which may be required to set up the TV at all) and optional privacy choices related to added benefits.
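As a rough illustration of that “just in time” pattern, here is a minimal Python sketch. The class and function names are ours, purely hypothetical, and not any manufacturer’s actual software:

```python
# A minimal sketch of the "just in time" consent pattern described above:
# optional ACR collection stays off by default, and the user is prompted only
# when first using a feature that needs it. All names here are illustrative.
class PrivacySettings:
    def __init__(self):
        self.consents = {}  # permission name -> True/False, unset until asked

    def has_consent(self, permission):
        return self.consents.get(permission, False)  # default: no collection

    def record_decision(self, permission, granted):
        self.consents[permission] = granted

def show_recommendations(settings, prompt_user):
    """Recommendations depend on ACR viewing data, so gate them on consent."""
    if "acr_viewing_data" not in settings.consents:
        # Just-in-time prompt: shown at first use, not buried in initial set-up.
        granted = prompt_user("Recommendations use your viewing history. Allow collection?")
        settings.record_decision("acr_viewing_data", granted)
    if not settings.has_consent("acr_viewing_data"):
        return "Recommendations are unavailable without viewing-data consent."
    return "Because you watched..."

# Usage: simulate a user who declines, then one who accepts.
print(show_recommendations(PrivacySettings(), lambda msg: False))
print(show_recommendations(PrivacySettings(), lambda msg: True))
```

The point of the pattern is that the decision stays unset until the moment a feature actually needs the data, when the trade-off is easiest for the user to evaluate.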

Software updates (a key component of good security) are not necessarily automatic or guaranteed to continue.

As TVs become more like computers, a growing issue is the extent to which manufacturers have an obligation to continue supporting software and pushing updates to fix security vulnerabilities. As any smartphone user knows, receiving persistent reminders for app and OS updates can be frustrating, but updating software is crucial to good security.

In leading 2017 TVs, security updates are possible, but not necessarily automatic by default. In addition, it is not clear how often manufacturers push updates for security vulnerabilities. Most TV manufacturers have bug bounty programs (for example, Samsung’s Smart TV bug bounty program; Google’s vulnerability rewards program, which applies to Android TV; and Sony’s Secure@Sony program), which provide an incentive for independent security researchers to report security flaws so that companies can fix software before consumers are affected. However, without automatic updating or prominent notices on the TV interface, it can be difficult to ensure that TV buyers take the steps necessary to secure their devices.

Finally, many manufacturers do not yet make explicit assurances to their customers about how long they will continue to support older TV models. Given that the average lifespan of a TV is around 7-10 years, it is crucial that Smart TVs, with all of their added connectivity and software-dependent services, continue to be updated for a reasonable time. Furthermore, with enough transparency, software support can be a powerful selling point for a significant purchase. The importance of Smart TV security is heightened as the newest TVs become linked to other devices in smart homes.

Smart TVs vary in policies for data retention and deletion.

Finally, as with all connected devices, a key question for Smart TV providers is how they should handle users’ requests for deletion of accounts and associated data. Deletion of data is not only a practical consideration – for example, if a buyer decides to sell or re-purpose a TV – but is increasingly viewed as an aspect of consumer privacy rights.

Leading Smart TV manufacturers have very different policies regarding data retention and users’ opportunity to meaningfully delete their personal information:

TV Manufacturer: On-Screen Retention Policy

  • Sony (Bravia): “You can stop uploading the TV usage logs at any time in [Help] -> [Privacy setting]. If this uploading is disabled, the above information about the use of this TV will no longer be uploaded to Sony Corporation . . . Information already uploaded to Sony Corporation with a unique number shall be deleted or converted to anonymized statistical data within approximately six months. The viewing history data stored in this TV will also be deleted and as a result the functions which use viewing history data may not be available (such as “Popular” program recommendations).”
  • Samsung: “Interest based advertisements will be linked to a randomized, non-persistent, and resettable device identifier called the “PSID.” You may reset your PSID at any time by visiting the settings menu, and once reset, your viewing history and Smart TV usage information will be cleared and de-linked.”
  • LGE: “We will take reasonable steps to make sure that we keep your personal information for as long as is necessary for us to provide you with LG Smart TV Services or for the purpose for which it was collected, or as required by law.”
  • Vizio*: “If you request removal of Personal Information, you acknowledge that residual Personal Information may continue to reside in VIZIO’s records and archives, but VIZIO will not use that Personal Information going forward for commercial purposes.”

* We did not unbox a Vizio TV; this excerpt is from Vizio’s online Privacy Policy.
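As a side note on the “PSID” that Samsung describes above, here is a hypothetical Python sketch of how a randomized, resettable advertising identifier behaves: resetting generates a fresh random ID and de-links previously collected history. The names are ours, not Samsung’s implementation:

```python
# A hypothetical sketch of a resettable advertising identifier: interest data
# is keyed to a random ID, and resetting the ID clears and de-links any
# previously collected viewing history.
import uuid

class AdIdentifier:
    def __init__(self):
        self.psid = str(uuid.uuid4())   # randomized, non-persistent ID
        self.viewing_history = []       # data keyed to the current ID

    def record_viewing(self, program):
        self.viewing_history.append(program)

    def reset(self):
        # A fresh random ID; history tied to the old ID is cleared/de-linked.
        self.psid = str(uuid.uuid4())
        self.viewing_history = []

# Usage: record some viewing, reset, and confirm the ID and history are de-linked.
device = AdIdentifier()
device.record_viewing("Example Program")
old_id = device.psid
device.reset()
print(device.psid != old_id and device.viewing_history == [])  # -> True
```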

 

Digital advertising using Smart TV data is a nascent, but rapidly growing, industry.

While Smart TV data can be used for a wide range of useful features (including measurement, recommendations, and interactivity features such as in-show trivia, polling, or song recognition), it can also be used for personalized advertising in potentially unexpected or intrusive ways. As a result, there is a need for greater transparency, understanding, and consumer education on issues of TV data privacy.

Programmatic advertising, while well-established in the online ecosystem, is still nascent and growing rapidly for Smart TV data. There are two sides to TV data and advertising: (1) the use of TV viewing data for serving advertisements elsewhere, such as on associated devices; and (2) the use of data from other sources (online browsing behavior on associated devices, social media activities, or demographics) to display an advertisement on a TV. Although both activities may be surprising to consumers, they carry different implications for individual privacy.

In discussions around best practices, processors of TV data are often inclined to apply the same, or similar, standards as those that exist for online behavioral advertising, such as the Network Advertising Initiative’s Code of Conduct. Although similarities exist, direct application of standards for online advertising may not be appropriate unless those standards take into account the key differences between the TV and online advertising ecosystems.

Finally, it was surprising to note that several leading Smart TVs are integrating advertising into their main user interfaces. In other words, the TV’s main screens are being designed to contain static placements for digital advertisements (whether personalized or otherwise). For many, this will be a serious downside that might not be understood at the time of purchase. Will we start to see differential pricing for Smart TVs – a less expensive TV with advertisements, and a more expensive TV without advertisements? Although this would not necessarily be unusual for connected devices, it remains to be seen how TV buyers would respond to such a pricing model.

Conclusions

Overall, the industry for Smart TVs and Smart TV data, much like the broader “Internet of Things” ecosystem, is relatively nascent. In the absence of baseline privacy legislation that would provide minimum standards for commercial collection and use of personal information, there is little consensus or consistency between different TV manufacturers about the appropriate ways to collect and use data. Smart TVs promise a range of benefits and interactive features – but are also collecting data for advertising and commercial purposes that might surprise many Smart TV users.

Unfortunately, even well-informed prospective buyers of Smart TVs do not yet have easily available tools to compare TVs on the basis of their privacy and security features. As more consumers start to use Smart TVs as a central hub for connected home devices, good security is also critical. Ironically, strong security practices can make it more difficult for independent researchers to evaluate privacy features. For example, a side effect of the increased use of SSL encryption (an important security safeguard for well-designed connected devices) is that security researchers cannot easily analyze the data being sent and received by a Smart TV.

We look forward to Consumer Reports’ emerging Digital Standard, which promises to provide such tools for prospective buyers to compare connected devices on many key privacy aspects – such as the existence of clear policies or the default settings of ACR. Inevitably, it will be difficult for outside observers to compare Smart TVs on the basis of their internal business practices (especially as data is increasingly better secured and challenging to assess from external observation). For these reasons, independent trusted organizations will likely play a key role in addressing these challenges in years to come, as will manufacturers themselves, by working towards greater transparency and stronger privacy commitments.

 

 

[1] For more information on the 2015 Samsung concerns and other privacy issues related to voice data and speech-enabled devices, read Future of Privacy Forum’s 2015 report, “Always On: Privacy Implications of Microphone-Enabled Devices.”

[2] In evaluating Smart TVs, we approached each 2017-model TV from the perspective of an average user, relying only on public-facing documents and the communications presented in the TV’s user interface. Although some of the companies discussed here are also supporters of FPF, we applied the same approach to all TVs and believe that we have presented a fair, accurate comparison of key privacy and security aspects.