Custom Audiences and Transparency in Online Advertising
By Stacey Gray and Gargi Sen*
This morning, Facebook announced that it will begin rolling out new requirements for its “Custom Audiences” targeting tool for advertisers. These updates are a useful step toward creating better user understanding of data flows, both on Facebook and across the broader web, and enhancing the accountability of advertisers who use custom marketing lists.
What is Custom Audiences?
Facebook’s Custom Audiences is a tool that allows advertisers to upload their own marketing lists of users’ contact information — typically, email addresses or phone numbers — and to target advertisements to those same users on Facebook. To protect privacy, the uploaded list is first “hashed,” a one-way cryptographic transformation (Facebook supports SHA-256). The platform then compares the advertiser’s hashed data with hashed data derived from its own users’ profiles to see who can be added to the advertiser’s “audience.” This is done without the advertiser learning which users from its own list were a “match,” or whether they even have Facebook profiles. Instead, advertisers receive a report from Facebook with a rough estimate of how many people they are reaching.
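As a rough illustration of how hashed matching works in general (a minimal sketch, not Facebook’s actual implementation; the normalization rules and sample addresses below are assumptions for illustration), the snippet hashes an advertiser’s contact list with SHA-256 and intersects it with the platform’s own hashed records:

```python
import hashlib

def normalize_email(email: str) -> str:
    # Illustrative normalization: matching systems typically
    # lowercase and trim values before hashing.
    return email.strip().lower()

def hash_contact(value: str) -> str:
    # One-way SHA-256 digest; the raw value is never sent to the platform.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Advertiser side: hash the marketing list before upload.
advertiser_list = ["Alice@Example.com", "bob@example.com"]
uploaded_hashes = {hash_contact(normalize_email(e)) for e in advertiser_list}

# Platform side: compare against hashes of its own users' contact info.
platform_hashes = {hash_contact(normalize_email(e))
                   for e in ["bob@example.com", "carol@example.com"]}

# The overlap defines the ad "audience"; the advertiser only learns an
# approximate audience size, not which individuals matched.
audience_size = len(uploaded_hashes & platform_hashes)
print(f"Estimated audience size: {audience_size}")
```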
For Facebook users, clicking “Why Am I Seeing This Ad?” at the top of an advertisement opens a pop-up that provides more information about how the ad was targeted. For example, the disclosure may say that the user was targeted based on his or her “Interests,” which are informed by activities such as liking certain pages on Facebook. It may also describe other targeting parameters, such as age or geographical location. With the new requirements (described below), the disclosure will also identify the source of the data and note whether the advertiser was able to reach the user through his or her phone number or email address.
What’s new for advertisers and Facebook users?
Following today’s updates, when an advertiser uploads a customer file to create an advertising “audience,” Facebook will require the advertiser to state whether it obtained the information (1) directly from people, (2) from data partners, or (3) a combination of both. When users click “Why Am I Seeing This Ad?”, they will now see this information, along with a disclosure if an advertiser used their email address or phone number to target the ad.
As before, users can subsequently choose to stop seeing ads from that particular advertiser, or manage their preferences for targeted ads in Ad Preferences.
Facebook’s “Why Am I Seeing This Ad” feature. Source: https://www.facebook.com/business/news/introducing-new-requirements-for-custom-audience-targeting
Where does the data come from?
Advertisers can obtain marketing lists from many different sources, including from their own customers (for example, through loyalty cards, newsletters, or email subscription lists). They might also be working with a Customer Relationship Management (CRM) system, such as Salesforce, that helps handle data about clients, customers, or prospective customers.
In addition, many advertisers obtain customized lists of “audiences” from online behavioral targeting and marketing companies, such as Acxiom, Experian, or Oracle Data Cloud. For example, as we described in a 2015 cross-device tracking report, Oracle’s BlueKai links 80+ sources of data to “audience categories” based on purchasing intents—e.g. “Back to School Shopper” or “Graduation Gift Buyer.” Although Facebook is winding down its direct integration with these third-party data providers, it remains a common industry practice for advertisers to obtain marketing lists from third-party providers and use them elsewhere (subject to contractual limits).
According to Facebook’s Custom Audiences Terms of Service, advertisers are ultimately responsible for having permission to share and use the data they hold. Advertisers must promise that their data was obtained legally and appropriately; for example, they must promise to adequately encrypt the data and to honor any user opt-outs they have committed to. According to Facebook, advertisers will also now start seeing more regular, detailed reminders of these obligations to help protect users’ privacy.
Implications for AdChoices and Broader Personalized Advertising
Transparency in online advertising, i.e. showing users who placed an ad and what kind of information was used to inform its placement, is challenging even in a controlled environment like Facebook. In the broader web, mobile apps, and Smart TVs, it becomes even more challenging, because the infrastructure and protocols must exist for hundreds or thousands of advertising platforms to communicate with users through consistent tools.
In the online environment, the most common method of providing transparency around personalized (behavioral) online advertisements is the Digital Advertising Alliance’s AdChoices icon and opt-out tool. Developed as a self-regulatory program for online advertising, it provides a way for advertisers to share information about the data being collected about their customers while giving users a centralized tool to opt out of seeing such ads. The system isn’t perfect: most users do not recognize the icon, and different ad networks provide different amounts of information, from fairly detailed (“this ad is based on your general location and the time of day”) to very broad (“this ad is based on information about your online activities”).
Most advertisers have to strike a balance: too broad, and the information is not useful; too detailed, and it may become confusing or inaccessible. Last month, researchers at the Harvard Business School explored ways in which greater transparency may even lower ad effectiveness if users are surprised by unexpected information flows.
Looking Ahead
We applaud Facebook’s efforts towards building the necessary infrastructure for robust advertising transparency. Will Facebook users click through to view the new disclosures in targeted advertisements? If so, what will their reactions be? Much of the benefit depends on how the platform raises awareness about the new disclosures, and whether the disclosures are tied to meaningful user choices to better control their data. Following months of news about data privacy and the influence of platforms that enable personalized content, we have seen an enhanced focus on transparency and better understanding of online data flows.
Here are a few things on the horizon:
Better User Education. For Facebook users who might not have known that they could be reached using their email addresses or phone numbers, these new requirements are an opportunity to become more aware of online data flows.
More Robust User Controls. Transparency through privacy disclosures is primarily useful when it provides users with accessible tools to control the use of their data. We look forward to seeing not only Facebook but the broader ecosystem of online advertisers continue to improve and iterate on user Opt Outs and controls.
Political Advertisements and Self-Regulation. In the last year, there has been a growing awareness of the specific role of targeted political content in shaping political views. The Honest Ads Act, introduced in 2017, aimed to address these issues by requiring those who purchase and publish such ads to disclose information about the advertisements to the public. Self-regulatory efforts have also emerged, with the Digital Advertising Alliance recently launching an industry-wide initiative to label political ads. Facebook also recently began requiring political advertisers to verify their identity and location for election-related and political issue ads, and now makes this information available to users with clear labels. The effectiveness of these initiatives will help inform broader advertising transparency efforts.
We look forward to continuing to engage with industry, academics, and advocates on these issues to work towards better consumer education and controls for online advertising.
* Stacey Gray is a Policy Counsel at the Future of Privacy Forum, specializing in Internet of Things, Ad Tech, and geo-location data privacy issues. Gargi Sen is a Legal Fellow at the Future of Privacy Forum, with 10+ years of experience in technology contracts, compliance, and risk assessments.
Facebook, Acxiom, and Salesforce are supporters of the Future of Privacy Forum.
Thanks to Facebook for their proactive engagement with the privacy community on these updates.
Free Student Privacy Bootcamp in Chicago on June 26
FPF has partnered with the Software and Information Industry Association (SIIA) to continue our series of free privacy bootcamps for ed tech companies during the ISTE conference in Chicago. Slots are limited, so RSVP to attend now!
When: Tuesday, June 26th in Chicago from 9am-5:30pm
Where: @1871, 222 W. Merchandise Mart Plaza Suite 1212, Chicago, IL 60654
Need a fast and furious intro to what it takes to do privacy – and security – right as a player in the ed tech market? Want to know why forty states have passed new student privacy laws since 2013? Curious about what other laws apply to your company? Have you had districts ask you to sign privacy contracts, and don’t know why? Want to learn how to get free resources to help you improve your privacy practices and build trust with schools and parents?
Then this event is for you!
This free Student Privacy Bootcamp is co-hosted by SIIA and FPF, the co-founders of the Student Privacy Pledge. We are able to offer this event for free through the support of the Bill & Melinda Gates Foundation.
Speakers include the Director of the Student Privacy Policy and Assistance Division from the US Department of Education, states and districts leading the way on student privacy issues, and education and student privacy experts from itslearning, InnovateEDU, and Orrick.
For the complete agenda and to register, click here.
Please reach out to Sara Collins ([email protected]) with any questions.
We hope you can join!
Empirical Research in the Internet of Things, Mobile Privacy, and Digital Advertising
In the world of consumer privacy, including the Internet of Things (IoT), mobile data, and advertising technologies (“Ad Tech”), it can often be difficult to measure real-world impact and conceptualize individual harms and benefits. Fortunately, academic researchers are increasingly focusing on these issues, leading to impressive scholarship from institutions such as the Princeton Center for Information Technology Policy (CITP), Carnegie Mellon University School of Computer Science, UC Berkeley School of Information, and many others, including non-profits and think tanks.
At events like the Privacy Law Scholars Conference, recently held in Washington, DC (PLSC 2018, May 30-31), privacy-minded scholars from around the world and across disciplines meet to share their research and new ideas each year. Many other conferences sponsor and call for technical privacy research, such as the Privacy Enhancing Technologies Symposium (PETS), which recently announced its accepted papers for 2018. Empirical studies from privacy researchers such as these are an invaluable part of having reasonable and concrete policy debates.
Here is some of the latest in technical and empirical privacy research from 2018:
Researchers conducted semi-structured interviews with owners of smart homes, to explore “privacy awareness, concerns, and behaviors.” They found that: “first, convenience and connectedness are priorities for owners of smart homes, and these values dictate their privacy opinions and behaviors. Second, user opinions about who should have access to their smart home data depend on the perceived benefit. Third, users assume their privacy is protected because they trust the manufacturers of their IoT devices. Our findings bring up several implications for IoT privacy, which include the need for design for privacy and evaluation standards.”
In this study, the authors set up six common IoT devices, and observed network traffic in order to assess privacy and security risks of home devices, as well as study their effects on bandwidth and power consumption. Readers interested in this topic should also check out an oft-cited paper presented at FTC’s 2016 PrivacyCon: A Smart Home is No Castle: Privacy Vulnerabilities of Encrypted IoT Traffic, by Noah Apthorpe, Dillon Reisman, and Nick Feamster (Princeton University).
Based on workshops and interviews with 40 experts, practitioners and scholars (including from Future of Privacy Forum), this report explores the privacy challenges of connected devices and the emerging strategies to address them, including issues of transparency, consent, identifiability, emotional and bodily privacy, and the destabilization of boundaries. UC Berkeley’s Center for Long-Term Cybersecurity published a short white paper version of the research.
The authors explore “device fingerprinting” in the mobile environment, where mobile apps typically rely on non-cookie tracking tools (advertising identifiers), and users of mobile browsers can be more difficult to distinguish from each other than users of web browsers. Among other things, the authors “collect measurements from several hundred users under realistic scenarios and show that the state-of-the-art techniques provide very low accuracy in these settings.”
In this paper, similar to the one above, the authors demonstrate a 90%+ success rate for re-identifying users on the basis of their mobile gestures. As we learn more about the use of mobile sensors for inferring behavior (for example, whether a user is intoxicated), fingerprinting of sensor data from mobile devices may become one of the next major considerations for consumer privacy.
“Won’t Somebody Think of the Children” Privacy Analysis at Scale: A Case Study With COPPA, by Irwin Reyes (International Computer Science Institute), Primal Wijesekera (University of British Columbia), Joel Reardon (University of Calgary), Amit Elazari (University of California – Berkeley), Abbas Razaghpanah (Stony Brook University), Narseo Vallina-Rodriguez (International Computer Science Institute, IMDEA Networks), and Serge Egelman (International Computer Science Institute, University of California – Berkeley)
Researchers from UC Berkeley, the International Computer Science Institute, and the University of British Columbia have demonstrated a way to efficiently analyze network traffic for mobile privacy implications in 80,000 Android apps in the Google Play store.
Email marketers’ use of third-party tracking tools is often less well understood than online and mobile advertising technology. In order to better understand and describe it, Englehardt et al. created a corpus of emails by signing up for hundreds of commercial mailing lists, and monitored the subsequent web traffic upon opening those emails. Their work describes a complex email marketing ecosystem, stating: “Email tracking is pervasive. We find that 85% of emails in our corpus contain embedded third-party content, and 70% contain resources categorized as trackers by popular tracking-protection lists. There are an average of 5.2 and a median of 2 third parties per email which embeds any third-party content, and nearly 900 third parties contacted at least once.”
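As a rough, simplified sketch of the kind of measurement this study describes (not the authors’ actual pipeline; the tracker domains and sample email below are invented for illustration), the following code scans an email’s HTML for remote images served from domains on a tracking-protection-style blocklist. Requests to such URLs are what fire when a recipient opens the email.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative stand-in for a real tracking-protection list.
TRACKER_DOMAINS = {"tracker.example", "metrics.example"}

class ImageCollector(HTMLParser):
    """Collects the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

def find_trackers(email_html: str) -> list:
    parser = ImageCollector()
    parser.feed(email_html)
    # Flag remote images whose host matches a known tracker domain;
    # opening the email would trigger a request to each of these URLs.
    return [url for url in parser.image_urls
            if urlparse(url).hostname in TRACKER_DOMAINS]

sample_email = """
<html><body>
  <p>Thanks for subscribing!</p>
  <img src="https://tracker.example/open?uid=abc123" width="1" height="1">
</body></html>
"""
print(find_trackers(sample_email))
```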
This research, published by the Harvard Business School and based on the lead author’s dissertation, investigates how and why ad transparency (the disclosure of the ways in which personal data is used to generate personalized or behaviorally targeted ads) impacts the effectiveness of those online ads. The authors predict that “ad transparency undermines ad effectiveness when it exposes marketing practices that violate consumers’ beliefs about ‘information flows’–how their information ought to move between parties.” (p.2). Through experiments, the authors find supporting evidence that whether information flows are deemed acceptable depends on “the extent to which the ad is based on 1) consumers’ activity tracked within versus outside of the website on which the ad appears and 2) attributes explicitly stated by the consumer versus inferred by the firm (the latter of each pair is deemed less acceptable).” (p.38)
Predicted outcomes for ad effectiveness, according to Tami Kim, et al, based on whether the consumer trusts the platform and finds the underlying data flows acceptable. Source: Figure 1, p. 15 of “Why Am I Seeing This Ad? The Effect of Ad Transparency on Ad Effectiveness” by Tami Kim et al.
In this paper, Stacia Garlach and Professor Daniel D. Suthers investigate a small sample of smartphone users, and explore whether and how they notice, understand, and use the “AdChoices” icon in typical mobile advertisements.
Discrimination in Online Advertising: A Multidisciplinary Inquiry, by Amit Datta (Carnegie Mellon University), Anupam Datta (Carnegie Mellon University), Jael Makagon (UC Berkeley), Deirdre K. Mulligan (UC Berkeley), and Michael Carl Tschantz (International Computer Science Institute), Proceedings of Machine Learning Research 81:1–15, 2018
In this paper, Datta et al. explore the ways in which discrimination may arise in the targeting of job-related advertising. The authors note that Section 230 of the Communications Decency Act provides interactive computer services with immunity for providing access to information created by a third party, but argue that “such services can lose that immunity if they target ads toward or away from protected classes without explicit instructions from advertisers to do so.”
Did we miss anything? Send us your recommended research and scholarship at [email protected]
Several papers listed above are accepted papers for the 2018 Privacy Enhancing Technologies Symposium (PETS) (July 24-27, 2018).
EDITED 6/13/18 to add a recent report, Clearly Opaque: Privacy Risks of the IoT (May 2018), by The Internet of Things Privacy Forum
A Toast to Privacy: Celebrating Day 1 of the GDPR
On May 24, the Future of Privacy Forum was honored to co-host a “Toast to Privacy” with the European Union Delegation to the United States to mark the implementation of GDPR and celebrate those who have been working on related projects. The event was held at the Delegation of the European Union’s offices and was attended by public and private sector, government, and civil society leaders from Europe and the United States.
A “midnight” toast to data protection and privacy at 6pm (midnight in Brussels) featured remarks from key stakeholders representing EU institutions, industry, academia, and more via video conference. Speakers included Ambassador David O’Sullivan (European Union Ambassador to the United States), Jules Polonetsky (CEO of The Future of Privacy Forum), and Terrell McSweeny (former FTC Commissioner).
By Sara Collins, Stacey Gray, Tyler Park, and Amelia Vance
Privacy advocates have long feared that student data can be used to inappropriately market to students or limit their future opportunities. In the United States, information about students is often used to send them mail or emails about educational opportunities, scholarships, or after-school activities such as tutoring services or sports leagues. Access to student data is heavily regulated when it comes from schools, teachers, or school surveys. But what about when that same data can be collected from other sources, like public records, commercial sources, or the students themselves? This is the topic of a new study by Fordham University’s Center on Law and Information Policy (Fordham CLIP), “Transparency and the Marketplace for Student Data.”
In the study, the authors present findings from years of research into the commercial ecosystem for data about students, with a particular focus on the practices of “data brokers.” In general, data brokers are companies that buy and sell information about consumers from a wide variety of sources, often from public records or commercial sources. This information may be used for many purposes, such as analyzing trends, or buying and selling lists of contact information for various categories of consumers for commercial advertising or direct marketing. Data sharing such as this is common: as we described in our 2015 cross-device report, Oracle’s BlueKai links more than 80 sources of data to online IDs that can be used to target consumers based on categories or purchasing intents—e.g. “Back to School Shopper” or “Graduation Gift Buyer.” Many companies provide similar services, as personalized or targeted offers are more valuable to advertisers than generic content.
When commercial data involves categories of people known or inferred to be “students” (for example, because they might be shopping for new electronics), it raises understandable concerns from parents and advocates. For example, one seller contacted by Fordham CLIP was willing to sell a marketing list of “fourteen and fifteen year old girls for family planning services.” Elana Zeide, Visiting Assistant Professor at Seton Hall University’s School of Law, remarked that the study’s findings “reflect the broader trend to score and credential human beings as an integral part of our data-driven society.”
Legal frameworks exist to protect student information, but when data does not come from teachers or schools, it falls outside the strict requirements of the Family Educational Rights and Privacy Act (FERPA) and most state laws, such as the California Student Online Personal Information Protection Act (SOPIPA). Nonetheless, schools must abide by the Protection of Pupil Rights Amendment (PPRA) when administering surveys that could be used for secondary commercial purposes. The Fair Credit Reporting Act (FCRA) and the Children’s Online Privacy Protection Act (COPPA) may already be able to address some of the problems raised by the study.
FERPA and State Student Privacy Laws Governing Schools
“One bright spot in the report is that, among the small number of school districts that responded to the researcher’s requests for information, none appeared to be selling or sharing student information to advertisers,” noted Bill Fitzgerald, Project Director at InnovateEDU, on his personal blog. “However, even this bright area is undermined by the small number of districts surveyed, and the fact that some districts took over a year to respond, and with at least one district not responding at all.”
Under FERPA, schools and teachers are prohibited from sharing information from a student’s educational record — such as contact information, demographics, educational goals, or performance — unless there is parental consent or the disclosure falls within one of FERPA’s exceptions. Similarly, the 121 state student privacy laws passed since 2013 place strict requirements around student information held by local and state education agencies (LEAs or SEAs, respectively), or data that third parties obtain in order to fulfill a school function for an LEA or SEA. As a result, schools typically cannot be used as a source of commercial information for things like advertising and marketing.
However, as this study notes, FERPA and state student privacy laws only apply to personally identifiable information collected by a school or a third party acting on behalf of the school. As a result, if similar data can be collected from outside sources like public records, websites, or mobile games, the information can typically be used for commercial purposes (although not without restrictions, as we describe below).
PPRA and Surveys Administered in Schools
When schools administer surveys to students, a lesser-known federal law applies: the Protection of Pupil Rights Amendment (PPRA). In the research presented in this study, the authors describe how information about students can inadvertently become commercially available through surveys that students fill out online or in schools. For example, some schools are likely giving out surveys from the National Research Center for College and University Admissions (NRCCUA), an organization listed in the study as a “Student Data Broker.” NRCCUA says on its website that it obtains its information from surveys of high school students given by “teachers, guidance counselors, and online.”
PPRA requires schools and districts to have a policy, created in consultation with parents, that establishes a parental right to inspect any survey created by a third party before the survey is administered. If the survey asks about certain sensitive topics – such as family income, religion, political beliefs, or anti-social behaviors – PPRA requires even more: parents must be told about the survey and given the opportunity to opt their child out of taking it. The Department of Education has recently released guidance on administration of third party surveys, reiterating that the PPRA requires consent from parents, not students.
PPRA also has a little-known provision that covers student information marketers: the policies that schools and districts must develop must specifically address:
[t]he collection, disclosure, or use of personal information collected from students for the purpose of marketing or for selling that information (or otherwise providing that information to others for that purpose), including arrangements to protect student privacy that are provided by the agency in the event of such collection, disclosure, or use. 20 U.S.C. § 1232h(c)(1)(E).
If a survey is found to violate PPRA, it is the school, not the entity that created the survey, that faces liability. As a result, schools and districts should carefully review their PPRA policies and ensure that all school staff know the restrictions of the law. Schools should also review how the school as a whole or individual teachers decide which surveys to administer, to whom the survey authors are releasing the data, and for what purposes.
Existing Enforcement Opportunities
While the study cites the Fair Credit Reporting Act (FCRA) as a potential model for future regulation of student data brokers, the law itself may already cover certain limited uses of this data under some of the most concerning circumstances. FCRA has a broad definition of “consumer report,” which includes “any information” related to “general reputation, personal characteristics, or mode of living” used to make decisions regarding credit, employment, housing, or another benefit. For example, if data brokers provide information to colleges or universities with the knowledge that those institutions are using the information to make admissions decisions, they will be considered a “consumer reporting agency” and subject to FCRA’s strict requirements. The FTC has previously issued warning letters to a variety of data brokers in different fields regarding their obligations under FCRA.
Schools should also be aware that the FTC has stated that COPPA applies to any data collected from children under the age of 13, even if it is collected through surveys administered in schools or online. If a school or parent believes that a company is collecting information from children under the age of thirteen for secondary or inappropriate purposes, they may file a complaint with the FTC.
Conclusions
In light of this study, we may see greater regulatory attention to data about student consumers from the Federal Trade Commission (FTC) or the states’ Attorneys General. It is also possible that local, state, or federal policymakers will introduce legislation to regulate data about students that falls outside of FERPA, COPPA, and PPRA. For example, Vermont recently passed legislation requiring “data brokers” to post information about their data practices and opt-outs with the Vermont Secretary of State. These state and sectoral laws may prove to be imperfect solutions, and highlight the growing importance of crafting a baseline privacy law in the United States that would provide more consistent protections for all consumers.
Amidst calls for greater transparency and parental control over this type of data, it is important to ensure that strong privacy protections are balanced against legitimate educational uses of student data, such as providing more educational opportunities to students. Companies that provide these kinds of direct marketing opportunities can help build trust by establishing clear policies around how data is collected, how it may be used in fair and non-discriminatory ways, and how it will be safeguarded.
Note: One of the CLIP study’s authors, Joel Reidenberg, is an FPF Advisory Board Member. Elana Zeide, quoted in this post, is also an FPF Advisory Board Member.
Ensuring School Safety While Also Protecting Privacy: FPF Testimony Before the Federal Commission on School Safety
By Sara Collins, Tyler Park, and Amelia Vance
Amelia Vance, FPF’s Director of Education Privacy, spoke today at the Federal Commission on School Safety’s Listening Session (see the video here). She asked that any Commission recommendations include the need for privacy “guardrails” around school safety measures to ensure that student privacy and equity are protected.
She suggested three concrete steps that could improve safety and safeguard privacy:
First, the Department of Education’s Privacy Technical Assistance Center should publish guidance regarding FERPA provisions that permit data sharing in response to safety threats;
Second, programs or proposals to collect and analyze additional student data should be targeted at the most serious threats to school safety, not minor infractions of school policy; and
Finally, schools should be transparent about their data-driven safety initiatives.
Parents trust schools with their children, and schools should act to ensure student safety. For that to happen, schools must engage in some forms of surveillance. This includes ensuring preschoolers do not wander off, keeping third graders on task, and preventing or identifying instances of bullying or potential violence. These responsibilities are not new, but, as technology has evolved, schools have gained an increased ability to monitor students continually, both in and out of the classroom. Schools are using services such as social media monitoring, digital video surveillance linked to law enforcement, and visitor management systems to help protect their students. These can be useful tools; however, they can also harm students if appropriate measures are not in place to regulate and guide their use.
Many recent state school safety proposals include surveillance as a tactic to avoid future school violence. For example, Florida’s new law creates a database combining data from social media, law enforcement, and social services agencies. The school safety plan from Texas proposes combining local, state, and federal resources to scan and analyze not only public student social media posts, but also “private or direct messages” and “Information exchanged in private chat groups [or] via text message.” To be clear, the government is actively seeking out children’s social media accounts, both public and private, and combining this information with existing law enforcement or social services records to profile which students are threats.
Individual districts and states can and should set their own policies on whether and how to monitor students and protect school safety. However, privacy guardrails must be put in place so parents and students can be reassured that their rights will be protected.
The negative effects of surveillance should be considered as well. Surveillance can undermine a student’s sense of safety, creating a prison-like environment where students feel big brother is always watching. Students are still maturing and need to know schools are safe spaces where they can ask questions, think creatively, and make mistakes. Increased surveillance can also create a “permanent record” that can limit a student’s future opportunities. Retention, use and analysis of such data may inform algorithmic decision-making that impacts students later in life, opening a door to the universe of harms that can arise from automated decision-making. There are also legal limitations on when student data can be shared with law enforcement, as FPF discussed in a recent publication.
Some advocates have expressed concern that if districts and states do not set policies limiting and describing the purposes for which surveillance will be used, surveillance could aggravate existing school discipline disparities between groups of students. A recent GAO report on student discipline found that while students with disabilities made up only twelve percent of the student population, they accounted for nearly twenty-five percent of the students who were referred to law enforcement, arrested, or suspended. The latest report from the Department of Education’s Civil Rights Data Collection showed that while black students accounted for fifteen percent of the student population as a whole, they accounted for thirty-one percent of arrests. This disparity has appeared in the student surveillance space as well: an Alabama school district implemented a new security protocol that included social media monitoring, and of the 14 suspensions the district issued under the new protocol, 12 were of African American students. Increased student surveillance is likely to disproportionately affect students of color and students with disabilities.
These effects can be mitigated by adopting privacy protections, such as those laid out in the Fair Information Practice Principles and recommendations from a report about the intersection of school surveillance, privacy, and equity, co-written by Amelia in her previous role at the National Association of State Boards of Education. Any surveillance that is undertaken should have policies about what data is collected, why it is collected, and how the data will be used. Schools should be explicit with students and parents that these tools are designed to curb and respond to threats of violence and harm – not petty infractions of school codes. When a school develops its emergency response plan, it should articulate guidelines about what constitutes a threat to the health or safety of a student, as well as the kind of information that will be shared in response to that threat.
However, privacy should never get in the way of preventing school violence. In the wake of the Virginia Tech shooting, the Family Educational Rights and Privacy Act (FERPA) was amended to clarify when information can be shared during a health or safety emergency. However, that is not enough; districts have shared that they need more guidance on when they are able to report potential safety threats, and not enough teachers are aware of what FERPA allows. The Department of Education’s Privacy Technical Assistance Center (PTAC) has been vital for schools seeking practical guidance on FERPA. Amelia testified that the Commission should recommend that PTAC publish guidance and provide more technical assistance on this issue.
Schools across America are looking to the Commission’s recommendations to guide their decisions around safety and surveillance. Amelia testified that the Commission should recommend that programs or proposals to collect and analyze additional student data should be targeted at the most serious threats to school safety. If applied broadly to less serious violations of school rules, the programs could overwhelm school administrators with data, cast suspicion on students who show no signs of violent behavior, and fail to promptly identify individuals who pose genuine threats to school safety. Amelia also asked that the Commission urge schools to be transparent about their data-driven safety initiatives. Secrecy breeds distrust, and trust is a crucial pillar of school communities.
Student opportunities should not be limited, either by school safety concerns or by violations of their privacy.
Data Privacy: It's Time to Treat Your Car Like a Smartphone
Lauren Smith, an FPF Policy Counsel, was recently featured in 2025 AD. Lauren leads the FPF Connected Cars Working Group and serves as a global expert and thought leader through speaking engagements, media interviews, and interaction with state and federal regulators and strategic partners. In this exclusive interview, she discusses best practices to advance privacy and understanding as new mobility technologies come to market. Lauren explains:
“The existence of data in the car is not a new phenomenon. Event data recorders and onboard diagnostics have been in cars for decades. But in the past few years there has been an explosion in the variety, the connectivity and the volume of data. Cars are no longer simply mechanical chassis that take us from point A to point B. It’s time for people to treat their cars like a computer or a smartphone. If you sell your car or return your rental car, you should think about which data you might want to delete.”
Dept of Ed: Parents, Not Minor Students, Must Consent to College Admissions Pre-Test Surveys and Data Sharing
By Amelia Vance, Sara Collins, and Tyler Park
Ed tech vendors that use student data to provide services in schools must navigate a complicated legal landscape, including intertwining state and federal laws, all of which are designed to protect student privacy. Newly released technical assistance from the US Department of Education’s (USED) Privacy Technical Assistance Center (PTAC) explores student data use practices by state and local education agencies (SEAs and LEAs) that register students for college admissions examinations. School registration of students for these exams, or the use of these exams as Title I assessments, can raise questions about statutory compliance obligations.
A key finding in the PTAC guidance: optional surveys administered to students as part of the SAT and ACT exams can violate the Protection of Pupil Rights Amendment (PPRA), the Family Educational Rights and Privacy Act (FERPA), and the Individuals with Disabilities Education Act (IDEA) if appropriate transparency, consent, and other privacy safeguards are not employed.
The guidance is not limited to administration of the SAT and ACT; PTAC intends the guidance to inform LEA and SEA practices whenever schools register students for third-party examinations and surveys that collect protected information. It is also worth emphasizing that this technical assistance does not apply when students take the SAT or ACT independently.
Why is this happening now?
Originally, students signed up to take the ACT and/or SAT on their own to fulfill college admissions requirements for certain schools. In recent years, states have elected to make the ACT and/or SAT mandatory for high school students and have begun administering the tests directly. This change occurred for two reasons. First, states and districts want to increase college access; making the ACT/SAT free removes an economic barrier to entering college. Second, some states now use the ACT/SAT as the high school English and math assessment required and authorized by Title I of the ESEA. In states where this shift occurred, information collected in conjunction with the administration of those tests becomes part of the student’s education record, and is therefore subject to legal obligations under FERPA.
The Issue
As part of the SAT or ACT, students can fill out a pre-test survey that asks questions about various topics, including religious affiliation and parental income. Students currently “opt in” to taking the survey and to having the information they enter shared with education-related third parties. The primary purpose of these surveys is to help colleges, universities, scholarship services, and recruiters identify students who may be of interest to their programs. However, it is not always clear that these surveys are voluntary; PTAC wrote in the technical assistance:
We have heard from teachers and students…that the voluntary nature of these pre-test surveys is not well understood, and that each of the questions requires a response, and the student must affirmatively indicate in response to multiple questions that the student does not wish to provide the information.
PPRA
The Protection of Pupil Rights Amendment (PPRA), passed in 1978, “seeks to ensure that schools and contractors obtain written parental consent before minor students are required to participate in any ED-funded survey, analysis, or evaluation that reveals” certain sensitive information – including religion and parental income, two categories of information included in the SAT and ACT pre-surveys. Under PPRA,
LEAs must also adopt policies to protect student privacy in the event of the administration or distribution of any survey containing questions that ask students to reveal information from one of the eight PPRA-protected areas and also provide notification to parents, at least annually, at the beginning of the school year, of the specific or approximate dates during the school year when such a survey is scheduled or expected to be scheduled and an opportunity for parents to opt their students out of participation in any such survey.
The technical assistance finds that, under PPRA, parents – or students if they are over 18 – must be notified and given the opportunity to opt-out of participation in the SAT and ACT pre-surveys by LEAs.
FERPA and IDEA
FERPA requires data protections for students’ personally identifiable information in education records. It limits what data can be disclosed and to whom, and determines when consent is required for disclosure. IDEA governs the rights of students with disabilities and requires broader student data protections than FERPA. Under FERPA, information from student records can be re-disclosed only with prior written parental consent or under a FERPA exception. Under IDEA Part B regulations, re-disclosures also cannot occur without written parental consent or an applicable exception.
Implications
The college admission exam technical assistance letter has a number of implications for stakeholders. Because the technical assistance and its implications are layered, additional analysis will likely emerge over the next few weeks. Here are some of our initial takeaways from the letter:
Contracts with Ed Tech Companies
Interestingly, the technical assistance says that “[c]ontracts between testing companies and SEA, LEAs, or schools for testing…should include provisions assuring that before [personally identifiable information] is disclosed nonconsensually, the testing companies (when acting on behalf of the SEA, LEA, or school) will comply with the privacy protections required by Federal law, specifically FERPA and IDEA.” It is unclear whether this requirement would also apply to contracts between non-testing ed tech companies and SEAs, LEAs, or schools, though it seems likely.
Specifically, the technical assistance “encourages SEAs and LEAs to consider the following when contracting with testing companies:
Ensure that the contract with the testing companies specifies the FERPA exception under which PII from students’ education records is to be disclosed to the testing company;
Include specific prohibitions in the contract governing unauthorized use of PII and redisclosure of PII from education records (including biographic or demographic information provided by the SEA or LEA and students’ test scores or test score ranges) without written consent of the parent or eligible student;
Include specific requirements on how the testing companies should safeguard student PII; and
Include any additional requirements that may be mandated by your State.”
This appears to be the first time that PTAC or USED has articulated these best practices.
PPRA: A Bigger Issue Moving Forward?
This is the most extensive technical assistance released on PPRA in almost forty years. It is possible that this indicates that USED has a heightened interest in providing guidance regarding PPRA, and that further guidance may be forthcoming. This is not only interesting because of the issues that PPRA’s survey restrictions raise, but also because PPRA has many little-known but significant restrictions and requirements:
LEAs must develop and adopt policies, in consultation with parents, regarding both surveys on restricted topics and all of the below issues;
These policies must include a policy on “[t]he collection, disclosure, or use of personal information collected from students for the purpose of marketing or for selling that information (or otherwise providing that information to others for that purpose), including arrangements to protect student privacy that are provided by the agency in the event of such collection, disclosure, or use.” However, this does not apply when the marketing or selling is “for the exclusive purpose of developing, evaluating, or providing educational products or services for, or to, students or educational institutions;”
PPRA also requires policies regarding student physical examinations or screenings administered by schools; and
Parents have the right to inspect any instructional material used as part of the educational curriculum for the student.
Parents must be annually notified about these policies, and also notified whenever there is a survey that includes questions on a restricted topic or when student personal information will be “collected from students for the purpose of marketing or for selling that information (or otherwise providing that information to others for that purpose).”
Many schools are unaware of the full scope of PPRA, and additional guidance and technical assistance around the law would likely be beneficial. In this technical assistance, PTAC recommends that LEAs:
Establish PPRA-compliant policies, developed in consultation with parents and eligible students (18 years or older), regarding the distribution and notification of pre-test surveys;
Establish a system that allows students and their families to review pre-test survey questions;
Explicitly communicate that all pre-test surveys are voluntary and optional to faculty, staff, students and their families;
Give students and families notice that pre-test surveys contain PPRA-protected topics and they have the option to opt-out under PPRA protections; and
Require prior written consent from parents or eligible students for the disclosure of PII from the student’s education record to a third party for college recruiting purposes.
Significant Guidance
The technical assistance issued by PTAC, designated as “significant guidance,” demonstrates that the stakes are high for companies offering college admission testing to students and for the schools that rely on them. Although financial penalties are rare, USED has other enforcement options when a school acts in a way that is inconsistent with the law, such as a five-year ban on an LEA or SEA providing data to an ed tech provider that violated FERPA. Highlighting to LEAs, SEAs, college admission test vendors, and other ed tech companies what is and is not consistent with FERPA, IDEA, and PPRA can not only increase compliance with those laws, but also ensure all stakeholders understand how to better protect student data.
FPF Testifies Before Congress on Promoting and Protecting Student Privacy
Washington, D.C. – Today, Future of Privacy Forum’s (FPF) Amelia Vance, Director of the Education Privacy Project, will deliver testimony in a hearing before the House Committee on Education and the Workforce, “Protecting Privacy, Promoting Data Security: Exploring How Schools and States Keep Data Safe.” In her prepared testimony, Vance will comment on how states, districts, and ed tech companies can work together to ensure student privacy.
Vance will discuss the value of technology and data when it is used and implemented properly, and discuss innovative practices that help schools, states, and companies protect student privacy. She will also address the challenges and opportunities that stakeholders face when supporting appropriate use of educational technologies while safeguarding privacy.
Also testifying at the hearing are David Couch, K-12 CIO and Associate Commissioner of the Kentucky Office of Education Technology; Catherine Lhamon, Attorney and Former Assistant Secretary for Civil Rights at the U.S. Department of Education; and Dr. Gary Lilly, Superintendent of Bristol Tennessee City Schools.
Vance previously spoke on student privacy at the December 2017 workshop on student privacy and ed tech, hosted by the FTC and the U.S. Department of Education.
FPF is a non-profit organization focused on consumer privacy issues. FPF primarily equips and convenes key stakeholders to find actionable solutions to the privacy concerns raised by the speed of technological development. FPF’s Education Privacy Project works to ensure student privacy while supporting technology use and innovation in education that can help students succeed. Among other initiatives, FPF maintains the education privacy resource center website, FERPA|Sherpa, and co-founded the Student Privacy Pledge.
A full version of Vance’s testimony is available here.
FPF Comments on CA Public Utilities Commission Autonomous Vehicle Passenger Service Proceeding
Last week, the Future of Privacy Forum filed written comments in response to the California Public Utilities Commission’s proposed decision authorizing pilot programs for passenger service in Autonomous Vehicles. The CPUC is a consumer protection agency that oversees, among other topics, provision of passenger service in the state. The proposed decision called for a number of criteria to be met by companies seeking to operate AV passenger service, including reporting of communications between passengers and remote operators of driverless AVs, as well as aggregated operations data. Opening comments by other parties called for even more data to be collected, including GPS information for pick-ups and drop-offs.
FPF filed comments focusing on three main topics: 1) Because the Commission cannot ensure that data will not be made public, it should minimize data collection and incorporate privacy safeguards; 2) Communication between passengers and remote operators could contain sensitive information, and the disclosure contemplated by the Commission could create serious privacy risks; and 3) Other opening comments’ requests for additional service data raise further significant privacy and security concerns. Some of these points echo our prior letter to the New York Taxi and Limousine Commission regarding the privacy risks of sensitive geolocation and other consumer data.