Talking to Kids About Privacy: Advice from a Panel of International Experts
Now more than ever, as kids spend much of their lives online to learn, explore, play, and connect, it is essential to ensure their knowledge and understanding of online safety and privacy keep pace. On May 13th, the Future of Privacy Forum and Common Sense assembled a panel of youth privacy experts from around the world for a webinar, “Talking to Kids about Privacy,” exploring both the importance of and approaches to talking to kids about privacy. Watch a recording of the webinar here.
The virtual discussion, moderated by FPF’s Amelia Vance and Jasmine Park, aimed to provide parents and educators with tools and resources to facilitate productive conversations with kids of all ages. The panelists were Rob Girling, Co-Founder of strategy and design firm Artefact, Sonia Livingstone, Professor of Social Psychology at the London School of Economics and Political Science (LSE), Kelly Mendoza, Vice President of Education Programs at Common Sense, Anna Morgan, Head of Legal and Deputy Commissioner of the Irish Data Protection Commission (DPC), and Daniel Solove, Professor of Law at George Washington University Law School and Founder of TeachPrivacy.
The first thing that parents and educators need to know? “Contrary to popular opinion, kids really care about their privacy in their personal lives, and especially now, in their digital lives,” shared panelist Sonia Livingstone. “When they understand how their data is being kept, shared, monetized and so forth, they are outraged.” To help inform youth, Livingstone curated an online toolkit with young people to answer frequently asked privacy questions that emerged from her research.
And a close second: their views about privacy are closely shaped by their environment. “How children understand privacy is in some ways colored by the part of the world they come from and the culture and ideas about family and ideas about institutions that they can trust, and especially how far the digital world has already become something they rely upon,” Livingstone added.
Kelly Mendoza encouraged audience members to start having conversations about privacy with kids at a young age, and to get beyond the common but overly simple advice not to share personal information online. Common Sense’s Digital Citizenship Curriculum provides free lesson plans, organized by grade and topic, that address timely issues and prepare students to take ownership of their digital lives.
In her remarks, she also emphasized the important role that schools play in educating parents about privacy. “It’s important that schools and educators and parents work together because really we’re finding that schools can play a really powerful role in educating parents,” Mendoza said. “Schools need to do a better job of communicating – what tools are they using? How are they rated and reviewed? What are the privacy risks? And why are they using this technology?” A useful starting point for schools and parents is Common Sense’s Family Engagement Resources Toolkit, which includes tips, activities, and other resources.
Several panelists emphasized the critical role schools play in educating students about privacy. To do so effectively, schools must engage and educate teachers so that they are informed and equipped to have meaningful conversations about privacy with their students.
Anna Morgan provided a model for engaging children in informing data protection policies through classroom-based lesson plans. Recognizing that the General Data Protection Regulation (GDPR) and data protection law are complex, the DPC gave teachers a quick-start guide offering background knowledge, enabling them to engage in discussions with children about their data protection rights and entitlements.
Privacy can be a difficult concept to explain, and there’s nothing quite like a creative demonstration to bring privacy concerns to life. One example: the DPC created a fictitious app to solicit children’s reactions to the use of their personal data. Through their consultation, Morgan shared that 60 percent of the children surveyed believed that their personal data should not be used to serve them with targeted advertising, finding it scary and creepy to have ads following them around. A full report from the consultation can be found here.
Daniel Solove also highlighted the need for educational systems to teach privacy. “Children today are growing up in a world where massive quantities of personal information are being gathered from them. They’re growing up in a world where they’re more under surveillance than any other generation. There’s more information about them online than any other generation. And the ability for them to put information online and get it out to the world is also unprecedented,” Solove noted. “So I think it’s very important that they learn about these things, and as a first step, they need to appreciate and understand the value of privacy and why it matters.”
One way for kids to learn about privacy is through storytelling. Solove recently authored a new children’s book about privacy titled, THE EYEMONGER, and shared his motivations for writing the book with the audience. “There really wasn’t anything out there that explained to children what privacy was, why we should care about it, or really any of the issues that are involved in this space, so that prompted me to try to do something about it.” He also compiled a list of resources to accompany the book and help educators and parents teach privacy to their children.
Building on the thread of creating outside-the-box interactive experiences to help kids understand privacy, Rob Girling shared with the audience a game called The Most Likely Machine, developed by Artefact Group to help preteens understand algorithms. Girling saw a need to teach algorithmic literacy given the impact on children’s lives, from determining college and job applications to search engine results. For Girling, “It’s just starting to introduce the idea that underneath algorithms are human biases and data that is often biased. That’s the key learning we want kids to take away.”
Each of the panelists shared a number of terrific resources and recommendations for parents and educators, which we have listed and linked to below, along with a few of our own.
The author thanks Hunter Dorwart for his contribution to this text.
The Cyberspace Administration of China (CAC) released a draft regulation on car privacy and data security on May 12, 2021. China has been very active in automated vehicle development and deployment, and last fall it also proposed a draft comprehensive privacy law, which is likely to be adopted by the end of this year.
The draft car privacy and data security regulation (“Several Provisions on the Management of Automobile Data Security”; hereinafter, “draft regulation”) is interesting for those tracking automated vehicle (AV) and privacy regulations around the world and is relevant beyond China – not only due to the size of the Chinese market and its potential impact on all actors in the “connected cars” space present there, but also because dedicated legislation for car privacy and data security is novel for most jurisdictions. In fact, the draft regulation raises several interesting privacy and data protection aspects worthy of further consideration, such as its strict rules on consent, privacy by design, and data localization requirements. The CAC is seeking public comment on the draft, and the deadline for comments is June 11, 2021.
The draft regulation complements other regulatory developments around connected and automated vehicles and data. For example, on April 29, 2021, the National Information Security Standardization Technical Committee (TC 260), which is jointly administered by the CAC and the Standardization Administration of China, published a draft Standard on Information Security Technology Security Requirements for Data Collected by Connected Vehicles. The Standard sets forth security requirements for data collection to ensure compliance with other laws and facilitate a safe environment for networked vehicles. Standards like this are an essential component of corporate governance in China and notably fill in compliance gaps left in the law.
The publication of the draft regulation and the draft standard indicates that the Chinese government is turning its attention to the data and security practices of the connected cars industry. Below we explain the key aspects of the draft regulation, summarize some of its noteworthy provisions, and conclude with key takeaways for everyone in the car ecosystem.
Broad scope of covered entities: from OEMs to online ride-hailing companies
The draft regulation aims to strengthen the protection of “personal information” and “important data,” regulate data processing related to cars, and maintain national security and public interests. The scope of application of this draft regulation is fairly broad, both in terms of who it applies to and the types of data it covers.
The draft regulation applies to “operators” that collect, analyze, store, transmit, query, utilize, delete, or provide overseas (activities collectively referred to as “processing”) personal information or important data during the design, production, sales, operation, maintenance, and management of cars “within the territory of the People’s Republic of China.”
“Operators” are entities that design or manufacture cars, or service institutions such as OEMs (original equipment manufacturers), component and software providers, dealers, maintenance organizations, online car-hailing companies, insurance companies, etc. (Note: The draft regulation includes “etc.,” here and throughout, which appears to mean that it is a non-exhaustive list.)
Covered data: Distinction among “personal information,” “important data,” and “sensitive personal information”
The draft regulation addresses three data types, with an emphasis on “personal information” and “important data,” which are defined terms under Article 3. A third type, “sensitive personal information,” is mentioned in Article 8 and in a separate press release document.
Personal information includes data from car owners, drivers, passengers, pedestrians, etc. (non-exhaustive list) and also includes information that can infer personal identity and describe personal behavior. This is a broad definition and is notable because it explicitly includes information about passengers and pedestrians. As the business models evolve and the ecosystem of players in the car space grows, it has become more important to consider individuals other than just the driver or registered user of the car. The draft regulation appears to use the words “users” and “personal information subjects” when referring to this group of individuals broadly and also uses “driver,” “owner,” and “passenger” throughout.
The second type of data covered is “important data,” which includes:
Data on the flow of people and vehicles in important sensitive areas such as military management zones, national defense science and industry units involving state secrets, and party and government agencies at or above the county level;
Surveying and mapping data higher than the accuracy of the publicly released maps of the state;
Operating data of the car charging network;
Data such as vehicle types and vehicle flow on the road;
External audio and video data including faces, voices, license plates, etc.;
Other data that may affect national security and public interests as specified by the State Cyberspace Administration and the relevant departments of the State Council.
The inclusion of this data type is notable because it is defined in addition to “sensitive personal information” and includes data about users and infrastructure (i.e., the car charging network). Article 11 prescribes that when handling important data, operators should report to the provincial cyberspace administration and relevant departments the type, scale, scope, storage location and retention period, the purposes for collection, whether it was shared with a third party, etc. in advance (presumably in advance of processing this type of data, but this is something that may need to be clarified).
The third type of data mentioned in the draft regulation is “sensitive personal information,” and this includes vehicle location, driver or passenger audio and video, and data that can be used to determine illegal driving. There are certain obligations for operators processing this type of data (Articles 8 and 16).
Article 8 prescribes that where “sensitive personal information” is collected or provided outside of the vehicle, operators must meet certain obligations:
Ensuring that it is for the purpose of directly serving the driver or passenger (e.g., enhancing driver safety, assisting driving, navigation, entertainment, etc.),
Informing the driver and passengers that this data is being collected through a display panel or voice in the car,
Ensuring that the driver consents and authorizes the collection each time they enter the car (the default is not to collect),
Allowing the driver to terminate data collection at any time,
Allowing the vehicle owner to view and make inquiries about the sensitive personal information collected, and
Enabling deletion of this data upon request by the driver (the operator shall delete it within two weeks).
The definitions of these three types of data mirror similar definitions in other Chinese laws or draft laws currently being considered for adoption, such as the Civil Code, the draft Personal Information Protection Law, and the Cybersecurity Law. Consistency across these laws indicates a harmonization of China’s emerging data governance regulatory model.
Obligations based on the Fair Information Practice Principles
Articles 4 – 10 include many of the fair information practice principles, such as purpose specification and data minimization in Article 4 and security safeguards in Article 5, as well as privacy by design (Articles 6(4), 6(5), and 9). A few notable provisions are worth discussing in more detail, organized under the following headings: local processing, transparency and notice, consent and user control, biometric data, annual data security management, and violations and penalties.
Local (“on device”) processing
Personal information and important data should be processed inside the vehicle, wherever possible (Article 6(1)). Where data processing outside of the car is necessary, operators should ensure the data has been anonymized wherever possible (Article 6(2)).
Transparency and Notice
When processing personal information, the operator is required to give notice of the types of data being collected and provide the contact information for the person responsible for processing user rights (Article 7). This notice can be provided through user manuals, onboard display panels, or other appropriate methods. The notice should include the purpose for collection, the moment that personal information is collected, how users can stop the collection, where and for how long data is stored, and how to delete data stored in the car and outside of the vehicle.
Regarding sensitive personal information (Article 8(3)), the operator is obliged to inform the driver and passengers that this data is being collected through a display panel or a voice prompt in the car. This provision does not include “user manuals” as an example of how to provide notice, which suggests that this data type warrants more active notice than ordinary personal information. This is notable because operators cannot rely on notice being given through a privacy notice placed on a website or in the car’s manual.
Consent and User Control, including a two-week deletion deadline
Article 9 requires operators to obtain consent to collect personal information, except where laws do not require consent. This provision notes that consent is often difficult to obtain (e.g., collecting audio and video of pedestrians outside the car). Because of this difficulty, data should only be collected when necessary and should be processed locally in the vehicle. Operators should also employ privacy by design measures, such as de-identification on devices.
Article 8(2) (requirements when collecting sensitive personal information) requires operators to obtain the driver’s consent and authorization each time the driver enters the car. Once the driver leaves the driver’s seat, that consent session has ended, and a new one must begin once the driver gets back into the seat. The driver must be able to stop the collection of this type of data at any time, be able to view and make inquiries about the data collected, and request the deletion of the data (the operator has two weeks to delete the data). It is worth noting that Article 8 includes six subsections, some of which appear to apply only to the driver or owner and not passengers or pedestrians.
These consent and user control requirements are quite notable and would have a non-trivial impact on the design of the car, the user experience, and the internal operations of the operator. Requiring consent and authorization each time the driver gets into the driver’s seat could degrade the user experience, much like the consent pop-ups that must be dismissed on every visit to a website before it can be read or used. Furthermore, stopping the collection of location data, video data, and other telematics data (if used to determine illegal driving) could present safety and functionality risks and cause the car not to operate as intended or safely. These are some of the areas where stakeholders are expected to submit comments during the public consultation.
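To make the operational implications concrete, the following is a minimal, hypothetical sketch of how an in-vehicle system might track the per-session, off-by-default consent that Article 8(2) appears to require. The class and method names are illustrative assumptions, not terms drawn from the draft regulation.

```python
# Hypothetical sketch of per-session consent tracking: consent is valid only
# while the driver occupies the seat, and sensitive data collection is off by
# default. Names are illustrative, not taken from the draft regulation.

from dataclasses import dataclass

@dataclass
class ConsentSession:
    active: bool = False       # driver currently in the seat
    consented: bool = False    # consent given for *this* session only

    def driver_enters_seat(self) -> None:
        self.active = True
        self.consented = False           # default is not to collect

    def grant_consent(self) -> None:
        if self.active:
            self.consented = True        # e.g., confirmed via display panel prompt

    def driver_leaves_seat(self) -> None:
        self.active = False
        self.consented = False           # session (and its consent) ends

    def may_collect_sensitive_data(self) -> bool:
        return self.active and self.consented

session = ConsentSession()
session.driver_enters_seat()
print(session.may_collect_sensitive_data())   # False until consent is granted
session.grant_consent()
print(session.may_collect_sensitive_data())   # True for this session only
session.driver_leaves_seat()
print(session.may_collect_sensitive_data())   # False again after the driver exits
```

Even in this toy form, the sketch suggests why consent state, in-car prompts, and collection controls would need to be built into vehicle software and interfaces rather than handled through a privacy policy alone.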
Biometric data
Biometric data appears throughout the draft regulation, implicitly or explicitly, in the definitions of personal information, important data, and sensitive personal information. It is specifically addressed in Article 10, which concerns the biometric data of drivers. Biometric data is an increasingly common data type collected by cars and deserves special attention. Article 10 would require that the biometric data of the driver (e.g., fingerprints, voiceprints, faces, heart rhythms, etc.) only be collected for the convenience of the user or to increase the security of the vehicle. Operators should also provide alternatives to biometrics.
Data localization
Articles 12-15 and 18 concern data localization. Both personal information and important data should be stored within China, but if it is necessary to store elsewhere, the operator must complete an “outbound security assessment” through the State Cyberspace Administration, and the operator is permitted to send only the data specified in that assessment overseas. The operator is also responsible for overseeing the overseas recipient’s use of the data to ensure appropriate security and for handling all user complaints.
Annual data security management status
Article 17 places additional obligations on operators to report their annual data security management status to relevant authorities before December 15 of each year when:
They process personal information of more than 100,000 users, or
They process important data.
Given that this draft regulation applies to passengers and pedestrians in addition to drivers, it would not take long for the threshold of 100,000 users to be met, especially for operators who manage a fleet of cars for rental or ride-hail. Additionally, since the definitions of personal information and important data are so broad, it is likely that many operators would trigger this reporting obligation. The obligations include recording the contact information of the person responsible for data security and handling user rights; recording relevant information about the scale and scope of data processing; recording with whom data is shared domestically; and other security conditions to be specified. If data is transferred overseas, there are additional obligations (Article 18).
Violations and Penalties
Violation of the regulations would result in punishment in accordance with the “Network Security Law of the People’s Republic of China” and other laws and regulations. Operators may also be held criminally responsible.
Conclusion
China’s draft car privacy and security regulation provides relevant information for policymakers and others thinking carefully about privacy and data protection regarding cars. The draft regulation’s scope is very broad and includes many players in the mobility ecosystem beyond OEMs and suppliers (e.g., online car-hailing companies and insurance companies).
With regards to user rights, the draft regulation recognizes that other individuals, in addition to the driver, will have their personal information processed and provides data protection and user rights to these individuals (e.g., passengers and pedestrians). The draft regulation would apply to three broad categories of data (personal information, important data, and sensitive personal information).
In privacy and data protection laws from the EU to the US, we have continued to see different obligations arise depending on the type or sensitivity of data and how data is used. This underscores the need for organizations to have a complete data map; indeed, it is crucial that all operators in the connected and automated car ecosystem have a sound understanding of what data is being collected from which person and where that data is flowing.
The draft regulation also highlights the importance of transparency and notice, as well as the challenges of consent and user control. It is a challenge to appropriately notify drivers, passengers, and pedestrians about all of the data types being collected by a vehicle.
Privacy and data protection laws will have a direct impact on the design, user experience, and even the enjoyment and safety of cars. It is crucial that all stakeholders are given the opportunity to provide feedback in the drafting of privacy and data protection laws that regulate data flows in the car ecosystem and that privacy professionals, engineers, and designers become much more comfortable working together to operationalize these rules.
Automated Decision-Making Systems: Considerations for State Policymakers
In legislatures across the United States, state lawmakers are introducing proposals to govern the uses of automated decision-making systems (ADS) in record numbers. In contrast to comprehensive privacy bills that would regulate the collection and use of personal information, the ADS bills introduced in 2021 specifically seek to address increasing concerns about racial bias or unfair outcomes in automated decisions that impact consumers, including housing, insurance, financial, or governmental decisions.
So far, ADS bills have taken a range of approaches, most of them focused on government: restrictions on government use and procurement of ADS (Maryland HB 1323), inventories of government ADS currently in use (Vermont H 0236), impact assessments for procurement (California AB-13), external audits (New York A6042), or outright prohibitions on the procurement of certain types of unfair ADS (Washington SB 5116). A handful of others would regulate commercial actors, including in insurance decisions (Colorado SB 169), consumer finance (New Jersey S1943), or the use of automated decision-making in employment or hiring decisions (Illinois HB 0053, New York A7244).
At a high level, these bills share similar characteristics: each proposes general definitions and general solutions that cover specific, complex tools used in areas as varied as traffic forecasting and employment screening. But the bills are not consistent in their requirements and obligations. For example, among the bills that would require impact assessments, some require them universally for all ADS in use by government agencies, while others would require them only for specifically risky uses of ADS.
As states evaluate possible regulatory approaches, lawmakers should: (1) avoid a “one size fits all” approach to defining automated decision-making by clearly defining the particular systems of concern; (2) consult with experts in governmental, evidence-based policymaking; (3) ensure that impact assessments and disclosures of risk meet the needs of their intended audiences; (4) look to existing law and guidance from other state, federal, and international jurisdictions; and (5) ensure appropriate timelines for technical and legal compliance, including time for building capacity and attracting qualified experts.
1. Avoid “one size fits all” solutions by clearly identifying the automated decision-making systems of concern.
An important first step in the regulation of automated decision-making systems (ADS) is to identify the scope of systems that are of concern. Many lawmakers have indicated that they are seeking to address automated decisions such as those that use consumer data to create “risk scores,” creditworthiness profiles, or other kinds of profiles that materially impact our lives and involve the potential for systematic bias against categories of people. But the wealth of possible forms of ADS and the many settings for their use can make defining these systems in legislation very challenging.
Automated systems are present in almost all walks of modern life, from managing wastewater treatment facilities to performing basic tasks such as operating traffic signals. ADS can automate the processing of personal data, administrative data, or myriad other forms of data, using tools ranging in complexity from simple spreadsheet formulas to advanced statistical modeling, rules-based artificial intelligence, or machine learning. In an effort to navigate this complexity, it can be tempting to draft very general definitions of ADS. However, such definitions risk being overbroad and capturing systems that are not truly of concern, i.e., those that do not impact people or carry out significant decision-making.
For example, a definition such as “a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision-making” (New Jersey S1943) would likely include a wide range of traditional statistical data processing, such as estimating the average number of vehicles per hour on a highway to facilitate automatic lane closures in intelligent traffic systems. This would impose a significant new requirement to conduct complex impact assessments for many of the tools behind established operational processes. In contrast, California’s AB-13 takes a more tailored approach, aiming to regulate “high-risk application[s]” of algorithms that involve “a score, classification, recommendation, or other simplified output” and that support or replace human decision-making in situations that “materially impact a person” (12115(a)&(b)).
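To illustrate how far such a general definition could reach, here is a minimal, hypothetical sketch of a routine traffic calculation that arguably “facilitates human decision-making” yet involves no personal data or profiling. The function names, threshold, and sample counts are illustrative assumptions, not drawn from any bill or deployed system.

```python
# Hypothetical sketch: a simple traffic-volume calculation that would arguably
# qualify as an "automated decision system" under a very broad definition,
# even though it involves no personal data or profiling.

from statistics import mean

def average_vehicles_per_hour(hourly_counts: list) -> float:
    """Return the average hourly vehicle count from roadside sensor readings."""
    return mean(hourly_counts)

def recommend_lane_closure(hourly_counts: list, threshold: float = 500.0) -> bool:
    """Facilitate a human operator's decision: flag low-traffic windows in which
    an automatic lane closure could be scheduled."""
    return average_vehicles_per_hour(hourly_counts) < threshold

if __name__ == "__main__":
    overnight_counts = [120, 95, 80, 150, 210]       # illustrative sensor data
    print(recommend_lane_closure(overnight_counts))  # True -> suggest a closure window
```

Under language like New Jersey S1943’s, this script “facilitates human decision-making,” while under a tailored standard like California AB-13’s it would not be a “high-risk application” because it does not materially impact a person.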
In general, compliance-heavy requirements or prohibitions on certain practices may be appropriate only for some high-risk systems. The same requirements would be overly prescriptive or infeasible for systems powering ordinary, operational decision-making. Successfully distinguishing between high-risk use cases and those without significant, personal impact will be crucial to crafting tailored legislation that addresses the targeted, unfair outcomes without overburdening other applications.
Lawmakers should ask questions such as:
Who owns or is responsible for the ADS? Is the system being used by government decision-makers, commercial actors, or both (private vendors contracted by government agencies)? The relevant “owner” of a system may determine the right balance of transparency, accountability, and access to underlying data necessary to accomplish the legislative goals.
What kind of data is involved? Many systems use a wide range of data that may or may not include personal information (information related to reasonably identifiable individuals), and may or may not include “sensitive data” (personal data that reveals information about race, religion, health conditions, or other highly personal information). In some cases, non-sensitive data can act as a “proxy” for sensitive information (such as the use of zip code as a proxy for race). Data may also be obtained from sources of varying quality, accuracy, or ethical collection, for example: public records, government collection, regulated commercial sectors (banks or credit agencies), commercial data brokers, or other commercial sources.
Who is impacted by the decision-making? Does the decision-making impact individuals, groups of individuals, or neither? Is there a possibility for disparate impact in who is affected, i.e. that certain races, genders, income levels, or other categories of people will be impacted differently or worse than others?
Is the decision-making legally significant? In most cases, our tolerance for automated decision-making depends on the decision being made. Some decisions are commonplace or operational, such as automated electrical grid management. Other decisions, such as those involving financial opportunities, housing, lending, educational opportunities, or employment, are so relevant to our individual lives and autonomy that the use of automated systems in those contexts demands greater transparency, human involvement, or even auditing. Still other decisions may be in a “grey area”: for example, automated delivery of online advertisements is common, but questions about algorithmic bias in ad quality or who sees certain types of ads (e.g. ads for particular jobs) are leading to increasing scrutiny.
Does the system assist human decision-making or replace it? Some systems replace human decision-making entirely, such as when a system generates an automated approval or denial of a financial opportunity that occurs without human review. Other systems assist human decision-makers by generating outputs such as scores or classifications that allow decision-makers to complete tasks, such as grading a test or diagnosing a health condition.
When do “meaningful changes” occur? Many legislative efforts seek to trigger requirements for new or updated impact assessments when ADS change, or “meaningfully change.” For such requirements, lawmakers should establish clear criteria for what constitutes a “meaningful change.” For example, machine learning systems that adapt based upon a stream of sensor or customer data change constantly, whether by changing the weights attached to features or by eliminating features. Whether adaptations made as a consequence of typical machine learning operations constitute meaningful changes is an important question best answered in ways specific to each learning and adapting system, as the sketch below illustrates. The velocity and variety of changes to ADS driven by machine learning may require other forms of ongoing assessment to identify abnormalities or potential harms as they arise.
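The following minimal, hypothetical sketch shows why: an online-learning model updates its weights on every incoming batch, so some change occurs constantly even when the system’s behavior is essentially stable. The model, data, and learning rate are illustrative assumptions, not a description of any specific ADS.

```python
# Hypothetical sketch: an online model whose weights shift with every data batch.
# Each update is a "change" to the system, but most are routine; a rule that
# triggers a new impact assessment on any change would trigger constantly.

import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)          # simple linear model over 3 features
LEARNING_RATE = 0.05

def partial_update(batch_x: np.ndarray, batch_y: np.ndarray) -> float:
    """Apply one stochastic-gradient step and return how far the weights moved."""
    global weights
    preds = 1 / (1 + np.exp(-batch_x @ weights))           # logistic predictions
    gradient = batch_x.T @ (preds - batch_y) / len(batch_y)
    old = weights.copy()
    weights = weights - LEARNING_RATE * gradient
    return float(np.linalg.norm(weights - old))            # magnitude of this change

# Simulate a stream of small batches (e.g., new sensor or customer data).
for step in range(5):
    x = rng.normal(size=(32, 3))
    y = (x[:, 0] > 0).astype(float)
    drift = partial_update(x, y)
    print(f"batch {step}: weight change = {drift:.4f}")    # nonzero on every batch
```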
These questions can help guide legislative definitions and scope. A “one size fits all” solution not only risks creating burdensome requirements in situations where they are not needed, but is also less likely to ensure stronger requirements in situations where they are needed — leaving potentially biased algorithms to operate without sufficient review or standards to address resulting outcomes that are biased or unfair. An appropriate definition is a critical first step for effective regulation.
2. Consult with experts in governmental, evidence-based policymaking.
Evidence-based policymaking legislation, popular in the late 1990s and early 2000s, required states to construct systems to eradicate human bias by employing data-driven practices for key areas of state decision-making, such as criminal justice, student achievement predictions, and even land use planning. For example, as defined by the National Institute of Corrections, the vision for implementing evidence-based practice in community corrections is “to build learning organizations that reduce recidivism through systematic integration of evidence-based principles in collaboration with community and justice partners” (see resources at the Judicial Council of California 2021). The areas chosen for evidence-based policymaking are now the same areas generating the greatest concern about ADS, which serve as the mechanisms for ensuring the use of evidence and the elimination of subjectivity. Examining the goals envisioned in evidence-based policymaking legislation may clarify whether ADS are appropriate tools for satisfying those goals.
In addition to consulting the policies encouraging evidence-based policymaking in order to identify the goals for automated decision-making systems (ADS), the evidence-based research findings reviewed to support this legislation can also direct legislators to contextually relevant, expert sources of data that should be incorporated into ADS or into the evaluation of ADS. Likewise, legislators should reflect on the challenges to implementing effective evidence-based decision-making, such as unclear definitions, poor data quality, difficulties with statistical modeling, and a lack of interoperability among public data sources, as these challenges are similar to those complicating the use of ADS.
3. Ensure that impact assessments and disclosures of risk meet the needs of their intended audiences.
Most ADS legislative efforts aim to increase transparency or accountability through various forms of mandated notices, disclosures, data protection impact assessments, or other risk assessments and mitigation strategies. These requirements serve multiple, important goals, including helping regulators understand data processing, and increasing internal accountability through greater process documentation. In addition, public disclosures of risk assessments benefit a wide range of stakeholders, including: the public, consumers, businesses, regulators, watchdogs, technologists, and academic researchers.
Given the needs of different audiences and users of such information, lawmakers should ensure that impact assessments and mandated disclosures are leveraged effectively to support the goals of the legislation. For example, where legislators intend to improve equity of outcomes between groups, they should include legislative support for tools to improve communication to these groups and to support incorporation of these groups into technical communities. Where sponsors of ADS bills intend to increase public awareness of automated decision-making in particular contexts, legislation should require and fund consumer education that is easy to understand, available in multiple languages, and accessible to broad audiences. In contrast, if the goal is to increase regulator accountability and technical enforcement, legislation might mandate that more detailed or technical disclosures be provided non-publicly or upon request to government agencies.
The National Institute of Standards and Technology (NIST) has offered recent guidance on explainability in artificial intelligence that might serve as a helpful model for ensuring that impact assessments are useful for the multiple audiences they may serve. The NIST draft guidelines suggest four principles of explainability for audience-sensitive, purpose-driven ADS assessment tools: (1) systems offer accompanying evidence or reason(s) for all outputs; (2) systems provide explanations that are understandable to individual users; (3) the explanation correctly reflects the system’s process for generating the output; and (4) the system only operates under conditions for which it was designed or when it reaches sufficient confidence in its output (p. 2). These four principles shape the types of explanations needed to ensure confidence in algorithmic or automated decision-making systems, such as explanations for user benefit, for social acceptance, for regulatory and compliance purposes, for system development, and for owner benefit (p. 4-5).
Similarly, the EU Guidelines on Automated Individual Decision-Making and Profiling (adopted by the Article 29 Working Party and endorsed by the European Data Protection Board) provide recommendations for complying with the GDPR’s requirement that individual users be given “meaningful information about the logic involved.” Rather than requiring a complex explanation or exposure of the algorithmic code, the guidelines explain that a controller should find simple ways to tell the data subject the rationale behind, or the criteria relied upon to reach, a decision. This may include which characteristics are considered in making a decision, the source of the information, and its relevance. The explanation should not be overly technical, but it should be sufficiently comprehensive for a consumer to understand the reason for the decision.
Regardless of the audience, mandated disclosures should be used cautiously: especially when made public, such disclosures can create risks such as opportunities for data breaches, exfiltration of intellectual property (IP), or even attacks on the algorithmic system that could identify individuals or cause it to behave in unintended ways.
4. Look to existing law and guidance from other state, federal, and international jurisdictions.
Although US lawmakers have specific goals, needs, and concerns driving legislation in their jurisdictions, there are clear lessons to be learned from other regimes with respect to automated decision-making. Most significantly, there has been a growing, active wave of legal and technical guidance in the European Union in recent years regarding profiling and automated decision-making, following the passage of the GDPR. Lawmakers may also seek to ensure interoperability with the newly passed California Privacy Rights Act (CPRA) or Virginia Consumer Data Protection Act (VA-CDPA), both of which create requirements that impact automated decision-making, including profiling. Finally, the Federal Trade Commission enforces a number of laws that could be harnessed to address concerns about biased or unfair decision-making. Of note, Singapore is also a leader in this space, launching their Model AI Governance Framework in 2019. It is useful to understand the advantages or limitations of each model and to recognize the practical challenges of adapting systems for each jurisdiction.
General Data Protection Regulation (GDPR)
The EU General Data Protection Regulation (GDPR) broadly regulates public and private collection of personal information, including a requirement that all data processing be fair (Art. 5(1)(a)). The GDPR also creates heightened safeguards specifically for high-risk automated processing that impacts individuals, especially with respect to decisions that produce legal, or other significant, effects concerning individuals. These safeguards include organizational responsibilities (data protection impact assessments) and individual empowerment provisions (disclosures, and the right not to be subject to certain kinds of decisions based solely on automated processing).
Organizational Responsibilities. Data protection impact assessments (DPIAs), required under the GDPR for “high risk” processing activities, must include a systematic description of the envisaged processing operations and the purposes of the processing, an assessment of the necessity and proportionality of the processing operations in relation to the purposes, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks, including safeguards, security measures, and mechanisms to ensure the protection of personal data. Recital 75 of the GDPR provides detail about the nature of the data processing risks that the Art. 35 DPIA requirements are intended to cover. In addition, the GDPR requires all automated processing to incorporate technical and organizational measures implementing data protection by design principles (Art. 25).
Individual Control. In addition to providing organizational responsibilities such as data protection impact assessments (DPIAs), the GDPR also requires controllers to provide data subjects with information relating to their automated processing activities (Art. 13 & 14). In particular, controllers must disclose the existence of automated decision-making, including profiling, meaningful information about the logic involved, and the significance and envisaged consequences of processing for the data subject. These disclosures are required when personal data is collected from a data subject, and also when personal data is not obtained from a data subject. In addition, the GDPR creates the right for an individual not to be subject to decisions based solely on automated processing which produce legal, or similarly significant, effects concerning an individual (Art. 22). Suitable measures to safeguard the data subject’s rights, freedoms, and legitimate interests include the rights for an individual to: (1) obtain human intervention on the part of the controller (human in the loop), (2) express their point of view, and (3) contest a decision.
California Privacy Rights Act (CPRA)
The California Privacy Rights Act (CPRA), passed via Ballot Initiative in 2020, expands on the California Consumer Privacy Act (CCPA)’s requirements that businesses comply with consumer requests to access, delete, and opt-out of the sale of consumer data.
While the CPRA does not create any direct consumer rights or organizational responsibilities with respect to automated decision-making, its consumer access rights include access to information about “inferences drawn . . . to create a profile” (Sec. 1798.140(v)(1)(K)) and, most likely, information about the use of the consumer’s data for automated decision-making.
Notably, the CPRA added a new definition of “profiling” to the CCPA, while authorizing the new California oversight agency to engage in rulemaking. In alignment with the GDPR, the CPRA defines “profiling” as “any form of automated processing of personal information . . . to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements” (1798.140(z)).
The CPRA authorizes the new California Privacy Protection Agency to issue regulations governing automated decision-making, including “governing access and opt‐out rights with respect to businesses’ use of [ADS], including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.” (1798.185(a)(16)). Notably, this language lacks the GDPR’s “legal or similarly significant” caveat, meaning that the CPRA requirements around access and opt-outs may extend to processing activities such as targeted advertising based on profiling.
Virginia Consumer Data Protection Act (VA-CDPA)
The Virginia Consumer Data Protection Act (VA-CDPA), which passed in 2021 in Virginia and will come into effect in 2023, takes an approach towards automated decision-making inspired by both the GDPR and CPRA.
First, its definition of “profiling” aligns with that of the GDPR and CPRA (§ 59.1-571). Second, it imposes a responsibility upon data controllers to conduct data protection impact assessments (DPIAs) for high risk profiling activities (§ 59.1-576). Third, it creates a right for individuals to opt out of having their personal data processed for the purpose of profiling in the furtherance of decisions that produce legal or similarly significant effects concerning the consumer (§ 59.1-573(5)).
Organizational Responsibilities. The VA-CDPA requires data controllers to conduct and document data protection impact assessments (DPIAs) for “profiling” that creates a “reasonably foreseeable risk of (i) unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (ii) financial, physical, or reputational injury to consumers; (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person; or (iv) other substantial injury to consumers.” These DPIAs are required to identify and weigh the benefits against the risks that may flow from the processing, as mitigated by safeguards employed to reduce such risks. They are not intended to be made public or provided to consumers. Instead, these confidential documents must be made available to the State Attorney General upon request, pursuant to an investigative civil demand.
Individual Control. The VA-CDPA grants consumers the right to submit an authenticated request to opt-out of the processing of personal data for purposes of profiling “in the furtherance of decisions that produce legal or similarly significant effects concerning the consumer,” which is defined as “a decision made by the controller that results in the provision or denial by the controller of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, health care services, or access to basic necessities, such as food and water.”
The FTC Act and broadly applicable consumer protection laws
Finally, a range of federal consumer protection and sectoral laws already apply to many businesses’ uses of automated decision-making systems. The Federal Trade Commission (FTC) enforces long-standing consumer protection laws prohibiting “unfair” and “deceptive” trade practices, including the FTC Act. As recently as April 2021, the FTC warned businesses of the potential for enforcement actions for biased and unfair outcomes in AI, specifically noting that the “sale or use of – for example – racially biased algorithms” would violate Section 5 of the FTC Act.
The FTC also noted its decades of experience enforcing other federal laws that are applicable to certain uses of AI and automated decisions, including the Fair Credit Reporting Act (if an algorithm is used to deny people employment, housing, credit, insurance, or other benefits), and the Equal Credit Opportunity Act (making it “illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance”).
5. Ensure appropriate timelines for technical and legal compliance, including building capacity and attracting qualified experts.
In general, timelines for government agencies and companies to comply with the law should be appropriate to the complexity of the systems that will be needed to review for impact. Many government offices may not be aware that the systems they use every day to improve throughput, efficiency, and effective program monitoring may constitute “automated decision-making.” For example, organizations using Customer Relations Management (CRM) software from large vendors may be using predictive and profiling systems built into that software. Also, governmental offices suffer from siloed procurement and development strategies and may have built or purchased overlapping ADS to serve specific, sometimes narrow, needs.
Lack of government funding, modernization, or resources to address the complexity of the systems themselves, and the lack of prior requirements for tracking automated systems in contracts or procurement decisions, mean that many agencies will not readily have access to technical information on all systems in use. Automated decision-making systems have been shown to suffer from technical debt and opaque, incomplete technical documentation, or to depend on smaller automated systems that can only be discovered through careful review of source code and complex information architectures.
Challenges such as these were highlighted during 2020 as a result of the COVID-19 pandemic, which prompted millions to pursue temporary unemployment benefits. When applications for unemployment benefits surged, some state unemployment agencies discovered that their programs were written in COBOL, an infrequently used programming language. Many resource-strapped agencies were using stop-gap code, intended for temporary use, to translate COBOL into more contemporary programming languages. As a result, many agencies lacked the programming experts and capacity to efficiently process the influx of claims. Regulators should ensure that offices have the time, personnel, and funding to undertake the digital archaeology necessary to reveal the many layers of ADS used today.
Finally, lawmakers should not overlook the challenges of identifying and attracting qualified technical and legal experts. For example, many legislative efforts envision a new or expanded government oversight office with the responsibility to review automated impact assessments. Not only will the personnel needed for these offices need to be able to meaningfully interpret algorithmic impact assessments, they will need to do so in an environment of high sensitivity, publicity, and technological change. As observed in many state and federal bills calling for STEM and AI workforce development, the talent pipeline is limited and legislatures should address the challenges of attracting appropriate talent as a key component of these bills. Likewise, identifying appropriate expectations of performance, including ethical performance, for ADS review staff will take time, resources, and collaboration with new actors, such as the National Society of Professional Engineers, whose code of conduct governs many working in fields responsible for designing or using ADS.
What’s Next for Automated Decision System Regulation?
States are continuing to take up the challenge of regulating these complex and pervasive systems. To ensure that these proposals achieve their intended goals, legislators must address the ongoing issues of definition, scope, audience, timelines and resources, and mitigating unintended consequences. More broadly, legislation should help motivate more challenging public conversations about evaluating the benefits and risks of using ADS as well as the social and community goals for regulating these systems.
At the highest level, legislatures should bear in mind that ADS are engineered systems or products that are subject to product regulations and ethical standards for those building products. In addition to existing laws and guidance, legislators can consult the norms of engineering ethics, such as the NSPE’s code of ethics, which requires that engineers ensure their products are designed so as to protect as paramount the safety, health and welfare of the public. Stakeholder engagement, including with consumers, technologists, and the academic community, is imperative to ensuring that legislation is effective.
FPF Ethical Data Use Committee will Support Research Relying on Private Sector Data
FPF has launched an independent ethical review committee to provide oversight for research projects that rely upon the sharing of corporate data with researchers. Whether researchers are studying the impact of platforms on society, supporting evidence-based policymaking, or understanding issues from COVID to climate change, personal data held by companies is increasingly essential to advancing scientific knowledge.
Companies want to be able to cooperate with researchers to use data and machine learning tools to drive innovation and investment, while ensuring compliance with data protection rules and ethical guidelines. To accomplish this, some companies are ramping up their internal ethical knowledge base and staff. However, reviewing high-risk, high-reward analytics projects in-house can be expensive, complex, and may lead to accusations of favoritism or ethics-washing. Traditional academic IRBs may consider the corporate data previously collected for business uses to be out of scope of their review, creating a gap for independent expert ethical review.
Many of the projects that seek to expand human knowledge rely on insights derived from combinations of data and use of machine learning or other advanced data analysis techniques. Sharing data for research drives innovation but it may also create novel risks that must be responsibly considered.
The FPF Ethical Data Use Committee (EDUC) provides companies and their research partners with ethics review as a service. The EDUC will provide an independent expert review of proposed research data uses to help companies limit the risks of unintended outcomes or data-based discrimination. The committee will also help researchers ensure ethical alignment with their uses of secondary data. As part of the review, the committee will provide specific recommendations for companies and researchers to implement that could mitigate the identified risks of individual, group, or social harms. These reviews are particularly useful for many uses of data, including machine learning-based research, models, or systems.
The Committee – designed and developed with the generous support of Schmidt Futures and building on previous FPF work funded by the Alfred P. Sloan Foundation and the National Science Foundation – will include experts from a range of disciplines, including academic researchers, ethicists, technologists, privacy professionals, lawyers, and others. They will complete training on data protection and privacy, AI and analytics, applied ethics, and other topics in addition to their own expertise, to serve terms on the Committee. Technical specialists will also be tapped for guidance on specific topic areas as required.
At this time, the Ethical Data Use Committee is preparing for final user-preference pilot testing. We are soliciting partners who aspire to be the first to use this system under cost conditions that will not be available once the review committee becomes fully operational. Companies and researchers participating in this final testing phase can do so confidentially and at no cost, provided they give feedback on the process.
If you have a project that you think should be reviewed by the Ethical Data Use Committee or if you would like to recommend yourself or someone else as a member for the inaugural review term, please contact Dr. Sara Jordan at [email protected].
FPF Welcomes New Members to the Youth & Education Privacy Team
We are thrilled to announce two new members of FPF’s Youth & Education Privacy team. The new staff – Joanna Grama and Jim Siegl – will help expand FPF’s technical assistance and training, resource creation and distribution, and state and federal legislative tracking.
You can read more about Joanna and Jim below. Please join us in welcoming them to the team!
Joanna Grama is a Senior Fellow with the Future of Privacy Forum’s Youth and Education team. Joanna will be assisting with various Youth and Education team projects, including the Train-the-Trainer program for higher education.
Joanna has more than 20 years of experience with a strong focus in law, higher education, data privacy, and information security. A former member of the U.S. Department of Homeland Security’s Data Privacy and Integrity Advisory Committee, Joanna is a frequent author and regular speaker on privacy and information security topics. The third edition of her textbook, LEGAL AND PRIVACY ISSUES IN INFORMATION SECURITY, was published in late 2020.
An associate vice president at Vantage Technology Consulting Group, Joanna is also a board member and vice president for the Central Indiana chapter of the Information Systems Audit and Control Association (ISACA); and a member of the International Association for Privacy Professionals (IAPP), the American Bar Association, Section of Science and Technology Law (Information Security Committee), and the Indiana State Bar Association (Written Publications Committee). She has earned the CISSP, CIPT, CRISC, and GSTRT certifications.
Joanna was formerly the Director of Cybersecurity and IT Governance, Risk and Compliance programs at EDUCAUSE. Joanna graduated from the University of Illinois College of Law with honors. Her undergraduate degree is from the University of Minnesota-Twin Cities.
“I have spent my career looking at technology use in higher education through a lens that includes law, policy, information security, and privacy. Joining FPF, and the Youth and Education Privacy team in particular, is a “bucket list” opportunity for me. I am excited to contribute thought leadership around student data privacy issues during a time of great technological change.”
Jim Siegl
Jim Siegl, CIPT, is a Senior Technologist with the Youth & Education Privacy team. For nearly two decades prior to joining FPF, Jim was a Technology Architect for the Fairfax County Public School District with a focus on privacy, security, identity management, interoperability, and learning management systems. He was a co-author of the CoSN Privacy Toolkit and the Trusted Learning Environment (TLE) seal program and holds a Master of Science in the Management of Information Technology from the University of Virginia.
“I am excited about joining FPF’s Youth & Education Privacy team during such a unique moment in time for student privacy. I’m looking forward to being a resource to stakeholders as they navigate new and existing student privacy concerns.”
Interested in student privacy? Subscribe to our monthly education privacy newsletter here. Want more info? Check out Student Privacy Compass, the education privacy resource center website.
5 Highlights from FPF’s “AI Out Loud” Expert Panel
On Wednesday, April 14th, FPF hosted an expert panel discussion on “AI Out Loud: Representation in Data for Voice-Activated Devices, Assistants.” FPF’s Senior Counsel and Director of AI and Ethics, Brenda Leong, moderated the panel featuring Anne Toth, Director of Alexa Trust, Amazon; Irina Raicu, Internet Ethics Program Director, Markkula Center for Applied Ethics, Santa Clara University; and Susan Gonzales, CEO, AIandYou.
The panel discussed voice-activated systems in homes, on mobile devices, and in cars and other commercial settings, considering how design choices, data collection practices, and ethics evaluations can affect bias, fairness, and accessibility. This technology offers many quality-of-life benefits: accessibility for young, aging, or disabled populations; convenience; and interactivity across devices and services. But it also raises specific challenges, including privacy risks, the need for responsible data management frameworks, legal compliance, and questions of equity and fairness.
Here are 5 key highlights from “AI Out Loud”:
Irina Raicu pointed out the need for improvements to design and development processes to ensure inclusiveness, equity, accessibility, and safety for users of these systems. She recommended involving all stakeholders so they can share how these technologies directly affect them, and urged caution on new applications of these systems, such as emotion detection or medical diagnosis, until the supporting research is strong enough to justify such uses.
Susan Gonzales pointed out that the technology behind these systems still faces significant accuracy challenges. A Stanford study found some error rates almost twice as high for Black speakers as for white speakers. In general, word error rates, the most common metric for evaluating these systems, show lower accuracy for speakers with strong accents or heavy dialects, those speaking a second language, and, in many cases, across age and gender.
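To make the metric concrete: word error rate is typically computed as the number of word substitutions, deletions, and insertions needed to turn a system’s transcript into the reference transcript, divided by the number of words in the reference. The sketch below is a minimal illustration in Python; the function name and sample transcripts are hypothetical and are not drawn from any panelist’s system or study.

```python
# Minimal sketch of the word error rate (WER) metric mentioned above:
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed with a standard edit-distance dynamic program.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Two hypothetical transcriptions of the same spoken request:
print(word_error_rate("play my morning playlist", "play my morning playlist"))  # 0.0
print(word_error_rate("play my morning playlist", "play morning play list"))    # 0.75
```

The same reference sentence can thus score very differently depending on whose speech the recognizer handles well, which is why aggregate accuracy figures can mask large gaps across demographic groups.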
The potential harms caused by inaccuracies vary with context and use case. Poor song recommendations or inaccurate recipe ingredients are relatively low impact, but mistakes for users asking about medication, or relying on voice assistants to access personal accounts and services, can carry far greater repercussions. Those most dependent on these systems may also be those most at risk of poor results. Ethical standards demand that reliability be sufficiently high for all users.
Anne Toth pointed to significant advances in accuracy and representation in recent years, as more people engage with these devices in a broader variety of contexts. She confirmed Amazon’s commitment to continuous improvement based on the larger and more diverse amounts of voice data now available, while also prioritizing personal privacy and users’ access to and control over their data.
To ensure fairness, inclusiveness, and accessibility in designing these technologies, designers and developers must address diversity at all stages from inception to launch. Companies should collaborate with advocacy groups, civil society, and academia to seek outcomes that provide equitable services to all potential users.
U.S. Department of Education Opens Investigation into Pasco County Data-Sharing Practices
On Friday, the U.S. Department of Education opened an investigation into the data-sharing practices between Florida’s Pasco County sheriff’s office and school district. The Department will be investigating the school district’s partnership with the sheriff’s office, first uncovered by Tampa Bay Times reporting in November 2020, which allowed the sheriff to use student grades, attendance, disciplinary records, and aspects of students’ home lives to identify and target students “at-risk” of criminal activity. FPF applauds the Department’s decision to investigate this concerning partnership. Any school data-sharing partnership must value student privacy and build in community trust and transparency; before the Tampa Bay Times story, parents and students in Pasco County were completely unaware of the sheriff’s practices.
In December 2020, FPF analyzed the sheriff’s public documentation and contract with the school board, concluding that the sheriff’s office unlawfully accessed and used student records for its database in violation of the Family Educational Rights and Privacy Act (FERPA), as well as its contract with the school board. Amelia Vance, FPF’s Director of Youth and Education Privacy, was quoted in the original Tampa Bay Times article revealing the program, noting that
“The law does say school resource officers can access education records because they can be considered ‘school officials.’ But under most circumstances, they can’t share the records with the rest of the department. And they can’t use them in a law enforcement investigation without permission from a parent, unless there is a court order or a health and safety emergency.”
The Department’s announcement follows significant public outcry. Unfortunately, the Department has refused to share the letter during the early stages of its investigation. In January, Representative Bobby Scott (D-VA), Chair of the House Education and Labor Committee, called on the Department to investigate the program for FERPA violations. In his letter, Rep. Scott decried the program, noting “this use of student records goes against the letter and spirit of FERPA and risks subjecting students, especially Black and Latino students, to excessive law enforcement interactions and stigmatization.”
FPF Report Outlines Opportunities to Mitigate the Privacy Risks of AR & VR Technologies
A new report from the Future of Privacy Forum (FPF), Augmented Reality + Virtual Reality: Privacy & Autonomy Considerations in Emerging, Immersive Digital Worlds, provides recommendations to address the privacy risks of augmented reality (AR) and virtual reality (VR) technologies. The vast amount of sensitive personal information collected by AR and VR technologies creates serious risks to consumers that could undermine the adoption of these platforms and limit their utility.
“XR technologies are rapidly being adopted by consumers and increasingly being used for work and for education. It’s essential that guidelines are set to ensure privacy and safety while business models are being established,” said FPF CEO Jules Polonetsky.
The report considers current and future use cases for XR technology, and provides recommendations for how platforms, manufacturers, developers, experience providers, researchers, and policymakers should implement XR responsibly, including:
Policymakers should carefully consider how existing or proposed data protection laws can provide consumers with meaningful rights and companies with clear obligations regarding XR data;
Hardware makers should consider how XR data collection, use, and sharing can be performed in ways that are transparent to users, bystanders, and other stakeholders;
XR developers should consider the extent to which sensitive personal data can be processed locally and kept on-device;
XR developers should ensure that sensitive personal data is encrypted in transit and at rest (a brief illustrative sketch follows this list);
Platforms and XR experience providers should implement rules about virtual identity and property that mitigate, rather than increase, online harassment, digital vandalism, and fraud;
Platforms and XR experience providers should establish clear guidelines that mitigate physical risks to XR users and bystanders;
Researchers should obtain informed consent prior to conducting research via XR technologies and consider seeking review by an Institutional Review Board (IRB) or Ethical Review Board (ERB) if consent is impractical;
Platforms and XR experience providers should provide a wide range of customizable avatar features that reflect the broader community, encouraging representation and inclusion; and
Platforms and XR experience providers should consult with the larger community of stakeholders, including industry experts, advocates, policymakers, XR users, and non-XR users, and integrate community feedback into decisions about software and hardware design and data collection, use, and sharing.
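To illustrate the two developer-facing recommendations on local processing and encryption at rest, here is a minimal, hypothetical sketch using Python and the widely available `cryptography` package. The data fields, file name, and key handling shown are assumptions for illustration only, not a description of any XR platform’s actual implementation.

```python
# Hypothetical sketch: keep sensitive XR telemetry on-device and encrypt it at rest.
import json
from cryptography.fernet import Fernet

# Key generated and kept on-device (in practice it would live in a
# hardware-backed keystore rather than application memory).
key = Fernet.generate_key()
cipher = Fernet(key)

# Example of sensitive XR telemetry processed locally instead of sent off-device.
gaze_sample = {"timestamp": 1618852800, "gaze_x": 0.41, "gaze_y": 0.63}

# Encrypt before writing to local storage (encryption at rest).
token = cipher.encrypt(json.dumps(gaze_sample).encode("utf-8"))
with open("gaze_log.enc", "wb") as f:
    f.write(token)

# Later, decrypt on-device for local feature computation only.
with open("gaze_log.enc", "rb") as f:
    restored = json.loads(cipher.decrypt(f.read()).decode("utf-8"))
print(restored["gaze_x"])
```

In transit, the analogous step would be sending any data that must leave the device only over an authenticated TLS connection; the point of the sketch is simply that sensitive measurements can be processed and protected locally by default.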
“XR technologies provide substantial benefits to individuals and society, with existing and potential future applications across education, gaming, architectural design, healthcare, and much more,” said FPF Policy Counsel and paper author Jeremy Greenberg. “XR technology systems often rely on biometric identifiers and measurements, real-time location tracking, and precise maps of the physical world. The collection of such sensitive personal information creates privacy risks that must be considered by stakeholders across the XR landscape in order to ensure this immersive technology is implemented responsibly.”
The release of the report kicks off FPF’s XR Week of activities, happening from April 19th to 23rd. XR Week will explore key elements of the report in greater detail, including the differences between various immersive technologies, their use cases, important privacy and ethical questions surrounding XR technologies, compliance challenges associated with XR technologies, and how XR technology will continue to evolve.
FPF’s featured XR Week event, AR + VR: Privacy & Autonomy Considerations for Immersive Digital Worlds will include a conversation between FPF Policy Counsel Jeremy Greenberg and Facebook Reality Labs Director of Policy James Hairston, followed by a panel discussion with Magic Leap Senior Vice President Ana Lang, Common Sense Media Director of Platform Accountability and State Advocacy Joe Jerome, and behavioral scientist Jessica Outlaw.
To register and learn more about FPF’s other XR Week events, read this blog post.
FPF Testifies on Automated Decision System Legislation in California
Last week, on April 8, 2021, FPF’s Dr. Sara Jordan testified before the California Assembly Committee on Privacy and Consumer Protection on AB-13 (Public contracts: automated decision systems). The legislation passed out of committee (9 Ayes, 0 Noes) and was re-referred to the Committee on Appropriations. The bill would regulate state procurement, use, and development of high-risk automated decision systems by requiring prospective contractors to conduct automated decision system impact assessments.
At the hearing, Dr. Jordan commented as an expert witness alongside Vinhcent Le, who represented The Greenlining Institute. Dr. Jordan commended the sponsors for amending the definition of “automated decisionmaking” to account for the wide range of technical complexity in automated systems. In addition, Dr. Jordan testified that the government contracting stage is an appropriate point to introduce algorithmic impact assessments for high-risk applications of automated decisionmaking, allowing California authorities to evaluate technology against transparent and actionable assessment criteria before it is deployed.
Find FPF’s infographic “The Spectrum of Artificial Intelligence” (Jan. 2021) here.
Read FPF’s Paper “Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making” (Dec. 2017) here.
Read FPF’s “Ten Questions on AI Risk” (July 2020) here.
For an international perspective, read the European Commission’s draft AI rules proposing to regulate certain uses of high-risk AI systems (leaked April 13, 2021) here.
FPF Partners with FCBA – The Tech Bar and Loeb & Loeb to Launch New Law Student Diversity Internship
FPF and The Tech Bar announced the FPF Loeb & Loeb Diversity Pipeline Internship, a first-of-its-kind partnership among three organizations committed to diversity, equity, and inclusion in the legal and policy profession, especially in the technology, media, and telecom (TMT) sector. The inaugural FPF Loeb & Loeb Diversity Pipeline intern will join approximately 20 other law students interning this summer at leading TMT organizations through the FCBA Diversity Pipeline Program.
Currently in its first year of operation, the Diversity Pipeline Program is an employment program with a legal skills development component that connects first-year law students from historically underrepresented and disadvantaged groups with paid summer legal internship opportunities in the private sector and at non-governmental organizations (NGOs).
“FPF could not be more pleased to host the inaugural FPF Loeb & Loeb Diversity Pipeline Summer Internship,” said John Verdi, FPF’s VP of Policy. “We are grateful for Loeb’s generous support and the FCBA’s partnership. We all have a responsibility to create a more inclusive tech policy community; this internship promises to highlight and support the voices of early-career professionals with diverse backgrounds and experiences.”
“Building on the first phase of the Diversity Pipeline Program, which focused on private sector internships, we are thrilled to enter this next phase: a groundbreaking partnership with FPF and Loeb & Loeb. If we truly want to increase diversity in TMT law and policy work, we have to push beyond firms, companies, and associations to ensure that students from historically underrepresented and disadvantaged groups have access to paid internships in the non-profit sector as well. Working with firms that can help support such efforts is a critical step. This creative partnership will serve as a model for ongoing FCBA initiatives to enable diverse law students to get valuable first-hand experience at researching, analyzing, and formulating policy proposals on the many exciting issues at the cross-section of technology, law, and policy,” said Natalie Roisman, FCBA President. “We are grateful to see the success of the Diversity Pipeline Program in supporting more diversity in the tech space and eager to learn from FPF, an organization with an established TMT law and policy internship program and related alumni network.”
Ken Florin, Chair, Loeb & Loeb, LLP, said, “Loeb is thrilled to be partnering with FCBA—The Tech Bar and FPF by participating in the FCBA Diversity Pipeline Program. We look forward to the opportunity to work alongside FPF to mentor and support a diverse law student in a summer internship at FPF on legal and policy issues at the intersection of technology and privacy. We recognize that building diversity into the legal talent pipeline is critical, and we hope this opportunity will support this year’s intern on their path toward a successful legal career.”
The FPF Loeb & Loeb Diversity Pipeline Summer Intern will work on cutting-edge TMT law and policy issues in areas such as consumer privacy, youth privacy, algorithms, and privacy-enhancing technologies.
“We hope this non-profit/law firm partnership to advance diversity in the TMT sector is the first of many,” said Rudy Brioche, Diversity Pipeline Committee Co-Chair. “We welcome the opportunity to work with other non-profits as we expand the program next fall for the 2022 Summer Internship Program.”