Newly Updated Guidance: FPF Releases Updates to the Generative AI Internal Policy Considerations Resource to Provide New Key Lessons For Practitioners

Today, the Future of Privacy Forum (FPF) Center for Artificial Intelligence is releasing an updated version of our generative AI internal compliance document – Generative AI for Organizational Use: Internal Policy Considerations – with new content addressing organizations’ ongoing responsibilities, specific concerns (e.g., high-risk uses), and lessons drawn from recent regulatory enforcement related to these technologies. Last year, FPF published the original generative AI compliance checklist, which drew on a series of consultations with practitioners and experts from more than 30 cross-sector companies and organizations. It gives organizations a practical tool for revising their internal policies and procedures so that employees use generative AI in a way that mitigates data, security, and privacy risks, respects intellectual property rights, and preserves consumer trust. 

Generative AI uses have proliferated since the technology’s emergence, transforming how we interact, work, and make decisions. From drafting emails and computer code to performing customer service functions, these technologies have made significant progress. However, as generative AI continues to advance and find new applications, it is essential to consider how the internal policies governing its use should evolve in response to novel challenges and developments in the compliance landscape.

Key takeaways from the Considerations document include:

As generative AI becomes mainstream through chatbots, image generation apps, and copilot tools that help with writing and creating computer code, it introduces new and transformational use cases for AI in everyday life. However, there are also risks and ethical considerations to manage throughout the lifecycle of these systems. A better understanding of those risks and considerations is essential as practitioners devise policies to balance the benefits and risks of generative AI tools. The re-release of Generative AI for Organizational Use: Internal Policy Considerations strives to meet that need. Download the updated version of the Considerations document.

Future of Privacy Forum Launches the FPF Center for Artificial Intelligence

The FPF Center for Artificial Intelligence will serve as a catalyst for AI policy and compliance leadership globally, advancing responsible data and AI practices for public and private stakeholders

Today, the Future of Privacy Forum (FPF) launched the FPF Center for Artificial Intelligence, established to better serve policymakers, companies, non-profit organizations, civil society, and academics as they navigate the challenges of AI policy and governance. The Center will expand FPF’s long-standing AI work, introduce large-scale novel research projects, and serve as a source for trusted, nuanced, nonpartisan, and practical expertise. 

The Center’s work will be international in scope as AI deployment continues to expand rapidly around the world. Cities, states, countries, and international bodies are already grappling with implementing laws and policies to manage the risks. “Data, privacy, and AI are intrinsically interconnected issues that we have been working on at FPF for more than 15 years, and we remain dedicated to collaborating across the public and private sectors to promote their ethical, responsible, and human-centered use,” said Jules Polonetsky, FPF’s Chief Executive Officer. “But we have reached a tipping point in the development of the technology that will affect future generations for decades to come. At FPF, the word Forum is a core part of our identity. We are a trusted convener positioned to build bridges between stakeholders globally, and we will continue to do so under the new Center for AI, which will sit within FPF.”

The Center will help the organization’s 220+ members navigate AI through the development of best practices, research, legislative tracking, thought leadership, and public-facing resources. It will be a trusted evidence-based source of information for policymakers, and it will collaborate with academia and civil society to amplify relevant research and resources. 

“Although AI is not new, we have reached an unprecedented moment in the development of the technology that marks a true inflection point. The complexity, speed and scale of data processing that we are seeing in AI systems can be used to improve people’s lives and spur a potential leapfrogging of societal development, but with that increased capability comes associated risks to individuals and to institutions,” said Anne J. Flanagan, Vice President for Artificial Intelligence at FPF. “The FPF Center for AI will act as a collaborative force for shared knowledge between stakeholders to support the responsible development of AI, including its fair, safe, and equitable use.”

The Center will officially launch at FPF’s inaugural summit, DC Privacy Forum: AI Forward. The in-person, public-facing summit will feature high-profile representatives from the public and private sectors in the world of privacy, data, and AI. 

FPF’s new Center for Artificial Intelligence will be supported by a Leadership Council of leading experts from around the globe. The Council will consist of members from industry, academia, civil society, and current and former policymakers. 

See the full list of founding FPF Center for AI Leadership Council members here.

I am excited about the launch of the Future of Privacy Forum’s new Center for Artificial Intelligence and honored to be part of its leadership council. This announcement builds on many years of partnership and collaboration between Workday and FPF to develop privacy best practices and advance responsible AI, which has already generated meaningful outcomes, including last year’s launch of best practices to foster trust in this technology in the workplace.  I look forward to working alongside fellow members of the Council to support the Center’s mission to build trust in AI and am hopeful that together we can map a path forward to fully harness the power of this technology to unlock human potential.

Barbara Cosgrove, Vice President, Chief Privacy Officer, Workday

I’m honored to be a founding member of the Leadership Council of the Future of Privacy Forum’s new Center for Artificial Intelligence. AI’s impact transcends borders, and I’m excited to collaborate with a diverse group of experts around the world to inform companies, civil society, policymakers, and academics as they navigate the challenges and opportunities of AI governance, policy, and existing data protection regulations.

Dr. Gianclaudio Malgieri, Associate Professor of Law & Technology at eLaw, University of Leiden

“As we enter this era of AI, we must require the right balance between allowing innovation to flourish and keeping enterprises accountable for the technologies they create and put on the market. IBM believes it will be crucial that organizations such as the Future of Privacy Forum help advance responsible data and AI policies, and we are proud to join others in industry and academia as part of the Leadership Council.”

Learn more about the FPF Center for AI here.

About Future of Privacy Forum (FPF)

The Future of Privacy Forum (FPF) is a global non-profit organization that brings together academics, civil society, government officials, and industry to evaluate the societal, policy, and legal implications of data use, identify the risks, and develop appropriate protections. 

FPF believes technology and data can benefit society and improve lives if the right laws, policies, and rules are in place. FPF has offices in Washington D.C., Brussels, Singapore, and Tel Aviv. Learn more at fpf.org.

FPF Statement on the House Energy and Commerce Subcommittee on Innovation, Data and Commerce’s Unanimous May 23 Vote on the American Privacy Rights Act

Today, the House Energy and Commerce Subcommittee on Innovation, Data and Commerce unanimously passed the revised draft of the American Privacy Rights Act.

Peak Privacy: Vermont’s Summit on Data Privacy

Update: On June 13, 2024, Governor Phil Scott vetoed H. 121, marking the first gubernatorial veto of a comprehensive privacy bill passed by a state legislature.

Immediately prior to the close of the state legislative session on May 10, 2024, the Vermont legislature passed H. 121, “An act relating to enhancing consumer privacy and the age-appropriate design code.” If enacted by Governor Scott, Vermont could become the state with the farthest-reaching comprehensive privacy law. While the Vermont Data Privacy Act (VDPA) is modeled after the popular Connecticut privacy framework, it goes further in many places, drawing inspiration from a variety of sources. Vermont adds data minimization provisions inspired by Maryland’s new privacy law, new digital civil rights protections pulled from the American Data Privacy and Protection Act, a trimmed-down Age-Appropriate Design Code (AADC) focused on design features, and an entirely novel private right of action. 

Applicability

At over 100 pages, H. 121 makes determining whether and how an organization is covered a more complicated question than under most state privacy laws. The VDPA contains unique scoping provisions and tiered effective dates tied to an organization’s size and the types of data it processes, and the AADC’s scope is entirely distinct from the rest of the VDPA. 

  1. Tiered effective dates

The Vermont Data Privacy Act establishes a tiered timeline for applicability. For larger organizations – those that process the data of 25,000 or more Vermont consumers, or that process the data of at least 12,500 consumers and derive more than 25% of their revenue from selling personal data – the law goes into force on July 1, 2025. Come July 1, 2027, the law will also apply to organizations that either process the data of 6,250 or more consumers, or process the data of at least 3,125 consumers and derive more than 20% of their revenue from selling personal data (a rough sketch of these tiers appears at the end of this section). Even accounting for Vermont’s small population, these are proportionally the lowest applicability thresholds of any comprehensive state privacy law.

  2. No revenue or data processing thresholds for health data and kids’ data

The VDPA contains heightened protections for minors’ data and provisions concerning consumer health data that are not tied to the above revenue and data processing thresholds. As a result, small businesses could potentially have obligations under these provisions. Vermont joins an emerging trend originating in Connecticut of making certain protections for the most sensitive categories of personal information generally applicable, rather than being subject to a small business exception. 

  3. Separate applicability for Age-Appropriate Design Code

The standalone AADC section also contains a unique applicability threshold. Rather than applying to controllers, the AADC section applies to “covered businesses” that collect consumers’ personal data, determine the purposes and means of processing that data, and, alone or in combination, buy, receive for commercial purposes, sell, or share the personal data of at least 50% of their consumers. Because this section is limited to businesses and sets the threshold at 50% of a business’s consumers, it will likely apply to a smaller subset of organizations than those covered under the VDPA. The ultimate scope of this provision is likely to be substantially shaped by how the term “receive for commercial purposes” is interpreted.
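
For illustration only, the tiered thresholds described in item 1 can be expressed as a simple check. This is a minimal sketch based solely on the summary above; the function name, parameters, and structure are ours, not statutory language.

```python
from datetime import date

def vdpa_applies(num_vt_consumers: int, sale_revenue_share: float, as_of: date) -> bool:
    """Rough sketch of the VDPA's tiered applicability as summarized above.

    sale_revenue_share is the fraction of gross revenue derived from selling
    personal data (e.g., 0.30 for 30%). Illustrative only, not legal advice.
    """
    # Tier 1 (from July 1, 2025): larger organizations
    if as_of >= date(2025, 7, 1):
        if num_vt_consumers >= 25_000:
            return True
        if num_vt_consumers >= 12_500 and sale_revenue_share > 0.25:
            return True
    # Tier 2 (from July 1, 2027): thresholds drop considerably
    if as_of >= date(2027, 7, 1):
        if num_vt_consumers >= 6_250:
            return True
        if num_vt_consumers >= 3_125 and sale_revenue_share > 0.20:
            return True
    return False

# Example: 10,000 consumers and 30% of revenue from data sales
print(vdpa_applies(10_000, 0.30, date(2026, 1, 1)))  # False under Tier 1
print(vdpa_applies(10_000, 0.30, date(2027, 7, 1)))  # True once Tier 2 applies
```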

Notable protections for Vermonters

Much ink has been spilled over the “state privacy patchwork,” but the Vermont law itself is a bit of a patchwork, given that it draws inspiration from multiple sources, such as Connecticut, Maryland, and the American Data Privacy and Protection Act. Many rights given to individuals may be familiar, such as accessing, correcting, and deleting personal information. However, Vermont’s patchwork bill creates notable differences, including data minimization, prohibitions on selling sensitive data, and prohibitions on discriminatory processing.  

  1. Data minimization

The VDPA places default limits on the collection of personal data to what is reasonably necessary and proportionate to provide or maintain a specific product or service requested by the individual. This limit matches Maryland – however, Vermont lacks Maryland’s requirement that the processing of sensitive data must be strictly necessary, making Vermont somewhat less restrictive. Vermont further limits any processing for a purpose not disclosed in a privacy notice unless an individual’s consent is obtained or the purpose is reasonably necessary to and compatible with a disclosed purpose.   

  2. Prohibitions on selling sensitive data

Similar to Maryland, the VDPA prohibits the sale of sensitive data. Under the VDPA, sensitive data includes, among other things, consumer health data, biometric or genetic data, and personal data collected from a known minor. While the privacy protections for minors’ data and consumer health data largely follow Connecticut’s, Vermont goes further by not allowing the sale of sensitive data even with consent. Vermont may go further than even Maryland because it defines “sale” more broadly than any state privacy law to date, including the exchange of personal information not just for monetary value or other valuable consideration, but for a commercial purpose. 

  3. Prohibitions on discriminatory processing

Vermont prohibits processing an individual’s personal data in violation of State or federal laws that prohibit unlawful discrimination or in a manner that discriminates against individuals or otherwise restricts the enjoyment of goods and services based on protected classes. There are limited exceptions for self-testing and diversity applicant pools. These civil rights protections, derived from the American Data Privacy and Protection Act (ADPPA) and the American Privacy Rights Act discussion draft, go further than existing state privacy laws because the prohibition is not strictly tied to discrimination that is already unlawful. One minor difference from the ADPPA is that Vermont prohibits discrimination against individuals, rather than “in a manner that discriminates,” though this distinction may not have a practical impact. 

  4. Broad right to opt out of targeted advertising

Like the Connecticut framework, the VDPA allows for the option to opt out of targeted advertising. However, the VDPA broadens the definition of targeted advertising to include first-party data shared between distinctly branded websites, including websites operated by the same controller. This expanded definition goes much further than any existing state privacy law.

A limited private right of action

To date, the only comprehensive state privacy law with any private right of action is California’s, which narrowly provides that certain data breaches can be the basis for a cause of action. Otherwise, comprehensive privacy laws are solely enforced by government regulators such as State Attorneys General. Vermont would break this mold by allowing individuals to bring suit against “large data holders” and data brokers for alleged violations involving sensitive data or the confidentiality of consumer health data. Vermont defines large data holders as organizations that process the data of more than 100,000 Vermont residents – a notably low bar given that, as of the 2020 census, Vermont’s population is 643,000, meaning a large data holder processes data on roughly 15 percent of the state’s residents. By limiting it to specific types of entities and particular kinds of privacy violations, the private right of action reflects a compromise between lawmakers in the House, who wanted a broad private right of action, and lawmakers in the Senate, who struck it entirely in an earlier draft. 

In a further act of compromise, Vermont legislators took a creative approach to the timeframe for bringing any lawsuits. The private right of action goes into effect January 1, 2027, 18 months after the largest organizations must have come into compliance with the law, and it will sunset after two years unless the Vermont legislature passes a new law to affirmatively reauthorize it. Separately, the Attorney General is charged with conducting a study and developing recommendations to the legislature on implementing a private right of action, including applicability thresholds that ensure a private right of action does not harm good-faith actors or small businesses, and damages that balance the consumer interest in enforcing rights against the incentives created for litigants with frivolous claims. The report is due by January 15, 2026, a year before the private right of action takes effect and as the legislature begins its next session.

Heightened protections for minors, including two duties of care

Because Vermont draws from Connecticut’s framework, the VDPA includes heightened protections for children and teens that largely mirror Connecticut’s. These protections include a duty to avoid a “heightened risk of harm” to minors, restrictions on selling minors’ data, and additional risk assessment requirements for controllers who process minors’ data. One subtle but significant difference is that Vermont adds an additional harm to be considered in the duty of care and data protection impact assessments: covered organizations will need to consider any “unintended disclosure of personal data of a minor.” Interestingly, this language was considered in Colorado this legislative session but was ultimately rejected in favor of “unauthorized disclosure of the personal data of minors as a result of a security breach.” The harm articulated in Vermont is much broader and could cover inadvertent disclosures, not just disclosures due to a security breach. 

However, the protections focused on children and teens do not end there. During the 2024 session, Vermont lawmakers pursued parallel efforts to protect children online: H. 121, a comprehensive privacy bill, which passed, and S. 289, an AADC. A slimmed-down version of S. 289 was appended to H. 121, resulting in the passage of both. The Vermont Data Privacy Act provisions address minors’ data protection, while the AADC addresses safety and design features of online services for minors. A key example of this delineation is that while the VDPA restricts dark patterns specifically related to exercising data rights, the Vermont AADC bans all dark patterns. The AADC defines dark patterns broadly as any user interface that undermines user autonomy. Without attaching this restriction to data rights or any specifically identified harm, the prohibition can be interpreted quite broadly. The AADC also prohibits “low-friction variable rewards” that “encourage excessive and compulsive use by a minor.” A low-friction variable reward is defined as “design features or virtual items that intermittently reward consumers for scrolling, tapping, opening, or continuing to engage” in a service, with examples including endless scroll and autoplay. 

Another wrinkle of the attached AADC is that Vermont actually creates two duties of care for minors. In the comprehensive privacy section, companies have a duty to avoid heightened risk of harm to minors. The AADC separately requires an affirmative minimum duty of care owed to minors by a business that processes a minor’s data in any capacity. 

Lastly, the AADC specifies that age verification is not required to comply with the obligations of this section. This may be a proactive effort to avoid litigation over the constitutionality of age verification mandates. Instead, the AADC clarifies that covered businesses should meet these obligations using age estimation techniques. Given how age estimation is defined, this would present a novel question for a court to consider should there be any litigation. It is also worth noting that age estimation often involves additional data collection, so covered organizations will need to take care in reconciling these obligations with the data minimization provisions of the VDPA. 

Next steps

H. 121 has not yet been presented to the Governor for consideration. Once it is received, the Governor will have just five days to consider the bill. Given the novelty of several of its provisions, the bill may be cause for concern or may be an opportunity for Vermont to raise the bar across the nation. Should the Governor veto it, the bill passed both chambers with the votes necessary to support a veto override. Organizations in scope and Vermonters should take note that the bill also calls for the Attorney General to lead a public education, outreach, and assistance program, which would begin to take effect July 1, 2024. 

New Report Examines Generative AI Governance Frameworks Across the Asia-Pacific Region

May 23, 2024 — Future of Privacy Forum today announced the launch of a comprehensive report, “Navigating Governance Frameworks for Generative AI Systems in the Asia-Pacific.”

This report examines the current state of governance frameworks for generative AI systems in five countries in the Asia-Pacific (APAC) region: Australia, China, Japan, Singapore, and South Korea.

The key takeaways of the report include

The report concludes by highlighting key points for policymakers, developers, and deployers

The report’s unveiling took place at an in-person launch event, co-hosted by FPF and Lee & Ko, one of South Korea’s premier full-service law firms. It featured remarks from senior leaders and government officials, including Kang Do-hyun, the 2nd Vice Minister of Science and ICT; Choi Jang-hyuk, the Vice Chairman of the Personal Information Protection Commission; Christina Montgomery, Vice-President and Chief Privacy and Trust Officer, IBM, and Member of the U.S. National AI Advisory Committee; Ko Hwan-kyung, Lee & Ko Partner; and Josh Lee Kok Thong, FPF APAC’s Managing Director.

“We’re delighted to convene this important discussion on AI governance with Lee & Ko,” Josh Lee Kok Thong, FPF APAC’s Managing Director, said. “The launch of our report, ‘Navigating Governance Frameworks for Generative AI Systems in APAC,’ is the culmination of an extensive research project that started in April last year on the regulatory and governance landscape for generative AI systems and LLMs in the APAC region. The region is at an inflection point in the governance and regulation of generative AI systems, and this report details the approach of five significant jurisdictions and reflects the need to harmonize the regulatory fragmentation across the region.”

For more information about the event, the agenda, and speakers, visit the FPF site.

To discuss the report with Josh Lee Kok Thong or the FPF APAC team, please reach out to [email protected]

The North Star State Joins the State Privacy Law Constellation

On May 19, 2024, the Minnesota Legislature passed HF 4757, an omnibus budget bill that includes the Minnesota Consumer Data Privacy Act (MNCDPA). The bill now heads to Governor Walz for signature. Developed by State Representative Steve Elkins over nearly five years and multiple legislative sessions, the MNCDPA is among the strongest iterations of the Washington Privacy Act (WPA) framework. In this blog post, we highlight nine things to know about the MNCDPA that set Minnesota apart in the state privacy law landscape. If enacted by Governor Walz, the law will take effect on July 31, 2025 for most controllers and on July 31, 2029 for postsecondary institutions regulated by the Office of Higher Education.

1.  Expansive Rights Include Contesting Profiling Decisions, Identifying Specific Third Party Recipients of Personal Data, and Adolescent Privacy Protections

Like the majority of states, the MNCDPA provides core individual rights of access, correction, deletion, and portability, as well as the right to opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects. 

Minnesota is the first state, however, to offer an additional right with respect to profiling: Where an individual’s data is profiled in furtherance of decisions that produce legal or similarly significant effects, the individual has a right to contest the result of the profiling. This includes: a right to be informed of actions that could have been taken by the individual “to secure a different decision” and actions that can be taken in the future; a right to review the personal data used in the profiling; and, if that decision was based on inaccurate personal data, a right to correct that data and have the profiling decision be reevaluated. This right to contest a decision based on profiling appears to be broader than the right to opt-out of profiling, because the opt-out right applies to profiling in furtherance of automated decisions that produce legal or similarly significant effects whereas the right to contest the result of profiling applies to profiling in furtherance of decisions with such effects. 

The MNCDPA also follows trends established by other states with respect to expanded individual rights. Like the Oregon Consumer Privacy Act, the MNCDPA includes a right for individuals to obtain a list of specific third parties to whom their personal data has been disclosed or, if that information is not available, a list of specific third parties to whom the controller has disclosed any individual’s personal data. 

The MNCDPA also provides heightened protections for adolescents. Like the majority of state privacy laws, the MNCDPA deems the personal data of a “known child”—where a controller has actual knowledge that, or willfully disregards whether, the individual is younger than 13—to be sensitive data, requiring opt-in consent for processing. Some states, like Oregon and New Jersey, have started adding protections for teenagers by changing opt-out rights to opt-ins, and Minnesota follows that trend: for targeted advertising and the sale of personal data, the controller must obtain consent if it knows that the individual is between the ages of 13 and 16. Notably, those protections only apply where a controller “knows” the individual is between those ages, not where it “willfully disregards” the individual’s age – a departure from similar protections in other states that narrows Minnesota’s adolescent privacy protections. 

2.  When Individuals Exercise Their Rights, Controllers Must Disclose Whether They Collected Certain Information

When an individual exercises any of their rights under the MNCDPA, controllers have an additional requirement to inform the individual “with sufficient particularity” whether the following types of information have been collected, without disclosing the information itself: (1) SSN; (2) driver’s license or government ID number; (3) financial account number; (4) health insurance account or medical identification number; (5) account password, security questions, or answers; or (6) biometric data. This obligation applies whenever an individual exercises any of their rights, not just the right to access. Given that a controller must not disclose the listed information, this provision arguably narrows the right to access with respect to these types of data, but it is likely to benefit security overall and help prevent identity theft. 

3.  Heightened Data Security Requirements Include Inventorying Data, Documenting Compliance, and Appointing a Chief Privacy Officer

Like the majority of states, the MNCDPA requires controllers to “establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.” Minnesota goes further than other states, however, by explicitly requiring that such security practices include maintenance of a data inventory. Although this is widely considered a best practice and is likely standard amongst companies subject to similar reasonable security requirements in other states, no prior state comprehensive privacy law has mandated that controllers create and maintain this kind of inventory. The bill provides no specific definition or guidance as to what the inventory should entail.

The MNCDPA also includes prescriptive requirements for controllers to “document and maintain a description of the policies and procedures the controller has adopted to comply with [the law],” including the name and contact information of the chief privacy officer or individual with primary compliance responsibility as well as a description of the policies and procedures taken to comply with the controller duties, which has many subcomponents. The implicit requirement to have a chief privacy officer or similar individual responsible for compliance is a first amongst state comprehensive privacy laws.

Another novel controller duty which will impact data security is that controllers are prohibited from retaining personal data “that is no longer relevant and reasonably necessary in relation to the purposes for which the data were collected and processed.” This retention principle may have already been an implicit requirement under the bill’s data minimization and purpose limitation rules. 

4.  Novel Protections for Deidentified and Pseudonymous Data

State comprehensive privacy laws typically require that controllers who disclose de-identified or pseudonymous data “exercise reasonable oversight to monitor compliance with any contractual commitments” to which that data are subject. Consistent with the Colorado Privacy Act, the MNCDPA extends this obligation to use of such data rather than just disclosure. Additionally, the MNCDPA includes two novel protections for deidentified and pseudonymous data, providing that: (1) processors and third parties may not attempt to identify the subjects of such data without the “express authority” of the controller who deidentified or pseudonymized the data; and (2) controllers, processors, and third parties may not attempt to identify the subjects of data that was collected with only pseudonymous identifiers. 

5.  “Data Privacy and Protection Assessments” Introduce Expansive New DPIA Requirements

As is common under laws drafted in the WPA framework, the MNCDPA requires controllers to conduct and document assessments for certain high-risk processing activities. The MNCDPA uses the term “data privacy and protection assessment” (DPPA) rather than the more familiar terms “data protection assessments” or “data protection impact assessments” used in other states, which reflects the fact that the MNCDPA’s DPPA requirements are similar but not identical to the requirements in other states. 

The triggers for conducting a DPPA are similar to those under other states’ laws: processing personal data for targeted advertising; selling personal data; processing sensitive data; conducting any processing activity that presents a heightened risk of harm to individuals; or processing personal data for profiling that presents a reasonably foreseeable risk of certain substantial injuries (e.g., unfair treatment, financial injury, etc.). Where the MNCDPA differs from other states is in its more prescriptive content requirements. DPPAs must take into account the type of personal data to be processed, whether the data are sensitive data, and the context of processing. Furthermore, the DPPA must include the description of policies and procedures that the controller is required to create (see section 3 above for a description of this requirement).  

6.  Minnesota Continues Maryland’s Trend of Heightening Civil Rights and Nondiscrimination Protections

State privacy laws typically prohibit controllers from processing personal data in violation of state or federal laws that prohibit unlawful discrimination. The MNCDPA contains an additional civil rights protection: Controllers may not process individuals’ personal data on the basis of their “actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, lawful source of income, or disability in a manner that unlawfully discriminates against the [individual or class of individuals] with respect to the offering or provision of: housing, employment, credit, or education; or the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.” This is similar to a prohibition in the recently enacted Maryland Online Data Privacy Act (MODPA), which prohibits controllers from processing personal data or publicly available data in a way that either unlawfully discriminates in or unlawfully makes unavailable “the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, or disability,” subject to limited exceptions. 

7.  Specific Geolocation Data is Defined Based on Decimals of Latitude and Longitude Instead of Feet 

The majority of state comprehensive privacy laws include precise geolocation data as a category of sensitive data. Although the language varies slightly from state to state, that term is generally defined as information derived from technology that identifies an individual’s specific location (or a device linked or linkable to the individual, in Oregon), accurate within a radius of 1,750 feet or less (1,850 feet in California). 

The MNCDPA includes “specific geolocation data” as a category of sensitive data, but it abandoned this foot-based standard in favor of a definition based on decimals of latitude and longitude: Specific geolocation data means “information derived from technology . . . that directly identifies the geographic coordinates of a consumer or a device linked to a consumer with an accuracy of more than three decimal degrees of latitude and longitude or the equivalent in an alternative geographic coordinate system, or a street address derived from the coordinates.” This definition includes the typical exceptions for content of communications and data generated by utility metering infrastructure or equipment, but it also includes a novel carve-out for “the contents of databases containing street address information which are accessible to the public as authorized by law.”
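
For a rough sense of scale, the sketch below converts that coordinate precision into feet. It assumes that “more than three decimal degrees” refers to precision finer than 0.001 degree and uses the standard approximation that one degree of latitude spans about 69 miles; both the reading of the statute and the round numbers are ours.

```python
import math

# Back-of-the-envelope conversion of 0.001 degree into feet (assumption: the
# statute's "three decimal degrees" is read as 0.001-degree coordinate precision).
FEET_PER_DEGREE_LAT = 69 * 5280               # ~364,320 ft per degree of latitude

precision_deg = 0.001                         # three decimal places of a degree
lat_ft = precision_deg * FEET_PER_DEGREE_LAT
lon_ft = lat_ft * math.cos(math.radians(46))  # longitude degrees shrink with latitude; ~46° N for Minnesota

print(f"0.001 degree of latitude  is about {lat_ft:.0f} ft")   # ~364 ft
print(f"0.001 degree of longitude is about {lon_ft:.0f} ft")   # ~253 ft at Minnesota's latitude
```

On that reading, a 0.001-degree step corresponds to a few hundred feet, noticeably tighter than the 1,750-foot radius used in the foot-based definitions.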

8. Limited Applicability to Small Businesses, Like Under the Texas Data Privacy and Security Act

The MNCDPA contains two levels of protections for small businesses. First, the law’s thresholds for applicability are relatively high: a controller is not subject to the law unless it either (1) processes the personal data of 100,000 or more Minnesotans (excluding payment transaction data) or (2) derives at least 25% of its gross revenue from the sale of personal data and processes the personal data of at least 25,000 Minnesotans. Second, small businesses, as defined by the U.S. Small Business Administration in 13 C.F.R. Part 121, are largely exempt from the MNCDPA. Notwithstanding this limited entity-level exemption, small businesses are prohibited from selling an individual’s sensitive data without that individual’s prior consent. The Texas Data Privacy and Security Act and the recently enacted Nebraska Data Privacy Act include similar provisions, but neither of those laws includes controller thresholds on top of the small business exemption.

9.  New Requirements for Privacy Notices and Assessments

The MNCDPA contains novel transparency obligations, requiring that controllers include in their privacy notice “a description of the controller’s retention policies for personal data” as well as the date the notice was last updated. The bill also details how a privacy notice should be made available: Privacy notices “must be posted online through a conspicuous hyperlink using the word ‘privacy’ on the controller’s website home page or on a mobile application’s app store page or download page,” provided via a hyperlink in an app’s settings menu or similarly conspicuous and accessible location, or, if the controller does not operate a website, made available “through a medium regularly used by the controller” to interact with individuals. Controllers are not required to provide a Minnesota-specific notice if their general privacy notice contains all the required information. 

Regulatory Strategies and Priorities of Data Protection Authorities in Latin America: 2024 and Beyond

Authors: Maria Badillo and Momina Imran

Today, the Future of Privacy Forum (FPF) published an Issue Brief analyzing the regulatory strategies and priorities of data protection authorities (DPAs) in Latin America. Titled Regulatory Strategies and Priorities of Data Protection Authorities in Latin America: 2024 and Beyond, the Issue Brief provides an overview of the strategies, activity reports, and other announcements made by DPAs across the region in order to understand their strategic priorities for 2024 and the coming years. 

Most governments in Latin America recognize privacy and personal data protection as two separate fundamental rights in their constitutional and legal frameworks. Some of these countries have also issued specific data protection laws and created regulatory authorities to ensure compliance with such laws. For example, Argentina, Uruguay, Mexico, Peru, and Colombia have all created authorities that are actively monitoring and regulating the data protection space. 

Advancing technology and increased digitalization have made the need for updated data protection frameworks to govern organizations’ processing of personal data more essential than ever. 

The objective of this Issue Brief is to provide an overview of the current work and future objectives of data protection authorities in Latin America. It highlights areas of convergence among the different DPAs and showcases the diverse set of strategies LatAm DPAs have indicated they plan to deploy in the coming years. As a result, the Issue Brief is limited to jurisdictions where (i) there is a designated data protection authority and (ii) that authority has issued a strategic or planning document outlining its current and future work. 

The Issue Brief expands upon the following categories:

A review of the DPAs’ regulatory strategies shows that most authorities seek to increase their investigatory and sanctioning powers as part of their enforcement priorities. Authorities also recognize a need for greater awareness of data protection across the board, which they can foster through advocacy, public participation, education, and supplemental guidance. Another common priority is to build institutional capabilities by training government and DPA personnel and by continuing to increase collaboration with other sectoral agencies and DPAs in the region. 

For a more detailed discussion of the regulatory strategies of Latin American DPAs and an in-depth analysis of the strategic and planning documents, download the Issue Brief here.


For inquiries about this Issue Brief, please contact Maria Badillo, Policy Counsel of Global Privacy, at [email protected].

Little Users, Big Protections: Colorado and Virginia pass laws focused on kids’ privacy

‘Don’t call me kid, don’t call me baby’ – unless you are a child residing in Colorado or Virginia, where children will soon have increased privacy protections due to recent advances in youth privacy legislation. Virginia and Colorado both have broad-based privacy laws already in effect. During the 2024 state legislative sessions, both states amended those laws to add specific online privacy protections for kids’ data. In Virginia, HB 707/SB 361 passed the state legislature and moved to Governor Youngkin’s desk on March 8; after some procedural hurdles, it became law on May 17, taking a modest approach to additional youth-tailored protections. In Colorado, SB 41 passed the legislature on May 14 with near-unanimous votes in both chambers, introducing a more expansive youth privacy framework than Virginia’s. SB 41 is expected to be signed into law by Governor Polis as passed by the Colorado legislature. Following Connecticut’s lead last year, these developments signal a growing trend of states building off of existing privacy frameworks to strengthen protections for children’s data online. 

Colorado

Although Colorado SB 41 is more expansive than what Virginia passed, the requirements in this law are familiar. SB 41 is almost an exact copy of the youth privacy amendment to Connecticut’s comprehensive privacy bill SB 3, which we covered in a blog post in 2023. As a result, there is already a general compliance model for the requirements of this bill. However, it is still worth noting some differences between Colorado SB 41 and Connecticut SB 3 that should be given special attention, especially where the impact of those differences remains to be seen.

What’s familiar about SB 41? 

  1. The scope of SB 41 is nearly identical to SB 3. 

As an amendment to a comprehensive state privacy law, SB 41 will work within the existing Colorado Privacy Act (“CPA”) to provide additional heightened protections for kids and teens up to 18. The compliance requirements of SB 41 rely on the existing definition of controller in the CPA. The obligations under both Colorado and Connecticut apply to controllers who offer any online service, product, or feature to consumers whom the controller has actual knowledge, or willfully disregards, are minors. Most importantly, the text of the bill makes clear that, while some child-focused provisions of Colorado and Connecticut’s laws only apply to controllers that meet specified revenue or user thresholds, the duty of care provisions apply to all controllers.

  2. Both states create a duty of care owed to minors. 

SB 41 creates a duty to use reasonable care to avoid any heightened risk of harm to minors and creates additional risk assessment requirements for minors’ data. This duty applies where the controller has actual knowledge or willfully disregards that a user is under 18 years of age. If controllers comply with the bill’s risk assessment requirements, there is a rebuttable presumption in any enforcement action brought by the State Attorney General that the controller used the reasonable care required to avoid heightened risk of harm to minors. Controllers therefore have a strong incentive to conduct risk assessments, since doing so could shield them from enforcement in cases of unforeseeable harm to minors resulting from their online service, product, or feature. 

  3. Both states have requirements that draw on the California AADC, with differences.

The substantive requirements under Colorado are nearly identical to those in Connecticut. Both SB 41 and SB 3 have restrictions on processing minors’ data similar to those originally seen in the enjoined California Age-Appropriate Design Code. For example, SB 41 limits controllers’ ability to profile, process geolocation data, or display targeted ads to a minor’s account without prior consent. However, unlike the California AADC, neither Colorado nor Connecticut requires a controller to estimate the age of users or assess harms related to content. 

What’s different about SB 41? 

  1. An additional harm must be considered in Colorado. 

SB 41 goes a step further than Connecticut SB 3 in the categories that must be included in data protection impact assessments (“DPIAs”), introducing a fourth type of harm that must be considered: the heightened risk of any “unauthorized disclosure of the personal data of minors as a result of a security breach.” It is unclear at this time what the magnitude of this impact will be on controllers’ compliance efforts, but it does indicate a strong interest in the security of minors’ data collected through online services, products, and features. Along with this fourth kind of harm, SB 41 includes three of the same harms that appear in SB 3’s “heightened risk of harm to minors” definition: (1) unfair or deceptive treatment of, or unlawful disparate impact on, minors; (2) any financial, physical, or reputational injury to minors; and (3) any physical or other intrusion on the seclusion, solitude, or privacy of minors that would be offensive to a reasonable person. Aside from the general duty of care to avoid these types of harm to minors, controllers under both Connecticut and Colorado must assess for these harms in DPIAs. 

  2. No ‘unpublishing’ requirement. 

SB 3 had a standalone section focused specifically on obligations for social media platforms. SB 41 lacks SB 3’s requirement that a controller ‘unpublish’ a minor’s social media account. All requirements in SB 41 apply generally to covered services. 

Virginia

Compared to Colorado’s and Connecticut’s youth privacy amendments, Virginia passed a more modest set of requirements for controllers in the state. Despite this moderate approach, Virginia’s method of heightening child privacy protections online is still worth watching. The Governor’s proposed amendments, which the legislature ultimately rejected, would have been much more expansive, such as raising the age for requiring parental consent to 17. As the bill sponsors indicated during floor hearings, the smaller step that passed is only a starting point for the state. Virginia lawmakers indicated an intent to continue building upon this foundation of privacy protections and raising the age threshold in the law, but first want to get something attainable “on the books… versus [being] stuck in court” with constitutional challenges. 

Scope

Like Colorado SB 41, Virginia HB 707 works within the state’s existing comprehensive privacy law, taking on the established controller definition. Unlike in Colorado, small businesses are exempt from the Virginia Consumer Data Protection Act. HB 707 does not create a separate scope or applicability threshold for the bill’s child privacy provisions – they apply in the same way as the other privacy requirements in the VCDPA. The protections afforded under HB 707 apply to known children under 13. 

Controller obligations

Unlike Colorado SB 41 and Connecticut SB 3, Virginia HB 707 does not create a duty of reasonable care. Instead, HB 707 simply limits the processing of minors’ data, establishes requirements for obtaining consent to process that data, and expands DPIA requirements. The limits on processing and obtaining consent generally align with what is required by COPPA, though COPPA technically applies only to collecting rather than processing. While HB 707 creates marginally more specific DPIA requirements, the VCDPA already requires conducting DPIAs for sensitive data, which includes the personal data of known children under 13. Additionally, like Colorado and Connecticut, Virginia HB 707 places default limits on collecting a child’s precise geolocation and requires a signal to the child while this geolocation information is collected. 

Conclusion

Despite some variation in the approach to enacting youth-focused amendments to comprehensive privacy laws, a trend that began with Connecticut’s SB 3 in 2023 is developing among state legislators: building upon pre-established privacy frameworks. It is worth acknowledging that under state privacy laws, children and teens are already part of the definition of “consumers” these laws are scoped to protect; any broad-based state privacy law naturally applies to residents of that state, both young and old. However, conceptually, it may be easier for lawmakers to envision what additional protections children and teens need once a baseline privacy framework is in place.

Although this is a new and noteworthy privacy development to watch moving forward, it is not the only approach lawmakers are taking to regulate youth online experiences. Another avenue during the 2024 session was the new Age-Appropriate Design Code framework (“AADC 2.0”). While the AADC 2.0 passed in Maryland and Vermont this year, there are several differences between these two states, as well as some uncertainties about how the AADC 2.0 will hold up to constitutional scrutiny. Compare this with Connecticut and Colorado, which have nearly identical frameworks for youth protections. Over the last few years, several laws intended to address child privacy and safety online have passed in different states. Still, many, such as the California Age-Appropriate Design Code, have had their implementation delayed by courts over constitutional challenges. Given that SB 3 will not come into force until October 2024, it may be too soon to call Connecticut and Colorado’s amendments a pattern. Still, there is potential for lawmakers to converge around this approach to protecting children online where it faces a lower risk of legal hurdles than alternative approaches.  

Colorado Enacts First Comprehensive U.S. Law Governing Artificial Intelligence Systems

On May 17, Governor Polis signed the Colorado AI Act (CAIA) (SB-205) into law, establishing new individual rights and protections with respect to high-risk artificial intelligence systems. Building on existing best practices and prior legislative efforts, the CAIA is the first comprehensive United States law to explicitly establish guardrails against discriminatory outcomes from the use of AI. The Act will take effect on February 1, 2026.

The CAIA is informed by extensive stakeholder engagement efforts led by Colorado Senate Majority Leader Rodriguez and Connecticut Senator Maroney, including a bipartisan multistate policymaker working group convened by FPF last year. The regulation of emerging technologies such as artificial intelligence is a complex issue where effective governance is best achieved by incorporating multiple perspectives and diverse stakeholder input. Throughout the legislative process, the CAIA also incorporated amendments from stakeholders in crafting a framework that can support both functionality and consumer protection.

FPF has released a Two-Page Fact Sheet summarizing the key definitions, consumer rights, and business obligations established by SB-205. 

Now, On the Internet, Will Everyone Know if You’re a Child? 

With help from Laquan Bates, Policy Intern for Youth and Education

How Knowledge Standards Have Changed the Status Quo

As minors increasingly spend time online, lawmakers continue to introduce legislation to enhance the privacy and safety of kids’ and teens’ online experiences beyond the existing Children’s Online Privacy Protection Act (COPPA) framework. Proposals have proliferated in both the federal and state legislatures across the U.S. with varying approaches to minors’ privacy protections. Key pieces of this discussion are the age of individuals online, whether online sites and services know that an individual is a child, and how to balance kids’ and teens’ protections with anonymity online.

Recent state legislative proposals have used varied language to communicate knowledge standards and the obligations online sites and services have to know their audience or individuals’ ages, such as “likely to be accessed by a child,” “targets or is reasonably anticipated to be accessed by children,” and “has actual knowledge, or willfully disregards a minor’s age.” These standards can determine the scope of a law’s obligations and be accompanied by varying age thresholds defining a child, parental consent requirements, and age assurance mechanisms. With the variety and ambiguity of language used, it has become difficult for online service providers to determine whether they are required to ascertain the age of individuals using their service. 

We have prepared a resource summarizing the knowledge standards currently used in enacted U.S. privacy and online safety laws. 

Key Observations 

The Status Quo of Knowledge Standards Under COPPA

Knowledge standards can impact the scope of entities required to comply with a law as well as the law’s obligations, like the specificity of age assurance required. For example, under COPPA, businesses are required to obtain verifiable parental consent before collecting personal information from children under the age of 13. COPPA’s requirements apply to operators of websites or online services that are directed to children under 13 years of age or that have actual knowledge that they are collecting personal information from children under 13.  

U.S. legislation uses two types of knowledge standards regarding an individual’s age: actual knowledge and constructive knowledge. 

Actual knowledge refers to operators of online services, websites, or products actually being aware of a user’s age. Under COPPA, operators are not required to ask the age of users or visitors in order to obtain actual knowledge that a user or visitor is a child, though an operator may still choose to ask. Even without asking, an operator may have actual knowledge of a user’s age if it receives information that allows it to determine the person’s age, like their grade in school. 

Constructive knowledge refers to operators of online services, websites, or products being legally presumed to know that a user is a child because they should have known the age of the user. Constructive knowledge can refer to instances in which an operator suspected the age of a user but decided against further inquiry. When a constructive knowledge standard is used, it may trigger an obligation for operators to ascertain the age of a user. 

The audience of a site or service determines the legal obligations of a company under COPPA. The COPPA Rule dictates that a website or online service may have one of three types of audiences: a general audience, a child-directed audience, or a mixed audience.

A child-directed site under COPPA must treat all individuals as if they are children. Thus, child-directed sites are effectively considered to know or reasonably should know that individuals are children. Sites and services that are child-directed or have a mixed audience must comply with the law’s privacy protections, while general audience sites with no actual knowledge of collecting personal information from children do not. Operators of mixed audience sites or services may implement an age screen to ensure they do not collect personal information from children or they can obtain verifiable parental consent for that collection. However, operators of mixed-audience sites that implement an age screen may not block children from participating altogether, and discovering that an individual is a child will trigger COPPA’s consent and notice requirements. General audience sites may also choose to use an age screen, but COPPA does not prohibit them from blocking children from participating. The FTC has made it clear that mixed audience sites and services are considered “a subset” of the child-directed category. 
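
As a highly simplified illustration of the audience-based logic just described – a sketch that only mirrors this summary, not a compliance determination – the mapping from audience category to obligation might look like the following; the function and category names are ours.

```python
def coppa_obligation(audience: str, knows_user_is_child: bool) -> str:
    """Simplified sketch of the audience categories summarized above.

    Real COPPA analysis turns on many more facts; this only mirrors the
    blog's description of child-directed, mixed, and general audiences.
    """
    if audience == "child-directed":
        # Treated as knowing all users are children: notice and verifiable
        # parental consent before collecting personal information.
        return "treat all users as children; obtain verifiable parental consent"
    if audience == "mixed":
        # May age-screen, then either obtain consent for under-13 users or
        # avoid collecting their personal information (no blanket blocking).
        return "age-screen; obtain consent or avoid collection for under-13 users"
    # General audience: COPPA obligations attach only with actual knowledge.
    if knows_user_is_child:
        return "obtain verifiable parental consent before collection"
    return "no COPPA consent obligation triggered"
```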

Constructive knowledge standards cast a wider net than actual knowledge standards, meaning that more online services should be concerned about compliance with kids’ and teens’ privacy protections. Constructive knowledge standards may sweep general audience service providers into new legal obligations and, therefore, increase the likelihood of a company implementing age gates and age assurance methods for the use of its online services. Constructive knowledge standards are also more difficult to interpret and implement, especially in attempting to distinguish between services used by 17-year-olds and those used by 18-year-olds. 

Confusion Created by New Standards 

U.S. lawmakers have trended toward requiring service providers to have a greater awareness of the age of individuals using their online services. Additionally, recent legislation has increasingly included privacy protections for minors older than 13. While some new laws have included actual knowledge standards, there has been an uptick in the use of more ambiguous language and constructive knowledge standards. For example, Louisiana SB 162 uses “reasonably believes or has actual knowledge” that an individual is a minor under the age of 16. The phrase “actual knowledge” clearly indicates an actual knowledge standard, while the inclusion of “reasonably believes” is confusing and may lead to ambiguity, because the law does not define how a service provider may be considered to “reasonably believe” that an individual is a minor. If “reasonably believes” means that the service provider does believe that an individual is a minor and is reasonable in doing so, then it is effectively actual knowledge, and it is unclear why this additional language is necessary. 

In contrast, the Maryland Age-Appropriate Design Code applies to entities that develop and provide online services, products, or features that are “reasonably likely to be accessed by children.” The Maryland law defines “reasonably likely to be accessed by children” as meaning that it is reasonable to expect that the online product would be accessed by children based on: (1) the online product being directed to children as defined in COPPA; (2) the online product being determined, based on competent and reliable evidence regarding audience composition, to be routinely accessed by a significant number of children; (3) the online product being substantially similar or the same as an online product that satisfies item (2) of this subsection; (4) the online product featuring advertisements marketed to children; (5) the covered entity’s internal research findings determining that a significant amount of the online product’s audience is composed of children; or (6) the covered entity knows or should have known that a user is a child. 

By providing these factors, the Maryland law elucidates that the “reasonably likely to be accessed by children” standard encompasses platforms that have actual knowledge or are directed to children. This standard seems very similar to COPPA’s standard for audience and knowledge of age. However, the Maryland law defines “child” as a consumer under 18, bringing more online services within the scope of the Maryland law than are currently within COPPA’s. While online services may be familiar with assessing whether they are within COPPA’s scope, it may be more complicated to assess whether an online service is reasonably likely to be accessed by anyone under 18 years old, because the internet use of a 17-year-old is likely more similar to an adult’s than to a 12-year-old’s. 

Similarly, Florida SB 262 uses “likely to be predominantly accessed by children” to describe a gaming or social media platform’s obligation to assess its service’s audience and defines “child” as any consumer under the age of 18. However, Florida SB 262 does not define this standard, leaving more ambiguity. The law states that civil penalties for non-compliance may be increased for violations that involve the data of a “known child,” and a platform that “willfully disregards” a child’s age is considered to have actual knowledge. This increase in civil penalties for having actual knowledge of a child’s age suggests that “likely to be predominantly accessed by children” functions as a constructive knowledge or directed-to-children standard. 

“Willfully disregards” is often a constructive knowledge standard that refers to an operator deciding not to inquire about an individual’s age despite circumstances suggesting the operator knew or should have known the individual was a child. Some recent legislation uses an actual knowledge standard and then states that willful disregard of a child’s age will be deemed actual knowledge, such as Florida SB 262 and the California Consumer Privacy Act. Other privacy laws combine an actual knowledge standard with a constructive knowledge standard, such as the Connecticut Data Privacy Act’s “actual knowledge or willfully disregards” that consumers are minors. 

Conclusion

Recent U.S. legislation has changed the youth privacy status quo from COPPA compliance alone to additional compliance with new state laws aimed at improving the privacy and safety of minors online. State legislatures have made impactful changes by using novel language to describe the requisite knowledge that an online service provider must have of a minor’s age. These laws have expanded privacy protections for teens and require more online services than ever before to comply with privacy protections for minors. Some state laws combine traditional actual knowledge standards with constructive knowledge standards, while others use new language altogether. State legislatures continue to prioritize work on new youth privacy legislation, and without consistent language and definitions for knowledge standards in privacy laws, compliance across jurisdictions will become increasingly difficult.

Access our resource for a summary of the knowledge standards currently used in enacted U.S. privacy and online safety laws.