AI Regulation in Latin America: Overview and Emerging Trends in Key Proposals
The widespread adoption of artificial intelligence (AI) continues to impact societies and economies around the world. Policymakers worldwide have begun pushing for normative frameworks to regulate the design, deployment, and use of AI according to their specific ethical and legal standards. In Latin America, some countries have joined these efforts by introducing legislative proposals and establishing other AI governance frameworks, such as national strategies and regulatory guidance.
This blog post provides an overview of AI bills in Latin America through a comparative analysis of proposals from six key jurisdictions: Argentina, Brazil, Mexico, Colombia, Chile, and Peru. Except for Peru, which has already approved the first AI law in the region and is set to approve secondary regulations, these countries have several legislative proposals at varying levels of maturity, some still nascent and others more advanced. Some of these countries have had simultaneous AI-related proposals under consideration in recent years; for example, Colombia and Mexico currently have three and two AI bills under review, respectively,[1] and both countries have archived at least four AI bills from previous legislative periods.
While it is unclear which bills may ultimately be enacted, this analysis provides an overview of the most relevant bills in the selected jurisdictions and identifies emerging trends and divergences in the region. Accordingly, the analysis was based on at least one active proposal from each country that either (i) targets AI regulation in general, rather than technology-specific or sector-specific regulation; (ii) has provisions and a scope similar to those found in other, more advanced proposals in the region; or (iii) appears to have more political support or is considered the ‘official’ proposal of the current administration. The latter is particularly the case for Colombia, for which the present analysis considers the proposal introduced by the Executive. Most of these proposals share the objective of regulating AI comprehensively through a risk-tiered approach. However, they differ in key elements, such as the design of institutional frameworks and the specific obligations imposed on “AI operators.”
Overall, AI bills in Latin America:
(i) have a broad scope and application, covering AI systems introduced or producing legal effects in national territory;
(ii) rely on an ethical and principle-based framework, with a heavy focus on the protection of fundamental rights and using AI for economic and societal progress;
(iii) have a strong preference for ex ante, risk-based regulation;
(iv) introduce institutional multistakeholder frameworks for AI governance, either by creating new agencies or assigning responsibility to existing ones, and
(v) have specific provisions for responsible innovation and controlled testing of AI technologies.
1. Principles-Based and Human Rights-Centered Approaches are a Common Theme Across LatAm AI Bills
Most bills under consideration are grounded in a similar set of guiding principles for the development and use of AI, focused on the protection of human dignity and autonomy, transparency and explainability, non-discrimination, safety, robustness, and accountability. Some proposals explicitly refer to the OECD’s AI Principles, focused on the transparency, security, and responsibility of AI systems, and to UNESCO’s AI Ethics Recommendation, which emphasizes a human-centered approach promoting social justice and environmental sustainability in AI systems.
All bills reviewed ground the development of AI in privacy or data protection as a guiding principle to indicate that AI systems must be developed under existing privacy obligations and comply with regulations in terms of data quality, confidentiality, security, and integrity. Notably, the Mexican bill and the Peruvian proposal – the draft implementing regulations for its framework AI law – also include privacy-by-design as a guiding principle for the design and development of AI.
The inclusion of a principle-based approach is flexible and provides room for future regulations and standards, considering the evolution of AI technologies. Based on these guiding principles, most bills authorize secondary regulation by a competent authority to expand on the provisions related to AI user rights and obligations.
In addition, most bills concur in key elements of the definition of “AI system” and “AI operators.” Brazil’s and Chile’s proposals have a similar definition of an AI system to that found in the European Union’s Artificial Intelligence Act (EU AI Act), defining it as a ‘machine-based system’ with varying levels of autonomy that, with implicit or explicit objectives, can generate outputs such as recommendations, decisions, predictions, and content. Both countries’ bills also define AI operators as the “supplier, implementer, authorized representative, importer, and distributor” of an AI system.
Other bills include a more general definition of AI as a ‘software’ or ‘scientific discipline’ that can perform operations similar to human intelligence, such as learning and logical reasoning – an approach reminiscent of the definition of AI in Japan’s new law. Peru’s regulation lacks a definition for AI operators but includes one for AI developers and implementers, while Colombia refers to “AI operators” in terms similar to those found in Brazil and Peru, though it also includes users within its definition of “AI operators”.
A common feature in the bills covered is their grounding in the protection of fundamental rights, particularly the rights to human dignity and autonomy, protection of personal data, privacy, non-discrimination, and access to information. Some bills go as far as to introduce a new set of AI-related rights to specifically protect users from harmful interactions and impacts created by AI systems.
Brazil’s proposal offers a salient example for this structure, introducing a chapter for the rights of individuals and groups affected by AI systems, regardless of their risk classification. For AI systems in general, Brazil’s proposal includes:
- The right to prior information about an interaction with an AI system, in an accessible, free-of-charge, and understandable format;
- The right to privacy and the protection of personal data, following the Lei Geral de Proteção de Dados Pessoais (LGPD) and relevant legislation;
- The right to human determination and participation in decisions made by AI systems, taking into account the context, level of risk, and state-of-the-art technological development;
- The right to non-discrimination and correction of direct, indirect, unlawful, or abusive discriminatory bias.
Concerning “high-risk” systems or systems that produce “relevant legal effects” to individuals and groups, Brazil’s proposal includes:
- The right to an explanation of a decision, recommendation, or prediction made by an AI system;
- Subject to commercial and industrial secrecy, the required explanation must contain sufficient information on the operating characteristics; the degree and level of contribution of the AI to decision-making; the data processed and its source; the criteria for decision-making, considering the situation of the individual affected; the mechanisms through which the person can challenge the decision; and the level of human supervision.
- The right to challenge and review the decision, recommendation, or prediction made by the system;
- The right to human intervention or review of decisions, taking into account the context, risk, and state-of-the-art technological development;
- Human intervention will not be required if it is demonstrably impossible or involves a disproportionate effort. The AI operator will implement effective alternative measures to ensure the re-examination of a contested decision.
Brazil’s proposal also includes an obligation that AI operators must provide “clear and accessible information” on the procedures to exercise user rights, and establishes that the defense of individual or collective interests may be brought before the competent authority or the courts.
Mexico’s bill also introduces a chapter on “digital rights”. While these are not as detailed as those in the Brazilian proposal, the chapter includes innovative ideas, such as the “right to interact and communicate through AI systems”. The proposed set of rights also incorporates the right to access one’s data processed by AI; the right to be treated equally; and the right to data protection. The inclusion of these rights in the AI bill arguably makes little practical difference, considering most of them are already explicitly recognized at the constitutional and legal level. Furthermore, the Mexican bill introduces a catalog of rights and principles but lacks specific safeguards or mechanisms for their exercise in the context of AI. However, their inclusion signals policymakers’ intention to govern and regulate AI primarily through a human-rights-based perspective.
2. Most Countries in LatAm Already Have Comprehensive Data Protection Laws, Which Include AI-relevant Provisions
All countries analyzed have adopted comprehensive data protection laws applying to any processing of personal data regardless of the technology involved – some for decades, like Argentina, and some more recently, like Brazil and Chile. Except for Colombia’s, the data protection laws in these countries include an individual’s right not to be subject to decisions based solely on automated processing. Argentina, Peru, Mexico, and Chile recognize rights related to automated decision-making, prohibiting such activity without human intervention when it produces unwanted legal effects or significantly impacts individuals’ interests, rights, and freedoms, and is intended for profiling. These laws focus on the potential for profiling through automation: the data protection laws of Peru, Mexico, and Colombia include a specific right prohibiting such activity, while Argentina prohibits profiling by courts or administrative authorities.
In contrast, Brazil’s LGPD recognizes the right to request the review of decisions made solely on automated processing that affect an individual’s interests, including profiling. While the intended purpose may be similar, the right under the Brazilian framework appears to be more limited, where individuals have the right to request review after the profiling occurs, but not necessarily to prevent or oppose this type of processing. Nonetheless, a significant aspect of the right proposed under Brazil’s AI bill is the explicit reference to human intervention in the review, an element absent from the same right under the LGPD.
While AI can enable outcomes beyond profiling, it is noteworthy that most data protection laws in these countries already regulate AI-powered automated decision-making (ADM) and profiling to some degree, whether or not the AI bills under consideration in the region are ultimately adopted.
3. Risk-Based Regulation is Gaining Traction
All of the reviewed proposals adopt a risk-based approach to regulating AI, seemingly drawing at least some influence from the EU AI Act. These frameworks generally classify AI systems along a gradient of risk, from minimal to unacceptable, and introduce obligations proportional to the level of risk. While the specific definitions and regulatory mechanisms vary, the proposals articulate similar goals of ensuring safe, ethical, and trustworthy development and use of AI.
Brazil’s proposal is one of the most detailed in this respect, mandating a preliminary risk assessment for all systems before their introduction to the market, deployment, or use. The initial assessment must evaluate the system’s purpose, context, and operational impacts to determine its risk level. Similarly, Argentina’s bill requires a pre-market assessment to identify ‘potential biases, risks of discrimination, transparency, and other relevant factors to ensure compliance’.
Notably, most proposals converge in the definition and classification of AI systems posing “unacceptable” or “excessive” risk and prohibit their development, commercialization, or deployment. Except for Mexico, whose proposal does not contain an explicit ban, the bills expressly prohibit AI systems posing “unacceptable” (Argentina, Chile, Colombia, and Peru) or “excessive” (Brazil) risks. The proposals examined generally describe systems under this classification as “incompatible with the exercise of fundamental rights” or as posing a “threat to the safety, life, and integrity” of individuals.
For instance, Mexico’s bill defines AI systems with “unacceptable” risk as those that pose a “real, possible, and imminent threat” and involve “cognitive manipulation of behavior” or “classification of individuals based on their behavior and socioeconomic status, or personal characteristics”. Similarly, Colombia’s bill defines these systems as those “capable of overriding human capacity, designed to control or suppress a person’s physical or mental will, or used to discriminate based on characteristics such as race, gender, orientation, language, political opinion, or disability”.
Brazil’s proposal also prohibits AI systems with “excessive” risk, and sets similar criteria to those found in other proposals in the region and the EU AI Act. In that sense, the proposal refers to AI systems posing “excessive” risk as any with the following purposes:
- Manipulating individual or group behavior in a way that causes harm to health, safety, or fundamental rights;
- Exploiting vulnerabilities of individuals or groups to influence behavior with harmful consequences;
- Profiling individuals’ characteristics or behaviors, including past criminal behavior, to assess the likelihood of committing offenses;
- Producing, disseminating, or facilitating material that depicts or promotes sexual exploitation or abuse of minors;
- Enabling public authorities to assess or classify individuals through universal scoring systems based on personality or social behavior in a disproportionate or illegitimate manner;
- Operating as autonomous weapon systems;
- Conducting real-time remote biometric identification in public spaces, unless strictly limited to scenarios of criminal investigation or search of missing persons, among other listed exceptions.
Concerning the classification of “high-risk” systems, some AI bills define them based on certain domains or sectors, while others have a more general or principle-based approach. Generally, high-risk systems are left to be classified by a competent authority, allowing flexibility and discretion from regulators, but subject to specific criteria, such as evaluating a system’s likelihood and severity of creating adverse consequences.
For instance, Brazil’s bill includes at least ten criteria[2] for the classification of high-risk systems, such as whether the system unlawfully or abusively produces legal effects that impair access to public or essential services; whether it lacks the transparency, explainability, or auditability needed for oversight; or whether it endangers human health, be it physical, mental, or social, either individually or collectively.
Meanwhile, the Peruvian draft regulations include a list of specific uses or sectors in which the deployment of any AI system is automatically considered high-risk, such as biometric identification and categorization; security of critical national infrastructure; educational admissions and student evaluations; or employment decisions.[3] Under the draft regulations, the classification of “high-risk” systems and their corresponding obligations may be evaluated and reassessed by the competent authority, consistent with the “risk-based security standards principle” under the country’s brief AI law, which mandates the adoption of ‘security safeguards in proportion to a system’s level of risk’.
Colombia’s bill incorporates a mixed approach for high-risk classification. It includes general criteria such as those systems that may “significantly impact fundamental rights”, particularly the rights to privacy, freedom of expression, or access to public information; while also including sensitive or domain-based applications, such as any system “enabling automated decision-making without human oversight that operate in the sectors of healthcare, justice, public security, or financial and social services”.
Mexico’s proposal defines “high-risk” systems as those with the potential to significantly affect public safety, human rights, legality, or legal certainty, but omits additional criteria for their classification. A striking distinction from Mexico’s proposal is that it seems to restrict the use and deployment of these systems to public security entities and the Armed Forces (see Article 48 of the Bill).
The Brazilian bill and Peruvian draft implementing regulations have chapters covering governance measures, describing specific obligations for developers, deployers, and distributors of all AI systems, regardless of their risk level. In addition, most bills include specific obligations for entities operating “high-risk” systems, such as performing comprehensive risk assessments and ethical evaluations; assuring data quality and bias detection; extensive documentation and record-keeping obligations; and guiding users on the intended use, accuracy, and robustness of these systems. Brazil’s bill indicates the competent authority will have discretion to determine cases under which some obligations may be relaxed or waived, according to the context in which the AI operator acts within the value chain of the system.
Under Brazil’s AI bill, entities deploying high-risk systems must also submit an Algorithmic Impact Assessment (AIA) along with the preliminary assessment, which must be conducted following best practices. In certain regulated sectors, the Brazilian authority may require the AIA to be independently verified by an external auditor.
Chile’s proposal outlines mandatory requirements for high-risk systems, which must implement a risk management system grounded in a “continuous and iterative process”. This process must span the entire lifecycle of the system and be subject to periodic review, ensuring failures, malfunctions, and deviations from intended purpose are detected and minimized.
Argentina’s proposal requires all public and private entities that develop or use AI systems to register in a National Registry of Artificial Intelligence Systems, regardless of the level of risk. The registration must include detailed information on the system’s purpose, intended use, field of application, algorithmic structure, and implemented security safeguards. Similarly, Colombia’s bill includes an obligation to conduct fundamental rights impact assessments and create a national registry for high-risk AI systems.
Fewer proposals have specific, targeted provisions for “limited-risk” systems. For instance, Colombia’s bill defines these systems as those that, ‘without posing a significant threat to rights or safety, may have indirect effects or significant consequences on individuals’ personal or economic decisions’. Examples of these systems include AI commonly used for personal assistance, recommendation engines, synthetic content generation, or systems that simulate human interaction. Under Mexico’s proposal, “limited-risk” systems are those that ‘allow users to make informed decisions; require explicit user consent; and allow users to opt out under any circumstances’.
In addition, the Colombian proposal explicitly indicates that AI operators employing these systems must meet transparency obligations, including disclosure of interaction with an AI tool; provide clear information about the system to users; and allow for opt-out or deactivation. Similarly, under the Chilean proposal, a transparency obligation for “limited-risk” AI systems includes informing users exposed to the system in a timely, clear, and intelligible manner that they are interacting with an AI, except in situations where this is “obvious” due to the circumstances and context of use.
Finally, Colombia’s bill describes low-risk systems as those that pose minimal risk to the safety or rights of individuals and thus are subject to general ethical principles, transparency requirements, and best practices. Such systems may include those used for administrative or recreational purposes without ‘direct influence on personal or collective decisions’; systems used by educational institutions and public entities to facilitate activities which do not fall within the scope of any of the other risk levels; and systems used in video games, productivity tools, or simple task automation.
4. Pluri-institutional and Multistakeholder Governance Frameworks are Preferred
A key element shared across the AI legislative proposals reviewed is the establishment of multistakeholder AI governance structures aimed at ensuring responsible oversight, regulatory clarity, and policy coordination.
Notably, Brazil, Chile, and Colombia reflect a shared commitment to institutionalize AI governance frameworks that engage public authorities, sectoral regulators, academia, and civil society. However, they differ in the level of institutional development, the distribution of oversight functions, and the legal authority vested in enforcement bodies. All three countries envision coordination mechanisms that integrate diverse actors to promote coherence in national AI strategies. For instance, Brazil proposes the creation of the National Artificial Intelligence Regulation and Governance System (SIA). This system would be coordinated by the National Data Protection Authority (ANPD) and composed of sectoral regulators, a Permanent Council for AI Cooperation, and a Committee of AI Specialists. The SIA would be tasked with issuing binding rules on transparency obligations, defining general principles for AI development, and supporting sectoral bodies in developing industry-specific regulations.
Chile outlines a governance model centered around a proposed AI Technical Advisory Council, responsible for identifying “high-risk” and “limited-risk” AI systems and advising the Ministry of Science, Technology, Knowledge, and Innovation (MCTIC) on compliance obligations. While the Council’s role is essentially advisory, regulatory oversight and enforcement are delegated to the future Data Protection Authority (DPA), whose establishment is pending under Chile’s recently enacted personal data protection law.
Colombia’s bill designates the Ministry of Science, Technology, and Innovation as the lead authority responsible for regulatory implementation and inter-institutional coordination. The Ministry is tasked with aligning the law’s execution with national AI strategies and developing supporting regulations. Additionally, the bill grants the Superintendency of Industry and Commerce (SIC) specific powers to inspect and enforce AI-related obligations, particularly concerning the processing of personal data, through audits, investigations, and preventive measures.
5. Fostering Responsible Innovation Through Sandboxes, Innovation Ecosystems, and Support for SMEs
Some proposals emphasize the dual objectives of regulatory oversight and the promotion of innovation. A notable commonality is their inclusion of controlled testing environments and regulatory sandboxes for AI systems aimed at facilitating innovation, promoting responsible experimentation, and supporting market access, particularly for startups and small-scale developers.
The bills generally empower competent and sectoral authorities to operate AI regulatory sandboxes, on their own initiative or through public-private partnerships. The sandboxes operate under pre-agreed testing plans; some offer temporary exemptions from administrative sanctions, while others maintain liability for harms resulting from sandbox-based experimentation.
Proposals in Brazil, Chile, Colombia, and Peru also include relevant provisions to support small-to-medium enterprises (SMEs) and mandate the operation of “innovation ecosystems.” For instance, Brazil’s bill requires sectoral authorities to follow differentiated regulatory criteria for AI systems developed by micro-enterprises, small businesses, and startups, including their market impact, user base, and sectoral relevance.
Similarly, Chile complements its proposed sandbox regime with priority access for smaller companies, capacity-building initiatives, and their representation in the AI Technical Advisory Council. This inclusive approach aims to reduce entry barriers and ensure that small-scale innovators have both voice and access within the AI regulatory ecosystem.
Colombia’s bill includes public funding programs to support AI-related research, technological development, and innovation, with a focus on inclusion and accessibility. Although not explicitly targeted at SMEs, these incentives create indirect benefits for emerging actors and academia-led startups.
Lastly, Peru promotes the development of open-source AI technologies to reduce systemic entry barriers and foster ecosystem efficiency. The regulation also mandates the promotion and financing of AI research and development through national programs, universities, and public administration programs that directly benefit small developers and innovators.
6. The Road Ahead for Responsible AI Governance in LatAm
Latin America is experiencing a wave of proposed legislation to govern AI. While some countries have several proposals under consideration, with some seemingly making more progress toward adoption than others,[4] a comparative review shows they share common elements and objectives. The proposed legislative landscape reveals a shared regional commitment to regulating AI in a manner that is ethical, human-centered, and aligned with fundamental rights. Most of the bills examined lay the groundwork for comprehensive AI governance frameworks based on principles and new AI-related rights.
In addition, all proposals classify AI systems by their level of risk, on a scale ranging from minimal or low risk up to systems posing “unacceptable” or “excessive” risk, and introduce concrete mechanisms and obligations proportional to that classification, with varying but similar requirements to perform risk and impact assessments and to meet transparency obligations. Most bills also designate an enforcement authority to coordinate with sectoral agencies in issuing further regulations, especially to extend the criteria for, or designate additional types of, systems considered “high-risk”.
Along this normative and institutional framework, most AI bills in Latin America also reflect a growing recognition of the need to balance regulatory oversight with flexibility, reflected in the adoption of controlled testing environments and tailored provisions for startups and SMEs.
Except for Brazil and Peru, much of the legislative activity in the countries covered remains in early stages. However, the AI bills reviewed offer insight into how key jurisdictions in the region are approaching AI governance, framing it as both a regulatory challenge and an opportunity for inclusive digital development. As these initiatives evolve, key questions around institutional capacity, enforcement, and stakeholder participation will shape how effectively Latin America can build trusted and responsible AI frameworks.
1. In Mexico, two proposals concerning AI regulation have been introduced, one in the Senate and another in the Chamber of Deputies. Both were put forth by representatives of MORENA, the political party holding a supermajority in Congress. Additionally, the Senate is considering five proposals to amend the Federal Constitution, aiming to grant Congress the authority to legislate on AI matters. Similarly, in Colombia, there are two proposals under the Senate’s consideration and one recently introduced in the Chamber of Deputies.
2. 1) The system unlawfully or abusively produces legal effects that impair access to public or essential services; 2) it has a high potential for material or moral harm or for unlawful discriminatory bias; 3) it significantly affects individuals from vulnerable groups; 4) the harm it causes is difficult to reverse; 5) there is a history of damage linked to the system or its context of use; 6) the system lacks transparency, explainability, or auditability, impairing oversight; 7) it poses systemic risks, such as to cybersecurity or the safety of vulnerable groups; 8) it presents elevated risks despite mitigation measures, especially in light of anticipated benefits; 9) it endangers integral human health, whether physical, mental, or social, either individually or collectively; 10) it may negatively affect the development or integrity of children and adolescents.
3. Other uses or sectors included in the high-risk category are: access to and prioritization within social programs and emergency services; credit scoring; judicial assistance; health diagnostics and patient care; and criminal profiling, victimization risk analysis, emotional state detection, evidence verification, or criminal investigation by law enforcement.
4. Proposals from Brazil and Chile, for example, have gone through more extensive debate and are considered the most advanced in the region. See El País, “América Latina ante la IA: ¿regulación o dependencia tecnológica?”, March 2025.