Processing of Personal Data for AI Training in Brazil: Takeaways from ANPD’s Preliminary Decisions in the Meta Case
Data Protection Authorities (DPAs) across the globe are currently wrestling with fundamental questions raised by the emergence of generative AI and its compatibility with data protection laws. A key issue is the legal basis under which companies may process personal data to train AI models. Another is how individuals’ rights with regard to their personal data can be safeguarded, and how to mitigate the potential risks arising from the complex and novel processing of personal data that generative AI in particular entails.
Brazil’s Autoridade Nacional de Proteção de Dados (ANPD) recently reviewed these issues in a set of Preliminary Decisions following an inspection into the lawfulness of Meta’s processing of personal data for the training of AI models. Notably, the DPA initially ordered Meta, under an emergency procedure, to suspend this processing, citing potential harm and irreparable damage to users. The emergency order was upheld after a first challenge, but it was subsequently reversed after the DPA was satisfied with the company’s level of cooperation and the measures it proposed. However, the main inspection process continues.
Although preliminary, the ANPD’s decisions contain insights into the assessment criteria that DPAs are starting to deploy when examining the impact of generative AI on the rights and freedoms of individuals¹ and the compatibility of this new technology with data protection law. In particular, the salient issues that surface relate to:
- Relying on “legitimate interests” as a lawful ground for processing publicly available personal data for AI training, paired with providing individuals an accessible right to opt out from such processing;
- Meaningful transparency;
- The scope of what constitutes “sensitive data” and the lawfulness of processing it in this context;
- Protections for children’s personal data.
In this blog, we summarize the procedural steps that have occurred, from the initial suspension (Round 1) to the upholding of that decision (Round 2) and the current proposed action plan (Round 3), including the ANPD’s reasoning at each stage, and offer our initial reflections and key takeaways, including what the case means for the ANPD’s enforcement priorities and the future of “legitimate interests.”
Round 1: ANPD Suspends Meta’s Processing for AI Training Purposes
The ANPD issued a preventive Measure on July 2, 2024, requiring the immediate suspension of the processing of personal data by Meta for the purpose of training its generative AI model. The decision came after Meta announced a change in its privacy policy indicating that it could use “publicly available” information collected from users to train and enhance its AI system starting June 26, 2024. The ANPD initiated an ex-officio inspection (Proceeding No. 00261.004529/2024-36) and preliminarily ordered a suspension of that processing activity.
In this initial order, the ANPD determined that preventive measures were necessary to avoid irreparable damage or serious risk to individuals, and in turn, ordered a temporary suspension of the processing activity by Meta. The decision adopted the legal reasoning of Vote 11/2024/DIR-MW/CD, presented by Director Miriam Wimmer, which was supported by the ANPD’s General Coordination of Inspection (CGF) technical report proposing the preventive measure². In its deliberative vote, the Board determined Meta had potentially violated several provisions of the country’s general data protection law (LGPD) due to:
- the ineffective use of “legitimate interest” as a legal basis for processing personal data for AI training purposes;
- a lack of transparency and disclosure to users about its processing operations involving user data;
- limiting the exercise of data subjects’ rights; and
- processing of personal data of children and adolescents without proper safeguards.
Hurdles for relying on “legitimate interests”: processing sensitive data and the legitimate expectations of users
In its July 2 Order, the ANPD determined “legitimate interests” were not an adequate basis for processing personal data for Meta’s AI training activity, because the processing may have included users’ sensitive data. Of note, the LGPD requires that all processing of personal data be based on a lawful ground (Article 7), similar to the EU’s General Data Protection Regulation but with some variations. Meta’s privacy policy originally stated it relied on the “company, users, and third parties’ legitimate interest” to process any personal data from publicly available sources, including images, audio, texts, and videos. The ANPD found that such information might reveal sensitive information about an individual’s political, religious, and sexual preferences, among other aspects of their personality, and thus qualify as “sensitive data” under Article 5 of the LGPD. Article 5, section II, defines “sensitive personal data” as “personal data on racial or ethnic origin, religious conviction, political opinion, union affiliation or religious, philosophical or political organization, health or sexual life data, genetic or biometric data, when linked to a natural person.”
Under Article 11 LGPD, the processing of “sensitive data” can only be carried out with the data subject’s consent, or if the processing is “indispensable” for a set of specific scenarios, such as:
- Compliance with a legal or regulatory obligation;
- Processing data necessary for the execution of public policies provided by law or regulation;
- Conducting studies by a research body, ensuring that data is anonymized where possible;
- Processing that is part of the regular exercise of rights, including by contract or in judicial, administrative, or arbitral proceedings under the Arbitration Law;
- Protecting the life or physical safety of the data subject or third parties;
- Health protection, exclusive to a procedure performed by health professionals, health services, or the health authority;
- Guaranteeing fraud prevention and the security of the data subject in processes of identification and authentication of registration in electronic systems, safeguarding the rights mentioned in Article 9, and except where the data subject’s fundamental rights and freedoms requiring the protection of personal data prevail.
Even if Meta’s processing activities did not feed sensitive data into its model, the ANPD determined the company’s reliance on “legitimate interests” as a lawful ground would not be sufficient unless it met the legitimate expectations of the data subjects.
Under Article 10, Section II of the LGPD, a controller must be able to demonstrate the processing of personal data for the intended purpose “respects the legitimate expectations and fundamental rights and freedoms” of the data subjects. In this case, the ANPD argued data subjects could not reasonably expect their personal information would be used to train Meta’s AI model – given that the data was primarily shared for networking with family and friends and included information posted long before the policy change. One point that was not addressed in the decision was whether the source of the publicly available personal data used for AI training, on-platform or off-platform, would make a difference in such an assessment.
To adequately meet these expectations, the ANPD determined a controller must give clear and precise information to data subjects about how it intends to use their data and provide effective mechanisms for the exercise of data subjects’ rights. As explained below, the ANPD determined Meta’s new policy was insufficiently transparent and potentially obstructed those rights – two potential violations of the LGPD.
Transparency must also extend to how changes in the Privacy Policy are communicated
The ANPD noted that even if Meta’s legitimate interest was adequate, the June change to its privacy policy would nonetheless violate the principle of transparency. Article 10(2) of the LGPD requires data controllers to “adopt measures to guarantee the transparency of data processing based on its legitimate interest.” The ANPD found that Meta failed to provide clear, specific, and broad communication about the privacy policy change. Citing its Guidance on Legitimate Interest, the agency noted that, under this legal hypothesis, data controllers must provide information about the processing clearly and extensively and identify the duration and purpose of the processing, as well as data rights and channels available for their exercise.
Importantly, the agency highlighted the differences in the company’s communication with Brazilian users compared to those in the European Union (EU): EU users were notified about the privacy policy change via email and app notifications, while Brazilian users were not notified and could only see the update via Meta’s Privacy Policy Center. In addition, the CGF’s Technical Report, as cited in Vote 11/2024/DIR-MW/CD, highlighted how the failure to provide transparency heightened information asymmetries between the platform and individuals, especially those who are not users but whose personal data might have been employed for AI training.
Exercising Data Subjects’ Rights must be straightforward and involve few steps
The ANPD found that Meta’s opt-out mechanism was difficult to use and required users to take several steps before successfully opting out of the processing. The CGF’s Technical Report highlighted that, unlike EU users, Brazilians were required to go through eight steps to access the opt-out form, which was hosted in a complex interface. The ANPD drew on its Cookie Guidelines to demonstrate that companies must provide intuitive mechanisms and tools to help users exercise their rights and assert control over their data, as well as on a previous recommendation made to Meta in 2021, in which the ANPD specifically recommended the company adjust its privacy policy for full compliance with the LGPD. The agency cited the lack of clear communication and the difficulty of exercising the right to opt out as particularly alarming, given that Meta’s processing operations also affect minors.
Processing Data of Children and Adolescents must be done in their “best interest”
The LGPD provides special protection for the data of children and adolescents. Under Article 14, any processing of the personal data of children and adolescents must be carried out in the “best interests” of the minor, and Article 14, Section 6, requires information on the processing to be “provided in a simple, clear and accessible manner, taking into account the physical-motor, perceptual, sensory, and intellectual and mental characteristics of the user.” The ANPD found Meta potentially failed to comply with this obligation and to demonstrate its legitimate interest was adequately balanced against the “best interest” of Brazilian children and adolescents.
While the LGPD does not prohibit reliance on “legitimate interest” to process the personal data of children, this activity must still satisfy the requirement that the processing is in the best interest of the child. The ANPD cited its Guidelines on Processing Personal Data Based on Legitimate Interest to indicate that a controller must perform and document a “balancing test” demonstrating (i) what it considers the “best interest” of the children; (ii) the criteria used to weigh the children’s “best interest” against the controller’s legitimate interest; and (iii) that the processing does not disproportionately impact the rights of children or pose excessive risk. In this case, the ANPD pointed out that Meta’s new policy was silent on how the processing for AI training was beneficial for children and adolescents and noted it did not include any measures to mitigate potential risks.
Round 2: Meta Requests Reconsideration, ANPD Upholds the Suspension
After notification of the July 2 Order, Meta filed for reconsideration to (i) fully lift the suspension or, in the alternative, (ii) obtain a deadline extension to certify the suspension of the processing of personal data for AI training in Brazil. In response to the request to lift the suspension, the ANPD upheld its original decision on the basis that Meta did not provide sufficient documentation to demonstrate it had adopted measures to mitigate the risks of harm and irreparable damage to data subjects.
The July 10 Decision was supported by the reasoning of Vote 19/2024/DIR-JR/CD issued by Director Joacil Rael. Although Meta’s intention to implement specific mitigating measures was considered, the Board determined the company failed to specify a date for putting the proposed actions into practice or show evidence that they were in effect.
In that sense, a full reversal of the suspension would not be considered until the company presented satisfactory documentation indicating a specific ‘work plan’ and a timeframe for its implementation. Considering Meta’s alternative request, the ANPD granted a deadline extension for the company to certify it had suspended the relevant processing operations. The extension was based on the argument that it was “technically unfeasible” to confirm full suspension of the processing within the original deadline (five working days from notification of the July 2 Order) – although the specific reasons for this argument were not included in the Decision. The agency granted Meta five additional business days to present its compliance plan and postponed the analysis on the merits of fully lifting the suspension until that later date.
Round 3: The Proposed Action Plan allows Meta to Resume Processing for AI Training while Waiting for the Conclusion of the Full Inspection Process
After Meta provided the requested documentation, the ANPD reconsidered the company’s request to lift the suspension entirely. In its August 30 Decision, the agency determined the company’s compliance plan adequately improved transparency and allowed for the exercise of data subjects’ rights. The Board lifted the general suspension and allowed Meta to continue processing personal data for AI training, except for data from individuals under the age of 18.
Addressing its prior concerns about transparency and potential obstruction of data rights, the ANPD considered Meta’s revised plan sufficient to eliminate the previously identified risk of harm. Meta agreed to undertake several changes to its Privacy Policy, app, and website banners to better communicate the purposes of the processing and provide easier ways to opt out of AI training. Full details of Meta’s compliance plan are not given in the decision; however, some of the changes noted by the ANPD include:
- sending email and app notifications to users at least 30 days before the beginning of the new processing, and
- providing a link with easy access to the form to opt out of the processing for both users and non-users of Meta’s platforms. It is not entirely clear how accessible and straightforward the opt-out process must be under LGPD requirements, though the ANPD’s decision suggests the change was adequate because the process was altered to involve “fewer clicks.”
In lifting the suspension, the ANPD accepted Meta’s commitments to adopt safeguards to mitigate risks, including the implementation of pseudonymization techniques during the pre-training phase of its AI model and the adoption of security measures to prevent re-identification attacks. These measures, together with the proposed changes to communication and opt-out mechanisms, were sufficient for the ANPD to lift the suspension, except for the processing of personal data concerning minors. This addressed the ANPD’s earlier concern that the company’s reliance on “legitimate interests” to process personal data to train its generative AI tool did not sufficiently balance the risk to data subjects.
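For readers unfamiliar with what a pseudonymization safeguard looks like in practice, the minimal sketch below illustrates one common approach: replacing direct identifiers with keyed hashes before records enter a training pipeline. This is purely illustrative – the decision does not describe Meta’s actual technique, and the key, field names, and function here are assumptions.

```python
import hashlib
import hmac

# Assumption: a secret key held separately from the data. If the key is
# rotated or destroyed, the pseudonyms cannot be linked back to users.
SECRET_KEY = b"store-and-rotate-this-key-separately"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash resists dictionary-style
    re-identification attacks as long as the key remains secret.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: the direct identifier is replaced, while the
# content used for training is retained.
record = {"user_id": "12345", "post_text": "publicly shared post"}
record["user_id"] = pseudonymize(record["user_id"])
```

The design point the ANPD’s concern turns on is that pseudonymization alone is reversible in principle, which is why the decision pairs it with separate security measures against re-identification.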
Some Reflections
Importantly, the ANPD stresses that the legality of relying on “legitimate interests” as a lawful basis for AI training purposes under the LGPD requires further examination. Given the complexity and multifaceted nature of the issue, the authority leaves the question open for examination in administrative proceedings as part of the ongoing inspection, subject to further evidence that Meta’s safeguards and techniques effectively address the risks associated with processing personal information, including sensitive data, for AI training. Notably, the LGPD is also one of the few data protection laws to explicitly adopt a special level of protection for children’s and adolescents’ data through the “best interest of the child” standard, which requires a detailed weighing of these individuals’ interests against those of the controller.
The goal of the ANPD’s processing suspension was to prevent Meta from processing personal data to train its generative AI, as the authority considered that the company had given insufficient consideration to potential violations of data subjects’ rights and freedoms. The nature of the suspension, as a preliminary measure, was to prevent ongoing harm from violations identified during the inspection. In its initial decision, the ANPD’s key concerns were the proper implementation of the balancing test required to rely on legitimate interests to process personal data for AI training, as well as ensuring sufficient internal controls to mitigate the associated risks. The authority noted that the complexity of the legal question, combined with the technicality of the issue and the information asymmetries between the company, users, and non-users, justified a preliminary suspension of the processing.
It is also important to highlight that not all issues identified in the original order are addressed in the reconsideration. For instance, it is not clear whether Meta had already processed the personal information of users with public accounts to train its AI model before the suspension, including sensitive data and data from children and adolescents, and what would happen to that data. Of note, the decision reversing the original order does not include specific details about the steps the company committed to take to effectively comply with the ongoing prohibition on processing children’s and adolescents’ data.
The ANPD was clear nonetheless that Meta is committed to cooperating with the authority to implement its compliance plan, and that such cooperation includes providing evidence of the security measures and internal controls to be adopted.
The ANPD’s initial suspension relied on finding potential violations of the LGPD, most significantly, for the potential lack of a valid lawful ground for processing both newly acquired and previously provided personal data to train the generative AI model. This determination, as well as the criteria taken into account to reverse it, involves a major question that can significantly impact the future of data processing in the context of AI training – a decision that may have a global impact as more authorities worldwide are inevitably faced with similar scenarios given the proliferation of generative AI technologies. The final determinations of this case will provide critical insight into the immediate future of data protection enforcement in Brazil and elsewhere.
1. Brazil has 102 million active Facebook users.
2. According to article 4, sec. II, of the ANPD’s Internal Regulations, Directors can issue a vote when assigned the role of Rapporteur of a matter before the Board of Directors. Under article 17, sec. V, of the Internal Regulations, the CGF may propose to the Board of Directors the adoption of preventive measures and the setting of a daily fine for non-compliance.