Minding Mindful Machines: AI Agents and Data Protection Considerations
We are now in 2025, the year of AI agents. Leading large language model (LLM) developers (including OpenAI, Google, and Anthropic) have released early versions of technologies described as “AI agents.” Unlike earlier automated systems, and even LLMs, these systems have autonomy over how to achieve complex, multi-step tasks, such as navigating a user’s web browser to take actions on their behalf. This could enable a wide range of useful or time-saving tasks, from making restaurant reservations and resolving customer service issues to coding complex systems.
At the same time, AI agents raise heightened, and sometimes novel, privacy and data protection risks related to the collection and processing of personal information. For organizations seeking to develop or deploy AI agents in commercial settings, they also present new technical challenges related to testing and human oversight.
Specifically, this Issue Brief explores:
- Part 1: Definitions. While agents are not new, emerging definitions across industry describe them as AI systems capable of completing more complex, multi-step tasks, such as shopping online or making hotel reservations, and of exercising greater autonomy over how to achieve those goals.
- Part 2: Emerging Data Protection Considerations. Advanced AI agents raise, or heighten, many of the same data protection questions as LLMs, such as challenges related to the collection and processing of personal data for model training, operationalizing data subject rights, and ensuring adequate explainability. In addition, the unique design elements and characteristics of agents may exacerbate existing compliance challenges or create novel ones, including around the collection and disclosure of personal data, security vulnerabilities, the accuracy of outputs, barriers to alignment, and explainability and human oversight.