
16th Annual Privacy Papers for Policymakers

Webinar Series: March 4 and March 11, 2026 @ 12:00 pm ET

Overview

FPF is excited to announce the 16th Annual Privacy Papers for Policymakers winners! This award recognizes leading privacy scholarship relevant to policymakers in the U.S. Congress, at U.S. federal agencies, and at international data protection authorities.

This year’s winning authors will present their work in a two-part webinar series, with the first session on March 4 focused on AI and the second session on March 11 focused on Privacy. Attendees must register separately for each webinar. Visit the About the Webinar Series section to learn how to register for both.

About the Privacy Papers for Policymakers Award

The selected papers highlight important work that analyzes current and emerging privacy issues and proposes achievable short-term solutions or new analytical approaches that could lead to real-world policy solutions.

From the many nominated papers, a diverse team of FPF judges selected the winners for offering solutions relevant to policymakers in the U.S. and abroad. To learn more about the submission and review process, read our Call for Nominations.

To learn more about the 2024 Annual Privacy Papers for Policymakers, click here.

About the Winning Papers

The winners of the 16th Annual Privacy Papers for Policymakers Award are listed below. To learn more about the papers, judges, and authors, check back for our 2025 PPPM Digest coming soon.  

AI Agents and Memory: Privacy and Power in the Model Context Protocol (MCP) Era, by Matt Steinberg and Prem M. Trivedi

AI and Doctrinal Collapse, by Alicia Solow-Niederman

AI As Normal Technology, by Arvind Narayanan and Sayash Kapoor

Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms, by Christina Lee

Can Consumers Protect Themselves Against Privacy Dark Patterns?, by Matthew B. Kugler, Lior Strahilevitz, Marshini Chetty, Chirag Mahapatra, and Yaretzi Ulloa

De-Identification Guidelines for Structured Data, by the Information and Privacy Commissioner of Ontario

How the Legal Basis for AI Training Is Framed in Data Protection Guidelines, by Wenlong Li, Yueming Zhang, Qingqing Zheng, and Aolan Li (Link Forthcoming)

About the Webinar Series

This year’s winning authors will present their work in a two-part webinar series, with the first session on March 4 focused on AI and the second session on March 11 focused on Privacy.

March 4th – AI Paper Presentations – REGISTER HERE

March 11th – Privacy Paper Presentations – REGISTER HERE

Attendees must register for each session separately.

Agenda

Wednesday, March 4, 2026

12:00 pm – 12:05 pm

WELCOME REMARKS

Matthew Reisman, Vice President for U.S. Policy, Future of Privacy Forum

12:05 pm – 12:20 pm

AI As Normal Technology

We articulate a vision of Artificial Intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are “normal” in this conception. This frame stands in contrast to both utopian and dystopian visions that treat AI as a separate species or a highly autonomous, potentially superintelligent entity. The statement “AI is normal technology” is three things: a description of current AI, a prediction about its foreseeable future, and, most importantly, a prescription for how society should treat it. It rejects technological determinism and emphasizes the role of institutions in shaping AI’s trajectory, guided by continuity between the past and the future.

Capability gains do not dictate the speed of adoption

We make a critical distinction between AI methods, AI applications, and AI adoption, as these three phenomena occur at vastly different timescales. While technical advances in AI methods have been rapid, transformative economic and societal impacts are likely to be slow, unfolding over decades. In safety-critical areas, such as medical devices or criminal risk prediction, AI diffusion lags decades behind innovation. When models are complex and less intelligible, it is difficult to anticipate all possible deployment conditions, necessitating stringent testing and validation that act as inherent speed limits.

Furthermore, the speed of technology adoption is constrained by the requirement for organizational and institutional change. Much like the electrification of factories, which took nearly forty years to show substantial productivity gains, AI requires the redesign of entire workflows and organizations. Benchmarks frequently mismeasure real-world utility by focusing on self-contained skills rather than the complex, contextual judgment required in professional practice. Consequently, economic impacts are likely to be gradual, with human labor remaining essential as automation redefines, rather than eliminates, economically valuable tasks.

Against superintelligence

The reliance on the slippery concepts of “intelligence” and “superintelligence” has clouded the ability to reason clearly about advanced AI. By shifting focus from intelligence to power—the ability to modify one’s environment—it becomes clear that humans have always used technology to increase their capabilities. In this view, the “control problem” is a tractable engineering challenge rather than a fight against a “galaxy brain in a box.”

As AI becomes more advanced, control will remain primarily in the hands of people and organizations. A greater proportion of human jobs will involve AI control, encompassing monitoring, auditing, and task specification. This transformation mirrors the Industrial Revolution’s shift from manual labor to the management of automated assembly lines. Market forces will likely drive this trend, as poorly controlled AI is too error-prone to be commercially viable, but regulation must bolster these forces when needed.

Risks associated with AI—including accidents, misuse, and systemic disruptions—are best addressed through downstream defenses. Model alignment doesn’t work—attempting to create a model that cannot be misused is akin to trying to build a computer that cannot be used for harmful purposes. Instead, defenses must focus on the attack surfaces where malicious actors actually deploy AI, such as strengthening cybersecurity infrastructure and biosecurity screening.

Policy implications: resilience over nonproliferation

Policymakers face the challenge of making decisions under deep uncertainty. We advocate for resilience as the overarching approach to catastrophic risks. Resilience consists of taking actions before harm occurs—such as improving technical capacity, ensuring transparency, and fostering open-source competition—to limit damage when shocks do occur.

Conversely, nonproliferation policies that seek to limit access to AI capabilities are likely to be counterproductive. Such measures are difficult to enforce, create dangerous single points of failure, and increase market concentration. Instead, policy should prioritize reducing uncertainty through evidence-seeking measures such as monitoring AI failures in the wild. Finally, the normal technology frame suggests that realizing the benefits of AI is not automatic and cannot be left to the private sector alone. It will require experimentation, institutional reform, diffusion-enabling regulation, investing in the complements of automation, and strengthening social safety nets.

Presenting Author

  • Arvind Narayanan, Princeton University

Discussant

  • TBA

12:20 pm – 12:35 pm

AI and Doctrinal Collapse

Artificial intelligence runs on data. But the two legal regimes that govern data—information privacy law and copyright law—are under pressure. Formally, each regime demands different things. Functionally, the boundaries between them are blurring, and their distinct rules and logics are becoming illegible.

This Article identifies this phenomenon, which I call “inter-regime doctrinal collapse,” and exposes the individual and institutional consequences. Through analysis of pending litigation, discovery disputes, and licensing agreements, this Article exposes two dominant exploitation tactics enabled by collapse: Companies “buy” data through business-to-business deals that sidestep individual privacy interests, or “ask” users for broad consent through privacy policies and terms of service that leverage notice-and-choice frameworks. Left unchecked, the data acquisition status quo favors established corporate players and impedes law’s ability to constrain the arbitrary exercise of private power.

Doctrinal collapse poses a fundamental challenge to the rule of law. When a leading AI developer can simultaneously argue that data is public enough to scrape—diffusing privacy and copyright controversies—and private enough to keep secret—avoiding disclosure or oversight of its training data—something has gone seriously awry with how law constrains power. To manage these costs and preserve space for salutary innovation, we need a law of collapse. This Article offers institutional responses, drawn from conflict of laws and legal pluralism, to create one.

Presenting Author

  • Alicia Solow-Niederman, George Washington University Law School

Discussant

  • TBA

12:35 pm – 12:50 pm

AI Agents and Memory: Privacy and Power in the Model Context Protocol (MCP) Era

This policy brief examines how AI agents — autonomous systems that use the Model Context Protocol (MCP) to connect across applications — will reshape the boundaries of privacy, security, and governance. By standardizing how AI systems access external tools and retain memory across sessions, MCP allows personal data, context, and identity to move fluidly across digital environments. This interoperability delivers powerful opportunities, but also undermines traditional privacy safeguards built around app-specific data silos. Frameworks such as COPPA, GDPR, and the FTC Act rely on principles of consent, purpose limitation, and minimization — assumptions that break down when agents autonomously share and retain information across multiple services.

This article argues that policymakers and privacy regulators must treat orchestration protocols like MCP as emerging data infrastructure and develop governance standards that ensure privacy resilience at the systems level. Key priorities include interoperable memory dashboards, scoped permissions, cryptographically signed audit trails, and strong portability and deletion rights to prevent user data from becoming locked within proprietary ecosystems. By updating privacy law and technical standards to address persistent, cross-context memory, regulators can ensure that AI agents remain transparent, accountable, and aligned with user consent in a rapidly converging digital ecosystem.
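
By way of illustration only (this sketch is not drawn from the paper, and every name in it — GRANTED_SCOPES, call_tool, record, and the signing key — is hypothetical), the following Python fragment shows how an agent gateway might enforce scoped permissions on MCP-style tool calls and append each decision to an HMAC-signed audit trail, two of the safeguards the authors prioritize.

```python
import hmac, hashlib, json, time

# Hypothetical scopes a user has granted to an AI agent: each scope names a
# tool and the operations the agent may invoke on it.
GRANTED_SCOPES = {
    "calendar": {"read"},
    "email": {"read", "draft"},   # note: no "send" permission granted
}

AUDIT_KEY = b"audit-signing-key"  # in practice, a key held by the auditing service

def record(event: dict, log: list) -> None:
    """Append an event to the audit trail with an HMAC signature over its contents."""
    payload = json.dumps(event, sort_keys=True).encode()
    signature = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    log.append({"event": event, "sig": signature})

def call_tool(tool: str, operation: str, log: list) -> bool:
    """Allow a tool call only if the (tool, operation) pair falls within a granted scope."""
    allowed = operation in GRANTED_SCOPES.get(tool, set())
    record({"ts": time.time(), "tool": tool, "op": operation, "allowed": allowed}, log)
    return allowed

audit_log: list = []
print(call_tool("email", "draft", audit_log))   # True: within the granted scope
print(call_tool("email", "send", audit_log))    # False: outside the granted scope
```

In a sketch like this, an auditor holding the signing key could recompute each signature to confirm that entries were not altered after the fact, which is the kind of verifiable accountability the brief associates with cryptographically signed audit trails.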

 

Presenting Authors

  • Prem Trivedi, Open Technology Institute (New America)
  • Matt Steinberg, Knight-Georgetown Institute

Discussant

  • TBA

12:50 pm – 1:05 pm

Beyond Algorithmic Disgorgement: Remedying Algorithmic Harms

AI regulations are popping up around the world, and they mostly involve ex-ante risk assessment and mitigation of those risks. But even with careful risk assessment, harms inevitably occur. This leads to the question of algorithmic remedies: what to do once algorithmic harms occur, especially when traditional remedies are ineffective. What makes a particular algorithmic remedy appropriate for a given algorithmic harm?

I explore this question through a case study of a prominent algorithmic remedy: algorithmic disgorgement—the destruction of models tainted by illegality. Since the FTC first used it in 2019, it has garnered significant attention, and other enforcers and litigants around the country and the world have started to invoke it. Alongside its increasing popularity came a significant expansion in scope. Initially, the FTC invoked it in cases where data was allegedly collected unlawfully and ordered deletion of models created using such data. The remedy’s scope has since expanded; regulators and litigants now invoke it against AI whose use, not creation, causes harm. It has become a remedy many turn to for all things algorithmic.

I examine this remedy with a critical eye, concluding that though it looms large, it is often inappropriate. Algorithmic disgorgement has evolved into two distinct remedies. Data-based algorithmic disgorgement seeks to remedy harms committed during a model’s creation; use-based algorithmic disgorgement seeks to remedy harms caused by a model’s use. These two remedies aim to vindicate different principles underlying traditional remedies: data-based algorithmic disgorgement follows the disgorgement principle underlying remedies like monetary disgorgement and the exclusionary rule, while use-based algorithmic disgorgement follows the consumer protection principle underlying remedies like product recall. However, they often fail to live up to these principles. AI systems exist in the context of the algorithmic supply chain; they are controlled by many hands, and seemingly unrelated entities are connected to each other in complicated ways through complex data flows. The realities of the algorithmic supply chain mean that algorithmic disgorgement is often a bad fit for the harm at issue and causes undesirable effects throughout the algorithmic supply chain, imposing burdens on innocent parties while not imposing costs on the blameworthy; ultimately, algorithmic disgorgement undermines the principles it seeks to promote.

From this analysis, I derive considerations for determining whether an algorithmic remedy is appropriate—the responsiveness of the remedy to the harm and the full impact of the remedy throughout the supply chain—and underscore the need for a diversity of algorithmic remedies.

Presenting Author

  • Christina Lee, George Washington University Law School

Discussant

  • TBA

1:05 pm – 1:05 pm

CLOSING REMARKS

Justine Gluck, AI Policy Analyst, Future of Privacy Forum

Wednesday, March 11, 2026

12:00 pm – 12:05 pm

WELCOME REMARKS

Matthew Reisman, Vice President for U.S. Policy, Future of Privacy Forum

12:05 pm – 12:20 pm

Can Consumers Protect Themselves Against Privacy Dark Patterns? 

Dark patterns have emerged in the last few years as a major target of legislators and regulators. Dark patterns are online interfaces that manipulate, confuse, or trick consumers into purchasing goods or services that they do not want, or into surrendering personal information that they would prefer to keep private. As new laws and regulations to restrict dark patterns have emerged, skeptics have countered that motivated consumers can and will protect themselves against these manipulative interfaces, making government intervention unnecessary. This debate occurs alongside active legislative and regulatory discussion about whether to prohibit dark patterns in newly enacted comprehensive consumer privacy laws. Our interdisciplinary paper provides experimental evidence showing that consumer self-help is unlikely to fix the dark patterns problem. Several common dark patterns (obstruction, interface interference, preselection, and confusion), which we integrated into the privacy settings for a video-streaming website, remained strikingly effective at manipulating consumers into surrendering private information even when consumers were charged with maximizing their privacy protections and understood that objective. We also provide the first published evidence of the independent potency of “nagging” dark patterns, which pester consumers into agreeing to an undesirable term. These findings strengthen the case for legislation and regulation to address dark patterns. Our paper also highlights the broad popularity of a feature of the recent California Consumer Privacy Act (CCPA), which gives consumers the ability to opt out of the sale or sharing of their personal information with third parties. As long as consumers see the Do Not Sell option, a super-majority of them will exercise their rights, and a substantial minority will even overcome dark patterns in order to do so.

Presenting Author

  • Matthew B. Kugler, Northwestern Pritzker School of Law

Discussant

  • TBA

12:20 pm – 12:35 pm

How the Legal Basis for AI Training Is Framed in Data Protection Guidelines (Link Forthcoming)

• This paper investigates how the legal basis for AI training is framed within data protection guidelines and regulatory interventions, drawing on a comparative analysis of approaches taken by authorities across multiple jurisdictions.

• Focusing on the EU’s General Data Protection Regulation (GDPR) and analogous data protection frameworks globally, the study systematically maps guidance, statements, and actions to identify areas of convergence and divergence in the conceptualisation and operationalisation of lawful grounds for personal data processing—particularly legitimate interest and consent—in the context of AI model development.

• The analysis reveals a converging trend toward recognising legitimate interest as the predominant legal basis for AI training. However, this convergence is largely superficial, as guidelines rarely resolve deeper procedural and substantive ambiguities, and enforcement interventions often default to minimal safeguards. This disconnect between regulatory rhetoric and practical compliance leaves significant gaps in protection and operational clarity for data controllers, calling into question the reliability and legitimacy of the existing framework for lawful AI training. It warns that, without clearer operational standards and more coherent cross-border enforcement, there is a risk that legal bases such as legitimate interest will serve as little more than formalities.

• Reflecting on these findings, the paper explores the prospects and limitations for achieving greater alignment or harmonisation at the global level. Specifically, it reflects on pathways for global AI governance, emphasising that progress would benefit from distinguishing issues amenable to international convergence from those that require context-sensitive, locally adaptive solutions. It considers how regulators, practitioners, civil society activists, and scholars might leverage these insights to prioritise evidence-based avenues rather than seeking uniformity at the conceptual level for its own sake, with the ultimate goal of advancing both principled and practical frameworks for lawful AI training across diverse legal landscapes.

Presenting Authors

  • Wenlong Li, Zhejiang University
  • Yueming Zhang, University of International Business and Economics

Discussant

  • TBA

12:35 pm – 12:50 pm

De-Identification Guidelines for Structured Data (Information and Privacy Commissioner of Ontario)

(Abstract Forthcoming)

Presenting Authors

  • Dr. Khaled El Emam, University of Ottawa
  • Christopher Parsons, IPC of Ontario

Discussant

  • TBA

12:50 pm – 12:55 pm

CLOSING REMARKS

Justine Gluck, AI Policy Analyst, Future of Privacy Forum

Speakers

Khaled El Emam

Canada Research Chair (Tier 1), University of Ottawa

Dr. Khaled El Emam is the Canada Research Chair (Tier 1) in Medical AI at the University of Ottawa, where he serves as Professor in the School of Epidemiology and Public Health and Director of Medical AI at the Faculty of Medicine. He is also a Senior Scientist at the Children’s Hospital of Eastern Ontario Research Institute and leads the Electronic Health Information Laboratory, focusing on privacy-enhancing technologies for health data sharing and applied machine learning. Previously, he was Scholar-in-Residence at the Office of the Information and Privacy Commissioner of Ontario. Dr. El Emam holds a PhD in Electrical and Electronics Engineering from King’s College, University of London.

Justine Gluck

Policy Analyst, AI Policy and Legislation, FPF

Justine Gluck is a Policy Analyst, AI Policy and Legislation at the Future of Privacy Forum. Previously, she conducted research on semiconductor policy and workforce development at Harvard University’s Belfer Center and the Micro Nano Technology Education Center. Earlier in her career, Justine served as a legislative aide in the California Legislature, drafting and advancing legislation on privacy, AI, and higher education. She holds an M.A. in International Relations with a specialization in Technology and International Affairs from the Fletcher School at Tufts University and a B.A. in Politics and History from Pomona College.

Matthew Kugler

Professor, Northwestern Pritzker School of Law

Matthew Kugler’s primary research areas are privacy, intellectual property, and criminal procedure. His recent research has addressed deepfakes, biometric privacy, dark patterns, and survey evidence in federal cases. He is the author of a free open-access casebook, Privacy Law: Cases and Materials. Prior to joining Northwestern, Matthew completed a Ph.D. in Psychology and Social Policy at Princeton University, was a postdoctoral fellow and adjunct instructor in psychology at Lehigh University, and was awarded a JD with highest honors from the University of Chicago Law School. He also clerked for the Honorable Richard Posner on the Seventh Circuit.

Christina Lee

Visiting Associate Professor, George Washington University Law School

Christina Lee is a Visiting Associate Professor of Law and Privacy and Technology Law Fellow at the George Washington University Law School. Her research explores how the law should address the harms of emerging technologies, particularly artificial intelligence (AI). She examines how the complexity of the tech ecosystem—how actors in the tech industry interact with each other to produce, develop, and deploy products and services powered by emerging technologies—challenges the efforts to use traditional legal frameworks that predate the rise of AI to address the harms of AI.

Christina earned a BA with Distinction in International Relations, a BS with Distinction in Mathematics, and an MS in Computer Science from Stanford University. She received her JD cum laude from Harvard Law School, where she served as a Submissions Manager and a Symposium Articles Editor for the Harvard Journal of Law and Technology. Prior to law school, Christina worked as a Senior Program Manager at Microsoft.

Wenlong Li

Research Professor, Zhejiang University

Wenlong is a Research Professor at Zhejiang University, specialising in the regulation of artificial intelligence and data in transnational and global contexts. He is currently a member of UNESCO’s AI Ethics without Borders, a Scientific Expert of the International Panel on the Information Environment, and a research affiliate with the Edinburgh Centre for Data, Culture & Society (CDCS). He also serves as an Associate Editor of International Data Privacy Law and editor of the Future Law book series at Edinburgh University Press. Before joining Zhejiang University, Wenlong developed a distinctly hybrid academic–industry career, bringing together legal scholarship, public policy engagement, and the practical realities of internal governance and awareness-building. He has held several academic posts in the United Kingdom, including Guest Lecturer and Editor of SCRIPTed at the University of Edinburgh, Interdisciplinary Research Fellow in Law, Ethics and Computer Science at the University of Birmingham, and Lecturer in AI Law (open-ended) at Aston University. In parallel, he worked full-time in research and governance roles across leading global technology companies, including TikTok (Research Lead, Privacy & Data Protection Office), Alibaba Research (Senior Researcher, Digital Economies & Public Policy), and Tencent (Legal Researcher). Wenlong completed his PhD in IT Law at the University of Edinburgh. He previously obtained an interdisciplinary LL.M. (by research) in Law and Journalism and an LL.B. from the University of Political Science and Law, China.

Arvind Narayanan

Professor, Princeton University

Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil, the essay AI as Normal Technology, and a newsletter of the same name, which is read by over 60,000 researchers, policymakers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan was named to TIME’s inaugural list of the 100 most influential people in AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).

Christopher Parsons

Director of Research and Technology Policy, Information and Privacy Commissioner of Ontario

Dr. Christopher Parsons is the Director of Research and Technology Policy at the Information and Privacy Commissioner of Ontario. He leads a team with expertise in artificial intelligence, cybersecurity, data governance, identity management, privacy enhancing technologies, and technologies in the education, law enforcement and health sectors.

Formerly, he worked as a Senior Research Associate at the Munk School’s Citizen Lab at the University of Toronto, where his research focused on third-party access to telecommunications data, data privacy, data security, and national security.

Christopher has written policy reports for civil advocacy organizations, submitted evidence to Parliamentary committees, and been an active member of the Canadian privacy and national security communities. He has published in a range of peer-reviewed journals, published book chapters in academic and popular presses, and written many reports with the Citizen Lab.

He holds a Ph.D. in Political Science from the University of Victoria.

Matthew Reisman

Vice President for U.S. Policy, FPF

Matthew Reisman is Vice President for U.S. Policy at the Future of Privacy Forum (FPF), a global nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Matthew oversees FPF’s U.S. policy work, including legislative and regulatory engagement, research, and initiatives addressing emerging data protection, AI, and technology challenges. He also leads FPF’s experts across youth privacy, data governance, health, and other portfolios to advance key FPF projects and priorities.

Prior to joining FPF, Matthew served as a Director of Privacy and Data Policy at the Centre for Information Policy Leadership (CIPL), where he led research, public engagement, and programming on topics including accountable development and deployment of artificial intelligence (AI), privacy and data protection policy, cross-border data flows, organizational governance of data, and privacy-enhancing technologies (PETs).

Prior to joining CIPL, Matthew was a Director of Global Privacy Policy at Microsoft, where he helped shape the company’s approach to privacy and data policy, including its intersections with security, digital safety, trade, data governance, cross-border data flows, and emerging technologies such as AI and 5G. Matthew also previously led research on the digital economy at the United States International Trade Commission (USITC) and advanced international economic development programs funded by the World Bank and other donors.

Matthew holds a Master of Public Policy from the Harvard Kennedy School and an undergraduate degree from Duke University.

Testimony

Hearing of the Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence of the U.S. House Committee on Financial Services on “Unlocking the Next Generation of AI in the U.S. Financial System for Consumers, Businesses, and Competitiveness,” September 18, 2025.

Hearing of the Internal Temporary Committee on AI of the Federal Senate of Brazil, September 4, 2024 (Virtual Testimony).

Alicia Solow-Niederman

Professor, George Washington University Law School

Professor Solow-Niederman’s scholarship sits at the intersection of law and technology. Her research focuses on how to regulate emerging technologies, such as artificial intelligence, in a way that reckons with social, economic, and political power. With an emphasis on algorithmic accountability, data governance, and information privacy, Professor Solow-Niederman explores how digital technologies can both challenge longstanding regulatory approaches and expose underlying legal values.

Matt Steinberg

Tech and Public Policy Scholar, Knight-Georgetown Institute

Matt Steinberg is a Tech and Public Policy Scholar at the Knight-Georgetown Institute, where he contributes to research on platform accountability and technology litigation. Previously, Matt was a Policy Fellow at New America’s Open Technology Institute, where he studied AI agents and the Model Context Protocol (MCP), focusing on their implications for privacy, security, and competition. He also worked at Georgetown’s Massive Data Institute on projects analyzing election officials’ use of social media to build public trust.

Before entering public policy, he was a playwright and development executive in film and television, developing projects for Netflix and Disney. His award-winning play Socially Unacceptable explored the ethics of content moderation, and was produced across the country.

Matt is pursuing a Master’s in Public Policy at Georgetown University’s McCourt School. He holds a B.A. from NYU’s Gallatin School of Individualized Study.

Prem Trivedi

Director, Open Technology Institute (OTI)

Prem M. Trivedi is the Director of the Open Technology Institute (OTI) at New America, where he leads OTI’s research and advocacy efforts to improve outcomes in technology policy by prioritizing fairness and meaningful transparency in governance. OTI’s core areas of work include artificial intelligence, privacy and responsible data use, and connectivity.

For over a decade, Prem’s work has focused on the relationship between technologies and democratic health, with specializations in privacy and government surveillance, commercial privacy, online content moderation, and corporate accountability. Prior to joining New America, he worked as a privacy advisor for a major technology company, in government at the Privacy and Civil Liberties Oversight Board, and with an academic research institute. Prem is also a non-resident Senior Fellow at American University’s Tech, Law, and Security Program.

Yueming Zhang

Affiliated Researcher, University of International Business and Economics

Yueming Zhang is an affiliated researcher at the Digital Economy and Legal Innovation Research Center, University of International Business and Economics (UIBE). She obtained her PhD in Law from Ghent University, where she was also a voluntary postdoctoral researcher in the Law & Technology research group. Her research focuses on privacy, data protection, AI governance, and cross-border data transfers.