6 Privacy Tips for the Generative AI Era
Data Privacy Day, or Data Protection Day in Europe, is recognized annually on January 28 to mark the anniversary of Convention 108, the first binding international treaty to protect personal data. The Council of Europe initiated the day in 2006, with the first official celebration held on January 28, 2007, making this year the 19th anniversary of that celebration. Companies and organizations around the world often devote time to internal privacy training during this week, working to improve awareness of key data protection issues for their staff.
It’s also a good time for all of us to think about our own sharing of personal data. Nowadays, one of the most important decisions we need to make about our data is when and how we use AI-powered services. To raise awareness, we’ve partnered with Snap to create a Data Privacy Day Snapchat Lens. Check it out by scanning the Snapchat code and learn more below about privacy tips for generative AI!

- Know When You’re Using Generative AI
As a first step, it’s important to know what generative AI is and when you’re using it. Generative AI is a type of artificial intelligence that creates original text, images, audio, and code in response to input. Beyond dedicated generative AI platforms (such as ChatGPT), many companies have added generative AI capabilities to their existing products. For example, a search in Google now provides answers powered by Google’s generative AI, Gemini. Other examples include Snap’s AI Lenses and AI Snaps in its creative tools; Adobe’s Acrobat and Express, which are now powered by Firefly, Adobe’s generative AI; and X’s Grok, which assists users and answers questions.
One of the best ways to identify when you’re using generative AI is to look for a symbol or disclaimer. Many companies, including Snap and GitHub, use a sparkle or star icon to denote generative AI features. You might also notice labels like “AI-generated” or “Experimental” alongside results from some companies, including Meta.
- Think Carefully Before You Share Sensitive or Private Information
While this is a general rule of thumb for interacting with any product, it’s especially important when using generative AI because most generative AI systems use the data users provide (such as conversation text or images) to continuously train and improve their models. While your prompts, generated images, and other data can improve the technology for all users, it also means that any sensitive or private information you share could potentially be surfaced in connection with training and developing the model.
Be especially careful when uploading files, images, or screenshots to generative AI tools. Documents, photos, or screenshots can include more information than you realize, such as metadata, background details, or information about third parties. Before uploading, consider redacting, cropping, or otherwise limiting files to include only the information necessary for your task.
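For readers comfortable with a bit of scripting, here is a minimal sketch of how embedded image metadata can be stripped before an upload. It assumes the Python Pillow library, and the file names are placeholders; dedicated redaction tools can accomplish the same thing.

```python
from PIL import Image  # assumes the Pillow library is installed

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image with pixel data only, leaving EXIF and
    other embedded metadata (e.g., GPS location tags) behind."""
    with Image.open(src) as img:
        rgb = img.convert("RGB")            # normalize the pixel format
        clean = Image.new("RGB", rgb.size)  # new image with no metadata
        clean.putdata(list(rgb.getdata()))  # copy pixels only
        clean.save(dst, format="JPEG")

# Placeholder file names for illustration only.
strip_metadata("vacation_photo.jpg", "vacation_photo_clean.jpg")
```

Keep in mind that this removes only hidden metadata; visible background details or third-party information in the image itself still require cropping or manual redaction.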
Some companies promise not to use your data for training, often only if you are using the paid version of their service. Others let you opt out of having your data used for training, or offer versions with special protections. For example, ChatGPT’s new health service supports uploading health records with additional privacy and security commitments, but you need to make sure you are using the specific Health tab that is being rolled out to users.
- Manage Your AI’s Memory
Many generative AI tools now feature a memory function that allows them to remember details about you over time, providing more tailored responses. While this can be helpful for maintaining context in long-term projects, such as remembering your writing style, professional background, or specific project goals, it also creates a digital record of your preferences and behaviors. A recent FPF report explores these different kinds of personalization.
Fortunately, you typically have the power to control what generative AI platforms remember. Most have settings to view, edit, or delete specific memories or to turn the feature off entirely. For instance, in ChatGPT you can manage these details under Settings > Personalization, and Gemini allows you to toggle off “Your past chats” within its activity settings to prevent long-term tracking. Meta also provides options for deleting all chats and images from the Meta AI app. Another option is to use “Temporary” or “Incognito” modes, which keep a conversation from being saved to your profile or used to shape future responses.
In addition to managing memory features, it’s also helpful to understand how long generative AI services keep your data. Some platforms store conversations, images, or files for only a short time, while others may keep them longer unless you choose to delete them. Taking a moment to review retention timelines can give you a clearer picture of how long your information sticks around, and help you decide what you’re comfortable sharing.
- Define Boundaries for Agentic AI
Agentic AI, a form of generative AI that can complete tasks for users with greater autonomy, is becoming increasingly popular. For example, companies like Perplexity, OpenAI, and Amazon have unveiled agentic systems that can make purchases for consumers. While these systems can take on more tasks, they still require users to review purchases before they are final. As a best practice, look over the purchase to check that it aligns with your expectations (e.g., ordering 1 pair of socks, not 10). It is also important to keep in mind that because agentic systems can pull information from third-party sources, they may rely on inaccurate information about a product (e.g., whether an item is in stock) during a purchase.
As agentic systems become more embedded in our lives, you should also be mindful about how much information you share with them. Consumers are already disclosing sensitive details about themselves to more basic chatbots, details that businesses, the government, and other third parties may want to access. When interacting with agentic systems, keep this in mind and pay attention to what you disclose about yourself and others. You may similarly want to consider what type of access to grant the agentic AI product, and rely on the principle of least privilege: grant only the minimum access needed for your use. For example, if an agentic system is going to manage your calendar, think through options for narrowing its access so that your entire calendar is not shared, and so that other apps connected to your calendar, like your email, are not exposed unless necessary; the sketch below illustrates the idea.
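To make least privilege concrete, here is an illustrative sketch of the difference between a broad and a narrow access grant. The scope strings are real Google Calendar API scopes, but the `request_access` function is a hypothetical stand-in for whatever consent flow a given agentic product actually uses.

```python
# Broad grant: full read/write access to every calendar --
# far more than a scheduling assistant typically needs.
BROAD_SCOPES = ["https://www.googleapis.com/auth/calendar"]

# Narrow grant: read-only access to events, and nothing else.
MINIMAL_SCOPES = ["https://www.googleapis.com/auth/calendar.events.readonly"]

def request_access(scopes: list[str]) -> None:
    # A real integration would launch the provider's OAuth consent
    # screen here; this stub only shows what would be requested.
    print("Requesting access to:", ", ".join(scopes))

# Prefer the minimal grant when connecting an AI agent.
request_access(MINIMAL_SCOPES)
```

The same principle applies to any connected account: start with the narrowest permission that lets the agent do its job, and widen access only when a task genuinely requires it.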
- Review How Generative AI Products Handle Privacy and Safety
It’s important to regularly review the privacy and security practices of any company with which you share information, and this applies similarly to companies offering generative AI products. This can include checking what data is collected and how, as well as how that information is used and stored.
Snap has a Snapchat Privacy Center where you can review your settings and choices.
ChatGPT’s privacy controls are available in the ChatGPT display, and OpenAI has a Data Controls FAQ that outlines where to find the settings and what options are available.
Gemini has the Gemini Privacy Hub, as well as an area to read about and configure your settings for Gemini Apps, which includes options for turning your Gemini history off.
Claude has a Privacy Settings & Controls page that outlines how long they store your data, how you can delete it, and more.
Copilot provides an array of options for reviewing and updating your privacy settings, including how to delete specific memories and how your data is used. These settings are available on Microsoft’s website, and Microsoft also provides a detailed Privacy FAQ page.
Keep in mind that generative AI products change quickly, and new features may introduce new data uses, defaults, or controls. Periodically revisiting privacy and safety settings can help ensure your preferences continue to reflect how the product works today, rather than how it worked when you first configured it.
- Explore and Have Fun!
LLMs can often provide useful data protection advice, so ask them questions about AI and privacy. Just be sure to double-check sources and accuracy, especially for important topics!
Data Privacy Day is a reminder that privacy is a shared responsibility. By bringing together FPF’s expertise in privacy research and policy with Snap’s commitment to building products with privacy and safety in mind, this collaboration aims to help people better understand how AI works and how to use it thoughtfully.