Tech to Support Older Adults and Caregivers: Five Privacy Questions for AgeTech
Introduction
As the U.S. population ages, technologies that can help support older adults are becoming increasingly important. These tools, often called “AgeTech,” exist at the intersection of health data, consumer technology, caregiving relationships, and, increasingly, artificial intelligence, and they are drawing significant investment: hundreds of well-funded startups have launched. The sector is also of major interest to governments, advocates for aging populations, and researchers concerned about the impact on the U.S. economy as a smaller workforce supports a large aging population.
AgeTech may include everything from fall-detection wearables and remote vital-sign monitors to AI-enabled chatbots and behavioral nudging systems. These technologies promise greater independence for older adults, reduced burden on caregivers, and more continuous, personalized care. But that promise brings significant risks, especially when these tools operate outside traditional health privacy laws like HIPAA and instead fall under a shifting mix of consumer privacy regimes and emerging AI-specific regulations.
A recent review by FPF of 50 AgeTech products reveals a market increasingly defined by data-driven insights, AI-enhanced functionality, and personalization at scale. Yet despite the sophistication of the technology, privacy protections remain patchy and difficult to navigate. Many tools were not designed with older adults or caregiving relationships in mind, and few provide clear information about how AI is used or how sensitive personal data feeds into machine learning systems.
Without frameworks that establish trustworthiness, and thereby earn the trust of older adults and caregivers, the gap between innovation and accountability will continue to grow, placing both individuals and companies at risk. Low trust may also become a barrier to adoption at a time when these technologies are urgently needed, as the aging population grows and care shortages persist.
A Snapshot of the AgeTech Landscape
AgeTech is being deployed across both consumer and clinical settings, with tools designed to serve four dominant purposes:
- Health Monitoring: tracking or managing a specific health condition, such as dementia, heart conditions, or other long-term illnesses.
- Remote Monitoring: location tracking by a family member or other caregiver, mobility monitoring by a provider, or other general monitoring not tied to a specific condition or diagnosis.
- Daily Task Support: help with appointments, medication adherence, meals, and errands.
- Emergency Use: fall detection and prevention, emergency communications, and alerts.
Clinical applications typically focus on enabling real-time oversight and remote data collection, while consumer-facing products aim to support safety, independence, and quality of life at home. Regardless of setting, these tools increasingly rely on combinations of sensors, mobile apps, GPS, and microphones, and, notably, on AI used for everything from fall detection and cognitive assistance to mood analysis and smart home adaptation.
AI is becoming central to how AgeTech tools operate and how they are marketed. But explainability remains a challenge, and disclosures around AI use can be vague or missing altogether. Users may not be told when AI is interpreting their voice, gestures, or behavior, let alone whether their data is used to refine predictive models or personalize future content.
For tools that feel clinical but aren’t covered by HIPAA, this creates significant confusion and risk. A proliferation of consumer privacy laws, particularly emerging state-level privacy laws with health provisions, is starting to fill the gap, though it also produces complex and fragmented privacy obligations. For all stakeholders seeking to improve and support aging through AI and other technologies, harmonized policy-based and technical privacy protections are essential.
AgeTech Data Is Likely in Scope of Many States’ Privacy Laws
Compounding the issue is the reality that these tools often fall into regulatory gray zones. If a product isn’t offered by a HIPAA-covered entity or used in a reimbursed clinical service, it may not be protected under federal health privacy law at all. Instead, protections depend on where a user lives and whether the product falls under one of a growing number of state privacy laws or consumer health privacy laws.
Bills like New York’s S929 (NY HIPA), which remains in legislative limbo, reflect growing state interest in regulating the kinds of sensitive and consumer health data that AgeTech devices and apps are likely to collect. These laws are a step toward closing the gap in privacy protections, but they are not consistent. Some focus narrowly on specific types of health data, either alone or in tandem with AI or other technologies: for example, mental health chatbots (Utah HB452), reproductive health data (Virginia SB754), or AI disclosures in clinical settings (California AB3030). Other bills and laws use broad definitions that include location, movement, and voice data, all common types of data in our survey of AgeTech products. As a result, regulatory obligations may vary not just by product type, but by geography, payment model (whether insurance covers a product or service), and user relationship.
Consent + Policy Is Key to AgeTech Growth and Adoption
In many cases, it is not the older adult but a caregiver, whether a family member, home health aide, or neighbor, who initiates AgeTech use and agrees to data practices. These caregiving relationships are diverse, fluid, and often informal. Yet most technologies assume a static one-to-one dynamic and offer few options for nuanced role-based access or changing consent over time.
For this reason, AgeTech is a good example of why consent should not be the sole pillar of data privacy. Consent is important, but relying on individual permissions alone can obscure the need for deeper infrastructure and policy solutions that relieve consent burdens while still ensuring privacy. What is needed are devices and services that align privacy protections with the context of use and create pathways for evidence-based, science-backed innovation that benefits older adults and their care communities.
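As one illustration, the sketch below (in Python, using hypothetical names and types, not any product’s actual API) shows what a consent model organized around a “care circle” rather than a single one-time checkbox might look like: multiple caregivers hold distinct, scope-limited grants that can be revoked as relationships change.

```python
# A minimal sketch of role-based, revisitable consent for a caregiving
# "care circle." All names and types are hypothetical illustrations,
# not any product's actual API.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto


class Role(Enum):
    FAMILY = auto()
    HOME_HEALTH_AIDE = auto()
    NEIGHBOR = auto()
    CLINICIAN = auto()


class DataScope(Enum):
    LOCATION = auto()
    VITALS = auto()
    MEDICATION = auto()
    FALL_ALERTS = auto()


@dataclass
class Grant:
    caregiver_id: str
    role: Role
    scopes: set[DataScope]
    granted_at: datetime
    revoked_at: datetime | None = None  # consent can be withdrawn later


@dataclass
class CareCircle:
    older_adult_id: str  # the older adult remains the root decision-maker
    grants: list[Grant] = field(default_factory=list)

    def allow(self, caregiver_id: str, role: Role, scopes: set[DataScope]) -> None:
        """Record a new scope-limited grant; many caregivers can coexist."""
        self.grants.append(Grant(caregiver_id, role, set(scopes), datetime.now()))

    def revoke(self, caregiver_id: str) -> None:
        """Roles change: mark grants revoked rather than erasing history."""
        for g in self.grants:
            if g.caregiver_id == caregiver_id and g.revoked_at is None:
                g.revoked_at = datetime.now()

    def can_access(self, caregiver_id: str, scope: DataScope) -> bool:
        """Access requires a live grant covering the specific data scope."""
        return any(
            g.caregiver_id == caregiver_id
            and g.revoked_at is None
            and scope in g.scopes
            for g in self.grants
        )
```

Under a model like this, a neighbor might hold only a FALL_ALERTS grant while a clinician sees VITALS, and either grant can be revoked as the relationship changes, without disturbing the rest of the circle.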
Five Key Questions for AgeTech Privacy
To navigate this complexity and build toward better, more trustworthy systems, privacy professionals and policymakers can start by asking the following key questions:
- Is the technology designed to reflect caregiving realities?
Caregiving relationships are rarely linear. Tools must accommodate shared access, changing roles, and the reality that caregivers may support multiple people, or that multiple people may support the same individual. Regulatory standards should reflect this complexity, and product designs should allow for flexible access controls that align with real-world caregiving.
- Does the regulatory classification reflect the sensitivity of the data, not just who offers the tool?
Whether a fall alert app is delivered through a clinical care plan or bought directly by a consumer, it often collects the same data and has the same impact on a person’s autonomy. Laws should apply based on function and risk, and should consider the context and use of data in addition to its sensitivity. Emerging state laws are beginning to take this approach, but more consistent federal leadership is needed.
- Are data practices accessible, not just technically disclosed?
Especially for aging populations, accessibility is not just about font size; it is about cognitive load, clarity of language, and decision-making support. Tools should offer layered notices, explain settings in plain language, and support revisiting choices as health or relationships change. Future legislation could require transparency standards tailored to vulnerable populations and caregiving scenarios.
- Does the technology reinforce autonomy and dignity?
The test for responsible AgeTech is not just whether it works, but whether it respects the people it serves. Does the tool allow older adults to make choices about their data, even when care is shared or delegated? Can those preferences evolve over time? Does it reinforce the user’s role as the central decision-maker, or subtly replace their agency with automation?
- If a product uses or integrates AI, does it clearly indicate whether and how data is used for AI?
AI powers an increasing share of AgeTech functionality, but many tools don’t disclose whether data is used to train algorithms, personalize recommendations, or drive automated decisions. Privacy professionals should ask: Is AI use clearly labeled and explained to users? Are there options to opt out of certain AI-driven features? Is sensitive data (e.g., voice, movement, mood) being reused for model improvement or inference? In a rapidly advancing field, transparency is essential for building trustworthy AI; the sketch after this list shows one way such preferences might be recorded.
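To make the question concrete, here is a minimal sketch, again in Python with hypothetical names (not any vendor’s actual schema), of per-feature AI data-use preferences in which reuse of sensitive streams is off by default and each enabled use maps to a plain-language disclosure:

```python
# A minimal sketch of per-feature AI data-use preferences. Each flag is
# off by default (no silent reuse of sensitive data) and maps to a short,
# plain-language disclosure. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class AIDataUsePrefs:
    allow_voice_for_training: bool = False    # reuse voice to improve models
    allow_movement_inference: bool = False    # e.g., gait-based fall prediction
    allow_mood_personalization: bool = False  # AI-tailored reminders and content

    def disclosures(self) -> list[str]:
        """Layered notice: one plain-language sentence per enabled AI use."""
        notes = []
        if self.allow_voice_for_training:
            notes.append("Voice recordings may be used to improve our models.")
        if self.allow_movement_inference:
            notes.append("Movement data is analyzed to estimate fall risk.")
        if self.allow_mood_personalization:
            notes.append("Mood signals are used to personalize reminders.")
        return notes or ["Your data is not used for AI features."]
```

The point is not these specific flags but the pattern: AI uses are enumerated, opt-in, and individually revocable, so the disclosure a user sees always matches what the system actually does.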
A Legislative and Technological Path Forward
Privacy professionals are well-positioned to guide both product development and policy advocacy. As AgeTech becomes more central to how we deliver and experience care, the goal should not be to retrofit consumer tools into healthcare settings without safeguards. Instead, we need to modernize privacy frameworks to reflect the reality that sensitive, life-impacting technologies now exist outside the clinic.
This will require:
- Consistent legislation that protects sensitive data regardless of who collects it;
- Design standards that account for caregiving dynamics and decision-making capacity;
- AI frameworks, crafted in collaboration with stakeholders, that foster transparency and safety;
- Infrastructure for shared access and evolving consent, not just checkbox compliance.
The future of aging with dignity will be shaped by whether we can build privacy into the systems that support it. That means moving beyond consent alone and toward real protections, at the policy level, in the technology stack, and in the everyday relationships that make care possible.