Privacy Impact Assessment Policies Help Cities Use and Share Data Responsibly with their Communities
As the world urbanizes, local governments are turning to “Smart City” initiatives and the data they generate to more effectively manage transportation systems, support real-time infrastructure maintenance, automatically administer public services, enable transparent governance and open data, and support emergency services in public areas. Data held by public and private organizations have the potential to lead to urban planning insights that can benefit governments and communities worldwide – if those data can be collected, stored, and accessed in a responsible manner that respects personal privacy.
In 2020, as part of its work on smart communities, FPF co-led a task force of experts to develop a Model Privacy Impact Assessment (PIA) Policy for governments and communities that are considering sharing personal data collected from “smart city” solutions. This Model PIA Policy was developed as part of the G20 Global Smart Cities Alliance on Technology Governance, a partnership of leading international organizations and city networks working to source tried-and-tested policy approaches to govern the use of smart city technologies. Its institutional partners — including FPF — represent more than 200,000 cities and local governments, leading companies, startups, research institutions, and civil society communities.
Privacy Impact Assessments in “Smart Cities”
Cities that adopt a clear policy about how and when to conduct PIAs are taking an important first step in being able to consistently and confidently identify, evaluate, and mitigate privacy risks. PIAs (or similar privacy or data protection risk assessments) are already considered a best practice by public and private organizations around the world. In some places they are even required by law, such as for cities subject to the EU’s General Data Protection Regulation (GDPR).
Although PIAs traditionally focus on identifying privacy risks, they can also be an important mechanism for organizations to articulate the specific value and benefits that they expect to achieve from new smart city data flows and technologies. While PIAs are only one part of a comprehensive privacy program and should sit alongside other safeguards (such as training, data minimization, and regulation), cities acquiring or using smart city technologies will find PIAs to be valuable tools to:
- increase transparency and accountability and earn public trust,
- embed privacy by design throughout smart city project and data lifecycles,
- mitigate potential privacy harms or disparate impacts before they occur,
- improve compliance or reduce legal risk,
- encourage innovation by supporting ethical decision-making,
- facilitate internal and external communication and cooperation, and
- enable more confident and consistent decision-making about data and technology by city officials, their partners, and the public.
Cities must balance their own need to use and share data to conduct business with the broader public welfare and individual privacy interests in a way that builds and maintains public trust. Without public trust, the benefits of smart city technologies will ultimately be unsustainable. Cities must invest in policies and practices that will help individuals, local communities, and technology providers maximize the benefits of responsible data use while minimizing privacy risks to individuals and communities.
The Model Policy
This Model PIA Policy is a flexible, scalable policy framework based on proven best practices that cities around the world are already beginning to adopt. Every component of the policy reflects real-world practices and examples by leading cities and counties, including policies and templates from Seattle, Wellington, Helsinki, Santa Clara, Huron, and Toronto. The framework also builds on expert guidance from organizations like NIST, the Article 29 Working Party, and UN Global Pulse.
The Model PIA Policy is divided into two parts: a “Foundations” section that describes the process that cities should follow to conduct PIAs and a “Fundamentals” section that describes the issues that cities should consider in their PIAs. Optional guidance and examples appear throughout the policy framework for cities that have a greater privacy maturity level or that wish to conduct PIAs in more participatory ways.
In the “Foundations” section, cities are provided with a recommended process for conducting PIAs, which includes:
- Identifying organizational values, legal requirements, and risk tolerance around data and privacy,
- Defining what types of activities should be evaluated through a PIA and when to conduct PIAs (such as before a smart city technology is acquired, or whenever there are material changes to existing data processes),
- Incorporating threshold assessments and outside expertise to ensure higher risk activities are prioritized and to reduce the likelihood of bottlenecks,
- Recognizing the key roles and responsibilities needed to effectively carry out a PIA (including senior privacy officials, executive supporters, and program staff),
- Ensuring that PIAs are regularly reviewed and integrated into monitoring and recordkeeping systems, and
- Providing options to encourage transparency and engagement around the results of PIAs.
In the “Fundamentals” section, cities are provided with recommended issues for their PIAs to consider when evaluating a proposed smart city technology. These include:
- Identifying who within the city will be using the proposed smart city technology, for what purposes and public benefit, and under what authority (as appropriate),
- Articulating the city’s public values and relevant principles, privacy commitments, legal standards, or organizational risk frameworks,
- Describing the proposed smart city technology, including its technical capabilities and potential privacy impacts on individuals and communities,
- Documenting how the city will respond to any anticipated privacy risks (including potential impacts on civil rights and disparate impacts on marginalized communities), along with any safeguards or controls that may be used to mitigate those risks and any data use or management policies for the proposed smart city technology,
- Discussing the availability of funding and resources to provide for ongoing privacy and data protection costs related to the smart city technology, and
- Articulating any additional factors or contextual considerations that may be relevant, such as any community engagement conducted or exigent circumstances that could impact the current privacy risks or safeguards.
Supporting ethical data collection and sharing so that governments and municipalities can responsibly use data from their communities is a priority for FPF. Drafting a model PIA policy for a global audience is a complicated process, as cultural and legal approaches to privacy and data protection vary widely around the world. Smart city initiatives also vary considerably in their size and complexity. Our hope is that by providing a model policy for local governments to follow, we can increase the likelihood that cities will consider and address privacy risks in a manner consistent with community expectations.
Acknowledgements: This Model PIA Policy was a collaborative effort by members of the Privacy & Transparency Task Force and other G20 Smart Cities Alliance contributors and reviewers. Special thanks to Task Force Co-Chair Michael Mattmiller (Microsoft) and Task Force Members Pasquale Annicchino (Lex Digital), Sean Audain (Wellington City Council), Chandra Bhusan (Quantela), Dylan Gilbert (NIST), Eugene Kim (Sidewalk Labs), Naomi Lefkovitz (NIST), Jacqueline Lu (Helpful Places), and Daniel Wu (Immuta).
FPF’s participation was funded in part by the National Science Foundation (NSF SPOKES #1761795).
The Future of Privacy Forum works on privacy issues regarding smart communities. To learn more, please contact [email protected] and [email protected].