FPF in 2023: A Year in Review
As 2023 comes to an end, we want to reflect on a year that saw the Future of Privacy Forum (FPF) continue to expand its presence globally and domestically while organizing engaging events, publishing thought-provoking analysis, providing the latest expert updates, and more. FPF continues to convene industry leaders, academics, consumer advocates, and other experts […]
FPF Publishes New Report: A Conversation on Privacy, Safety, and Security in Australia: Themes and Takeaways
On October 27, 2023, the Future of Privacy Forum (“FPF”), in partnership with the UNSW Allens Hub for Technology, Law and Innovation (“Allens Hub”), convened a multidisciplinary meeting of experts on technology, privacy, safety, and security in Sydney, NSW, Australia to discuss benefits, challenges, and unanswered questions associated with the Australian eSafety Commissioner’s (“eSafety”) forthcoming […]
A Conversation on Privacy, Safety, and Security in Australia: Themes and Takeaways
DECEMBER 2023 | A Conversation on Privacy, Safety, and Security in Australia: Themes and Takeaways. The Future of Privacy Forum (FPF) is a non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. Learn more about FPF by visiting fpf.org. Authored by Amie Stepanovich, Vice […]
Risk Framework for Body-Related Data in Immersive Technologies
Today, the Future of Privacy Forum (FPF) released its Risk Framework for Body-Related Data in Immersive Technologies for organizations to structure the collection, use, and onward transfer of body-related data. Organizations building immersive technologies like extended reality and virtual worlds often rely on large amounts of data about individuals’ bodies and behaviors. While body-related data […]
FPF Risk Framework for Body-Related Data
DECEMBER 2023 | Risk Framework for Body-Related Data in Immersive Technologies […]
Five Big Questions (and Zero Predictions) for the U.S. State Privacy Landscape in 2024
Entering 2024, the United States now stands alone as the sole G20 nation without a comprehensive, national framework governing the collection and use of personal data. With bipartisan efforts to enact federal privacy legislation once again languishing in Congress, state-level activity on privacy dramatically accelerated in 2023. As the dust from this year settles, we […]
The PrivaSeer Project in 2023: Access to 1.4 million privacy policies in one searchable body of documents
In the summer of 2021, FPF announced our participation in a collaborative project with researchers from the Pennsylvania State University and the University of Michigan to develop and build a searchable database of privacy policies and other privacy-related documents, with the support of the National Science Foundation. This project, PrivaSeer, has since become an evolving, […]
A Blueprint for the Future: White House and States Issue Guidelines on AI and Generative AI
Since July 2023, eight U.S. states (California, Kansas, New Jersey, Oklahoma, Oregon, Pennsylvania, Virginia, and Wisconsin) and the White House have published executive orders (EOs) to support the responsible and ethical use of artificial intelligence (AI) systems, including generative AI. In response to the evolving AI landscape, these directives signal a growing recognition of the […]
EU AIA Conformity Assessment: A Step-by-Step Guide (Infographic)
EU AIA Conformity Assessment: A step-by-step guide

Step 1: Am I obligated to perform a CA?
- Q1: Do I fall under the AIA? Material scope: Art 2. Is it an 'AI system' as per Art 3(1)?
- Q2 (if yes): Is it a 'high-risk' AI system? See Table 1, Classification of high-risk AI systems under the AIA: AI systems that are safety components of products, or are themselves products, that fall under Annex II; or AI systems that belong to the use cases of Annex III and, per the (EC) text, whose output is not purely accessory and is likely to lead to significant risks, or, per the (EP) text, which pose a significant risk of harm.
- Q3 (if yes): Am I the provider? If not, per Art 3(e), the product manufacturer, distributor, importer, user, or another third party may be responsible for performing the CA.

Step 2: When to perform a CA?
- Ex ante: before placing the AI system on the EU market or putting it into service (definitions in Art 3(9, 11)).
- Ex post: after placing the AI system on the EU market or putting it into service:
  - A substantial modification to the AI system makes it a new AI system, and a new CA is required.
  - An AI system that continues to learn, where the pre-determined changes were documented in the initial CA, requires no new CA.
  - For reasons of public security, the protection of life and health of persons, environmental protection, or the protection of key industrial and infrastructural assets, a high-risk AI system may be placed on the market without a prior CA (Art 47).

Continued overleaf.
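The two-step decision flow above can be sketched as a small piece of illustrative logic. Everything here is hypothetical scaffolding written only to mirror the infographic's questions: the class, field names, and return strings are not part of the AIA or any official tooling, and the boolean flags stand in for legal determinations that in practice require case-by-case analysis.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical flags mirroring the infographic's questions.
    in_aia_scope: bool        # Q1: falls under the AIA (material scope, Art 2 / Art 3(1))
    high_risk: bool           # Q2: high-risk per the Annex II / Annex III tests
    actor_is_provider: bool   # Q3: the actor running the check is the provider
    substantially_modified: bool = False
    learning_changes_predetermined: bool = False  # documented in the initial CA

def step1_ca_obligation(s: AISystem) -> str:
    """Step 1: is this actor obligated to perform a conformity assessment (CA)?"""
    if not s.in_aia_scope:
        return "no CA: outside AIA material scope"
    if not s.high_risk:
        return "no CA: not a high-risk AI system"
    if s.actor_is_provider:
        return "CA required: provider performs it ex ante"
    return "CA may fall to the manufacturer/distributor/importer/user/third party"

def step2_new_ca_needed(s: AISystem) -> bool:
    """Step 2 (ex post): does a change to the system trigger a NEW CA?"""
    if s.substantially_modified and not s.learning_changes_predetermined:
        # A substantial modification makes it a new AI system -> new CA.
        return True
    # Pre-determined, documented learning changes need no new CA.
    return False
```

As the infographic notes, Art 47 adds a narrow exception to this flow: on public-security and similar grounds, a high-risk system may be placed on the market before the CA is completed.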
ICYMI: FPF Webinar Discussed The Current State of Kids’ and Teens’ Privacy
Privacy by design for kids and teens has expanded across the globe. As policymakers, advocates, and companies grapple with the ever-changing landscape of youth privacy regulation, the Future of Privacy Forum recently hosted a webinar discussing the current state of kids’ and teens’ privacy policy. The webinar explored the current frameworks that are influential worldwide, […]