Policy Brief: European Commission’s Strategy for AI, explained
The European Commission published a Communication on “Artificial Intelligence for Europe” on April 24, 2018. It highlights the transformative nature of AI technology for the world and calls for the EU to lead the way in developing AI grounded in a fundamental rights framework. AI for good and for all is the motto the Commission proposes. The Communication could be summed up as announcing concrete funding for research projects, clear social goals and more thinking about everything else.
The Communication lays out proposed actions for the coming years, fully taking into account that cooperation with Member States and at EU level is crucial. Some Member States have already developed AI strategies. France presented its national strategy for AI on March 29, and President Emmanuel Macron has been quite vocal about it. Germany has also set up a platform on learning systems to enable a strategic dialogue between academia, industry and the government, and it has put forward a report on the ethics of automated and connected driving. Finland has put forward a strategy as well. The Commission does not want to see a fragmented Single Market when it comes to AI, and it transpires from the Communication that this was one of the main reasons to take action at this stage.
The Strategy proposed by the Commission contains several streams of action, of which the major ones are:
Concrete financial support for the development of AI applications;
Initiatives to make data available to researchers;
Analyzing the impact of AI on the workforce and setting up a framework to support Member States in preparing the workforce to cope with the age of AI;
Establishing an appropriate ethical and legal framework (even though no major initiatives were announced, beyond adopting AI ethics guidelines this year);
Establishing an infrastructure for collaboration with Member States and research institutions.
Each of them will be briefly detailed below.
1) Financial support, including for the creation of an “AI Toolbox”
The Commission pledged 1.5 billion euros (1.75 billion dollars) for the period 2018-2020 for research and innovation in AI technologies under the Horizon 2020 program, primarily to support applications which address societal challenges in sectors such as health, transport and agrifood. Through public-private partnerships, this amount is estimated to increase by 2.5 billion euros over the same period. The Commission will also support the strengthening of AI research excellence centers. One ambitious target is to stimulate the uptake of AI across Europe through a toolbox for potential users, with a focus on small and medium-sized enterprises, non-tech companies and public administrations.
The toolbox will include:
an AI-on-demand platform giving support and easy access to the latest algorithms and expertise;
a network of AI-focused Digital Innovation Hubs facilitating testing and experimentation; and
the set-up of industrial data platforms offering high quality datasets.
In addition, the Commission aims to stimulate more private investment in AI under the European Fund for Strategic Investments (at least 500 million euros in 2018-2020).
2) Initiatives to make data available for researchers: a new support centre for data sharing
Acknowledging that data is essential for the development of AI, the Commission wants to act towards growing the European data space. It is important however to mention that the focus of the data sharing initiatives is primarily on non-personal data and public sector information (traffic, meteorological, economic and financial data or business registers). As for personal data and privately-held data, the Commission emphasizes that any sharing initiative must ensure full respect for legislation on the protection of personal data.
To support making data available in a responsible way, the Commission published, together with the Communication, a series of new initiatives and guidance:
Guidance on sharing private sector data in the economy (including industrial data), which is designed for use across all sectors of the economy. The Guidance contains a “How to” guide on the legal, business and technical aspects of data sharing that can be used when considering and preparing data transfers between companies from the same or different sectors.
A Communication on the digital transformation of health and care, including sharing of genomic and other health data sets. This Communication announces several initiatives, including supporting the development of technical specifications for secure access and cross-border exchange of genomic and other health datasets within the internal market for research purposes, in order to facilitate interoperability of relevant registries and databases in support of personalized medicine research.
3) Dealing with the impact of AI on EU workforce
Preparing the society as a whole for the impact of AI is the first main challenge identified by the Communication in this area. The aim is to develop programs that will help all Europeans to develop basic digital skills, as well as skills which are complementary to and cannot be replaced by any machine, such as critical thinking, creativity or management.
The second challenge is the impact of AI on jobs and workers. The Commission announced that the EU will focus efforts on helping workers in jobs which are likely to be the most transformed by, or to disappear due to, automation, robotics and AI. This also means ensuring access to social protection for all citizens, including workers. The Commission intends to set up dedicated re-training schemes, in connection with the Blueprint on sectoral cooperation on skills, for professional profiles which are at risk of being automated, with financial support from the European Social Fund.
The third challenge is training more specialists in AI, including attracting talent from abroad. The Commission estimates that there are about 350,000 vacancies for such professionals in Europe, pointing to a significant skills gap. To this end, the European Institute of Innovation and Technology will integrate AI across the curricula of the education courses it supports.
4) Ensuring both legal and ethical frameworks for the development of AI
The Communication does not announce any new legislative proposal, but it emphasizes how the current proposals debated in Brussels are relevant for AI (the Regulation on the free flow of non-personal data, the ePrivacy Regulation and the Cybersecurity Act) and that they need to be adopted as soon as possible so that citizens and businesses alike will be able to trust the technology they interact with. The possibility that the Product Liability Directive will be revised is announced, but the only concrete step mentioned for the time being is an analysis of the current provisions and whether they are fit to deal with AI technology.
In addition, the Commission highlights the role the GDPR plays in regulating the use of personal data for AI-related purposes, including with regard to the right of individuals to receive meaningful information about the logic involved in automated decision-making and their right not to be subject to solely automated decision-making except in limited circumstances. The Commission intends to support national and EU-level consumer organisations and data protection supervisory authorities in building an understanding of AI-powered applications, with the input of the European Consumer Consultative Group and of the European Data Protection Board.
As for dealing with ethics, the Communication announces that “AI Ethics Guidelines” will be adopted by the end of the year, in collaboration with “all relevant stakeholders”. The Guidelines are expected to address issues such as the future of work, fairness, safety, security, social inclusion and algorithmic transparency. To this end, the Commission set up a High-Level Expert Group on Artificial Intelligence and the European AI Alliance.
5) Establishing partnerships with Member States
Finally, the Commission focuses on establishing partnerships at EU level. By the end of the year the Commission will work on a coordinated plan with Member States to maximise the impact of investments at EU and national levels, exchange information on the best way for governments to prepare Europeans for the AI transformation and address legal and ethical considerations. At the same time, the Commission plans to systematically monitor AI-related developments at Member State level.
It is relevant to note here that a week before the Commission published its strategy on AI, 25 EU Member States signed a Declaration of Cooperation on Artificial Intelligence, and they were joined one month later by the rest of the Member States. Currently, all 28 EU Member States are signatories to the Declaration.
For more information on the Future of Privacy Forum’s work on AI, or if you have questions related to this Policy Brief, contact:
Brenda Leong, Senior Counsel and Director of Strategy at [email protected]
FPF Publishes Report Supporting Stakeholder Engagement and Communications for Researchers and Practitioners Working to Advance Administrative Data Research
The ADRF Network is an evolving grassroots effort among researchers and organizations seeking to collaborate on improving access to, and promoting the ethical use of, administrative data in social science research. As a supporter of evidence-based policymaking and research, FPF has been an integral part of the Network since its launch and has chaired the network’s Data Privacy and Security Working Group since November 2017.
This summer, the ADRF Network published its first set of working group reports to help advance standards and best practices for administrative data researchers and practitioners. The reports address priority issues in administrative data research, including: Data Quality and Standards; Data Sharing Governance and Management; and Communicating about Data Privacy and Security. The working groups engaged over 30 experts from universities, government agencies, and other institutions.
FPF CEO Jules Polonetsky and FPF Policy Counsel Kelsey Finch led the work on Communicating about Data Privacy and Security as part of FPF’s ongoing efforts to support proactive and privacy-focused stakeholder engagement and communications around administrative data research. While strong privacy safeguards are the foundation of any administrative data research, effectively communicating how and why administrative data are being used and protected, and providing stakeholders with meaningful input into the research process, are essential to maintaining public trust.
In the report, we identify the “why, when, who, and how” of communicating about data privacy and security while doing administrative data research. The report highlights the importance of engaging a diversity of stakeholders at multiple stages in the research lifecycle, and includes an initial matrix model, building on the GovLab’s People-Led Innovation framework, to ensure active engagement. We also apply the model to a hypothetical research project to further inspire researchers and practitioners to think creatively about meaningful opportunities for stakeholder engagement.
Publication of these reports is a pivotal step toward developing industry-wide best practices for researchers and practitioners working to advance administrative data research. We believe that stakeholder engagement and communicating about data privacy and security are crucial to the future success of administrative data research.
FPF is thankful to the Alfred P. Sloan Foundation for making this work possible; to Monica King for her leadership of the ADRF Network; and our fellow Data Privacy & Security Working Group members for their thoughtful contributions. Working Group participants included: Elizabeth Dabney, Data Quality Campaign; Tanvi Desai, Data Strategy Consultant; Valerie Holt, ECDataWorks; Della Jenkins, Actionable Intelligence for Social Policy; Stefaan Verhulst, GovLab; and Evan White, California Policy Lab.
The Top 10: Student Privacy News (Feb – July 2018)
The Future of Privacy Forum tracks student privacy news very closely, and shares relevant news stories with our newsletter subscribers. Approximately every month, we post “The Top 10,” a blog with our top student privacy stories. This blog is cross-posted at studentprivacycompass.org.
The Top 10
1. School Safety and Student Privacy, Part 1: how do we prevent these tragedies while protecting student privacy?
The horrific shooting at Marjory Stoneman Douglas High School in February has highlighted numerous student privacy issues. As I discussed in FPF’s statement to the Federal Commission on School Safety, there are many legitimate reasons, including attempting to prevent acts of violence, that schools surveil students, but there are also significant privacy and equity concerns that must be considered. In the wake of the shooting:
Florida passed the Marjory Stoneman Douglas High School Public Safety Act into law. One of the provisions requires that the newly created Florida Department of Education Office of Safe Schools coordinate with the Department of Law Enforcement to “provide a centralized integrated data repository and data analytics resources to improve access to… data from, at a minimum… social media; Department of Children and Families; Department of Law Enforcement; Department of Juvenile Justice; and Local law enforcement;”
At the meeting, Attorney General Jeff Sessions implied that data privacy may hamper school shooting prevention measures, saying “you have juvenile courts — they maintain records quite confidentially in secret. You have school systems that maintain records and information, and they keep it private. Police are taught to maintain privacy in what they do. Psychological treatment is maintained quite privately as well as medical treatments… In a way, you would think wouldn’t it be good if all the people that were involved in this, the school resource officers, the counselors, the principals, the teachers, could discuss a child pretty openly about what kind of difficulties this child may be having, what kinds of risks are there.”
Secretary Nielsen of the Department of Homeland Security asked if there were models, templates, or MOUs the Federal Government could release to help schools manage their relationships and data sharing with law enforcement.
FPF’s John Verdi testified that FERPA’s requirement that the health and safety exception only be used in cases where there is a rational basis for believing there is a threat to the health or safety of someone within the school community gives schools wide latitude to share information without completely disregarding students’ privacy interests;
Sonja Trainor, another panelist speaking on FERPA, noted that schools often cannot share information with law enforcement if they believe a child has been wrongfully identified as a risk to the community. She articulated that schools may need to be given the authority to share information with law enforcement so that they can perform accurate risk assessments of students;
Jennifer Mathis, Director of Policy & Legal Advocacy for the Bazelon Center, reiterated that HIPAA’s privacy protections are important and “without the assurance of privacy protections, students are both less likely to seek out help when they need it and less likely to engage openly with mental health counselors or other service providers”;
Doris Fuller, Parent & Mental Health Advocate, said that the problem with HIPAA’s privacy protections is not that they are too broad, but that providers and health care officials often don’t understand when they can share information and with whom.
Marjory Stoneman Douglas High School students’ “manifesto to fix America’s gun laws” includes “Chang[ing] privacy laws to allow mental healthcare providers to communicate with law enforcement;”
“Local Police Director Wants Laws Protecting Student Records Loosened in Midst of School Shootings” via NBCPhiladelphia;
On Thursday, the Secret Service (an agency within the Department of Homeland Security) released a guide on how schools could implement a threat assessment model to enhance school safety.
3. Facebook, Cambridge Analytica, and the Anti-Tech Wave
In the meantime, the tech backlash has continued to impact ed tech: Wired reports that “It’s time for a serious talk about the science of tech addiction” and its impact on well-being and health; “Ninety-five percent of principals said students spend too much time on digital devices when they’re not in school” via EdWeek; two higher ed professors write that “There Are No Guardrails on Our Privacy Dystopia” in Motherboard; and “screen time” also continues to be a big topic of debate on blogs and in the news, with implications for privacy.
4. New Department of Education guidance says that parents, not minor students, must consent to college admissions pre-test surveys and data sharing.
Privacy concerns have been raised after Pearson presented at AERA on “social-psychological messages [they tested] in [college computer science] learning software, with mixed results.” Pearson “emphasized that the experiment was ‘an effort to improve student success in higher education courseware’” and noted that, while Pearson always “‘evaluate[s] the introduction of changes to determine if they require additional ethical or legal review or consultation,… the introduction of feedback messages about how to improve student success, was determined to be a part of normal educational practice.’” Education researchers have also pushed back on the concerns raised, with Justin Reich writing in EdWeek that “[e]very educational software company and publisher will be modifying their products over time to try to improve them; and I’d like to incentivize them to do so in a way that the public benefits from those companies sharing what they learn.”
6. Senators Blumenthal and Daines reintroduced their 2015 student privacy bill aimed at vendors, the SAFE KIDS Act, in March.
Meanwhile, in May, the House Education and Workforce Committee held its fourth hearing on student privacy in three years. FPF’s Amelia Vance was invited to testify. You can watch the hearing here, read the testimony of the speakers here, or read the write-up in EdScoop.
7. Can students or parents record what happens in school?
Three news stories raised this question:
The Bangor Daily News reported that the 1st U.S. Circuit Court of Appeals “ruled against a Maine couple who wants to record the school day of their son with autism and a rare neurological disorder that affects his speaking ability.” The court “point[ed] to an administrative hearing officer’s finding that the recorder would provide ‘simply no demonstrable benefit.’”
A Virginia mother is no longer facing charges after she sent her daughter to school with a recording device to find out who was bullying her. The prosecutors said that, although there was enough evidence to support a felony charge, “the office is exercising prosecutorial discretion to not pursue the prosecution of this case.”
In the study, the authors describe existing privacy laws, map the commercial marketplace, and describe the challenges of understanding how data about students is collected and used. FPF released a blog responding to the concerns raised in the study, and expanding on how various federal and state privacy laws – from FERPA to FCRA to PPRA – may or may not apply to the practices described.
Common Sense Media also recently introduced simpler privacy ratings for education apps to make “privacy and security more accessible.”
Just for Fun
There is a fantastic new video from the Utah State Board of Education on the Other FERPA Exceptions (see their original break-out hit here!)
FPF Testifies Before Federal Commission on School Safety
By Amelia Vance, Sara Collins, Tyler Park, and Erika Ross
John Verdi, the Future of Privacy Forum’s Vice President of Policy, testified today before the Federal Commission on School Safety at its meeting, “Curating a Healthier & Safer Approach: Issues of Mental Health and Counseling for Our Young.” He recommended that, rather than changing current federal student privacy law, the Commission should explore opportunities to educate school officials and other stakeholders regarding the existing legal authorities for sharing data to support school safety.
He provided three concrete recommendations that the Commission could follow to improve student safety and safeguard privacy:
Be mindful of the full range of privacy risks and harms, as well as the importance of privacy safeguards, as it considers options to improve school safety;
Support efforts to better educate and communicate with stakeholders regarding existing legal authorities that permit data sharing to promote health and safety within a framework that mitigates privacy risks to students; and
Call for neutral, expert analysis of empirical data regarding the nature, extent, and leading causes of the key privacy risks and safety risks facing students and schools.
Mr. Verdi stated that privacy risks pose particular challenges when they arise in the context of children’s or students’ personal information. Physical harm and loss of liberty are especially egregious when the victim is a child. Financial fraud and identity theft increasingly target young Americans, who are often unable to discover or combat the crimes until years later. Children are also susceptible to specialized schemes – including medical identity theft – that can create substantial health risks when multiple individuals’ medical records are merged as a result of the crime.
FERPA already contains a specific exception that permits information to be shared to protect the health and safety of students, whether the child in question is a threat to themselves, or to others. In 2008, the Department of Education amended FERPA regulations to remove the language requiring strict construction of this exception and permit disclosure when an articulable and significant safety threat exists. The Department assured school officials they would support the disclosure if there was a “rational basis for the school’s determination” at the time it was made.
The 2008 amendments adopted a “totality of the circumstances” test and the “rational basis” approach to Department review of school officials’ decisions. The “totality of the circumstances” test authorizes disclosure of protected student information when the totality of the circumstances suggest that disclosure would mitigate a health or safety threat; this test broadened schools’ authority, replacing the previous “strict construction” standard, which suggested that disclosure was only authorized when strictly necessary to preserve health and safety. The “rational basis” approach assures districts that the Department does not second-guess disclosure decisions from a perspective of perfect hindsight; instead, the Department will view assertion of the health and safety exception as appropriate if the district identifies an articulable threat that serves as the rational basis for the disclosure.
Mr. Verdi testified that untethering disclosure authority from the “totality of the circumstances” or “rational basis” tests would necessarily increase privacy risks to students. He also noted that a dramatic broadening of authority could increase sharing of student information in a way that overwhelms administrators with data, casts suspicion on students who show no signs of violent behavior, and fails to promptly identify individuals who pose genuine threats to school safety. In particular, he mentioned that mentally ill students can be disincentivized from seeking help if they fear that their privacy will not be protected.
Mr. Verdi advocated that the Commission instead focus on educating school officials and other stakeholders regarding the existing legal authorities for sharing data to support school safety. The Department of Education’s Privacy Technical Assistance Center (PTAC) has been vital for schools seeking practical guidance on FERPA. PTAC could publish guidance, hold training sessions, and provide additional technical assistance on this issue.
Finally, Mr. Verdi recommended that the Commission call for further research into the nature, extent, and leading causes of the key privacy risks and safety risks facing students and schools.
At a previous Federal Commission on School Safety listening session, FPF’s Amelia Vance, Director of the Education Privacy Project, spoke on the important balance between schools’ obligation to protect student privacy and their obligation to provide a safe learning environment for students. She touched on the importance of schools being transparent about their interactions with law enforcement, and said that data sharing should occur only when there is a serious threat of violence, not a minor infraction of a school code.
FPF is a non-profit organization focused on consumer privacy issues. FPF primarily equips and convenes key stakeholders to find actionable solutions to the privacy concerns raised by the speed of technological development. FPF’s Education Privacy Project works to ensure student privacy while supporting technology use and innovation in education that can help students succeed. Among other initiatives, FPF maintains the education privacy resource center website, FERPA|Sherpa, and co-founded the Student Privacy Pledge.
The Future of Privacy Forum and The Providence Group invite you to participate in the inaugural Privacy War Games event on November 12th, from 8:30 am – 4:00 pm, in San Jose, California. The event will take place at Cisco’s Headquarters, located at 255 West Tasman Drive, Building J, San Jose, CA 95134. Click here for a list of preferred hotels.
In recent years, many leading companies have introduced war games in cybersecurity and other strategic areas as a way to ensure that they are fully prepared for key challenges and unexpected risks. Similarly, the national security community has used war games to give senior leaders deeper insights into issues, assumptions, and often counterintuitive dynamics of decision-making that are not usually available from other qualitative research techniques. War games also give participants an opportunity to engage in activities and wrestle with issues that are not part of their day-to-day experiences or particular fields of specialty.
Why Privacy War Games?
For privacy professionals who are tasked with managing privacy risk, privacy war games can be an effective way to practice strategic decision-making in a risk-free environment – before choices have to be made in the real world.
The Future of Privacy Forum and The Providence Group have collaborated to develop and conduct an analytical privacy war game designed to gain insights that will help privacy professionals manage future privacy risk – an increasingly complex task that is made more difficult by: the increasing number of state and sectoral privacy laws; evolving regulatory and compliance requirements; and the regulatory and legal ambiguity of the European General Data Protection Regulation (GDPR).
What is the difference between a table-top exercise and our Privacy War Game?
A table-top exercise usually is a discussion-based game that allows participants, sitting around tables, to interact with one another from their current professional perspective. Table-top games engage players with a set of topics, sometimes in narrative form, and allow specific decisions to be considered. A facilitator will often add new information to spur players into exploring the relationship between their decisions or actions.
Our privacy war game, on the other hand, is a multi-player, scenario-based game with multiple game turns. In a scenario-based game, players are presented with a specific scenario starting point and then play the game through a series of game turns in which each of the game teams must react to and is influenced by the other teams’ moves. This dynamic environment adds complexity to the game and forces players to think about both their own decisions and the likely impact of the other teams in the game.
Additionally, because it is a multi-player game, game participants assume player roles on the game teams that do not necessarily comport with their current job. This provides game players a unique opportunity to explore a scenario from different perspectives, enabling deeper (and sometimes counter-intuitive) understandings of relevant privacy challenges.
What you’ll take away:
Benefit from an opportunity to “step into the shoes” of other stakeholders, including business executives, regulators, legislators, courts, civil society groups, and consumers.
Learn what to watch out for as you: analyze and navigate a complex privacy scenario; and react to strategic responses and decisions made by other stakeholders who are playing the game.
Take home industry-specific best practices for managing privacy risk.
This Nov. 12 Privacy War Games event will be the beta version of this effort, so we are offering it at a discounted price to our FPF members. We will be using the feedback from this exercise to develop a program that we hope to replicate and offer more broadly.
Mobile Platforms Address Data Privacy with 2018 Updates (iOS 12, Mojave, & Android P)
Authors: Gargi Sen, Stacey Gray
In light of recent debates over Facebook’s role in protecting users’ privacy against third-party app developers, many are recognizing the importance of mobile platforms in safeguarding user data. As the General Data Protection Regulation (GDPR) takes effect in Europe, and initiatives like the California Consumer Privacy Act are debated in America, we will likely continue to see a strong focus on the privacy protections of technology platforms, intermediaries, and operating systems.
Apple emphasized privacy in its Worldwide Developers Conference (June 4-8, 2018), highlighting several privacy-related updates to the upcoming macOS and iOS 12 (described below). The company has also recently updated its App Review Guidelines, clarifying that app developers must respect user privacy – for example, by adhering to a principle of purpose limitation (data collected for one purpose may not be repurposed without further consent) and avoiding certain kinds of behavioral advertising using sensitive information. Google also made privacy a focus of their newest mobile operating system, Android P, with several key software updates that will restrict app developers’ access to user data.
Apple’s Fall Updates
Privacy upgrades coming to iOS 12 and macOS Mojave include:
Significant Updates to App Store Guidelines: Following the WWDC 2018 conference in early June, Apple made significant changes to its existing App Store Review Guidelines to safeguard user privacy. For example, the updated guidelines bar developers from creating databases from users’ address book information (contact lists and photos). Apps must also have privacy policies, and must request explicit user consent and provide a “clear visual indication when recording, logging, or otherwise making a record of user activity.”
iOS-Style Permissions for Desktop Apps: macOS apps will now be required to request the user’s permission to access certain device sensors, such as the camera or microphone.
Browser Consent Notifications: Safari will display consent prompts upon detecting website tracking from “Share” and “Like” buttons and website comment feeds. If enabled by default, this will affect social media plugins as well as other interactive features of many websites.
Prevention of Device Fingerprinting: Safari in macOS Mojave will contain updates designed to prevent device fingerprinting. As FPF described in a 2015 report on cross-device tracking, devices and browsers can be identified with a degree of probability through metadata sent in web traffic – such as the system fonts, screen size, installed plug-ins, etc. This kind of digital “fingerprinting,” often referred to as server-side recognition, is often used for short-term advertising attribution and measurement. In Mojave, the Safari web browser will present websites with a “simplified system configuration,” in order to prevent server-side recognition by making each browser appear more standardized.
USB Restricted Mode: Although not discussed in the WWDC keynote, an upcoming feature may require iPhone users to input their passcode to unlock the phone when connecting it to a USB accessory if the phone has been locked for an hour or more. This feature would make it much more difficult for an outside entity, such as a government or law enforcement vendor, to unlock a user’s phone without permission.
Google’s Android Fall Updates
Android P, the newest version of Google’s Android mobile operating system, will also bring ambitious updates for user privacy. Many of these updates were announced in May at Google I/O, Google’s annual developer festival. The Android P operating system, which will be available in the Fall, can currently be downloaded in beta form for those who are interested and have compatible devices.
Restricted Background Access to Sensitive Sensors: If an app is running in the background on an Android P device, the operating system will restrict the app’s access to the user’s microphone and camera, and will generate an error if the app tries to access them. If an app does need to access these sensors while in the background, it must use a foreground service and show a persistent notification to inform the user of its activity (a minimal foreground-service sketch appears after this list).
Restricted Access to Call Logs and Phone Numbers: Apps running on Android P can no longer read phone numbers or the user’s call log without first obtaining the user’s in-app permission (similar to location, microphone, or other sensitive sensors); a minimal permission-check sketch appears after this list.
Per-Network MAC Address Randomization: Android P has introduced an experimental feature that will generate a different MAC (media access control) address for every Wi-Fi network that the user connects to, making it harder to track individual users. The MAC address is a unique hardware identifier that devices must broadcast in order to connect to Wi-Fi networks. In many physical locations, such as retail spaces, airports, and stadiums, MAC addresses are used to count and analyze pedestrian traffic. However, because the MAC address cannot be readily changed, privacy concerns about tracking individual users have emerged in recent years. In response, operating systems have begun to randomize the MAC addresses sent in Wi-Fi “probe requests” (used to connect automatically to known networks nearby). Android P improves on this default and offers users more anonymity by using a different randomized identifier for each Wi-Fi network the user connects to.
Location Permission Required for Wi-Fi Scanning: Android P will require apps to obtain the user’s permission to access location information (“ACCESS_COARSE_LOCATION” or “ACCESS_FINE_LOCATION”) before the app may scan for nearby Wi-Fi networks or read Wi-Fi connection information (see the permission-gated scan sketch after this list). Many apps facilitate Wi-Fi connections (such as hotspot finders), and in addition, signals from nearby Wi-Fi networks can be used to enhance the accuracy of GPS-based location information. In 2016, the Federal Trade Commission brought an enforcement action against InMobi, a mobile advertising network, after it was found to be inferring users’ location based on nearby networks even if the users had disabled location services.
Protection of a Unique Identifier: Every Android phone has a unique hardware serial number (distinct from the IMEI), which stays the same through any number of factory resets. Following changes made last year in Android O, app developers can no longer access this serial number without using a new API (Build.getSerial()), which will provide it only if the developer has obtained the user’s permission to read the phone state (a minimal sketch appears after this list).
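To make the foreground-service requirement concrete, here is a minimal Kotlin sketch of a service that keeps a persistent notification visible while a sensor is in use. This is not Google’s official sample code; the class name, channel id, and notification text are illustrative, and the sketch assumes an app targeting API level 26 or higher.

```kotlin
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.IBinder

// Sketch of a foreground service with the persistent notification Android P
// expects while the microphone or camera is in use in the "background".
// The service must also be declared in AndroidManifest.xml.
class RecordingService : Service() {

    override fun onCreate() {
        super.onCreate()
        val channelId = "recording_channel" // illustrative channel id
        getSystemService(NotificationManager::class.java).createNotificationChannel(
            NotificationChannel(channelId, "Recording", NotificationManager.IMPORTANCE_LOW)
        )
        val notification: Notification = Notification.Builder(this, channelId)
            .setContentTitle("Recording in progress") // visible while the sensor is used
            .setSmallIcon(android.R.drawable.ic_btn_speak_now)
            .build()
        startForeground(1, notification) // promotes the service to the foreground
        // ... start the MediaRecorder or camera capture here ...
    }

    override fun onBind(intent: Intent?): IBinder? = null
}
```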
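The call-log restriction behaves like any other runtime (“dangerous”) permission. A minimal sketch, assuming an illustrative Activity subclass and an arbitrary request code, might look like this:

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager

// Sketch of the in-app permission an app must hold on Android P before
// reading the call log; class name and request code are illustrative.
class CallLogActivity : Activity() {

    private fun readCallLogIfPermitted() {
        if (checkSelfPermission(Manifest.permission.READ_CALL_LOG) ==
            PackageManager.PERMISSION_GRANTED
        ) {
            // Safe to query CallLog.Calls.CONTENT_URI here.
        } else {
            // Shows the runtime permission dialog; the result arrives in
            // onRequestPermissionsResult().
            requestPermissions(arrayOf(Manifest.permission.READ_CALL_LOG), 1)
        }
    }
}
```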
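Similarly, the Wi-Fi scanning restriction means an app should confirm that it holds a location permission before asking WifiManager to scan. A minimal sketch follows; the function name is illustrative.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.net.wifi.WifiManager

// Sketch of gating a Wi-Fi scan on the location permission Android P requires.
fun scanWifiIfPermitted(context: Context): Boolean {
    val hasLocation = context.checkSelfPermission(
        Manifest.permission.ACCESS_FINE_LOCATION
    ) == PackageManager.PERMISSION_GRANTED
    if (!hasLocation) return false // the caller should request the permission first

    val wifi = context.applicationContext
        .getSystemService(Context.WIFI_SERVICE) as WifiManager
    return wifi.startScan() // scan results are only meaningful with the permission held
}
```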
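Finally, the permission-gated serial-number access introduced in Android O can be sketched as below; the helper name is illustrative, and the function returns null when the phone-state permission has not been granted.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build

// Sketch of permission-gated access to the hardware serial number.
fun deviceSerialOrNull(context: Context): String? {
    val granted = context.checkSelfPermission(
        Manifest.permission.READ_PHONE_STATE
    ) == PackageManager.PERMISSION_GRANTED
    if (!granted) return null
    return try {
        Build.getSerial() // replaces the old, freely readable Build.SERIAL field
    } catch (e: SecurityException) {
        null
    }
}
```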
Importantly, we have seen both Apple and Google take concrete steps this year to enforce their existing developer guidelines. For example, Apple began removing apps from the App Store last month for violations of policies against sharing location data with third party advertisers without users’ consent. Similarly, when security researchers found 36 apps on Google Play that inappropriately collected user data, including geo-location and device information, these apps were removed from the Google Play Store.
These updates to iOS 12 and Android P are an important part of ongoing efforts by mobile platforms to safeguard user privacy and enhance user control. Although operating systems primarily act as data intermediaries – facilitating the user’s interactions with mobile apps of his or her choice – they are also well-positioned to protect users’ expectations. For example, most users prefer that apps be restricted from accessing their geo-location without permission, or using it for secondary purposes beyond what they agreed to or expected. Mobile operating systems can provide these protections and assurances through technical measures and app developer license agreements.
Authors:
Gargi Sen is a Legal Fellow at the Future of Privacy Forum, with 10+ years of experience in technology contracts, compliance, and risk assessments.
Stacey Gray is a Policy Counsel at the Future of Privacy Forum, specializing in Internet of Things, Ad Tech, and geo-location data privacy issues.
Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models
FPF and Immuta released the first-ever framework for practitioners to manage risk in artificial intelligence and machine learning (ML) models. The joint whitepaper, Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, provides business executives, data scientists, and compliance professionals with a strategic guide for governing the legal, privacy, and ethical risks associated with this technology.
Beyond Explainability aims to provide a template for effectively managing this risk in practice, with the goal of providing lawyers, compliance personnel, data scientists, and engineers a framework to safely create, deploy, and maintain ML, and to enable effective communication between these distinct organizational perspectives. The ultimate aim of this paper is to enable data science and compliance teams to create better, more accurate, and more compliant ML models.
Immuta and the Future of Privacy Forum Release First-Ever Risk Management Framework for AI and Machine Learning
New Guidelines Provide Global Enterprises with a Practical Approach to Managing the Legal and Ethical Challenges of Artificial Intelligence and Machine Learning
College Park, MD – June 26, 2018 – Immuta and the Future of Privacy Forum (FPF) today announced the first-ever framework for practitioners to manage risk in artificial intelligence (AI) and machine learning (ML) models. Their joint whitepaper, Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models, provides business executives, data scientists, and compliance professionals with a strategic guide for governing the legal, privacy, and ethical risks associated with this technology.
New risks are emerging as AI and ML are increasingly adopted by enterprises across industries. In Beyond Explainability, Immuta and FPF provide compliance personnel and data scientists with concrete steps for minimizing these risks at scale, leveraging Immuta’s experience managing and deploying models and the FPF’s expertise with applying privacy principles and responsible data policies.
“While algorithms are never free from risk, there are concrete steps that we can take to thoroughly document and monitor machine learning models throughout their lifecycle,” said Andrew Burt, chief privacy officer and legal engineer, Immuta. “Future of Privacy Forum continues to show leadership and foresight to help commercial organizations navigate the new privacy challenges of machine learning. By partnering with the FPF on this whitepaper, we are able to provide clear guidance and best practices to data scientists and compliance teams for designing, using, and maintaining more accurate and more responsible machine learning models.”
The FPF is a leading non-profit organization for guidance on privacy and data governance issues, working in partnership with industry, leading academics, and other civil society stakeholders. Together, Immuta and FPF have created a comprehensive framework to help govern machine learning models, establishing effective communication between the two independent organizational perspectives represented by compliance departments and data science programs. These perspectives must be aligned now more than ever for machine learning models to be successfully developed and used across the enterprise.
“Rapid technological innovation associated with AI and machine learning has created new ethical and governance challenges,” said Brenda Leong, senior counsel and director of strategy, Future of Privacy Forum. “Through our partnership with Immuta, we seek to clarify those challenges and provide practical solutions by developing a workable business template for enterprises using machine learning models and AI technologies.”
Download a copy of the Beyond Explainability whitepaper today, including a “Model Management Checklist” to aid practitioners as they build, test, and monitor machine learning models at: www.immuta.com/beyond. This link also features a video of Andrew Burt and Brenda Leong discussing the goals of this whitepaper and Immuta’s partnership with FPF.
About Immuta
Immuta is the fastest way for algorithm-driven enterprises to accelerate the development and control of machine learning and advanced analytics. The company’s hyperscale data management platform provides data scientists with rapid, personalized data access to dramatically improve the creation, deployment and auditability of machine learning and AI. Founded in 2014, Immuta is headquartered in College Park, Maryland. For more information, visit www.immuta.com and follow us on Twitter (www.twitter.com/ImmutaData) and LinkedIn (www.linkedin.com/company/immuta/).
About Future of Privacy Forum
Future of Privacy Forum is a nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. FPF brings together industry, academics, consumer advocates, and other thought leaders to explore the challenges posed by technological innovation and develop privacy protections, ethical norms, and workable business practices. For more information about FPF, visit www.fpf.org and follow us on Twitter (www.twitter.com/futureofprivacy), Facebook (www.facebook.com/FutureofPrivacy), and LinkedIn (www.linkedin.com/company/the-future-of-privacy-forum).
Last week, we launched the Israel Tech Policy Institute, an incubator for tech policy leadership and scholarship, advancing ethical practices in support of emerging technologies. Co-founded by Jules Polonetsky, CEO of the Future of Privacy Forum, and Omer Tene, an Israeli law professor and VP and Chief Knowledge Officer at the International Association of Privacy Professionals, the Israel Tech Policy Institute is a new think tank established to provoke, convene and lead policy discussions and support research on privacy, cybersecurity and the ethical use of technologies. Liron Tzur-Neumann, a Senior Fellow at the Institute, is an associate at HFN with experience at the Israeli Antitrust Authority. The Israel Tech Policy Institute Advisory Board provides guidance to ITPI staff on major policy initiatives.
In a recent interview with the International Association of Privacy Professionals’ Privacy Tech, Jules expressed that he is betting on Israel to continue its surge upwards, hoping that a country known for being a cybersecurity leader will also end up emerging as a privacy-technology leader. He explains:
“For us, it really made sense since we are the Future of Privacy Forum, and we constantly look at the new technologies and what impact they can have on people and society and how can we put the right rules in place to help privacy leaders and policymakers figure out the right way to engage.”