FPF Weighs In on the Responsible Use and Adoption of Artificial Intelligence Technologies in New York City Classrooms
Last week, the Future of Privacy Forum (FPF) provided testimony at a joint public oversight hearing before the New York City Council Committees on Technology and Education on “The Role of Artificial Intelligence, Emerging Technology, and Computer Instruction in New York City Public Schools.”
Specifically, FPF urged the Council to consider the following recommendations for the responsible adoption of artificial intelligence technologies in the classroom:
- Establish a common set of principles and definitions for AI, tailored specifically to educational use cases;
- Identify AI uses that pose major risks – especially tools that make decisions about students and teachers;
- Create rules that combat harmful uses of AI while preserving beneficial use;
- Build more transparency within the procurement process with regard to how vendors use AI; and
- Take a student-driven approach that advances the ultimate goal of serving students and improving their educational experience.
During this back-to-school season, school districts across the country are wrestling with how to manage the proliferation of artificial intelligence technologies in the tools and products used in K-12 classrooms. In the 2022-2023 school year, districts used an average of 2,591 different edtech tools. While there is no standard convention for indicating that a product or service uses AI, we know the technology has been embedded in many different types of edtech products for some time. We encourage districts to be transparent with their school communities about how AI is used within the products they adopt.
But first, it is critical to ensure uniformity in how AI is defined, so that it is clear which technologies are covered and so that overly broad rules with unintended consequences are avoided. A February 2023 audit by the New York City Office of Technology and Innovation on “Artificial Intelligence Governance” found that the New York City Department of Education has not established a governance framework for the use of AI, a gap that creates risk. FPF recommends starting with a common set of principles and definitions tailored specifically to educational use cases.
While generative AI tools such as ChatGPT have gained public attention recently, many other tools already used in schools fall under the umbrella of AI. Uses range from the commonplace, such as autocompleting a sentence in an email or speech-to-text tools that provide accommodations to special education students, to more complex algorithms that identify students at higher risk of dropping out. Effective policies governing the use of AI in schools should take a targeted, risk-based approach aimed at solving a particular problem or issue.
The moratorium on adopting biometric identification technology in New York schools, enacted through the 2020 passage of State Assembly Bill A6787D, illustrates how an overly broad law can have unintended consequences. Although lawmakers appeared to be addressing legitimate concerns about facial recognition software used for school security, a form of algorithmic decision-making, the moratorium swept more broadly: arguably, it could be read to ban the use or purchase of many of the computing devices schools rely on. This summer, the New York Office of Information Technology Services released its report on the use of biometric identifying technology in schools, and it is likely that the Commissioner of Education will now reverse or significantly modify the moratorium. This will present an opportunity for the city to consider what additional steps should be taken if it resumes use of biometric technology, and it will also likely open the floodgates for new procurement.
Accordingly, this is an important moment to pause and think through the specific use cases of AI and classroom technology more broadly, identify the highest risks to students, and prioritize developing policies that address those risks. When vetting products, we urge schools to consider whether a product will actually advance the ultimate goal of serving students and improving their educational experience, and whether the technology is truly necessary to facilitate that experience.
We urge careful consideration of the privacy and equity concerns associated with adopting AI technologies, as AI systems may have a discriminatory impact on historically marginalized or otherwise vulnerable communities. We have already seen how this can manifest in classrooms. Self-harm monitoring technology, commonly deployed in schools, employs algorithms that scan for and detect key words or phrases across different student platforms. FPF research found that “using self-harm monitoring systems without strong guardrails and privacy-protective policies is likely to disproportionately harm already vulnerable student groups.” Being flagged can needlessly put students in contact with law enforcement and social services, or expose them to school disciplinary consequences. We recommend engaging the school community in conversation before adopting this type of technology.
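To make the false-positive risk concrete, consider a minimal, purely illustrative sketch of the context-free keyword matching described above. The keyword list, function name, and messages here are hypothetical and do not represent any vendor’s actual system:

```python
# Hypothetical illustration only, not any vendor's actual system:
# a naive, context-free keyword flagger of the kind described above.
WATCHLIST = {"hurt myself", "kill", "die", "cutting"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any watchlist phrase,
    with no regard for context, intent, or who is speaking."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCHLIST)

messages = [
    "Essay question: why does Romeo kill himself in Act 5?",  # literature
    "Coach is cutting practice short today.",                 # sports
    "I don't want to hurt myself. Where can I get support?",  # help-seeking
]

for msg in messages:
    print(flag_message(msg), msg)
# All three messages are flagged, yet only the last involves a student who
# needs support, and a flag could route that student toward discipline or
# law enforcement rather than toward care.
```

Even this toy example shows why human review, narrow scoping, and privacy-protective guardrails matter before such systems are applied to real student data.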
It is also critical to note that any new classroom technology typically brings increased collection, storage, and sharing of student data, which is already subject to requirements under laws like FERPA and New York Education Law 2-d. Districts should have a process in place to vet any new technology brought into classrooms, and we urge an emphasis on the proper storage and security of data used in AI systems to protect students against breaches and privacy harms. School districts are already frequent targets of cyberattacks, and it is important to minimize that risk.
Finally, we flag that there are disparities in the accuracy of decisions made by AI systems, and we caution against treating low-accuracy systems as gospel, especially in the context of high-impact decision-making in schools. Decisions based on AI have the potential to shape a student’s education in tangible ways.
We encourage you to consider these recommendations and thank you for allowing us to participate in this important discussion.