FPF Health and AI & Ethics Policy Counsels Present a Scientific Position at ICML 2020 and at 2020 CCSQ World Usability Day
On November 12, 2020, FPF Policy Counsels Drs. Rachele Hendricks-Sturrup and Sara Jordan presented privacy-by-design and human-centered design concepts during the 2020 CCSQ World Usability Day virtual conference. The presentation built on Drs. Hendricks-Sturrup and Jordan’s July 2020 scientific position paper, presented at the International Conference on Machine Learning (ICML) 2020, entitled “Patient-Reported Outcomes: A Privacy-Centric and Federated Approach to Machine Learning.”
Drs. Hendricks-Sturrup and Jordan gave their position that patient-reported outcomes (PROs) data requires special privacy and security considerations, owing to the nature of the data, when used within on-device or federated machine learning constructs as well as in the development of artificial intelligence platforms. As a raw form of patient expression and feedback, PROs help clinicians, researchers, medical device and drug manufacturers, and the governmental stakeholders who oversee medical device and drug development, distribution, and safety to monitor, understand, and document, in a readable format, patients’ symptoms, preferences, complaints, and experiences following a clinical intervention. Gathering and using such data requires careful attention to security, data architecture, data use, and machine-readable consent tools and privacy policies.
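To make the idea of a machine-readable consent tool concrete, a minimal sketch follows. It is a hypothetical illustration only: the field names and vocabulary are assumptions for this example, not a standard referenced in the position paper.

```python
import json

# Hypothetical machine-readable consent record for a PRO survey app.
# All field names and values here are illustrative assumptions.
consent_record = {
    "patient_pseudonym": "p-4821",  # pseudonymous ID; no direct identifiers
    "data_categories": ["symptoms", "medication_experience"],
    "permitted_uses": ["post_market_safety_surveillance"],
    "prohibited_uses": ["marketing", "re_identification"],
    "expires": "2021-12-31",
    "revocable": True,
}

def use_permitted(record: dict, purpose: str) -> bool:
    """A collection pipeline can check consent programmatically before any use."""
    return purpose in record["permitted_uses"]

print(json.dumps(consent_record, indent=2))
print(use_permitted(consent_record, "post_market_safety_surveillance"))  # True
```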
Even so, on-device patient-reported outcome measurement tools, like patient surveys within third-party mobile apps that use machine learning, may employ the best machine-readable privacy policies or consent mechanisms, yet ultimately leave key components of privacy protection up to the patient-user. Keeping data in the hands of users opens those users up to unanticipated vectors of attack from adversaries striving to extract the valuable machine learning models themselves or seeking to uncover data about a specific patient.
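The second threat, inferring whether a specific patient’s record was used to train a model, can be illustrated with a toy membership-inference test. This sketch is an assumption for illustration, not an analysis from the paper: it assumes an adversary with query access to an overfit classifier and uses the model’s confidence on a candidate record as a membership signal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy setting: a model deliberately overfit to 50 random "patient" records.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 5))       # stand-in for PRO feature vectors
y_train = rng.integers(0, 2, size=50)    # random labels force memorization
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def max_confidence(record):
    # Overfit models tend to be far more confident on records they trained on.
    return model.predict_proba([record])[0].max()

member = X_train[0]              # a record the model was trained on
non_member = rng.normal(size=5)  # a fresh, unseen record
print(max_confidence(member), max_confidence(non_member))
# Typically prints high confidence for the member, near 0.5 for the non-member.
```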
Drs. Hendricks-Sturrup and Jordan recommended that developers of patient-reported outcome measurement systems leveraging federated learning architectures:
- Intentionally design user-device security, as well as security for the transmission of data (raw or processed) or model gradients, to the highest level of protection that does not degrade essential performance for critical health and safety monitoring procedures (e.g., remote monitoring for clinical trials, post-market drug safety surveillance, and hospital performance scores);
- Ensure that models are not compromised and that valuable machine learning investments are not lost to competitors;
- Design systems to operate atop a federated machine learning architecture, both when model components are sent and when gradients are received, to protect the privacy of users’ data; and
- Design learning algorithms with algorithmic privacy techniques, such as differential privacy, which are essential to securing valuable and sensitive PRO data (a minimal illustration follows this list).
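To show how the last two recommendations fit together, here is a minimal sketch of differentially private federated averaging in Python/NumPy. It is not the authors’ implementation; the function names and the clipping and noise parameters are illustrative assumptions. Each device’s update is clipped to a bounded L2 norm, and calibrated Gaussian noise is added to the aggregate, so the server never observes an individual patient’s raw contribution.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
CLIP_NORM = 1.0  # maximum L2 norm allowed for any client update
NOISE_STD = 0.5  # noise multiplier for the Gaussian mechanism

def clip_update(update, clip_norm=CLIP_NORM):
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_federated_average(client_updates, noise_std=NOISE_STD):
    """Aggregate clipped client updates and add calibrated Gaussian noise.

    Each device computes its update locally on raw PRO data; only the
    clipped update leaves the device, and the server sees just the noisy
    average, never an individual patient's contribution.
    """
    clipped = [clip_update(u) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_std * CLIP_NORM / len(client_updates), size=avg.shape
    )
    return avg + noise

# Example: three simulated device updates for a 4-parameter model.
updates = [np.random.randn(4) for _ in range(3)]
print(dp_federated_average(updates))
```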
Drs. Hendricks-Sturrup and Jordan’s paper, poster, and presentation can be found at these links:
- CCSQ World Usability Day recording
- CCSQ World Usability Day slides
To learn more about FPF’s Health & Genetics and AI & Ethics initiatives, contact Drs. Hendricks-Sturrup and Jordan, respectively, at: [email protected] and [email protected].