Artificial Intelligence and the COVID-19 Pandemic

By Brenda Leong and Dr. Sara Jordan

Machine learning-based technologies are playing a substantial role in the response to the COVID-19 pandemic. Experts are using machine learning to study the virus, test potential treatments, diagnose individuals, analyze the public health impacts, and more. Below, we describe some of the leading efforts and identify data protection and ethical issues related to machine learning and COVID-19, with a particular focus on apps directed to health care professionals that leverage audio-visual data, text analysis, chatbots, and sensors. Based on our analysis, we recommend that AI app developers improve the FAIRness of the data they rely on, code check their apps, validate the models behind existing systems, and improve confidence in the recommendations those systems produce.

 

Contents:

I. Overview

II. Analysis of COVID-19 Apps for Health Practitioners

I. Overview

As reported by the National Institutes of Health, in partnership with several other agencies, at a workshop in July 2019:

“Machine Intelligence (MI) is rapidly becoming an important approach across biomedical discovery, clinical research, medical diagnostics/devices, and precision medicine. Such tools can uncover new possibilities for researchers, physicians, and patients, allowing them to make more informed decisions and achieve better outcomes. When deployed in healthcare settings, these approaches have the potential to enhance efficiency and effectiveness of the health research and care ecosystem, and ultimately improve quality of patient care.”

Now, as the pandemic resulting from the spread of the coronavirus (COVID-19) develops, medical providers, institutions, and commercial developers are all considering whether and how to apply machine learning to confront this crisis.

AI, some of which is based on machine learning, is being incorporated into the first lines of defense in the pandemic. Leading epidemiologists insist that we can only succeed in projecting the spread of the virus, and thus take steps to combat this crisis, if we: 1) know who has the disease; 2) study the data to reliably predict who is likely to get it; and 3) use existing data to inform resource and supply chain planning in the short and long terms. From triage at hospitals, to scanning faces to check temperatures, to tracking the spread using individual data, organizations are deploying machine learning-based algorithms at varying levels of complexity and sophistication.

In general, effective AI can either replicate what humans can do, faster and more consistently (monitor CCTV cameras, detect faces, read CT scans and identify ‘findings’ of pneumonia that radiologists could otherwise also find), or do things that humans cannot (such as rapidly comb through thousands of chemical compounds to identify promising drug candidates). As the disease spreads, medical researchers around the world are rushing to make sense of available data, facing the need to complete reliable analysis on a timeframe that is useful to others. In a recent paper, Artificial Intelligence Distinguishes COVID-19 from Community Acquired Pneumonia on Chest CT, a group of Chinese doctors used data from the first months of the outbreak there to build a model that could provide automatic and accurate detection of COVID-19 using chest CTs. Their goal was to develop a fully automatic framework to detect COVID-19 using only these regular chest scans and to evaluate its performance. The study concluded that a deep learning model can accurately detect COVID-19 and differentiate it from other lung diseases. Others have pushed back against these claims, however, with concerns that the system was overfit to the COVID-19 training data; even so, it remains an impressive feat given the speed and circumstances, and likely a useful tool when applied with more measured expectations.
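
To make the general approach concrete, the sketch below outlines a transfer-learning image classifier of the kind commonly used in such studies. It is illustrative only: the backbone, the three class labels, and the dummy data are our placeholders, not the published model.

```python
# Illustrative sketch only: a transfer-learning image classifier of the general
# kind used in CT-based COVID-19 studies. This is NOT the cited paper's model.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical labels: COVID-19, community-acquired pneumonia, other

# In practice a backbone pretrained on a large image corpus would be loaded;
# here we use random weights so the example runs without downloads.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# A single training step on a dummy batch standing in for preprocessed CT slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy training loss: {loss.item():.3f}")
```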

Researchers from Carnegie Mellon built an early version of COVID Voice Detector, an app that would analyze a user’s voice to detect an infection. Although since put on hiatus, this proposed application demonstrates the variety of “out of the box” ways diagnosis is being approached. The app would assign a score to each voice sample based on its similarity to the voices of those diagnosed with COVID-19. If implemented, it would depend on crowdsourcing, collecting training data via voice samples from both healthy and infected individuals. By analyzing the voice beyond what the human ear can hear, it would identify vocal biomarkers intended to give the healthcare community insight into the symptoms, and hopefully the onset, of COVID-19. The app would use artificial intelligence to analyze the voice and correlate it with COVID-19 symptoms, then trigger an alert describing early symptoms and ways to monitor at home using only a smartphone.
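
As a rough illustration of how such a similarity score might be produced, the sketch below extracts simple acoustic features from audio and trains a basic classifier on labeled samples. It is not the CMU pipeline; the features, model, and data are stand-ins.

```python
# Illustrative sketch only: scoring a voice sample with acoustic features and a
# simple classifier. This is not the COVID Voice Detector's actual pipeline.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def voice_features(waveform, sr=16000):
    """Summarize a waveform as mean MFCCs (a common acoustic feature set)."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder training data: in a real system these would be crowdsourced
# recordings labeled healthy (0) or diagnosed (1).
rng = np.random.default_rng(0)
X_train = np.stack([voice_features(rng.standard_normal(16000)) for _ in range(20)])
y_train = np.array([0] * 10 + [1] * 10)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new sample: estimated probability of resembling "diagnosed" voices.
new_sample = voice_features(rng.standard_normal(16000))
score = clf.predict_proba(new_sample.reshape(1, -1))[0, 1]
print(f"similarity score to diagnosed voices: {score:.2f}")
```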

Machine learning can also help expedite the drug development process, provide insight into which current antivirals might provide benefits, forecast infection rates, and help screen patients faster. The Canadian startup BlueDot first identified the emergence of COVID-19 by flagging an increase in pneumonia cases in Wuhan, using a machine learning-based natural language processing program that monitored global health care reports and news outlets.

Many of these new and expedited applications are possible because of the compilations springing up of datasets and use cases of machine learning applied to the coronavirus. Reviewing these datasets and analyses underscores the importance of involving scientists, such as biologists, chemists, and other appropriate specialists, so that the integration of data is done competently (asking the right questions, designed to solve the actual problems) and so that outcomes do not contribute to the false information circulating in pandemic conversations (gargling hot water, it turns out, is not helpful).

Ethical implications abound as well. This emergency is creating real-life examples of commonly posed challenges to AI systems. Should AI help make life-or-death decisions in the coronavirus fight? Chinese researchers say they have developed an AI tool that can assist doctors in triaging COVID-19 patients. It analyzes blood samples to predict comparative survival rates. But this raises complex questions about whether survivability or treatability should be a deciding factor in triage prioritization, alongside questions about the age of the patient, a doctor’s intuition, and how to design a formula that incorporates and weights several such factors. It is possible that AI can assist in the steps of this process even if it is not the final determinant; for example, by quickly identifying which markers (in blood, say) correlate most strongly with survival rates or seriousness of condition.

Similar ethical and practical considerations arise when asking whether AI can responsibly provide medical assistance at an individual level. What if people ask a digital assistant, or go online to a chatbot from a provider, insurer, or other platform?

Hospitals, public health agencies, and commercial health companies are seeking accessible ways to screen patients for signs of COVID-19, such as online symptom checkers that allow people to screen themselves. The question is whether these AI-based access points can keep healthy people from inundating emergency rooms while still protecting those who need care. There is an important risk/benefit analysis in providing useful care to patients without being overly exclusive or allowing the spread of harmful misinformation. Amazon announced that Alexa can now assist users in determining whether they might have contracted the virus by asking a series of questions related to travel history, symptoms, and possible exposure to COVID-19. Alexa also offers advice to users based on Centers for Disease Control and Prevention (CDC) recommendations. Other features include singing a 20-second song to help time how long people should wash their hands.

The emergence of AI/ML in medicine also creates regulatory challenges, such as which medical AI/ML-based products should be reviewed as medical devices or services, and what evidence should be required to permit marketing of AI/ML-based software as a medical device (SaMD). The U.S. Food and Drug Administration recently released a discussion paper to address some of these issues, and a Nature.com paper responded by arguing that evaluation should focus on assessing whole systems rather than individual ML-based products.

Finally, AR (augmented reality) and VR (virtual reality) technologies are other AI-based systems that aim to provide services for COVID-19 patients and educate others. One example is USA Today’s “Flatten the Curve: A Week in Social Distancing” AR app. The app accesses the device camera and overlays an AR city onto a blank surface. The user moves through the city, encountering situations and choosing between two options, in order to learn how to maximize effective social distancing.

Other AR/VR platforms allow COVID-19 patients to engage in group therapy. XR Health recently announced a VR telehealth support group, virtually bringing together COVID-19-positive patients and medical professionals. The team behind XR Health hopes the VR experience will improve on traditional teleconferencing and increase the therapeutic benefits of interaction, encouraging patients to share personal experiences and emotions.

Political and structural responses:

The White House announced the launch of the COVID-19 High Performance Computing Consortium, with the goal of advancing the pace of scientific discovery by funding research proposals to that end.

Meanwhile, Stanford University is hosting COVID-19 and AI: A Virtual Conference, convening experts to address this public health crisis by advancing understanding of the virus and its impact on society. The agenda covers not just AI applications in diagnostics, treatment, and forecasting the spread of the virus, but also information and disinformation, and the broader impact of pandemics on economies, culture, government, and human behavior. C3.ai, an AI company based in California, recently founded a research consortium, the C3.ai Digital Transformation Institute, bringing together leading academic institutions, Microsoft, and C3.ai with the goal of tackling challenges posed by COVID-19 using AI. Strategies might include tracking the spread of the virus, predicting its evolution, repurposing and developing drugs, and fighting future outbreaks.

As further shared resources, there are numerous tracking resources on AI and COVID-19 on GitHub and on Google’s data science competition platform Kaggle, and the COVID-19 Open Research Dataset (CORD-19), created in collaboration with Microsoft, the Allen Institute for AI, the National Institutes of Health (NIH), and the White House Office of Science and Technology Policy (OSTP), contains news reports, research studies, available data sets, and more.

II. Analysis of COVID-19 Apps for Health Practitioners

Healthcare practitioners, from physicians to radiology technicians, are grappling with the practical difficulties of working in the high-stress, resource-constrained environment brought about by the COVID-19 pandemic. Calls by practitioners and concerned politicians focus on the need for low-tech solutions (e.g., face masks), conventional technologies (e.g., ventilators), and high-tech tools (e.g., AI-enabled rapid triage) to help these workers protect themselves and serve their patients. A range of existing high-tech tools, specifically those using artificial intelligence, are already part of the landscape available to practitioners. What are some of those AI tools? And what forms of artificial intelligence power them?

We review below some of the apps and tools available to healthcare practitioners, some of which were already deployed prior to the pandemic but are now described as having new capabilities based upon COVID-19 data use.

Voice Data

Suki is an “AI-powered voice assistant” used by physicians to record and auto-complete clinical notes, whether for patients suspected of COVID-19 disease or for ordinary clinic visits. Suki is described as powered by AI and machine learning, specifically natural language processing, which enables the system to “understand the context of the doctor’s practice and learn the doctor’s preferences. Suki determines intent and accurately selects from similar terms”. Because Suki data is highly sensitive, being derived from clinical interactions and health records, the data is described as “encrypted in-transit and at-rest with modern ciphers and maximum strength cryptography. Real time analysis is conducted to detect anomalies or suspicious software behavior, to protect against breaches”. Based upon information available on its website, Suki “is currently free to all Urgent Care, Hospitalists, Critical Care, pop up & triage clinics and locum physician assignments until May 31”.

Kara, a product for iPhones produced by Saykara, is another physician voice-enabled assistant that has recently been augmented with COVID-19-specific uses and availability. Described by some as “Alexa for doctors”, this voice-to-text app automates the process of updating medical records in real time, interfacing with multiple charting systems (e.g., EPIC). This “ambient” system “listens, interpreting conversations with patients, so you (physician) can enter a room, treat the patient and be done charting”. Within the context of the COVID-19 pandemic, Kara has recently been described as “test-piloting the solution” specifically designed to accommodate the charting of remote patient encounters (e.g., telehealth). Improving charting during telemedicine encounters may improve the quality and granularity of health data available for both novel and routine medicine. Kara is also available for limited free use by contacting the company.

EPIC, the electronic health records giant, has a similar voice-enabled virtual assistant, with new capabilities allowing for monitoring of COVID-19 patients specifically. EPIC has notably partnered with app developers to create symptom apps and to share its EHR data with a select group of organizations striving to improve AI and other data-driven COVID-19 responses.

Other Audio Data

Eko is an “AI powered stethoscope”. Eko’s cardiac products use deep neural networks to differentiate between normal and abnormal sounds produced by blood flow through the heart. Likewise, neural networks built upon extensive databases of labeled electrocardiogram (ECG, aka EKG) data detect abnormal heart rhythms. The otherwise conventional tool of a stethoscope has been embedded with learning systems that ingest and analyze heart and lung sounds to ensure effective monitoring of cardiopulmonary function in patients via telemedicine. On the front lines, Eko offers practitioners directly treating patients a suite of products that allow for “wireless auscultation” of the heart and/or lungs, giving practitioners wearing significant amounts of protective equipment the ability to listen to their patients at a distance.

Building audio-based AI tools is also drawing in startups, such as Cough for the Cure, which is developing tools to score an individual’s likelihood of having COVID-19 based upon the sound of their cough. A similar tool is being developed by Coughvid. If developed, such a tool might help practitioners more accurately triage patients who present with a cough as a symptom.

Video

Whether the use of thermal-scanning face cameras counts as a use of video data could be debated. The Care.ai suite of “autonomous monitoring sensors for healthcare” uses computer vision tools, including facial recognition (and emotion and intention detection), to support an “always on” platform for monitoring patients’ status, practitioner-patient engagement, behaviors and events pertinent to regulatory compliance, and building administrative data records. This suite of sensor tools is now leveraging thermal scanning capability to “look for fevers, sweating, and discoloration”. It is not obvious, however, which specific AI tools are used to interpret the thermal imaging, or how this does or does not integrate with the neural-network-driven data that is a normal part of the Care.ai suite.

Image

The initial discussion of the power of AI for COVID-19 diagnostics arose from the powerful uses of AI to analyze radiological data in China. Deep learning techniques were used to analyze x-rays, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) scans to identify lesions or speed image interpretation time. English-language reports of similar efforts to develop neural network techniques, such as convolutional neural networks, for image recognition are appearing with increasing frequency in venues such as Radiology.

Development of deep learning to improve speed and accuracy in the interpretation of diagnostic imaging, such as chest x-rays for patients with suspected pneumonia, is accelerating through innovations by companies such as behold.ai. Behold.ai used deep learning to develop its “red dot” algorithm, which creates heatmaps identifying areas of concern for superimposition onto chest x-rays. Behold.ai posits that its “red dot algorithm trained on over 30,000 CXRs with detailed annotations from certified radiologists” catalyzes interpretation, comprehension, and action based upon images.
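
As an illustration of how such heatmaps are commonly generated, the sketch below computes a class-activation-map style overlay from a convolutional network’s final feature maps. This is a generic technique, not behold.ai’s proprietary algorithm; the model weights and input image are placeholders.

```python
# Illustrative sketch only: a class-activation-map (CAM) style heatmap that
# could be overlaid on a chest x-ray. Not behold.ai's "red dot" algorithm.
import torch
import torch.nn.functional as F
from torchvision import models

# In practice this would be a model fine-tuned on annotated x-rays; random
# weights are used here so the example runs without downloads.
model = models.resnet18().eval()

image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed x-ray

with torch.no_grad():
    # Feature maps from the last convolutional block (before global pooling).
    feats = torch.nn.Sequential(*list(model.children())[:-2])(image)  # (1, 512, 7, 7)
    cls = model(image).argmax(dim=1).item()

    # Weight each feature map by the classifier weights for the predicted class,
    # then sum to get a coarse spatial map of "evidence" for that class.
    weights = model.fc.weight[cls]                         # (512,)
    cam = (weights[:, None, None] * feats[0]).sum(dim=0)   # (7, 7)
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)

    # Upsample to image resolution so it can be drawn over the x-ray as a heatmap.
    heatmap = F.interpolate(cam[None, None], size=(224, 224),
                            mode="bilinear", align_corners=False)[0, 0]

print(heatmap.shape)  # torch.Size([224, 224])
```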

BioMind AI, already identified as using deep learning for classification of lesions in the brain, uses neural network models to perform image segmentation, reconstruction of images, and automated reporting of recommendations based on interpretation of images.

Text

While deep learning for images helps speed diagnostics on the basis of imaging, laboratory tests remain a significant component of COVID-19 diagnostics. Surgisphere, developer of the QuartzClinical healthcare data analytics platform, has developed a “decision support tool” using a “machine learning model” that uses “three common laboratory tests to identify patients likely to have coronavirus infection”. The tool leverages increased data-sharing collaboration between healthcare systems to grow the sample size of COVID-19 patients.
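
To illustrate the general shape of a model like this, the sketch below fits a simple classifier to three lab values. The feature names, data, and model choice are hypothetical; this is not Surgisphere’s tool.

```python
# Illustrative sketch only: a classifier over three routine lab values, showing
# the general form of a lab-based decision-support model. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["lymphocyte_count", "crp", "ldh"]   # hypothetical lab tests

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                   # placeholder lab panels
y = rng.integers(0, 2, size=200)                # placeholder labels (infected or not)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# Estimated probability that a new patient's labs resemble those of confirmed cases.
new_patient = np.array([[1.2, -0.3, 0.8]])
prob = model.predict_proba(new_patient)[0, 1]
print(dict(zip(FEATURES, new_patient[0])), "->", round(prob, 2))
```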

JVion is a clinical AI platform built on the concept of modeling individual patients’ proximity to known risks, which are approximated with “The Eigen Sphere engine”, or “an n-dimensional space upon which millions of patients are mapped against tens-of-thousands of Eigen Spheres. Each Eigen Sphere comprises patients who clinically and/or behaviorally demonstrate similarities”. The JVion COVID Community Vulnerability Map uses multiple forms of data, including de-identified patient records, Census information, population statistics, and socioeconomic data (e.g., access to employment), to create a community-level view for “identification of the populations at risk”. Unlike other AI tools that use neural networks or are built for diagnosis and treatment of individual patients, JVion’s suite is built to reduce patient and community risks using mathematical modeling incorporated into the background of other predictive modeling.
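
As a generic illustration of patient-similarity mapping, the sketch below places patients in a feature space and retrieves a new patient’s nearest neighbors. It is not JVion’s Eigen Sphere engine; the features and data are placeholders.

```python
# Illustrative sketch only: mapping patients into a feature space and finding
# clinically/behaviorally similar neighbors. A generic nearest-neighbor example.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
# Placeholder patient vectors: clinical, behavioral, and socioeconomic features.
patients = rng.normal(size=(1000, 12))

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(patients)

# For a new patient, retrieve the most similar existing patients; their known
# outcomes could then inform an individual- or community-level risk estimate.
new_patient = rng.normal(size=(1, 12))
distances, neighbor_ids = index.kneighbors(new_patient)
print("nearest neighbors:", neighbor_ids[0], "distances:", np.round(distances[0], 2))
```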

Using similar mapping technology built upon GIS data from multiple sources, such as Esri, HERE, Garmin, and USGS, together with county-level data, Definitive Healthcare built a mapping tool to identify the number of licensed and staffed hospital beds available. This healthcare data analytics company does not claim to use AI tools, but it incorporates many of the data sources already used by others who do make explicit claims about their uses of AI. Qventus provides similar bed capacity mapping resources to track available hospital infrastructure capacity, and also offers an analytics dashboard to assist in COVID-19 planning.

ChatBots

Microsoft Azure is the backbone of the new CDC COVID-19 chatbot, Clara. Using the customizability of Microsoft’s healthcare bot service, the CDC built this widely available chatbot for individuals to use when deciding whether to pursue additional healthcare services for diagnosis or treatment of COVID-19. Other health systems, such as Providence, are using Microsoft’s tools to build chatbots that help individuals understand their own risk and, if needed, connect them to providers. Whether powered by Azure or other platforms, the quality of COVID-19 chatbots is reported to be uneven, possibly due to the fast pace of the data streams used to train them.
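
To show the basic shape of such a triage flow, the sketch below implements a toy rule-based symptom checker. The questions, ordering, and guidance text are hypothetical and are not the logic of Clara or any other deployed bot.

```python
# Illustrative sketch only: a minimal rule-based triage flow of the kind a
# symptom-checker chatbot walks users through. Questions and guidance are made up.
QUESTIONS = [
    ("Do you have severe difficulty breathing?", "emergency"),
    ("Do you have a fever or a new cough?", "contact_provider"),
    ("Have you been in close contact with a confirmed case?", "self_monitor"),
]

GUIDANCE = {
    "emergency": "Seek emergency care immediately.",
    "contact_provider": "Contact your healthcare provider for guidance on testing.",
    "self_monitor": "Stay home, monitor symptoms, and follow public health advice.",
    "no_action": "No specific action indicated; follow general prevention guidance.",
}

def triage(answers):
    """Return guidance for the first question answered 'yes', in priority order."""
    for (question, outcome), answer in zip(QUESTIONS, answers):
        if answer:
            return GUIDANCE[outcome]
    return GUIDANCE["no_action"]

# Example: user reports no breathing trouble, but a fever or cough.
print(triage([False, True, False]))
```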

Another conversation-engine-based application, developed by Curai, uses text data to help patients understand and explain their symptoms, and to help physicians understand patients. Using NLP, deep learning, and knowledge-base tools, Curai helps patients and practitioners interact in both telemedicine and direct contact environments.

Sensors

Biofourmis, known from early discussions of COVID-19 monitoring in Hong Kong, re-tooled its Biovitals Sentinel platform and its Everion biosensor to help monitor patients under home quarantine. This suite of sensors, “including optical, temperature, electrodermal, accelerometer and barometer”, provides the major components of the Biovitals Sentinel dashboard platform.

Ouraring is a biosensor being used in a limited study to track healthcare workers’ biometric data. In the ongoing study, Ouraring users respond to symptom surveys to determine whether biometric data can help “identify patterns that could predict onset, progression, and recovery in future cases of COVID-19.”

While not focused on healthcare workers specifically, Scripps Research is conducting a study to determine whether any of the many wearable devices that monitor health data, such as heart rate, can be used to predict or monitor COVID-19 infections.

What should AI app developers do to respond to the COVID-19 pandemic?

Responding to the needs of healthcare practitioners during the COVID-19 pandemic is undeniably a whole-community effort.  What can individuals who are working in the AI space do to help healthcare practitioners? What AI tools can others, such as the manufacturing community, use to help healthcare workers now?

Responding to calls from policy experts and even the White House, data scientists, machine learning experts, and artificial intelligence experts are gathering as a community to derive new insights for guiding drug development, diagnostic apps, contact tracing, information production and tracking, and more. The COVID-19 pandemic is also prompting AI startups to pivot toward building products that meet patient and practitioner needs. Engaging with Kaggle and other competitions, such as drug discovery challenges, and working with epidemiologists, physicians, and other relevant domain experts are the most obvious ways to help those on the sharp end of the pandemic.

However, there are more “ordinary” things that AI/ML experts can do right now while waiting for optimal partnership opportunities.  In brief, these are:

  1. Improve FAIRness of the data
  2. Code check the apps
  3. Validate the models of existing systems
  4. Improve confidence in recommendations

AI/ML and other data experts know well that the quality of any system built is predicated on the quality of the data. In the context of COVID-19, where data in general are relatively limited and there are only a few trusted repositories, such as the CDC Collection, C3.ai’s data lake, WHO’s research database, CORD-19, Go.Data, the SAS GitHub repository, or the Functional Genomics Platform, finding the material to build systems can be a serious challenge. While synthetic data may be useful in this space, more baseline efforts to improve data should be revisited. As data experts and others, such as the National Academies, pointed out repeatedly in 2018 and 2019, the lack of quality, interoperable, FAIR, and ethically reusable data holds back the performance of AI systems in health. Improving the quality of the metadata attached to COVID-relevant data sets is the task of organizations such as GO FAIR’s VODAN or CEDAR. Interfacing with these specific initiatives is one way to help, but improving the FAIRness of data sources generally, the utility of which is not yet known, is also an area in which data experts can contribute; a simplified example of machine-readable dataset metadata appears below.
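
As a simplified illustration of what more FAIR metadata can look like, the sketch below builds a minimal schema.org-style dataset description. The fields shown are a small subset chosen for illustration, not the VODAN or CEDAR templates, and all identifiers and URLs are placeholders.

```python
# Illustrative sketch only: minimal machine-readable, schema.org-style dataset
# metadata. Field values are placeholders, not a real dataset record.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example COVID-19 case counts (hypothetical)",
    "identifier": "https://doi.org/10.xxxx/example",            # persistent identifier (Findable)
    "license": "https://creativecommons.org/licenses/by/4.0/",  # clear reuse terms (Reusable)
    "creator": {"@type": "Organization", "name": "Example Health Agency"},
    "dateModified": "2020-04-01",
    "variableMeasured": ["date", "region", "confirmed_cases"],  # documented variables (Interoperable)
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/cases.csv",     # retrievable location (Accessible)
    },
}

print(json.dumps(dataset_metadata, indent=2))
```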

The rush to build applications for COVID-19 response and preparedness may increase the number of products that are beautiful but ultimately not useful. Some performance problems may stem from developers skipping the quotidian tasks of code checking in order to launch their applications quickly. Detecting those performance problems will require both openness of the code used to power the systems and open use of human and machine code analysis tools to find and debug programs. Of interest to those curious to help evaluate the utility of some of the AI products described above: we found no obvious pointers to code (e.g., on GitHub) or to supporting AI/ML research (e.g., via PubMed) for these products, with Curai being an exception.

Model validation is an ongoing task for tracking the performance of any learning system. Validating any model is difficult, but validating models with small amounts of training or testing data of varying quality, changing numbers of relevant parameters, and shifting performance expectations is especially challenging. Validating the usefulness of a model’s output for its end users is another important validation task. A minimal example of one common technique for small datasets, repeated cross-validation, follows.
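
The sketch below uses placeholder data and a generic model to show the idea of repeated, stratified cross-validation; it is not tied to any of the systems described above.

```python
# Illustrative sketch only: repeated stratified cross-validation, one common way
# to get a more honest performance estimate when labeled data are scarce.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 6))          # small, placeholder dataset
y = rng.integers(0, 2, size=80)       # placeholder binary labels

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring="roc_auc", cv=cv)

# Report the spread, not just the mean: with little data the variance matters.
print(f"AUC {scores.mean():.2f} +/- {scores.std():.2f} over {len(scores)} folds")
```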

Across the globe, individuals and groups are searching for actionable recommendations. One way AI/ML experts are helping researchers improve confidence in their hypotheses is by participating in Kaggle competitions that use NLP to build literature reviews for research development. Specific to resources for front-line practitioners, the degree of confidence that a practitioner should have in a recommendation produced by a learning system emerges through use in settings where recommendations lead to positive outcomes. However, aggregating the success rate of a particular app to understand how wide a confidence interval should be attached to a recommendation statement is an ongoing challenge; a simple sketch of one such calculation appears below.
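
As a simple sketch of such a calculation, the code below computes a Wilson score interval for an app’s observed success rate; the counts are made up.

```python
# Illustrative sketch only: a Wilson score interval for an app's observed
# success rate, one simple way to attach a confidence statement to a
# recommendation. The counts below are hypothetical.
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical: 42 of 50 recommendations led to the intended positive outcome.
low, high = wilson_interval(42, 50)
print(f"observed rate 0.84, 95% CI ({low:.2f}, {high:.2f})")
```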