
Professional Development and Education Advances
DOI: 10.1200/EDBK_350652. American Society of Clinical Oncology Educational Book. Published online before print June 10, 2022.
PMID: 35687826
Artificial Intelligence in Oncology: Current Capabilities, Future Opportunities, and Ethical Considerations
The promise of highly personalized oncology care using artificial intelligence (AI) technologies has been forecasted since the emergence of the field. Cumulative advances across the science are bringing this promise to realization, including refinement of machine learning and deep learning algorithms; expansion in the depth and variety of databases, including multiomics; and the decreased cost of massively parallelized computational power. Examples of successful clinical applications of AI can be found throughout the cancer continuum and in multidisciplinary practice, with computer vision–assisted image analysis in particular having several U.S. Food and Drug Administration–approved uses. Techniques with emerging clinical utility include whole blood multicancer detection from deep sequencing, virtual biopsies, natural language processing to infer health trajectories from medical notes, and advanced clinical decision support systems that combine genomics and clinomics. Substantial issues have delayed broad adoption, with data transparency and interpretability suffering from AI's "black box" mechanism, and intrinsic bias against underrepresented persons limiting the reproducibility of AI models and perpetuating health care disparities. Midfuture projections of AI maturation involve increasing a model's complexity by using multimodal data elements to better approximate an organic system. Far-future positing includes living databases that accumulate all aspects of a person's health into discrete data elements; this will fuel highly convoluted modeling that can tailor treatment selection, dose determination, surveillance modality and schedule, and more. The field of AI has had a historical dichotomy between its proponents and detractors. The successful development of recent applications, and continued investment in prospective validation that defines their impact on multilevel outcomes, has established a momentum of accelerated progress.
Artificial intelligence in oncology is no longer hypothetical, and its U.S. Food and Drug Administration–approved use is expanding in several clinical scenarios, most prominently involving cancer diagnostics and computer vision.
Applications are in various stages of development across the cancer continuum and in multidisciplinary practice, and some algorithms and advanced clinical decision support systems are demonstrating capabilities that are equivalent to or that surpass expert intervention.
There are unique ethical and legal considerations associated with artificial intelligence models that limit their broad application and reproducibility, including their inherent bias when trained with data sets that disproportionately exclude underrepresented persons.
Barriers to widespread adoption of artificial intelligence involve both ideologic and workflow concerns, as well as limited prospective validation studies; however, modernization of health care is gradually reducing the obstacles to its responsible use.
The future of precision oncology, in which living databases of multimodal datatypes are recursively used to improve clinical models, may yield unprecedented patient outcomes.
Using artificial intelligence (AI) to overcome the complexities of medicine has long been heralded as a near-future and disruptive eventuality. It has a long history that dates back to the 1970s, when clinical decision support systems (CDSS) required humans to provide rules for decision-tree techniques and to manually select attributes for inclusion in these expert systems.1 Schwartz et al2 opined in the New England Journal of Medicine that, "After hearing for several decades that computers will soon be able to assist with difficult diagnoses, the practicing physician may well wonder why the revolution has not occurred," which is all the more poignant given that the remark was published in 1987. Medicine faced several decades of an "AI winter," in which adoption of the technology rapidly declined as its achievable expectations were found to be incongruent with reality.3 However, the current momentum of practice-changing AI implementation argues for optimism,4 enabled by the advent of deep learning (DL); expansion of machine-accessible, digital health care data (electronic health record [EHR], medical images, and medical literature); and enhanced cloud computing and processing power.5,6
The broad field of computer science in which machines or algorithms are programmed to simulate human intelligence is encompassed by the term AI.7 Machine learning (ML) is a branch of AI in which computers perform defined tasks and apply statistical methods to detect hidden patterns in the data and to improve model performance.8 The ML subfield of DL, unlike classic ML, does not require human-defined heuristics to find a solution for a task.9 Rather, DL operates by the power of multilayered neural networks, thereby enabling self-discovery of features unknown or unanticipated by humans and eliminating manual human effort for feature extraction.10 Convolutional neural networks (CNNs), a type of DL, along with tremendously growing computing power have led to accelerated development of AI-based applications, particularly in medical imaging.11
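To make the layered structure described above concrete, the sketch below defines a toy convolutional network for binary image classification. PyTorch is used only as an illustrative framework, and the layer sizes and input dimensions are arbitrary assumptions rather than a validated clinical model.

```python
# Minimal sketch of a convolutional neural network for binary image
# classification (e.g., lesion vs. no lesion). Layer sizes are arbitrary
# assumptions for illustration, not a validated model.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers learn image features without hand-crafted heuristics.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected head maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
dummy_batch = torch.randn(4, 1, 64, 64)   # four 64x64 grayscale patches
logits = model(dummy_batch)               # shape: (4, 2)
```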
Natural language processing (NLP) is an adjacent specialty within AI that attempts to interface human language with machine interpretation; it is used to transform unstructured data, such as EHR clinical notes and diagnostic or procedural reports, into discrete data elements.12 Recent advances in language modeling have substantially increased the efficacy of the technology, which can be used to automate collection and documentation of date of diagnosis, progression-free survival, and other cancer-related tumor attributes and patient outcomes.13 Such automation could support complex database and tumor registry development, which recursively increases the power of derived models. Alone or combined with ML/DL techniques, NLP has been used for clinical trial matching and for identifying potential adverse drug reactions.14,15
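As a conceptual illustration of converting note text into a discrete data element, the snippet below pulls a date of diagnosis out of a free-text sentence with a simple pattern rule. Real clinical NLP pipelines rely on trained language models rather than hand-written patterns; the note text, pattern, and function name here are invented for illustration.

```python
# Simplified illustration of turning unstructured note text into a discrete
# data element (date of diagnosis). Production NLP systems use trained
# models; this rule-based pattern is only a conceptual sketch.
import re
from datetime import datetime

NOTE = "Patient diagnosed with stage II colon adenocarcinoma on 03/14/2021."

def extract_diagnosis_date(note: str):
    match = re.search(r"diagnosed with .*? on (\d{2}/\d{2}/\d{4})", note)
    if match:
        return datetime.strptime(match.group(1), "%m/%d/%Y").date()
    return None

print(extract_diagnosis_date(NOTE))  # 2021-03-14
```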
Beyond these definitions, where and how the science is applied are equally important, as "AI in medicine" may describe use cases familiar to the practicing oncologist, such as basic science drug discovery,16 translational gene expression and multiomic tumor profiling,17 and advanced CDSS.18 It may also encompass active development of patient-facing chatbots,19 augmented reality surgical visualization tools,20 surgical robotics,21 and hospital scheduling systems.22 Natural language processing and ML algorithms are increasingly used to automate CDSS, and their EHR integration is expected to enhance their adoption into clinical workflows.23 Given the broad potential application of AI in oncology, this review will focus on the technologies and algorithms that directly support the care of patients with cancer.
Oncology relies heavily on evidence-based medicine scoring systems for cancer risk assessment and disease diagnosis, prognostic staging, treatment, and surveillance monitoring. These systems frequently originated from simple observations using light microscopy and grew in efficacy with the introduction of more advanced testing, such as gene expression assays and next-generation sequencing of somatic and germline genomes. The outcome of this modernization is an ever-expanding list of prognostic and predictive factors relevant for a particular disease, as evidenced by the increasing prevalence of genomic-informed clinical models.24–26 However, each additional predictor elevates a model's complexity and quickly creates a web of interactions among emerging and established disease factors that is beyond comprehension using traditional approaches, although it increases the potential resolving power of its associated modeling algorithms.27 To advance precision oncology and provide an accurate interpretation of an individual's cancer status, it behooves investigators and clinicians alike to use all information available that will allow the complexity of the computational model to approach the complexity of the biologic system. The only feasible means of synthesizing the magnitude and interdependence of such multimodal data is with AI, leveraging high-performance computing and groundbreaking DL techniques.8,28
An emerging cancer screening strategy is the development of whole blood pan-cancer detection from deep sequencing.29 Whole blood is attractive for analysis given its ready accessibility and the fact that all cells in the body, either directly or indirectly, have access to the circulatory system. Substantial progress has been made in identifying circulating cell-free tumor DNA for cancer prognostication, which subsequently has led to its evaluation for cancer screening and detection, as well as for cancer-recurrence surveillance.30,31 A pan-cancer screening test is appealing, as the combined prevalence of less common cancers may enable a cost-effective method to facilitate their earlier detection.32
The ML/DL algorithms can overcome the limitations of standard computational methods by learning patterns from the whole transcriptome.33 For example, an ML method using whole-transcriptome RNA sequencing data and incorporating multiple tumor profiles has been found to accurately identify a cancerous state and discriminate it from normal cells; it performed well for rare cancer types and demonstrated utility in predicting the tumor site of origin.34 Similarly, neural networks have been applied to transcriptomic data to classify molecular subtypes of various tumors.35
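The sketch below shows the general shape of such a transcriptome-based classifier: an expression matrix of samples by genes feeding a standard supervised learner with cross-validation. The data are simulated, with NumPy and scikit-learn as assumed tooling; a real model would be trained and validated on curated RNA-sequencing profiles.

```python
# Conceptual sketch of a transcriptome-based tumor/normal classifier.
# The expression matrix and labels are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 5000
X = rng.normal(size=(n_samples, n_genes))   # expression values (simulated)
y = rng.integers(0, 2, size=n_samples)      # 0 = normal, 1 = tumor (simulated)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")
```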
The application of AI to analyze large-volume multiomics data (exome, transcriptome, and epigenome) combined with clinically annotated data sets has furthermore led to the identification of drug-susceptibility genes, variant detection, new cancer biology insights, and prediction of RNA splice sites.36–39
In the field of oncologic radiographic imaging, AI is being used for detection and diagnosis. Computer-aided detection has been used historically for breast cancer imaging, but it did not demonstrate high clinical value.40 Hence, breast cancer imaging has been a prime target for AI-based cancer detection. AI-based models are now routinely a part of breast imaging and are being used clinically in many practices, and there are at least five U.S. Food and Drug Administration–approved breast-imaging detection and diagnosis algorithms.41 Beyond breast cancer, AI-based detection and diagnosis are being used for various types of tumors. For example, in prostate cancer, multiparametric MRI is known to increase detection of clinically relevant malignancy, but challenges such as interobserver variability remain.42,43 Detection based on AI has the potential to overcome these challenges through ML algorithms, and there are commercially available algorithms for prostate segmentation, lesion detection, and workflow integration.44
Imaging algorithms based on AI are also being used in clinical practice to identify and track potentially cancerous lesions and to guide management. For example, a software program approved by the U.S. Food and Drug Administration currently allows comprehensive detection and tracking of pulmonary nodules, prediction of lung malignancy among detected lesions on low-dose CT images,45 and incorporation of management guidelines.46,47 Deep neural networks have also been developed to detect enlarged lymph nodes or colonic polyps in CT images and to enhance colon polyp detection during colonoscopy with a real-time DL computer-aided detection system approved by the U.S. Food and Drug Administration (GI Genius, Medtronic).48,49 Augmented interpretation of endoscopic images using AI has also been shown to consistently improve accuracy in the detection of esophageal cancer.50
Imaging models based on AI are also being used for tumor characterization. Characterization may include anatomic segmentation of tumors, which allows software to identify the borders of diseased tissue among the normal anatomy, and tumor subtype classification, which leverages clues in signal intensity, texture, shape, and other descriptors to make a diagnosis. Segmentation, either 2D or volumetric, may be used in clinical practice for treatment decisions such as radiation planning; however, there is interobserver variability in manual tumor segmentation. The AI-based algorithms have the potential to overcome these biases.51 One example is a U.S. Food and Drug Administration–approved product that detects brain metastases and performs segmentation for stereotactic radiosurgery.52 Annotation of data from exams, such as CT imaging scans, can create volumetric units, or voxels, which are akin to 3D pixels. These data extrapolations can lead to computer vision–based insights that are not appreciable to the naked eye. Radiomic analysis involves automated extraction of clinically relevant information from radiologic images, and it can be used to develop radiomic biomarkers through biologic validation of radiomic signatures using genetic, histologic, and other forms of correlative data.53
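As a rough illustration of how voxel-level annotations support radiomic-style feature extraction, the snippet below computes tumor volume and simple intensity statistics from a simulated scan and segmentation mask. The arrays and voxel spacing are invented placeholders; dedicated radiomics packages compute far richer, standardized feature sets.

```python
# Minimal sketch of extracting simple radiomic-style features from a
# volumetric scan and its segmentation mask. All values are simulated.
import numpy as np

rng = np.random.default_rng(1)
scan = rng.normal(size=(64, 64, 64))        # CT-like intensity volume
mask = np.zeros_like(scan, dtype=bool)
mask[20:30, 20:30, 20:30] = True            # segmented tumor voxels

voxel_volume_mm3 = 1.0 * 1.0 * 2.5          # assumed voxel spacing
tumor_voxels = scan[mask]
features = {
    "volume_mm3": mask.sum() * voxel_volume_mm3,
    "mean_intensity": tumor_voxels.mean(),
    "intensity_std": tumor_voxels.std(),
}
print(features)
```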
Imaging-based ML models can also allow for prediction of future outcomes for patients with cancer, such as locoregional recurrence, distant recurrence, and mortality. For example, multiple emerging imaging-based ML models are predictive of clinical outcomes for patients with pancreatic cancer, such as overall survival and disease-free survival.54 In the future, this information may drive individualized care of cancer survivors, including surveillance and optimized strategies to prevent recurrence. Radiomic analysis and evolving imaging-based ML models have demonstrated the potential to predict tumor pathology and genomic alterations. This may enable diagnosis and biomarker information without actual sampling, leading to what is called “virtual biopsy.”55 In glioblastoma, for example, noninvasive imaging-based models are being developed that can predict genetic alterations within the tumor and impact clinical management.56 Although more work is needed to standardize imaging data and create reproducible ML models, it is clear that cancer care will be substantially impacted in the future by these models that extend beyond the realm of diagnosis.
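A hedged sketch of an imaging-feature outcome model appears below, fitting a Cox proportional hazards model on simulated radiomic covariates and follow-up data, with the lifelines package as assumed tooling. Published models combine many more radiomic and clinical variables and require prospective validation.

```python
# Sketch of an outcome model on imaging-derived features using a Cox
# proportional hazards fit. Features and outcomes are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "tumor_volume": rng.normal(30, 10, 300),    # imaging-derived feature
    "texture_entropy": rng.normal(5, 1, 300),   # imaging-derived feature
    "time_months": rng.exponential(24, 300),    # follow-up time
    "recurrence": rng.integers(0, 2, 300),      # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="recurrence")
cph.print_summary()
```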
With the accelerated growth of digital pathology, there are several applications of AI in the analysis of pathologic images for diagnosis, grading, and prognostic biomarker interpretation. Important advances have focused on automating time-consuming tasks to increase pathologist efficiency, enabling them to increase the time they allocate to high-level, decision-making tasks, particularly those related to the complexity and confounding factors associated with disease presentation.57 A few specific examples are highlighted, including a DL system that has been developed to assign a Gleason score, with accuracy exceeding that of general pathologists, by whole-slide image analysis of radical prostatectomy specimens.58 A convolutional neural network has been used to automate detection of tumor-infiltrating lymphocytes in images of tissue slides from The Cancer Genome Atlas, a feature prognostic of clinical outcome for patients with 13 different cancer subtypes.59 Another successful application of AI in image analysis is the classification of dermoscopy images and annotation of skin lesions, including melanoma, with precision comparable to that of expert dermatologists.60,61
These applications of AI-based algorithms in oncologic imaging analysis are leading to improved diagnoses and workflow efficiency, and they are actively being translated from research to clinical practice. With the continuous advancements in these algorithms, more AI applications are expected to occur in oncologic imaging in the near future, opening potential venues for detection and management not deemed possible previously.
A major challenge in the development of AI models is the lack of structured, cancer-related health data, as well as the lack of standardization in how unstructured data are collected and stored within the EHR or unified data platform of a single health care system.62 The lack of standardization across health care systems and global communities is even more important as it limits interoperability and widespread exchange of health data and information.63,64 To address this, the minimal Common Oncology Data Elements (mCODE) initiative established universal terms and definitions for frequently used patient and tumor attributes, classifications of disease status, and therapeutic interventions.65 Its implementation requires substantial information technology and systems resources, and the feasibility of its adoption in routine clinical practice remains under exploration. Patient-reported outcomes and validated questionnaires are other means by which standardized, patient-generated health data can be collected.66 These data are often systematically recorded as part of clinical trials or routine practice but may also be mined directly from the EHR using natural language processing techniques or asynchronous electronic interactions. Importantly, patient-reported outcomes can be powerful prognostic indicators of survival.67 These tools may have several other benefits for cancer care delivery, research, and clinical operations; however, they may also increase the clerical burden on clinical care teams, and on patients and their caregivers.68 A multidisciplinary approach to their clinical implementation will be necessary to minimize duplicative efforts and optimize the completeness of data capture. Ideally, these standardized data collection and management tools should be in place prior to the development of AI models.
Acceptance of AI technologies within medicine is impeded by the ubiquitously referenced “black box” nature of the mechanism, particularly when considering DL- and neural network–based approaches, which rely on convoluted hidden layers of data interaction.69 Although primitive ML algorithms, like linear regression, are fully transparent in functioning, many modern approaches use strategies that involve generating many thousands of overlapping decision trees with convoluted systems of reinforcement that cannot be graphically represented to any usable degree. Interpretability is further complicated by DL, which relies on hidden layers of data interaction inspired by the interconnectedness of the neurons and synapses of the brain.70 The importance of each variable being used to model an outcome can be estimated by reverse engineering these AI approaches through the concept of Shapley values.71,72 These values are generated by considering a single variable and iteratively processing the same data through a model, while only modifying the variable in question to appreciate the magnitude of effect it has on the outcome. Shapley values can be generated for a model to rank each variable by importance, but they may also be used on the individual level. This distinction allows the clinician to interpret causality from AI.73
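The snippet below illustrates the Shapley-value workflow on a fitted tree-based model, using the shap package as assumed tooling; the features and outcome are simulated. It shows the two levels of interpretation mentioned above: a global ranking of features by mean absolute contribution and a per-patient attribution vector.

```python
# Illustrative sketch of Shapley-value attribution for a fitted tree model.
# Data are simulated; per-feature contributions can be inspected globally
# (mean absolute SHAP value) or for an individual case.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))   # six simulated clinical features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global ranking: average absolute contribution of each feature.
print(np.abs(shap_values).mean(axis=0))
# Individual-level explanation for one patient.
print(shap_values[0])
```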
Another barrier to adoption is the perceived difficulty of navigating an AI-powered study. Principal investigators need not dwell on the technical execution of the science, but instead should learn what kind of questions AI is uniquely suited to answer. Traditional statistical methods are used to assess relationships between variables and provide directed hypothesis testing, whereas AI aims to model a complex system and provide accurate predictions. An appropriate question may attempt to classify a system into binary outcomes, such as predicting the success of a treatment modality; to predict a continuous outcome, such as using Cox regression to estimate length of survival; or to cluster data into an unknown number of bins, such as determining groups of features that correlate with disease subtypes. More universities and academic medical centers are adding AI cores (with data analysts, AI programmers, and computing resources) alongside their statistical and genomics cores. Just as cancer investigators work with genomic data despite few having primary bioinformatics experience, AI-powered cancer research is approachable without a working knowledge of computer programming when supported by robust AI core services and resources.
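The sketch below pairs each question type named above with a standard estimator, using simulated data. The specific algorithms are interchangeable placeholders; the point is the mapping from question (binary classification, continuous prediction, clustering) to model family.

```python
# Sketch pairing each question type with a standard estimator.
# Data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))

binary_outcome = rng.integers(0, 2, 100)       # e.g., treatment success
continuous_outcome = rng.normal(size=100)      # e.g., a survival-time proxy

LogisticRegression().fit(X, binary_outcome)    # binary classification question
LinearRegression().fit(X, continuous_outcome)  # continuous prediction question
KMeans(n_clusters=3, n_init=10).fit(X)         # unsupervised clustering question
```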
Although there has been a substantial increase in the development of cancer-related AI algorithms and advanced CDSS, there remains a paucity of research related to their prospective validation when implemented into routine clinical practice, either to replace or augment human intelligence.74,75 Specifically, there are few randomized controlled trials to demonstrate whether these AI tools improve patient outcomes, increase provider efficiency, and/or are cost-effective.76,77 The adoption of AI models into cancer practice should be evidence-based, so that they result in reduced morbidity and mortality and/or in similar clinical outcomes achieved more efficiently or less expensively. Investment is needed to support multicenter, pragmatic clinical trials to evaluate the impact of AI algorithms and CDSS on multilevel patient, provider, and system outcomes. The lessons learned from prospective studies would be expected to build clinician trust in these AI models and their use in patient care, thereby helping to overcome some of the aforementioned limitations.
The application of AI in cancer practice includes providing clinical decision support for cancer diagnosis and screening, processing medical data for cancer detection or characterization of patient prognosis, and optimizing care delivery and clinical operations by increasing system capacity and allocating resources.78,79 Although AI algorithms and advanced CDSS hold great promise for health care delivery, there are several challenges that must be addressed to optimize their reproducibility, enhance their performance, minimize bias, facilitate fairness, and maintain their accuracy over time.80
A major limitation to the broad application of AI algorithms and CDSS in cancer care delivery is the requirement for diverse and inclusive data sets for training. The patient population for implementation and use of these models should reflect the population from which the training data were obtained. If this does not occur, and certain populations or scenarios are over- or under-represented, biased sampling may lead to poor model performance, inaccurate predictions, and even potential harm.76 This limitation is comparable to that observed with the real-world application of clinical trial results to diverse patient populations, including cohorts underrepresented in those trials (the elderly, adolescents, and young adults; racial and ethnic minority individuals; rural, underserved, and unserved communities).81,82 For example, a multiomic prognostic model that incorporates genomic data may not perform well for racial and ethnic minority individuals because most reference repositories of annotated genomic data used for model development are primarily built from individuals of Northern European ancestry. The AI models are often viewed as objective, and thus embedded bias can be insidious or overlooked if the training data sets are not carefully considered or disclosed.83 Similarly, the control cohort, opposite the test cohort, is typically developed from matched individuals within the EHR who lack the disease state being studied; however, because of their inclusion in the EHR, they are more likely to have frequent medical contact and may not represent truly healthy controls. Furthermore, bias in training data sets can limit AI model transferability or result in the inability to reproduce model results in health care systems external to those in which they were developed and implemented.
Additionally, the predicted outcomes or clinical endpoints used for model training (e.g., care utilization, cost of care, and prescriptions) must be carefully selected to ensure they are not associated with underlying socioeconomic biases, specifically those resulting from intrinsic inequities in health care delivery that systematically lead to disparities in outcomes. A hallmark example of this was demonstrated in a study that evaluated the ability of a commercial algorithm, trained using health care costs as a proxy for uncontrolled illness, to predict patients' need for extra care.84 The algorithm identified Black patients as being healthier (because of lower associated health care costs) than equal-risk White patients, and this in turn reduced the number of Black patients identified for extra care by more than half. Because less money is spent on Black patients than on White patients with equal need, largely as a result of unequal access to care, the algorithm erroneously concluded that Black patients were healthier than White patients.
To prevent or minimize bias from being introduced into AI algorithms and CDSS, it is imperative that training data sets and clinical endpoints are inclusive of the underrepresented cohorts and health care settings they are intended to serve. If this does not occur, model accuracy and scalability will be impacted, and existing health care disparities and systemic biases will be propagated. Methods to identify bias in training data sets are under development, and processes to support their use in model development should become a requirement.85,86 Furthermore, standardized reporting of algorithm source code and training conditions will foster transparency and promote model reproducibility in other similar health care system environments or patient populations.87
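One simple form of bias identification is a subgroup performance audit, sketched below: a model's discrimination is compared across demographic strata before deployment. The group labels and prediction scores are simulated; formal fairness toolkits provide richer metrics and mitigation methods.

```python
# Minimal sketch of a subgroup performance audit: compare a model's AUC
# across demographic strata. Labels, scores, and groups are simulated.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
audit = pd.DataFrame({
    "y_true": rng.integers(0, 2, 1000),
    "y_score": rng.random(1000),
    "group": rng.choice(["A", "B", "C"], size=1000),   # demographic strata
})

for group, subset in audit.groupby("group"):
    auc = roc_auc_score(subset["y_true"], subset["y_score"])
    print(f"group {group}: AUC = {auc:.2f}")
```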
As previously noted, reproducibility of AI model output is a challenge when transferred across health care systems and global communities; but even within the environment in which they were developed, AI algorithms and advanced CDSS are subject to data drift over time, which can impact their performance.88 Data drift may be caused by changes in data formatting or sensing, data quality, natural drift that was not present during model training, and changes in the relationships among features (covariate shift).89 Standards will be necessary to continuously monitor AI models and ensure their validity is maintained as data distributions, patient cohorts, and the practice of medicine evolve.90
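A minimal sketch of drift monitoring is shown below, comparing the current distribution of one input feature against its training-time reference with a two-sample Kolmogorov-Smirnov test. The data and alert threshold are arbitrary examples of the kind of standard that continuous monitoring would formalize.

```python
# Sketch of monitoring one input feature for data drift by comparing its
# current distribution against the training-time reference distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference
current_values = rng.normal(loc=0.4, scale=1.0, size=1000)    # incoming data

statistic, p_value = ks_2samp(training_values, current_values)
if p_value < 0.01:                          # arbitrary example threshold
    print(f"Possible drift detected (KS={statistic:.3f}); review the model.")
else:
    print("No significant drift detected.")
```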
Although the importance of the reporting and reproducibility of AI techniques has been highlighted, this must be tempered by the substantial data security concerns that arise from working with sensitive medical information. Depending on the learning approach and implementation, some algorithms allow for reverse engineering of the final model to re-identify previously de-identified patient information. Moreover, digital forensics now allows for personal health signatures to be captured from a variety of data modalities, including DNA sequencing, CT or MRI images, and diagnosis lists, especially if rare diseases are represented.91
The U.S. Food and Drug Administration and the European Union have developed regulations and draft guidance for the application of AI algorithms and advanced CDSS in medical practice and clinical workflows, with emphasis on aspects that promote ethical AI.92,93 Aside from health-related data protection laws, comprehensive federal legislation to regulate AI, ML, and big data is lacking, including absent standards to define who is legally responsible when AI causes harm.94–96 As AI models are increasingly implemented in medical practice, management of machine-human interactions and delegation of responsibility for clinical decisions and errors will be required.97,98 Those who regulate the application of AI in clinical practice must work collaboratively with AI scientists and engineers, as well as with medical practitioners, to fill the gaps in existing law and develop regulations that appropriately balance the freedom to innovate AI technology with protecting vulnerable populations.99
Artificial intelligence in oncology has already passed the critical threshold of outperforming expert opinion–based scoring systems in several cancer applications,9,100,101 leading to an increase in its clinical implementation. With this traction, it is expected that AI-informed methods will continue to be explored and gradually integrated into practice. Imagining how the field will mature risks exacerbating the decades of criticism that has counterbalanced its frequently audacious predictions102; however, it is necessary to foresee how incremental development can lead to the common goal of true precision oncology.
Relative to other diseases, cancer has the largest number of clinically relevant mutations and the most multimodality treatment options, in part because of advances in translational research and the conduct of clinical trials.103 Oncology arguably has the greatest need for individualized care, given the numerous types of malignancy and the heterogeneity of presentations observed. Developers of cancer-related AI attempt to leverage the field's intrinsic data richness but rely on the trio of computational algorithms, databases, and computing power. Each of these pillars of AI must expand beyond its current confines to realize the level of precision toward which oncology aspires.
The modeling algorithms employed throughout medicine are as varied as the questions each was designed to answer. Because most algorithms are application agnostic, there are substantial efforts across industry and academia to continuously improve the science. This has led to numerous competitions among research groups, including the ImageNet Challenge, in which ResNet set the industry standard in 2015, with 78% accuracy in classifying millions of images,104 only to be superseded repeatedly by algorithms routinely achieving greater than 90% accuracy.105–107 This improvement in computer vision is encouraging, but new strategies to combine the strengths of various AI approaches promise to outpace single-modality approaches. The concept of multiomics data has been reviewed extensively, with each datatype (DNA sequencing, RNA sequencing, proteomics, epigenomics, etc.) contributing statistical power toward explaining an observable biologic phenomenon. If these molecular data could be combined with whole-EHR data elements such as patient demographics, patient-reported outcomes, laboratory values, vital signs, and more (a data source also known as clinomics), the intricacies of a known mutation or prominent methylation event could be informed by the patient's natural history and yield a more organic synthesis of health information. Continue convoluting the same patient's model with digital pathology, computer vision radiomics, and all other capturable data, and a fully integrated model of personalized medicine comes into focus.
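A minimal sketch of this kind of early fusion is shown below: feature blocks from genomic, clinomic, and imaging-derived sources are concatenated into one matrix before a single model is trained. All blocks are simulated, and real pipelines would additionally harmonize scales, handle missing data, and validate prospectively.

```python
# Sketch of early-fusion multimodal modeling: feature blocks from different
# sources are concatenated into one matrix before training. Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
n = 300
genomic = rng.normal(size=(n, 100))     # e.g., mutation/expression features
clinomic = rng.normal(size=(n, 20))     # e.g., labs, vitals, demographics
radiomic = rng.normal(size=(n, 50))     # e.g., imaging texture features

X = np.hstack([genomic, clinomic, radiomic])
y = rng.integers(0, 2, size=n)          # simulated outcome label

RandomForestClassifier(n_estimators=200).fit(X, y)
```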
Databases to house this staggering amount of health data must be developed. Strict standards in clinical documentation and data formatting will be necessary to allow interoperability. Equally important, the data must be automatically updated as a patient’s health status changes over time; only by automatic accrual would the depth of data needed for such an ambitious whole-person health model be possible. Auto-updating (“living”) databases highlight the importance of continuous or semicontinuous algorithm training, in which a computational model is either retrained with each additional unit of supplemental information or on a schedule (e.g., nightly runs). Rigorous model quality metrics, akin to p values in traditional statistics, can be calculated and assessed upon retraining, without human intervention, before the updated model is used. This concept of continuous learning is already prevalent in other industries that leverage DL models but is just entering the health care space as proof-of-concept instances arise. Using such an extraordinary breadth of information is anticipated to allow new degrees of medical personalization in the future. Hyperindividualized cancer screening and prevention strategies, dosages of systemic cancer therapeutics and radiation, timing of restaging and surveillance testing, and selection and sequencing of diagnostic and treatment interventions are just a few of the transformative ways this technology may extend life and enable more cures.
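The gating idea behind continuous or scheduled retraining can be sketched as below: a candidate model is refit on newly accumulated records and promoted only if a held-out quality metric clears a predefined threshold. The data, threshold, and function name are invented placeholders, not a production pipeline.

```python
# Conceptual sketch of a "living database" retraining loop with a quality
# gate. All data and thresholds are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

PROMOTION_THRESHOLD = 0.80   # assumed minimum acceptable validation AUC

def nightly_retrain(X, y, current_model):
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_val, candidate.predict_proba(X_val)[:, 1])
    # Promote the candidate only if it clears the quality gate.
    return (candidate, auc) if auc >= PROMOTION_THRESHOLD else (current_model, auc)

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
model, auc = nightly_retrain(X, y, current_model=None)
print(f"validation AUC = {auc:.2f}")
```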
The accelerated growth of computational power, increasingly available machine-readable EHRs, multiomics, and medical imaging data—as well as advances in DL, particularly convolutional neural networks—have revolutionized the development and use of AI algorithms and CDSS in cancer-related imaging analysis, genomics, and clinical practice across the cancer continuum. Ongoing research to support the application of AI to cancer genomics is anticipated to enable multicancer early detection and determination of tumor site of origin. This can transform cancer screening, particularly for less prevalent and rare cancers, and it may enhance surveillance strategies for cancer survivors. Continued advances in imaging-based ML can also lead to the development of models that assess risk for various types of cancer, enhance the diagnostic accuracy of cancers, or predict associated morbidity and mortality outcomes. This can allow for individualized screening and prevention strategies, treatment approaches, and survivor surveillance, and it can furthermore support the virtual biopsy to classify the pathologic and genomic features associated with a cancer diagnosis.
The highlighted challenges in the development, implementation, and maintenance of AI models are substantial but not insurmountable. In summary, they include the lack of data standardization, collection, and management; inherent bias in training data sets; lack of reporting standards for source code and training conditions; limited prospective clinical validation of models; a need for seamless integration into clinical workflows without adding clerical or cognitive burden for providers; lack of regulatory and legal frameworks; and limited reproducibility when transferred across health care systems and populations, given the limited EHR interoperability and the dynamic nature of data and best-practice guidelines.108 Concurrent investment in the people and resources required to address these challenges should occur in parallel with the investments that drive AI technology innovations.
Cancer practitioners are now benefitting from early AI tools that are just becoming available within the EHR, and there is no shortage of clamor for future implications.51,109,110 The utility of AI within oncology may be appreciated by visualizing each patient’s health as a digital photograph. Medicine attempts to decipher this image on a pixel-by-pixel basis, using studies rooted in explainable pathophysiology to precisely elucidate individual pixels scattered throughout. Meanwhile, AI takes a whole-picture approach, starting as a poorly focused approximation but with increasing definition as improvements in databases, algorithms, and computational power continue.
Disclosures provided by the authors and data availability statement (if applicable) are available with this article at DOI https://doi.org/10.1200/EDBK_350652.
The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc.
Research Funding: Siemens Healthineers
Patents, Royalties, Other Intellectual Property: Mayo/SK and Imago Systems have a research and development agreement (Inst)
Research Funding: Takeda (Inst)
No other potential conflicts of interest were reported.
1. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92:807-812.
2. Schwartz WB, Patil RS, Szolovits P. Artificial intelligence in medicine. Where do we stand? N Engl J Med. 1987;316:685-688.
3. Floridi L. AI and its new winter: from myths to realities. Philos Technol. 2020;33:1-3.
4. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44-56.
5. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504-507.
6. Carter SM, Rogers W, Win KT, et al. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast. 2020;49:25-32.
7. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380:1347-1358.
8. Shimizu H, Nakayama KI. Artificial intelligence in oncology. Cancer Sci. 2020;111:1452-1460.
9. Dorado-Díaz PI, Sampedro-Gómez J, Vicente-Palacios V, et al. Applications of artificial intelligence in cardiology. The future is already here. Rev Esp Cardiol (Engl Ed). 2019;72:1065-1075.
10. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.
11. Erickson BJ, Korfiatis P, Kline TL, et al. Deep learning in radiology: does one size fit all? J Am Coll Radiol. 2018;15(3 pt B):521-526.
12. Kehl KL, Xu W, Lepisto E, et al. Natural language processing to ascertain cancer outcomes from medical oncologist notes. Clin Cancer Inform. 2020;4:680-690.
13. Brown T, Mann B, Ryder N, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877-1901.
14. Haddad T, Helgeson JM, Pomerleau KE, et al. Accuracy of an artificial intelligence system for cancer clinical trial eligibility screening: retrospective pilot study. JMIR Med Inform. 2021;9:e27767.
15. Wang G, Jung K, Winnenburg R, et al. A method for systematic discovery of adverse drug events from clinical notes. J Am Med Inform Assoc. 2015;22:1196-1204.
16. Chen H, Engkvist O, Wang Y, et al. The rise of deep learning in drug discovery. Drug Discov Today. 2018;23:1241-1250.
17. Shipp MA, Ross KN, Tamayo P, et al. Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nat Med. 2002;8:68-74.
18. Attia ZI, Noseworthy PA, Lopez-Jimenez F, et al. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. Lancet. 2019;394:861-867.
19. Nadarzynski T, Miles O, Cowie A, et al. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health. 2019;5:2055207619871808.
20. Vávra P, Roman J, Zonča P, et al. Recent development of augmented reality in surgery: a review. J Healthc Eng. 2017;2017:4574172.
21. Panesar S, Cagle Y, Chander D, et al. Artificial intelligence and the future of surgical robotics. Ann Surg. 2019;270:223-226.
22. Abdalkareem ZA, Amir A, Al-Betar MA, et al. Healthcare scheduling in optimization context: a review. Health Technol (Berl). 2021;11:445-469.
23. Szlosek DA, Ferrett J. Using machine learning and natural language processing algorithms to automate the evaluation of clinical decision support in electronic medical record systems. EGEMS (Wash DC). 2016;4:1222.
24. Huang T-T, Lei L, Chen C-HA, et al. A new clinical-genomic model to predict 10-year recurrence risk in primary operable breast cancer patients. Sci Rep. 2020;10:1-10.
25. Spratt DE, Zhang J, Santiago-Jiménez M, et al. Development and validation of a novel integrated clinical-genomic risk group classification for localized prostate cancer. J Clin Oncol. 2018;36:581-590.
26. Jiang J, Ding Y, Wu M, et al. Integrated genomic analysis identifies a genetic mutation model predicting response to immune checkpoint inhibitors in melanoma. Cancer Med. 2020;9:8498-8518.
27. Boehm KM, Khosravi P, Vanguri R, et al. Harnessing multimodal data integration to advance precision oncology. Nat Rev Cancer. 2022;22:114-126.
28. Bhinder B, Gilvary C, Madhukar NS, et al. Artificial intelligence in cancer research and precision medicine. Cancer Discov. 2021;11:900-915.
29. Hackshaw A, Clarke CA, Hartman A-R. New genomic technologies for multi-cancer early detection: rethinking the scope of cancer screening. Cancer Cell. 2022;40:109-113.
30. Liu MC. Transforming the landscape of early cancer detection using blood tests-commentary on current methodologies and future prospects. Br J Cancer. 2021;124:1475-1477.
31. Magbanua MJM, Swigart LB, Wu H-T, et al. Circulating tumor DNA in neoadjuvant-treated breast cancer reflects response and survival. Ann Oncol. 2021;32:229-239.
32. Ahlquist DA. Universal cancer screening: revolutionary, rational, and realizable. NPJ Precis Oncol. 2018;2:23.
33. Tran KA, Kondrashova O, Bradley A, et al. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med. 2021;13:152.
34. Grewal JK, Tessier-Cloutier B, Jones M, et al. Application of a neural network whole transcriptome-based pan-cancer method for diagnosis of primary and metastatic cancers. JAMA Netw Open. 2019;2:e192597.
35. Wang K, Duan X, Gao F, et al. Dissecting cancer heterogeneity based on dimension reduction of transcriptomic profiles using extreme learning machines. PLoS One. 2018;13:e0203824.
36. Lee JS, Das A, Jerby-Arnon L, et al. Harnessing synthetic lethality to predict the response to cancer treatment. Nat Commun. 2018;9:2546.
37. Zhou J, Theesfeld CL, Yao K, et al. Deep learning sequence-based ab initio prediction of variant effects on expression and disease risk. Nat Genet. 2018;50:1171-1179.
38. Davis RJ, Gönen M, Margineantu DH, et al. Pan-cancer transcriptional signatures predictive of oncogenic mutations reveal that Fbw7 regulates cancer cell oxidative metabolism. Proc Natl Acad Sci USA. 2018;115:5462-5467.
39. Jaganathan K, Panagiotopoulou SK, McRae JF, et al. Predicting splicing from primary sequence with deep learning. Cell. 2019;176(3):535-548.
40. Khanani S. Editorial comment: artificial intelligence in mammography-our new reality. AJR Am J Roentgenol. Epub 2022 Jan 12.
41. Lamb LR, Lehman CD, Gastounioti A, et al. Artificial intelligence (AI) for screening mammography, from the AI Special Series on AI Applications. AJR Am J Roentgenol. Epub 2022 Jan 12.
42. Stabile A, Giganti F, Rosenkrantz AB, et al. Multiparametric MRI for prostate cancer diagnosis: current status and future directions. Nat Rev Urol. 2020;17:41-61.
43. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577:89-94.
44. Twilt JJ, van Leeuwen KG, Huisman HJ, et al. Artificial intelligence based algorithms for prostate cancer classification and detection on magnetic resonance imaging: a narrative review. Diagnostics (Basel). 2021;11:959.
45. Ardila D, Kiraly AP, Bharadwaj S, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med. 2019;25:954-961.
46. Baldwin DR, Gustafson J, Pickup L, et al. External validation of a convolutional neural network artificial intelligence tool to predict malignancy in pulmonary nodules. Thorax. 2020;75:306-312.
47. Applied Radiology. AI-Powered Clinical Decision Support Software for Early Lung Cancer Diagnosis Gets FDA Nod. News release. March 23, 2021. https://appliedradiology.com/articles/ai-powered-clinical-decision-support-software-for-early-lung-cancer-diagnosis-gets-fda-nod. Accessed March 16, 2022.
48. Roth HR, Lu L, Liu J, et al. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans Med Imaging. 2016;35:1170-1181.
49. Spadaccini M, Marco A, Franchellucci G, et al. Discovering the first US FDA-approved computer-aided polyp detection system. Future Oncol. 2022;18:1405-1412.
50. Zhang SM, Wang YJ, Zhang ST. Accuracy of artificial intelligence-assisted detection of esophageal cancer and neoplasms on endoscopic images: a systematic review and meta-analysis. J Dig Dis. 2021;22:318-328.
51. Bi WL, Hosny A, Schabath MB, et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J Clin. 2019;69:127-157.
52. Wang J-Y, Sandhu N, Mendoza M, et al. RADI-12. Deep learning for automatic detection and contouring of metastatic brain tumors in stereotactic radiosurgery: a retrospective analysis with an FDA-cleared software algorithm. Neurooncol Adv. 2021;3(suppl):abstr RADI-12.
53. Tomaszewski MR, Gillies RJ. The biological meaning of radiomic features. Radiology. 2021;298:505-516.
54. Janssen BV, Verhoef S, Wesdorp NJ, et al. Imaging-based machine-learning models to predict clinical outcomes and identify biomarkers in pancreatic cancer: a scoping review. Ann Surg. 2022;275:560-567.
55. Martin-Gonzalez P, Crispin-Ortuzar M, Rundo L, et al. Integrative radiogenomics for virtual biopsy and treatment monitoring in ovarian cancer. Insights Imaging. 2020;11:94.
56. Calabrese E, Villanueva-Meyer JE, Cha S. A fully automated artificial intelligence method for non-invasive, imaging-based identification of genetic alterations in glioblastomas. Sci Rep. 2020;10:11852.
57. Bera K, Schalper KA, Rimm DL, et al. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol. 2019;16:703-715.
58. Nagpal K, Foote D, Liu Y, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer. NPJ Digit Med. 2019;2:1-10.
59. Saltz J, Gupta R, Hou L, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep. 2018;23:181-193.
60. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118.
61. Yu L, Chen H, Dou Q, et al. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans Med Imaging. 2017;36:994-1004.
62. Council for Affordable Quality Healthcare. Defining the Provider Data Dilemma, 2016: Challenges, Opportunities, and Call for Industry Collaboration. 2016. www.caqh.org/sites/default/files/explorations/defining-provider-data-white-paper.pdf. Accessed June 15, 2017.
63. Hammond WE, Bailey C, Boucher P, et al. Connecting information to improve health. Health Aff (Millwood). 2010;29:284-288.
64. Luna D, Mayan JC, García MJ, et al. Challenges and potential solutions for big data implementations in developing countries. Yearb Med Inform. 2014;9:36-41.
65. Osterman TJ, Terry M, Miller RS. Improving cancer data interoperability: the promise of the minimal common oncology data elements (mCODE) initiative. Clin Cancer Inform. 2020;4:993-1001.
66. Lipscomb J, Gotay CC, Snyder CF. Patient-reported outcomes in cancer: a review of recent research and policy initiatives. CA Cancer J Clin. 2007;57:278-300.
67. Efficace F, Collins GS, Cottone F, et al. Patient-reported outcomes as independent prognostic factors for survival in oncology: systematic review and meta-analysis. Value Health. 2021;24:250-267.
68. Cheung YT, Chan A, Charalambous A, et al. The use of patient-reported outcomes in routine cancer care: preliminary insights from a multinational scoping survey of oncology practitioners. Support Care Cancer. 2022;30:1427-1439.
69. Kundu S. AI in medicine must be explainable. Nat Med. 2021;27:1328.
70. Savage N. How AI and neuroscience drive each other forwards. Nature. 2019;571:S15-S17.
71. Štrumbelj E, Kononenko I. Explaining prediction models and individual predictions with feature contributions. Knowl Inf Syst. 2014;41:647-665.
72. Lundberg SM, Erion G, Chen H, et al. From local explanations to global understanding with explainable AI for trees. Nat Mach Intell. 2020;2:56-67.
73. Holzinger A, Langs G, Denk H, et al. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019;9:e1312.
74. Angus DC. Randomized clinical trials of artificial intelligence. JAMA. 2020;323:1043-1045.
75. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689.
76. Parikh RB, Teeple S, Navathe AS. Addressing bias in artificial intelligence in health care. JAMA. 2019;322:2377-2378.
77. Emanuel EJ, Wachter RM. Artificial intelligence in health care: will the value match the hype? JAMA. 2019;321:2281-2282.
78. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319:1317-1318.
79. Ngiam KY, Khor IW. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019;20:e262-e273.
80. Fletcher RR, Nakeshimana A, Olubeko O. Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Front Artif Intell. 2021;3:561802.
81. Hamel LM, Penner LA, Albrecht TL, et al. Barriers to clinical trial enrollment in racial and ethnic minority patients with cancer. Cancer Contr. 2016;23:327-337.
82. Murthy VH, Krumholz HM, Gross CP. Participation in cancer clinical trials: race-, sex-, and age-based disparities. JAMA. 2004;291:2720-2726.
83. Zou J, Schiebinger L. AI can be sexist and racist - it's time to make it fair. Nature. 2018;559:324-326.
84. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447-453.
85. Calmon F, Wei D, Vinzamuri B, et al. Optimized pre-processing for discrimination prevention. Adv Neural Inf Process Syst. 2017;30.
86. Vokinger KN, Feuerriegel S, Kesselheim AS. Mitigating bias in machine learning for medicine. Commun Med (Lond). 2021;1:25.
87. Hutson M. Artificial intelligence faces reproducibility crisis. Science. 2018;359:725-726.
88. Quiñonero-Candela J. Dataset Shift in Machine Learning. Neural Information Processing Series. Cambridge, MA: MIT Press; 2009.
89. Davis SE, Lasko TA, Chen G, et al. Calibration drift in regression and machine learning models for acute kidney injury. J Am Med Inform Assoc. 2017;24:1052-1061.
90. Chi S, Tian Y, Wang F, et al. A novel lifelong machine learning-based method to eliminate calibration drift in clinical prediction models. Artif Intell Med. 2022;125:102256.
91. Rocher L, Hendrickx JM, de Montjoye YA. Estimating the success of re-identifications in incomplete datasets using generative models. Nat Commun. 2019;10:3069.
92. US Food and Drug Administration. Guidance Document. Clinical Decision Support Software. Draft Guidance for Industry and Food and Drug Administration Staff. September 2019. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software. Accessed March 11, 2022.
93. EIT Digital, EIT AI Community. A European Approach to Artificial Intelligence: A Policy Perspective. European AI Alliance. Updated March 30, 2022. https://futurium.ec.europa.eu/en/european-ai-alliance/document/european-approach-artificial-intelligence-policy-perspective. Accessed March 30, 2022.
94. Magdziarczyk M. Right to be forgotten in light of regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/ec. In: 6th International Scientific Conference on Social Sciences and Arts SGEM 2019. Volume 6: Book number 1.1. Vienna, Austria: SGEM; 2019:177-184.
95. Health Insurance Portability and Accountability Act of 1996. Public Law 104-191. https://www.govinfo.gov/content/pkg/PLAW-111publ148/pdf/PLAW-111publ148.pdf.
96. Scherer MU. Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harv JL Tech. 2015;29:353.
97. Fenech M, Strukelj N, Buston O; Wellcome Trust. Ethical, Social, and Political Challenges of Artificial Intelligence in Health. London: Future Advocacy; 2018. https://wellcome.org/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf. Accessed March 11, 2022.
98. Taddeo M, Floridi L. How AI can be a force for good. Science. 2018;361:751-752.
99. Gunkel DJ. Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol. 2020;22:307-320.
100. Kurita Y, Kuwahara T, Hara K, et al. Diagnostic ability of artificial intelligence using deep learning analysis of cyst fluid in differentiating malignant from benign pancreatic cystic lesions. Sci Rep. 2019;9:6893.
101. Gerstung M, Papaemmanuil E, Martincorena I, et al. Precision oncology for acute myeloid leukemia using a knowledge bank approach. Nat Genet. 2017;49:332-340.
102. Joyner MJ, Paneth N. Promises, promises, and precision medicine. J Clin Invest. 2019;129:946-948.
103. Califf RM, Zarin DA, Kramer JM, et al. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA. 2012;307:1838-1847.
104. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Institute of Electrical and Electronics Engineers; 2016:770-778.
105. Dai Z, Liu H, Le Q, et al. CoAtNet: marrying convolution and attention for all data sizes. Adv Neural Inf Process Syst. 2021;34.
106. Pham H, Dai Z, Xie Q, et al. Meta pseudo labels. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Institute of Electrical and Electronics Engineers and Computer Vision Foundation; 2021:11557-11568.
107. Wortsman M, Ilharco G, Gadre SY, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time.
108. Chua IS, Gaziel-Yablowitz M, Korach ZT, et al. Artificial intelligence in oncology: path to implementation. Cancer Med. 2021;10:4138-4149.
109. Briganti G, Le Moine O. Artificial intelligence in medicine: today and tomorrow. Front Med (Lausanne). 2020;7:27.
110. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA. 2020;323:509-510.