Less than 5% of patients with cancer enroll in clinical trials, and 1 in 5 trials are stopped for poor accrual. We evaluated an automated clinical trial matching system that uses natural language processing to extract patient and trial characteristics from unstructured sources and machine learning to match patients to clinical trials.

Medical records from 997 patients with breast cancer were assessed for trial eligibility at Highlands Oncology Group between May and August 2016. System and manual attribute extraction and eligibility determinations were compared using the percentage of agreement for 239 patients and 4 trials. Sensitivity and specificity of system-generated eligibility determinations were measured, and the time required for manual review was compared with that for system-assisted eligibility determinations.

Agreement between system and manual attribute extraction ranged from 64.3% to 94.0%. Agreement between system and manual eligibility determinations was 81%-96%. System eligibility determinations demonstrated specificities between 76% and 99%, with sensitivities between 91% and 95% for 3 trials and 46.7% for the 4th. Manual eligibility screening of 90 patients for 3 trials took 110 minutes; system-assisted eligibility determinations of the same patients for the same trials required 24 minutes.

In this study, the clinical trial matching system demonstrated promising performance in screening patients with breast cancer for trial eligibility. System-assisted trial eligibility determinations were substantially faster than manual review, and the system reliably excluded ineligible patients for all trials and identified eligible patients for most trials.

Cancer clinical trials offer innovative interventions that may improve survival and quality of care for patients with cancer.1 However, fewer than 5% of adults diagnosed with cancer participate in clinical trials.2,3 Moreover, 20% of all cancer clinical trials fail to complete enrollment.2 A 2015 study demonstrated that more than 40% of terminated clinical trials were ended because of low enrollment and 20% because of safety or efficacy concerns.4 Reported reasons for low enrollment included large numbers of eligibility criteria, rigid trial inclusion criteria, and enrollment insufficient to provide the statistical power needed to answer the research question.

Community cancer centers and physicians play a vital role in recruiting patients to clinical trials.5 Patients typically become aware of trial opportunities through clinical staff, such as clinical trial coordinators.6 However, the time required for clinical staff to review medical records, treatment histories, imaging studies, and laboratory reports to identify relevant clinical trials for patients can pose significant barriers to clinical trial accrual.

Technologies that improve the efficiency of trial eligibility screening and enrollment can help to offset some of these challenges. Clinical trial matching systems and services range in scope from identifying relevant trials for which a patient may be eligible to providing full matching services, including screening, detailed assessments, data collection, and enrollment assistance.7,8 The National Library of Medicine’s ClinicalTrials.gov registry provides information on available clinical trials and leverages predictive text to help guide coordinators to relevant trials. BreastCancerTrials.org also contains a searchable database of breast cancer trials.

Clinical decision support systems (CDSSs) may provide information on literature-based guidelines and information on clinical research and trials.9,10 CDSSs enabled by artificial intelligence (AI) can generate insights from data; identify therapeutic options; and summarize real-world evidence, outcomes, and epidemiologic associations.9-16 Tools such as Mendel.AI,17 Antidote,18 Smart Patients,19 Synergy,20 Deep 6 AI,21 and Watson for Clinical Trial Matching (WCTM) are some examples of AI trial matching systems.22 Antidote allows patients to search for trials and provides sponsors with a list of interested patients. Patients can search Smart Patients to find trials on the basis of drug, targeted therapy, or condition. Synergy provides patients with possible clinical trials and allows them to register their interest in selected trials. Mendel.AI can be used by both clinicians and patients to identify suitable trials. Deep 6 AI and WCTM, designed for clinicians, use medical ontologies, machine learning (ML), and natural language processing (NLP) to help to identify patients who match trial criteria. Deep 6 AI and WCTM’s ontologic approach has been shown to assist clinicians in cohort selection for clinical trials.23 Although Deep 6 AI shows all mismatches, near matches, and matches, WCTM’s interface is focused on matches to assist clinicians in efficient trial eligibility determinations.

Few evaluation studies have been conducted on the effectiveness and performance of clinical trial matching systems.10 To further investigate the potential of such systems to reduce workload, this study used WCTM and data from a community cancer center to screen a large number of patients for a select set of clinical trials. After eligibility screening was completed, possible patient-trial matches were manually reviewed by clinical staff to evaluate WCTM performance.

Study Setting

This study was conducted at Highlands Oncology Group, a community cancer practice in Arkansas with 15 physicians and a staff of 310, including 7 full-time and 2 part-time clinical trial coordinators. Highlands’s 3 sites provide comprehensive cancer care, including access to clinical trials, for approximately 11,000 unique patients annually. The clinical trial matching process at Highlands consists of daily screening of medical records, additional patient evaluation to confirm eligibility, and ultimately, consent and trial enrollment for those found to be eligible. One of 3 coordinators reviewed the medical records for each patient and determined inclusion/exclusion for a given trial. The nurse coordinators involved in screening hold a minimum of a bachelor’s degree and receive additional training in clinical trials. Typically, the list of trials generated during the screening process is presented to the oncologist and trial team, who then discuss relevant trials with patients to confirm trial eligibility.

WCTM Architecture

WCTM has 3 main components: a trial data intake process, a patient data intake process, and a matching process. The matching process compares trial eligibility and exclusion criteria with patient data, which yields a list of trials for which patients may qualify. WCTM trial and patient data intake components use NLP to process unstructured information. Eligibility and exclusion criteria are extracted from ClinicalTrials.gov or trial protocols using NLP. Briefly, the sponsor or client first provides the trial protocol, which is ingested by the system. The output produced from WCTM’s trial intake process (Appendix Table A1) details criteria for the trial and provides additional information with regard to clinical concepts and values with associated WCTM attributes. Manual review of inclusion and exclusion criteria by clinical staff ensures that WCTM’s NLP interpreted the criteria correctly. Trial filters may be added at this step to narrow the patient population for consideration and reduce the load of patients who require a full review of inclusion and exclusion criteria.

During matching, WCTM’s NLP analyzes unstructured data from electronic health records (EHRs) and, along with its ML component, populates a patient data model with attributes relevant for a specified cancer type. WCTM uses an NLP pipeline that implements traditional language processing steps for tokenization, parts-of-speech tagging, negation, hypotheticals detection, normalization, and expansion of concepts. This NLP pipeline includes annotators that parse clinical documents selected from the EHR and can be used to extract patient-level medical attributes defined in study criteria by identifying named entities, defining mappings for relationships between entities, and identifying pertinent test results.
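The general pattern of attribute extraction with negation handling can be illustrated with a minimal sketch. This toy extractor is not WCTM's implementation (which is proprietary); the note text, attribute names, and negation cues are hypothetical, but the sketch shows why negation detection matters: without it, "no evidence of metastatic disease" would be read as a positive finding.

```python
import re

# Illustrative sketch only: find a named entity (here, HER2 status) in free
# text, normalize its value, and flag simple negation. Real clinical NLP
# pipelines use far richer tokenization, section detection, and ontologies.
NEGATION_CUES = ("no ", "not ", "negative for ", "denies ", "without ")

def extract_attribute(text, pattern, name):
    """Return (name, value, negated) for the first match of pattern, else None."""
    for sentence in re.split(r"[.;]\s*", text.lower()):
        m = re.search(pattern, sentence)
        if m:
            # Naive negation scope: any cue earlier in the same sentence.
            negated = any(cue in sentence[: m.start()] for cue in NEGATION_CUES)
            return (name, m.group(1), negated)
    return None

note = "ER positive, PR positive. HER2 negative by FISH. No evidence of metastatic disease."
print(extract_attribute(note, r"her2\s+(positive|negative)", "HER2"))
```

Here "HER2 negative" is captured as an asserted value, whereas a pattern for "metastatic" in the final sentence would be flagged as negated by the preceding "no."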

WCTM ingested a protocol library of portable document format files made available to Highlands that contained inclusion and exclusion criteria for 4 breast cancer trials sponsored by Novartis Pharmaceuticals Corporation (ClinicalTrials.gov identifiers: NCT01633060, NCT02437318, NCT02422615, and NCT01923168). Highlands’s WCTM implementation supported intake of structured patient data, including laboratory tests, sex, cancer diagnosis, and age. Unstructured data sources included the most recent medical progress note for each patient. WCTM was built using data and processes that preceded, and were independent of, the specific implementation at Highlands. The trial data intake process was optimized using 3 rounds of trial ingestion, evaluated by WCTM subject matter experts, to validate ingestion protocols before study inception.

After the criteria from the intake process were curated and patient matching was complete, the matching output was viewed by users in the WCTM user interface on the basis of each user’s permissions for viewing patient personal health information. WCTM returns matching results as met or not met for each protocol criterion through WCTM’s user interface.
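The per-criterion met/not met output described above can be sketched as a simple rule evaluation over the patient data model. This is an illustrative simplification, not WCTM's matching logic; the criteria and attribute names are hypothetical, loosely modeled on breast cancer eligibility rules. Note how attributes absent from the record stay "unknown" rather than being treated as failures, matching the study's handling of unmentioned attributes.

```python
# Minimal sketch of a matching step: each criterion names a patient attribute
# and the set of values that satisfy it.
UNKNOWN = "unknown"

def evaluate_criteria(patient, criteria):
    """Return {criterion_name: 'met' | 'not met' | 'unknown'} for one patient."""
    results = {}
    for name, attribute, allowed in criteria:
        value = patient.get(attribute, UNKNOWN)
        if value == UNKNOWN:
            results[name] = UNKNOWN          # attribute never documented
        else:
            results[name] = "met" if value in allowed else "not met"
    return results

# Hypothetical criteria and patient record for illustration only.
criteria = [
    ("female sex", "sex", {"female"}),
    ("HER2 negative", "her2_status", {"negative"}),
    ("ER positive", "er_status", {"positive"}),
]
patient = {"sex": "female", "her2_status": "negative"}  # ER status not documented
print(evaluate_criteria(patient, criteria))
```

A reviewer seeing this output would know the patient satisfies two criteria and needs a chart check only for the undocumented ER status, which is the workload reduction the interface aims for.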

Study Design

The objective of this study was to evaluate the performance of WCTM patient data intake and matching processes. During the 16-week study period, WCTM processed data from 1,601 medical oncology visits for 997 unique patients with breast cancer. From this group, 239 patients with breast cancer (representing a single batch of patients for WCTM processing) were randomly selected for validation from patient visits that occurred over the 16-week period.

Performance Assessment

The progress note (used for both WCTM and manual abstraction) was reviewed by 1 of 3 clinical coordinators for manual extraction of patient attributes and determination of trial eligibility for each of the 239 patients in this pilot study, with trial coordinators blinded to WCTM results during manual review. Coordinators were instructed to report only attributes that were explicitly stated in the medical record. If the medical record did not mention an attribute of interest, both WCTM and trial coordinators recorded the result as unknown.

We evaluated system performance of trial intake-optimized WCTM in terms of NLP accuracy for the patient data intake process, matching process accuracy, and time required for manual versus WCTM-assisted trial eligibility determination. Patient intake NLP accuracy was measured in 218 of 239 patients; the excluded 21 patients had multiple primary cancers or bilateral breast cancer, both of which WCTM was unable to process at the time of this study. The accuracies of the patient intake NLP and matching processes were examined in breast cancer trials NCT01633060, NCT02437318, NCT01923168, and NCT02422615. Accuracy of matching was measured by comparing inclusion/exclusion determinations from WCTM with determinations made by manual screeners as the gold standard.

Demographic and disease characteristics of the patients in the 239-patient subset were compared with those of all 997 patients assessed for trial eligibility using the Kruskal-Wallis equality-of-populations test.24 The extent to which manual review and WCTM matched on values for each of the clinical attributes was assessed, and the absolute agreement was calculated as a percentage. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) associated with use of WCTM were measured, using manual review as the gold standard (ie, correct answer), with positives as trial inclusions and negatives as trial exclusions. Sensitivity, specificity, PPV, and NPV were calculated as follows: sensitivity (TP rate) = TP / (TP + FN); specificity (TN rate) = TN / (TN + FP); precision (PPV) = TP / (TP + FP); NPV = TN / (TN + FN), where TP is defined as a true positive, FN as a false negative, TN as a true negative, and FP as a false positive.
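These four formulas translate directly into code. The confusion-matrix counts below are hypothetical values chosen for illustration; they are not counts from this study.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Compute sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # TP rate: eligible patients correctly included
    specificity = tn / (tn + fp)   # TN rate: ineligible patients correctly excluded
    ppv = tp / (tp + fp)           # precision: inclusions that were truly eligible
    npv = tn / (tn + fn)           # exclusions that were truly ineligible
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one trial: 20 eligible patients correctly included,
# 2 missed, 200 ineligible correctly excluded, 17 wrongly included.
sens, spec, ppv, npv = confusion_metrics(tp=20, fp=17, tn=200, fn=2)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```

For screening, high specificity and NPV matter most: they measure how safely the system rules patients out, which is where the time savings come from.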

To determine whether WCTM improved the efficiency of eligibility determinations, the time taken to evaluate trial eligibility for 90 patients (representing patient load for 1 day) for 3 breast cancer trials (NCT02437318, NCT01923168, NCT01633060) was assessed by 2 Highlands coordinators performing the trial eligibility evaluations: One timed the manual process while the other timed the WCTM-assisted process.25 Analyses were conducted using Microsoft Excel (Microsoft Corporation, Redmond, WA), SAS 9.4 (TS1M3; SAS Institute, Cary, NC), and Stata/MP 15.1 (StataCorp, College Station, TX) software.

Institutional Review

Because the current study did not involve directing patient care or enrolling patients in trials, the Western Institutional Review Board found that the protocol for this study (#20152322) met requirements for waiver of consent26 and granted approval as an exempt study of a technology pilot for epidemiologic research. Study sites had access to a limited set of EHR data related to this study; all personally identifiable information in the EHR was redacted before transmittal to WCTM. Sites transmitted encrypted batches of EHR data through a Secure Sockets Layer–secured cloud-based website. Only specifically authorized personnel were able to access data in the WCTM system. For actual patient enrollment in clinical trials, determinations of inclusion or exclusion were based solely on the manual screening process in place at Highlands.

Study Population

During 1,601 clinic visits, 997 unique patients with breast cancer were processed. A representative subset of 239 patients (validation group) was used in this pilot study. The characteristics of this group are listed in Table 1, along with those of the entire 997-patient pool. In both groups, patients with breast cancer ranged in age from 25 to 98 years, were mainly female, and commonly had hormone receptor–positive and human epidermal growth factor receptor 2 (HER2)–negative disease. Compared with the 997-patient pool, the validation subset of 239 patients was younger (mean age, 61.5 v 64.6 years; P < .01), more likely to have metastatic disease (41% v 17%), and less likely to be HER2 positive (13% v 26%). The 2 groups were otherwise similar. Twenty-one patients with multiple primary cancers or bilateral breast cancer were excluded from NLP evaluation because these cancers were not supported by WCTM at the time of this study. The demographics of the 218 patients used for the NLP analysis compared with those of the larger group of 239 patients are listed in Table 1.


TABLE 1. Study Population Demographic and Disease Characteristics

Performance of Patient Attribute Extraction

Accuracy values of WCTM-extracted clinical attributes compared with manual extraction as the gold standard are listed in Table 2. Agreement for individual attributes ranged from 64.3% for BRCA1 mutation status to 94% for the cN category. In this study, there were relatively few cases of BRCA1 mutations because BRCA testing is not routinely performed on all patients at Highlands.


TABLE 2. Natural Language Processing Performance for 17 Clinical Attributes (n = 218)

Several areas of disagreement were artifacts of the study procedure. In this pilot, 36 patients were staged by WCTM using NLP annotations and medical logic applied to unstructured text describing tumor size and lymph node count; however, manual reviewers were instructed to record only the T and N staging values that were explicitly stated in the record. Thus, for these 36 patients, the manual reviewer entered unknown. In addition, WCTM assigned T and N category values as clinical if no information about surgery could be located or pathologic if information about a surgery was found. WCTM typically prioritizes pathology records for pT and pN values; however, in the current study, pathology records were not available because of a lack of WCTM-EHR integration.

Performance of Eligibility Determination

Manual and WCTM eligibility determinations were compared (Appendix Table A2), and percentage of agreement, sensitivity, and specificity were calculated (Table 3) using manual review as the gold standard. Overall, WCTM and manual review agreed on trial eligibility determinations in 81%-96% of patients. In terms of sensitivity, WCTM correctly identified 91%-95% of patients in 3 of 4 studies. In the 4th study (NCT01923168), WCTM incorrectly excluded 8 of 15 eligible patients, yielding a sensitivity of 46.7%; this was attributable to overly stringent neoadjuvant filters that WCTM no longer uses. Specificity across all 4 trials ranged from 76% to near 100%, which indicates that WCTM was effective at excluding patients who did not meet eligibility criteria. Inclusion and exclusion mismatches were due to incorrectly extracted attributes (n = 69), second primary cancers and bilateral breast cancers (n = 29), manual reviewer errors (n = 22), incorrect medical logic inferences in the matching process (n = 7), and data conflicts (n = 6).


TABLE 3. Manual Versus WCTM Trial Eligibility Determination (n = 239)

Time for Eligibility Determination

The time required to screen the patient load for 1 day, including both patients with cancer and hematology patients, was recorded. Clinical trial screeners each assess medical records for approximately 90 different patients daily, a process that requires 1-4 hours per screener, depending on the number of patients with cancer types related to a given clinical trial. On the day the timing study was conducted, 90 patients were manually screened by 1 coordinator for 3 breast cancer trials in 110 minutes; this coordinator was selected on the basis of availability that day. Another coordinator timed the WCTM-assisted eligibility determination of the same patients for the same set of trials. The WCTM-assisted approach took 24 minutes, a 78% reduction in screening time,25 consistent with other studies using similar tools.10,27,28 The WCTM-assisted approach demonstrated accuracy that ranged from 77.8% to 99.4% and screened out 69 noneligible patients with cancer (Appendix Table A3).

This study evaluated the performance of an AI clinical trial matching tool for eligibility screening for breast cancer clinical trials in the workflow of a busy community oncology practice. Breast cancer trials have some of the most complex inclusion and exclusion criteria of all clinical trials because of advances in the understanding of this malignancy and a growing number of targeted therapies available for treatment. This study examined WCTM performance by comparing system-generated and human-abstracted clinical attributes in trial eligibility determinations for a breast cancer population in a community cancer center. Agreement between system and human attribute extraction varied, with some disagreement as a result of artifacts of the study procedure that either would not be seen in practice or would be addressed in practice. Nevertheless, WCTM correctly identified most eligible and ineligible patients, consistent with similar studies that used automated approaches.10,27,28 In an actual full implementation of WCTM, additional data from the EHR would likely improve WCTM performance and result in greater time savings compared with manual review. This study evaluated the technical performance of WCTM, and additional studies are now under way to assess its effects on trial enrollment.

The time savings from WCTM-assisted screening are consistent with other studies using automated screening, but none have examined performance in contemporary breast cancer trials.25 One 2010 study showed that an automated screening tool reduced the time required to screen patients for 4 of 5 clinical trials for a variety of cancer types.28 Another study demonstrated an increase in the efficiency of screening pediatric patients for trials in an emergency department setting.27 CDSSs are not just for use in clinical trial matching; many health care systems use clinical decision support tools for a variety of functions,29,30 including assisting with clinical events monitoring; processing of radiology, emergency, and pathology reports; and matching for clinical trials.12,13,22 Our study adds to the growing body of evidence that demonstrates that trial matching systems can perform accurately and have the potential to reduce clinical trial coordinator workload, even in the context of patients with complex breast cancer treated in a community cancer center.

This study has several limitations. It was relatively small and conducted at a single rural, community institution for a limited set of breast cancer trials. Therefore, these results might not generalize to other settings, cancers, or trials. Manual review of each patient was performed by only 1 of 3 coordinators, and the timing study was limited to a single day. In addition, only 1 medical note was used for attribute extraction and trial eligibility determination. System performance would likely be enhanced if all available clinical data were used, as in the case with WCTM-EHR integration, which is how WCTM is typically used in practice.

In conclusion, this study provides evidence that an AI-enabled clinical trial matching system can assist and expedite matching of patients with breast cancer to clinical trials. System assistance decreased the time required for research coordinators to assess trial eligibility. WCTM excluded ineligible patients with 76%-99% specificity across all 4 trials and identified eligible patients with > 90% sensitivity in 3 of 4 trials. Additional studies are needed to evaluate WCTM for other cancers and to measure the impact such systems have on actual trial recruitment and patient outcomes.

© 2020 by American Society of Clinical Oncology

Presented at the 2017 American Society of Clinical Oncology Annual Meeting, Chicago, IL, June 2-7, 2017.

Supported by Novartis Pharmaceuticals Corporation. Representatives of IBM Watson Health and Novartis Pharmaceuticals Corporation were involved in the design and conduct of the study and collection, management, analysis, and interpretation of the data.

Conception and design: J. Thaddeus Beck, Melissa Rammage, Gretchen P. Jackson, Irene Dankwa-Mullan, Sadie E. Coverdill, M. Paul Williamson, Kyu Rhee, Michael Vinegra

Financial support: M. Christopher Roebuck, Michael Vinegra

Administrative support: J. Thaddeus Beck, Melissa Rammage, Gretchen P. Jackson, M. Paul Williamson, Michael Vinegra

Provision of study material or patients: J. Thaddeus Beck, Helen Holtzen

Collection and assembly of data: J. Thaddeus Beck, Melissa Rammage, M. Christopher Roebuck, Adam Torres, Helen Holtzen, M. Paul Williamson, Michael Vinegra

Data analysis and interpretation: J. Thaddeus Beck, Melissa Rammage, Gretchen P. Jackson, Anita M. Preininger, Irene Dankwa-Mullan, M. Christopher Roebuck, Sadie E. Coverdill, Quincy Chau, Michael Vinegra

Manuscript writing: All authors

Final approval of manuscript: All authors

Accountable for all aspects of the work: All authors

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated unless otherwise noted. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/cci/author-center.

Open Payments is a public database containing information reported by companies about payments made to US-licensed physicians (Open Payments).

J. Thaddeus Beck

Research Funding: Novartis (Inst), Genentech (Inst), Roche (Inst), Eli Lilly (Inst), Amgen (Inst), Heat Biologics (Inst), AbbVie (Inst), AstraZeneca (Inst), Pfizer (Inst), Seattle Genetics (Inst), Astellas Pharma (Inst)

Melissa Rammage

Employment: IBM Corporation

Patents, Royalties, Other Intellectual Property: Patent (publish) with IBM

Gretchen P. Jackson

Employment: IBM Corporation, Vanderbilt University Medical Center

Leadership: IBM Corporation

Stock and Other Ownership Interests: IBM Corporation

Speakers’ Bureau: IBM Corporation

Research Funding: IBM Corporation

Travel, Accommodations, Expenses: IBM Corporation

Anita M. Preininger

Employment: IBM Corporation

Stock and Other Ownership Interests: Merck

Irene Dankwa-Mullan

Employment: IBM Watson Health

M. Christopher Roebuck

Consulting or Advisory Role: Pharmaceutical Manufacturer Association (Inst), IBM Watson Health (Inst)

Adam Torres

Employment: Highlands Oncology Group, IQVIA Biotech

Sadie E. Coverdill

Employment: IBM Watson Health

Stock and Other Ownership Interests: IBM Watson Health

M. Paul Williamson

Employment: Novartis

Stock and Other Ownership Interests: Novartis

Quincy Chau

Employment: Novartis, AstraZeneca

Stock and Other Ownership Interests: Novartis, Immunomedics, Mirati Therapeutics, Exelixis, NewLink Genetics, AstraZeneca

Travel, Accommodations, Expenses: Novartis, AstraZeneca

Kyu Rhee

Employment: IBM Corporation

Leadership: IBM Corporation

Stock and Other Ownership Interests: CVS Health (I), Johnson & Johnson (I), Merck (I), Celgene (I), Allergan (I), Eli Lilly (I)

Michael Vinegra

Employment: Novartis

Stock and Other Ownership Interests: Novartis, Alcon

Travel, Accommodations, Expenses: Novartis

No other potential conflicts of interest were reported.


TABLE A1. Examples of Inclusion/Exclusion Criteria


TABLE A2. Manual Versus WCTM Trial Eligibility Determination


TABLE A3. Time for Eligibility Determination (n = 90 Patients With Cancer and Hematology Patients)


Project and administrative support was provided by Nancy Roper, BMath, and Courtney Simmons, BA, CCRP, of Highlands Oncology Group. Technical support was provided by Brett South, PhD, of IBM Watson Health and Eliza Argonza-Aviles and Mickey S. Howell, BSc, of Novartis Pharmaceuticals Corporation. Editorial support of earlier versions was provided by Van Willis, PhD, of IBM Watson Health and Kenneth W. Culver, MD; Robert W. Sweetman, MD; and Ryad A. Ali, MBA, of Novartis Pharmaceuticals Corporation. Novartis Pharmaceuticals Corporation provided financial support for medical editorial assistance from Abbie Saunders, PhD, of ArticulateScience.

1. Clauser SB, Johnson MR, O’Brien DM, et al: Improving clinical research and cancer care delivery in community settings: Evaluating the NCI community cancer centers program. Implement Sci 4:63, 2009
2. Stensland KD, McBride RB, Latif A, et al: Adult cancer clinical trials that fail to complete: An epidemic? J Natl Cancer Inst 106:dju229, 2014
3. Unger JM, Cook E, Tai E, et al: The role of clinical trial participation in cancer research: Barriers, evidence, and strategies. Am Soc Clin Oncol Educ Book 35:185-198, 2016
4. Carlisle B, Kimmelman J, Ramsay T, et al: Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials. Clin Trials 12:77-83, 2015
5. Nass SJ, Moses HL, Mendelsohn J (eds): A National Cancer Clinical Trials System for the 21st Century: Reinvigorating the NCI Cooperative Group Program. Washington, DC, National Academies Press, 2010
6. Chen L, Grant J, Cheung WY, et al: Screening intervention to identify eligible patients and improve accrual to phase II-IV oncology clinical trials. J Oncol Pract 9:e174-e181, 2013
7. Cancer Action Network: An Overview of Cancer Clinical Trial Matching Services. Atlanta, GA, American Cancer Society, 2018
8. Opar A: New tools automatically match patients with clinical trials. Nat Med 19:793, 2013
9. Kantarjian H, Yu PP: Artificial intelligence, big data, and cancer. JAMA Oncol 1:573-574, 2015
10. Ni Y, Wright J, Perentesis J, et al: Increasing the efficiency of trial-patient matching: Automated clinical trial eligibility pre-screening for pediatric oncology patients. BMC Med Inform Decis Mak 15:28, 2015
11. Bibault JE, Burgun A, Giraud P: Artificial intelligence applied to radiation oncology [in French]. Cancer Radiother 21:239-243, 2017
12. Cheng Z, Nakatsugawa M, Zhou XC, et al: Utility of a clinical decision support system in weight loss prediction after head and neck cancer radiotherapy. JCO Clin Cancer Inform 10.1200/CCI.18.00058
13. He T, Puppala M, Ezeana CF, et al: A deep learning-based decision support tool for precision risk assessment of breast cancer. JCO Clin Cancer Inform 10.1200/CCI.18.00121
14. Patel NM, Michelini VV, Snell JM, et al: Enhancing next-generation sequencing-guided cancer care through cognitive computing. Oncologist 23:179-185, 2018
15. Shivade C, Hebert C, Regan K, et al: Automatic data source identification for clinical trial eligibility criteria resolution. AMIA Annu Symp Proc 2016:1149-1158, 2017
16. Soysal E, Lee HJ, Zhang Y, et al: CATTLE (CAncer treatment treasury with linked evidence): An integrated knowledge base for personalized oncology research and practice. CPT Pharmacometrics Syst Pharmacol 6:188-196, 2017
17. Mendel: Mendel.AI. https://mendel.ai
18. Antidote: Antidote. https://www.antidote.me
19. Smart Patients: Smart Patients. https://www.smartpatients.com/trials
20. Massive Bio: Synergy. https://www.massivebio.com
21. Deep 6 AI: About Deep 6 AI. https://deep6.ai/about
22. Demner-Fushman D, Chapman WW, McDonald CJ: What can natural language processing do for clinical decision support? J Biomed Inform 42:760-772, 2009
23. Patel C, Cimino J, Dolby J, et al: Matching patient records to clinical trials using ontologies. Lecture Notes Comput Sci 4825:816, 2007
24. Kruskal WH, Wallis WA: Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47:583-621, 1952
25. Beck TJ, Vinegra M, Dankwa-Mullan I, et al: Cognitive technology addressing optimal cancer clinical trial matching and protocol feasibility in a community cancer practice. J Clin Oncol 35, 2017 (suppl; abstr 6501)
26. Office of Human Research Protections: Protection of Human Subjects: Subpart A: Basic HHS Policy for Protection of Human Research Subjects. 45 CFR 46.116(d), 2009
27. Ni Y, Kennebeck S, Dexheimer JW, et al: Automated clinical trial eligibility prescreening: Increasing the efficiency of patient identification for clinical trials in the emergency department. J Am Med Inform Assoc 22:166-178, 2015
28. Penberthy L, Brown R, Puma F, et al: Automated matching software for clinical trials eligibility: Measuring efficiency and flexibility. Contemp Clin Trials 31:207-217, 2010
29. Bertsimas D, Dunn J, Pawlowski C, et al: Applied informatics decision support tool for mortality predictions in patients with cancer. JCO Clin Cancer Inform 10.1200/CCI.18.00003
30. Walsh S, de Jong EEC, van Timmeren JE, et al: Decision support systems in oncology. JCO Clin Cancer Inform 10.1200/CCI.18.00001



DOI: 10.1200/CCI.19.00079 JCO Clinical Cancer Informatics no. 4 (2020) 50-59. Published online January 24, 2020.

PMID: 31977254