Research: Machine Learning Proves Faster Than Human Review in Detecting Cancer Cases

April 26, 2016
Open-source machine learning tools were found to be as good as, or better than, human reviewers in detecting cancer cases using data from free-text pathology reports, according to researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing at Indiana University-Purdue University Indianapolis (IUPUI).

The computerized approach was also faster and less resource-intensive than manual human review, the research found. Every state in the U.S. requires cancer cases to be reported to statewide cancer registries for disease tracking, identification of at-risk populations, and recognition of unusual trends or clusters. Typically, however, busy healthcare providers submit cancer reports to equally busy public health departments months into the course of a patient's treatment rather than at the time of initial diagnosis.

As such, this information can be difficult for health officials to interpret, which can further delay health department action, when action is needed, according to the researchers. The Regenstrief Institute and IU researchers have demonstrated that machine learning can greatly facilitate the process, by automatically and quickly extracting crucial meaning from plaintext, also known as free-text, pathology reports, and using them for decision-making.

The researchers sampled 7,000 free-text pathology reports from over 30 hospitals that participate in the Indiana Health Information Exchange (IHIE) and used open-source tools, classification algorithms, and varying feature selection approaches to predict whether a report was positive or negative for cancer. The results indicated that a fully automated review yielded results similar to, or better than, those of trained human reviewers, saving both time and money.
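The general idea of classifying free-text reports can be sketched with a minimal bag-of-words classifier. The snippet below is a toy illustration only, a Laplace-smoothed multinomial naive Bayes written with the Python standard library; the example reports are invented for demonstration and are not drawn from the study's dataset, and the study itself used off-the-shelf open-source tools rather than this hand-rolled code.

```python
import math
from collections import Counter

def train(docs):
    """Fit per-label word counts for a multinomial naive Bayes model.
    docs: list of (text, label) pairs."""
    word_counts = {"positive": Counter(), "negative": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(text, word_counts, label_counts, vocab):
    """Score each label with Laplace-smoothed log-probabilities
    and return the more likely one."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # log prior
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            # add-one smoothing so unseen words don't zero out the score
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy snippets, NOT the study's data
reports = [
    ("sheets of atypical cells consistent with carcinoma", "positive"),
    ("sheets of malignant cells identified", "positive"),
    ("benign tissue no malignancy identified", "negative"),
    ("normal glandular tissue no atypical cells", "negative"),
]
model = train(reports)
print(predict("sheets of cells present", *model))  # prints "positive"
```

Because words like "sheets" occur mostly in the positive examples, the classifier weights them toward a cancer-positive prediction, mirroring (in miniature) the kind of learned association the study describes.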

"Towards Better Public Health Reporting Using Existing Off the Shelf Approaches: A Comparison of Alternative Cancer Detection Approaches Using Plaintext Medical Data and Non-dictionary Based Feature Selection" is published in the April 2016 issue of the Journal of Biomedical Informatics. The study was conducted with support from the Centers for Disease Control and Prevention (CDC).

"We think that it’s no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not," said study senior author Shaun Grannis, M.D., interim director of the Regenstrief Center for Biomedical Informatics. "We have come to the point in time that technology can handle this. A human's time is better spent helping other humans by providing them with better clinical care."

Grannis continued, "A lot of the work that we will be doing in informatics in the next few years will be focused on how we can benefit from machine learning and artificial intelligence. Everyone—physician practices, healthcare systems, health information exchanges, insurers, as well as public health departments—is awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can't do it—but computers can."

Grannis, a Regenstrief Institute investigator and an associate professor of family medicine at the IU School of Medicine, is the architect of the Regenstrief syndromic surveillance detector for communicable diseases and led the technical implementation of Indiana's Public Health Emergency Surveillance System, one of the nation's largest. Studies over the past decade have shown that this system detects outbreaks of communicable diseases seven to nine days earlier and finds four times as many cases as human reporting while providing more complete data.

"Machine learning can now support ideas and concepts that we have been aware of for decades, such as a basic understanding of medical terms," said Grannis. "We found that artificial intelligence was at least as accurate as humans in identifying cancer cases from free-text clinical data. For example, the computer 'learned' that the word 'sheet' or 'sheets' signified cancer, as 'sheet' or 'sheets of cells' are used in pathology reports to indicate malignancy.

"This is not an advance in ideas, it's a major infrastructure advance—we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision," Grannis said.
