
Pinaki Sarder, PhD, assistant professor of pathology and anatomical sciences and senior author on the paper, left, with lead author Brendon R. Lutnick, a doctoral candidate.

New Tool Allows for Easier Reading of Medical Images

Published March 1, 2019

Story based on news release by Ellen Goldbaum

Department of Pathology and Anatomical Sciences researchers have developed a tool that lets medical professionals analyze images without engineering expertise.

“We have created an automatic, human-in-the-loop segmentation tool for pathologists and radiologists.”
Pinaki Sarder, PhD, assistant professor of pathology and anatomical sciences

Digital images of biopsies are especially valuable in diagnosing and tracking the progression of certain diseases, such as chronic kidney disease and cancer.

Computational tools called neural networks, which focus on complex pattern recognition, are well suited to such applications. But because machine learning is so complex, medical professionals typically rely on computer engineers to “train” or modify neural networks to properly annotate or interpret medical images.

Able to Digitize Medical Images of Any Organ

The technique developed by the Jacobs School of Medicine and Biomedical Sciences researchers was described in a paper published Feb. 11 in Nature Machine Intelligence.

The tool is expected to be applicable to digitized medical images of any organ; the researchers demonstrated it with histology images of chronic kidney disease and magnetic resonance images of the human prostate gland.

The tool and the image data that were used for its development are now publicly available.

“We have created an automatic, human-in-the-loop segmentation tool for pathologists and radiologists,” says Pinaki Sarder, PhD, corresponding and senior author, and assistant professor of pathology and anatomical sciences.  

The paper’s lead author is Brendon R. Lutnick, a doctoral student in computational cell biology, anatomy and pathology working on his dissertation research under Sarder’s supervision.

System Increases Efficiency With Each Use

Designed with what the researchers call an intuitive interface, the tool automatically improves annotation and segmentation of medical images based on what it “learns” from the way the human user interacts with the system.

“With our system, you don’t have to know any machine learning,” Sarder says. “Now medical professionals can do structure annotation by themselves.”

“The technique empowers medical professionals for the first time to use their own familiar tools, such as a commonly used whole-slide viewer for image annotation, without getting lost in the translation of machine learning jargon,” he adds.

Lutnick explains that the system is designed to improve its performance as it is “trained” on the same dataset.

“You want to train it on your own dataset iteratively,” he says. “This optimizes the workload of the expert annotator as the system becomes more efficient each time you use it.”
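
To make that iterative workflow concrete, the sketch below outlines one way a human-in-the-loop segmentation loop can be organized. It is an illustration only, not the published tool: the names train_model, predict_mask and expert_corrects are hypothetical stand-ins for the network training, prediction and slide-annotation steps described in the article.

    # Illustrative human-in-the-loop segmentation loop (a sketch, not the published tool).
    # The helpers below are hypothetical placeholders for the real training, prediction
    # and annotation-viewer steps.

    def train_model(annotated):
        """Placeholder: retrain the segmentation network on all corrected annotations."""
        return {"examples_seen": len(annotated)}        # stand-in for a trained model

    def predict_mask(model, image):
        """Placeholder: have the current model propose structure boundaries."""
        return f"proposed-boundaries-for-{image}"

    def expert_corrects(image, proposal):
        """Placeholder: the expert redraws only the incorrect boundaries in a slide viewer."""
        return f"corrected-{proposal}"

    def human_in_the_loop(images):
        annotated = []                                  # (image, corrected annotation) pairs
        model = train_model(annotated)                  # initial model knows nothing
        for image in images:
            proposal = predict_mask(model, image)       # machine proposes boundaries
            corrected = expert_corrects(image, proposal)  # expert fixes only the errors
            annotated.append((image, corrected))        # corrections become training data
            model = train_model(annotated)              # model improves before the next image
        return model

    if __name__ == "__main__":
        human_in_the_loop(["slide_001", "slide_002", "slide_003"])

As the proposed boundaries get closer to what the expert would have drawn, each correction takes less time, which is the efficiency gain Lutnick describes.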

More Precise Understanding of Disease State

The system improves iteratively, essentially learning each time the medical professional redraws a boundary on an image to pinpoint a particular structure or abnormality.

The ultimate goal is a more precise understanding of a patient’s disease state.

“When you take a biopsy, you want to figure out the image features and what they tell you about disease progression,” Sarder says.

He explains that, for example, a darker red area on an image of the glomerulus in the kidney — where waste products are filtered from blood — indicates sclerosis, which may signal that the disease has progressed.

The more precisely the boundaries of those areas can be defined, the better the understanding of what stage of disease the patient is in and how it may progress in the future.

“The system performs better each time, so the burden of the human operating the machine is reduced with each iteration,” Lutnick says. “Each time the individual redraws a boundary on a sample, the system is learning. Importantly, this interaction allows the human to understand the weaknesses of the machine as it learns.”

The image shows how the UB tool works when applied to histology image data. The large background image shows a mouse renal tissue section with kidney structures called glomeruli outlined by automatically estimated boundaries, which can be iteratively refined during system training. These glomerular structures change as the disease progresses.

UB and NIH Main Sources of Project Funding

Co-authors from the Jacobs School are:

Other co-authors are from the following institutions:

  • Medical College of Wisconsin
  • University of California, Davis Medical Center
  • Washington University School of Medicine, St. Louis

Funding for the research was provided by the University at Buffalo, including an IMPACT award, and by Sarder’s grant from the National Institute of Diabetes and Digestive and Kidney Diseases.