
Researchers Propose Novel Methodology for Computer-Aided Diagnostic System in Melanoma


In cancer care, early diagnosis and treatment are crucial for the best possible outcomes. This is certainly true of skin cancer, which is the most common type of cancer in the United States and one of the fastest-growing causes of cancer death. A recent study published in Scientific Reports explored a novel deep learning-based, automatic system for skin lesion segmentation to aid in early melanoma diagnosis.

“As equipment and professional human resources are usually not available for each patient to be tested, an automated computer-aided diagnostic (CAD) system is needed to determine skin lesions such as melanoma, nonmelanoma, and benign,” study authors wrote.

Study authors noted that, according to WHO reports, 1 in 3 cancer cases is skin cancer. Given how common skin cancer is, ensuring each patient receives an appropriate diagnosis and care is a tall order. Well-trained, generalized CAD systems have the potential to interpret dermoscopic images and improve the objectivity of that interpretation.

CAD systems for skin cancer typically classify a lesion in 4 main steps: image acquisition, preprocessing, segmentation of the skin tumor, and lesion classification. CAD programs can also track benign lesions so that, with proper care, their progression to malignancy might be prevented.
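To make that 4-step workflow concrete, a minimal sketch of how such a pipeline could be organized is shown below. The function names and placeholder bodies are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch of the 4-step CAD workflow described above; the function
# names and placeholder bodies are assumptions for clarity, not the study's code.
import numpy as np


def preprocess(image: np.ndarray) -> np.ndarray:
    """Step 2: clean the acquired image (e.g., hair removal, resizing)."""
    return image  # a real system would filter and inpaint here


def segment(image: np.ndarray) -> np.ndarray:
    """Step 3: produce a binary lesion mask (a trained model would go here)."""
    return np.zeros(image.shape[:2], dtype=np.uint8)


def classify(image: np.ndarray, mask: np.ndarray) -> str:
    """Step 4: label the segmented lesion (melanoma, nonmelanoma, or benign)."""
    return "benign"  # placeholder decision rule


def run_pipeline(image: np.ndarray) -> str:
    """Step 1 (acquisition) is assumed to have already produced `image`."""
    cleaned = preprocess(image)
    mask = segment(cleaned)
    return classify(cleaned, mask)
```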

There has been significant progress in deep learning systems for skin lesion segmentation in recent years, with the International Skin Imaging Collaboration (ISIC) hosting its first public benchmark competition on dermoscopic image processing in 2016 to push the field forward. Still, current deep learning segmentation methods do not yet reach the benchmark set by the inter-observer agreement of expert dermatologists.

“We suggest a novel deep learning-based, fully automatic approach for skin lesion segmentation, including sophisticated pre and postprocessing approaches,” study authors wrote. “We focus on a successful training approach to manage dermoscopic images under different retrieval environments rather than focusing entirely on deep learning network architecture, making the proposed technique highly scalable.”

Their system involves 3 steps. Preprocessing combines morphological filters with an inpainting algorithm to eliminate unnecessary hair structures from the dermoscopic images; model training uses 3 different semantic segmentation deep neural network architectures to improve accuracy; and postprocessing uses test-time augmentation (TTA) and a conditional random field (CRF) to refine the predicted masks.
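As an illustration of the preprocessing step, the sketch below shows one common way to remove hair artifacts by pairing a morphological black-hat filter with inpainting in OpenCV. The kernel size and threshold are assumed values, and this is not necessarily the exact filter chain the authors used.

```python
# Hedged sketch of hair removal via morphological filtering plus inpainting.
# The kernel size and threshold are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np


def remove_hair(image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # The black-hat transform highlights thin dark structures (hairs) against skin.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the hair response into a binary mask.
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # Fill the masked pixels using surrounding skin texture.
    return cv2.inpaint(image_bgr, hair_mask, 3, cv2.INPAINT_TELEA)
```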

TTA expands the images seen by the model by applying transformations to the initial imaging, such as rotations, flips, and color saturation changes, to measure and improve the model's efficacy. CRF is used to fine-tune rough segmentation results and allows neighboring pixels to be considered for better prediction.
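A minimal sketch of how TTA can be applied at inference appears below, assuming a model that maps an image array to a probability mask; averaging the de-transformed predictions is an illustrative aggregation choice and may differ from the authors' exact scheme.

```python
# Hedged sketch of test-time augmentation (TTA) for a segmentation model.
# `model` is any callable mapping an HxWx3 image to an HxW probability mask;
# averaging over flips and rotations is an assumed aggregation rule.
import numpy as np


def predict_with_tta(model, image: np.ndarray) -> np.ndarray:
    transforms = [
        (lambda x: x, lambda y: y),                             # identity
        (np.fliplr, np.fliplr),                                 # horizontal flip
        (np.flipud, np.flipud),                                 # vertical flip
        (lambda x: np.rot90(x, 1), lambda y: np.rot90(y, -1)),  # 90-degree rotation
    ]
    predictions = []
    for forward, inverse in transforms:
        augmented = forward(image)          # transform the input image
        mask = model(augmented)             # predict on the augmented copy
        predictions.append(inverse(mask))   # undo the transform on the prediction
    return np.mean(predictions, axis=0)     # average into the final soft mask
```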

To mitigate segmentation bias caused by the unbalanced pixel distribution between lesion and background, the authors assessed different loss functions to find one that minimizes bias toward the image background.
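One common imbalance-aware candidate is a soft Jaccard (IoU) loss, sketched below in PyTorch; this is offered as an example of the kind of loss such a comparison would include, not as the specific function the authors selected, and the smoothing constant is an assumed value.

```python
# Hedged sketch of a soft Jaccard (IoU) loss, one common way to counter the
# dominance of background pixels; the smoothing term `eps` is an assumption.
import torch


def soft_jaccard_loss(pred: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """pred: predicted lesion probabilities in [0, 1]; target: binary mask."""
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1) - intersection
    jaccard = (intersection + eps) / (union + eps)
    return 1.0 - jaccard.mean()  # minimizing this maximizes lesion overlap
```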

The deep learning models in the system include U-Net, deep residual U-Net (ResUNet), and improved ResUNet. The system was analyzed using skin lesion datasets from ISIC-2016 and ISIC-2017, and its predicted labels were categorized into false negatives, true negatives, false positives, and true positives to determine performance.
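For context, those pixel-level counts and the Jaccard Index reported in the results that follow can be computed from a predicted mask and its ground truth as in the sketch below; this is the standard definition rather than the study's exact evaluation code.

```python
# Standard pixel-level evaluation for a binary segmentation mask: confusion
# counts and the Jaccard Index (JAC). Textbook definitions, offered for context.
import numpy as np


def evaluate_mask(pred: np.ndarray, truth: np.ndarray) -> dict:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # lesion predicted as lesion
    tn = np.logical_and(~pred, ~truth).sum()  # background predicted as background
    fp = np.logical_and(pred, ~truth).sum()   # background predicted as lesion
    fn = np.logical_and(~pred, truth).sum()   # lesion predicted as background
    jac = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    return {"TP": int(tp), "TN": int(tn), "FP": int(fp),
            "FN": int(fn), "JAC": float(jac)}
```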

When trained on the ISIC-2016 and ISIC-2017 datasets individually, the proposed method achieved an average Jaccard Index (JAC) of 85.96% and 80.05% on each dataset, respectively. When the system was trained on the 2 datasets combined, it achieved an average JAC of 80.73% and 90.02% in the ISIC-2016 and ISIC-2017 datasets, respectively.

There were still failure cases, with 20% of the images in the ISIC-2017 dataset achieving a JAC below 70%. These failures were attributed to low contrast between tumor and surrounding skin, loss of lesion ground truth because masks were not drawn tightly around the lesion, and, in some cases, incorrect annotation of the provided masks.

Even so, the proposed method is competitive with state-of-the-art methods and is highly scalable. Absent the incorrect annotations, the overall JAC index could reach up to 80%, an acceptable level based on the inter-observer agreement of expert dermatologists.

Larger training datasets could help reduce over- and under-segmentation to improve performance even further, and the system itself could be expanded to other biomedical image segmentation challenges. In skin cancer, it could potentially help close gaps in diagnosis and care administration if applied more widely.

“Unlike conventional deep learning-based semantic segmentation methods, the proposed methodology predicts a fine-tuned mask by employing Bayesian learning, leading to the improvement in overall performance of lesion segmentation,” the authors concluded.

Reference

Ashraf H, Waris A, Ghafoor MF, Gilani SO, Niazi IK. Melanoma segmentation using deep learning with test-time augmentations and conditional random fields. Sci Rep. Published online March 10, 2022. doi:10.1038/s41598-022-07885-y
