
Optimized Deep Learning Model Classifies Smartphone Images of Malignant Skin Lesions


Across the model classes tested, the researchers identified a model that, when paired with appropriate data augmentation and an appropriate optimizer, may aid in the detection of skin cancer.

Using an accessible deep learning (DL) system to classify smartphone images of skin lesions may improve early detection of skin cancer, according to the researchers of an experimental study published in the International Journal of Intelligent Systems.1

Offline data augmentation can significantly enhance deep learning techniques, improving classification performance compared to models trained solely on original clinician skin images. | Image Credit: Alexander - stock.adobe.com


By using clinical images rather than the more commonly used dermoscopic images, the researchers said they were able to provide a more representative and more inclusive look at real-world skin lesions. These images may also help overcome certain barriers that limit current DL algorithms.

“Many cancer detection systems use deep learning (DL) algorithms for skin lesion diagnosis by proffering predictive decisions on skin cancer categories,” wrote the researchers. “However, some factors limit the universal acceptability of DL algorithms in some medical applications. These factors include health data poverty, lack of fairness due to model bias, and lack of DL model interpretability (black-box models).”

Improvements have been made in DL models through techniques developed for diagnosis based on dermoscopic images, though challenges remain in deploying these models on certain devices, such as mobile devices, because of spatial variations in the images. This challenge has opened up the potential for data collection using smartphone cameras, which introduces variation in image quality, the researchers explained.

The researchers compared 13 different DL models across 4 neural network classes: DenseNet, ResNet, MobileNet, and EfficientNet. Each model was assessed with 3 variants of the Adam optimizer (Rectified Adam [RAdam], AdamW, and Adam), a common algorithm used to optimize model parameters during neural network training.2
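For readers who want a concrete sense of what such a comparison involves, the sketch below pairs pretrained torchvision backbones from each of the 4 classes with the 3 Adam variants in PyTorch. This is an illustrative assumption, not the authors' code; the specific model sizes (eg, ResNet50, EfficientNet-B0), learning rates, and weight decay values are placeholders rather than settings reported in the study.

```python
# Minimal sketch (not the study's code) of pairing torchvision backbones
# with Adam-variant optimizers; hyperparameters here are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # binary malignant-vs-benign setting

def build_model(name: str) -> nn.Module:
    """Load a pretrained backbone and replace its classifier head."""
    if name == "densenet161":
        m = models.densenet161(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif name == "resnet50":
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "mobilenet_v2":
        m = models.mobilenet_v2(weights="DEFAULT")
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    elif name == "efficientnet_b0":
        m = models.efficientnet_b0(weights="DEFAULT")
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

def build_optimizer(variant: str, model: nn.Module, lr: float = 1e-4):
    """Return one of the 3 Adam variants compared in the study."""
    if variant == "adam":
        return torch.optim.Adam(model.parameters(), lr=lr)
    if variant == "adamw":
        return torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=1e-2)
    if variant == "radam":
        return torch.optim.RAdam(model.parameters(), lr=lr)
    raise ValueError(f"unknown optimizer variant: {variant}")

model = build_model("densenet161")
optimizer = build_optimizer("adam", model)
```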

From there, the performance of the top 4 models was assessed in the context of 5 data augmentation (DA) schemes, which included cropping (DA-1), adjusting contrast and other lighting features (DA-2), flipping the images horizontally (DA-3), rotating the image (DA-4), and a combination of the measures (DA-5).1
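The 5 schemes can be approximated with standard torchvision transforms, as in the sketch below. The crop scale, jitter strength, and rotation angle shown are assumptions for illustration; note also that the study used offline augmentation (expanding the dataset before training), whereas this sketch applies the transforms on the fly.

```python
# Illustrative sketch of the 5 augmentation schemes; exact parameter values
# are assumptions, not the study's settings.
from torchvision import transforms

IMG_SIZE = 224  # assumed input resolution

base = [transforms.Resize((IMG_SIZE, IMG_SIZE)), transforms.ToTensor()]

da_schemes = {
    "DA-1": [transforms.RandomResizedCrop(IMG_SIZE, scale=(0.8, 1.0))],  # cropping
    "DA-2": [transforms.ColorJitter(brightness=0.2, contrast=0.2)],      # contrast/lighting
    "DA-3": [transforms.RandomHorizontalFlip(p=0.5)],                    # horizontal flip
    "DA-4": [transforms.RandomRotation(degrees=30)],                     # rotation
}
# DA-5: a combination of the individual measures
da_schemes["DA-5"] = (da_schemes["DA-1"] + da_schemes["DA-2"]
                      + da_schemes["DA-3"] + da_schemes["DA-4"])

def build_transform(scheme: str) -> transforms.Compose:
    """Compose a DA scheme with the base resize/tensor steps."""
    return transforms.Compose(da_schemes[scheme] + base)

train_tf = build_transform("DA-3")  # eg, the flip scheme paired with RAdam
```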

Throughout the analysis, DenseNet showed the best performance when trained with the DA schemes and optimized with the appropriate Adam variant. In the binary classification, all versions of DenseNet outperformed the other models.

In particular, DenseNet161 yielded a binary accuracy of 78.4%. This accuracy improved by a percentage point (to 79.4%) when training with the cropping scheme (DA-1) and the Adam optimizer. The best accuracies among the remaining 3 model classes were 73.3% for ResNet, 68.1% for MobileNet, and 72% for EfficientNet.

The Adam-optimized models focused on irregular borders and pigmentation to predict malignancy, whereas they relied on smoother areas and more uniform texture and coloration for benign predictions. Incorrect predictions stemmed from the model highlighting noncritical regions of the images.
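The report does not spell out the exact saliency method behind these region-level observations; one common way to produce such explanations is a Grad-CAM-style heatmap, sketched below as an assumption rather than the authors' implementation. The target layer in the usage comment (the final feature block of a DenseNet) is likewise hypothetical.

```python
# Minimal Grad-CAM-style sketch for visualizing which image regions drive a
# prediction; treat this as one common approach, not the study's method.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a normalized heatmap (H, W) for the predicted or given class."""
    activations, gradients = [], []

    fwd = target_layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image.unsqueeze(0))          # (1, num_classes)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    fwd.remove(); bwd.remove()

    acts, grads = activations[0], gradients[0]      # each (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # channel-wise importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Hypothetical usage: heatmap = grad_cam(model, img_tensor, model.features[-1])
```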

Insights from the multiclass classification analysis were consistent with the binary classification, with DenseNet exceeding the accuracy seen with the other classes. A multiclass accuracy of 75.07% was observed when leveraging the DA-3 scheme and RAdam optimization.

The RAdam-optimized model used distinctive asymmetry and color variations to distinguish lesion types such as actinic keratosis and basal cell carcinoma. Incorrect predictions, particularly melanocytic nevus misclassified as malignant melanoma, were attributed to emphasis placed on features not indicative of malignant lesions.

“This study indicates that offline DA can significantly enhance DL techniques, improving classification performance compared to models trained solely on original clinician skin images,” wrote the researchers. “This indicates that employing DA techniques, factoring in spatial transformations or combinatorial effects of different DA schemes, aided the DL model in effectively generalizing to new testing examples.” 

References

  1. Oyedeji MO, Okafor E, Samma H, Alfarraj M. Interpretable deep learning for classifying skin lesions. Int J Intell Syst. Published online April 29, 2025. doi:10.1155/int/2751767
  2. What is Adam optimizer? Analytics Vidhya. Updated April 23, 2025. Accessed May 12, 2025. https://www.analyticsvidhya.com/blog/2023/09/what-is-adam-optimizer/