
Abstract

In the diagnosis of skin cancer, dermoscopy images must be evaluated for feature extraction. In this paper, the authors design a new method that improves skin dermoscopy image segmentation accuracy by eliminating artifacts and hair using photometric quasi-invariants, followed by a unique approach to skin lesion segmentation using histogram-based feature fusion. Histogram features are used to extract data range and mean images, which are fused with the artifact- and hair-free image to generate the final image. The authors use the PH2 dataset for the proposed segmentation method because it contains the ground truth for each image. In addition, they use the Skin_Hair dataset, which includes artificially generated hair with its corresponding ground truth. Some PH2 dataset images also contain hair artifacts that the proposed pre-processing method can remove. According to the experimental results, the proposed method outperforms existing methods in three aspects, accuracy, efficiency, and robustness, measured by Accuracy (Acc), Precision (Pre), Sensitivity (Sen), Specificity (Spe), Jaccard Index (JI), and Dice (D). The proposed method achieved an average Acc of 96.14%, Pre of 93.87%, Sen of 94.49%, Spe of 95.99%, JI of 88.19%, and D of 94.21%. The Spe of 95.99% is about 3.2% higher than that of the top-performing methods, the JI of 88.19% is about 1.5% higher, and D reaches 94.21%. These findings suggest that the proposed methodology can provide a more effective and accurate way of detecting skin cancer.
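
For reference, the evaluation metrics reported above (Acc, Pre, Sen, Spe, JI, and D) are standard pixel-wise measures for binary segmentation masks. The following is a minimal NumPy sketch of how they can be computed from a predicted mask and its ground truth; the function name and small epsilon guard are illustrative assumptions and not part of the published implementation.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Compute Acc, Pre, Sen, Spe, Jaccard Index (JI) and Dice (D)
    for two binary masks of the same shape (True = lesion pixel)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    tp = np.logical_and(pred, gt).sum()    # lesion pixels correctly detected
    tn = np.logical_and(~pred, ~gt).sum()  # background pixels correctly detected
    fp = np.logical_and(pred, ~gt).sum()   # background predicted as lesion
    fn = np.logical_and(~pred, gt).sum()   # lesion predicted as background

    eps = 1e-12  # guards against division by zero on empty masks (assumption)
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn + eps),
        "Pre": tp / (tp + fp + eps),
        "Sen": tp / (tp + fn + eps),
        "Spe": tn / (tn + fp + eps),
        "JI":  tp / (tp + fp + fn + eps),
        "D":   2 * tp / (2 * tp + fp + fn + eps),
    }

# Example usage: compare a predicted lesion mask against a PH2 ground-truth mask
# (both loaded elsewhere as 2-D binary arrays of identical shape):
#   metrics = segmentation_metrics(predicted_mask, ground_truth_mask)
#   print({k: round(100 * v, 2) for k, v in metrics.items()})
```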

Keywords

Artifacts and hair removal, Dermoscopic, Feature fusion, Segmentation, Skin cancer

Subject Area

Computer Science

Article Type

Article

First Page

2824

Last Page

2837

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.
