
Artificial intelligence-driven enhanced skin cancer diagnosis: leveraging convolutional neural networks with discrete wavelet transformation

Abstract

Background

Artificial intelligence (AI) has shown great promise in the field of healthcare as a means of improving the diagnosis of skin cancer. The objective of this research is to enhance the precision and effectiveness of skin cancer identification by the incorporation of convolutional neural networks (CNNs) and discrete wavelet transformation (DWT). Making use of AI-driven techniques has the potential to completely transform the diagnosis process by providing quicker and more accurate evaluations of skin lesions. In an effort to improve dermatology and give physicians reliable resources for early and precise skin cancer diagnosis, this work explores the combination of CNNs with DWT.

Methods

The accurate and timely classification of skin cancer lesions plays a crucial role in early diagnosis and effective treatment. In this work, we propose a novel approach for skin cancer classification using discrete wavelet transformation (DWT). The DWT is employed to extract relevant features from skin lesion images, which are then used to train a classification model. The effectiveness of the suggested approach is assessed by examining a dataset of skin lesion images with known classes (malignant or benign).

Results

The experimental outcomes demonstrate that the proposed model attained a sensitivity of 94% and a specificity of 91%, outperforming artificial neural network (ANN) and multilayer perceptron methods.

Conclusions

The HAM 10000 dataset is employed to explore and evaluate the effectiveness of the proposed model, which achieves improved accuracy compared to existing machine learning algorithms. The results demonstrate the effectiveness of the DWT-based approach in accurately classifying skin cancer lesions, thus aiding in early detection and diagnosis.

Introduction

Skin cancer is the abnormal proliferation of cells in the body that results from deoxyribonucleic acid (DNA) damage [1]. It may develop in any of the three layers of the skin: the hypodermis (innermost layer), the dermis (middle layer), and the epidermis (outermost layer) [2]. Skin tumors are classified as either benign or malignant [3]. Malignant cancer spreads quickly through the skin layers and can be fatal, whereas benign tumors carry a far higher survival rate [4]. Dermatoscopes, which provide illuminated and magnified skin images, are the most effective means of detecting skin cancer; the technique gives a clearer view of the detailed region of interest (ROI) of a lesion by controlling surface reflection [5]. Fortunately, when skin cancer is identified at an early stage, diagnosis and treatment are far more likely to succeed [6].

In recent years, computer-aided diagnosis systems have emerged as valuable tools to assist dermatologists in skin cancer classification [7]. These systems leverage advanced image processing and machine learning techniques to derive significant characteristics from images of skin lesions and subsequently categorize them as either benign or malignant; one such technique that has shown promise is the discrete wavelet transformation (DWT) [8]. Skin cells are harmed both by global climate change and by the sun's ultraviolet (UV) radiation [9]. Skin cancer forms from damaged cells that cannot be healed. Worldwide, skin cancer ranks among the top three deadliest cancers, and skin cancer deaths have increased over the past few decades. Early classification and prediction of the disease are therefore essential for the patient's life. Skin lesion detection can be automated using artificial intelligence approaches [10]. The convolutional neural network (CNN), which is based on deep learning, comes closest to the evidence for automatic skin cancer recognition [11]. In the proposed method, DWT-based feature extraction techniques are used together with a CNN for skin cancer classification [12]. Integrating DWT-based feature extraction with CNNs offers a powerful framework for leveraging multiscale information, reducing dimensionality, enhancing robustness to noise, improving interpretability, enabling transfer learning, and achieving high-performance results in medical image analysis applications.

By leveraging the power of DWT for feature extraction and classification, accurate and timely diagnosis can be achieved, ultimately leading to improved patient outcomes, reduced healthcare costs, and enhanced clinical decision-making [13].

The most difficult problem in medical image processing is the early diagnosis of skin cancer, and numerous methods exist for detecting it. Machine learning techniques like K-Means and Fuzzy C-Means are used to avoid incorrect predictions [14]. To locate the different types of skin cancer, these approaches require the initial clusters as input [15].

Recent years have seen the application of hierarchical approaches, conditional random fields (CRF), Random Forest (RF), and Markov Random Fields (MRF) to detect skin cancer. In real dermatoscopic images, CRF and MRF are the two techniques most frequently used to handle volatile intensity variation [16]. The k-nearest neighbor algorithm is used to find the cancer similarity index, while the RF operates on randomly chosen data points from the decision tree [17]. A detection technique based on K-Means and Particle Swarm Optimization (PSO) was created by Praveen et al., in which identification is performed using k-means clustering combined with the proposed PSO variant as the optimizer [18].

Using KNN (K-Nearest Neighbors) and RF (Random Forest), Murugan et al. implemented the categorization of various forms of skin cancer [19]. This approach, however, performed poorly under varying image illumination and achieved low precision. SVM-based skin cancer detection systems combined with KNN and RF algorithms were developed by Mane et al. [20] and Patel et al. [21]. Most current cancer identification approaches rely on manually crafted features for classification [22]; such features demand additional computation and storage. DWT features are extracted instead to avoid this overhead, and all of the extracted features are then classified using CNN algorithms [1].

Utilizing deep learning for the early identification of skin cancer has shown remarkable success in various computer vision challenges, surpassing the performance of human specialists [1, 22]. As a result, this technology has contributed to lowering the death rates associated with skin cancer. By incorporating efficient formulations into deep learning techniques, it becomes feasible to achieve exceptional cutting-edge processing and categorization accuracy [23,24,25].

In a previous investigation [26], the classification of skin disorders was carried out using an edge detection technique in conjunction with k-NN and CNN algorithms. The outcomes indicated accuracies of 75% and 75.6%, respectively, in discriminating between potentially benign skin conditions and those that might develop into malignant cancer. This categorization was conducted using the dataset of the International Skin Imaging Collaboration (ISIC).

In study [27], a skin disease classification system was created using the ISIC dataset to differentiate between skin cancer and skin benign tumors. The system utilized deep learning with the PNASNet-5-Large architecture, achieving an outstanding performance accuracy of 76%, signifying its effectiveness in accurately identifying and classifying skin diseases.

Additionally, several other studies [28,29,30] have employed CNN for skin disease detection, yielding notable performance accuracies of 80.52%, 86.21%, and 87.25%, respectively. These results demonstrate the effectiveness of CNN-based approaches in accurately identifying and diagnosing various skin diseases.

In order to improve the efficiency of the skin cancer detection system and expand the available data, researchers conducted an ISIC data augmentation process [31]. By augmenting the dataset, they aimed to increase the diversity and quantity of the training data, leading to potential improvements in the system's accuracy and effectiveness in detecting skin cancer. In the research [32], utilizing the CNN approach with random modifiers, an impressive accuracy of 97.49% was attained in effectively differentiating distinct skin disorder lesions, encompassing nevus lesions, carcinomas, and melanomas. Moreover, the study reported that the highest accuracy of 95.91% was attained using the AlexNet architecture, showcasing its efficiency in skin disorder classification.

Furthermore, little research has been published on applying the softmax function with a CNN-based discrete wavelet transform (DWT) method. Hence, this research explores the novelty and potential of employing DWT. Applying the softmax function to CNN-based DWT features contributes to accurate, interpretable, and probabilistic skin cancer classification. By combining the strengths of DWT-enhanced feature extraction with softmax-based probability estimation, the model can provide reliable predictions, facilitate clinical decision-making, and enhance the overall effectiveness of computer-aided diagnostic systems in dermatology. The study focuses on experimentally evaluating the performance measures obtained with the DWT method, contributing to the enhancement of research quality in this domain. The current work uses a CNN-based DWT with a softmax function to classify skin cancer lesions for early detection and diagnosis; the HAM 10000 and ISIC 2018 datasets are used to investigate and assess the performance of the proposed model, which yields higher accuracy than the machine learning algorithms currently in use. The dataset encompasses a range of skin types, captures a variety of lesions across different age groups, and offers insights into dermatological conditions that can aid the development of robust machine learning models for skin cancer classification. Although not fully representative of the global population, the HAM10000 dataset, short for "Human Against Machine with 10000 training images," is widely used and serves as a valuable resource for research in dermatology and computer-aided diagnosis, with the potential to become a benchmark dataset for future comparisons between human and machine performance.

The results demonstrate the effectiveness of the DWT-based approach in accurately classifying skin cancer lesions, thus aiding in early detection and diagnosis.

Related works

In recent years, there has been a notable surge in the application of diverse deep learning algorithms to effectively classify skin cancer. Table 1 outlines the array of methods utilized for predicting cancer, showcasing the advancements made in this field.

Table 1 Dataset, recent methods used with number of classes and size of the data used to detect skin cancer

Efficient screening and forecasting play a pivotal role in increasing the likelihood of administering proper medication and reducing mortality rates associated with skin cancer. Nevertheless, many studies have concentrated on applying deep learning models directly to raw images, disregarding the potential benefits of preprocessed images, which limits the classification network's ability to adapt effectively. Preprocessing, in contrast, enhances the model's adaptability and its potential for more accurate predictions, thereby contributing to improved patient outcomes.

Proposed system

Based on DWT and CNN methodologies, this study proposes an accurate skin cancer detection system. Its five primary stages are the definition of the skin cancer dataset, preprocessing of the cancer images, feature extraction with the DWT, CNN classification, and performance evaluation. Figure 1 shows the overall architecture of the suggested technique, and the following subsections describe it in more detail. The preprocessing steps for the images are as follows:

  1. Resizing and Standardization: Images might be resized to a uniform size to ensure consistency for processing.

  2. Normalization

  3. Data Augmentation

  4. Noise Reduction and Filtering: This could involve denoising algorithms or filters to enhance the clarity of lesions.

  5. Class Balancing: Ensuring that each class (different types of skin lesions) has a balanced representation within the dataset is crucial. Techniques such as oversampling minority classes or applying class weights during training might be used to address class imbalances.
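The first three steps can be illustrated with a minimal sketch. This is not the authors' code: the image size, the [0, 1] normalization range, and the horizontal-flip augmentation are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(img):
    """Scale 8-bit pixel values to the [0, 1] range."""
    return img.astype(float) / 255.0

def augment(img):
    """Simple augmentation: random horizontal flip (one of many possible transforms)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

# A hypothetical lesion image already resized to a uniform shape.
img = rng.integers(0, 256, size=(28, 28, 3)).astype(np.uint8)
x = augment(normalize(img))
```

In a full pipeline the resizing step would run first, so that every image entering `normalize` already has the same dimensions.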

Fig. 1

Overall workflow of CNN with DWT features in deep learning. Combine CNN and DWT features for deep learning with an efficient and powerful image processing workflow

The input images are preprocessed, which reduces the extra noise present in the image. The images are then split for training and testing to fit and evaluate the model. In the training process of convolutional neural networks (CNNs), the feature extractor is learned automatically instead of being implemented manually. The feature extractor in a CNN is composed of specialized neural networks that determine their weights through the training process. This means that instead of hand-crafting specific features, the CNN learns to identify important patterns and features directly from the data during training. As the network is trained on a labelled dataset, it adjusts its internal parameters, allowing it to automatically extract and recognize relevant features from the input data. This capability is one of the key advantages of CNNs, enabling them to perform well in various computer vision tasks such as image classification, object detection, and segmentation.

Discrete wavelet transformation (DWT) and its application to skin lesion images

Definition of skin cancer dataset

The ISIC 2018 dataset is used to construct the proposed skin cancer detection algorithm [44]. In total, 10,015 images of seven cancer types make up this dataset: VASC (vascular lesion), DF (dermatofibroma), BKL (benign keratosis), AKIEC (actinic keratosis), BCC (basal cell carcinoma), NV (melanocytic nevus), and MEL (melanoma). The DWT approach is used to extract the key features from the skin cancer images, and the best features are then fed into the CNN's softmax function, which classifies the different types of skin cancer. The proposed system categorizes the various forms of skin cancer more accurately. The HAM 10000 dataset is used to investigate and assess the performance of the proposed model, which yields greater accuracy than the machine learning algorithms currently in use.

Preprocessing

Images from the dataset are converted to grayscale and processed with a median filter. Working in grayscale simplifies the algorithm and requires less computation. Grayscale is an image representation used in digital photography in which all color has been eliminated: white is the lightest shade of gray and black the darkest, with the brightness of the intermediate shades typically matching the brightness of the primary hues (red, green, and blue).

The median filter is a filtering technique for signal and image noise reduction. Moving pixel by pixel over the image, it replaces each value with the median of the pixels in its immediate neighborhood. Median filtering is a nonlinear method that is often helpful because it preserves the fine details in an image while filtering out the noise.
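The grayscale conversion and median filtering described above can be sketched as follows. This is an illustrative implementation, not the paper's code: the standard luminance weights and the 3 × 3 window are assumptions.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image (H, W, 3) to grayscale using standard luminance weights."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighborhood (edge-padded)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A single salt-and-pepper spike is removed while the flat background is preserved.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0          # impulse noise
den = median_filter(img, size=3)
```

The key property is visible in the example: the isolated 255 spike is replaced by the neighborhood median (10), while uniform regions pass through unchanged.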

Feature extraction from DWT coefficients

Dermatoscopic screening facilitates the identification of particular forms of skin cancer, and the volatile pixel-intensity differences help to identify lesions more clearly. The dataset images are preprocessed using random oversampling and undersampling methods, and the preprocessed images are fed as input to the cancer detection method. Clinically, this identification is carried out via the biopsy procedure, and its accuracy depends on the physician's knowledge; incorrect predictions can result in poor survival. To identify skin cancer, machine learning techniques such as Markov Random Field (MRF), Conditional Random Field (CRF), Particle Swarm Optimization (PSO), K-means, Random Forest (RF), K-Nearest Neighbor (KNN), and Fuzzy C-Means algorithms have been used.
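The random oversampling step mentioned above can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code: minority-class samples are re-drawn with replacement until every class matches the majority count.

```python
import numpy as np

def oversample(labels, rng):
    """Return indices that balance all classes by resampling minority classes with replacement."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(labels == c)
        # Draw extra copies of minority-class indices until this class reaches the target count.
        extra = rng.choice(members, size=target - members.size, replace=True)
        idx.append(np.concatenate([members, extra]))
    return np.concatenate(idx)

rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 0, 1])   # heavily imbalanced toy labels
balanced = oversample(labels, rng)
```

Undersampling works symmetrically by drawing each class down to the minority count instead.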

Classification

Artificial Neural Network (ANN)-derived deep learning models have recently been used to solve categorization problems, with numerous algorithms, including LSTM, RNN, and GAN encoders, applied to this task. CNN technology is a machine learning methodology that dramatically improves the accuracy of cancer detection. In this study, discrete wavelet transformation (DWT)-based features, Low-Low (LL), Low-High (LH), High-Low (HL), and High-High (HH), are extracted from each input image. The CNN algorithm efficiently detects the different types of cancer using the LL feature, which in this procedure retains 50% of the relevant pixels of the original image. The primary concept behind the discrete wavelet transform (DWT) in image processing is to decompose the image into multiple sub-images characterized by different spatial domains and independent frequencies [45]. The proposed approach thus performs the detection procedure with improved efficiency while requiring less storage and computing time.

Extraction of image using DWT feature

DWT is used to process the preprocessed skin cancer images in order to extract cardinal features through dimensionality reduction. The wavelet coefficients are extracted in this study by localizing frequency information with the Haar-based DWT approach. In the Haar DWT, each skin cancer image is first decomposed row-wise into L (low)- and H (high)-frequency subbands using low-pass and high-pass filters. A column-wise decomposition of these two subbands then creates four frequency subbands: Low-Low (LL), Low-High (LH), High-High (HH), and High-Low (HL), as shown in Fig. 2.
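A minimal single-level Haar decomposition matching this description might be sketched as follows. This is illustrative only: the paper does not specify its implementation, and LH/HL naming conventions vary between libraries.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: row-wise L/H split, then column-wise, giving LL, LH, HL, HH."""
    img = img.astype(float)
    # Row-wise decomposition: low-pass = pairwise sum, high-pass = pairwise difference.
    L = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    H = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Column-wise decomposition of each subband yields the four quarter-size subbands.
    LL = (L[0::2, :] + L[1::2, :]) / np.sqrt(2)
    LH = (L[0::2, :] - L[1::2, :]) / np.sqrt(2)
    HL = (H[0::2, :] + H[1::2, :]) / np.sqrt(2)
    HH = (H[0::2, :] - H[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
# Each subband is a quarter of the original image; LL keeps the coarse content at half resolution.
```

On a constant image the three detail subbands (LH, HL, HH) vanish, which is why the LL approximation alone carries most of the information a classifier needs.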

Fig. 2

Image extraction using DWT feature. Extract images utilizing the discrete wavelet transform feature for enhanced analysis and processing

Figure 3 shows the skin cancer images and the DWT feature-extracted images of the seven cancer types. The first row shows the original images, the second row the grayscale images, and the third row the DWT-extracted LL images, for: (a) VASC (vascular lesion), (b) DF (dermatofibroma), (c) BKL (benign keratosis), (d) AKIEC (actinic keratosis), (e) BCC (basal cell carcinoma), (f) NV (melanocytic nevus), and (g) MEL (melanoma).

Fig. 3

Images of skin cancer and DWT feature extracted images of seven cancer types. Skin cancer images and DWT-extracted images representing seven cancer types for analysis

Experimental setup

Implementation details and tools used

The DWT approach is used to extract the key features from the skin cancer images, and the best features are fed into the CNN's softmax function, which classifies the different varieties of skin cancer [46]. The proposed system categorizes the various forms of skin cancer more accurately [47]. The HAM 10000 dataset is used to scrutinize and evaluate the performance of the suggested model, yielding higher accuracy than the presently utilized machine learning algorithms [44].

Evaluation metrics employed for assessing the performance

Several evaluation metrics can be employed to assess the performance of the classification model. These metrics help in quantifying the accuracy, recall, precision, and overall effectiveness of the proposed method.

Accuracy measures the proportion of correctly classified skin cancer lesions out of the total number of lesions in the dataset. It provides an overall measure of how well the classification model performs.

Precision measures the proportion of correctly classified malignant skin cancer lesions out of the total number of lesions predicted as malignant. This demonstrates the model's capability to accurately recognize malignant lesions while avoiding the misclassification of benign lesions as malignant.

Recall (Sensitivity or True Positive Rate) quantifies the ratio of accurately categorized malignant skin cancer lesions out of the total number of malignant lesions in the dataset. It indicates the ability of the model to correctly identify all malignant lesions, minimizing false negatives.

Specificity (True Negative Rate) measures the proportion of correctly classified benign skin cancer lesions out of the total number of benign lesions in the dataset. This highlights the model's capacity to accurately differentiate benign lesions while avoiding the erroneous categorization of malignant lesions as benign.

The F1 score amalgamates precision and recall, presenting them as a unified metric that strikes a balance between the two. Representing the harmonic mean of precision and recall, it accords equal importance to both measures.

These evaluation metrics help in comprehensively assessing the performance of the skin cancer classification model using DWT. They provide insights into the model's accuracy, ability to distinguish between malignant and benign lesions, and its performance across different aspects of the classification task.

Results

Convolutional neural networks (CNNs) are a distinct type of neural network architecture that has demonstrated exceptional performance in tasks such as image recognition and classification [6].

The architecture of convolutional neural networks (CNNs) used in this manuscript involving the HAM10000 dataset for skin lesion classification tasks comprises several layers designed to extract meaningful features from the images.

  1. Input Layer: The input layer receives the image data, usually in the form of pixel values. The size of this layer corresponds to the dimensions of the input images.

  2. Convolutional Layers: These layers consist of multiple filters or kernels that perform convolution operations to extract features from the images.

  3. Pooling Layers: The pooling layers, such as MaxPooling, reduce the spatial dimensions of the feature maps.

  4. Fully Connected (Dense) Layers: These layers consist of neurons that are fully connected, allowing the network to learn complex patterns in the extracted features. The final dense layers typically perform classification based on the learned features.

  5. Output Layer: The output layer usually has neurons corresponding to the number of classes or categories. For skin lesion classification, it might have neurons representing different types of skin conditions. The activation function used here depends on the task; in this work, softmax activation is used.
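The softmax output layer in step 5 can be illustrated in isolation as follows. The logit values are hypothetical; only the seven class names come from the dataset description.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

# Hypothetical logits produced by the final dense layer for one image,
# one per HAM10000 class.
logits = np.array([1.2, 0.3, -0.5, 2.1, 0.0, -1.0, 0.4])
probs = softmax(logits)
classes = ["VASC", "DF", "BKL", "AKIEC", "BCC", "NV", "MEL"]
predicted = classes[int(np.argmax(probs))]
```

Because the outputs sum to one, they can be read as class probabilities, which is what makes the softmax head suitable for the probabilistic classification discussed earlier.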

CNNs have exhibited superior capabilities in distinguishing faces, objects, and traffic signs, even outperforming human abilities. This prowess has led to their integration into applications like robotics and autonomous vehicles.

CNNs are trained through supervised learning, using labeled data containing the relevant classes. Essentially, a CNN comprises two components: the hidden layers, responsible for feature extraction, and the fully connected layers, which perform the final classification once feature extraction concludes. CNNs learn the connection between the input objects and their class labels. Unlike traditional neural networks, a CNN's hidden layers adhere to a specific architecture. In conventional neural networks, each layer is composed of a set of neurons fully connected to the neurons in the layer before it; the arrangement of hidden layers within a CNN is notably different. Neurons in a given layer are only loosely connected to certain neurons in the layer above, rather than fully interconnected. This restriction to local connections, together with additional pooling layers that combine the outputs of neighboring neurons into a single value, yields translation-invariant features. As a result, the model is simplified and training is made easier, as shown in Fig. 4.

Fig. 4

Architecture of convolutional neural network image representing the softmax function. Convolutional neural network architecture visualizing the softmax function for classification tasks in image processing

The effectiveness of the suggested approach is evaluated using the metrics Accuracy, Precision, F1-score, Recall (Sensitivity), and Specificity, by comparing the detected cancer labels with the original cancer labels from the ISIC 2018 dataset. The performance evaluation metrics are given by the following formulas.

Accuracy is calculated by

$${\text{Acc}} = \frac{{\left( {{\text{tp}} + {\text{tn}}} \right)}}{{\left( {{\text{tp}} + {\text{fp}} + {\text{fn}} + {\text{tn}}} \right)}}$$
(1)

Recall (sensitivity) is measured by

$${\text{Sen}} = \frac{{{\text{tp}}}}{{\left( {{\text{tp}} + {\text{fn}}} \right)}}$$
(2)

Specificity is evaluated by

$${\text{Spc}} = \frac{{{\text{tn}}}}{{\left( {{\text{tn}} + {\text{fp}}} \right)}}$$
(3)

The F1-score is formulated by

$$F1 {\text{score}} = \frac{{2{\text{tp}}}}{{\left( {2{\text{tp}} + {\text{fp}} + {\text{fn}}} \right)}}$$
(4)

To evaluate a classifier with formulas (1), (2), (3), and (4), the true class of the objects must be known. The class assigned by the classifier is compared with the actual class to assess the classification quality. This makes it possible to separate the objects into the four groups below:

  1. True positive (TP): the classifier correctly predicts the positive class.

  2. True negative (TN): the classifier correctly predicts the negative class.

  3. False positive (FP): the classifier incorrectly predicts the positive class.

  4. False negative (FN): the classifier incorrectly predicts the negative class.

Statistical values for the classifier can then be computed from the cardinalities of these four subsets.
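Formulas (1)-(4) follow directly from the four counts. The counts in the example below are illustrative, chosen so that sensitivity and specificity match the reported 94% and 91%; they are not taken from the paper's confusion matrix.

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, sensitivity, specificity, and F1 per formulas (1)-(4)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)      # formula (1)
    sensitivity = tp / (tp + fn)                    # formula (2), recall / TPR
    specificity = tn / (tn + fp)                    # formula (3), TNR
    f1 = 2 * tp / (2 * tp + fp + fn)                # formula (4)
    return accuracy, sensitivity, specificity, f1

# Illustrative counts only.
acc, sen, spc, f1 = classification_metrics(tp=94, tn=91, fp=9, fn=6)
```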

Discussion

In this method, the images from the HAM10000 dataset are converted, and the obtained results are depicted in Fig. 5. The recall, accuracy, F1-score, and specificity values obtained after training and testing the images are listed in Table 2, which shows the results on the collected skin cancer dataset. The true positive, true negative, false positive, and false negative counts yield the average specificity and sensitivity of the obtained images. The sensitivity and specificity were compared with those of an ANN and a multilayer perceptron [48] to represent the performance of automated detection, as shown in Table 3; the corresponding accuracy and specificity classification results are shown in Fig. 6.

Fig. 5

Performance measures and values of proposed method. Evaluate proposed method using performance measures and corresponding values for comprehensive assessment

Table 2 Performance measures and values of proposed method
Table 3 Performance of automated detection methods
Fig. 6

Performance of automated detection methods. Evaluate the effectiveness of automated detection methods based on their performance metrics and outcomes

Comparison of the proposed method with existing methods

These methods often incorporate DWT as a feature extraction technique and combine it with various classification algorithms [49]. Commonly used existing methods are Support Vector Machines (SVM), Random Forest (RF), Artificial Neural Networks (ANN), k-Nearest Neighbors (k-NN), Naive Bayes, Decision Trees, Ensemble Methods. These existing methods serve as the foundation for skin cancer classification using DWT [50].

To assess the effectiveness of the suggested method in comparison with existing approaches, a thorough comparison was made. The results, as depicted in Table 4, highlight the superiority of our strategy in terms of performance when compared to other networks. Specifically, the utilization of the inception model in the proposed method led to a remarkable overall accuracy of 94%, surpassing the performance of established models. This outcome emphasizes the viability and effectiveness of our method as a promising solution for the task at hand.

Table 4 Comparison with various methods

Table 5 displays the highest accuracy achieved after fine-tuning with various transfer learning models. On the fine-tuned trained and tested images, the ANN method produces 79.80% accuracy and the multilayer perceptron 70.50%, whereas the softmax-function model attains 94% accuracy.

Table 5 Accuracy results of the proposed CNN model with ANN and multilayer perceptron

The effectiveness of a proposed discrete wavelet transformation (DWT)-based method for skin cancer classification can be evaluated based on several factors [38]. Here are some points to consider when discussing the effectiveness of the proposed DWT-based method: Feature Extraction, Dimensionality Reduction, Classification Accuracy, Robustness, and Generalizability [59]. Analysis of the impact of feature selection techniques and classification algorithms is crucial for skin cancer classification using discrete wavelet transformation (DWT) [60].

Conclusion

Cancer is the term for abnormal cell growth in the body, and the earlier the proposed application is used, the more it will benefit patients. Here, the disease can be diagnosed early with 91% specificity, helping to ascertain the kind of illness a patient has and what treatment is needed. The likelihood of survival increases if melanoma is discovered early, and the disease can be detected with a high accuracy of 94%. Given its pivotal role in skin cancer detection, machine learning is poised to offer valuable contributions to the medical field. In the future, this work might be expanded to help medical departments automate skin cancer diagnosis throughout the eligibility process. The proposed method for skin cancer classification using discrete wavelet transformation (DWT) has several contributions and implications; in conclusion, skin cancer classification using DWT is a valuable research area with significant implications for dermatology. Future research directions and potential improvements can focus on addressing existing limitations and exploring new avenues for advancement.

Comparing an approach to the current state-of-the-art methods in skin cancer detection involves considering several factors, including accuracy, computational efficiency, robustness, and clinical applicability.

Advantages:

  1. Feature Fusion: Integrating DWT-extracted features with CNNs leverages both spatial and frequency domain information.

  2. Interpretability: Frequency-based features allow clinicians to understand which subband characteristics drive lesion classification.

  3. Data Efficiency: DWT features compactly summarize the image, which helps when training data and computational resources are limited.
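The feature-fusion advantage can be sketched as a simple concatenation of a spatial descriptor (e.g. a CNN embedding) with a frequency descriptor (e.g. DWT subband energies) before the final classifier. The vector sizes below are placeholders, not the model's actual dimensions.

```python
import numpy as np

def fuse_features(spatial_vec, frequency_vec):
    """Concatenate a spatial descriptor (e.g. CNN embedding) with a
    frequency descriptor (e.g. DWT subband energies) into one vector
    for the final classifier."""
    return np.concatenate([np.ravel(spatial_vec), np.ravel(frequency_vec)])

# Placeholder descriptors: a 128-d CNN embedding and 4 subband energies.
cnn_embedding = np.zeros(128)
dwt_energies = np.array([2.0, 0.5, 0.4, 0.1])
fused = fuse_features(cnn_embedding, dwt_energies)
print(fused.shape)  # (132,)
```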

Limitations:

  1. Complexity and Interpretability Trade-off: Combining DWT with deep networks increases model complexity, potentially making it harder to interpret the exact features driving classifications.

  2. Dependence on Transformation Quality: The effectiveness of DWT features depends on the quality and appropriateness of the wavelet and decomposition level applied.

  3. Computational Overhead: The additional transformation step increases computational cost and training time, potentially making the approach less feasible for real-time applications.

Further research and improvements for this model

  • Multi-modal Integration

  • Interpretable AI

  • Enhanced Data Collection

  • Transfer Learning and Pretrained Models

  • Personalized Medicine

  • Clinical Validation Studies

Data availability

The HAM10000 dataset is used to investigate and assess the performance of the proposed model, and the ISIC 2018 dataset is used to construct the proposed skin cancer detection algorithm.
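As an illustration of how HAM10000's seven diagnostic codes could be collapsed to the binary malignant/benign task, the grouping below is a common convention and an assumption here; the paper does not state its exact grouping, and actinic keratosis (akiec) in particular is often treated as pre-malignant.

```python
# Hypothetical grouping of the HAM10000 'dx' codes into a binary
# benign/malignant target. The exact grouping used in the paper is an
# assumption; adjust to match the study protocol before reuse.
DX_TO_BINARY = {
    "mel": "malignant",    # melanoma
    "bcc": "malignant",    # basal cell carcinoma
    "akiec": "malignant",  # actinic keratosis / intraepithelial carcinoma
    "nv": "benign",        # melanocytic nevus
    "bkl": "benign",       # benign keratosis
    "df": "benign",        # dermatofibroma
    "vasc": "benign",      # vascular lesion
}

def binary_label(dx_code):
    """Map a HAM10000 diagnosis code (case-insensitive) to benign/malignant."""
    return DX_TO_BINARY[dx_code.lower()]
```

Applied to the `dx` column of the dataset's standard metadata file, this yields the two-class targets for training a benign/malignant classifier.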

Abbreviations

AKIEC: Actinic keratosis
ANN: Artificial neural network
BCC: Basal cell carcinoma
BKL: Benign keratosis
CNN: Convolutional neural network
CRF: Conditional random fields
DF: Dermatofibroma
DNA: Deoxyribonucleic acid
DWT: Discrete wavelet transformation
FP: False positive
FN: False negative
HAM10000: Human Against Machine with 10000 training images
HH: High-high
HL: High-low
ISIC: International Skin Imaging Collaboration
KNN: K-nearest neighbors
LH: Low-high
LL: Low-low
LSTM: Long short-term memory
MEL: Melanoma
MRF: Markov random fields
NV: Melanocytic nevus
PNASNet: Progressive neural architecture search network
PSO: Particle swarm optimization
RF: Random forest
RNN: Recurrent neural network
ROI: Region of interest
SVM: Support vector machine
TP: True positive
TN: True negative
VASC: Vascular lesion

References

  1. Asadi O, Yekkalam A, Manthouri M (2023) MDDC: melanoma detection using discrete wavelet transform and convolutional neural network. J Ambient Intell Humaniz Comput 14(9):12959–12966

  2. Ansari UB, Sarode T (2017) Skin cancer detection using image processing. Int Res J Eng Technol 4(4):2875–2881

  3. Fujisawa Y, Inoue S, Nakamura Y (2019) The possibility of deep learning-based, computer-aided skin tumor classifiers. Front Med 27(6):191

  4. Muthukumar K, Gowthaman P, Venkatachalam M, Saroja M, Pradheep N (2019) GTCM based skin lesion melanoma disease detection approach for optimal classification of medical images. Int J Recent Technol Eng 7(3):1–2

  5. Tschandl P, Rosendahl C, Kittler H (2018) The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci Data 5(1):1–9

  6. Okur E, Turkan M (2018) A survey on automated melanoma detection. Eng Appl Artif Intell 1(73):50–67

  7. Jain S, Singhania U, Tripathy B, Nasr EA, Aboudaif MK, Kamrani AK (2021) Deep learning-based transfer learning for classification of skin cancer. Sensors 21(23):8142

  8. Tembhurne JV, Hebbar N, Patil HY, Diwan T (2023) Skin cancer detection using ensemble of machine learning and deep learning techniques. Multimed Tools Appl 16:1–24

  9. Roy S (2017) Impact of UV radiation on genome stability and human health. In: Ultraviolet light in human health, diseases and environment, pp 207–219

  10. Milton MA (2019) Automated skin lesion classification using ensemble of deep neural networks in isic 2018: skin lesion analysis towards melanoma detection challenge. arXiv preprint arXiv:1901.10802

  11. Vijayalakshmi MM (2019) Melanoma skin cancer detection using image processing and machine learning. Int J Trend Sci Res Dev 3(4):780–784

  12. Ravichandran D, Nimmatoori R, Ahamad MG (2016) Mathematical representations of 1D, 2D and 3D wavelet transform for image coding. Int J Adv Comput Theory Eng 5(3):20–27

  13. Zhang X, Zhao S (2018) Segmentation preprocessing and deep learning based classification of skin lesions. J Med Imaging Health Inf 8(7):1408–1414

  14. Hekler A, Utikal JS, Enk AH, Hauschild A, Weichenthal M, Maron RC, Berking C, Haferkamp S, Klode J, Schadendorf D, Schilling B (2019) Superior skin cancer classification by the combination of human and artificial intelligence. Eur J Cancer 1(120):114–121

  15. Masood A, Ali A-J (2013) Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms. Int J Biomed Imaging 30:2013

  16. Li Y, Li C, Li X, Wang K, Rahaman MM, Sun C, Chen H, Wu X, Zhang H, Wang Q (2022) A comprehensive review of Markov random field and conditional random field approaches in pathology image analysis. Arch Comput Methods Eng 29(1):609–639

  17. Murugan A, Nair SA, Kumar KS (2019) Detection of skin cancer using SVM, random forest and kNN classifiers. J Med Syst 43:1–9

  18. Kaur R, Kumar P, Babbar G (2019) An enhanced and automatic skin cancer detection using K-mean and PSO technique. Int J Innov Technol Explor Eng 8(9):634–639

  19. Shah SA, Ahmed I, Mujtaba G, Kim MH, Kim C, Noh SY (2022) Early detection of melanoma skin cancer using image processing and deep learning. In: Advances in intelligent information hiding and multimedia signal processing: proceeding of the IIH-MSP 2021 & FITAT 2021, Kaohsiung, Taiwan, vol 2. Springer, Singapore, pp 275–284

  20. Mane SS, Shinde SV (2017) Different techniques for skin cancer detection using dermoscopy images. Int J Comput Sci Eng 5(12):165–170

  21. Patel I, Patel S, Patel A (2019) Dermoscopic image classification using image processing technique for melanoma detection. Int J Res Advent Technol 23(2):97–103

  22. Shukla AK, Tripathi D (2020) Detecting biomarkers from microarray data using distributed correlation based gene selection. Genes Genom 42:449–465

  23. Adegun A, Viriri S (2021) Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art. Artif Intell Rev 54:811–841

  24. Iqbal S, Siddiqui GF, Rehman A, Hussain L, Saba T, Tariq U, Abbasi AA (2021) Prostate cancer detection using deep learning and traditional techniques. IEEE Access 8(9):27085–27100

  25. Dildar M, Akram S, Irfan M, Khan HU, Ramzan M, Mahmood AR, Alsaiari SA, Saeed AH, Alraddadi MO, Mahnashi MH (2021) Skin cancer detection: a review using deep learning techniques. Int J Environ Res Public Health 18(10):5479

  26. Savera TR, Suryawan WH, Setiawan AW (2020) Deteksi Dini Kanker Kulit menggunakan K-NN dan convolutional neural network. Jurnal Teknologi Informasi Dan Ilmu Komputer 7(2):373–378

  27. Codella NC, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza SW, Kalloo A, Liopyris K, Mishra N, Kittler H, Halpern A (2018) Skin lesion analysis toward melanoma detection: a challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). IEEE, pp 168–172

  28. Jianu SR, Ichim L, Popescu D (2019) Automatic diagnosis of skin cancer using neural networks. In: 2019 11th International symposium on advanced topics in electrical engineering (ATEE). IEEE, pp 1–4

  29. Deshmukh AA, Wanjale K, Jadhav TA, Khankal DV, Diwate AD, Athawale SV (2023) Multi-class skin diseases classification using hybrid deep convolutional neural network. Int J Intell Syst Appl Eng 11(10s):11–22

  30. Zhang X, Wang S, Liu J, Tao C (2018) Towards improving diagnosis of skin diseases by combining deep neural network and human knowledge. BMC Med Inform Decis Mak 18(2):69–76

  31. Hosny KM, Kassem MA, Foaud MM (2019) Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 14(5):e0217293

  32. Albahar MA (2019) Skin lesion classification using convolutional neural network with novel regularizer. IEEE Access 19(7):38306–38313

  33. Kanani P, Padole M (2019) Deep learning to detect skin cancer using google colab. Int J Eng Adv Technol Regul Issue 8(6):2176–2183

  34. Rajput G, Agrawal S, Raut G, Vishvakarma SK (2022) An accurate and noninvasive skin cancer screening based on imaging technique. Int J Imaging Syst Technol 32(1):354–368

  35. Reis HC, Turk V, Khoshelham K, Kaya S (2022) InSiNet: a deep convolutional approach to skin cancer detection and segmentation. Med Biol Eng Comput 13:1–20

  36. Le DN, Le HX, Ngo LT, Ngo HT (2020) Transfer learning with class-weighted and focal loss function for automatic skin cancer classification. arXiv preprint arXiv:2009.05977

  37. Ali MS, Miah MS, Haque J, Rahman MM, Islam MK (2021) An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models. Mach Learn Appl 15(5):100036

  38. Rahman MM, Nasir MK, Nur A, Khan SI, Band S, Dehzangi I, Beheshti A, Rokny HA (2022) Hybrid feature fusion and machine learning approaches for melanoma skin cancer detection

  39. Guan Q, Wang Y, Ping B, Li D, Du J, Qin Y, Lu H, Wan X, Xiang J (2019) Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study. J Cancer 10(20):4876

  40. Dorj UO, Lee KK, Choi JY, Lee M (2018) The skin cancer classification using deep convolutional neural network. Multimed Tools Appl 77:9909–9924

  41. Ech-Cherif A, Misbhauddin M, Ech-Cherif M (2019) Deep neural network based mobile dermoscopy application for triaging skin cancer detection. In: 2019 2nd international conference on computer applications and information security (ICCAIS). IEEE, pp 1–6

  42. Murugan A, Nair SA, Preethi AA, Kumar KS (2021) Diagnosis of skin cancer using machine learning techniques. Microprocess Microsyst 1(81):103727

  43. Nawaz M, Mehmood Z, Nazir T, Naqvi RA, Rehman A, Iqbal M, Saba T (2022) Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc Res Tech 85(1):339–351

  44. Vidya M, Karki MV (2020) Skin cancer detection using machine learning techniques. In: 2020 IEEE international conference on electronics, computing and communication technologies (CONECCT). IEEE, pp 1–5

  45. Barnouti NH, Sabri ZS, Hameed KL (2018) Digital watermarking based on DWT (discrete wavelet transform) and DCT (discrete cosine transform). Int J Eng Technol 7(4):4825–4829

  46. Harangi B (2018) Skin lesion classification with ensembles of deep convolutional neural networks. J Biomed Inform 1(86):25–32

  47. Kim CI, Hwang SM, Park EB, Won CH, Lee JH (2021) Computer-aided diagnosis algorithm for classification of malignant melanoma using deep neural networks. Sensors 21(16):5551

  48. El-Khatib H, Popescu D, Ichim L (2020) Deep learning-based methods for automatic diagnosis of skin lesions. Sensors 20(6):1753

  49. Ameri A (2020) A deep learning approach to skin cancer detection in dermoscopy images. J Biomed Phys Eng 10(6):801

  50. Sae-Lim W, Wettayaprasit W, Aiyarak P (2019) Convolutional neural networks using MobileNet for skin lesion classification. In: 2019 16th international joint conference on computer science and software engineering (JCSSE). IEEE, pp 242–247

  51. Foahom Gouabou AC, Damoiseaux JL, Monnier J, Iguernaissi R, Moudafi A, Merad D (2021) Ensemble method of convolutional neural networks with directed acyclic graph using dermoscopic images: melanoma detection application. Sensors 21(12):3999

  52. Lopez AR, Giro-i-Nieto X, Burdick J, Marques O (2017) Skin lesion classification from dermoscopic images using deep learning techniques. In: 2017 13th IASTED international conference on biomedical engineering (BioMed). IEEE, pp 49–54

  53. Gouda W, Sama NU, Al-Waakid G, Humayun M, Jhanjhi NZ (2022) Detection of skin cancer based on skin lesion images using deep learning. In: Healthcare 2022, vol 10, No. 7. MDPI, p 1183

  54. Jaiswar S, Kadri M, Gatty V (2015) Skin cancer detection using digital image processing. Int J Sci Eng Res 3(6):138–140

  55. Alom MZ, Aspiras T, Taha TM, Asari VK (2019) Skin cancer segmentation and classification with NABLA-N and inception recurrent residual convolutional networks. arXiv preprint arXiv:1904.11126

  56. Babu GN, Peter VJ (2022) Classification of skin cancer images using discrete wavelet transform features and support vector machine. IJFANS Int J Food Nutr Sci 11:3

  57. Ko LT, Chen JE, Hsin HC, Shieh YS, Sung TY (2012) Haar-wavelet-based just noticeable distortion model for transparent watermark. Math Probl Eng 2012:8

  58. Wu QE, Yu Y, Zhang X (2023) A skin cancer classification method based on discrete wavelet down-sampling feature reconstruction. Electronics 12(9):2103

  59. Thsper P, Singh A (2019) A survey of lesion detection using dermoscopy image analysis. J Gujarat Res Soc 21(6):129–143

  60. Munir K, Elahi H, Ayub A, Frezza F, Rizzi A (2019) Cancer diagnosis using deep learning: a bibliographic review. Cancers 11(9):1235

Funding

No funding was used for this study.

Author information

Authors and Affiliations

Authors

Contributions

AC, MM conceptualized the study and helped in roles/writing—original draft and software; JPD curated the data and supervised the study; AC, MM, JPD helped in formal analysis and validated the study; AC, JPD investigated the study and helped in resources and writing—review & editing; AC helped in methodology; MM, JPD were involved in visualization.

Corresponding author

Correspondence to A. Muthu Manokar.

Ethics declarations

Ethical approval and consent to participate

This study did not involve any research with human or animal subjects. All experiments were performed using the HAM10000 dataset for investigation and the ISIC 2018 dataset for constructing the proposed skin cancer detection algorithm.

Consent for publication

Not applicable. This study did not involve any research with human or animal subjects. All experiments were performed using the HAM10000 dataset for investigation and the ISIC 2018 dataset for constructing the proposed skin cancer detection algorithm.

Competing interests

The authors state that there are no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Claret, S.P.A., Dharmian, J.P. & Manokar, A.M. Artificial intelligence-driven enhanced skin cancer diagnosis: leveraging convolutional neural networks with discrete wavelet transformation. Egypt J Med Hum Genet 25, 50 (2024). https://doi.org/10.1186/s43042-024-00522-5
