SSC-SleepNet: A Siamese-Based Automatic Sleep Staging Model with Improved N1 Sleep Detection.

In IEEE Journal of Biomedical and Health Informatics by Songlu Lin, Zhihong Wang, Hans van Gorp, Mengzhu Xu, Merel van Gilst, Sebastiaan Overeem, Jean-Paul Linnartz, Pedro Fonseca, Xi Long

TLDR

  • The study proposes SSC-SleepNet, a novel deep learning algorithm for automatic sleep staging from single-channel EEG signals, which outperforms existing models in both N1 sleep stage detection and overall sleep staging accuracy.

Abstract

Automatic sleep staging from single-channel electroencephalography (EEG) using artificial intelligence (AI) is emerging as an alternative to costly and time-consuming manual scoring of multi-channel polysomnography. However, current AI methods, mainly deep learning models such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, struggle to detect the N1 sleep stage, which is challenging due to its rarity and ambiguous nature compared to other stages. Here we propose SSC-SleepNet, an automatic sleep staging algorithm aimed at improving the learning of N1 sleep. SSC-SleepNet employs a pseudo-Siamese neural network architecture, selected for its strong capability in one- or few-shot learning with a contrastive loss function. SSC-SleepNet consists of two neural network branches: a squeeze-and-excitation residual network branch and a CNN-LSTM branch. These two branches are used to generate latent features of the EEG epoch. The adaptive loss function of SSC-SleepNet uses a weighting factor to combine weighted cross-entropy loss and focal loss, specifically addressing the class imbalance inherent in sleep staging. The proposed loss function dynamically assigns a higher penalty to misclassified N1 sleep stages, improving the model's learning capability for this minority class. Four datasets were used for the sleep staging experiments. On the Sleep-EDF-SC, Sleep-EDF-X, Sleep Heart Health Study, and Haaglanden Medisch Centrum datasets, SSC-SleepNet achieved macro F1-scores of 84.5%, 89.6%, 89.5%, and 85.4% for all sleep stages, and N1 sleep stage F1-scores of 60.2%, 58.3%, 57.8%, and 55.2%, respectively. Our proposed deep learning model outperformed most existing models in automatic sleep staging using single-channel EEG signals. In particular, N1 detection performance was markedly improved compared to state-of-the-art models.
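The adaptive loss is only described qualitatively above. Below is a minimal sketch of the idea in PyTorch, assuming a 5-class label set with N1 at index 1 and illustrative values for the weighting factor `lam`, the focal exponent `gamma`, and the extra N1 penalty `n1_penalty` (none of these values are specified in the abstract):

```python
import torch
import torch.nn.functional as F

N1 = 1  # assumed index of N1 in a (W, N1, N2, N3, REM) label set

def adaptive_loss(logits, targets, class_weights, lam=0.5, gamma=2.0, n1_penalty=2.0):
    """Combine weighted cross-entropy and focal loss; upweight misclassified N1 epochs."""
    # Weighted cross-entropy over the five sleep stages.
    ce = F.cross_entropy(logits, targets, weight=class_weights, reduction="none")
    # Focal loss: down-weight easy examples via the (1 - p_t)^gamma factor.
    p_t = torch.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    focal = (1.0 - p_t) ** gamma * F.cross_entropy(logits, targets, reduction="none")
    # Weighting factor lam blends the two loss terms.
    loss = lam * ce + (1.0 - lam) * focal
    # Dynamically raise the penalty for N1 epochs the model currently gets wrong.
    misclassified_n1 = (targets == N1) & (logits.argmax(dim=1) != N1)
    loss = torch.where(misclassified_n1, n1_penalty * loss, loss)
    return loss.mean()
```

Here the dynamic N1 penalty is realized by rescaling the per-epoch loss wherever the current prediction misses an N1 label; the paper's exact mechanism may differ.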

Overview

  • The study proposes a novel automatic sleep staging algorithm, SSC-SleepNet, for improving the learning of the N1 sleep stage from single-channel EEG signals.
  • SSC-SleepNet employs a pseudo-Siamese neural network architecture with a squeeze-and-excitation residual network branch and a CNN-LSTM branch to generate latent features of EEG epochs (see the sketch after this list).
  • The algorithm uses an adaptive loss function that combines weighted cross-entropy loss and focal loss to address the class imbalance issue inherent in sleep staging, with a higher penalty for misclassified N1 sleep stages.
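As a rough illustration of the two-branch design, here is a hedged PyTorch sketch; the layer sizes, kernel widths, and fusion-by-concatenation step are assumptions, since the paper's exact SE-ResNet and CNN-LSTM configurations are not given in this summary:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: re-weight channels by a learned gating vector."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        s = x.mean(dim=2)                      # squeeze over the time axis
        return x * self.fc(s).unsqueeze(2)     # excite each channel

class TwoBranchEncoder(nn.Module):
    def __init__(self, n_classes=5, feat=64):
        super().__init__()
        # Branch 1: convolutional front-end with squeeze-and-excitation (SE-ResNet-style).
        self.se_branch = nn.Sequential(
            nn.Conv1d(1, feat, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            SEBlock(feat),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # Branch 2: CNN front-end followed by an LSTM over the reduced time axis.
        self.cnn = nn.Sequential(nn.Conv1d(1, feat, kernel_size=7, stride=4, padding=3), nn.ReLU())
        self.lstm = nn.LSTM(feat, feat, batch_first=True)
        # Latent features from both branches are fused for stage classification.
        self.head = nn.Linear(2 * feat, n_classes)

    def forward(self, x):                      # x: (batch, 1, samples) single-channel EEG epoch
        z1 = self.se_branch(x)                 # (batch, feat)
        h = self.cnn(x).transpose(1, 2)        # (batch, time, feat)
        _, (hn, _) = self.lstm(h)
        z2 = hn[-1]                            # (batch, feat)
        return self.head(torch.cat([z1, z2], dim=1))
```

For a 30 s epoch sampled at 100 Hz, `TwoBranchEncoder()(torch.randn(8, 1, 3000))` yields an (8, 5) tensor of stage logits.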

Comparative Analysis & Findings

  • SSC-SleepNet achieved macro F1-scores of 84.5%, 89.6%, 89.5%, and 85.4% for all sleep stages across the four datasets, indicating high accuracy in distinguishing sleep stages.
  • SSC-SleepNet's N1 sleep stage F1-scores are 60.2%, 58.3%, 57.8%, and 55.2% on the four datasets, markedly improving on state-of-the-art models in N1 detection.
  • The results demonstrate that SSC-SleepNet outperforms existing models in automatic sleep staging using single-channel EEG signals, particularly in the detection of N1 sleep stage.

Implications and Future Directions

  • The proposed algorithm could make sleep staging markedly more accurate and efficient, reducing the reliance on costly, time-consuming manual scoring of multi-channel polysomnography when diagnosing sleep disorders.
  • Future research can focus on further improving performance on specific sleep stages, such as N1, and on exploring the algorithm's application to real-world clinical populations with sleep disorders.
  • The algorithm's capability for one- or few-shot learning with a contrastive loss function opens up possibilities for training on limited data or adapting to new datasets.
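For reference, the classic pairwise contrastive loss that Siamese networks are typically trained with looks like the following; whether SSC-SleepNet uses this exact formulation, or what margin it uses, is not stated in this summary:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_stage, margin=1.0):
    """Pull embeddings of same-stage epoch pairs together; push different-stage pairs apart."""
    d = F.pairwise_distance(z1, z2)                             # Euclidean distance per pair
    pos = same_stage.float() * d.pow(2)                         # same stage: minimize distance
    neg = (1 - same_stage.float()) * F.relu(margin - d).pow(2)  # different stage: enforce margin
    return (pos + neg).mean()
```

Same-stage epoch pairs are pulled together in the latent space while different-stage pairs are pushed at least `margin` apart, which is what makes this family of losses data-efficient for rare stages such as N1.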