Project: Multi-Channel EEG


A multivariate time series (MTS) is a collection of data obtained by monitoring a set of temporally related or interrelated variables at successive, uniformly spaced time instants. Given a set of MTS, classification assigns each series to a known class, while clustering discovers inherent groupings of the data based on how similar or dissimilar the time series are to each other.
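As a minimal sketch, a set of MTS can be represented as a three-dimensional array (trials × variables × time), with a simple distance serving as one possible dissimilarity measure. The array sizes and the flattened Euclidean distance below are illustrative assumptions, not a specific method from the papers listed here:

```python
import numpy as np

# Hypothetical example: 3 trials, 4 variables (e.g. channels),
# 100 uniformly spaced samples per variable.
rng = np.random.default_rng(0)
mts = rng.standard_normal((3, 4, 100))  # (trials, variables, time)

def euclidean_dissimilarity(a, b):
    """Flattened Euclidean distance between two multivariate series:
    one simple (alignment-sensitive) notion of MTS dissimilarity."""
    return float(np.linalg.norm(a - b))

d01 = euclidean_dissimilarity(mts[0], mts[1])
d02 = euclidean_dissimilarity(mts[0], mts[2])
```

More robust choices (e.g. dynamic time warping generalized to multiple variables) follow the same pattern: a pairwise dissimilarity over whole series, fed to a classifier or a clustering algorithm.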

Published papers related to Multi-Channel EEG Data Classification

  1. Fuzzy Feature Extraction for Multi-Channel EEG Classification
    • EEG signals are usually collected as multi-channel data by placing multiple electrodes at various positions on the scalp. Since many channels are recorded for every single trial, the multi-channel EEG classification problem can be treated as a multivariate time series classification problem.
    • For multi-channel EEG data to be classified more accurately, we propose an algorithm called the Fuzzy Multi-Channel EEG Classifier (FMCEC). When constructing a classifier, this algorithm takes into consideration the interaction among signals collected at different time instants and at different locations on the scalp. FMCEC first pre-processes raw EEG data by eliminating noise through discretization. It then fuzzifies the discretized data to capture imprecision and vagueness in the data. Given the fuzzified data, FMCEC discovers intra-channel patterns within each channel and inter-channel patterns between different channels of EEG signals. The discovered patterns, represented as fuzzy temporal patterns, are then used to characterize and differentiate between classes of multi-channel EEG data. To evaluate its effectiveness, we tested FMCEC with several real EEG datasets. The results show that the algorithm is a promising tool for the classification of multi-channel EEG data.
  2. What Strikes the Strings of Your Heart? – Multi-Label Dimensionality Reduction for Music Emotion Analysis via Brain Imaging
    • After twenty years of extensive study in psychology, some musical factors have been identified that can evoke certain kinds of emotions. However, the underlying mechanism relating music to emotion remains unclear. This paper aims to find the genuine correlates of musical emotion through a systematic and quantitative framework. The task is formulated as a dimensionality reduction problem, which seeks a complete and compact feature set with intrinsic correlates for the given objectives.
    • Since a song generally elicits more than one emotion, we explore dimensionality reduction techniques for multi-label classification. One challenging problem is that hard labels cannot represent the intensity of an emotion, and it is also difficult to ask subjects to quantify their feelings. This work uses EEG signals to address this challenge. A learning scheme called EEG-based emotion smoothing (E2S) and a bilinear multi-emotion similarity preserving embedding (BME-SPE) algorithm are proposed. We validate the effectiveness of the proposed framework on the standard CAL-500 dataset. Several influential correlates have been identified, and classification via those correlates achieves good performance. We also build a Chinese music dataset based on the identified correlates and find that music from different cultures may share similar emotions.
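The discretize-then-fuzzify pipeline described for FMCEC (item 1) can be sketched on a single toy channel. The bin count, the triangular membership functions, and the use of symbol bigrams as "intra-channel patterns" below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def discretize(signal, n_bins=3):
    """Equal-width binning: maps each sample to a symbol 0..n_bins-1,
    which also smooths away small-amplitude noise."""
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)
    return np.clip(np.digitize(signal, edges[1:-1]), 0, n_bins - 1)

def triangular_membership(x, centers):
    """Fuzzy membership of each sample in each linguistic term
    (e.g. low/medium/high), via triangular membership functions."""
    x = np.asarray(x, dtype=float)[:, None]
    c = np.asarray(centers, dtype=float)[None, :]
    width = (centers[-1] - centers[0]) / (len(centers) - 1)
    return np.clip(1.0 - np.abs(x - c) / width, 0.0, 1.0)

def bigram_counts(symbols, n_bins=3):
    """Counts of consecutive symbol pairs: a toy stand-in for
    intra-channel temporal patterns."""
    counts = np.zeros((n_bins, n_bins), dtype=int)
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts

# Hypothetical single-channel signal: a slow wave plus noise.
rng = np.random.default_rng(1)
channel = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
symbols = discretize(channel)
memberships = triangular_membership(symbols, centers=np.array([0.0, 1.0, 2.0]))
patterns = bigram_counts(symbols)
```

Inter-channel patterns would follow the same idea across pairs of channels; FMCEC's actual fuzzy temporal patterns are defined in the paper itself.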
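The label-softening idea motivating E2S (item 2), namely replacing hard emotion labels with continuous intensities derived from a side signal, can be sketched generically. The blending rule and the intensity scores below are assumptions for illustration only, not the authors' E2S algorithm:

```python
import numpy as np

def smooth_labels(hard_labels, intensity, alpha=0.5):
    """Blend binary per-song emotion labels with a per-song intensity
    score in [0, 1] (hypothetically derived from EEG features), so that
    positive labels carry a graded strength instead of a hard 1."""
    hard = np.asarray(hard_labels, dtype=float)
    inten = np.asarray(intensity, dtype=float)[:, None]
    return (1.0 - alpha) * hard + alpha * hard * inten

hard = np.array([[1, 0, 1],
                 [0, 1, 1]])       # songs x emotions, hard labels
intensity = np.array([0.8, 0.3])   # hypothetical normalized EEG intensity
soft = smooth_labels(hard, intensity)
```

The soft labels keep zeros at zero and scale the positive entries, which is one simple way to express "how strongly" a song evokes each labeled emotion.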
