Classification and Online Comparison

About the GUI application to design signal processing pipelines
ad3dia
Posts: 10
Joined: Sun Sep 09, 2012 6:53 am

Classification and Online Comparison

Post by ad3dia »

Hello everyone,
I am using the designer for a Motor Imagery scenario similar to the one that uses a CSP filter. I have a few questions bugging me.

So, my training with the Graz visualization produces the CSP filter configuration file (cspconfig.cfg) in about 2 minutes and 1 second.
The classifier trainer produces the classifier.cfg file in about the same time, 2 minutes and 1 second.
The classification performance is 96% with a sigma of 2.055.

First of all, I am assuming this is a great result, right?
I am just confused about why the CSP filter and the classifier finish training in about the same time for every session. Am I doing something wrong here?

Secondly, this classification seems to work decently well with the online scenario with feedback.

My major question is: how can I compare the online signal results with the ones I obtained during training, to see how well the classifier performed?

One method I used was: obtain a new CSP filter from the online signal data, and run the classifier trainer with this new CSP filter and the online signal data. It yielded similar results of 95% performance within 2 minutes.

The second method I used was: run the classifier trainer with the online signal data and the old training CSP filter (cspconfig.cfg). Classifier performance was 69% with sigma = 2.04.

I must be missing something here. There must be a way to compare how well the classification during the training matched the classification during actual online performance.

I would appreciate any light you can shed on this.

Thanks,
Ashish

fabien.lotte
Posts: 112
Joined: Sun Mar 14, 2010 12:58 pm

Re: Classification and Online Comparison

Post by fabien.lotte »

Dear Ashish,

Concerning the training time: I guess the training times are similar because most of the computation time is devoted to reading the EEG data from the file (which is the same file for both the CSP and LDA scenarios); the actual training of CSP and LDA is extremely fast.

Concerning the performance comparisons, there is something important to note here: the performance displayed by the LDA classifier when training it is biased if you used CSP (even if you use cross-validation to estimate the performance of the LDA). Indeed, the CSP spatial filters have been trained on all the data, so even if you do cross-validation with the LDA, part of your signal processing pipeline (in this case the CSP filters) has already seen the testing data, and the overall performance will be overestimated. This is a limitation of the OpenViBE scenarios for CSP-based motor imagery. So you have to treat the performance reported by the LDA trainer as a rough indicator, most probably higher than what you will actually obtain during online classification.

This being said, the really meaningful performance measure is the classification performance obtained online, when using the CSP and LDA trained on the calibration data recorded previously, since this actually assesses whether your BCI can generalize to new, completely unseen EEG data.
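To make the bias concrete, here is a minimal sketch outside of OpenViBE (using MNE-Python and scikit-learn), with placeholder random data standing in for your own band-pass filtered epochs X and labels y. Fitting the CSP on all trials before cross-validating the LDA gives an optimistic number, while re-fitting the CSP inside each fold does not:

import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: in practice, load your own epochs and labels.
# X: band-pass filtered epochs (n_trials, n_channels, n_samples); y: left/right labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 14, 512))
y = np.repeat([0, 1], 20)

# Biased estimate (what the training scenarios effectively do):
# the CSP filters are fitted on ALL trials, including the future test folds.
features = CSP(n_components=6).fit_transform(X, y)
biased = cross_val_score(LinearDiscriminantAnalysis(), features, y, cv=5).mean()

# Unbiased estimate: the CSP is re-fitted inside every cross-validation fold,
# so the test trials are never seen when learning the spatial filters.
pipeline = make_pipeline(CSP(n_components=6), LinearDiscriminantAnalysis())
unbiased = cross_val_score(pipeline, X, y, cv=5).mean()

print("biased: %.2f  unbiased: %.2f" % (biased, unbiased))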

Does it make sense?

Best,
Fabien

ad3dia
Posts: 10
Joined: Sun Sep 09, 2012 6:53 am

Re: Classification and Online Comparison

Post by ad3dia »

Hi Fabien,
Thank you for your quick response! That made some things clear to me. I do have a few more questions, if you don't mind.
These are related to the Motor-Imagery-CSP filter scenario with the Emotiv Epoc headset.
So,
I have to use a feature extractor and a classifier in any BCI scenario to get a reasonable output. According to my understanding, the CSP does the feature extraction and the LDA does the classification, is that right? Can I use just one and not the other, although that does not make sense to me?

I am using the Emotiv Epoc headset. I saw a blog post here saying the surface Laplacian filter does not work for the Emotiv; is that true? That is why I chose the CSP filter, and for the same reason I am testing the Motor-Imagery-CSP filter scenario.

To get the classification accuracy during the online session, I have added a classifier accuracy box which gets stimulations from the classifier processor. Is that a reasonable step? But I don't see any results on it. I guess the Graz visualization box has to stop to get the final results, and I am not sure how to do that.

That is a handful of questions, but hopefully you'll be able to shed some light on them.

Regards,
Ashish

fabien.lotte
Posts: 112
Joined: Sun Mar 14, 2010 12:58 pm

Re: Classification and Online Comparison

Post by fabien.lotte »

ad3dia wrote: Hi Fabien,
Thank you for your quick response! That made some things clear to me. I do have a few more questions, if you don't mind.
These are related to the Motor-Imagery-CSP filter scenario with the Emotiv Epoc headset.
So,
I have to use a feature extractor and a classifier in any BCI scenario to get a reasonable output. According to my understanding, the CSP does the feature extraction and the LDA does the classification, is that right? Can I use just one and not the other, although that does not make sense to me?
Two answers to that:
- The CSP is indeed used for feature extraction: it actually learns some spatial filters that combine the existing channels into new virtual channels that are more discriminative than the original ones, if you use band power features (see the sketch after this list). So you don't have to use CSP (you can build a BCI without spatial filters), but your performance is likely to be much better with the CSP.
- If you use an Emotiv Epoc, I am actually surprised that it works at all. Indeed, there are no sensors over the hand motor area of the brain (notably at positions C3 and C4) in the Epoc cap. Maybe you turned it around?
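To make the first point concrete, here is a minimal sketch (plain NumPy with assumed shapes and placeholder random data, not OpenViBE code) of how CSP spatial filters turn raw channels into band power features:

import numpy as np

def csp_band_power_features(trial, W):
    """trial: one band-pass filtered epoch (n_channels, n_samples);
    W: learned spatial filters (n_filters, n_channels), e.g. what cspconfig.cfg stores."""
    virtual_channels = W @ trial                # each row is a new "virtual channel"
    power = np.var(virtual_channels, axis=1)    # band power ~ variance of the filtered signal
    return np.log(power)                        # log makes the features friendlier for the LDA

# Placeholder usage with random numbers (in practice: your real filters and epochs).
rng = np.random.default_rng(0)
features = csp_band_power_features(rng.standard_normal((14, 512)),
                                   rng.standard_normal((6, 14)))

# Without CSP you would compute the same band power features on the original
# channels directly (W = identity matrix), which is usually less discriminative.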
ad3dia wrote: I am using the Emotiv Epoc headset. I saw a blog post here saying the surface Laplacian filter does not work for the Emotiv; is that true? That is why I chose the CSP filter, and for the same reason I am testing the Motor-Imagery-CSP filter scenario.
Indeed you cannot do a surface Laplacian filter with the Epoc (there are not enough sensors, and their configuration prevents it). You can indeed do CSP, but simply because you can do CSP whatever your sensor number and configuration. Even if you use CSP, you will still have the problem I mentioned above: the Epoc does not have any sensors over the hand motor cortex.
ad3dia wrote: To get the classification accuracy during the online session, I have added a classifier accuracy box which gets stimulations from the classifier processor. Is that a reasonable step? But I don't see any results on it. I guess the Graz visualization box has to stop to get the final results, and I am not sure how to do that.
You have to connect the classifier accuracy box to both the classifier processor output and the target labels, that is, the stimulations from the Graz Motor Imagery BCI Stimulator. Is that what you did?
(see http://openvibe.inria.fr/documentation/ ... asure.html)
Otherwise, the Graz visualization box might be displaying the classification accuracy in the console (click on the small arrow next to the "i" (info) symbol on the bottom left of the designer). I am not 100% sure of that, but you should check it.

Hope this helps,
Best,
Fabien

ad3dia
Posts: 10
Joined: Sun Sep 09, 2012 6:53 am

Re: Classification and Online Comparison

Post by ad3dia »

Hi Fabien,
Thanks for the insight again. Much appreciated!

I indeed got the classifier accuracy box to work.
As for the Emotiv Epoc and motor imagery, I am using the F3 and F4 channels, which lie slightly in front of C3 and C4 in the 10-20 system. I have to slide the headset back a little to place those electrodes over the motor cortex. It's a bit uncomfortable, to be frank. Have you heard of any better way to do this than my method? The Emotiv Epoc is the only headset I have to work with right now, unfortunately!

I tried two methods; both had 10 training trials.
I) I tried using all the channels with the headset in its original position. With a CSP filter of 6 dimensions it gave me a training classification accuracy of 95%, but an online accuracy of around 40% on average.

II) By sliding the headset back and using only F3 and F4, the training classification dropped to 60% and the online accuracy was around 55%. This time I used a CSP filter of 4 dimensions.
Both methods used signals filtered to 8-30 Hz.

I was just curious:
1. How is the number of dimensions of the CSP filter related to the choice of channels, or is it not?
2. Is it fruitless to try any motor imagery experiment on the Emotiv Epoc headset, given its configuration?

3. Can I train the BCI with the Graz visualizer for the right/left classes and use that training data in my own virtual reality environment, or can I only use it with the online Graz visualizer? If I use the training from the Graz visualizer in my own VR application, what are some of the steps I should be aware of?



Regards,
Ashish

fabien.lotte
Posts: 112
Joined: Sun Mar 14, 2010 12:58 pm

Re: Classification and Online Comparison

Post by fabien.lotte »

ad3dia wrote: 1. How is the number of dimensions of the CSP filter related to the choice of channels, or is it not?
It is not really related: each CSP filter is a linear combination of all channels. It has been shown empirically that using 6 filters is a good default value. You can try to use more, but I wouldn't expect much improvement; most of the relevant information is usually contained in the first filters. Using fewer than 6 may decrease your performance, though.
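For the curious, here is a rough sketch of the textbook CSP computation (plain NumPy/SciPy with assumed trial shapes and placeholder data, not the OpenViBE implementation). It shows that the "dimension" is simply the number of eigenvectors you keep (at most the number of channels), and that every filter combines all channels:

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_a / trials_b: lists of band-pass filtered epochs for each class,
    each epoch shaped (n_channels, n_samples). Returns (n_filters, n_channels)."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    # Keep the filters with the most extreme eigenvalues (half from each end):
    # they maximize the variance of one class relative to the other.
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, picks].T

# Placeholder usage: 20 random trials per class, 14 channels, 512 samples each.
rng = np.random.default_rng(0)
left = [rng.standard_normal((14, 512)) for _ in range(20)]
right = [rng.standard_normal((14, 512)) for _ in range(20)]
W = csp_filters(left, right, n_filters=6)   # shape (6, 14): 6 filters over all 14 channels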
ad3dia wrote: 2. Is it fruitless to try any motor imagery experiment on the Emotiv Epoc headset, given its configuration?
I guess it is indeed rather tough. I have never tried it myself, and I've never heard of anyone able to do proper motor imagery with an Emotiv Epoc. The Epoc is more suitable for BCIs based on P300 or, even better, on SSVEP (you could try the SSVEP scenario that comes with OpenViBE to see if it works).
ad3dia wrote: 3. Can I train the BCI with the Graz visualizer for the right/left classes and use that training data in my own virtual reality environment, or can I only use it with the online Graz visualizer? If I use the training from the Graz visualizer in my own VR application, what are some of the steps I should be aware of?
Yes, you can train a motor imagery-based BCI with the Graz visualizer and then use the resulting BCI in another virtual environment. We have actually done that several times (check, for instance, the handball scenario that comes with OpenViBE). Once your motor imagery-based BCI is trained, you can connect it to a VR application using VRPN (there are dedicated sections and tutorials about that on the OpenViBE website).

Hope this helps,

Best regards,
Fabien

uahmed
Posts: 49
Joined: Fri Dec 21, 2012 12:43 pm
Contact:

Re: Classification and Online Comparison

Post by uahmed »

Hi ad3dia,
I am having a problem with the stimulations: I keep receiving stimulations continuously. I checked that via the stimulation listener too.

Thanks

jlegeny
Posts: 239
Joined: Tue Nov 02, 2010 8:51 am
Location: Mensia Technologies Paris FR
Contact:

Re: Classification and Online Comparison

Post by jlegeny »

Hello uahmed

I will detail the process of using the motor imagery scenario and will address your problem at the end of this post.

The CSP is only a spatial filter which finds the best linear combination of electrodes to maximize the difference between target and non-target stimulations for a given frequency band. This scenario does not do any classification.

Usually a BCI experiment (I assume from your previous messages that you are using motor imagery as a paradigm) goes like this:

1. Acquire training data in a training session
- motor-imagery-bci-1-acquisition.xml
- for motor imagery you will need around 20 trials for each hand
- also, please note that it is very important to do the imagery task correctly; for the first trials I recommend doing an actual motor task, i.e. moving the hand
2. Train the CSP scenario
- motor-imagery-bci-2-train-CSP.xml
- this scenario trains the CSP spatial filter
- just load the previously recorded file into the *Generic stream reader* and run the scenario
3. Train the classifier
- motor-imagery-bci-3-classifier-trainer.xml
- This trains the actual classifier used to determine right or left hand movement
- use the same file you have acquired previously in step 1
4. Use the online scenario
- motor-imagery-bci-4-online.xml
- This scenario will output a value estimating the likelihood that the user is imagining right or left hand movement. The classifier actually outputs the distance to the LDA classification hyperplane, which means that negative values represent left motor imagery and positive values represent right motor imagery.

Note that the online scenario does not normalize the output, so the values have no upper bound. Also, one value is output every 0.0625 s. In order to get a realistic command you will need to do some arithmetic on the results: averaging the output over a period of time, or requiring several consecutive values on the same side of 0, are some of the commonly used approaches (a small sketch follows).
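As an illustration only (hypothetical thresholds, not OpenViBE code), here is a small sketch of both approaches, assuming your application receives one classifier value every 0.0625 s (for example over VRPN):

from collections import deque

WINDOW = 16        # 16 values = 1 second, at one output every 0.0625 s
CONSECUTIVE = 8    # hypothetical: how many same-sign outputs we require in a row

window = deque(maxlen=WINDOW)
streak_sign, streak_len = 0, 0

def on_classifier_output(value):
    """Call this with every value received from the online scenario."""
    global streak_sign, streak_len

    # Approach 1: average the (unbounded) hyperplane distance over a sliding window
    window.append(value)
    mean = sum(window) / len(window)

    # Approach 2: count consecutive values on the same side of 0
    sign = 1 if value > 0 else -1
    streak_len = streak_len + 1 if sign == streak_sign else 1
    streak_sign = sign

    if len(window) == WINDOW and streak_len >= CONSECUTIVE:
        return "right" if mean > 0 else "left"   # confident enough: issue a command
    return None                                  # otherwise, wait for more data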


I hope this helps
Sorry for the late reply,
Jozef

uahmed
Posts: 49
Joined: Fri Dec 21, 2012 12:43 pm
Contact:

Re: Classification and Online Comparison

Post by uahmed »

Thanks for the help... but I had just done it 20 hours before :) Thanks a lot once again :)

uahmed
Posts: 49
Joined: Fri Dec 21, 2012 12:43 pm
Contact:

Re: Classification and Online Comparison

Post by uahmed »

The CSP scenario is not suitable for the Emotiv, I guess. I used only the classifier trainer, and it worked fine then.

kiyarash
Posts: 27
Joined: Tue Apr 11, 2017 10:44 am

Re: Classification and Online Comparison

Post by kiyarash »

ad3dia wrote: 2. Is it fruitless to try any motor imagery experiment on the Emotiv Epoc headset, given its configuration?
Hi, I have the same low online accuracy as you. Were you eventually able to get good online accuracy using the Emotiv Epoc+ in the MI CSP scenario?

If yes, would you share your experience? (How did you tilt the headset, or any other change that made things better?)

Best regards,
Kiarash
