
Surface laplacian (spatial filter)

Posted: Tue Aug 02, 2011 9:31 pm
by Rapten
Hi, I wanted to know whether or not to use spatial filters for classifying emotions, or just like/dislike? :? I've seen lots of motor imagery scenarios implementing spatial filters over the motor cortex, but I'm not sure if or how to use one in my case...

Thanks in advance.

Re: Surface laplacian (spatial filter)

Posted: Wed Aug 03, 2011 6:34 am
by jlegeny
Hello Rapten,

The purpose of a spatial filter is to create the best possible linear combination of electrodes in order to obtain a signal with as little noise as possible and to maximize the contribution of each channel to the 'useful' signal. It is thus important to know what you are actually looking for before worrying about spatial filtering.
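As an illustration (a minimal sketch in plain Lua, not taken from OpenViBE; the channel names and the Hjorth-style neighbour weights around C3 are just an example), a spatial filter is nothing more than a weighted sum of channels:

-- Sketch of a surface Laplacian as a linear combination of electrodes.
-- Hypothetical montage: C3 referenced against four of its neighbours.
local weights = { C3 = 1.0, FC3 = -0.25, CP3 = -0.25, C1 = -0.25, C5 = -0.25 }

-- sample: table mapping channel name -> amplitude for one time point
local function laplacian_c3(sample)
  local out = 0
  for channel, w in pairs(weights) do
    out = out + w * sample[channel]
  end
  return out  -- the "virtual", Laplacian-filtered C3 value
end

print(laplacian_c3({ C3 = 12.1, FC3 = 10.0, CP3 = 11.2, C1 = 9.8, C5 = 10.5 }))

In OpenViBE you would typically enter such coefficients in the settings of the Spatial Filter box rather than scripting them by hand.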

Personally I have not looked into emotion detection with EEG, but there seem to be several papers treating this subject. You might want to check them out to see what has already been done: http://scholar.google.com/scholar?q=eeg ... as_sdtp=on

Cheers
Jozef

Re: Surface laplacian (spatial filter)

Posted: Wed Aug 03, 2011 12:14 pm
by Rapten
Hi,
There's another thing I'd like to know about: the stimulation file. All the example scenarios use a *.lua file, and my scenario doesn't have one, so that may be why my classifier does not work.
My scenario is listening to music while acquiring EEG signals, which are written to a file with the GDF or Generic writer. The next step would be to filter the saved data and train the classifier with it. Since I don't have any external stimulation file, how would I set the classifier's Train Trigger? :?


Regards.

Re: Surface laplacian (spatial filter)

Posted: Thu Aug 04, 2011 10:30 am
by ddvlamin
Hi,

In general, the training scenario reads from a file which already includes the stimulations and thus does not need the Lua stimulator scripts. These stimuli should be recorded during the acquisition, as they indicate the times when something interesting can be measured. In fact, if you train a classifier, you almost always want to discriminate between two conditions, in your case between high and low arousal or high and low valence. These two conditions should be contrasted with each other by the way you set up your experiment. This means you will need to record several trials, each trial marked with a stimulus representing its condition. For example, some trials should be marked by the start of a sound that causes high arousal, while other trials correspond to a low-arousal sound (or to images). So you should probably revise your experimental setup.

To come back to your previous question: if two emotional conditions can be discriminated from one another by means of changes in power in certain frequency bands, then yes, CSP may be a good choice; if not, there may be better ways to improve signal quality. CSP, however, maximizes the Kullback-Leibler divergence between the class distributions, so it may also work well in other cases. So, as jlegeny suggested, you should first find out what type of features are typically extracted from the signal in emotion recognition.
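For reference, the standard two-class CSP criterion can be written as a variance-ratio maximization over the band-pass filtered signal. With C_like and C_dislike denoting the class-wise covariance matrices (the class names here simply match this thread), in LaTeX notation:

w^{*} = \arg\max_{w} \frac{w^{\top} C_{like}\, w}{w^{\top} C_{dislike}\, w}

which is solved by the generalized eigenvalue problem C_{like}\, w = \lambda\, C_{dislike}\, w. Under Gaussian assumptions, maximizing this ratio is closely related to the Kullback-Leibler divergence mentioned above.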

Best regards,
Dieter

Re: Surface laplacian (spatial filter)

Posted: Mon Aug 08, 2011 8:49 am
by Rapten
What I thought would be suitable in my case is acquiring recordings for the like/dislike categories of songs and trying to train a classifier on the two sets.
There are a couple of problems that I'm having now:
1. The classifier is not being trained, even with a single recording. What I do is play the liked/disliked songs and save the data in both formats, *.ov & *.gdf, as shown below (just to make sure both Generic and GDF data work; I'm not sure which to choose...)
[attachment: upload1.JPG]
I've also tried a few settings for the Train Trigger, but the file is not generated.

2. How can I train the same classifier with different sets of recordings of varying lengths, depending on song duration? What I did was use one of the classifier examples with readers for the "liked"/"disliked" signals.
[attachment: upload2.JPG]

Re: Surface laplacian (spatial filter)

Posted: Mon Aug 08, 2011 12:52 pm
by lbonnet
Hi Rapten,

From your attachment, I can see one "preliminary" problem in the training scenario.

You seem to record .ov files with the Generic Stream Writer, then read them with the Generic Stream Reader.
To do so, please be sure of the input/output types and order you are using, as the Generic Stream writer/reader are... generic :)
i.e. they just record/read the raw EBML streams (the data container) to/from a file, with no extra intelligence...

In your case you are writing with:
- input 1 : exp information
- input 2 : signal
- input 3 : stims

and you read with:
- output 1 : signal

But in the file... the first stream is "experiment information", and not signal.
I advise you to configure the Generic Stream Reader with the same output types, number and order as in the Writer box.

------------
Regarding the training itself, my advice is to do the training on one file, possibly a concatenation of several sessions (see the Signal Concatenation box).
The file must be tagged with stimulations to categorize the signal according to the classes.

In your case it would be nice to have 2 stimulations in your first file:
- at the start, one stim for "start of song I like"
- at the end, a different stim for "end of song I like"

and in the other file, 2 other stims for the "dislike" counterpart.
You would end up with one concatenated file. A Stream Switch, as used in the scenario motor-imagery-bci-5-replay.xml (in the bci/motor-imagery-CSP folder), could then feed the classifier with the right stream for the right class, according to the stimulations in the file.

The Signal Concatenation box, the Stream Switch, and the scenario I mentioned above are only available in the latest SVN repository.
All of them are currently being integrated into the upcoming 0.11.0 release.
If you are not using the source code version, you will have to wait until I finish the integration & release process :)

I hope this helps
Laurent-

Re: Surface laplacian (spatial filter)

Posted: Mon Aug 08, 2011 3:54 pm
by yrenard
Laurent,

actually, the Generic Stream Reader will try to match streams by type before position, so it will take the first signal stream and discard the other two :)

Yann

Re: Surface laplacian (spatial filter)

Posted: Mon Aug 08, 2011 9:04 pm
by Rapten
Guys, thanks for the quick replies.

I'll make sure to use the same box configuration to write/read the signal, but I'm not sure which format would be relevant in my case. Any suggestions? :)
And about adding the stimulations to the data, how do I do it? I've seen the motor imagery example and its associated *.lua file, where at the end the train trigger is set to "StimulationId_Train", but I don't know how to build such a file myself, where the triggers, as you mentioned, would mark the start and end of a song. In other words, how do I create a stimulation file in my case? :(

Regards

Re: Surface laplacian (spatial filter)

Posted: Tue Aug 09, 2011 9:05 am
by lbonnet
@ Yann: ok... I was totally convinced the reader box would not handle it. But you're right, it just prints a warning in the console. I will do more tests before talking next time :(

Regarding the stimulations, you looked at the right place :)
We use the Lua scripting language with the Lua Stimulator box to design stimulation scenarios.
Lua is a simple language, and the examples provided with OpenViBE should be enough to learn how to make your own script.

Moreover your needs are very simple...
In my opinion, you should program the Lua script to (a sketch follows the list below):
- send a stimulation "start of song I like" (let's say OVTK_StimulationId_Label_01) at time t0 = 5s
- send a stimulation "end of song I like" (let's say OVTK_StimulationId_Label_02) at time t1 = t0 + song duration
- send a stimulation "start of song I dislike" (let's say OVTK_StimulationId_Label_03) at time t2 = t1 + 5s
- send a stimulation "end of song I dislike" (let's say OVTK_StimulationId_Label_04) at time t3 = t2 + song duration

Then you connect the Lua box output to 2 Sound Player boxes, one for each song (only the OGG and WAV formats are accepted, so you may have to convert your song files).
The first song is played when receiving OVTK_StimulationId_Label_01 and stops when receiving OVTK_StimulationId_Label_02. Same thing for the second song (labels 03 & 04).
You connect the Sound Player boxes' outputs to your file writer in order to properly tag the data with the very same stimulations (the Sound Player box outputs the stimulation at the precise moment when the sound is played/stopped, for good synchronization).

Your acquisition session ends with one EEG file, tagged so that you know when you were listening to a song you like or dislike, ready for further classifier training.

Hope this helps!

Laurent

Re: Surface laplacian (spatial filter)

Posted: Tue Aug 09, 2011 9:09 am
by lbonnet
Rapten wrote: "I'll make sure to use the same box configuration to write/read the signal, but I'm not sure which format would be relevant in my case. Any suggestions?"
If you plan to use your files in other software such as MATLAB, you may want to record your data in the GDF format, as the OpenViBE Generic format is only readable with OpenViBE.
But if you do everything with OpenViBE, the Generic format is fine :)

Anyway, you can convert files to/from every supported format with the boxes provided. For example, connect a Generic Stream Reader to a GDF File Writer, press fast-forward, and you're done in a few seconds ;)

Laurent-