Multiclass classifier
Hi,
Continuing my work, I now see the need for at least three classes, namely like, dislike, and neutral. I did see the option to add classes to the classifier trainer, but I don't know how the classifier processor would deal with it, since it only has two options available. Any suggestions?
Regards.
Re: Multiclass classifier
Dear Rapten,
At the moment, there is no multiclass classifier ready for OpenViBE. I suggest that you use two classifiers: one that separates rest vs. {like or dislike}, and then, if the result is not rest, another that separates like vs. dislike.
Alternatively, you could contribute a multiclass classifier yourself.
Hope this helps,
Yann
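Yann's two-stage scheme can be sketched as follows (a toy illustration, not OpenViBE code; the classifier functions are hypothetical stand-ins for trained models that map a sample to a class name):

```python
def classify_hierarchical(sample, rest_vs_active, like_vs_dislike):
    """Two-stage decision: first rest vs {like, dislike}, then like vs dislike."""
    if rest_vs_active(sample) == "rest":
        return "rest"
    return like_vs_dislike(sample)  # "like" or "dislike"

# Toy stand-ins for trained classifiers: here a "sample" is a single number.
rest_vs_active = lambda x: "rest" if abs(x) < 0.5 else "active"
like_vs_dislike = lambda x: "like" if x > 0 else "dislike"

print(classify_hierarchical(0.1, rest_vs_active, like_vs_dislike))   # rest
print(classify_hierarchical(0.9, rest_vs_active, like_vs_dislike))   # like
print(classify_hierarchical(-0.9, rest_vs_active, like_vs_dislike))  # dislike
```

Note that the second classifier is only consulted when the first one rules out rest, so its training data never needs to contain rest samples.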
Re: Multiclass classifier
Yeah, it'd be great if I could.
But I have no clue how to go about it; any hints or examples to start off with? In the meantime I will try to use voting classifiers.
Another thing I'm trying to figure out is multiplexing data for like, dislike, and neutral, where each of them would have several signals to be fed to the trainer. I've seen the various streaming boxes available. The question here is whether to merge the signals before filtering, feature extraction, and all those steps, or after the individual processing is performed. I'm not sure which would yield an effective result; I'm going to try both anyway.
Regards
Re: Multiclass classifier
Hi Rapten,
You could start with a one-versus-rest or one-versus-one approach based on the basic LDA classifier. If you can program, following the code corresponding to the LDA or SVM should give you a good idea of how to extend OpenViBE. Here is a nice tutorial on how to develop a new box or algorithm for OpenViBE: http://openvibe.inria.fr/documentation/ ... lugin.html
Anyway, there's an interesting discussion about this multiclass topic here: viewtopic.php?f=13&t=442&hilit=multiple+classes
@the developers: is there any progress or agreement on how multiclass classifiers will be implemented in the future? No clear decision was made in the above topic. How is bpayan's work progressing? I thought he was working on an extension of the SVM algorithm back then. If I find some free days in September, maybe I'll look into it too.
For training, I would first filter the signal and then use stimulus-based epoching to split the signal into three different types of epochs, one per condition: like, dislike, and neutral. Then you could multiplex, for example, the like/dislike epochs, compute your feature (e.g. power), give it to the feature aggregator, and subsequently to the classifier trainer. Objects of the second class could correspond to the features of the neutral class.
Best,
Dieter Devlaminck
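To make the one-versus-rest decision rule concrete outside OpenViBE, here is a numpy-only sketch; the centroid-distance score is a toy stand-in for a real per-class LDA discriminant, and the data, class names, and `fit_ovr_centroids` helper are all made up for illustration:

```python
import numpy as np

def fit_ovr_centroids(X, y):
    """Toy one-vs-rest: per class, score = distance to the 'rest' mean
    minus distance to the class mean (higher = more like this class)."""
    classes = sorted(set(y))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    rests = {c: X[y != c].mean(axis=0) for c in classes}

    def predict(x):
        # Each class has its own "this class vs the rest" score; the
        # predicted class is the one whose score is highest.
        scores = {c: np.linalg.norm(x - rests[c]) - np.linalg.norm(x - means[c])
                  for c in classes}
        return max(scores, key=scores.get)

    return predict

# Toy features: 20 samples x 4 "channels" per condition, well separated.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(20, 4))
               for m in (-2.0, 0.0, 2.0)])
y = np.array(["dislike"] * 20 + ["neutral"] * 20 + ["like"] * 20)

predict = fit_ovr_centroids(X, y)
print(predict(np.full(4, 2.0)))  # "like"
```

The one-versus-one variant would instead train one binary classifier per pair of classes and pick the class that wins the most pairwise duels.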
Re: Multiclass classifier
Thanks for that
1. A similar approach to what you have said, but keeping similar data in their respective groups, would be somewhat like this: I'm not sure which is going to be an effective approach.
Was your idea to multiplex "like/dislike" and the others, to form sets of signals representing "like" vs. "the rest", etc.?
2. I was also trying to use the other streaming boxes, and I didn't know where to use the "streamed matrix" or "stimulation" multiplexers. I've read through their documentation, but that didn't help me much. Are they of any use in my case?
Thanks in advance.
Re: Multiclass classifier
Dear Rapten,
There are some things I do not understand in this scenario.

Rapten wrote: "Thanks for that. 1. A similar approach to what you have said but keeping similar data to their respective groups would be somewhat like this."

So you have three different files containing data for the like condition and three for the dislike condition?
How many epochs do you extract per file, one or more?
You then compute the powers (I guess?) per epoch and per channel, so basically your leftmost DSP box outputs a Cx1 vector, where C is the number of channels? These are then concatenated in the feature aggregator box, forming a 3Cx1 vector. This 3Cx1 vector would then represent one sample for your classifier input, which doesn't make sense to me. I think what you want is three Cx1 vectors that serve as separate input samples to the classifier. To this end, I would multiplex the epochs (or maybe the outputs of the DSP boxes) so that they are serialized one after another. In case you only have one epoch per box, I would recommend extracting multiple epochs at different times to have more training samples for the classifier.
Another problem I see is the following: your classifier trainer starts to train at the end of the first file; what happens when your first file is shorter than the others? Maybe the developers can shed some light on this, because I think the classifier will already start training before the other training samples are constructed, unless of course the first file is the longest.

Rapten wrote: "Not sure what's gonna be an effective approach. Was your idea of multiplexing 'like/dislike' and others to form a set of signals to say 'liked' vs 'the rest', etc.?"

So you could construct three classifiers: the first discriminating like/dislike versus neutral, the second like/neutral versus dislike, and the third neutral/dislike versus like. In an online setting, one can then use the voting classifier to see which of these three classifiers marks the input sample as the target. If it is, for example, the third classifier, you know that the signal corresponds to the like condition.
The approach of Yann, constructing two classifiers in a hierarchy, can potentially be a better solution, but I'm not sure how to implement this in OpenViBE's streaming structure. Yann, could you help?
Best regards,
Dieter Devlaminck
PS: I still think your scenarios could be simplified by recording multiple conditions in one file.
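Dieter's shape argument can be sketched with numpy (C = 4 channels is a made-up value; the arrays stand in for per-file band-power feature vectors):

```python
import numpy as np

C = 4  # hypothetical channel count
# One Cx1 band-power feature vector per "like" file (toy values).
epochs = [np.full((C, 1), p) for p in (1.0, 2.0, 3.0)]

# Feature-aggregator behaviour: the three vectors are concatenated into a
# single 3Cx1 vector, i.e. ONE training sample -- not what is wanted here.
aggregated = np.vstack(epochs)
print(aggregated.shape)  # (12, 1)

# Multiplexer behaviour: the epochs stay Cx1 and are serialized in time,
# giving three separate training samples for the classifier trainer.
for sample in epochs:
    print(sample.shape)  # (4, 1) each
```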
Re: Multiclass classifier
Thanks for all the help.
I'm totally unaware of BCI processing; consider me a noob.
I really need some help.
I know the feature extraction is not reasonable at all, because I'm using the one from the example scenarios.
Firstly:

ddvlamin wrote: "PS: I still think your scenarios could be simplified by recording multiple conditions in one file."

Did you mean to use a signal merger? I'm actually using several data files recorded while listening to the songs.
Secondly, about the timing: the classifier trainer is now being triggered by a clock timer of about 3.5 minutes, so that length differences wouldn't affect the training. And finally, you could make the required changes to the processing step in the attached file for relevant signal processing and, mainly, feature extraction. (Please feel free to make any changes that seem appropriate.)
Regards.
Attachments:
- sample.xml (119.13 KiB) Downloaded 410 times
Re: Multiclass classifier
Dear Rapten,
Rapten wrote: "Did you mean to use a signal merger? I'm actually using several data files recorded while listening to the songs."

First, I want to clarify the difference between the multiplexer and the signal merger. The signal merger concatenates the different matrices along the "channel" dimension, so two four-channel signals become one eight-channel signal. The multiplexer serializes the epochs one after another, so the number of "channels" at the output is the same as at the input. I think you need the multiplexer in your current scenario, as I explained above.
What I wanted to say in the "PS" is that with the current scenario you have to add or remove pipelines (one per file reader) depending on how many trials you record per subject. So if you decide to add some trials, you always need to replicate these pipelines, which can become cumbersome, as it scales with the number of trials rather than the number of conditions. However, it is not as easy to implement in OpenViBE as I initially thought when you have different songs per condition, though I believe it is possible. The acquisition scenario I have in mind uses multiple sound player boxes, but the corresponding training scenario would need some Lua scripting to translate stimuli into new stimuli corresponding to the three conditions (in case one has multiple songs per condition). So never mind, forget what I said; it's indeed not that easy, and for now it may be easier the way you work. The scenario as you have it now, with the multiplexers, should be OK, I think.
Best regards,
Dieter Devlaminck
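The merger/multiplexer distinction can be sketched with numpy (the 4-channel, 8-sample chunks are toy data):

```python
import numpy as np

# Two 4-channel signal chunks of 8 samples each.
a = np.zeros((4, 8))
b = np.ones((4, 8))

# Signal Merger: concatenates along the channel dimension,
# so two four-channel signals become one eight-channel signal.
merged = np.concatenate([a, b], axis=0)
print(merged.shape)  # (8, 8)

# Multiplexer: the chunks keep their channel count and are simply
# serialized one after another on a single output stream.
stream = [a, b]
print([chunk.shape for chunk in stream])  # [(4, 8), (4, 8)]
```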
Re: Multiclass classifier
Thanks for the instant feedback.
I agree with what you say, and that's what I'd thought. I'd have at most about ten sets of signals to train from for each group, which is not that bad.
But I'd also like some hints for the feature extraction steps, which I have no clue how to do, as I'm totally unfamiliar with signal processing.
You could in fact make changes to the steps in the uploaded file. Help!
Regards
Re: Multiclass classifier
Dear Rapten,
Unfortunately, I don't know much about emotional EEG processing, but I suggest looking for papers about musical or emotional BCI, such as this one: http://eprints.eemcs.utwente.nl/16010/0 ... al2009.pdf. Maybe these papers will tell you what kind of features you need to extract to be able to classify the like and dislike conditions.
Best regards,
Dieter Devlaminck
Re: Multiclass classifier
Hi Dieter,
Thanks for the link. I've been going through many papers to get clues on the signal processing, but no luck.
Meanwhile, I'll create three classifiers, as you've said, and train them with some data. I also have to see how to actually use a voting classifier, which I'm unfamiliar with.
I wanted to know how the classifier processor works in an online scenario.
Can I have an online signal processed for the entire song duration, rather than processing every signal or feature as it is streamed in real time? By that I mean classifying the data for the full duration and determining which group it belongs to.
Regards
Re: Multiclass classifier
Dear Rapten,
I have attached some partial example scenario files. The sample you gave is extended to three classifiers, forming one-versus-rest classifiers. The first classifier discriminates like versus dislike/neutral, and so on. In the online scenario, each classifier processor then outputs OVTK_StimulationId_Target if the sample corresponds to its first class: the first outputs OVTK_StimulationId_Target when the sample is of the like class, the second when it is of the dislike class, and the third when it is of the neutral class.
Note that ambiguity is possible: it can happen that none of the classifier processors outputs OVTK_StimulationId_Target; then what?
Note that you should further adapt the scenarios to suit your own needs.
Best regards,
Dieter
PS: the scenarios have not been tested, so there's no guarantee that this logic works.
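The one-versus-rest decision described above, including the ambiguous case, can be sketched like this (the stimulation handling is reduced to booleans; the `decide` helper and class names are illustrative, not OpenViBE API):

```python
def decide(targets):
    """targets maps class name -> True if that one-vs-rest classifier
    emitted OVTK_StimulationId_Target for the current sample."""
    hits = [name for name, hit in targets.items() if hit]
    if len(hits) == 1:
        return hits[0]  # exactly one classifier claimed the sample
    return None         # ambiguous: zero, or several, claimed it

print(decide({"like": True, "dislike": False, "neutral": False}))   # like
print(decide({"like": False, "dislike": False, "neutral": False}))  # None
```

A common way to break the ambiguous cases is to fall back on the classifiers' continuous scores and take the argmax, rather than relying on the Target stimulations alone.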
Attachments:
- sample.xml (209.21 KiB) Downloaded 392 times
- online.xml (18.86 KiB) Downloaded 363 times
Re: Multiclass classifier
Thanks a lot, Dieter.
It was similar to what I had been doing, but it had a few important things that I had missed.
I'm unable to figure out how the voting classifier actually works!
I went through a few examples and also the documentation. It generates a stimulation if a certain number of repetitions occur, but in the end, how do we identify which class from which classifier is activated, since the voting classifier has a single stimulation output?
Regards
Re: Multiclass classifier
Hello Rapten,
A quick hint: in OpenViBE you can select a box and press F1 to view its documentation in your browser.
Now, the voting classifier has several parameters (you can see their detailed descriptions in the documentation), but basically the output class will be equal to the stimulation set as the Result class label base if the selected class corresponds to the first input, to Result class label base + 1 if it corresponds to the second, etc. Usually you set one of the Label_XX stimulations as the base, so the first class would send Label_01, the second Label_02, etc.
Hope this clarifies things a bit.
Cheers
Jozef
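Jozef's label arithmetic can be sketched as follows (the numeric value of LABEL_01 is a made-up stand-in, not necessarily the real OpenViBE stimulation code):

```python
LABEL_01 = 0x8101  # hypothetical code standing in for OVTK_StimulationId_Label_01

def voting_output(result_class_label_base, winning_input_index):
    """Voting classifier output: base stimulation + zero-based input index."""
    return result_class_label_base + winning_input_index

print(hex(voting_output(LABEL_01, 0)))  # 0x8101 -> Label_01 (first input won)
print(hex(voting_output(LABEL_01, 1)))  # 0x8102 -> Label_02 (second input won)
```

So with three classifier processors feeding the voting box, the single stimulation output still identifies the winning input by its offset from the base label.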
Re: Multiclass classifier
Sorry, I may seem a bit obtuse when I say this, but it's still not clear to me how the voting classifier actually works! I've attached a scenario; please feel free to make any changes to it that clarify how it works.
It has three different classifiers that identify the like, dislike, and neutral states. Only the first class from every classifier needs to be detected; the second class, i.e. stimulation Label_02, is always rejected.
The voting classifier's result class base has been set to Label_01, and I presume this stimulation is sent when the first class is detected, Label_02 when the second class is detected, and so on.
The sound player should play the respective tone when a class is detected. I'm also trying to check all the classifiers' accuracy; how can I do that?
Regards
Attachments:
- Replay.xml (45.72 KiB) Downloaded 321 times
Last edited by Rapten on Wed Aug 31, 2011 9:24 pm, edited 3 times in total.