Hi Yann:
I currently do not have a multi-channel EEG but may have one next year. I saw the 10-minute OpenViBE video regarding "3-direction" detection (left, right and forward).
According to your experience, could you please answer these questions?
1) How accurate is the OpenViBE software at detecting left, right and forward with a trained participant? As accurate as the person in the video, or is that the best case? Can a fourth or fifth action (backwards, jumping) even be trained?
2) How much do your success rates vary across different participants?
I have been looking at other forums regarding BCI control over actions (Emotiv headset) and users seem to have very different levels of success with their software. Some manage very well, while others have serious problems. Do you have any idea why this is?
3) How important is the noise filter compared to location when it comes to electrical pollution? I mean, if you place the EEG and participant near an area heavily polluted with RF, can that be filtered well using a pre-scan of the noise?
Another related question: does the software detect the noise to filter automatically, or does it have to be done manually? If it is automatic, does it happen before the electrodes are used (notch filter), or is it something variable (voltage subtraction) using the electrode on the nose, or something else?
4) How important are the location and number of electrodes? It looks like the frontal lobe and Cz are the two most important. Do you think more electrodes can make a big difference for proper detection? What are the minimum and ideal numbers of electrodes? And the ideal locations?
5) Does the hour of the day influence it much? Maybe doing it in the morning, when you are less tired, makes the system more responsive? Related to this, how long can a user sustain the effort of moving around?
6) How does OpenViBE detect movement patterns? Using neural networks to analyse the raw EEG? And where does the software put its emphasis when identifying patterns? For instance, changes at a specific location, brainwave frequency, shape of the waveform (like a spike or a sinusoidal wave), time duration? I guess different parts of the brain handle different parts of the body moving (real or imagined)?
Related to this, I have seen that the OCZ NIA uses the beta/alpha ratio (neurofeedback style) to move a character forward and backwards. Do you use that?
I was thinking that you could use EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues in the museum demo (those colour bars that appeared later, so that the user would not get so tired when moving).
Also, I have seen that the OCZ NIA uses eye movement direction (right/left) to move an avatar. For this they use three electrodes on the frontal lobe. I know this is EOG, not pure EEG, but after a while the user acquires the ability to "think" that he is moving his eyes left or right (instead of actually moving them), and that would do the trick without any eye movement at all. Have you tried that?
7) How important is the resolution of the EEG for this purpose, in terms of samples per second and frequencies per packet? For instance, I have a single-electrode EEG (Neurosky) with 8 packets per second and 256 frequencies per packet. It has a single dry electrode on the frontal lobe. Could I use this, or is it too ambitious with only one electrode plus grounding (ear)?
Thanks in advance for the answers, and also Merry Xmas and a happy New Year!
The accuracy of moving things by thought
Re: The accuracy of moving things by thought
Sorry, I was experimenting with the poll feature and published that by mistake...
Re: The accuracy of moving things by thought
Dear EEGHelp, sorry for the late reply... this one took quite a while to answer.
EEGHelp wrote: 1) How accurate is the OpenViBE software at detecting left, right and forward with a trained participant? As accurate as the person in the video, or is that the best case? Can a fourth or fifth action (backwards, jumping) even be trained?
The accuracy mostly depends on the ability of the subject to generate discriminable brain activity. Trained subjects will have better separation, while newcomers will have difficulties. People in our lab use motor imagery only for experiments, a few times a year. We usually have to repeat 4-5 days of training to achieve acceptable results... and for two tasks (left/right) they achieve around 75-80% correct classification, which is far from perfect. Disabled people training on such an interface on a daily or even weekly basis will have better results. Any additional task you'd like will probably lower the performance. The state of the art says it is possible to control 4 commands (I don't have the reference available, but I can find it if you need it), but we at INRIA never tried more than 3. The state of the art is with well-trained people, and the tasks are left/right hand movements, foot movements and tongue movements.
EEGHelp wrote: 2) How much do your success rates vary across different participants? I have been looking at other forums regarding BCI control over actions (Emotiv headset) and users seem to have very different levels of success. Some manage very well, while others have serious problems. Do you have any idea why this is?
Yes. First, motivation is really important. People who try BCI in the lab usually have other subjects to work on. They make themselves available to try the experiment, but some of them don't really need to do it. In such circumstances, it always fails. If you want to do anything good with BCI, be sure your subjects are motivated. Also, good feedback (even if it only reflects how close to correct the task is being done) and an entertaining context motivate the user.
EEGHelp wrote: 3) How important is the noise filter compared to location when it comes to electrical pollution? I mean, if you place the EEG and participant near an area heavily polluted with RF, can that be filtered well using a pre-scan of the noise?
It depends on the kind of noise. Typical electrical pollution (50 Hz/60 Hz) is really easy to remove (this also removes the brain activity in that band, but you should choose a task whose activity does not appear in those bands).
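As a minimal sketch of how easy mains pollution is to remove (this is not the OpenViBE Temporal Filter box itself, just an illustration using SciPy's notch filter design; the signal and parameters are made up):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 512.0                       # sampling rate (Hz), as used at INRIA in this thread
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic "EEG": a 10 Hz mu-band component plus strong 50 Hz mains pollution.
eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)

# Narrow notch centred at 50 Hz (use 60 Hz in the Americas); Q controls the width.
b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
clean = filtfilt(b, a, eeg)      # zero-phase filtering, no time shift

def band_power(x, freq):
    """Magnitude of the FFT bin closest to `freq`."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

print(band_power(eeg, 50) / band_power(clean, 50))   # 50 Hz strongly attenuated
print(band_power(eeg, 10) / band_power(clean, 10))   # 10 Hz nearly untouched
```

The notch is so narrow that the 10 Hz mu-band activity used for motor imagery passes essentially unchanged, which is the point Yann makes above: mains noise and the useful bands barely overlap.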
EEGHelp wrote: Another related question: does the software detect the noise to filter automatically, or does it have to be done manually? If it is automatic, does it happen before the electrodes are used (notch filter), or is it something variable (voltage subtraction) using the electrode on the nose, or something else?
It is manual for now. The Temporal Filter box does this.
EEGHelp wrote: 4) How important are the location and number of electrodes? It looks like the frontal lobe and Cz are the two most important. Do you think more electrodes can make a big difference for proper detection? What are the minimum and ideal numbers of electrodes? And the ideal locations?
The number and location of electrodes depend on what you want to do. Cz is just on top of the motor cortex area for foot movements; this is why this electrode is important for our Tie-Fighter demo. Left and right hand movements, however, will be detected on C3 and C4.
It depends, really
EEGHelp wrote: 5) Does the hour of the day influence it much? Maybe doing it in the morning, when you are less tired, makes the system more responsive? Related to this, how long can a user sustain the effort of moving around?
I have no information about which time of day is better for this experiment.
How long a subject can keep up an experiment mostly depends on the subject's experience...
EEGHelp wrote: 6) How does OpenViBE detect movement patterns? Using neural networks to analyse the raw EEG? And where does the software put its emphasis when identifying patterns? For instance, changes at a specific location, brainwave frequency, shape of the waveform (like a spike or a sinusoidal wave), time duration? I guess different parts of the brain handle different parts of the body moving (real or imagined)?
I recommend you study the motor imagery BCI scenario in order to understand how the data is processed. Also have a look at the one-hour tutorial, which explains this quite well.
EEGHelp wrote: Related to this, I have seen that the OCZ NIA uses the beta/alpha ratio (neurofeedback style) to move a character forward and backwards. Do you use that?
Not yet. We may in the future.
EEGHelp wrote: I was thinking that you could use EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues in the museum demo (those colour bars that appeared later, so that the user would not get so tired when moving).
Not sure what you mean here, sorry. Can you expand?
EEGHelp wrote: Also, I have seen that the OCZ NIA uses eye movement direction (right/left) to move an avatar. For this they use three electrodes on the frontal lobe. I know this is EOG, not pure EEG, but after a while the user acquires the ability to "think" that he is moving his eyes left or right (instead of actually moving them), and that would do the trick without any eye movement at all. Have you tried that?
No.
EEGHelp wrote: 7) How important is the resolution of the EEG for this purpose, in terms of samples per second and frequencies per packet? For instance, I have a single-electrode EEG (Neurosky) with 8 packets per second and 256 frequencies per packet. It has a single dry electrode on the frontal lobe. Could I use this, or is it too ambitious with only one electrode plus grounding (ear)?
At INRIA we usually use a 512 Hz signal with 16 packets per second.
I would not expect anything good from a single electrode. The Neurosky electrode is on the frontal lobe, so you won't get a good measure of motor activity from there. You also won't be able to distinguish two mental tasks with a single electrode.
Hope this helps,
Yann
Re: The accuracy of moving things by thought
Thanks for the reply!
EEGHelp wrote: I was thinking that you could use EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues in the museum demo (those colour bars that appeared later, so that the user would not get so tired when moving).
yrenard wrote: Not sure what you mean here, sorry. Can you expand?
I was just suggesting that, for instance, if a player is walking in Second Life, he could move forward faster within the game even without elements like those at minute 7:15 of the 39-minute video (the blue and yellow bars with a ball on top).
A command like this could be used:
While being immobile: "three blinks" = walk steadily forward.
While walking steadily forward: "three blinks" = stop.
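The suggested command could be implemented as a tiny state machine fed by a blink detector. A minimal sketch follows; the class name, thresholds and API are all made up for illustration (this is not an OpenViBE box), and it assumes blink events arrive with timestamps in seconds:

```python
from collections import deque

class BlinkToggle:
    """Hypothetical helper: toggle a 'walking' flag when the configured
    number of blinks occurs within a short time window. All names and
    thresholds here are assumptions, not part of OpenViBE."""

    def __init__(self, count=3, window_s=1.5):
        self.count = count
        self.window_s = window_s
        self.times = deque(maxlen=count)   # timestamps of the last few blinks
        self.walking = False

    def on_blink(self, t):
        """Feed one detected blink at time t (seconds); return current state."""
        self.times.append(t)
        if len(self.times) == self.count and t - self.times[0] <= self.window_s:
            self.walking = not self.walking   # three quick blinks: toggle
            self.times.clear()                # start counting afresh
        return self.walking

toggle = BlinkToggle()
for t in [0.2, 0.5, 0.8]:        # three quick blinks -> start walking
    state = toggle.on_blink(t)
print(state)                      # True: avatar now walks steadily forward
```

Three more quick blinks would toggle the flag back to stopped, matching the two rules above; isolated blinks outside the window leave the state unchanged.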
Re: The accuracy of moving things by thought
Dear EEGHelp,
Using blink/EOG is not the same as using brain activity. However, it is easily detectable, so yes, you can do some things with it. It depends on your needs: if you are thinking about disabled people, maybe you should not consider this interaction a valid one... and if EOG is available, then brain-computer interfaces may not be the most appropriate way to perform the interaction.
Yann
Re: The accuracy of moving things by thought
Dear Yrenard,
yrenard wrote: The accuracy mostly depends on the ability of the subject to generate discriminable brain activity. [...]
I have a question regarding motor imagery accuracy. Using a modified script from the sample scenarios, I took a 2-channel (C3 & C4) ModularEEG recording with 40 trials and fed it into the LDA classifier trainer. Then, using the online scenario file, I tested the same recording, which was also used as input for classification. So I recorded once, classified it and replayed it. But the accuracy is only about 60%, at most 75-80% after doing another recording session. If I use real-time EEG acquisition in the online motor imagery scenario, the accuracy drops to 50%. Is this normal, or can I get better accuracy without upgrading my hardware?
Also, can you explain the purpose of the Simple DSP function (x^2), then the average, and then log(1+x), in the classifier training sample scenario?
Thank you.
Re: The accuracy of moving things by thought
Hello.
ariandy wrote: If I use real-time EEG acquisition in the online motor imagery scenario, the accuracy drops to 50%. Is this normal, or can I get better accuracy without upgrading my hardware?
I don't know for sure (Yann would), but I think it's OK with only C3 and C4. Adding electrodes at neighbouring positions (C1/C2, FC1/FC2) can help, but in most cases C3/C4 are sufficient.
ariandy wrote: Also, can you explain the purpose of the Simple DSP function (x^2), then the average, and then log(1+x), in the classifier training sample scenario?
Those DSP boxes perform the power calculation of the signal, which is: power(i) = mean(signal(t)²) over t.
The next operation, log(1+power(i)), looks like a "signal shaping" operation... in fact I don't clearly understand its purpose. When I watched the tutorial I was disappointed by the log operation.
Naëm Baron
CV : http://bee-oh.esiea-ouest.fr/baron/index.xml
Re: The accuracy of moving things by thought
nbaron wrote: I don't know for sure (Yann would), but I think it's OK with only C3 and C4. Adding electrodes at neighbouring positions (C1/C2, FC1/FC2) can help, but in most cases C3/C4 are sufficient.
Currently I only have access to 2-channel EEG hardware, and upgrading is not really an option right now. Any suggestions to increase the accuracy?
My classifier trainer scenario is below:
GDF File Reader
Temporal Filter (band-pass 8-13 Hz, mu/alpha wave)
Identity (left/right differentiation)
Stimulation Based Epoching (2 s duration, 0.5 s offset)
Time Based Epoching (1 s duration, 0.0625 s offset)
Simple DSP (x^2)
Signal Average
Simple DSP (log(1+x))
Feature Aggregator
Classifier Trainer (LDA with 7-partition k-fold test)
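For readers wondering what the x² → average → log(1+x) chain actually produces, here is a NumPy sketch of those three boxes applied to one epoch (the sampling rate, epoch shape and random data are assumptions for illustration; the band-pass and epoching steps are taken as already done):

```python
import numpy as np

fs = 256                          # ModularEEG-like sampling rate (assumption)
rng = np.random.default_rng(0)

# One 1 s epoch of already band-passed (8-13 Hz) signal on C3 and C4: shape (2, fs).
epoch = rng.normal(size=(2, fs))

# Simple DSP x^2: instantaneous power of each sample.
squared = epoch ** 2

# Signal Average: mean power per channel over the epoch.
band_power = squared.mean(axis=1)

# Simple DSP log(1+x): compress the heavy-tailed power values so the
# features handed to the LDA are closer to normally distributed.
features = np.log1p(band_power)

print(features.shape)             # (2,): one feature per channel for the LDA
```

So the Feature Aggregator ends up handing the LDA one band-power feature per channel per epoch, which is why the log step matters: LDA assumes roughly Gaussian class distributions, and raw squared power is strongly skewed.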
nbaron wrote: Those DSP boxes perform the power calculation of the signal, which is: power(i) = mean(signal(t)²) over t. The next operation, log(1+power(i)), looks like a "signal shaping" operation... in fact I don't clearly understand its purpose. When I watched the tutorial I was disappointed by the log operation.
Ah, yes, right. I forgot about that.
Re: The accuracy of moving things by thought
Dear ariandy and nbaron,
As nbaron said, the log(1+x) function is a shaping function: it gives an almost normal distribution of the signal and prevents values from becoming too big because of the square.
Regarding motor imagery, you will certainly have better results with more electrodes. But if you only have two channels, go with two; it should work, with lower performance...
50% accuracy for two classes is chance level; it can't get worse than that.
Now, what could you do to enhance the performance of your BCI? Well... training more is probably the best way. It takes time, really. How long have you been practising with the system so far? For your information, on our side we usually perform a first acquisition session without feedback, train the classifier, go for another training session with feedback, train the classifier again, go for another session, and so on... We usually reach acceptable performance after 4-5 sessions, but not after the first one. Another interesting point is that the frequency bands that react to motor imagery may not be exactly the same for everyone. We used to have a box that computed the right frequency bands to apply; I think it is no longer maintained, so I can't point you to it, but I suggest you experiment a bit with the frequency bands. You could also play with the number of k-fold test partitions.
Of course, all these ideas should be considered only if your signals are clean and free of artifacts (EMG, EOG, blinks, etc.).
Keep us posted.
Yann