The accuracy of moving things by thought

Come here to discuss OpenViBE in general!

Poll

one
0
No votes
two
1
100%
three
0
No votes
 
Total votes: 1

EEGHelp
Posts: 14
Joined: Tue Sep 15, 2009 3:20 am

The accuracy of moving things by thought

Post by EEGHelp »

Hi Yann:

I currently do not have a multi-channel EEG, but maybe I will next year. I saw the 10-minute OpenViBE video about the "3-direction" detection using OpenViBE (left, right and forward).

According to your experience, could you please answer these questions?

1) How accurate is the OpenViBE software at detecting left, right and forward on a trained participant? As accurate as the person in the video, or is that the best case? Can a fourth or fifth action (backwards, jumping) also be trained?

2) How much does the success rate vary between different participants?

I have been looking at other forums about BCI control over actions (Emotiv headset), and users seem to have very different levels of success with their software. Some manage to do very well, but others have serious problems. Do you have any clue why this is?

3) How important is the noise filter compared to location when it comes to electrical pollution? I mean, if the EEG and the participant are located near an area with a lot of RF pollution, can that be filtered out well using a pre-scan of the noise?

Another related question: does the software detect the noise to filter automatically, or does it have to be done manually? If it is automatic, is it applied before using the electrodes (a notch filter), or is it something adaptive (voltage subtraction) using the electrode on the nose, or something else?

4) How important are the location and number of electrodes? It looks like the frontal lobe and Cz are the two most important? Do you think more electrodes can make a big difference for proper detection? What are the minimal and ideal numbers of electrodes? And the ideal locations?

5) Does the time of day have much influence? Maybe doing it in the morning, when you are less tired, makes the system more responsive? And related to this, how long can a user sustain the effort of moving around?

6) How does OpenViBE detect movement patterns? Does it use neural networks to analyse the raw EEG? And where does the software put the emphasis when identifying the patterns? For instance, changes at a specific location, brainwave frequency, shape of the waveform (like a spike or a sinusoidal wave), time duration? I guess different parts of the brain handle movement (real or imagined) of different parts of the body?

Related to this, I have seen that the OCZ NIA uses the beta/alpha (neurofeedback type) ratio to move a character forward and backwards. Do you use that?

I was thinking that you could use the EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues of the museum (those color bars that appeared later, so that he would not get so tired when moving).

Also, I have seen that the OCZ NIA uses eye movement direction (right/left) for moving an avatar. For this they use three electrodes on the frontal lobe. I know that is EOG, not pure EEG, but after a while the user gains the ability to "think" about moving the eyes left or right (instead of actually moving them), and that would do the trick without any eye movement at all. Have you tried that?

7) How important is the resolution of the EEG for this purpose? In terms of samples per second? In terms of frequencies per packet? For instance, I have a single-electrode EEG (Neurosky) with 8 packets per second and 256 frequencies per packet. It has a single dry electrode on the frontal lobe. Could I use this, or is it too ambitious with only one electrode plus a ground (ear)? :roll:


Thanks in advance, and also Merry Xmas and Happy New Year! :D

EEGHelp
Posts: 14
Joined: Tue Sep 15, 2009 3:20 am

Re: The accuracy of moving things by thought

Post by EEGHelp »

Sorry, I was experimenting with the poll feature and published it by mistake...

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: The accuracy of moving things by thought

Post by yrenard »

Dear EEGHelp, sorry for the late reply... this one was quite long to answer ;)
EEGHelp wrote: 1) How accurate is the OpenViBE software at detecting left, right and forward on a trained participant? As accurate as the person in the video, or is that the best case? Can a fourth or fifth action (backwards, jumping) also be trained?
The accuracy mostly depends on the ability of the subject to generate discriminable brain activity. Trained subjects will have better separation while newcomers will have difficulties. People in our lab use motor imagery only for experiments, a few times a year. We usually have to repeat training over 4-5 days before achieving acceptable results... And for two tasks (left/right) they achieve around 75-80% correct classification, which is far from perfect. Disabled people training on such an interface on a daily or even weekly basis will have better results. Any additional task you'd like will probably lower the performance. The state of the art says it is possible to control 4 commands (I don't have the reference available but I can find it if you need it), but we at INRIA never tried more than 3. The state of the art is with well-trained people and the tasks are left/right hand movements, foot movements and tongue movements.
EEGHelp wrote: 2) How much does the success rate vary between different participants?

I have been looking at other forums about BCI control over actions (Emotiv headset), and users seem to have very different levels of success with their software. Some manage to do very well, but others have serious problems. Do you have any clue why this is?
Yes. First, motivation is really important. People who try BCI in the lab usually have other subjects to work on. They make themselves available to try the experiment, but some of them don't really have to do it. In such circumstances, it always fails. If you want to do anything good with BCI, be sure your subjects are motivated. Also, good feedback (even if it only reflects how close to correct the task is being performed) and an entertaining context motivate the user.
EEGHelp wrote: 3) How important is the noise filter compared to location when it comes to electrical pollution? I mean, if the EEG and the participant are located near an area with a lot of RF pollution, can that be filtered out well using a pre-scan of the noise?
It depends on the kind of noise. Typical electrical pollution (50 Hz / 60 Hz) is really easy to remove (this also removes the brain activity in that band, but you should choose a task that does not show up in those bands).
EEGHelp wrote: Another related question: does the software detect the noise to filter automatically, or does it have to be done manually? If it is automatic, is it applied before using the electrodes (a notch filter), or is it something adaptive (voltage subtraction) using the electrode on the nose, or something else?
It is manual for now. The Temporal Filter box does this.
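As a side note, here is a minimal offline sketch of that kind of notch filtering in Python with NumPy/SciPy (my own illustration with a made-up signal and an assumed sampling rate, not how OpenViBE implements it):

# Hypothetical offline example (not OpenViBE): removing 50 Hz mains noise
# from one EEG channel with a notch filter, using NumPy/SciPy.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 512.0      # assumed sampling rate in Hz
mains = 50.0    # 50 Hz in Europe, 60 Hz elsewhere

# fake 2-second signal: a 10 Hz rhythm plus strong mains interference
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * mains * t)

# narrow notch around the mains frequency (quality factor 30)
b, a = iirnotch(w0=mains, Q=30.0, fs=fs)

# zero-phase filtering so the waveform is not shifted in time
clean = filtfilt(b, a, eeg)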
EEGHelp wrote: 4) How important are the location and number of electrodes? It looks like the frontal lobe and Cz are the two most important? Do you think more electrodes can make a big difference for proper detection? What are the minimal and ideal numbers of electrodes? And the ideal locations?
The number and location of the electrodes depend on what you want to do. Cz is just on top of the motor cortex area for foot movements. This is why this electrode is important for our Tie-Fighter demo. However, left and right hand movements will be detected on C3 and C4.

It depends, really ;)
EEGHelp wrote: 5) Does the time of day have much influence? Maybe doing it in the morning, when you are less tired, makes the system more responsive? And related to this, how long can a user sustain the effort of moving around?
I have no information about what time of day is better for this experiment.
How long a subject can keep doing an experiment mostly depends on the subject's experience...
EEGHelp wrote: 6) How does OpenViBE detect movement patterns? Does it use neural networks to analyse the raw EEG? And where does the software put the emphasis when identifying the patterns? For instance, changes at a specific location, brainwave frequency, shape of the waveform (like a spike or a sinusoidal wave), time duration? I guess different parts of the brain handle movement (real or imagined) of different parts of the body?
I recommend you study the motor imagery BCI scenario in order to understand how the data is processed. Also have a look at the one-hour tutorial, which explains this quite well.
EEGHelp wrote: Related to this, I have seen that the OCZ NIA uses the beta/alpha (neurofeedback type) ratio to move a character forward and backwards. Do you use that?
Not yet. We may in the future.
EEGHelp wrote: I was thinking that you could use the EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues of the museum (those color bars that appeared later, so that he would not get so tired when moving).
Not sure what you mean here, sorry. Can you expand ?
EEGHelp wrote: Also, I have seen that the OCZ NIA uses eye movement direction (right/left) for moving an avatar. For this they use three electrodes on the frontal lobe. I know that is EOG, not pure EEG, but after a while the user gains the ability to "think" about moving the eyes left or right (instead of actually moving them), and that would do the trick without any eye movement at all. Have you tried that?
No.
EEGHelp wrote: 7) How important is the resolution of the EEG for this purpose? In terms of samples per second? In terms of frequencies per packet? For instance, I have a single-electrode EEG (Neurosky) with 8 packets per second and 256 frequencies per packet. It has a single dry electrode on the frontal lobe. Could I use this, or is it too ambitious with only one electrode plus a ground (ear)? :roll:
At INRIA we usually use a 512 Hz signal with 16 packets per second.
I would not expect anything good with a single electrode. The Neurosky electrode is on the frontal lobe, so you won't get a good measure of motor activity from there. Also, you won't be able to tell two mental tasks apart with a single electrode.

Hope this helps,
Yann

EEGHelp
Posts: 14
Joined: Tue Sep 15, 2009 3:20 am

Re: The accuracy of moving things by thought

Post by EEGHelp »

EEGHelp wrote: I was thinking that you could use the EOG (eye blinks) as a trick for stopping an action or making the avatar advance steadily, instead of the visual cues of the museum (those color bars that appeared later, so that he would not get so tired when moving).

yrenard wrote: Not sure what you mean here, sorry. Can you expand?
Thanks for the reply

I was just suggesting that, for instance, if a player is walking in Second Life, he may move forward faster within the game even if there are no elements like those at minute 7:15 of the 39-minute video (blue and yellow bars with a ball at the top).

A command like this could be used:
While being immobile: "three blinks" = walk steadily forward.
While walking steadily forward: "three blinks" = stop.
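Just to make the idea concrete, here is a toy sketch of such a toggle (blink detection from the EOG/frontal channels is assumed to happen elsewhere; the class name and time window below are made up):

# Hypothetical "three blinks" toggle: count blink events that arrive within a
# short window and flip between walking and stopped when three occur.
import time

class BlinkToggle:
    def __init__(self, window_s=1.5):
        self.window_s = window_s    # blinks must fall within this window
        self.blink_times = []
        self.walking = False

    def on_blink(self, now=None):
        now = time.monotonic() if now is None else now
        # keep only blinks that are recent enough, then add the new one
        self.blink_times = [t for t in self.blink_times if now - t <= self.window_s]
        self.blink_times.append(now)
        if len(self.blink_times) >= 3:
            self.walking = not self.walking    # toggle walk/stop
            self.blink_times.clear()
        return self.walking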

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: The accuracy of moving things by thought

Post by yrenard »

Dear EEGHelp,

using blinks/EOG is not the same as using brain activity. However, they are easily detectable, so yes, you can do some things with that. It depends on your needs: if you are thinking about disabled people, maybe you should not consider this interaction a valid one... If EOG is available, then a brain-computer interface may not be the most appropriate way to perform the interaction :)

Yann

ariandy
Posts: 25
Joined: Fri Aug 14, 2009 3:11 am

Re: The accuracy of moving things by thought

Post by ariandy »

Dear Yrenard,
yrenard wrote: The accuracy mostly depends on the ability of the subject to generate discriminable brain activity. Trained subjects will have better separation while newcomers will have difficulties. People in our lab use motor imagery only for experiments, a few times a year. We usually have to repeat training over 4-5 days before achieving acceptable results... And for two tasks (left/right) they achieve around 75-80% correct classification, which is far from perfect. Disabled people training on such an interface on a daily or even weekly basis will have better results. Any additional task you'd like will probably lower the performance. The state of the art says it is possible to control 4 commands (I don't have the reference available but I can find it if you need it), but we at INRIA never tried more than 3. The state of the art is with well-trained people and the tasks are left/right hand movements, foot movements and tongue movements.
I have a question regarding motor imagery accuracy. Using a modified script from the scenario samples, I made a 2-channel (C3 & C4) ModularEEG recording of 40 trials and fed it into the LDA classifier trainer. Then, using the online scenario file, I tested the same recording, which was also used as input for classification. So I recorded once, classified it and replayed it. But the accuracy is only about 60%, at most 75-80% after doing another recording session. If I use real-time EEG acquisition with the online motor imagery scenario, the accuracy drops to 50%. Is this normal, or can I get better accuracy without upgrading my hardware?

And also, can you explain the purpose of the Simple DSP function (x^2), then the average, and then log(1+x), in the classifier training sample scenario?

Thank you.

nbaron
Posts: 23
Joined: Mon Jan 18, 2010 4:54 am

Re: The accuracy of moving things by thought

Post by nbaron »

Hello.
ariandy wrote: If I use real-time EEG acquisition with the online motor imagery scenario, the accuracy drops to 50%. Is this normal, or can I get better accuracy without upgrading my hardware?
I don't know for sure (Yann should :D ) but I think it's OK with only C3 and C4. Adding electrodes at neighbouring positions (C1/C2, FC1/FC2) can help, but in most cases C3/C4 are sufficient.
ariandy wrote: And also, can you explain the purpose of the Simple DSP function (x^2), then the average, and then log(1+x), in the classifier training sample scenario?
Those DSP boxes perform a power calculation of the signal, which is: power(i) = mean(signal(t)²) over the epoch.
The next operation, log(1+power(i)), looks like a "signal shaping" operation... in fact I don't clearly understand its purpose.
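In plain NumPy (just an illustration of that formula, not OpenViBE code), the per-channel feature would be something like:

import numpy as np

def band_power_feature(epoch):
    """epoch: 1-D array of band-pass filtered samples for one channel."""
    squared = epoch ** 2        # Simple DSP: x^2
    power = squared.mean()      # Signal Average: mean of x^2 over the epoch
    return np.log(1.0 + power)  # Simple DSP: log(1+x), compresses large values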

When I watched the tutorial I was disappointed by the log operation :| .

ariandy
Posts: 25
Joined: Fri Aug 14, 2009 3:11 am

Re: The accuracy of moving things by thought

Post by ariandy »

nbaron wrote: Hello.

I don't know for sure (Yann should :D ) but I think it's OK with only C3 and C4. Adding electrodes at neighbouring positions (C1/C2, FC1/FC2) can help, but in most cases C3/C4 are sufficient.
Currently I only have access to 2-channel EEG hardware, and upgrading is not really an option right now. Any suggestions to increase the accuracy?

My classifier trainer scenario is below:
GDF File Reader
Temporal filter (Band Pass 8 - 13 Hz, Mu/Alpha wave)
Identity (left-right differentiation)
Stimulation Based Epoching (2 s duration, 0.5 s offset)
Time Based Epoching (1 s duration, 0.0625 s offset)
Simple DSP (x^2)
Signal Average
Simple DSP (log(1+x))
Feature Aggregator
Classifier Trainer (LDA with 7 partitions K-Fold Test)
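For reference, a rough offline equivalent of this chain in Python (SciPy and scikit-learn are my own choice here, and the data below is random just so the sketch runs; it is not an OpenViBE script):

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                         # assumed sampling rate
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 2, 2 * fs))    # fake data: 40 trials, 2 channels (C3/C4), 2 s each
labels = rng.integers(0, 2, size=40)             # stand-in left/right labels

def extract_features(trials, fs, band=(8.0, 13.0)):
    # Temporal Filter: band-pass 8-13 Hz (mu/alpha)
    b, a = butter(4, band, btype="band", fs=fs)
    filtered = filtfilt(b, a, trials, axis=-1)
    # Simple DSP (x^2) + Signal Average: mean power per trial and channel
    power = (filtered ** 2).mean(axis=-1)
    # Simple DSP (log(1+x)): compress the power values
    return np.log(1.0 + power)

X = extract_features(trials, fs)                 # Feature Aggregator equivalent
clf = LinearDiscriminantAnalysis()               # LDA, as in the Classifier Trainer
scores = cross_val_score(clf, X, labels, cv=7)   # 7-partition k-fold test
print("mean k-fold accuracy: %.2f" % scores.mean())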
nbaron wrote: Those DSP boxes perform a power calculation of the signal, which is: power(i) = mean(signal(t)²) over the epoch.
The next operation, log(1+power(i)), looks like a "signal shaping" operation... in fact I don't clearly understand its purpose.

When I watched the tutorial I was disappointed by the log operation :| .
Ah, yes right. I forgot about that :oops:

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: The accuracy of moving things by thought

Post by yrenard »

Dear ariandy and nbaron,

as nbaron said, the log(1+x) function is a shaping function: it gives an almost normal distribution of the signal and prevents the values from getting too big because of the square.

Regarding motor imagery, you will have better results with more electrodes for sure. But if you only have two channels, go for two; it should work, just with lower performance...

50% accuracy for two classes is chance level. It can't get worse than that.

Now, what could you do to enhance the performance of your BCI? Well... training more is probably the best way :) It takes time, really. How long have you been playing with the system so far? For your information, on our side we usually perform a first acquisition session without feedback, train the classifier, go for another training session with feedback, train the classifier again, go for another training session, and so on... We usually get acceptable performance after 4-5 sessions, but not after the first one.

Another interesting thing is that the frequency bands that react to motor imagery may not be exactly the same for everyone. We used to have a box that computed the best frequency bands to apply. I think it is no longer maintained, so I can't point you to it, but I suggest you play a bit with the frequency bands. You could also play with the number of k-fold test partitions.
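As an illustration of that last idea (an offline sketch in Python with SciPy/scikit-learn and made-up data, not an OpenViBE feature), you could sweep a few candidate bands and partition counts like this:

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 256                                        # assumed sampling rate
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, 2, 2 * fs))   # stand-in for real recorded epochs
labels = rng.integers(0, 2, size=40)            # stand-in for left/right labels

def log_band_power(trials, lo, hi, fs):
    b, a = butter(4, (lo, hi), btype="band", fs=fs)
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(1.0 + (filtered ** 2).mean(axis=-1))

# try several mu/beta band candidates and several k-fold partition counts
for lo, hi in [(8, 12), (8, 13), (10, 14), (12, 16), (16, 24)]:
    X = log_band_power(trials, lo, hi, fs)
    for k in (5, 7, 10):
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=k).mean()
        print("%2d-%2d Hz, %2d-fold accuracy: %.2f" % (lo, hi, k, acc))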

Of course, all these ideas should only be considered if your signals are clean and free of artifacts (EMG, EOG, blinks, etc.).

Keep us posted.

Yann
