Hello,
1. I would like to know where exactly we should place the electrodes in order to get signals. For example, if someone imagines moving a hand (right or left hand), where should we place the electrodes on the head?
2. Moreover, could we have an OpenViBE scenario for foot movement? Currently we only have a scenario for hand movement (right hand and left hand).
Thank you for the help,
Motor imagery : hands and feet
Re: Problems with ModularEEG Data acquisition
Hello fleur,
Please look at the video tutorial. The example application uses right/left hand motor imagery.
The scenario neurofeedback.xml shipped with OpenViBE in openvibe-scenarios/bci/neurofeedback uses foot movements. It computes the power of the beta band on the Cz electrode, crops it, and sends it through its VRPN server (with a threshold). The Crop box and the last Simple DSP box use empiric values (i.e. they need to be tuned for each subject).
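For readers curious about what that chain actually computes, here is a rough Python sketch (this is not OpenViBE code, just an illustration; the beta band limits, crop bounds and threshold below are placeholder values that, as noted, must be tuned per subject):

```python
import numpy as np

def beta_band_power(signal, fs, band=(13.0, 30.0)):
    """Power of a 1-D signal in the given frequency band, via a plain FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(np.abs(spectrum[mask]) ** 2) / len(signal))

def crop(value, lo, hi):
    """Clamp the band power into [lo, hi], like the Crop box."""
    return min(max(value, lo), hi)

fs = 512                                # assumed sampling rate
t = np.arange(fs) / fs                  # one second of signal
cz = np.sin(2 * np.pi * 20 * t)         # fake 20 Hz (beta) activity on Cz
power = beta_band_power(cz, fs)
feedback = crop(power, 0.0, 100.0)      # empiric bounds, per subject
active = feedback > 50.0                # empiric threshold, per subject
```

In the real scenario the resulting value is then sent to the VRPN server rather than used locally.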
Best Regards,
Laurent
Re: Problems with ModularEEG Data acquisition
For hand movement you need electrodes at positions C3 and C4 of the 10-20 system.
These lie over the motor cortex, which is the area involved in motor imagery.
I suppose it will be around the same area for foot movement.
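Putting together the sites mentioned in this thread (C3/C4 here, and Cz for the feet in the neurofeedback.xml scenario described earlier), the usual mapping can be summarised as:

```python
# Standard 10-20 sites commonly used for motor imagery.
MI_ELECTRODES = {
    "left hand":  "C4",  # right motor cortex controls the left hand
    "right hand": "C3",  # left motor cortex controls the right hand
    "feet":       "Cz",  # the foot area lies near the vertex, on the midline
}
```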
Naëm Baron
CV : http://bee-oh.esiea-ouest.fr/baron/index.xml
Re: Motor imagery : hands and feet
This is a dedicated topic for this particular question.
Please create a new topic for each new question!
- The OpenViBE Team
Re: Motor imagery : hands and feet
Hello!
Is there any OpenViBE scenario for a BCI that includes movements of both hands and both feet?
In our experiment we need to be able to walk an avatar in a virtual city and make it turn right, turn left, accelerate and decelerate.
Thank you for your help!
Re: Motor imagery : hands and feet
Dear lioubov.aguilova,
First of all, thank you for your interest in OpenViBE and welcome to these forums.
Unfortunately, we don't have such a 4-class motor imagery BCI among the sample scenarios. It should nevertheless be feasible to extend the two-class BCI scenarios to four classes, but you will probably need more advanced techniques to optimize the spatial filter, which is currently just a surface Laplacian.
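For reference, a surface Laplacian simply re-references each channel of interest against the average of its neighbours. A minimal sketch, assuming the signals sit in a (n_channels, n_samples) NumPy array; the neighbour choices below are illustrative picks from the 10-20 layout, not the exact ones used in the scenarios:

```python
import numpy as np

# Illustrative nearest-neighbour sets for a few 10-20 channels.
NEIGHBOURS = {
    "C3": ["FC3", "CP3", "C1", "C5"],
    "C4": ["FC4", "CP4", "C2", "C6"],
}

def surface_laplacian(data, channels, target):
    """data: (n_channels, n_samples); returns the re-referenced target row."""
    idx = {name: i for i, name in enumerate(channels)}
    neigh = [idx[n] for n in NEIGHBOURS[target]]
    return data[idx[target]] - data[neigh].mean(axis=0)

channels = ["C3", "FC3", "CP3", "C1", "C5"]
data = np.vstack([np.full(4, 5.0)] + [np.full(4, 1.0) for _ in range(4)])
filtered_c3 = surface_laplacian(data, channels, "C3")  # 5 - mean(1,1,1,1) = 4
```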
Could you describe what you want to achieve and how experienced you are with BCI? Performing a 4-class motor imagery BCI is probably more difficult than you expect, all the more so with two foot commands (whose cortical representations lie at almost the same place, at the top of the head).
I hope this helps,
Regards,
Yann
Re: Motor imagery : hands and feet
Dear OpenViBE developers,
Could you please answer these questions about the OpenViBE signal processing, for the case of controlling an interface with right and left hand movements?
- How many learning trials do you do for imagined movement without feedback? With feedback? Is it a fixed number, or do you continue until obtaining a good performance? If you stop once the performance is good, what percentage do you consider good? If the performance becomes good quickly, do you keep improving it to higher percentages?
- Does the program use the .cfg file from a previous trial while processing a new trial, in order to improve its results? In other words, is the user the only one who improves his training from trial to trial, or does the program also improve its recognition of the user's signal features?
Thank you for your help!
Re: Motor imagery : hands and feet
Dear lioubov.aguilova,
lioubov.aguilova wrote: How many learning trials do you do for imagined movement without feedback? With feedback? Is it a fixed number, or do you continue until obtaining a good performance?
The predefined XML scenario displays 20 cues for each side (left and right). This makes a session of approximately 10-15 minutes. We usually concatenate the signals recorded in the last 4 sessions to train a good classifier. A 75-80% classification rate is a good level to start with.
lioubov.aguilova wrote: Does the program use the .cfg file from a previous trial while processing a new trial, in order to improve its results?
As stated before, we usually use the last 4 sessions to train the classifiers.
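To illustrate that "concatenate the last sessions, then train" procedure on synthetic numbers (the minimal LDA below is only a stand-in for OpenViBE's own classifier trainer, and all shapes and values here are made up):

```python
import numpy as np

def train_lda(X, y):
    """X: (n_trials, n_features), y in {0, 1}; returns (w, b) for X @ w + b > 0."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    cov = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(cov + 1e-6 * np.eye(len(mu0)), mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

rng = np.random.default_rng(0)
# Pretend we kept 4 sessions of 20 cues per class, as in the scenario,
# each trial reduced to a 4-dimensional band-power feature vector.
X = np.vstack([rng.normal(loc=c, size=(20, 4))
               for _ in range(4) for c in (0.0, 1.0)])
y = np.concatenate([np.full(20, c) for _ in range(4) for c in (0, 1)])
w, b = train_lda(X, y)
accuracy = np.mean((X @ w + b > 0) == (y == 1))  # training accuracy
```

With well-separated synthetic classes like these, the training accuracy lands in the same ballpark as the 75-80% starting level mentioned above.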
Hope this helps,
Regards,
Yann