Motor Imagery / Neurofeedback: Lift the Spaceship demo

  • NB: last update for OpenViBE 2.1.0 (27.Sep.2018).

This demo enhances the Neurofeedback BCI with a motivating visual feedback: a 3D spaceship in a hangar that can be lifted with brain activity. You will find it in share/openvibe/scenarios/bci-examples/spaceship.

  • spaceship-freetime.xml: This scenario replays a pre-recorded MI session from a file and shows the feedback the user was given during the session. The feedback is presented through two signal displays: one for the brain activity on the Cz electrode, and one for the band power in the beta band, which relates to (imagined) feet movement.

The scenario doesn’t launch the visual application by itself. To see the spaceship, you need to manually start that part of the demo with openvibe-vr-demo-spaceship. It will receive instructions from the scenario while it is running.

The scenario communicates with the vr-demo through the VRPN protocol, sending instructions (“Move your feet”, “Stop”) and providing a feedback value that determines the altitude of the spaceship in the 3D scene.
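
Under the hood this is ordinary VRPN traffic: the scenario publishes button events (the instructions) and one analog channel (the feedback value), and the spaceship app subscribes to both. A minimal sketch of such a client, written against the standard VRPN C++ API, could look like the following; the peripheral names used here are placeholders and must match whatever names are configured in the scenario's VRPN server boxes:

#include <iostream>
#include <vrpn_Button.h>
#include <vrpn_Analog.h>

// Called whenever a button changes state (an "instruction" from the scenario)
void VRPN_CALLBACK onButton(void*, const vrpn_BUTTONCB b)
{
    std::cout << "Button " << b.button << " is now " << b.state << "\n";
}

// Called whenever the analog channel changes (the value driving the altitude)
void VRPN_CALLBACK onAnalog(void*, const vrpn_ANALOGCB a)
{
    std::cout << "Feedback value: " << a.channel[0] << "\n";
}

int main()
{
    // Placeholder peripheral names; use the ones set in the VRPN server boxes
    vrpn_Button_Remote buttons("openvibe-vrpn-button@localhost");
    vrpn_Analog_Remote analog("openvibe-vrpn-analog@localhost");

    buttons.register_change_handler(nullptr, onButton);
    analog.register_change_handler(nullptr, onAnalog);

    while (true)  // poll both VRPN connections
    {
        buttons.mainloop();
        analog.mainloop();
    }
    return 0;
}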

Fig.1 The lift-the-spaceship BCI.

Calibrating the spaceship demo

Please note that the Crop and Simple DSP configuration values in the shipped scenario are based on a calibration session that computed the mean and variance of the beta activity on Cz over a 30-second inactivity phase. To change these parameters properly, you need to implement the corresponding calibration and measure the values in your own setup. Alternatively, you can simply change the values and observe the effects.
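
If you want to script that calibration yourself, the computation is simple: collect the beta band-power values on Cz during the rest phase and take their mean and variance, which then go into the Crop / Simple DSP settings. Below is a rough offline sketch (not the original calibration code), assuming the baseline band-power values have been exported to a plain text file with one value per line; the z-score style expression printed at the end is only one plausible way to plug the numbers into the Simple DSP box:

#include <cmath>
#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: calibrate <bandpower.txt>\n"; return 1; }

    // One beta band-power value per line, recorded during the 30 s rest phase
    std::ifstream in(argv[1]);
    std::vector<double> x;
    for (double v; in >> v; ) x.push_back(v);
    if (x.empty()) { std::cerr << "no samples read\n"; return 1; }

    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();

    double var = 0.0;
    for (double v : x) var += (v - mean) * (v - mean);
    var /= x.size();

    std::cout << "mean = " << mean << ", variance = " << var << "\n";
    // A z-scored feedback value is one common choice; the exact expression
    // expected by the shipped scenario may differ, so check the box settings.
    std::cout << "Simple DSP candidate: (x - " << mean << ") / "
              << std::sqrt(var) << "\n";
    return 0;
}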

Fabien Lotte and Alison Cellard gave a tutorial in 2014 about motor imagery and how to use the beta rebound to control the spaceship. You can get the slides and the code/scenario archive. The archive contains an example of how to calibrate the signal processing. Note that the archive was likely made for OpenViBE 0.17.0 and may need updating to work with the current version.

Implementation details

If you wish to use the spaceship demo app in some derived work or experiment, it is useful to understand its operating logic. Basically, the app expects discrete commands that change its state. It receives these from the ‘VRPN Button Server’ box in the scenario. The spaceship is lifted by a single analog input it receives from the ‘Analog VRPN Server’ box. The buttons the ship reacts to are as follows:

Button 1 : Phase_Rest;                 // Display neither the Move nor the NoMove image
Button 2 : Phase_Move;                 // Display the Move image
Button 3 : Phase_NoMove;               // Display the NoMove image
Button 4 : Stage_Baseline;             // Display the Calibration image
Button 5 : Stage_FreetimeReal;         // Control the ship with 'real movements'
Button 6 : Stage_FreetimeImaginary;    // Control the ship with 'imaginary movements'
Button 7 : Stage_Statistics;           // Print out statistics

The existence of the ‘real’ and ‘imaginary’ movement states is probably related to some particular legacy experiment where the user was asked to alternate between real limb movements and motor imagery. The operating state of the spaceship app seems to be the same in both conditions: it’s the state where the ship reacts to the analog input.

You can also simulate these buttons and the analog input with openvibe-vrpn-simulator. The first slider on the left is the analog input, and the stars at the bottom correspond to buttons 1, 2, 3, …
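
If you would rather drive the app from your own code than from the simulator, a small stand-alone VRPN server can play the role of the scenario's two VRPN boxes. The sketch below again uses placeholder peripheral names (they must match what the spaceship app connects to); it presses one button once at startup and then keeps streaming a slowly oscillating analog value. Whether the app counts the buttons from 0 or 1 is worth verifying against the list above:

#include <cmath>
#include <vrpn_Connection.h>
#include <vrpn_Shared.h>
#include <vrpn_Button.h>
#include <vrpn_Analog.h>

int main()
{
    // Listen on the default VRPN port; the spaceship app connects as a client.
    vrpn_Connection* connection = vrpn_create_server_connection();

    // Placeholder peripheral names; they must match what the app expects.
    vrpn_Button_Server buttons("openvibe-vrpn-button", connection, 8);
    vrpn_Analog_Server analog("openvibe-vrpn-analog", connection, 1);

    int tick = 0;
    while (true)
    {
        // Press one button once at startup, then release it
        // (check 0- vs 1-based indexing against the list above).
        buttons.set_button(5, tick == 0 ? 1 : 0);

        // Slowly oscillating altitude value on analog channel 0.
        analog.channels()[0] = 0.5 + 0.5 * std::sin(tick * 0.01);
        analog.report_changes();

        buttons.mainloop();
        analog.mainloop();
        connection->mainloop();

        vrpn_SleepMsecs(10);
        ++tick;
    }
    return 0;
}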

The spaceship app also seems to react to a few keyboard keys: ESC to quit, END to toggle the display of the current reactivity threshold for the analog input, and UP and DOWN to change the threshold.

 
