SSVEP: Using the Mind Shooter scenarios

  • NB: Document updated for OpenViBE 1.3.0 (Dec 2016)

The Mind Shooter demo[1] in OpenViBE illustrates how to build a shooting game based on the SSVEP (Steady-State Visually Evoked Potentials) paradigm. A more elementary version of an SSVEP game is also available.

In the Mind Shooter game, the user focuses on the flickering parts of the controlled spaceship. The flicker frequency is reflected in the recorded EEG signal, which is then classified, and the classifier predictions of the dominant frequency move the ship accordingly to the left and to the right, and command it to fire. The goal is to destroy the enemy ships.


[Screenshot: The Mind Shooter]

Glossary

  • Target : A flashing ship part on screen. Induces SSVEP when looked at.
  • Stimulation Frequency : Flashing frequency of one target part.

Hardware

The SSVEP stimulator needs to be able to run in VSync mode. This is readily available on Windows XP and on Linux systems with the compositing extension disabled. On other setups you will need to run the application in fullscreen mode only.

The electrodes should be placed mostly over the occipital region of the head. A good starting set is Oz, CPz, Pz, Iz, O1 and O2. To avoid overfitting, it is recommended to disconnect electrodes which are not situated near the visual cortex. This can also be done using the channel selection in the scenarios.

Choosing your Mind Shooter

In OpenViBE 1.3.0, the Mind Shooter comes in two flavours: the default one and the classic one.

  • bci-examples/ssvep-mind-shooter/. The default Mind Shooter has simple signal processing pipelines that correspond to textbook machine learning practices: different features are extracted from the signal, concatenated together and used to train a single multiclass CSP spatial filter bank and a multiclass LDA classifier. The EEG data and the events are combined by the TCP Tagging technique (the current OpenViBE recommendation).
  • bci-examples/ssvep-mind-shooter-classic/. The Classic Mind Shooter does a separate feature extraction for each class, and feeds the features to several binary CSPs and classifiers. Their outputs are then combined by simple voting. The EEG data and the events are combined in Designer (may be less time-accurate).

The two flavours may differ slightly in classification performance. The flavours and their scripts are kept separate largely to avoid interference and to make them more understandable to people who are not Mind Shooter experts. The same Shooter application itself is used by both flavours. If parameters such as flicker frequencies and durations are kept the same, the same EEG recordings should be compatible with both the default and the classic version.

Setup

The scenarios MUST be executed successfully, in order, at least once. Many scenarios create configuration files which are then used by subsequent scenarios. In this regard the SSVEP Mind Shooter scenario set is different from other OpenViBE scenario sets: you cannot start with some middle scenario stage for testing.

Running the game

This procedure explains how to configure, train and run an online game of the Mind Shooter. Basically, open all the scenarios and run them successfully in increasing numeric order (you may need to change parameters though).

In all scenarios, the boxes that can be configured are marked by blue text “Configurable box”.

ms-1-configuration.xml

This scenario contains several boxes that will configure the behaviour of the experiment. The scenario contains Lua scripting boxes which generate configuration files. All scripts are located in the /scripts folder and configuration files are distributed between /configuration and /appdata folders under the scenario root tree.

Display Settings

This box configures the hardware. Several settings are available:

  • Screen Refresh Rate : FPS of the application; this should be kept at the native refresh rate of the screen. All frequencies used by the application must be integer fractions of this rate (the refresh rate divided by a whole number of frames).
  • Window Width and Window Height : Resolution of the stimulator application. It is preferable for the resolution to match the aspect ratio of your screen, as the application does not use letter-boxing.
  • Fullscreen : If true then the stimulator will run in fullscreen.

These settings are reflected in /appconf/application-configuration.conf file.
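As a rough sanity check (not part of OpenViBE; the helper names below are ours), the constraint that every flicker frequency must be an integer fraction of the refresh rate can be expressed as follows:

```python
# Hypothetical helpers, not part of OpenViBE: a flicker frequency is only
# renderable exactly if refresh_rate / frequency is a whole number of frames.

def achievable_frequencies(refresh_rate_hz, max_divisor=12):
    """Flicker frequencies the display can render exactly (rounded to 2 dp)."""
    return [round(refresh_rate_hz / n, 2) for n in range(2, max_divisor + 1)]

def is_renderable(freq_hz, refresh_rate_hz, tol=1e-6):
    """True when the refresh rate divided by the frequency is an integer."""
    ratio = refresh_rate_hz / freq_hz
    return abs(ratio - round(ratio)) < tol

print(achievable_frequencies(60))  # [30.0, 20.0, 15.0, 12.0, 10.0, ...]
print(is_renderable(7.5, 60))      # True: 60 / 7.5 = 8 frames per cycle
print(is_renderable(7.1, 60))      # False: 60 / 7.1 is not a whole number
```

For example, the default frequency set mentioned later in this document (5.45, 6.66, 7.5) corresponds to the 60 Hz fractions 60/11, 60/9 and 60/8, quoted rounded.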

Attention: On Windows, be sure to run the application at least once in windowed mode, as Windows will pop up a dialogue asking you to grant the application permission to pass through the firewall. This is necessary for VRPN to work.

EEG Input Settings

This box provides proxies for configuring several boxes in the subsequent scenarios. For the standard procedure you do not need to modify these settings.

  • Channels, Selection Method, Match Method : These settings are passed to a channel selector box which is placed immediately after the Acquisition Client box in every scenario. They can be used to remove noisy or unrelated channels, for example. These settings are reflected in the /configuration/channel-selector.cfg file.
  • Training Data File : This file will be read by Generic Stream Reader box inside the CSP and Classifier training scenarios. The default value points to a file generated by the training scenario on every run. This setting is reflected in /configuration/file-reader-training.cfg file.
  • Testing Data File : This file will be read by Generic Stream Reader box inside the Performance Testing scenario. This setting is reflected in /configuration/file-reader-testing.cfg file. Useful for comparison of performance on subsequent runs of the training scenario.

Experiment Settings

This box configures the actual settings of the application relevant to the SSVEP stimulation.

  • Target Light Color : Targets flash by swapping between two colours at the desired frequency. For identification one is called Light and the other Dark, but they can be of any colour. This setting defines the Light colour.
  • Target Dark Color : This setting defines the Dark colour. The colours are saved as OGRE Material descriptions in /appdata/materials/flickering.material.
  • Stimulation Frequencies : A semicolon (;) separated list of frequencies. These are real numbers (the decimal separator is a period (.)). This list can be of any length provided that there are enough targets defined in the stimulator application. In the case of the Mind Shooter the three targets correspond to Cannon, Left and Right respectively. This setting is reflected in the /appconf/application-configuration.conf file. Additionally, a series of temporal filters will be created for each frequency, as the processing chains consider both the stimulation frequency and its first harmonic.
  • Processing Epoch Duration : How many seconds of signal are used to generate one feature vector.
  • Processing Epoch Interval : Time between the starts of two consecutive epochs of signal. Since the interval is shorter than the duration, the epochs overlap! These two settings are written into /configuration/time-based-epoching.cfg.
  • Processing Frequency Tolerance : The scenarios will filter the signal between (Stimulation frequency - PFT) and (Stimulation frequency + PFT) for each target.
  • CSP Spatial Filter Order : Setting passed to CSP filter trainers. This is saved into /configuration/csp-filter.cfg file.
  • Focus Point : If true then the Mind Shooter trainer and shooter targets will always have a dot in the middle, so the user can focus on that. Saved in /appdata/application-configuration.conf
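To make the relationships between these settings concrete, here is an illustrative sketch of the band-pass ranges and overlapping epoch layout they imply. The variable names and the tolerance/epoch values are our assumptions for illustration, not OpenViBE defaults:

```python
# Illustrative only: derive per-target band-pass ranges and the epoch layout
# from Experiment Settings values (the numbers below are assumed examples).

frequencies    = [5.45, 6.66, 7.5]  # Stimulation Frequencies (Hz)
tolerance      = 0.25               # Processing Frequency Tolerance (Hz)
epoch_duration = 0.5                # Processing Epoch Duration (s)
epoch_interval = 0.1                # Processing Epoch Interval (s)

# One band per stimulation frequency and per harmonic considered:
# [f - PFT, f + PFT] and [2f - PFT, 2f + PFT].
bands = [(h * f - tolerance, h * f + tolerance)
         for f in frequencies for h in (1, 2)]

# Epochs start every `epoch_interval` seconds but last `epoch_duration`
# seconds, so consecutive epochs overlap.
epochs = [(i * epoch_interval, i * epoch_interval + epoch_duration)
          for i in range(5)]

print(bands[:2])   # bands for 5.45 Hz and its first harmonic 10.9 Hz
print(epochs[:2])  # (0.0, 0.5), (0.1, 0.6): overlapping windows
```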

Additionally, this scenario will also create a folder named userdata-[CURRENT_TIMESTAMP] in the /signals folder under the scenario root. All recordings and logs done with the other SSVEP scenarios will be put into this folder, until the next time the configuration scenario is run.

ms-2-training-acquisition.xml

This scenario is used for recording data to train the classifier. It can be configured with the SSVEP Training Controller box. It writes data into the previously created userdata folder.

Scenario settings

  • Goal Sequence : Sequence of targets to be marked for training.
  • Stimulation Duration : How long the target will effectively flash.
  • Break Duration : Break between end of a flashing sequence and disclosure of the next target.
  • Flickering Delay : Time for which the scenario will wait between marking the current training target and starting the flickering animation.
  • Epoching Delay : (non-classic only) Segment at the start of each training epoch to skip before feature extraction, in seconds. Skipping it may avoid transient ERP effects.

During the acquisition, several stimulations are sent to the application. In the default version, the application uses TCP Tagging to combine them with the EEG data in the Acquisition Server. In the classic version, the stimulations are inserted directly in the Designer (less time-accurate). In both cases, the stimulations are sent from the scenario to the stimulator application as VRPN button presses. The following diagram explains the communication between the components.

Diagram of the SSVEP training scenario flow

ms-3-CSP-training.xml

This scenario will train the CSP spatial filters. In the default version, one multiclass CSP box produces the filter bank. In the classic version, there are 6 different CSP filters, two for each frequency (base and first harmonic). The purpose of the CSP filters is to aggregate the electrodes spatially, finding filters that maximize the difference in signal variance between the classes.

This scenario uses several Lua boxes which will be described later.
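For readers unfamiliar with CSP, here is a minimal numpy sketch of binary CSP training as used conceptually by the classic flavour. The OpenViBE trainer boxes implement this internally; the function and variable names below are ours:

```python
import numpy as np

def train_csp(X_a, X_b, n_filters=2):
    """Binary CSP. X_a, X_b: arrays of shape (trials, channels, samples)."""
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    Ca, Cb = mean_cov(X_a), mean_cov(X_b)
    # Whitening transform for the composite covariance Ca + Cb.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened class-A covariance, sorted descending.
    d, B = np.linalg.eigh(P @ Ca @ P.T)
    B = B[:, np.argsort(d)[::-1]]
    W = B.T @ P                      # rows of W are spatial filters
    # Keep filters from both ends of the spectrum: they maximise variance
    # for one class while minimising it for the other.
    picks = list(range(n_filters // 2)) + list(range(-(n_filters // 2), 0))
    return W[picks]

# Tiny synthetic demo: class A is strong on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(20, 3, 200)); X_a[:, 0, :] *= 3
X_b = rng.normal(size=(20, 3, 200)); X_b[:, 1, :] *= 3
W = train_csp(X_a, X_b, n_filters=2)
print(W.shape)   # (2, 3): two spatial filters over three channels
```

The default flavour's multiclass CSP box generalizes the same idea to more than two classes.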

ms-4-classifier-training.xml

This scenario finally generates the classifiers. In the classic version, epochs during which the flickering target was marked are sent to the Target input and all of the other epochs are sent to the Non-Target input. In the default version, all data features are concatenated and processed by a single multiclass classifier.

Description of Lua boxes inside the classic scenarios:

Target Separator

This box processes OpenViBE Label_XX stimulations; all other stimulations are simply ignored. In its settings you define two groups (Targets, Non-Targets). When a Label stimulation is received, the box sends a (user-definable) stimulation on its first output if the stimulation is in the Targets group, and on its second output when it is in the Non-Targets group.

This output is used to slice the signal into epochs which are fed to the classifier.
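The routing logic can be sketched as follows (Python for illustration; the real box is a Lua script, and the exact stimulation identifiers used here are assumptions):

```python
# Sketch of the Target Separator routing (identifiers are assumptions).

def separate(label, targets,
             on_target="OVTK_StimulationId_Target",
             on_nontarget="OVTK_StimulationId_NonTarget"):
    """Route a Label_XX stimulation to one of two outputs; ignore others."""
    if not label.startswith("OVTK_StimulationId_Label_"):
        return None                      # not a Label stimulation: ignored
    if label in targets:
        return ("output_1", on_target)   # Targets group
    return ("output_2", on_nontarget)    # Non-Targets group

print(separate("OVTK_StimulationId_Label_01",
               {"OVTK_StimulationId_Label_01"}))
# ('output_1', 'OVTK_StimulationId_Target')
```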

Flipswitch

This box sends a stimulation on its output once it has received as many stimulations as it has inputs (in general one per input, but this is not checked).

This is used to send a stop-scenario stimulation once each of the classifiers has finished training.
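Its behaviour amounts to a simple countdown (Python sketch, names ours; the real box is a Lua script and the output stimulation name is an assumption):

```python
# Sketch of the Flipswitch logic: count incoming stimulations and fire one
# output stimulation when the count reaches the number of inputs.

class Flipswitch:
    def __init__(self, n_inputs,
                 out_stimulation="OVTK_StimulationId_ExperimentStop"):
        self.remaining = n_inputs
        self.out = out_stimulation

    def receive(self, stimulation):
        """Count any incoming stimulation; fire once all inputs reported."""
        self.remaining -= 1
        return self.out if self.remaining == 0 else None

fs = Flipswitch(3)   # e.g. three classifier trainers to wait for
print(fs.receive("done-1"), fs.receive("done-2"), fs.receive("done-3"))
# None None OVTK_StimulationId_ExperimentStop
```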

ms-5-online.xml

This is the actual game. The goal is to shoot the enemy ships (dark with red and orange decorations) while avoiding shooting the “friend” ship (white with blue decorations).

The ship can go left, right and shoot. Several aids can be switched on and will be described later.

The game can be configured with several options. Some of them require modifying the actual code and will be discussed in an addendum to this tutorial.

SSVEP Shooter Controller

  • Target Types : List of non-player ships that will arrive in sequence. The types are described below.
  • Target Positions : Positions at which the non-player ships will appear. In “One By One” mode, position 0 is the far left and 7 the far right.
  • Pilot Assist : The ship will slow down in front of the enemies and will shoot slower in front of the allies.
  • Target Lockdown : Unusable commands (left and right at corresponding edges and cannon when nothing is in front) will be disabled.
  • Display Feedback : Display progressive concentration feedback on the flickering targets.
  • One By One : If true, enemies will appear one by one at the positions specified by the first two settings. If false, enemies will appear three at a time. If the position is 0, the next three enemies will appear at positions 1, 6 and 4 respectively; if the position is 1, the ships will appear at positions 6, 1 and 4.
  • Analog Control : If true, the classification probabilities will be used to control the ship. If false, the control is done by classifier labels.
  • Shooter log level : Which level of information to print by the Shooter application.

Non-player ship types

  1. Big ship, worth 100 points, leaves after 2 minutes.
  2. Medium ship, worth 200 points, leaves after 90 seconds.
  3. Small ship, worth 400 points, leaves after 1 minute.
  4. Medium ship, worth 500 points, leaves after 1 minute.
  5. Medium ship, worth 1500 points, leaves after 1 minute.
  6. Medium ship, allied, penalty of 500 points, leaves after 1 minute.

In the classic version, a move is selected only when exactly one of the classifiers reports a positive result. In the default version, there are four classes, 0 to 3, where class 0 corresponds to no action. When using analog control, the choice is made by selecting the class with the highest likelihood.
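The two decision rules can be sketched as follows (illustrative Python; the function names are ours, not OpenViBE's):

```python
# Sketch of the two Mind Shooter decision rules described above.

def classic_decision(votes):
    """votes: one boolean per binary classifier.
    A move fires only when exactly one classifier is positive."""
    positives = [i for i, v in enumerate(votes) if v]
    return positives[0] if len(positives) == 1 else None

def default_decision(probabilities):
    """probabilities: likelihoods for classes 0..3, class 0 = no action.
    Analog control picks the most likely class."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return None if best == 0 else best

print(classic_decision([False, True, False]))   # 1: unambiguous vote
print(classic_decision([True, True, False]))    # None: ambiguous, no move
print(default_decision([0.1, 0.2, 0.6, 0.1]))   # 2: most likely class
```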

ms-6-performance-test.xml

This scenario is not required to play the game, but can be used to examine the quality of the classifier predictions. See more below.

Troubleshooting

Sometimes the control of the ship may appear bad and/or be biased to one side. There are several possible reasons for this, and they are not necessarily due to bugs in the implementation.

Possible reasons for bad performance

Depending on the subject wearing the EEG headset, recorded Mind Shooter data may be more or less easy to discriminate. External analyses of the data, performed e.g. in the EEGLAB software, suggest that the SSVEP patterns may not even be present in all subjects. This is not a simple coding fault or necessarily a bug. One possible reason is that the subject does not generate the visually evoked patterns strongly enough, for example because the flickering apertures are too small. It is recommended to run Mind Shooter on a big enough screen. The problem could also be of a more cortical origin (the response smeared or cancelled by conduction, not produced in the first place, swamped by other activity, unsuitable frequencies for the subject, …), suboptimal signal processing, overfitting, and so on. Please consult the SSVEP literature on such possibilities.

To test whether the reason for bad control is the data and/or an insufficient signal processing pipeline, you can do the following experiment. Run the training acquisition scenario twice in a row on the user. Use the first recording to train the CSP and the classifier, and the second recording to run the performance test scenario. If the accuracy of the performance test is low, it indicates that the two files have little in common that the machine learning could extract from the first one. This simulates the training + gameplay split: the first session is the training, the second is the gameplay. Note that it is largely pointless to run the performance test with the same file that was used for training, as that is relatively easy to make perform well due to a problem called overfitting. However, if the second file gives good performance but the ship control is still bad, there is reason to believe that there is a bug somewhere.

You may obtain different performance on different users with different frequencies. These can be configured in Experiment Settings. The default set is 5.45, 6.66 and 7.5 Hz. Other useful sets can be 10, 12 and 15 Hz, or 12, 15 and 20 Hz. The frequencies should not be integer multiples of each other, and they should be such that your display is capable of rendering them.
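A quick way to check a candidate frequency set against the integer-multiple constraint (a helper of ours, not part of OpenViBE):

```python
# Report frequency pairs where one is an integer multiple of another,
# which would make their harmonics collide.

def multiples_conflict(freqs, tol=1e-3):
    """Return (low, high) pairs whose ratio is (nearly) an integer."""
    pairs = [(a, b) for a in freqs for b in freqs if a < b]
    return [(a, b) for a, b in pairs if abs(b / a - round(b / a)) < tol]

print(multiples_conflict([5.45, 6.66, 7.5]))   # []: the default set is fine
print(multiples_conflict([10, 12, 20]))        # [(10, 20)]: 20 = 2 x 10
```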

You can also try increasing the number of training trials, or modifying other parameters such as the epoch durations, the classifier types and regularization, CSP parameters such as regularization, and so on. However, longer training is likely not pleasant for the user. If you try modifying the parameters, you can use the two-file testing technique described above. Remember that changing parameters that alter the nature of the recordings (such as the flicker frequency) requires new recordings to be made.

If you have better-performing signal processing chains for the Mind Shooter, feel free to contribute! :)

Debugging

The Mind Shooter is somewhat difficult to debug because there are essentially three independently running components involved: the Acquisition Server, the Designer, and the Shooter application. The Designer and the Shooter communicate over VRPN, a network protocol. In the default version, the Shooter also communicates with the Acquisition Server using TCP/IP (for the tagging), but this is only used during the training data collection.

Some amount of work has gone into ensuring that the Mind Shooter application behaves properly when receiving meaningful predictions from the Designer side. This can be tested, for example, by inserting hand-made classifier predictions into a CSV file and using a CSV Reader box to feed them to the VRPN box that controls the ship. When the Shooter application is in windowed mode, it is easy to observe, by routing the data both to a Stimulation Listener (or EBML Spy) box and to the ship, that the ship responds appropriately to each command. If you want to brave the code side of the Shooter, the files ovamsCCommandControlDiscrete.cpp and ovamsCCommandControlAnalog.cpp are good places to start: they are where the predictions are received from VRPN and used to move the ship (for the Discrete or Analog control choice in the application config, respectively).

References

1. Jozef Legény, Raquel Viciana-Abad, and Anatole Lécuyer. “Towards Contextual SSVEP-based BCI controller: Smart activation of stimuli and controls weighting”. IEEE Transactions on Computational Intelligence and AI in Games, IEEE Computational Intelligence Society, 2013.
