Multimodal Graz visualization

Summary

Doc_BoxAlgorithm_MultimodalGrazVisualization.png
  • Plugin name : Multimodal Graz visualization
  • Version : 1.0
  • Author : Thibaut Monseigne
  • Company : Inria
  • Short description : Generalization of visualization plugin for the Graz experiment
  • Documentation template generation date : Oct 29 2020

Description

Generalization of Visualization/Feedback plugin for the Graz experiment

This is a generalization of the Graz visualization: it can display the classification result for two or more classes, and it lets you choose which action (modality) the user is asked to perform.

Inputs

1. Stimulations

The timeline of the events.

  • Type identifier : Stimulations (0x6f752dd0, 0x082a321e)

2. Amplitude

For online use and feedback, the strength of the current activation. This can be, for example, the continuous output of a classifier.

  • Type identifier : Streamed matrix (0x544a003e, 0x6dcba5f6)

Outputs

The box also computes a confusion matrix; if you want to write it to CSV, it is recommended to save only the last matrix (see the sketch after the output list).

1. Displayed Bar Size

  • Type identifier : Streamed matrix (0x544a003e, 0x6dcba5f6)

2. Confusion Matrix

  • Type identifier : Streamed matrix (0x544a003e, 0x6dcba5f6)
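
The confusion matrix is presumably cumulative: each completed online trial adds one count at (expected class, predicted class), which is why only the final matrix needs to be kept when exporting to CSV. A small Python sketch of that bookkeeping, for illustration only (the function name is hypothetical, not the box's API):

  def update_confusion(matrix, expected, predicted):
      """Add one finished trial to a cumulative confusion matrix.

      matrix    -- square list of lists; rows = expected class, columns = predicted class
      expected  -- index of the class the instruction asked for
      predicted -- index of the class the classifier chose for this trial
      """
      matrix[expected][predicted] += 1
      return matrix

  # Example with two classes: three trials, one misclassification.
  confusion = [[0, 0], [0, 0]]
  for expected, predicted in [(0, 0), (1, 1), (0, 1)]:
      update_confusion(confusion, expected, predicted)
  print(confusion)  # [[1, 1], [0, 1]]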

Settings

1. Show instruction

If true, the instruction image for the requested modality is shown to the user.

  • Type identifier : Boolean (0x2cdb2f0b, 0x12f231ea)
  • Default value : [ true ]

2. Feedback display mode

Selection of the feedback mode (see the sketch below):
None = no feedback is shown
Positive Only = feedback is shown only when the predicted modality matches the expected one
Best Only = feedback is shown only for the predicted modality
All = feedback is shown for all modalities

  • Type identifier : Feedback display mode (0x5261636b, 0x464d4f44)
  • Default value : [ Positive Only ]
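
The mode only changes which feedback bars are drawn; the bar sizes themselves come from the integrated classifier output. Below is a minimal sketch of the selection logic, written in Python purely for illustration (the names and the dictionary-based interface are hypothetical, not the box's actual API):

  from enum import Enum

  class FeedbackMode(Enum):
      NONE = "None"
      POSITIVE_ONLY = "Positive Only"
      BEST_ONLY = "Best Only"
      ALL = "All"

  def bars_to_draw(mode, bar_sizes, expected_class):
      """Return {class index: bar size} for the bars that should be rendered.

      bar_sizes      -- one value per class (e.g. the integrated classifier output)
      expected_class -- index of the class the instruction asked for
      """
      predicted_class = max(bar_sizes, key=bar_sizes.get)
      if mode is FeedbackMode.NONE:
          return {}
      if mode is FeedbackMode.POSITIVE_ONLY:
          # Show feedback only when the prediction matches the instruction.
          if predicted_class == expected_class:
              return {predicted_class: bar_sizes[predicted_class]}
          return {}
      if mode is FeedbackMode.BEST_ONLY:
          # Always show the predicted class, whether right or wrong.
          return {predicted_class: bar_sizes[predicted_class]}
      # FeedbackMode.ALL: show every class's bar.
      return dict(bar_sizes)

  # Example: two classes (0 = left, 1 = right), instruction was "left",
  # but the classifier currently favours "right".
  print(bars_to_draw(FeedbackMode.POSITIVE_ONLY, {0: 0.2, 1: 0.7}, expected_class=0))  # {}
  print(bars_to_draw(FeedbackMode.BEST_ONLY, {0: 0.2, 1: 0.7}, expected_class=0))      # {1: 0.7}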

3. Delay feedback

If true, feedback is shown only after the trial; otherwise it is shown immediately.

  • Type identifier : Boolean (0x2cdb2f0b, 0x12f231ea)
  • Default value : [ false ]

4. Show accuracy

If true, a small matrix displays how many online trials matched the instructed modality (the arrow direction in the classic Graz setup).

  • Type identifier : Boolean (0x2cdb2f0b, 0x12f231ea)
  • Default value : [ false ]

5. Predictions to integrate

How many classifier predictions to integrate when computing the feedback bar size (see the sketch below).

  • Type identifier : Integer (0x007deef9, 0x2f3e95c6)
  • Default value : [ 5 ]
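
One plausible reading of this setting is a sliding average over the last N prediction vectors received on the Amplitude input; the box's exact integration rule may differ, so the Python sketch below is only an assumption for illustration (names are hypothetical):

  from collections import deque

  class FeedbackIntegrator:
      """Average the last `n_predictions` prediction vectors to smooth the bar."""

      def __init__(self, n_predictions=5, n_classes=2):
          self.history = deque(maxlen=n_predictions)
          self.n_classes = n_classes

      def push(self, prediction):
          """prediction -- one value per class, e.g. one streamed matrix chunk."""
          assert len(prediction) == self.n_classes
          self.history.append(prediction)

      def bar_sizes(self):
          """Current bar size per class: mean of the stored predictions."""
          if not self.history:
              return [0.0] * self.n_classes
          return [sum(p[c] for p in self.history) / len(self.history)
                  for c in range(self.n_classes)]

  # Example with the default of 5 predictions and two classes.
  integrator = FeedbackIntegrator(n_predictions=5, n_classes=2)
  for p in ([0.4, 0.6], [0.3, 0.7], [0.5, 0.5]):
      integrator.push(p)
  print(integrator.bar_sizes())  # approximately [0.4, 0.6]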

6. Image no instruction

Path to the image shown when the instruction is not displayed.

  • Type identifier : Filename (0x330306dd, 0x74a95f98)
  • Default value : [ ${Path_Data}/plugins/simple-visualization/graz/none.png ]

7. Stimulation modality 1

Label of the stimulation for the first class.

  • Type identifier : Stimulation (0x2c132d6e, 0x44ab0d97)
  • Default value : [ OVTK_GDF_Left ]

8. Image modality 1

Path to the modality image of the first class.

  • Type identifier : Filename (0x330306dd, 0x74a95f98)
  • Default value : [ ${Path_Data}/plugins/simple-visualization/graz/left.png ]

9. Stimulation modality 2

Label of the stimulation for the second class.

  • Type identifier : Stimulation (0x2c132d6e, 0x44ab0d97)
  • Default value : [ OVTK_GDF_Right ]

10. Image modality 2

Path to the modality image of the second class.

  • Type identifier : Filename (0x330306dd, 0x74a95f98)
  • Default value : [ ${Path_Data}/plugins/simple-visualization/graz/right.png ]

Examples

Miscellaneous

The timeline required by the box can be generated by a Lua stimulator. OpenViBE is bundled with a few motor imagery examples illustrating this (in folder "bci-examples/").

To place the markers (stimulations) into the recorded EEG stream accurately in time, the box connects to the Acquisition Server's TCP Tagging plugin and forwards the received timeline there after rendering. Subsequent scenarios and writers should therefore use the timeline from the Acquisition Server output rather than the one coming directly from the timeline-generating box. However, since motor imagery is typically integrated over a long time window, this paradigm may be less sensitive to marker alignment issues than, for example, P300.
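
For reference, the Acquisition Server's TCP Tagging interface is a plain TCP socket (port 15361 by default) that receives each tag as three little-endian 64-bit unsigned integers: flags, stimulation identifier, and timestamp (a zero timestamp asks the server to timestamp the tag on reception). The box already does this forwarding itself; the Python sketch below only illustrates the wire format, based on the OpenViBE TCP Tagging documentation, and should be checked against your OpenViBE version:

  import socket
  import struct

  def send_tag(stimulation_id, host="localhost", port=15361):
      """Send one stimulation tag to the Acquisition Server's TCP Tagging plugin.

      Assumed wire format: three little-endian uint64 values
      (flags, stimulation id, timestamp); timestamp 0 means
      "stamp on reception".
      """
      with socket.create_connection((host, port)) as sock:
          sock.sendall(struct.pack("<QQQ", 0, stimulation_id, 0))

  # Example: 0x0301 is the usual code for OVTK_GDF_Left (assumption, check
  # your stimulation table).
  send_tag(0x0301)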