Python generated stimulations

Working with OpenViBE signal processing scenarios and doing scenario/BCI design
joseph
Posts: 20
Joined: Wed Jun 12, 2013 1:33 pm

Python generated stimulations

Post by joseph »

Hello,

I recently posted about receiving UDP messages and generating stimulations from their content using the Python Scripting box.
This approach seems to be working right now, but I'd like to make sure I'm not missing any potential issue, and that my stimulations are properly written together with the EEG stream in a file for use with classifiers.

In the meantime I came across the page about TCP Tagging, so I know I'm not doing things the recommended way, but for now let's assume my use case is not time-critical enough to require it.
Also, if I understand the motivation behind TCP Tagging correctly, my approach could perhaps still be useful for cases where the stimulations are not predefined but come from an unpredictable real-time input.

What actually bothers me is that after some time I no longer see the stimulations appear in the Signal Display window.
They are still all properly logged by a Stimulation Listener box.
I also had a quick look at a CSV recording test file, and the stimulations seem to still be written into it, even those which don't appear in the Signal Display window.
So I guess it might be a known issue with the Signal Display box... could anyone confirm (or rule it out)?

Here is my most recent python script, based on the python-clock-stimulator example.
As you can see, I make sure to continuously send time-tagged stimulation chunks, with an empty stimulation set if no UDP message was received:


from pyOSC3 import OSCServer

ip = "0.0.0.0"
port = 8005

# OVBox, OVStimulationHeader, OVStimulationSet, OVStimulation and the
# OpenViBE_stimulation dictionary are injected at runtime by the
# OpenViBE Python Scripting box.
class MyOVBox(OVBox):
	def __init__(self):
		OVBox.__init__(self)

	def initialize(self):
		self.server = OSCServer((ip, port))
		self.server.socket.setblocking(0)
		self.server.addMsgHandler("/ov/stimulation", self.onMessage)
		self.server.addMsgHandler("/ov/scenario", self.onMessage)

		self.negativeStimLabel = self.setting['Negative Stim Label']
		self.neutralStimLabel = self.setting['Neutral Stim Label']
		self.positiveStimLabel = self.setting['Positive Stim Label']
		self.noPicStimLabel = self.setting['No Pic Stim Label']
		self.endStimLabel = self.setting['End Stim Label']

		self.stimCode = None
		self.negativeStimCode = OpenViBE_stimulation[self.negativeStimLabel]
		self.neutralStimCode = OpenViBE_stimulation[self.neutralStimLabel]
		self.positiveStimCode = OpenViBE_stimulation[self.positiveStimLabel]
		self.noPicStimCode = OpenViBE_stimulation[self.noPicStimLabel]
		self.endStimCode = OpenViBE_stimulation[self.endStimLabel]

		self.output[0].append(OVStimulationHeader(0., 0.))

	def process(self):
		self.stimCode = None
		stimSet = OVStimulationSet(self.getCurrentTime(), self.getCurrentTime() + 1./self.getClock())

		# handle_request will eventually update self.stimCode if it triggers a call to onMessage()
		self.server.handle_request()

		if self.stimCode is not None:
			stimSet.append(OVStimulation(self.stimCode, self.getCurrentTime(), 0.))

		self.output[0].append(stimSet)

	def uninitialize(self):
		self.server.close()

	def onMessage(self, path, tags, args, data):
		arg = args[0]

		if path == "/ov/stimulation":
			if arg == "negative":
				self.stimCode = self.negativeStimCode
			elif arg == "neutral":
				self.stimCode = self.neutralStimCode
			elif arg == "positive":
				self.stimCode = self.positiveStimCode
			elif arg in ("snow", "nopic"):
				self.stimCode = self.noPicStimCode

		elif path == "/ov/scenario":
			if arg == "stop":
				self.stimCode = self.endStimCode

box = MyOVBox()
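In case it helps anyone reproduce this setup without installing an OSC library on the sending side, here is a minimal stdlib-only sketch of a sender I could use to exercise the box. It hand-encodes a single OSC string message (per the OSC 1.0 spec, the address, type-tag string and string argument are each null-terminated and padded to a multiple of 4 bytes) and fires it at the box over UDP. The address "/ov/stimulation" and port 8005 match the script above; everything else is illustrative.

```python
import socket

def osc_pad(s):
	"""Null-terminate a string and pad it to a multiple of 4 bytes (OSC 1.0 rule)."""
	b = s.encode("ascii") + b"\x00"
	return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_string_message(address, arg):
	"""Encode an OSC message carrying a single string argument (type tag ',s')."""
	return osc_pad(address) + osc_pad(",s") + osc_pad(arg)

if __name__ == "__main__":
	packet = osc_string_message("/ov/stimulation", "positive")
	sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
	sock.sendto(packet, ("127.0.0.1", 8005))  # port the box above listens on
	sock.close()
```

Since UDP is fire-and-forget, this runs even when no scenario is listening, which makes it handy for quick checks.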
Finally, there is another point I'd like to clarify :

My stimulations have a duration of 0, as in the python-clock-stimulator example.
My guess is that a classifier will consider that all signal chunks after a specific stimulation belong to this stimulation's class, until a new stimulation is received, at which point the following chunks belong to the new stimulation's class, and so on.
Is this guess right, or should I generate stimulations with non-null durations in order to train the classifier properly?

Many thanks for any feedback.
Best,
Joseph

Thomas
Posts: 211
Joined: Wed Mar 04, 2020 3:38 pm

Re: Python generated stimulations

Post by Thomas »

Hi Joseph,

Thanks for your detailed message.

I will look into the signal display box, there might be an issue indeed. What version of OpenViBE are you using at the moment ?

The Classifier Trainer does not actually look at the class stimulations. It only waits for the Train Trigger stimulation.
It expects feature vectors for each class on the corresponding feature vector input. When working with 2 classes, there are two feature vector inputs, and each should receive the feature vectors for the corresponding class.
The labels provided in the parameters of the Classifier Trainer box are only meant to be written into the configuration file that will then be used by the Classifier processor box.

Looking at the example provided with OpenViBE in scenarios/bci-examples/motor-imagery/motor-imagery-bci-2-classifier-trainer.xml, you will see that the class stimulations are sent to the Stimulation based epoching boxes, and the amount of data used for each class depends on the epoch duration given as a parameter of the box.
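To make the idea concrete, here is a small conceptual sketch (my own illustration, not the OpenViBE API) of what stimulation-based epoching does: for each class stimulation, it keeps the window of samples that follows the stimulation, with the window length set by the epoch duration.

```python
def epoch_after_stimulation(signal, fs, stim_time, epoch_duration):
	"""Return the samples in [stim_time, stim_time + epoch_duration).

	signal: flat list of samples for one channel,
	fs: sampling rate in Hz,
	stim_time / epoch_duration: in seconds.
	"""
	start = int(round(stim_time * fs))
	stop = start + int(round(epoch_duration * fs))
	return signal[start:stop]

if __name__ == "__main__":
	fs = 8                         # toy sampling rate
	signal = list(range(4 * fs))   # 4 seconds of fake samples
	# a stimulation at t = 1 s with a 2 s epoch keeps samples 8..23
	epoch = epoch_after_stimulation(signal, fs, 1.0, 2.0)
	print(len(epoch))              # 16 samples = 2 s at 8 Hz
```

This is why a zero-duration stimulation is fine for training: the stimulation only marks the onset, and the epoching box's duration parameter decides how much signal is attributed to the class.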

Hope this helps.

Don't hesitate if you have more questions.

Cheers,
Thomas

joseph
Posts: 20
Joined: Wed Jun 12, 2013 1:33 pm

Re: Python generated stimulations

Post by joseph »

Hi Thomas,

It definitely looks like a display issue: I ran a few more quick tests and the stimulations all seem to be there.
This is reassuring :)
For information, I'm currently using OpenViBE 3.2.0 on Windows 10.

Thanks a lot for the insight about how the classifiers and the epoching boxes work, it's a bit clearer now.
I think I still lack some basic understanding of the different types of signals involved in OpenViBE, and of BCI in general.

To give you a little context: I'm not really into BCI research, I'm just reviving an old project from 2014 about felt-emotion classification, meant to drive an interactive movie and based on a calibration process using the IAPS database (which I know is a bit controversial).
Fabien Lotte helped us at the time to create a scenario based on several patches, using CSP filters on the alpha, beta and gamma frequency bands, then feeding the filtered data into an LDA classifier.
The patch using the Classifier Trainer is actually VERY similar to motor-imagery-bci-2-classifier-trainer, if not identical.

I was only wondering about stimulation durations because I noticed in the logs that the stimulations we generated had a duration of zero, which implies stimulations can have a duration. But I guess I don't have to worry about this in my scenario.

Yesterday, following recent advice from Fabien, I tried to adapt the scenarios to use the Riemannian geometry boxes instead of CSP filters and ran into issues while trying the example patches, but I'll leave that for another post, as the updated scenarios from 2014 seem to be working as expected and you have cleared up the few doubts I had.

EDIT: never mind, the solution to the aforementioned issues with the examples was that I had to run OpenViBE as administrator so that it could write files in C:\Program Files.

Thanks again !
Joseph
