realtime and delay

Amélie
Posts: 13
Joined: Wed Nov 24, 2010 3:47 pm

realtime and delay

Post by Amélie »

Hello all,

I want to measure the processing time of a BCI loop using OpenViBE with different acquisition systems and drivers, in order to evaluate the delay between the user's decision and the result (display or effector activation). If necessary, I want to identify which steps take a long or variable time.

To do this, I added a few lines of code that write the CPU time to a file whenever a rising edge is detected on the first channel, which receives a generated square signal.
To measure the delay between signal reception in the OpenViBE Acquisition Server and a plugin in the Designer, I made a very simple scenario: just an acquisition client and a box detecting rising edges on the first channel.
I also checked that the time spent detecting rising edges and writing to the file is negligible compared to the other delays.
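
To give an idea, here is a minimal sketch of the kind of instrumentation I added (not my exact code: the helper name, the threshold, the monotonic clock and the channel-major layout are assumptions, just for illustration):

Code:

#include <chrono>
#include <cstddef>
#include <cstdio>

// Write a timestamp (in ms) to 'logFile' whenever a rising edge is detected on
// the first channel of an incoming chunk. 'firstChannel' points to the samples
// of channel 0; 'previousValue' keeps the edge-detection state between chunks.
void logRisingEdges(const double* firstChannel, std::size_t sampleCount,
                    double threshold, double& previousValue, std::FILE* logFile)
{
    for (std::size_t i = 0; i < sampleCount; ++i)
    {
        const double value = firstChannel[i];
        if (previousValue < threshold && value >= threshold)
        {
            const auto now = std::chrono::steady_clock::now().time_since_epoch();
            std::fprintf(logFile, "%.3f\n",
                         std::chrono::duration<double, std::milli>(now).count());
        }
        previousValue = value;
    }
}

int main()
{
    std::FILE* logFile = std::fopen("edge_times.txt", "w");
    if (!logFile) return 1;
    double previous = 0.0;
    // Example: one 32-sample chunk of channel 0 containing one rising edge.
    double chunk[32];
    for (int i = 0; i < 32; ++i) chunk[i] = (i < 16) ? 0.0 : 1.0;
    logRisingEdges(chunk, 32, 0.5, previous, logFile);
    std::fclose(logFile);
    return 0;
}

Comparing the timestamps logged at the different points of the chain then gives the delay between them.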

Depending on the chunk size (number of channels and number of samples) and on whether the anti-virus is active, the measured delay changes a lot. For example, on my computer, with 32 channels and chunks of 32 samples, this delay ranges from around 4 ms to more than 50 ms.
On average it could be acceptable, and I noticed no drift, but delay is also introduced by the acquisition system, by possible additional processing boxes and by the effectors, so it seems too much to me.

What is your opinion about this delay? Has anyone measured it (or planned to), in this way or another? Do you know how to reduce it?

Thank you
Amélie

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: realtime and delay

Post by ddvlamin »

Hi,

I came across what I think is a timing problem when using the P300 scenario; see the topic viewtopic.php?f=5&t=238&start=30 and scroll down to the two pictures. I noticed that sometimes the P300 peak was around 300 ms, but other times it was much earlier.

An explanation for this delay was given to me by yrenard in this topic: viewtopic.php?f=5&t=325&p=1465&hilit=block+size#p1465. One solution he proposes is to set the number of samples sent per block as low as possible (depending on how fast your computer is); this should minimize the delay.
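
To make the effect of the block size concrete, the buffering alone already accounts for at least one block duration; here is a rough back-of-the-envelope calculation (the 512 Hz sampling rate is only an assumption, adjust it to your device):

Code:

#include <cstdio>

int main()
{
    const double samplingRate = 512.0; // assumed device rate
    const int blockSizes[] = {4, 8, 16, 32};
    for (int n : blockSizes)
    {
        // Filling one block of n samples takes n / samplingRate seconds, so this
        // is the minimum delay added before the block can even be sent.
        std::printf("%2d samples/block -> %5.1f ms minimum buffering delay\n",
                    n, 1000.0 * n / samplingRate);
    }
    return 0;
}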

Your method for measuring the delay could be useful for me too, as I still suffer from this problem when recording P300 data; for motor-imagery experiments it does not seem to be a problem. But maybe the computer I use is just too slow to keep up with both acquisition and visualization. The high variability of the delay that you mention worries me a bit too. I would dig deeper into this, but at the moment I do not have the time.

Maybe one of the developers can further help you with your question.

Best regards,
Dieter

Amélie
Posts: 13
Joined: Wed Nov 24, 2010 3:47 pm

Re: realtime and delay

Post by Amélie »

Hello,

I tried a lower block size, but unfortunately one of my acquisition devices doesn't allow real-time processing below 32 samples per block.
I noticed that the delay is lower and less variable when all network connections, the anti-virus, etc. are deactivated. If you don't need to run the acquisition server and the designer on different machines, maybe this can help you too.

Best regards
Amélie

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: realtime and delay

Post by yrenard »

Dear Amélie, dear Dieter,

thank you for raising this challenging question, and thank you Dieter for pointing out the discussion we had some time ago about P300 detection.

The point is that as soon as the data enter the Designer, time is simulated rather than real time. The good thing about this approach is that it is never necessary to discard data if the processing happens to run late; better, we hope to catch up with the supposed time once a temporarily long processing step is over. The bad thing is that interaction with effectors cannot be real time either. If you only have visual feedback, it is not much of an issue. If you are controlling a powerful robot, you may want to put a safety layer between the OpenViBE outputs and the robot input: a real-time intermediate process that possibly interpolates/extrapolates the data coming from OpenViBE.
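
To make the idea concrete, such an intermediate process could look like the following sketch. This is not something shipped with OpenViBE, and the class name is made up; the hold-then-linearly-extrapolate strategy is only one example of what "interpolate/extrapolate" could mean here.

Code:

#include <chrono>
#include <cstdio>
#include <thread>

// Keeps the last two command values received from OpenViBE and extrapolates
// from them, so that a fixed-rate effector loop always has a value to use
// even if an OpenViBE chunk arrives late.
class CommandExtrapolator
{
public:
    using Clock = std::chrono::steady_clock;

    // Called whenever a new command value arrives from OpenViBE.
    void push(double value)
    {
        m_previousValue = m_lastValue;
        m_previousTime  = m_lastTime;
        m_lastValue     = value;
        m_lastTime      = Clock::now();
        m_hasTwo        = m_hasOne;
        m_hasOne        = true;
    }

    // Called by the real-time effector loop at its own fixed rate.
    double estimate() const
    {
        if (!m_hasOne) return 0.0;          // nothing received yet
        if (!m_hasTwo) return m_lastValue;  // a single value: just hold it
        const double dt = std::chrono::duration<double>(m_lastTime - m_previousTime).count();
        if (dt <= 0.0) return m_lastValue;
        const double elapsed = std::chrono::duration<double>(Clock::now() - m_lastTime).count();
        const double slope = (m_lastValue - m_previousValue) / dt;
        return m_lastValue + slope * elapsed; // linear extrapolation
    }

private:
    double m_lastValue = 0.0, m_previousValue = 0.0;
    Clock::time_point m_lastTime, m_previousTime;
    bool m_hasOne = false, m_hasTwo = false;
};

int main()
{
    CommandExtrapolator extrapolator;
    extrapolator.push(0.2); // values that would come from OpenViBE
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    extrapolator.push(0.3);
    std::this_thread::sleep_for(std::chrono::milliseconds(25));
    std::printf("extrapolated command: %f\n", extrapolator.estimate()); // roughly 0.35
    return 0;
}

The effector loop calls estimate() at its own rate (and can additionally clamp or rate-limit the result as a safety measure), while a separate thread feeds push() with whatever OpenViBE sends.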

Dieter, about the P300 delay: it was caused by the lack of synchronization in the acquisition server at connection time. The current version of OpenViBE now includes drift correction + sample-level synchronization at connection time. This ensures the P300 is correctly tagged and you should not have any more problems now, at least if you record the different sessions with a single instance of the Designer (that is, you don't close and reopen the Designer between sessions).

Amélie, the buffering of the acquisition server causes a delay that cannot be removed. However, the drift correction makes this delay vary over the session. If you want to avoid this behavior, you can disable the drift correction. Also, if you really want to achieve tight timings with OpenViBE, you will have to take care of the box scheduling. It is also interesting to note that some devices drift more than others, so choosing an appropriate device can help achieve better timings.

I can't end without mentioning that tagging the signal in hardware will always be the most accurate way of measuring timing and epoching evoked potentials. Not all hardware supports this, so we have to offer a generic alternative: drift correction + sample-level synchronization at connection time + software tagging. If you want to measure reliable timings, you should consider moving to hardware tagging.

Hope this helps,
Yann

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: realtime and delay

Post by ddvlamin »

Hi Yann,

I hope I'm not starting to annoy you, but I still have some questions, as I don't exactly understand some of these synchronization issues.
yrenard wrote:The current version of OpenViBE now includes drift correction + sample-level synchronization at connection time.
I understand what the drift correction does, but what exactly is the sample-level synchronization? I don't seem to find anything about it in the documentation of the acquisition server. Does it have anything to do with the fact that, at the time you press the play button in the Designer, there can be a varying number of samples already in the buffer of the acquisition server? I'm guessing it makes sure that not the whole buffer is sent to the Designer when a connection is made?
yrenard wrote:This ensures the P300 is correctly tagged and you should not have any more problems now, at least if you record the different sessions with a single instance of the Designer (that is, you don't close and reopen the Designer between sessions).
So it is not a good idea to take grand averages in an offline analysis over several days of recordings; that's probably the reason why I didn't see a clear P300 peak. So it is possible that on one day the P300 is actually located around 250 ms while on another day it is around 350 ms? I still don't understand how exactly this problem arises from opening and closing the Designer. Could you elaborate a bit more on this? Can it also occur when you reconnect the acquisition server between sessions, as I always do?
yrenard wrote:However, the drift correction makes this delay vary over the session.
If the acquisition server adds or removes samples on the fly to meet the required sampling frequency, I assume it is able to send its buffer to the Designer at the correct time? So I don't understand how drift correction can make the delay in the Designer vary (I do understand there can be a fixed delay, which is not a real problem); of course a delay can occur if the Designer cannot compute everything it needs before the next block arrives. Anyway, if a delay in the execution of the boxes is possible (with drift correction), can it still happen that the timing of the visualization of the P300 stimuli does not correspond to the correct tagging of the stimuli in the EEG data, causing a varying delay in the subject's response?

Sorry for the many questions; maybe they sound stupid, as I'm unfamiliar with real-time issues and synchronization problems, but I think it's quite important to understand how all these new features work and what their effects are.
Best regards,
Dieter Devlaminck

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: realtime and delay

Post by yrenard »

ddvlamin wrote:Hi Yann,
I hope I'm not starting to annoy you, but I still have some questions, as I don't exactly understand some of these synchronization issues.
No problem ;) I don't have much time to post on the forum, but I still try to when possible.
ddvlamin wrote:
yrenard wrote:The current version of OpenViBE now includes drift correction + sample-level synchronization at connection time.
I understand what the drift correction does, but what exactly is the sample-level synchronization? I don't seem to find anything about it in the documentation of the acquisition server. Does it have anything to do with the fact that, at the time you press the play button in the Designer, there can be a varying number of samples already in the buffer of the acquisition server? I'm guessing it makes sure that not the whole buffer is sent to the Designer when a connection is made?
Yes you got it right.
ddvlamin wrote:
yrenard wrote:This ensures the P300 is correctly tagged and you should not have any more problems now, at least if you record the different sessions with a single instance of the Designer (that is, you don't close and reopen the Designer between sessions).
So it is not a good idea to take grand averages in an offline analysis over several days of recordings; that's probably the reason why I didn't see a clear P300 peak. So it is possible that on one day the P300 is actually located around 250 ms while on another day it is around 350 ms? I still don't understand how exactly this problem arises from opening and closing the Designer. Could you elaborate a bit more on this? Can it also occur when you reconnect the acquisition server between sessions, as I always do?
This is related to the scheduling of the boxes in a scenario. The scheduling can change each time a scenario is opened. This can be worked around by specifying a different priority for each box; the priority is taken into account by the kernel. Unfortunately, this has not yet been exposed at the Designer level, which is why you don't find any user documentation about it. By the way, look at the TRACE messages when you start a scenario: the scheduling is printed there, with box priorities etc.
ddvlamin wrote:
yrenard wrote:However, the drift correction makes this delay vary over the session.
If the acquisition server adds or removes samples on the fly to meet the required sampling frequency, I assume it is able to send its buffer to the Designer at the correct time? So I don't understand how drift correction can make the delay in the Designer vary (I do understand there can be a fixed delay, which is not a real problem); of course a delay can occur if the Designer cannot compute everything it needs before the next block arrives. Anyway, if a delay in the execution of the boxes is possible (with drift correction), can it still happen that the timing of the visualization of the P300 stimuli does not correspond to the correct tagging of the stimuli in the EEG data, causing a varying delay in the subject's response?
The delay varies over time because the drift correction is done at the acquisition server application level, not at the driver level. The driver keeps sending fixed-size buffers, which are then split by the acquisition server according to the number of samples it removed or added.
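
As a very rough illustration (this is not the actual acquisition server code, just a toy model of the block boundaries), the server conceptually does something like this:

Code:

#include <cstdio>
#include <deque>
#include <vector>

// Toy model: the driver always delivers fixed-size buffers; the server pushes
// them into a FIFO, occasionally removes (or duplicates) a sample to correct
// the drift, and re-cuts the FIFO into the block size sent to the client.
// Because of the corrections, the moment at which a client block becomes
// complete shifts slightly over the session.
int main()
{
    const int driverBlockSize = 32;
    const int clientBlockSize = 32;
    std::deque<float> fifo;
    int sample = 0;

    for (int buffer = 0; buffer < 4; ++buffer)
    {
        for (int i = 0; i < driverBlockSize; ++i) fifo.push_back(static_cast<float>(sample++));

        if (buffer == 2) fifo.pop_back(); // drift correction: drop one sample
        // (to add a sample instead, push_back a duplicate of the last one)

        while (static_cast<int>(fifo.size()) >= clientBlockSize)
        {
            std::vector<float> block(fifo.begin(), fifo.begin() + clientBlockSize);
            fifo.erase(fifo.begin(), fifo.begin() + clientBlockSize);
            std::printf("sent block ending at sample %g, %zu sample(s) left over\n",
                        block.back(), fifo.size());
        }
    }
    return 0;
}
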
ddvlamin wrote:Sorry for the many questions; maybe they sound stupid, as I'm unfamiliar with real-time issues and synchronization problems, but I think it's quite important to understand how all these new features work and what their effects are.
No problem, I hope my rather short answers to your questions helped.
Yann

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: realtime and delay

Post by ddvlamin »

Dear Yann,

Thanks for your answers; they still have to sink in a bit, but I guess I should also take a closer look at the code to understand it all thoroughly. Anyway, don't worry, I will come back to this if I don't understand something :)

Best regards,
Dieter Devlaminck

Amélie
Posts: 13
Joined: Wed Nov 24, 2010 3:47 pm

Re: realtime and delay

Post by Amélie »

Hello,

thank you both for this instructive discussion!
I will try to reduce the delay at different levels and see if it is enough for my use, even with the acquisition server's buffering.
About the drift correction, you are right: I noticed immediately that it makes the delay vary much more :)
I can certainly save some time in the driver I'm using, but I don't quite see what you meant, Yann, by box scheduling.


Best regards
Amélie

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: realtime and delay

Post by yrenard »

Amélie,

box scheduling is the order in which the kernel activates the boxes of a scenario.

Regards,
Yann

Amélie
Posts: 13
Joined: Wed Nov 24, 2010 3:47 pm

Re: realtime and delay

Post by Amélie »

It is an important aspect of my delay problem, but as I am using a simple scenario with few boxes, the delay seems to be introduced mainly by the "acquisition" part (the order of magnitude is 2 to 12 ms for the scenario/effector activation, and 30 to 60 ms for acquisition server -> client).

I would like to test the performance (so, in this context, just the delay) of a box that acquires data directly from one of my acquisition devices, similar to the file-reading boxes. Which functionalities would I lose compared to using the acquisition server + client?

Best regards,
Amélie

jlegeny
Posts: 239
Joined: Tue Nov 02, 2010 8:51 am
Location: Mensia Technologies Paris FR
Contact:

Re: realtime and delay

Post by jlegeny »

You will not be able to use the automatic drift correction. Also, the server + client model makes it possible to do the acquisition and the processing on separate machines.

Amélie
Posts: 13
Joined: Wed Nov 24, 2010 3:47 pm

Re: realtime and delay

Post by Amélie »

Hello Jozef,

I think that will not be a problem for the measurement, as I already deactivate the drift correction (to process all data as fast as possible when they are received) and I use only one machine.

Regards,
Amélie

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: realtime and delay

Post by ddvlamin »

yrenard wrote:By the way, look at the TRACE messages when you start a scenario: the scheduling is printed there, with box priorities etc.
I wanted to check the priorities assigned to each box today, but I can't seem to find them when setting the log level to TRACE. Has this changed, or can I find them elsewhere?

Dieter

lbonnet
Site Admin
Posts: 417
Joined: Wed Oct 07, 2009 12:11 pm

Re: realtime and delay

Post by lbonnet »

Hi Dieter,

When I set the Log level to TRACE and play a scenario, I get the following messages in the console:

Code:

[ TRACE ] Scheduler initialize
[ TRACE ] Scheduled box : id = (0x000006ef, 0x00006043) priority = 0 name = Acquisition client
[ TRACE ] Scheduled box : id = (0x00001880, 0x000064c0) priority = 0 name = Signal display
The priority is implemented but is never exposed in the Designer; I mean, there is currently no way to set the priority of the boxes from the GUI.
You can change the priority of the box manually by modifying the XML scenario file.

The priority is an attribute of the box defined as:

Code:

<Attribute>
	<Identifier>(0xac367a9c, 0x2da95abe)</Identifier>
	<Value>1</Value>
</Attribute>
The Identifier is OV_AttributeId_Box_Priority, defined in the kernel. Change the value to set the priority of the box.
I know it's totally user-unfriendly... The need for a scheduling priority was foreseen when designing OpenViBE, but it has actually never been used.

L

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: realtime and delay

Post by ddvlamin »

Thanks for your answer. So is it possible to change the priorities of the boxes, or are these just dummy values in the XML file that are never used?

However, what I do not understand then is how the scheduling can change each time a scenario is opened (as stated above in one of Yann's posts) if the priorities are fixed in the file. Or is one of the boxes chosen randomly among boxes with the same priority, meaning that giving each box a different priority would solve this?

The reason for asking is that I had to write a front-end Python application that calls the P300 scenarios in a very easy and straightforward way for use in a clinic, but it closes and reopens the scenarios each time, so I'm worried that the different schedules will affect the performance of the P300.

Dieter
