ANT TMSI Refa 8

Obtaining data from various hardware devices
bpayan
Posts: 46
Joined: Fri Jan 08, 2010 4:02 pm

Re: ANT TMSI Refa 8

Post by bpayan »

Dear ddvlamin,

Thank you for your feedback.

I haven't used the P300 paradigm with the new TMSI driver, but I had the same problem with the old driver. When I tested the P300 paradigm, the classifier performance was more than 90%, yet when I used the online P300 scenario with the same recording, it didn't find any letters. I can't explain this, but when I used the P300 xDAWN paradigm it found all the letters. Have you tried that paradigm?

PS: I used a sampling frequency of 512 Hz; I don't know if this information helps.

Best regards,
Baptiste

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Guys,

Be careful when tweaking the configured parameters of the existing scenarios. For example, you should consider the following two things about the P300 speller paradigms:
  - The initial scenarios have been configured for a 512 Hz signal. If you change this value in the acquisition server, don't forget to keep it synchronized with the Signal Decimation box (e.g. if you multiply the sampling rate by two, do the same with the decimation factor). You have to do that consistently in all the scenarios.
  - The averaging parameter has been chosen so that the classifier trainer has a reasonable number of examples relative to the feature vector size. If you average the P300 epochs over time, the classifier trainer will have fewer examples to train on, so it will be less likely to generalize what it learned. The xDAWN scenarios are configured with no temporal averaging and 12 repetitions of 10 letters by default. So if you decide to average 3 repetitions over time, for example, you should consider training the classifier on more examples, e.g. spelling 30 letters instead of 10! (The sketch below spells out this arithmetic.)
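As a quick illustration of both points, here is a minimal sketch of the arithmetic (the values are the defaults mentioned above, assumed for illustration; this is plain C++, not OpenViBE code):

Code: Select all

#include <cstdio>

int main()
{
    // Default xDAWN speller values mentioned above (assumed for illustration).
    const int    samplingRate = 512;  // Hz, as set in the acquisition server
    const int    decimation   = 4;    // Signal Decimation box factor
    const double epochLength  = 0.5;  // seconds of signal kept per flash
    const int    letters      = 10;   // letters spelled during training
    const int    repetitions  = 12;   // flash repetitions per letter
    const int    averaging    = 1;    // epochs averaged over time (1 = none)

    // Feature vector size per channel: samples left after decimation.
    const int samplesPerEpoch =
        static_cast<int>(epochLength * samplingRate / decimation);

    // In a 6x6 speller, each repetition has 12 flashes: 2 targets
    // (one row, one column) and 10 non-targets.
    const int targetExamples    = letters * 2  * repetitions / averaging;
    const int nonTargetExamples = letters * 10 * repetitions / averaging;

    std::printf("samples per epoch per channel: %d\n", samplesPerEpoch);
    std::printf("target examples: %d, non-target examples: %d\n",
                targetExamples, nonTargetExamples);
    return 0;
}

Doubling the averaging halves both example counts, which is exactly why more letters must be spelled to compensate.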
Having a good classification rate on the dataset you used to train the classifier and a bad classification rate on new datasets suggests that the classifier is not correctly trained. At least, it cannot handle a new feature vector given the ones it saw during training, even if it classifies those initial feature vectors perfectly.

I'm not sure my English was OK, so feel free to ask for more explanation if that wasn't clear enough.

Hope this helps,
Yann

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

Thanks to both of you for the responses.
When I tested the P300 paradigm, the classifier performance was more than 90%, yet when I used the online P300 scenario with the same recording, it didn't find any letters.
Exactly the same here. I'm glad I'm not the only one with the problem then :). Somehow I still feared it was caused by my amplifier. Does the same problem occur with other amplifiers?
I can't explain this, but when I used the P300 xDAWN paradigm it found all the letters. Have you tried that paradigm?
Yes, I work with the xDAWN paradigm because I believe it should perform significantly better than without a spatial filter; it's also easier to use, as it selects the correct channel combinations. However, it's with the xDAWN algorithm that it does not work. It worked once without the spatial filter, but only once.
I used a sampling frequency of 512 Hz; I don't know if this information helps.
So far I have always used 256 Hz.
Be careful when tweaking the configured parameters of the existing scenarios. For example, you should consider the following two things about the P300 speller paradigms:
I still consider this a possibility, but I have checked it multiple times and read the boxes' documentation, and I can't seem to find any problem.
The initial scenarios have been configured for a 512 Hz signal. If you change this value in the acquisition server, don't forget to keep it synchronized with the Signal Decimation box (e.g. if you multiply the sampling rate by two, do the same with the decimation factor). You have to do that consistently in all the scenarios.
I recognize the importance of the decimation box, but if you use a sample block size of 32 and set the decimation factor to 4, it should work in both cases, 256 Hz and 512 Hz. The only difference would be the size of the feature vector. Is this correct? I also band-pass filter my signal between 1 and 20 Hz first, so no aliasing effects should occur, right?
The averaging parameter has been chosen so that the classifier trainer has a reasonable number of examples relative to the feature vector size. If you average the P300 epochs over time, the classifier trainer will have fewer examples to train on, so it will be less likely to generalize what it learned. The xDAWN scenarios are configured with no temporal averaging and 12 repetitions of 10 letters by default. So if you decide to average 3 repetitions over time, for example, you should consider training the classifier on more examples, e.g. spelling 30 letters instead of 10!
I agree with you: it is dangerous to train LDA on a set whose objects have more features than there are sample objects. I also average four times (without it I don't get a decent classification rate). I do 12 repetitions per character and have 8 characters (sometimes 12). So, if I understand correctly, I get only 24 sample objects with a feature dimension of 64 (sampling rate of 256 Hz and decimation factor of 4)? I agree this is not the most favorable kind of training set :)
Still, even with such a low number of sample objects, it is strange that 5-fold cross-validation gets a steady accuracy of 90-100% on all folds. This means it divides the set into 5 parts and tests on the 5 samples of one part, so I always get 4 or 5 out of 5 correct. Those features were not seen by the classifier before, so this accuracy should give a good idea of the eventual accuracy on the test set, right? Of course, I understand that it drops a bit (to 60%, for example), but dropping to zero accuracy?
So to me it seems the classifier found a good solution, and it must be something else that goes wrong. Also, this same setup works with my old amplifier and the Nexus driver.

However, I will test it again with less averaging and thus more training objects. But to correctly understand OpenViBE's training process, I still have a question about the cross-validation:
I trained on a set with 12 characters and twelve repetitions each (averaging 4 times), which then gives me 36 training objects? I did 5-fold cross-validation, so in one validation set there are 7 or 8 objects to test the classifier on. How is it possible that OpenViBE gives me a performance value of 96.5517% on one of the folds? If it classifies every test object correctly we have 100%; if we have only 6 out of 7 correct we have only 85.7% accuracy. So where does this 96% come from?

Best regards,
Dieter Devlaminck

EDIT:
Now, I don't understand why your device returns the values 2 and 3, but this confirmed the cause of the problem. I'll make a correction for this.
This must have changed with the newest amplifiers shipped by TMSI, and it is therefore almost impossible to find that error if you have an older amplifier. I will report this to TMSI, as the info in the SIGNAL_FORMAT structure causes confusion, especially because the GetSamples method then returns 4-byte samples instead.

EDIT:

Although this probably does not cause the problem (as it occurs in all my files), I still find the following observation strange:
I sample at 256 Hz and decimate by 4, so the signal info in OpenViBE states that the signal frequency is 64 Hz with a sample block size of 32. After the Stimulation Based Epoching (duration per epoch is half a second), this results in a sampling frequency of 66 Hz and a sample block size of 33.

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Dear ddvlamin,
Yes, I work with the xDAWN paradigm because I believe it should perform significantly better than without a spatial filter; it's also easier to use, as it selects the correct channel combinations. However, it's with the xDAWN algorithm that it does not work. It worked once without the spatial filter, but only once.
xDAWN will definitely give you better performance as soon as you have enough electrodes, at least 10 (we tested it with 13 here with no problem).
So far I have always used 256 Hz.
I recognize the importance of the decimation box, but if you use a sample block size of 32 and set the decimation factor to 4, it should work in both cases, 256 Hz and 512 Hz. The only difference would be the size of the feature vector. Is this correct? I also band-pass filter my signal between 1 and 20 Hz first, so no aliasing effects should occur, right?
Yes, this is correct, but the size of the feature vector matters: on the one hand it should not be too big, so that few examples are needed to train the classifier, and on the other hand it should be big enough to preserve the P300 shape! Signal Decimation was initially set to 4 with 512 Hz expected, so either move to 512 Hz or change the decimation factor to 2!
I agree with you: it is dangerous to train LDA on a set whose objects have more features than there are sample objects. I also average four times (without it I don't get a decent classification rate). I do 12 repetitions per character and have 8 characters (sometimes 12). So, if I understand correctly, I get only 24 sample objects with a feature dimension of 64 (sampling rate of 256 Hz and decimation factor of 4)? I agree this is not the most favorable kind of training set :)
12 letters x 2 target flashes x 12 repetitions / 4 averages = 72 target examples
12 letters x 10 non target flashes x 12 repetitions / 4 averages = 360 non-target examples
Still, even with such a low number of sample objects, it is strange that 5-fold cross-validation gets a steady accuracy of 90-100% on all folds. This means it divides the set into 5 parts and tests on the 5 samples of one part, so I always get 4 or 5 out of 5 correct. Those features were not seen by the classifier before, so this accuracy should give a good idea of the eventual accuracy on the test set, right? Of course, I understand that it drops a bit (to 60%, for example), but dropping to zero accuracy?
So to me it seems the classifier found a good solution, and it must be something else that goes wrong. Also, this same setup works with my old amplifier and the Nexus driver.
Well, in fact you are wrong, because there are more non-targets than targets in the training set. Consider a classifier that always fails to detect a P300: it would still have a classification rate of about 83%, because only the targets (2 of the 12 flashes per repetition) would be misclassified. If your classifier is under 75%, it's not a good classifier (this is for the P300 speller).
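For concreteness, a minimal sketch of that chance baseline (plain C++, using the example counts above):

Code: Select all

#include <cstdio>

int main()
{
    // From the counts above: 72 target and 360 non-target examples.
    const int targets = 72, nonTargets = 360;

    // A classifier that always answers "non-target" gets every
    // non-target right and every target wrong.
    const double baseline = 100.0 * nonTargets / (targets + nonTargets);
    std::printf("always-non-target baseline: %.1f%%\n", baseline); // ~83.3%
    return 0;
}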
I trained on a set with 12 characters and twelve repetitions each (averaging 4 times), which then gives me 36 training objects? I did 5-fold cross-validation, so in one validation set there are 7 or 8 objects to test the classifier on. How is it possible that OpenViBE gives me a performance value of 96.5517% on one of the folds? If it classifies every test object correctly we have 100%; if we have only 6 out of 7 correct we have only 85.7% accuracy. So where does this 96% come from?
See the previous answers, they will help you understand ;)
Although this probably does not cause the problem (as it occurs in all my files), I still find the following observation strange:
I sample at 256 Hz and decimate by 4, so the signal info in OpenViBE states that the signal frequency is 64 Hz with a sample block size of 32. After the Stimulation Based Epoching (duration per epoch is half a second), this results in a sampling frequency of 66 Hz and a sample block size of 33.
If the sample block was originally 32 (which is the default), then after decimation it should be 8.
The Stimulation Based Epoching should not change the sampling rate of your signal, just the epoch size. If you want half a second of signal sampled at 64 Hz, you should have 32 samples (still at 64 Hz).

In fact, it seems that the Signal Display box that you probably use to check the sampling rate has a bug. Use the EBML Stream Spy and you'll see that the sampling rate is OK after Stimulation Based Epoching ;)

However, we are discussing lots of things here, but the fact is that your old amplifier gave good results, so you should not get anything worse or better with your new amplifier. My advice is to focus on the acquisition driver first (as you used the Nexus driver with your old amplifier and the Refa driver with your new amplifier); there is probably something to find out there! The easiest way to check that would be to use the Refa driver with your old amplifier (it should work, right?) and see what comes out.

Also, just test easy things (within OpenViBE) such as jaw clenching, eye blinks on frontal channels, and alpha bursts on occipital channels when you close your eyes.

I hope this helps,
Best regards,
Yann

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

12 letters x 2 target flashes x 12 repetitions / 4 averages = 72 target examples
12 letters x 10 non target flashes x 12 repetitions / 4 averages = 360 non-target examples

Quote:
Still, even with such a low number of sample objects, it is strange that 5-fold cross-validation gets a steady accuracy of 90-100% on all folds. This means it divides the set into 5 parts and tests on the 5 samples of one part, so I always get 4 or 5 out of 5 correct. Those features were not seen by the classifier before, so this accuracy should give a good idea of the eventual accuracy on the test set, right? Of course, I understand that it drops a bit (to 60%, for example), but dropping to zero accuracy?
So to me it seems the classifier found a good solution, and it must be something else that goes wrong. Also, this same setup works with my old amplifier and the Nexus driver.


Well, in fact you are wrong, because there are more non-targets than targets in the training set. Consider a classifier that always fails to detect a P300: it would still have a classification rate of about 83%, because only the targets (2 of the 12 flashes per repetition) would be misclassified. If your classifier is under 75%, it's not a good classifier (this is for the P300 speller).

Quote:
I trained on a set with 12 characters and twelve repetitions each (averaging 4 times), which then gives me 36 training objects? I did 5-fold cross-validation, so in one validation set there are 7 or 8 objects to test the classifier on. How is it possible that OpenViBE gives me a performance value of 96.5517% on one of the folds? If it classifies every test object correctly we have 100%; if we have only 6 out of 7 correct we have only 85.7% accuracy. So where does this 96% come from?


See the previous answers, they will help you understand ;)
Oh god, I'm embarrassed; I should have known this. I completely forgot to count the non-target samples :oops: and the fact that there are row and column flashes... What was I thinking? I guess I was so clueless about how to find that error that I started to see errors everywhere :oops:
Such an unbalanced set is indeed even worse; maybe we need to consider simple cost-sensitive classification?
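One simple form of that (a sketch, not OpenViBE code: shifting the decision threshold of a log-likelihood-ratio classifier such as LDA according to class priors and misclassification costs):

Code: Select all

#include <cmath>
#include <cstdio>

// Minimum-risk threshold on a log-likelihood-ratio score
// log p(x|target) - log p(x|non-target): predict "target" when the
// score exceeds this value. costFN = cost of missing a target,
// costFP = cost of a false alarm.
double decisionThreshold(int nTargets, int nNonTargets,
                         double costFN, double costFP)
{
    const double pTarget =
        static_cast<double>(nTargets) / (nTargets + nNonTargets);
    return std::log((costFP * (1.0 - pTarget)) / (costFN * pTarget));
}

int main()
{
    // 72 targets vs 360 non-targets: with equal costs the threshold is
    // log(5) ~ 1.61; pricing a missed target 5x higher brings it to 0.
    std::printf("equal costs: %.2f\n", decisionThreshold(72, 360, 1.0, 1.0));
    std::printf("5x miss cost: %.2f\n", decisionThreshold(72, 360, 5.0, 1.0));
    return 0;
}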
If the sample block was originally 32 (which is the default), then after decimation it should be 8.
The Stimulation Based Epoching should not change the sampling rate of your signal, just the epoch size. If you want half a second of signal sampled at 64 Hz, you should have 32 samples (still at 64 Hz).
In fact, it seems that the Signal Display box that you probably use to check the sampling rate has a bug. Use the EBML Stream Spy and you'll see that the sampling rate is OK after Stimulation Based Epoching ;)
Indeed, but I wasn't too worried about that, I just wanted to be sure... I'm starting to see ghosts...
The easiest way to check that would be to use the Refa driver with your old amplifier (it should work, right?) and see what comes out.
I already tried it quickly and some things went wrong, so I have to check this again.
My advice is to focus on the acquisition driver first
Thanks for the hint; that's the confirmation I need, so I know where to start looking.
Also, just test easy things (within OpenViBE) such as jaw clenching, eye blinks on frontal channels, and alpha bursts on occipital channels when you close your eyes.
Indeed, we also discussed how we could test the timing, and we did some P300 tests with eye blinks instead; everything went fine, so there does not seem to be a timing issue.

OK, thanks for the clear answer, and my apologies for the stupid classifier remark; I really should have known that...

Kind regards,
Dieter Devlaminck

PS: TMSI confirms that the new amplifiers report 2- and 3-byte sample sizes in the SIGNAL_FORMAT structure. They say it concerns amplifiers with serial numbers starting with 0120 or 0121.
The driver needs to be version 6.0.0.73 or higher.

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

Hi,

On Friday I tested the native driver with the old amplifier. At first it did not work, because some channels seemed to saturate at a maximum value, resulting in a stepwise signal. I think this was caused by the fact that the GetSamples method takes a buffer of type PULONG, while each channel of that old amplifier is encoded as a signed long. So that was probably the cause of the saturation problem? To solve it I tried changing the PULONG type in the GetSamples method to PLONG; of course I expected a linking error here, but for some strange reason it did not complain at all. After that it seemed to work.

Then I tested the P300 paradigm and it also seemed to “work”, just as with the Nexus driver. So far, so good. Now I have to test the P300 again with the new amplifier and the native driver.

However, one problem remains. I recorded 15 characters as training data, learned the xDAWN spatial filter, and trained a classifier with only two averages (of 12 repetitions). That way I have more target training samples, as you mentioned before. Then I ran some tests again, and again a strange pattern emerges (see also one of the problems in the topic viewtopic.php?f=6&t=245). Some runs reach 100% accuracy, others 0%. This is no coincidence, as I can reproduce it and it happens every time I test: sometimes completely correct predictions, sometimes completely wrong predictions.

I included some recordings that were made one after another (so close in time to each other), as you can see from the files' timestamps. The first recording, p300_dieter_19022010_rec1.ov, is the training set of 15 characters. The others are test sets, of which the following are predicted completely correctly:

p300_dieter_19022010_rec3.ov
p300_dieter_19022010_rec4.ov

and the following are predicted completely incorrectly:

p300_dieter_19022010_rec2.ov
p300_dieter_19022010_rec5.ov
p300_dieter_19022010_rec6.ov

http://www.thewired.be/blog/wp-content/ ... rdings.zip

Has anyone encountered the same problems?

Kind regards,
Dieter Devlaminck

bpayan
Posts: 46
Joined: Fri Jan 08, 2010 4:02 pm

Re: ANT TMSI Refa 8

Post by bpayan »

Dear ddvlamin,
I tried changing the PULONG type in the GetSamples method to PLONG; of course I expected a linking error here, but for some strange reason it did not complain at all. After that it seemed to work.

I made some tests and saw this problem. I found a parameter in the SDK that indicates whether the buffer is composed of signed or unsigned values. That's why I committed a new fix for this problem on the forge last week.
I made a new test on Friday, and I now think the data from the amplifier is good. But I hadn't posted a message on the forum because, when I used the xDAWN speller scenario, my classifier was under 80%: it didn't find any letter, and I don't understand why. I'm looking for an explanation. Perhaps my problem is in the scenario parameters, because with another system and the same scenarios I found all the columns but all the rows were wrong; or perhaps it's a problem with the driver, I don't know.

Can you confirm that your correction is the same as my fix? Thank you; your results help me to know that we are on the right track.
Some runs reach 100% accuracy, others 0%. This is no coincidence, as I can reproduce it and it happens every time I test: sometimes completely correct predictions, sometimes completely wrong predictions.
I don't know if this is the cause of your problem, but when I had the device running and moved a window, I got the error message “Drop Data” on my console. This can become a problem when the speller is recording while a window is being moved (working on a big screen or a dual screen is a good way to reduce this problem). Can you tell me if you get the same message and whether your problem is linked to it?

Thank you for your response.
Best regards,

Baptiste

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

Dear Baptiste,
I don't know if this is the cause of your problem, but when I had the device running and moved a window, I got the error message “Drop Data” on my console. This can become a problem when the speller is recording while a window is being moved. Can you tell me if you get the same message and whether your problem is linked to it?
With the native driver I do not get error messages. Only when I use the Nexus driver do I get warning messages that dummy samples are removed or added. This happens when I move windows such as the acquisition server's.
(working on a big screen or a dual screen is a good way to reduce this problem).
I will try this, but for now I analysed the data in MATLAB, and one can see some remarkable results in the attached figures. The figure named p300_correctsets displays the non-target and target signals calculated on the sets where I reached 100% accuracy. The figure named p300_wrongsets displays the non-target and target signals calculated on the sets where I reached 0% accuracy. One can see the shift in the P300 peak between the wrong and correct sets. In the correct sets the peak starts around 300 ms after the stimulus, but in the wrong sets it is significantly earlier. This can't be due to habituation effects, because sometimes I record two sets that are predicted correctly, then a few sets that are predicted wrongly, and then again a few that work with 100% accuracy, and so on... (I did around forty small recordings that day, alternating between correct and wrong sets.)
I still doubt that the acquisition can cause this time shift; at least, I have no clue how the acquisition could be responsible for such a shift.
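For reference, a minimal C++ sketch of the kind of check behind those figures (the actual analysis was done in MATLAB; the epoch layout and names here are assumptions):

Code: Select all

#include <cstdio>
#include <vector>

// Average a set of single-trial epochs (one spatially filtered channel,
// all epochs the same length) and return the latency of the peak.
double peakLatencyMs(const std::vector<std::vector<double>>& epochs,
                     double samplingRate)
{
    const size_t epochLen = epochs.front().size();
    std::vector<double> avg(epochLen, 0.0);
    for (const auto& e : epochs)
        for (size_t i = 0; i < epochLen; ++i)
            avg[i] += e[i] / epochs.size();

    size_t peak = 0; // index of the maximum of the averaged ERP
    for (size_t i = 1; i < epochLen; ++i)
        if (avg[i] > avg[peak]) peak = i;
    return 1000.0 * peak / samplingRate; // ms after the stimulus
}

Comparing this value between the 100%-accuracy sets and the 0%-accuracy sets expresses the latency shift as a number rather than a plot.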
Can you confirm that your correction is the same as my fix?
OK, I looked with diff and it seems a lot has changed. Concerning our problem: indeed, my old amplifier states that each channel is coded as a signed long. However, because my top priority is getting things to work no matter what coding style, I just manually changed it to type long, by changing the following:

Code: Select all

// Function pointer type for the TMSi SDK's GetSamples; the sample buffer
// is now declared as PLONG (signed) instead of PULONG.
typedef ULONG           ( __stdcall * PGETSAMPLES)      (IN HANDLE Handle,OUT PLONG SampleBuffer,IN ULONG Size);

// Buffer holding the raw samples, declared as signed LONGs to match
// the old amplifier's signed encoding.
LONG m_ulSignalBuffer[65535];

// The buffer is passed as PLONG, so the samples are read as signed values.
l_lsize=m_oFpGetSamples(m_HandleMaster,(PLONG)m_ulSignalBuffer,m_ui32BufferSize);
I know it's strange that I could just change that method's definition without generating a linking error; I still don't understand it. Your method is of course preferable.

I also saw another problem, in the ovasCAcquisitionServer.cpp file. Someone commented out an if statement, which generated an error: at line 473, someone commented out if(m_bStarted).

I also saw that you added support for checking impedances in the acquisition driver. I considered that too and made a first attempt at a plugin; see my website: http://www.thewired.be/
Attachments
p300_wrongsets.jpg
p300_correctsets.jpg

bpayan
Posts: 46
Joined: Fri Jan 08, 2010 4:02 pm

Re: ANT TMSI Refa 8

Post by bpayan »

Dear ddvlamin,
I still doubt that the acquisition can cause this time shift; at least, I have no clue how the acquisition could be responsible for such a shift.
For this problem, I have no idea either. It's strange that this problem is present on some tests, not on all tests, and not during particular time periods within a test (times where the CPU worked harder, or other effects...). Do you have the same problem with the acquisition scenario, or is it only with the online scenario?

I know it's strange that I could just change that method's definition without generating a linking error; I still don't understand it.
The data can be either signed or unsigned; perhaps both variants are defined, but the documentation only indicates the PULONG one.
Someone commented out an if statement, which generated an error: at line 473, someone commented out if(m_bStarted).
Yes, line 473 has been commented out since the last release. The loop method of the driver class is now called as soon as the driver is initialized; that's why some driver classes have two parts, one for when the driver is started and one for when it is not. In the new version of the TMSI driver I used this for the impedance check, but for other drivers it is necessary to keep pulling data into the buffer all the time before the driver is started.
As for the error you got: when I made the version in the trunk I didn't have this modification yet, which is why the problem didn't exist then. I haven't tested the trunk since the last integration, and I didn't know this error was in it; sorry for the inconvenience. If you want to correct this error in your code, you can replace these lines:

file: ovasCDriverTMSiRefa32B.cpp, method: loop

Code: Select all

if(!m_bStarted)
{
	return false;
}
by

Code: Select all

if(!m_bStarted)
{
	// loop() is now also called before acquisition starts, so return
	// true here to keep the driver alive instead of reporting failure
	return true;
}
Or, if you want, you can use the wip-bpayan-TMSI branch. I am waiting to integrate it into the trunk because we haven't declared it stable yet, but I don't think I will change it much now, just to fix bugs.

Best regards,
Baptiste

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

For this problem, I have no idea either. It's strange that this problem is present on some tests, not on all tests, and not during particular time periods within a test (times where the CPU worked harder, or other effects...). Do you have the same problem with the acquisition scenario, or is it only with the online scenario?
Indeed, once a trial starts well, it goes well until the end. For example, in one trial of 15 characters everything was predicted correctly, and in the second trial everything was wrong.
I stopped using the acquisition scenario because, after training, it did not work in the online one; but this is probably also due to the problem above.
Judging from the figures above, I would say the triggers are delayed or the signal is shifted at the beginning of each wrong trial, because the P300 is still visible but located elsewhere.

So if I understand correctly, if you record 10 trials, you get almost equal accuracies for each trial? In other words, you do not have this problem?

If you also experience this with amplifiers other than the TMSI, then maybe something is wrong in the pipeline.

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Dear ddvlamin,

Sorry for the late reply. Baptiste has left for a week's holiday, so I will follow this issue during the week.

First, congratulations on your efforts in tracking this issue. I find this discussion very interesting, and the idea of a bug in the pipeline may become a priority soon. Before considering this, I would like to mention a few things that obviously were not clear about the Nexus driver:
  - Some time ago, we noticed that the Nexus driver did not send samples at a regular rate: sometimes it sent too many samples, sometimes too few. We therefore monitor the difference between what the driver should have sent according to the elapsed time and the sampling frequency, and what it actually sent. If a big enough difference is detected, the driver automatically adds or removes samples in the stream to fit what was initially declared (this produces the "flat signal" parts you noticed some weeks ago).
  - Only the Nexus driver has this verification implemented right now.
  - The user can see when this happens thanks to the "added / removed samples" messages in the console.
The fact that this message is not printed with Baptiste's Refa driver doesn't mean a similar problem is not happening. I think we should at least monitor this at a higher level, on the acquisition server side rather than the OpenViBE driver side. I don't mean this should be corrected by the acquisition server (because that could hide a bug in the OpenViBE driver), but at least the acquisition server can monitor and warn if something bad happens.
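A minimal sketch of such server-side monitoring (illustrative only; the names and the tolerance setting are assumptions, not the actual acquisition server code):

Code: Select all

#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Warn when the number of samples received from a driver drifts too far
// from what the declared sampling rate predicts for the elapsed time.
struct DriftMonitor
{
    uint32_t samplingRate;          // declared rate, in Hz
    uint32_t driftToleranceMs;      // warn beyond this drift
    uint64_t receivedSamples = 0;

    void onSamplesReceived(uint32_t count) { receivedSamples += count; }

    void check(double elapsedSeconds) const
    {
        const int64_t expected =
            static_cast<int64_t>(elapsedSeconds * samplingRate);
        const int64_t driftMs =
            (static_cast<int64_t>(receivedSamples) - expected)
            * 1000 / samplingRate;
        if (std::llabs(driftMs) > driftToleranceMs)
            std::printf("warning: driver drifts by %lld ms "
                        "(expected %lld samples, received %llu)\n",
                        static_cast<long long>(driftMs),
                        static_cast<long long>(expected),
                        static_cast<unsigned long long>(receivedSamples));
    }
};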

If I remember well, you had many, many such messages with the Nexus driver on your new Refa device, so we definitely have to be sure this is not happening again.
If it is not happening, then we'll track down what goes wrong in the signal processing pipeline.
I will implement monitoring of the sample count today or tomorrow and put it in the trunk, so you can test it and tell me what you get.

I'd also like to ask a question about the signal pictures you sent: were they taken after or before the spatial filter? Did you average several components or take only one channel?

Thank you again for your efforts, we'll beat this bug for sure ;)

Best regards,
Yann

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Oh, and I forgot about the "dropped data" message when you resize the designer windows... This issue is Windows-specific. It looks like the main application loop is frozen while you resize any GTK window on Windows, so the TCP buffer fills up and the acquisition server drops data. We can correct this with an additional application-side buffer, and I will probably add that this week as well. Meanwhile, just avoid resizing windows, or at least do it as fast as possible... Or... switch to Linux ;)
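A minimal sketch of such an application-side buffer (illustrative, not the actual OpenViBE code: a fixed-capacity ring buffer that holds samples while the TCP socket cannot accept them, so nothing is lost until the buffer itself overflows):

Code: Select all

#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer between acquisition and the TCP connection:
// push() absorbs samples while the socket is blocked, pop() drains them
// once the main loop runs again.
class SampleRingBuffer
{
public:
    explicit SampleRingBuffer(std::size_t capacity) : m_buffer(capacity) {}

    // Returns false (sample lost) only when the buffer itself is full.
    bool push(float sample)
    {
        if (m_count == m_buffer.size()) return false;
        m_buffer[(m_head + m_count) % m_buffer.size()] = sample;
        ++m_count;
        return true;
    }

    bool pop(float& sample)
    {
        if (m_count == 0) return false;
        sample = m_buffer[m_head];
        m_head = (m_head + 1) % m_buffer.size();
        --m_count;
        return true;
    }

private:
    std::vector<float> m_buffer;
    std::size_t m_head  = 0;
    std::size_t m_count = 0;
};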

Best regards,
Yann

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Dear ddvlamin,

I just committed the sample count monitoring to the acquisition server trunk. If you were using Baptiste's branch, please let me know and I will merge it into the trunk.

I'll work on the drop message tomorrow ;)

Best regards,
Yann

ddvlamin
Posts: 160
Joined: Thu Aug 13, 2009 8:39 am
Location: Ghent University
Contact:

Re: ANT TMSI Refa 8

Post by ddvlamin »

Hello,

Thanks for your reply.
Sometimes it sent too many samples, sometimes too few. We therefore monitor the difference between what the driver should have sent according to the elapsed time and the sampling frequency, and what it actually sent.
Indeed, in the meantime I did the same thing for the native driver and discovered exactly what you're mentioning now. To give an example: I wrote out the time in milliseconds since the arrival of the first sample. Sometimes the loop was called twice within the same millisecond, but the first call did not process any samples, while the second processed/received about 30 samples. This makes me suspect the hardware does not deliver samples at a uniform rate? We only have the TMSI SDK, so we can't do anything more than call the GetSamples method to receive new samples, can we?
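For what it's worth, a minimal sketch of the logging I mean (the call itself follows the driver code quoted earlier; the helper name is mine):

Code: Select all

#include <chrono>
#include <cstdio>

// Log, per polling iteration, the time since the first sample arrived and
// how much data the driver returned. Uneven delivery shows up as calls
// returning 0 followed by a large block within the same millisecond.
void logPollIteration(unsigned long bytesReturned)
{
    using clock = std::chrono::steady_clock;
    static clock::time_point start;
    static bool haveFirstSample = false;

    if (!haveFirstSample)
    {
        if (bytesReturned == 0) return; // wait for the first real data
        start = clock::now();
        haveFirstSample = true;
    }
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start).count();
    std::printf("t=%lld ms, bytes=%lu\n",
                static_cast<long long>(ms), bytesReturned);
}

// In the driver's loop, right after the GetSamples call quoted earlier:
//   logPollIteration(l_lsize);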

At the same time, I put debug code throughout the most essential parts of the P300 pipeline. For example, I printed the time of stimulation generation and the time when it was displayed on the screen, and the difference was smaller than a tenth of a millisecond. So nothing to worry about there.


I also installed the system again on a different computer, but now with the most recent TMSI drivers (before, I thought I needed to use the old ones because I also used the old amplifier). I did six runs, and all of them seemed to succeed. The day after, I tested with some other subjects and it did not always work, but they said they had problems focusing (they were BCI-naive subjects). After finding their focus it did work better. So now I'm wondering: is it the system/amplifier causing the problem, or is it a "focusing" problem located in our heads? :) ... difficult to debug. Anyhow, my colleague will try to test it again this week; then we'll see whether that problem occurs again. If not, it may have something to do with the reinstallation.
So maybe it's not a bug in the code per se, but in my configuration; otherwise you would encounter the same problems as I do, but you don't?
Oh, and I forgot about the "dropped data" message when you resize the designer windows... This issue is Windows-specific.
Indeed, this happens when I minimize the acquisition window, but it probably does not cause the problem, as it also happens during successful trials.
I'd also like to ask a question about the signal pictures you sent: were they taken after or before the spatial filter? Did you average several components or take only one channel?
I first applied the spatial filter, then averaged all the target/non-target trials. I only use the first spatial filter, so only one component.

Another strange thing: for now the new amplifier does not even receive any samples. But I still have to look into that; it's probably some wrong setting.

You suggest switching to Linux, but I still need to be able to install the drivers for the amplifier (I only have .inf and .reg files for installation) and TMSI does not support Linux... or maybe I'm missing something here?

Kind regards,
Dieter Devlaminck

yrenard
Site Admin
Posts: 645
Joined: Fri Sep 01, 2006 3:39 pm
Contact:

Re: ANT TMSI Refa 8

Post by yrenard »

Dear ddvlamin,

The most usual setup here is to have the acquisition server on Windows and the designer & other controlled applications on Linux... Anyway, that was a stupid joke ;)

Best regards,
Yann
