OpenViBE forum

The OpenViBE community
PostPosted: Sun Feb 18, 2018 10:31 pm 

I am not an expert in programming, and I did not manage to reverse engineer the math functions used by the following boxes from the code. So my question is: what are the exact formulas used by the "xDAWN Trainer" box and by the "Classifier Trainer" box when using "LDA"?

I have found bits and pieces online, such as in the Brain-Computer Interfaces 1 & 2 books by Lotte and in The Elements of Statistical Learning by Hastie, but not specifically what is used in OpenViBE. I read that the xDAWN implementation varies marginally from Rivet's paper (2009). For example, Lotte states the following (see attachment); is this correct for the xDAWN used in OpenViBE?

Would it be possible to have the exact formulas used?


Attachment: 22.JPG
PostPosted: Tue Feb 27, 2018 8:38 am 

Location: INRIA Rennes, FRANCE
Hi Patrick,

It's a good question, but I'm afraid the answer is that the source code is the best definition of what the software does. To my knowledge, implementing a machine learning algorithm usually goes like this: you start from a research paper or a textbook and hack from there. Along the way the implementation may evolve and lose more and more resemblance to the source material, due to optimizations and all sorts of glue and kludges that are needed in practice. For example, it's easy to write in a textbook "the inverse of matrix A", but in practice the matrix A might not actually be that neatly invertible; you need a pseudoinverse, regularization and whatnot, and the instant you add these kinds of modifications to the code, it starts to look different from the textbook version, and indeed it is.
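To make the "textbook inverse vs. practice" point concrete, here is a small Python/NumPy sketch (my own illustration, not OpenViBE's actual code; the function names and the shrinkage value are made up for the example) of how a textbook LDA weight formula typically gets robustified:

```python
import numpy as np

def lda_weights_textbook(X0, X1):
    """Textbook LDA direction: w = Sw^-1 (mu1 - mu0).
    Fails or misbehaves when Sw is singular or ill-conditioned."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.inv(Sw) @ (mu1 - mu0)

def lda_weights_practical(X0, X1, reg=1e-3):
    """Same formula, but with shrinkage toward the identity and a
    pseudoinverse, so it still works when Sw is singular.
    The shrinkage scheme and reg value are illustrative choices."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    d = Sw.shape[0]
    Sw_reg = (1 - reg) * Sw + reg * (np.trace(Sw) / d) * np.eye(d)
    return np.linalg.pinv(Sw_reg) @ (mu1 - mu0)
```

On well-conditioned data the two return almost the same direction, but only the second survives, say, a duplicated channel; and the moment you add the shrinkage term, the code no longer matches the textbook equation line for line.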

I'd hazard the opinion that machine learning algorithms found in the wild are "inspired by X" rather than "exact implementations of X". What exactly is a support vector machine? There are many different libraries implementing an SVM this way or that, and they give slightly different results.

If you can give pointers to "reference implementations", it might make a useful little project to compare them. If everything works well, I'd expect the algorithms to give slightly different results, but nothing drastic. A finding worth acting on would be if the OV implementation could be shown to fail on some reasonable data on which the reference implementation performs well. If anybody can report such a case and provide us with data that reproduces it, we'd of course want to fix our code.
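As for xDAWN specifically: the textbook form in Rivet et al. (2009) amounts, roughly, to a generalized eigenvalue problem between the evoked-response covariance and the total signal covariance. A hedged sketch of that idea (my reading of the paper, not OpenViBE's implementation; the least-squares ERP estimate and the pinv-based eigensolver are simplifications) could serve as one such reference point:

```python
import numpy as np

def xdawn_filters(X, onsets, erp_len, n_filters=2):
    """X: (n_samples, n_channels) continuous EEG.
    onsets: stimulus onset sample indices.
    Returns (n_channels, n_filters) spatial filters maximizing the ratio
    of evoked-response power to total signal power."""
    n, c = X.shape
    # Design matrix D selecting erp_len samples after each onset,
    # so that X ~ D @ A with A the (erp_len, c) evoked response.
    D = np.zeros((n, erp_len))
    for t in onsets:
        for k in range(erp_len):
            if t + k < n:
                D[t + k, k] = 1.0
    # Least-squares estimate of the evoked response A.
    A, *_ = np.linalg.lstsq(D, X, rcond=None)
    S_signal = (D @ A).T @ (D @ A)   # covariance of the evoked part
    S_total = X.T @ X                # covariance of the whole signal
    # Generalized eigenproblem S_signal w = lambda S_total w,
    # solved here crudely via a pseudoinverse.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_total) @ S_signal)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_filters]].real
```

Comparing the output of something like this against the OV box on the same data would be exactly the kind of sanity check I mean: similar filters (up to sign and scale) would be reassuring, while a drastic divergence on reasonable data would be worth reporting.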

I'll also ask the CertiViBE people whether the OpenViBE certification project will be able to produce more rigorous and detailed documentation within its scope.

