OpenViBE forum

The OpenViBE community
Posted: Sun Feb 18, 2018 10:31 pm
Hi,
I am not an expert in programming and I did not manage to reverse-engineer, from the code, the maths used by the following boxes. So my question is: what are the exact formulas used by the "xDAWN Trainer" box and by the "Classifier Trainer" box when it uses "LDA"?

I have found bits and pieces online, such as in the Brain Computer Interfaces 1 & 2 books by Lotte and The Elements of Statistical Learning by Hastie, but not specifically what is used in OpenViBE. I read that the xDAWN implementation varies marginally from Rivet's paper (2009). For example, Lotte states the following (see attachment); is that correct for the xDAWN employed in OpenViBE?
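For reference, the textbook binary LDA rule I have in mind (my own summary from Hastie, not necessarily what the box does) is

w = \Sigma^{-1} (\mu_1 - \mu_0), \qquad b = -\tfrac{1}{2}\, w^\top (\mu_0 + \mu_1),

predicting class 1 when w^\top x + b > 0, where \Sigma is the pooled within-class covariance and \mu_0, \mu_1 are the class means. I just don't know how closely the OpenViBE box follows this.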

Would it be possible to have the exact formulas used?

Thanks
Patrick


Attachment: 22.JPG (the passage from Lotte referred to above)
Posted: Tue Feb 27, 2018 8:38 am
Location: INRIA Rennes, FRANCE
Hi Patrick,

It's a good question, but I'm afraid the answer is that the source code is the best definition of what the software does. To my knowledge, the implementation of machine learning algorithms usually goes like this: you start from a research paper or a textbook and hack from there. Along the way the implementation may evolve and lose more and more resemblance to the source material due to optimizations and all sorts of glue and kludges that are needed in practice. For example, it's easy to write "inverse of matrix A" in a textbook, but in practice the matrix A might not be neatly invertible, so you need a pseudoinverse, regularization and whatnot, and the instant you add these kinds of modifications to the code it starts to look different from the textbook version, and indeed it is different.
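To make that concrete, here is a minimal numpy sketch (illustration only; the actual OpenViBE code is C++ and certainly differs in detail) of how the textbook LDA weight vector picks up a regularized inverse in practice:

Code:
# Toy binary LDA: the textbook formula is w = inv(Sigma) @ (mu1 - mu0),
# but with few trials and many channels Sigma is often singular, so a
# practical implementation shrinks Sigma towards the identity (or uses a
# pseudoinverse) before solving -- and already looks unlike the textbook.
import numpy as np

def lda_weights(X0, X1, shrinkage=0.01):
    # X0, X1: (trials x features) arrays for the two classes
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)
    d = Sigma.shape[0]
    # shrink towards a scaled identity so the solve is well conditioned
    Sigma_reg = (1 - shrinkage) * Sigma + shrinkage * (np.trace(Sigma) / d) * np.eye(d)
    w = np.linalg.solve(Sigma_reg, mu1 - mu0)   # regularized stand-in for inv(Sigma) @ (mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b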

I'd hazard the opinion that machine learning algorithms found in the wild are "inspired by X" rather than "exactly implementing X". What exactly is a support vector machine? There are many different libraries implementing SVMs in slightly different ways, and they give slightly different results.

If you can give pointers to 'reference implementations', it might make a useful little project to compare them. If everything works well, I'd expect the algorithms to give slightly different results, but nothing drastic. A finding worth acting on would be if the OpenViBE implementation could be shown to fail on some reasonable data on which the reference implementation performs well. If anybody can report such a case and provide us with data that reproduces it, we'd of course want to fix our code.
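If anyone wants to try such a comparison, it could look roughly like this (a hypothetical sketch reusing the toy lda_weights above, with scikit-learn's LDA standing in for a reference implementation; it is not an OpenViBE test):

Code:
# Fit the toy LDA and scikit-learn's LDA on the same synthetic data and
# check how often the two classifiers agree on the predicted label.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 8))   # class 0 trials
X1 = rng.normal(0.5, 1.0, size=(200, 8))   # class 1 trials, shifted mean
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

w, b = lda_weights(X0, X1)
ref = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=0.01).fit(X, y)

agreement = np.mean((X @ w + b > 0) == (ref.predict(X) == 1))
print(f"label agreement between the two LDAs: {agreement:.3f}")

I'd expect high but not perfect agreement, which is exactly the "slightly different results" situation described above.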

I'll poll the CertiViBE people to see whether the OpenViBE certifiability project will be able to produce more rigorous and detailed documentation within its scope.


Best,
Jussi


Posted: Sat Apr 07, 2018 2:22 pm
Hey Jussi,
sorry for my late reply. Understood; thanks for the detailed explanation, much appreciated.

Yes, it would be greatly appreciated if you could bring up the idea of more detailed documentation; it would help fellow researchers like myself :)

Many thanks once again

Patrick

