Formulas for xDAWN trainer box and Classifier Trainer - LDA

Concerning processing components: filters, file load/save, visualizations, communication ...
knightv8
Posts: 14
Joined: Thu Jun 29, 2017 9:32 pm

Formulas for xDAWN trainer box and Classifier Trainer - LDA

Post by knightv8 »

Hi,
I am not an expert in programming, and I did not manage to reverse-engineer the math from the code of the following boxes. So my question is: what are the exact formulas used by the "xDAWN Trainer" box and by the "Classifier Trainer" box when "LDA" is selected?
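For context, this is the textbook binary LDA I am asking about, as in Hastie et al. Just a minimal NumPy sketch of the standard formulation, not a transcript of the OpenViBE code; function and variable names are mine:

```python
import numpy as np

def lda_train(X0, X1):
    """Textbook binary LDA with a pooled covariance matrix.
    X0, X1: (n_samples, n_features) arrays for the two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance
    S = (np.cov(X0, rowvar=False) * (len(X0) - 1)
         + np.cov(X1, rowvar=False) * (len(X1) - 1)) / (len(X0) + len(X1) - 2)
    w = np.linalg.solve(S, mu1 - mu0)     # w = S^-1 (mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2              # threshold at the midpoint (equal priors)
    return w, b

def lda_predict(X, w, b):
    """Returns 1 for the class of X1, 0 for the class of X0."""
    return (X @ w + b > 0).astype(int)
```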

I have found bits and pieces online, such as in the Brain Computer Interfaces 1 & 2 books by Lotte and in The Elements of Statistical Learning by Hastie, but not specifically what is used in OpenViBE. I read that the xDAWN implementation varies marginally from Rivet's 2009 paper. For example, Lotte states the formulation in the attachment; is it correct for the xDAWN employed in OpenViBE?
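From what I understand of Rivet's 2009 paper, the filters come from a least-squares ERP estimate followed by a generalized eigendecomposition that maximizes evoked-to-total power. A rough NumPy sketch of that reading, with my own (hypothetical) names, and not necessarily what OpenViBE does:

```python
import numpy as np

def xdawn_filters(X, onsets, erp_len, n_filters=4):
    """Sketch of xDAWN in the spirit of Rivet et al. (2009).
    X: (n_samples, n_channels) continuous EEG; onsets: stimulus sample indices."""
    n, c = X.shape
    # Toeplitz design matrix D: D[t, j] = 1 if a stimulus started at sample t - j
    D = np.zeros((n, erp_len))
    for t0 in onsets:
        for j in range(erp_len):
            if t0 + j < n:
                D[t0 + j, j] = 1.0
    # Least-squares ERP estimate: X ≈ D @ A
    A, *_ = np.linalg.lstsq(D, X, rcond=None)
    S_evoked = (D @ A).T @ (D @ A)     # evoked-signal covariance
    S_total = X.T @ X                  # total-signal covariance
    # Generalized eigenproblem S_evoked w = lambda * S_total w, via whitening
    L = np.linalg.cholesky(S_total)
    Linv = np.linalg.inv(L)
    vals, vecs = np.linalg.eigh(Linv @ S_evoked @ Linv.T)
    W = Linv.T @ vecs[:, ::-1]         # columns sorted by decreasing power ratio
    return W[:, :n_filters], A
```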

Would it be possible to have the exact formulas used?

Thanks
Patrick
Attachments
22.JPG (128.25 KiB)

jtlindgren
Posts: 775
Joined: Tue Dec 04, 2012 3:53 pm
Location: INRIA Rennes, FRANCE

Re: Formulas for xDAWN trainer box and Classifier Trainer -

Post by jtlindgren »

Hi Patrick,

it's a good question, but I'm afraid the answer is that the source code is the best definition of what the software does. To my knowledge, implementations of machine learning algorithms usually start from a research paper or a textbook, and you hack from there. Along the way the implementation may evolve and lose more and more resemblance to the source material due to optimizations and all sorts of glue and kludges that are needed in practice. For example, it's easy to write "inverse of matrix A" in a textbook, but in practice the matrix A might not actually be that neatly invertible: you need a pseudoinverse, regularization and whatnot, and the instant you add these kinds of modifications to the code it starts to look different from the textbook version, and indeed it is.
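To make that gap concrete, here's a small made-up NumPy illustration (not OpenViBE's actual code) of what happens to "inverse of matrix A" once A stops cooperating:

```python
import numpy as np

# The textbook says "take the inverse of A". In practice A may be singular
# or ill-conditioned, so working code reaches for a pseudoinverse or for
# regularization instead, and quietly diverges from the book.
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 5))
A = B @ B.T                      # rank 5, so this 50x50 matrix is singular

# np.linalg.inv(A) is meaningless here; two common workarounds:
A_pinv = np.linalg.pinv(A)                       # Moore-Penrose pseudoinverse
A_reg = np.linalg.inv(A + 1e-6 * np.eye(50))     # Tikhonov regularization

# For a right-hand side in the range of A the two agree closely,
# but neither is "the inverse of A" from the textbook any more.
b = A @ rng.normal(size=50)
x_pinv, x_reg = A_pinv @ b, A_reg @ b
```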

I'd hazard the opinion that machine learning algorithms found in the wild are "inspired by X" rather than "exactly implementing X". What exactly is a support vector machine? There are many different libraries implementing an SVM one way or another, and they give slightly different results.

If you can give pointers to 'reference implementations', it might make a useful little project to compare them. If everything works well, I'd expect the algorithms to give slightly different results, but nothing drastic. A finding worth acting on would be if the OV implementation could be shown to fail on some reasonable data on which the reference implementation performs well. If anybody can report such a case and provide us the reproducing data, we'd of course want to fix our code.
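In case anyone wants to attempt such a comparison, the skeleton can be as simple as measuring how often two trained classifiers agree on the same data; the two `predict` callables below are hypothetical stand-ins for the OV and reference implementations:

```python
import numpy as np

def agreement(predict_a, predict_b, X):
    """Fraction of inputs on which two trained classifiers agree.
    predict_a / predict_b are hypothetical stand-ins for the OpenViBE
    and reference implementations, both mapping (n, d) -> (n,) labels."""
    return float(np.mean(predict_a(X) == predict_b(X)))

# Usage sketch: two slightly different linear decision rules on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
rule_a = lambda X: (X @ np.array([1.0, 1.0, 1.0]) > 0).astype(int)
rule_b = lambda X: (X @ np.array([1.0, 1.0, 0.9]) > 0).astype(int)
score = agreement(rule_a, rule_b, X)   # high, but not perfect, agreement
```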

I'll poll the CertiViBE people to see whether the OpenViBE certifiability project will be able to produce more rigorous and detailed documentation within its scope.


Best,
Jussi

knightv8
Posts: 14
Joined: Thu Jun 29, 2017 9:32 pm

Re: Formulas for xDAWN trainer box and Classifier Trainer -

Post by knightv8 »

Hey Jussi,
Sorry for my late reply. Understood; thanks for the detailed explanation, much appreciated.

Yes, it would be greatly appreciated if you could bring it up; more detailed documentation would help fellow researchers like myself :)

Many thanks once again

Patrick
