Hey guys,
I'd like to use the Graz visualization box to visualize the output of an LDA classifier. I am using a sigmoid function, so my output lies between -1 and 1. I would like the bar to be, for example, half extended to the right for 0.5 and half extended to the left for -0.5. When I send constant test values, it seems like any positive number extends the bar all the way to the right, while any negative number extends it all the way to the left. But with real data I also sometimes get a less extended bar, and I can't find a connection between the bar length and the numbers I give to the box.
I have compared with Matrix Display in parallel, so I'm sure which numbers I send.
Can you tell me how the box computes the bar length? Does it somehow 'learn' from the maximum values used? Does it even work with just one output or do I need to give probabilities? (I also tried that, but with no success)
Cheers,
Simon
Question Regarding the Graz visualization Box
Re: Question Regarding the Graz visualization Box
Hi Simon, roughly how it works is this:
1) If you give it more than 2-dimensional input, who knows what will happen.
2) If you give it 2-dimensional input, it will convert the values to probabilities, i.e. transform them into the range [0,1] so that the two numbers sum to 1. After this, it will compute slot2-slot1 as a signed response strength (minus means towards the left, if I remember right).
3) If you give it 1-dimensional input, it will use that directly as the signed response strength.
4) With the signed response strength at hand, it will add this value to a list of such strengths collected during the trial (it assumes a sliding-window approach). It will compute the maximum absolute value of the responses stored during the trial. The blue bar shows the mean of the buffered responses divided by that maximum.
5) When the stimulation OVTK_GDF_Feedback_Continuous is received, the collected list and the maximum will be cleared (at the beginning of each new trial).
The idea in the code is that the blue bar is in a sense auto-scaling, but in a way that tries to avoid a situation where the classifier outputs some outlier value and all the rest of the trial would be scaled by this outlier. I realize this scaling might have the effect that it's not easy for the user to learn to produce high probabilities, as a sequence of [0.7, 0.7, 0.7, ...] will give the same blue bar as a sequence of [0.9, 0.9, 0.9, ...].
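The steps above can be sketched in a few lines of Python. This is only a minimal reconstruction of the described behavior for experimenting with inputs, not the actual OpenViBE source; the function and class names are made up for illustration:

```python
def to_signed_strength(values):
    """Steps 1-3: map box input to a signed response strength."""
    if len(values) == 1:
        return values[0]
    if len(values) == 2:
        a, b = values
        total = a + b
        # Normalize to probabilities summing to 1 (assumes non-negative inputs).
        if total != 0:
            a, b = a / total, b / total
        return b - a  # slot2 - slot1; negative means towards the left
    raise ValueError("more than 2-dimensional input: behavior undefined")

class GrazBarSketch:
    """Steps 4-5: per-trial buffer with auto-scaling bar length."""

    def __init__(self):
        self.responses = []

    def new_trial(self):
        # Corresponds to receiving OVTK_GDF_Feedback_Continuous:
        # the buffer (and hence the running maximum) is cleared.
        self.responses = []

    def feed(self, values):
        """Add one classifier output; return bar length in [-1, 1]."""
        self.responses.append(to_signed_strength(values))
        peak = max(abs(r) for r in self.responses)
        mean = sum(self.responses) / len(self.responses)
        return mean / peak if peak > 0 else 0.0
```

Under this scheme a constant input such as 0.5 always yields a full bar (the mean equals the peak), which would explain the behavior Simon observed with constant test values; the bar only shortens once the trial contains responses of varying size or sign.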
ps. We're happy to hear about better ideas to do the scaling.
Hope this helps,
Jussi