A kind request: Google Science Fair vote

asrinivasan
Posts: 7
Joined: Sun Nov 14, 2010 11:34 pm

A kind request: Google Science Fair vote

Post by asrinivasan »

Hello all,
I recently entered my project in the Google Science Fair, an international science competition in which entrants can build, research, or discover anything they want. For my entry, I researched how prosthetic limbs can be controlled by thought alone, and I found that much of the mathematical analysis of the brainwave data had to be improved for such a technology to be usable. Most of the signal processing and interfacing with the EEG device was done in OpenViBE. Here is a brief synopsis, in case you are interested:

----------------------------------------------------------------------------------------------------------------------
My project is, at its most general level, based upon the idea of the brain-computer interface. In this sense of the definition, anything we use to interact with machines is a brain-computer interface, including our fingers. However, amputees often face difficulties after the loss of such a vital method of interaction. Through research, I found that a current medical device, the electroencephalograph (EEG), could be implemented as a direct brain-machine interface; in other words, inputs on a computer (such as a cursor) could potentially be operated by thought alone. However, I also learned that, although EEG technology has existed since circa 1920, it still suffers from the age-old problems of signal filtration and desired feature extraction. This means that current signal processing algorithms cannot interpret the electrical signals produced by neuronal synapses very efficiently, making such an interface impractical and inaccurate. My project sought to rectify this through the creation of custom signal processing scenarios built on new algorithms; specifically, the use of vector quantization compression/extraction methods for enhanced noise filtration and for the removal of known artifacts (sources of electricity other than the brain, such as muscles).
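For readers curious what vector quantization noise filtration looks like in practice, here is a minimal sketch (this is illustrative only, not the actual code from my project; the codebook size, window length, and iteration count are arbitrary assumptions). A codebook of representative signal windows is learned from the data with a simple k-means loop, and each incoming window is then replaced by its nearest codeword, which discards small noise-like deviations:

```python
import numpy as np

def build_codebook(windows, k=16, iters=20, seed=0):
    """Learn a VQ codebook from signal windows via plain k-means.
    `windows` is an (n_windows, window_len) array; k is an assumed size."""
    rng = np.random.default_rng(seed)
    # initialize codewords with randomly chosen windows
    codebook = windows[rng.choice(len(windows), k, replace=False)]
    for _ in range(iters):
        # assign each window to its nearest codeword (Euclidean distance)
        d = np.linalg.norm(windows[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the mean of its assigned windows
        for j in range(k):
            members = windows[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(window, codebook):
    """Replace one window with its nearest codeword (denoised version)."""
    d = np.linalg.norm(codebook - window, axis=1)
    return codebook[d.argmin()]
```

The denoising effect comes from the fact that only a small number of "typical" signal shapes survive quantization, so random per-sample noise is averaged away.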
However, I decided it was not enough to run software simulations; to determine its true real-world applicability, I used a 14-channel EEG neuroheadset to gather electrical data from my own brain. I then built a prototype robotic arm with an onboard processor that would translate signals from the computer. Finally, I used the programs I created to "decipher" the incoming brainwave signals and send corresponding commands to the robotic limb.

I concluded that, by using my programs to perform the signal processing, I was able to increase the accuracy of detected brainwave patterns by about 16%. Although this may not seem like much, the brain processes an enormous number of signals simultaneously, and recognizing patterns among them requires a great deal of processing effort on the computer's part. In the end, I reached an accuracy of about 91.35% using the programs I created.
-------------------------------------------------------------------------------------------------------------------

Further in-depth details can be seen here: http://sites.google.com/site/eegprosthetics/home
Recently, after submitting my project, I was notified that I was one of the 60 semifinalists internationally; as part of the judging process, there is also an award called the "People's Choice Award." Essentially, the public goes online and votes once in each of the 3 age groups (13-14, 15-16, 17-18) for the project they believe is the best.

I am kindly asking if you would consider voting for my project for this award. I believe it holds many potential real-world applications beyond prosthetics alone; such technology could also be used by patients with paraplegia, paralysis, or even polio.

The voting process is simple:
1. Go to the Google Science Fair voting page for my project: http://www.google.com/events/sciencefai ... etics.html
2. Click the "vote" button in the upper right-hand corner

Again, thank you for your time and consideration of my project,

A. Srinivasan

jlegeny
Posts: 239
Joined: Tue Nov 02, 2010 8:51 am
Location: Mensia Technologies Paris FR
Contact:

Re: A kind request: Google Science Fair vote

Post by jlegeny »

Hello Anand,

first of all, congratulations on making it into the semifinals. I sincerely hope that you will make it even further, and you have my vote, of course :) Also, with your permission, we would like to include your project in the "made with OpenViBE" section of our website, along with your video.

If I may, I would also like to make a few remarks. First of all, on your page you mention that you are using the imagery paradigm; I suppose you mean motor imagery specifically. However, you mention four commands (up, left, raise, lower), whereas in OpenViBE we only have a 2-class motor imagery scenario (handball - right hand, left hand) and a 1-class Tie Fighter scenario using imagination of feet movement. I would like to know which movement you used for the fourth command.

Also, since you are improving the current OpenViBE scenarios, we would like to know which ones you used for your evaluation.

asrinivasan
Posts: 7
Joined: Sun Nov 14, 2010 11:34 pm

Re: A kind request: Google Science Fair vote

Post by asrinivasan »

Hello Mr. Legeny,

First of all, I must apologize for this extremely late reply; I do not get email updates from the OpenViBE forum, and since I saw no replies on the first day of posting, I did not think to check again.

I would be more than happy for you to include it in the "Made with OpenViBE" section of the website! If necessary, I can make a second video detailing the core processes used in my scenarios.
Following up on my selection as a semifinalist: Google has now chosen the 15 finalists from the 60 semifinalists, and I am one of them :) The final judging round will be held at Google HQ in California!

To answer your questions about my modification of the OpenViBE scenarios: firstly, you are right, I based my project on the "motor-imagery-bci" scenarios. However, as you mentioned, I quickly realized that control of something as versatile as a robotic arm is not possible with only two classes.

To explain my creation of scenarios I'll start with a little background of the project:

In actuality, my project has 5 classes (Up, Down, Right, Left, "Rest"). These would correspond to, at the very least, shoulder/elbow/wrist rotation and fist closure. However, another of my self-imposed design criteria was to have directional control, i.e. being able to lift a grape without beating it to a pulp. These criteria are how I came to select OpenViBE as the signal processor: as I was severely restricted for time (the submission date), the intuitive, 'a-b-c' interface was wonderful, and I could finally *easily* achieve directional control of the motors simply by using the LDA hyperplane distance for each feature vector (which OpenViBE computes automatically).
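To illustrate the idea (a simplified sketch, not my actual scenario code; the gain and speed limit are made-up values): the signed distance of a feature vector from the LDA hyperplane can be mapped directly to a motor command, with the sign selecting the direction and the magnitude giving a proportional speed:

```python
import numpy as np

def lda_hyperplane_distance(x, w, b):
    """Signed distance of feature vector x from the LDA hyperplane w.x + b = 0."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def to_motor_speed(distance, max_speed=100.0, gain=25.0):
    """Map the signed distance to a bounded speed command.
    gain and max_speed are hypothetical tuning values, not from my project."""
    return float(np.clip(gain * distance, -max_speed, max_speed))
```

A feature vector far from the hyperplane (a "confident" classification) thus drives the motor quickly, while one near the boundary barely moves it.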

In order to meet these criteria, I required more classes of imagery than are possible with the current imagery scenarios. However, as someone stated before on the forums (Mr. Dieter, I think), the framework for multi-class imagery is there, but it is not usable without extensive modification of the 'Classification' box algorithms. I tried to do so, first by modifying the Lua script to create 4-class stimulations, but I realized that there were too many dependencies within a single scenario, and I abandoned that approach.

As my focus was more on the mathematics and the processing itself, I simply took the "easy way": since the BCI imagery scenarios already have 2 classes, it would be a simple matter to run 2 instances of the Classifier Trainer simultaneously to achieve a total of 4 classes. To do so, after I had optimized the filters by hand, I ran the Classifier Trainer scenario twice, disregarding the "right-left" animations from the Graz visualizer the second time. During online use with the Classifier Processor, I duplicated the improved filters, the Feature Aggregator, and the Classifier Processor boxes to classify the "up-down" imagery.

From here, it was a simple matter of tying the "Classification state" output of both Classifier Processors to a VRPN Analog server. However, I realized that classification errors could have worse implications here; e.g. the "right-left" Classifier Processor could misclassify "up-down" imagery. To correct this, I set up a VRPN server (vrgeeks.com) and used some conditional statements to determine which Classifier Processor was making the correct decision.
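The conditional logic amounts to something like the following sketch (illustrative only; the threshold value is an assumption, and my real logic lived on the VRPN side). Each classifier reports a signed hyperplane distance; the axis whose classifier is more "confident" (larger magnitude) wins, and small magnitudes on both axes are treated as rest:

```python
def fuse(lr_dist, ud_dist, rest_threshold=0.2):
    """Arbitrate between the left/right and up/down classifier outputs.
    lr_dist and ud_dist are signed hyperplane distances;
    rest_threshold is a hypothetical tuning value."""
    if max(abs(lr_dist), abs(ud_dist)) < rest_threshold:
        return "rest"
    # the more confident classifier decides the axis
    if abs(lr_dist) >= abs(ud_dist):
        return "right" if lr_dist > 0 else "left"
    return "up" if ud_dist > 0 else "down"
```

This way a strong "up" signal cannot be overridden by a weak, spurious "left" reading from the other classifier.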

Finally, I used a keyboard emulator to send the corresponding commands to the arm via the serial port.
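Conceptually, that last step looks something like this sketch (assuming the pyserial package; the port name and the one-byte command protocol are hypothetical, since my actual setup went through a keyboard emulator rather than code like this):

```python
# Hypothetical one-byte command protocol for the arm's onboard processor.
COMMAND_BYTES = {"up": b"U", "down": b"D", "left": b"L", "right": b"R", "rest": b"S"}

def send_command(port, label):
    """Write the one-byte command for `label` to an open, file-like serial port."""
    port.write(COMMAND_BYTES[label])

# Usage sketch (port name and baud rate are assumptions):
# import serial  # pyserial
# with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
#     send_command(port, "up")
```

Any object with a `write` method works, which makes the mapping easy to test without real hardware.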

I hope this answers your question about the multi-class imagery. For ease of reading, I will follow up with a second post below detailing how I implemented vector quantization processing on the data stream (not included by default with OpenViBE, I believe).

Additionally, if you would like me to, I can upload the scenarios from my project.

Anand Srinivasan

lbonnet
Site Admin
Posts: 417
Joined: Wed Oct 07, 2009 12:11 pm

Re: A kind request: Google Science Fair vote

Post by lbonnet »

Hello Anand !

This work looks very nice... congratulations ;)

I just checked the Google Science Fair website... and I saw you made it to the top 5 finals!
Let us know when the results are out :)

By the way, Jozef (Legény) is currently on vacation and will be back in a few weeks.
I will let him discuss it with you when he returns... we will definitely feature your project on the website in some way.

Cheers

Laurent-
Follow us on twitter >> openvibebci

Check out my (old) blog for some OpenViBE tips & tricks: here!
