SXSW Earplugs: Event rescheduled - now taking place Thursday from 11-noon OR noon-1pm at the West Mall in front of the Union Starbucks by the dried-out fountain
You can seek out other opportunities that require Volunteer Sign-Off Forms:
Growing Roots - every Tuesday afternoon
Best Buddies - organization on campus that holds different events for adults with disabilities
Capitol School of Austin
Texas School for the Deaf - prior ASL experience is not required, but it is highly recommended!
TSHA School Supplies: 3 lightly used or new school supplies = 1 point. A box will be outside of the NSSLHA office (inside the Student Leadership Suite) until this Friday to drop off the supplies if you were unable to do so at today's meeting.
T-Shirt Sale: tentatively set for March 25th from 11-4pm (or until we run out). Email will be sent out later with more information!
Fundraising points should be updated by tonight - if there are any discrepancies please be sure to email AFTER they are updated
April profit shares will be announced after Spring Break :)
All registrations will be completed by tonight!
Scott Novich - V.E.S.T. (Versatile Extra-Sensory Transducer)
Current electrical and computer engineering PhD student at Rice University conducting research at Baylor College of Medicine
Mapping sound information to touch information
Sensory substitution - an emerging idea in neuroscience: all of our sensory receptors, whether they're the ones under the skin or in the ear, are tuned to pick up a specific type of information, but they don't really care about the content
What he's working on is a wearable vest lined with little vibration motors (basically what makes a cell phone vibrate). Sound is captured by a device such as a tablet, which is responsible for mathematically converting the sound signal to a dimensionality and periodicity suitable for the sense of touch. The idea came about 4 to 5 years ago.
Through a Kickstarter campaign, they were able to raise about $40,000! And he will be giving a TED Talk this coming week!
So far, they've run a simple experiment with both hearing and deaf participants using a set of 50 spoken word samples (single-syllable words). On a given trial, they pick a word at random and play it back on the vest, just as the word would be presented if it were spoken. The person is shown four possible options and has to guess what the word was. As people answer, they get feedback, and they start to memorize the patterns of vibrations that correspond to the words. The participants are then given a 10-day break, brought back, and introduced to a new set of words. As a result, participants have been able to identify words they've never encountered before.
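The procedure described above amounts to a four-alternative forced-choice task with feedback. A minimal sketch in Python (the function name, signature, and `respond` callback are illustrative assumptions, not the lab's actual test software):

```python
import random

def four_afc_trial(vocab, respond, rng=random):
    """One four-alternative forced-choice trial, as described in the notes.

    Sketch under assumed names: pick a target word at random, "play" it
    on the vest, show the target plus three distractors, collect the
    participant's guess, and return feedback so they can learn the
    vibration-to-word mapping.
    """
    target = rng.choice(vocab)
    # Three distractors drawn from the rest of the vocabulary.
    distractors = rng.sample([w for w in vocab if w != target], 3)
    options = [target] + distractors
    rng.shuffle(options)
    # In the real experiment the participant feels the vibration pattern
    # before choosing; here that step is abstracted into the callback.
    guess = respond(target, options)
    return target, guess, guess == target
```

The interesting result is the generalization step: after training, participants identify words outside the trained set, suggesting they learn the sound-to-touch mapping itself rather than 50 memorized patterns.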
What are you doing to make the vest more wearable in order for it to be implemented? (currently there are a lot of wires on the back side)
Undergraduates are currently working on weaving the wires and fibers into a fabric grid.
Was there a difference in the rate at which the hearing and deaf individuals were able to pick up the information?
There haven't been enough participants just yet - however, generally speaking, younger individuals (under 25 or so) tend to perform better, regardless of whether they are hearing or deaf.
Were the participants given articulatory or phonemic cues when they were wearing the vest or was it more blinded and based on guesses?
Definitely more blinded - they could make some educated guesses, since the words were single syllables, based on the consonants and vowels
On the vest itself, does each vibrating component represent a single phoneme?
The vest works all at once as a whole. It's mapped based on frequency: lower frequencies on the bottom, higher on top. Essentially we are doing a Fourier-based decomposition (a sum of sinusoids). The intensity of each motor is the strength of a specific frequency component in that decomposition.
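That frequency-to-position mapping can be sketched in code. A minimal illustration, assuming a fixed audio frame, a naive DFT, and evenly split frequency bands; the function name, the 4 kHz cap, and the normalization are all assumptions for the sketch, not details of the actual V.E.S.T. hardware:

```python
import math

def sound_frame_to_motor_intensities(frame, sample_rate, n_motors, max_freq=4000.0):
    """Map one window of audio samples to per-motor drive intensities.

    Sketch of the Fourier-based mapping described above: decompose the
    frame into sinusoids via a DFT, split the frequency axis into
    n_motors bands (low frequencies -> bottom motors, high -> top), and
    set each motor's intensity to its band's summed magnitude,
    normalized to the 0..1 range.
    """
    n = len(frame)
    # Magnitudes of the DFT bins up to max_freq (naive DFT for clarity;
    # a real implementation would use an FFT). The DC bin is skipped.
    n_bins = min(n // 2, int(max_freq * n / sample_rate))
    mags = []
    for k in range(1, n_bins + 1):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    # Group the bins into one contiguous band per motor.
    band = max(1, n_bins // n_motors)
    intensities = [sum(mags[i * band:(i + 1) * band]) for i in range(n_motors)]
    peak = max(intensities) or 1.0
    return [v / peak for v in intensities]  # 0..1 drive level per motor
```

Under this scheme a pure tone drives essentially one motor (the one whose band covers its frequency), while speech, being a sum of many sinusoids, lights up a spatial pattern across the vest.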
Do you allow your test subjects to use lip reading when they are being tested?
At the moment, no. We have just started doing more research that involves person-to-person interaction, and we are thinking about incorporating lip reading. But it's kind of messy: on one hand it can help guide people, but people's ability to lip read is on a spectrum, and if you get people who rely on lip reading, it won't help them in understanding the tactile stimuli.
Are the deaf participants congenitally deaf? Or was it acquired post language acquisition?
There have been congenitally deaf individuals as well as those who have already acquired language and then became deaf.
What could this mean for the future of sign language? Do you have thoughts as to what's going to happen down the road with sign language considering this technology?
We'll probably run into the same issues as what happened with cochlear implants, because this is essentially a non-invasive cochlear implant. So as far as the future of sign language goes, it may be working against it culturally. That being said, what I think about long term is that this technology is really a general framework for feeding arbitrary information to the brain; but if you extrapolate from the current application, it is definitely along the lines of a cochlear implant.
That's all for now! See you at our next meeting, March 24th, right after Spring Break :)