Greetings All,

The last part of the following quote is exactly the concern I have when it 
comes to auditory research: **Gunther showed..., in the course of his 
demonstrating that a phantom image is not physically equivalent to a real 
source, even if it is perceptually equivalent (to whatever extent).**
Here’s the scenario: I test cochlear implant (CI) wearers or hearing aid (HA) 
users in a controlled laboratory space. Five-word sentences are presented, one
at a time, through one of several loudspeakers. The remaining loudspeaker or 
loudspeakers present background noise in the form of pink noise or speech 
babble. I make adjustments to the CI processor or HA settings, and the 
wearer/listener demonstrates improved speech comprehension ability (based on 
percentage of correct words or sentences) in the lab. I have no idea what 
perception is like for these listeners; I can only measure speech comprehension 
ability/performance. The HA or CI user then re-enters the REAL WORLD, only to 
discover that the new processor settings made a whole mess of everything 
(intelligibility-wise).
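As a toy sketch of the percent-correct scoring mentioned above (the scoring rule here, a strict position-by-position word match, is my own simplification; actual sentence-test protocols use more careful keyword scoring):

```python
def percent_words_correct(responses, targets):
    """Toy scoring for five-word sentence lists: percentage of words
    reported correctly, compared position by position."""
    total = hits = 0
    for resp, targ in zip(responses, targets):
        for rw, tw in zip(resp.lower().split(), targ.lower().split()):
            total += 1
            hits += (rw == tw)  # count exact word matches
    return 100.0 * hits / total

# e.g. one sentence, four of five words reported correctly:
print(percent_words_correct(["the boat sailed past noon"],
                            ["the goat sailed past noon"]))  # → 80.0
```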

I go back to the drawing board with the grandiose idea that I can create 
*real-world* stimuli using a popular stereo miking technique. For 
normal-hearing listeners, the new stimuli are PERCEIVED to sound real. 
Unfortunately, I don’t know how PHYSICALLY real the stimuli are and, 
consequently, I have no idea how the acoustical stimuli will interact with a CI 
wearer’s head, mic, processor, and (ultimately) implanted electrodes. For 
simple stimuli, such as a single talker, I can make any number of spectral 
measurements to ensure that the stimulus is realistic. But for complex, 
multi-directional stimuli, I would like to know that newly-developed stimuli 
are physically equivalent to their real-world counterparts.
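For the single-talker case, one such spectral check might look like the sketch below (the band edges, tolerance idea, and function names are my own assumptions, not an established verification protocol): compare third-octave band levels of the reproduced stimulus against the real-world reference and report the worst deviation.

```python
import numpy as np

def third_octave_levels(x, fs, f_lo=100.0, f_hi=8000.0):
    """Band levels (dB) of signal x in third-octave bands, summed
    from FFT power (a crude stand-in for proper band filters)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    centers, levels = [], []
    fc = f_lo
    while fc <= f_hi:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges
        band = spec[(freqs >= lo) & (freqs < hi)]
        if band.size:
            centers.append(fc)
            levels.append(10 * np.log10(band.sum() + 1e-20))
        fc *= 2 ** (1 / 3)  # step to the next third-octave center
    return np.array(centers), np.array(levels)

def max_band_deviation_db(reference, reproduced, fs):
    """Largest absolute third-octave level difference (dB)."""
    _, ref = third_octave_levels(reference, fs)
    _, rep = third_octave_levels(reproduced, fs)
    return float(np.max(np.abs(ref - rep)))

# Toy check: an identical copy should deviate by 0 dB.
fs = 16000
rng = np.random.default_rng(0)
talker = rng.standard_normal(fs)  # stand-in for a recorded talker
print(max_band_deviation_db(talker, talker, fs))  # → 0.0
```

The point of the complaint above is precisely that a check like this does not generalize: for multi-directional scenes, matching long-term band levels at one point says little about the spatial sound field the CI processor actually sees.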

Auditory streaming, scene analysis, timbre, etc. all apply to the 
hearing-impaired, but it is THEIR perception. I’m not actually concerned that 
their perception is different from mine or from anybody else’s. What matters, 
when I decide to modify electrical pulse width (current delivered to the 
electrodes), frequency-transposition algorithms, envelope detection (e.g., 
half-wave rectification versus the Hilbert transform), the number of electrodes 
receiving stimulation, etc., is that these changes respond to a PHYSICAL 
reality that will ultimately enable improved CI performance in the REAL WORLD. 
Making changes 
based on normal-hearing listeners’ perception isn’t the key to building better 
acoustic stimuli for hearing research (at least not for studying HA or CI 
efficacy).

I believed Ambisonics was a way of capturing real-world events and bringing 
them to a controlled laboratory environment (kind of a requisite for science, 
oui?). If I have to use HOA to accomplish this, then so be it. But most 
importantly, I need some means of showing that this proposed laboratory 
listening environment and stimuli are physically *real* so that it applies to 
ALL listeners (hearing-impaired or normal listeners) regardless of their 
perceptual judgments. If listening ability improves in the *improved* 
laboratory environment, then I have confidence that changes made to CIs and HAs 
will make a real difference where it counts: Outside of the laboratory.
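On "if I have to use HOA": a rough back-of-envelope, using the commonly cited kr ≲ N rule of thumb for Ambisonics (the field is physically reconstructed to good accuracy only within a sphere where wavenumber × radius stays below the order N), suggests why higher orders matter once the listening region must cover a whole head rather than a point. The head-radius value below is my own assumption for illustration.

```python
import math

def hoa_physical_bandwidth_hz(order, radius_m, c=343.0):
    """Rough upper frequency (Hz) up to which order-N Ambisonics can
    physically reconstruct the sound field over a sphere of the given
    radius, from the kr <= N rule of thumb: f = N * c / (2 * pi * r)."""
    return order * c / (2 * math.pi * radius_m)

head_radius = 0.0875  # ~average adult head radius in metres (assumption)
for order in (1, 3, 5, 7):
    f = hoa_physical_bandwidth_hz(order, head_radius)
    print(f"order {order}: physically accurate up to ~{f:.0f} Hz")
```

First order runs out of physical accuracy well below the frequencies that matter for speech, which is consistent with needing HOA if the goal is physical (not merely perceptual) equivalence around a CI wearer's head and microphone.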

Thanks for many great insights (and for reading my rants regarding hearing 
science).
Kind regards,
Eric C.
_______________________________________________
Sursound mailing list
[email protected]
https://mail.music.vt.edu/mailman/listinfo/sursound
