Hello Haig,
Thank you very much for the note and for the link (below). Judging from my research and from what I’ve read over the years, pitch discrimination is difficult for cochlear implant (CI) recipients, thus making music enjoyment... well... not so enjoyable. As the video states, we take our ability to discern pitch for granted. I hope others will view the video/doc. In case other readers didn’t see your post, I’ve provided the link to the concert you recorded:
 
http://www.abc.net.au/arts/stories/s3051873.htm
 
A friend of mine (Louise Loiselle) received a grant from the NIH, as well as funding from Med-El (Austria), to research localization ability in bilateral (or bimodal) CI patients. Her doctoral committee includes world-renowned hearing scientists Bill Yost (well known in psychoacoustic circles) and Michael Dorman (who heads one of the world’s leading CI labs). I’m guessing Michael is aware of the CI music you recorded, but I’ll forward the link to Louise, Bill, and Michael. Others I know who will be interested in Robin Fox's composition include Drs. Chris Brown and Sid Bacon. It is because of my interest in creating virtual listening environments for studying CI efficacy in noise that I stumbled upon Ambisonics (and I've since added Ambisonics to my music-recording arsenal).

 
One question I asked myself not too long ago was where to “insert” a CI simulator when using normal-hearing listeners. When listening within a surround-sound field, vocoding* the signal going to the individual speakers (8 feeds in my octagonal setup) doesn’t make much sense. However, Ambisonic recordings could once again help, because I can rotate virtual mics in 3D environments using the B-format files created from live recordings. A virtual (monaural) mic can represent a CI mic (akin to a hearing-aid mic), and the signal picked up by that mic can be routed through a CI simulator/vocoder. This signal, in turn, ultimately goes to my (calibrated) ER-3A insert phones, L or R. This may seem trivial, but two microphones, properly spaced (similar to ORTF placement), allow me to simulate bilateral CI listening in a 3D environment. Again, the bilateral (versus binaural) signal is presented to the subject via the ER-3A insert phones. The two channels (L & R) can be processed individually, which would be the case for bilateral CI users. Bimodal (electric and acoustic) modelling is also possible. I use CI Sim software developed at the University of Granada, and another CI simulator developed by Dr. Qian-Jie Fu. Maybe some other readers have a better way to do this, or have presented CI-simulated sounds acoustically through a loudspeaker array?
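In case it helps anyone try the virtual-mic idea, here is a rough Python sketch (numpy + soundfile) of steering virtual mics out of a first-order B-format file. It assumes FuMa channel order (W, X, Y, Z) with the usual -3 dB gain on W; the file name, the cardioid pattern as a stand-in for a CI/hearing-aid mic, and the +/-55 degree splay are all my own placeholder choices, and of course the 17 cm ORTF spacing can't truly be recovered from a single-point first-order recording, only the angular part:

import numpy as np
import soundfile as sf

def virtual_mic(w, x, y, z, azimuth_deg, elevation_deg=0.0, pattern=0.5):
    """Steer a first-order virtual mic (pattern: 0 = figure-8 ... 1 = omni).
    pattern = 0.5 gives a cardioid."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return (pattern * np.sqrt(2.0) * w
            + (1.0 - pattern) * (x * np.cos(az) * np.cos(el)
                                 + y * np.sin(az) * np.cos(el)
                                 + z * np.sin(el)))

# Load a 4-channel B-format file (hypothetical name), FuMa order assumed.
bfmt, fs = sf.read("concert_bformat.wav")
w, x, y, z = bfmt.T

# Two coincident virtual cardioids splayed roughly like an ORTF pair.
left_feed  = virtual_mic(w, x, y, z, azimuth_deg=+55.0)
right_feed = virtual_mic(w, x, y, z, azimuth_deg=-55.0)

# Each feed would then go through the CI simulator/vocoder and on to the
# corresponding ER-3A insert phone.
sf.write("ci_feed_left.wav",  left_feed,  fs)
sf.write("ci_feed_right.wav", right_feed, fs)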


Anyway, I’m rambling now... Always a lot of ideas and thoughts (including a couple of worthwhile ones). Many thanks again for the link!
Best regards,
Eric

*CI simulators are, for the most part, specialized tone or noise vocoders. 
Envelope extraction can vary (e.g., half- or full-wave rectification with 
appropriate time constants or via a Hilbert transform), and the number of 
output channels varies depending on the number of virtual electrodes being 
simulated. A large (> 12) electrode count doesn't significantly improve speech 
understanding, and narrowing each channel's bandwidth may not improve frequency 
discrimination (narrowing the bandwidth works for normal-hearing listeners, but 
realistic simulations provide broad- or narrow-band noise, not pure tones, on 
the output channels). These are just a few of many variables.
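For anyone who wants to see the skeleton of such a vocoder, here is a bare-bones noise-vocoder sketch in Python/scipy. This is emphatically not the Granada or Qian-Jie Fu software, just an illustration of the generic signal flow; the channel count, band edges, filter orders, and 160 Hz envelope cutoff are arbitrary placeholders:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0,
                 env_cutoff=160.0):
    signal = np.asarray(signal, dtype=float)
    # Analysis bands spaced logarithmically between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    noise = np.random.randn(len(signal))
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Envelope: half-wave rectification followed by low-pass smoothing
        # (a Hilbert-based envelope would be the other common choice).
        env = np.maximum(sosfiltfilt(env_sos, np.maximum(band, 0.0)), 0.0)
        # Carrier: noise limited to the same band, modulated by the envelope.
        carrier = sosfiltfilt(band_sos, noise)
        out += env * carrier
    # Rough level match to the input.
    out *= np.sqrt(np.mean(signal**2) / (np.mean(out**2) + 1e-12))
    return out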