Fri, 20 Dec 2013 22:02:26 -0800,
Aaron Heller <hel...@ai.sri.com> wrote:
> On Fri, Dec 20, 2013 at 7:40 PM, David Worrall
> <worr...@avatar.com.au>wrote:
> 
> > I remember reading that, with exposure, humans' audio-processing
> > "hardware" can adapt to/learn how to use a non-optimal HRTF, given
> > a bit of time.
> > Does anyone have a reference for this?
> >
> I don't know about 'non-optimal', but we can learn new ones by
> cross-calibration with other senses, and apparently we don't forget
> the old ones.

I have access to an Android tablet running Android 4.2, so I will train
myself using the Ambi Explorer app, hoping it won't "decalibrate" my
brain! ;-)

> Here, we demonstrate the existence of ongoing spatial calibration in
> the adult human auditory system. The spectral elevation cues of human
> subjects were disrupted by modifying their outer ears (pinnae) with
> molds. Although localization of sound elevation was dramatically
> degraded immediately after the modification, accurate performance was
> steadily reacquired. Interestingly, learning the new spectral cues
> did not interfere with the neural representation of the original
> cues, as subjects could localize sounds with both normal and modified
> pinnae.

So this means that a "good localizer" can be trained, and can learn to
use nonindividualized HRTFs. The "suspension of disbelief" is already
possible with stereo...


Sat, 21 Dec 2013 09:55:03 +0000,
dw <d...@dwareing.plus.com> wrote:
> http://128.102.119.100/publications/wenzel_1993_Localization_Head_Related.pdf 
>  

Thanks for the article. An excerpt from the conclusion:
..."the use of nonindividualized transforms primarily results in an
increase in the rate of front-to-back confusions. It is possible that
both this effect and the slight overall degradation in performance are
due to the subjects' inexperience with the task, and may be mitigated
by further training."

N.B.: 128.102.119.100 = human-factors.arc.nasa.gov


Sat, 21 Dec 2013 12:01:45 +0000,
dw <d...@dwareing.plus.com> wrote:
> When listening over speakers ("decoding ambisonics over 4 speakers is
> a better option"), you are listening via the superposition of several
> of your natural HRTFs with varying amplitudes and delays. In the time
> domain this is not equivalent to your HRIR for any real source.
> Interpolation between these several speaker-head IRs will occur at
> the sweet spot to give more or less correct ILD and ITD values, but
> outside of the sweet spot, and at high frequencies, the resulting IR
> is alien.
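
A minimal numeric sketch of that superposition, in Python/numpy.
Everything here is hypothetical: a one-tap cosine delay-and-gain model
stands in for measured HRIRs, and a basic mode-matching first-order
decode (g_i = (1 + 2*cos(theta - phi_i)) / N) is assumed for a square
array; the function names are made up for the example.

    import numpy as np

    FS = 48000            # sample rate, Hz
    HEAD_RADIUS = 0.0875  # m, nominal head radius
    C = 343.0             # speed of sound, m/s

    def decode_gains(source_az, speaker_az):
        # Basic (mode-matching) first-order horizontal decode for a
        # regular array: g_i = (1 + 2*cos(source_az - speaker_az_i)) / N.
        return (1.0 + 2.0 * np.cos(source_az - speaker_az)) / len(speaker_az)

    def toy_ear_ir(az, ear, length=256):
        # Crude speaker-to-ear impulse response: one tap whose delay and
        # gain follow a cosine head model. A stand-in for a measured
        # HRIR, just to make the superposition visible; not a real HRTF.
        rel = az - (np.pi / 2 if ear == "left" else -np.pi / 2)
        delay_s = (HEAD_RADIUS / C) * (1.0 - np.cos(rel))   # crude path delay
        gain = 0.6 + 0.4 * np.cos(rel)                      # crude head shadow
        ir = np.zeros(length)
        ir[int(round(delay_s * FS))] = gain
        return ir

    speakers = np.deg2rad([45.0, 135.0, 225.0, 315.0])  # square array
    source = np.deg2rad(30.0)                           # direction to encode

    g = decode_gains(source, speakers)

    # Effective left-ear IR of the decode: the gain-weighted sum
    # (superposition) of the four speaker-to-ear IRs.
    left = sum(gi * toy_ear_ir(az, "left") for gi, az in zip(g, speakers))

    print("decode taps (sample, amp):",
          [(i, round(left[i], 3)) for i in np.nonzero(left)[0]])
    print("real-source tap (sample) :",
          np.nonzero(toy_ear_ir(source, "left"))[0])

With measured HRIRs each term would be a full filter rather than a
single tap, but the structure is the same: the ear signal is a weighted
sum of several speaker-to-ear responses, and that sum matches no single
natural HRIR, even when the overall level and delay come out roughly
right at the centre.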

My understanding of Ambisonics is that the listener's head (in the
"sweetest spot") is exposed to one coherent approximation of a
reproduced (or synthesized) sound field, not to a set of directional
waves, one per speaker. Understanding Ambisonics is already difficult,
and I'm less comfortable with this description based on the
"superposition of natural HRTFs".
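
For what it's worth, here is a short check of that reading, under the
same hypothetical square array and basic decoder as in the sketch
above: at the array centre the decoded field matches the target plane
wave in both pressure and velocity, which is the sense in which the
head sits in one coherent approximation of the field.

    import numpy as np

    speakers = np.deg2rad([45.0, 135.0, 225.0, 315.0])  # square array
    source = np.deg2rad(30.0)
    g = (1.0 + 2.0 * np.cos(source - speakers)) / len(speakers)

    # Pressure at the centre: plain sum of the speaker gains.
    pressure = g.sum()

    # Velocity (Makita / rV) vector: gain-weighted sum of the speaker
    # unit vectors, normalized by the pressure.
    vx = np.sum(g * np.cos(speakers)) / pressure
    vy = np.sum(g * np.sin(speakers)) / pressure

    print("pressure   :", round(pressure, 6))                        # -> 1.0
    print("rV azimuth :", round(np.rad2deg(np.arctan2(vy, vx)), 2))  # -> 30.0
    print("rV length  :", round(np.hypot(vx, vy), 6))                # -> 1.0

Both views are compatible, I suppose: the reconstruction holds at the
centre and at low frequencies, while away from the sweet spot and at
high frequencies the ears really do see the superposition of distinct
speaker IRs that dw describes.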

--
Marc

