Greetings All,
I was intrigued by the post titled 'catching flies' because distance-to-source
information is an area of interest to me. As a few folks out there know, my
interest in Ambisonics (aside from music) is its application to hearing 
research. It is important for safety reasons that a hearing aid (HA) or 
cochlear implant (CI) user be able to determine a source's distance.

Side note: It's interesting that a mic would be compared to the ear. No one 
should expect a microphone alone to do what the ear or auditory system does. A 
quality mic can accurately convert pressure variations to analogous voltage or 
current variations. That's about it. A laboratory grade mic and audio-analysis 
hardware or software can readily measure changes in relative phase, intensity, 
and frequency, and do this over a very wide dynamic range. But converting 
pressure variations (or particle velocity) to voltages is just the beginning of 
a chain of events that ultimately results in a listener’s perception of pitch, 
location, loudness, etc. If the goal is to reproduce a real-world sound field 
around the listener's head, then we need to add the following to the chain: 
loudspeakers, signal processors, room acoustics, and so on. Of course the mic is
hugely important, and is at the heart of Ambisonics.

Now back to distance approximation:
I’m not sure how many readers are familiar with the book Ecological 
Psychoacoustics (edited by John Neuhoff). For those of you who are interested 
in loudness constancy, loudness of dynamically changing sounds, etc., this book
addresses aspects of psychoacoustics that aren’t found in the best books on 
psychoacoustics (e.g. An Introduction to the Psychology of Hearing by Brian C. 
J. Moore). One of my mentors and an all-around great guy, William (Bill) Yost, 
wrote, 'The chapters in Ecological Psychoacoustics suggest many reasons why 
combining the rigor of psychoacoustics with the relevance of ecological 
perception could improve significantly the understanding of auditory perception 
in the world of real sound sources. Ecological Psychoacoustics provides many 
examples of how understanding and using information about the constraints of 
real-world sound sources may aid in discovering how the nervous system parses 
an auditory scene.'

Although I don't subscribe to a single 'school' of psychology, I do buy into
James Gibson's idea that man (and animals) and their environments are 
inseparable (this is at the heart of Ecological Psychology). Here is where I 
find 'fault' or room for improvement with a lot of controlled laboratory 
experiments: The person (subject) is isolated from his/her environment, thus 
limiting the external validity of many experiments. As an example, there are 
ways of judging a sound source's distance that could be difficult to replicate 
using conventional playback systems in the laboratory. It has been hypothesized
that we are sensitive to the curvature (or flatness) of a wavefront, and that 
this shape provides cues as to distance. But when performing controlled tests 
of this hypothesis, free-field (anechoic) environments are limited in physical 
dimensions, so near-field / curved-wavefront conditions are difficult to avoid. 
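
For anyone who wants to put rough numbers on how quickly that curvature cue
fades, here is a minimal free-field sketch in Python (a toy model of my own:
a point source, two ear positions 17 cm apart, no head and no diffraction,
so take the exact values with a grain of salt):

import numpy as np

EAR_SPACING = 0.17   # metres, rough ear spacing (assumed value)
C = 343.0            # speed of sound in air, m/s

def curvature_cues(source_xy):
    # Level difference (dB) and arrival-time difference (ms) at the two
    # ear positions for a point source at (x, y) metres. The 1/r
    # amplitude law is what makes the near-field level cue depend on
    # distance.
    left = np.array([-EAR_SPACING / 2, 0.0])
    right = np.array([+EAR_SPACING / 2, 0.0])
    src = np.asarray(source_xy, dtype=float)
    r_l = np.linalg.norm(src - left)
    r_r = np.linalg.norm(src - right)
    ild_db = 20 * np.log10(r_l / r_r)
    itd_ms = (r_l - r_r) / C * 1e3
    return ild_db, itd_ms

# Source 45 degrees to the right, moved progressively farther away:
for r in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
    x, y = r * np.sin(np.pi / 4), r * np.cos(np.pi / 4)
    ild, itd = curvature_cues((x, y))
    print(f"r = {r:4.2f} m: ILD = {ild:5.2f} dB, ITD = {itd:5.3f} ms")

The time difference settles quickly toward its plane-wave value, while the
level difference due purely to wavefront curvature collapses toward zero
beyond a metre or two, which is one way of seeing why this cue is so hard
to study without getting into the near field.
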
Outside of the laboratory, reflections from surfaces are probable cues to
distance. In a cafeteria (for example), the signal-to-reverb ratio (the
direct-to-reverberant ratio) grows as a talker approaches us, thus giving a
viable cue as to the talker's distance. Naturally, intensity increases as
well, but intensity alone isn't a great cue without a reference. A distant
noise source could be equally loud yet markedly more reverberant, which tells
the listener that the source is far away. (A toy model below puts rough
numbers on this.)

How well HA and CI recipients judge distance (and can therefore safely avoid
disaster) is one of many questions I'm interested in. Again, I'm building a
playback system
designed to answer some of these questions. But if Ambisonics involves too much 
psychoacoustic 'trickery' (as some on the sursound list like to say), then it 
would not be the best recording/playback method for performing the 
aforementioned experiments. So far, though, re-creating the sound field as it
originally existed at the listener's head via Ambisonics (while letting the
ear and brain do the rest) seems to be one of the best
research tools at my disposal. (Note: HRTF via headphones isn't a solution 
because headphones physically interfere with behind-the-ear HAs and CIs).
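
To make the cafeteria example concrete, here is the toy model promised
above, again in Python. It uses the standard diffuse-field textbook picture:
direct energy falls as 1/r^2 while the reverberant level is roughly constant
through the room, so the direct-to-reverberant ratio in dB is about
20*log10(r_c / r), where r_c is the critical distance from Sabine's
relations. The room volume and RT60 below are made-up numbers, not
measurements:

import math

def critical_distance(volume_m3, rt60_s):
    # Approximate critical distance (m): the range at which direct and
    # reverberant energy are equal (omnidirectional source assumed).
    return 0.057 * math.sqrt(volume_m3 / rt60_s)

def drr_db(distance_m, r_c):
    # Direct-to-reverberant ratio in dB at a given talker distance.
    return 20 * math.log10(r_c / distance_m)

# A cafeteria-sized room: say 1500 m^3 with an RT60 of 1.2 s (assumed).
r_c = critical_distance(1500, 1.2)
print(f"critical distance ~ {r_c:.2f} m")
for r in (0.5, 1, 2, 4, 8):
    print(f"talker at {r:3.1f} m: DRR ~ {drr_db(r, r_c):6.1f} dB")

With those numbers the ratio swings by roughly 24 dB as a talker walks in
from 8 m to arm's length, which is the sort of change a listener could
plausibly use as a distance cue even when overall loudness is ambiguous.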

So how good is Ambisonics in reproducing the original auditory 'scene'? If the 
reconstructed wavefield is close to the original, then what happens when you 
record the Ambisonics system itself? Will the playback of this recording yield 
the same spatial information as the first recording did through an appropriate 
first- or n-order system? Or will the recording of the playback capture the 
so-called 'trickery,' thus making the recording-of-a-recording useless? Has
anybody tried this? I think I'll give it a go using a four-speaker arrangement
(horizontal only) while playing a live recording of people talking at eight
equally spaced locations around a Soundfield mic. Upon playback, I'll place the
Soundfield mic in the four-speaker arrangement, record this, and then listen to 
the recording of the recording. How much localization info do you believe will 
be lost? Could be fun, plus I’m a firm believer in learning by doing.
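
Before trying it with real speakers, the idealized version of the experiment
can be simulated in a few lines of Python. Everything here is an assumption
of mine: traditional B-format with the -3 dB W convention, a square rig, a
basic velocity decode, an anechoic room, speakers far enough away to count
as plane waves, and a perfect Soundfield mic at the sweet spot:

import numpy as np

SPEAKER_AZ = np.radians([45, 135, 225, 315])  # square rig (assumed layout)
N = len(SPEAKER_AZ)

def encode(s, az):
    # First-order B-format (W, X, Y) of a plane wave from azimuth az.
    return np.array([s / np.sqrt(2), s * np.cos(az), s * np.sin(az)])

def decode(b):
    # Basic (velocity) decode of (W, X, Y) to the square rig.
    w, x, y = b
    return (np.sqrt(2) * w
            + 2 * (x * np.cos(SPEAKER_AZ) + y * np.sin(SPEAKER_AZ))) / N

def reencode(feeds):
    # What an ideal Soundfield mic at the sweet spot would re-record,
    # treating each speaker as a plane-wave source.
    return sum(encode(f, az) for f, az in zip(feeds, SPEAKER_AZ))

for az_deg in (0.0, 22.5, 60.0, 110.0):
    b1 = encode(1.0, np.radians(az_deg))        # the first recording
    b2 = reencode(decode(b1))                   # recording of the playback
    az2 = np.degrees(np.arctan2(b2[2], b2[1]))  # apparent source direction
    print(f"source at {az_deg:6.1f} deg -> re-recorded at {az2:6.1f} deg")

In this ideal model the first-order field survives the round trip exactly
(the re-encoded W, X, Y come back unchanged), which suggests that whatever
localization information the real experiment loses will come from the room,
the speakers, and the mic rather than from the Ambisonic math itself.
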
Thanks for reading,
Eric