Hi all,
I think one of the problems with all these discussions is that we tend to think of the distance of an audio object as being exactly the same sort of thing as the object's coordinates w.r.t. the listener - but it isn't, because, unlike direction, we humans can't determine distance absolutely, only as implied by the object's (and our own) interaction with the environment. For an unknown, stationary, distant source in an anechoic environment there are _no_ cues to distance, unless the listener can move and gain something via parallax or loudness variation. For close sources (i.e. in the curved-wavefront zone) there may be some cues from bass lift, but even these would be ambiguous for median-plane sources if head turning is not allowed (Greene-Lee head brace, anyone?)
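(For the curious: that near-field bass lift drops straight out of the spherical-wave equations - the radial velocity of a point source carries a 1/(jkr) term relative to pressure, so below roughly f = c/(2*pi*r) it rises at 6 dB/octave. A quick numerical sketch; the distances and frequencies are purely my own illustrative choices:

# Near-field bass lift of a point source: the radial particle velocity
# relative to pressure goes as (1 + 1/(jkr)), which rises at 6 dB/octave
# below f ~ c/(2*pi*r).
import numpy as np

c = 343.0                                  # speed of sound, m/s

def bass_lift_db(r, f):
    """Level of (1 + 1/(jkr)) relative to a plane wave, in dB."""
    k = 2.0 * np.pi * f / c                # wavenumber
    return 20.0 * np.log10(abs(1.0 + 1.0 / (1j * k * r)))

for r in (0.25, 2.0):                      # a close source vs. a distant one
    levels = ["%5.1f dB" % bass_lift_db(r, f) for f in (50, 100, 200, 400, 800)]
    print("r = %.2f m:" % r, "  ".join(levels))

At 0.25 m the lift is already about 13 dB at 50 Hz; at 2 m it has all but vanished, which is why the cue only helps for genuinely close sources.)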

  Dave M.

On Jul 20 2011, Dave Hunt wrote:

Hi,

Modelling distance, and controlling it on a per-source basis, is founded on sound physical principles and can be made 'convincing', even with low-order ambisonics. Agreed that it is 'bolted on', though synthesis (being the converse of analysis) inevitably involves controlling a large number of parameters to simulate what occurs naturally.
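To make 'per-source' concrete, here is a minimal sketch of the usual bolted-on chain applied to one dry mono source before encoding: 1/r gain, propagation delay, and a one-pole lowpass standing in for air absorption. The reference distance and the cutoff-vs-distance law are illustrative assumptions of mine, not anything standard:

import numpy as np

C = 343.0          # speed of sound, m/s
FS = 48000         # sample rate, Hz

def apply_distance(dry, r, r_ref=1.0):
    """Synthesise distance cues for one dry mono source at r metres:
    1/r gain, propagation delay, and a one-pole lowpass whose cutoff
    falls with distance (a crude stand-in for air absorption - the
    fc ~ 1/r law here is an illustrative choice)."""
    gain = r_ref / max(r, r_ref)                   # inverse-distance attenuation
    delay = int(round(r / C * FS))                 # propagation delay in samples
    out = np.concatenate([np.zeros(delay), gain * np.asarray(dry, float)])

    fc = 20000.0 * r_ref / max(r, r_ref)           # HF loss grows with distance
    a = np.exp(-2.0 * np.pi * fc / FS)             # one-pole coefficient
    y = np.empty_like(out)
    prev = 0.0
    for i, x in enumerate(out):
        prev = (1.0 - a) * x + a * prev            # y[n] = (1-a)*x[n] + a*y[n-1]
        y[i] = prev
    return y

# The same dry click rendered at 2 m and at 20 m: quieter, later, duller.
click = np.zeros(4800); click[0] = 1.0
near, far = apply_distance(click, 2.0), apply_distance(click, 20.0)
print("peaks:", near.max(), far.max(), "delays:", near.argmax(), far.argmax())

A real renderer would typically add per-order near-field compensation filters and a reverb send that grows with distance, but the principle is the same: every distance cue is explicitly synthesised, per source.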

Even WFS, as described in the literature, suggests that sources be recorded individually, as dry and as close as possible, with the 'scene' then reconstructed on playback. So it too synthesises distance.

Ciao,

Dave Hunt

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound

