Well, it's the most accurate in terms of point-source localization. Then again, it's highly dependent on the rendering method used and the layout available in the final space. But at least in terms of archiving a 3D soundtrack it will be the most accurate, as rendering it to a lower-order ambisonic format will irrecoverably decrease the spatial resolution.
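To make the resolution argument concrete, here is a rough Python sketch (nothing format-specific; it just assumes the usual (N+1)^2 channel count and ACN channel ordering): truncating a 3rd-order stream to 1st order amounts to dropping channels 4-15, and nothing in the remaining four channels lets a decoder get them back.

    # Rough sketch: truncating a higher-order ambisonic (HOA) stream to a
    # lower order just drops channels, which is why the loss is irrecoverable.
    # Assumes ACN channel ordering, where order N uses (N+1)**2 channels.

    def hoa_channel_count(order):
        return (order + 1) ** 2

    def truncate_order(hoa_frame, target_order):
        """hoa_frame: list of per-channel samples for one time frame."""
        return hoa_frame[:hoa_channel_count(target_order)]

    frame_3rd = list(range(hoa_channel_count(3)))   # 16 channels (order 3)
    frame_1st = truncate_order(frame_3rd, 1)        # 4 channels (order 1)
    # Channels 4..15 are gone; no decoder can reconstruct them afterwards.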

As far as I imagine a possible format for object-oriented audio, one could easily extend it to hold an ambisonic encoding as a single object, in order to include ambiences and soundfield-mic recordings. But this would require either a fallback solution or a flag to ensure one has the right codec for the new audio object file.
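Just to illustrate what I mean (purely hypothetical field names, not taken from Atmos, MDA or any other real format), the metadata of such a file could look roughly like this, with the ambisonic bed carrying its own type flag plus a pre-rendered fallback:

    # Hypothetical sketch of per-object metadata; the field names are made up
    # and only illustrate the idea of an ambisonic "object" plus a flag/fallback
    # for decoders that cannot handle it.

    point_source = {
        "type": "point",
        "audio": "dialog_01.wav",          # mono stem
        "position": {"azimuth": 30.0, "elevation": 0.0, "distance": 2.5},
    }

    ambisonic_bed = {
        "type": "ambisonic",               # flag: needs an ambisonic decoder
        "audio": "ambience_sfmic.wav",     # e.g. a soundfield-mic recording
        "order": 1,
        "normalization": "SN3D",
        "fallback": "ambience_51.wav",     # pre-rendered 5.1 for old decoders
    }

    scene = {"objects": [point_source, ambisonic_bed]}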

On the other hand, in terms of ambiences, sound engineers in the 3D business use several 5.1 reverb plug-ins and render each of them over five virtual sources in a layout-independent format. Similarly, they could probably reconstruct a 3D recording from a soundfield mic. As I already said, this is far from elegant and not particularly accurate, but it is viable. Also, remember that defining the width of a source (through decorrelation) goes a long way towards making impressive ambiences with few point sources, and might be included in audio object file formats.
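For the curious: one common way to do that decorrelation is to feed each virtual source a copy of the signal filtered through its own random-phase all-pass FIR. A minimal Python/numpy sketch (filter length and the other parameters here are arbitrary, not taken from any particular product):

    # Minimal sketch of source widening by decorrelation: each virtual source
    # gets the same signal through a different random-phase all-pass FIR.
    import numpy as np

    def decorrelated_copies(mono, n_sources, fir_len=512, seed=0):
        rng = np.random.default_rng(seed)
        copies = []
        for _ in range(n_sources):
            # Unit magnitude, random phase -> approximately all-pass FIR
            phase = rng.uniform(-np.pi, np.pi, fir_len // 2 + 1)
            phase[0] = phase[-1] = 0.0       # keep DC and Nyquist real
            fir = np.fft.irfft(np.exp(1j * phase), fir_len)
            copies.append(np.convolve(mono, fir, mode="same"))
        return copies   # feed one copy to each virtual source / loudspeaker

    wide_ambient = decorrelated_copies(np.random.randn(48000), n_sources=5)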

But as far as I can see with my limited view of the "business" (Dolby is developing a proprietary audio object format, DTS is thinking of opening up their MDA format (nice name, eh?)), and if the workflow continues to concentrate on objects as opposed to sound fields, I would say that this is the current trend.


(FYI, I am not saying that I either like or dislike this trend.)

On 5/16/13 7:01 PM, Stefan Schreiber wrote:
Timothy Schmele wrote:


The industry is moving towards object-oriented encoding of 3D
soundtracks anyway. This is perhaps the least elegant, but the most
accurate, as every sound is stored in isolation from the others, with
exact meta information about its spatial position. Theoretically, you
could take this soundtrack and render it over any system you like, be it
ambisonics, higher-order ambisonics, VBAP or wave field synthesis,
among possibly others...

Audio objects with spatial position work only for the direct part of
sounds, a limitation which is often ignored. Reflections and ambience you
would actually have to render on some kind of cinema sound processor. Or
would you prefer to mix a real 3D soundfield in a studio environment,
anyway?

(The rendering process you were referring to above is just the rendering
of the direct sound parts on different cinema layouts.)

My fear is that audio objects work only if the system is very well
defined, say Dolby Atmos. (The speaker system has to be defined at least
more or less.) Then, maybe...  But this is actually not the convincing,
layout-independent solution people are looking for.

Sell something as the "most accurate" solution, and don't compare to
anything else?


Best,

Stefan Schreiber

P.S.: The industry (which industry?) < currently thinks > that object-oriented
encoding of 3D soundtracks is the "right way".

_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
