We just released our first piece of VR content with ambisonic audio to the public. It's a live recording on stage at a recent Paul McCartney concert; the audio was captured from the sound board and mixed in B-format. It's available for Google Cardboard now, with Oculus Rift (Mac/PC) coming soon: http://www.jauntvr.com/content/
Adam Somers
Jaunt, Inc.
http://jauntvr.com

On Thu, Nov 20, 2014 at 5:39 AM, Peter Lennox <p.len...@derby.ac.uk> wrote:

> Having quickly skimmed through the discussion, I'm not sure if I missed
> something, so apologies if I have.
>
> HRTF-derived binaural is never going to completely work unless you use
> your own, personalised HRTFs. Using generic HRTF datasets, the problems
> usually manifest as front-back reversals (which head tracking ought to
> disambiguate) and lack of externalisation (especially, I seem to
> remember, directly in front of the head).
> The externalisation problem can be extended to the range-perception
> problem, and range perception is very largely to do with indirect sound,
> just as, for instance, range perception does not work well in an
> anechoic room.
>
> I thought full HRTFs did take into account shoulder and torso
> reflections, though it seems likely that they are usually measured with
> the shoulders parallel to the line running through the ears?
>
> Dr Peter Lennox
> School of Technology,
> Faculty of Arts, Design and Technology
> University of Derby, UK
> e: p.len...@derby.ac.uk
> t: 01332 593155
> ________________________________________
> From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Bo-Erik
> Sandholm [bo-erik.sandh...@ericsson.com]
> Sent: 20 November 2014 09:30
> To: Surround Sound discussion group
> Subject: Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio
> Available?
>
> Some of the current subject was revitalized by me trying to be ironic
> about how the non-ambisonic guys are trying to solve the sound-field
> recording problem.
>
> Originally there was not (and I am not sure there currently exists) a
> solution for sound and picture with coherent scene movement (controlled
> by head direction) for the Oculus Rift and similar VR viewers, in video
> and sound environments other than those controlled by game engines.
>
> I want to listen to realistic FOA TetraMic recordings over headphones if
> possible. My thinking is strongly influenced by the current availability
> of < 20 USD 3D orientation sensors and low-cost processing power.
>
> Pointer on how to implement a low-cost head tracker, including
> head-tracking binaural software: http://www.matthiaskronlachner.com/?p=2091
>
> Low-cost processing: an M805 1.5 GHz (Cortex-A5) Android 4.4 stick for
> < 40 USD:
> http://www.geekbuying.com/item/MK808B-Plus-Amlogic-M805-Quad-Core-Android-4-4-Mini-TV-Dongle-1G-8G-WIFI-H-265-HW-Decode-Bluetooth-DLNA-Miracast---Black-337068.html
> (Note to self: does this Cortex-A5 support NEON? Answer = yes.)
>
> I believe there are at least a few glaring problems in the way binaural
> is currently generated via HRTFs.
>
> I think head tracking is part of the solution; that is, the sound-field-
> to-binaural decoding parameters should change when you move your head.
> - The goal of adding head tracking to binaural listening is to reach the
> point where the sound field is stationary and externalized.
> - Personally, when listening to other people's binaural recordings made
> with in-ear microphones, the experience can be like listening to a
> vertical sound-field slice/surface through the ears: a bit of
> externalization to the sides and up and down, but no depth!
> - Introducing head-tracking-controlled rotation of the sound field
> before the ambisonic-to-binaural conversion enables the sound field to
> stay in its "initial position" when the listener moves the head in any
> direction.
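As a minimal sketch of that last rotation stage (assuming FuMa-ordered
first-order B-format W, X, Y, Z and a tracker that reports yaw only; the
function name and signature below are illustrative, not taken from the
Kronlachner tools linked above, and pitch/roll would need a full 3x3
rotation of X, Y, Z):

import numpy as np

def rotate_foa_yaw(w, x, y, z, head_yaw_rad):
    """Counter-rotate B-format by the head-tracker yaw so the sound field
    stays fixed in the room. head_yaw_rad > 0 means the head has turned
    left (counterclockwise), so the field is rotated right by the same
    angle."""
    c, s = np.cos(head_yaw_rad), np.sin(head_yaw_rad)
    x_rot = c * x + s * y    # front-back component
    y_rot = -s * x + c * y   # left-right component
    return w, x_rot, y_rot, z  # W and Z are unchanged by a pure yaw rotation

# Per audio block, before the ambisonic-to-binaural decode, e.g.:
# w, x, y, z = rotate_foa_yaw(w, x, y, z, np.radians(head_yaw_degrees))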
> Things having potential for improvement are, in my opinion:
> - HRTFs are not individual. Maybe that is not such a large problem,
> since it can apparently be adapted to during listening; is the
> ear-to-ear distance the most important factor?
> - In the current state of the art, all HRTFs are created with the live
> victim locked in a head brace and the whole body stationary while the
> sound source is rotated horizontally/vertically relative to the subject,
> or in the worst case the subject is a decapitated Kunstkopf!
> - Maybe we should add a separate info channel for "torso tracking" in
> addition to the head tracking. See Note 1.
>
> Note 1
> I took a look in the mirror :-)
> When turning the head or nodding, the distance from my shoulders to the
> ear canals stays more or less the same.
> When nodding sideways (is that English?), the distance to the shoulders
> changes drastically.
>
> Today on my walk to work through a park, I walked past a distant
> white-noise point source (a large fan in an air-cooling installation).
> I decided to do a small psychoacoustic experiment with my HRTFs :-)
> - 1 - I rotated my whole body relative to the sound source, as if I were
> wearing a head brace.
> - 2 - I rotated my whole torso relative to the sound source, with my
> head pointed towards the sound source.
> - 3 - I rotated my head relative to the sound source with a stationary
> torso.
> - 4 - I forgot to nod sideways :-)
> The only change I could notice was for case 2: there was a large,
> noticeable impact on the white-noise spectrum. As I am not a musician, I
> cannot specify the frequency range that was most affected.
>
> I think this is a strong indication that head movement relative to the
> torso should be added to HRTF processing for binaural sound!
> Maybe it can be implemented and tested for a special case, that is:
> - HRTFs created for a fixed torso with the head turned, instead of
> turning the whole person?
> - Could this be created with the best resolution in the forward part of
> the listening sphere, as an optimization?
>
> Best Regards
> Bo-Erik Sandholm
> Stockholm, Sweden
>
> -----Original Message-----
> From: Sursound [mailto:sursound-boun...@music.vt.edu] On Behalf Of dw
> Sent: den 20 november 2014 00:02
> To: sursound@music.vt.edu
> Subject: Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio
> Available?
>
> On 19/11/2014 22:49, Paul Doornbusch wrote:
> > Can you give us some links to this please?
> >
> > Thanks,
> > Paul
>
> I'll give you a couple. If you record a sound in front of a dummy head,
> you would expect to hear it in front on replay through headphones.
> If you tilt your head backwards while listening, you would expect the
> auditory image to rotate with the head/ears/torso. Neither happens in
> all cases. And then there is the 'externalization' problem.
>
> > On 20 Nov 2014, at 9:46 AM, dw <d...@dwareing.plus.com> wrote:
> >
> >> There are numerous examples where the predictions of HRTF
> >> localisation are falsified by observations. What is one to think of
> >> the science?
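For completeness, here is a rough sketch of where the head-rotated
B-format goes next, i.e. the ambisonic-to-binaural step under discussion:
decode to a small horizontal ring of virtual loudspeakers and convolve
each feed with the HRIR pair measured for that direction. This is an
assumed, simplified pipeline (no shelf filters, psychoacoustic weighting,
or height handling), not the actual ambiX binaural decoder, and hrirs is
a hypothetical lookup into whichever HRTF set is in use. Bo-Erik's
proposal would, in effect, make that lookup depend on the head-to-torso
angle as well as on source direction.

import numpy as np
from scipy.signal import fftconvolve

def foa_to_binaural(w, x, y, hrirs, azimuths_deg=(0, 90, 180, 270)):
    """Naive sampling decode of FuMa-normalised first-order B-format to a
    horizontal ring of virtual speakers, each convolved with its HRIR
    pair. hrirs[az] -> (left_ir, right_ir) is a hypothetical HRTF lookup."""
    n = len(azimuths_deg)
    left, right = 0.0, 0.0
    for az in azimuths_deg:
        th = np.radians(az)  # azimuth counterclockwise from straight ahead
        # Basic first-order virtual-speaker feed (cardioid aimed at az).
        feed = (np.sqrt(2.0) * w + np.cos(th) * x + np.sin(th) * y) / n
        ir_l, ir_r = hrirs[az]
        left = left + fftconvolve(feed, ir_l)
        right = right + fftconvolve(feed, ir_r)
    return left, right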