Hey everyone! Hope all are doing well.

Does anyone in this group have experience with AllRAD decoding to physical
loudspeakers? I'm looking for some advice or guidance on how best to
optimize this decoding approach for loudspeakers in different arrangements.
My current listening rooms are a 14.1, equally distributed, two-layer
arrangement laid out 7-7-.1 (with the lower and upper layers separated by a
45 degree elevation angle), and a two-layer 20.1 arrangement laid out
9-8-2 voice of god-.1.
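
(To make the 14.1 layout concrete, here is roughly how I'd write it out as
azimuth/elevation pairs in degrees. The even 360/7 azimuth spacing and the
0/45 degree layer elevations are my reading of the setup, and the .1 is fed
separately, outside the decode:)

    /* Sketch of the 14.1 layout as (azimuth, elevation) pairs in degrees.
     * Azimuths assumed evenly spaced (360/7 apart) around each ring; the
     * LFE/.1 channel is handled separately and is not part of the decode. */
    #define N_LS 14
    static const float lsDirsDeg[N_LS][2] = {
        /* lower ring: 7 speakers at 0 deg elevation */
        {    0.0f,  0.0f }, {   51.4f,  0.0f }, {  102.9f,  0.0f },
        {  154.3f,  0.0f }, { -154.3f,  0.0f }, { -102.9f,  0.0f },
        {  -51.4f,  0.0f },
        /* upper ring: 7 speakers at +45 deg elevation */
        {    0.0f, 45.0f }, {   51.4f, 45.0f }, {  102.9f, 45.0f },
        {  154.3f, 45.0f }, { -154.3f, 45.0f }, { -102.9f, 45.0f },
        {  -51.4f, 45.0f },
    };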

I've implemented the AllRAD algorithm on SHARC DSPs, using Aalto
University's Spatial Audio Framework
<https://github.com/leomccormack/Spatial_Audio_Framework> as my backend to
generate the decoder weighting coefficients. Simply put, I feed my
Ambisonic-encoded channels into the DSP mixing matrix, where each
crosspoint applies the appropriate weighting and phase inversion for that
particular loudspeaker before summation to its output (all of this purely
in the time domain).
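
(In other words, it's just a dense gain-matrix multiply per sample. Here is
a minimal sketch in C of what the DSP does; the names and buffer layout are
illustrative, not SAF's actual API:)

    /* Minimal sketch of a time-domain AllRAD decode as a gain-matrix
     * multiply. D is the (nLS x nSH) decoder matrix computed offline
     * (e.g. via SAF); names here are illustrative, not SAF's API. */
    void allrad_decode_frame(const float *const *shIn, /* [nSH][nSamples] */
                             float *const *lsOut,      /* [nLS][nSamples] */
                             const float *const *D,    /* [nLS][nSH]      */
                             int nSH, int nLS, int nSamples)
    {
        for (int ls = 0; ls < nLS; ls++) {
            for (int t = 0; t < nSamples; t++) {
                float acc = 0.0f;
                for (int n = 0; n < nSH; n++)
                    acc += D[ls][n] * shIn[n][t]; /* crosspoint weight,
                                                     may be negative */
                lsOut[ls][t] = acc;
            }
        }
    }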

So far I've managed to get a decent spatial impression; however, I've been
running into an issue where the top speakers seem to be playing back at too
high a level. I've tested against the SPARTA
<http://research.spa.aalto.fi/projects/sparta_vsts/> decoding plugins and
have experienced similar problems there, and I suspect the AllRAD approach
in the IEM <https://plugins.iem.at/> plugins would behave similarly.

*For example:*
Testing with some first-order AmbiX recordings I've made with my Zoom
H3-VR, I've noticed a lot of bleed into the top speakers. For instance, I
have a recording of a busy street, with cars passing by and people talking
around the mic. At times the top-loudspeaker energy contributions seem far
too high, causing a noticeable effect of cars driving above me. With the
voice of god arrangement in particular, it literally sounds as though the
car engine is on top of you.

I've tested with third-order Eigenmike samples retrieved from here
<https://mhacoustics.com/demos>, which do seem to give better vertical
separation. However, there is still the sense that the soundfield is 'too
high' and that we are listening to the scene above us, rather than being
immersed within it.

Now here's where some advice is needed. I'm not 100% sure whether this
problem is down to the nature of my recordings, the limitations of
first-order Ambisonics, the microphone, my loudspeaker configuration, or
the AllRAD decoding algorithm itself.

*Attempts at solutions:*
One thing I've done to try to eradicate the problem is to feed 'vertically
spread' loudspeaker elevation angles to the algorithm, to try to increase
vertical separation (see the sketch below). Another is to offset my centre
of origin, so that the angles to my loudspeakers are solved for different
elevations within the centre of the arrangement. This has helped somewhat,
but I'm not 100% happy with it as of yet.
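
(By 'vertically spread' I mean a simple remap of the elevation angles
before they go into the decoder design, something along these lines; the
function and the spread factor are my own invention:)

    /* Hypothetical pre-processing step: exaggerate loudspeaker elevations
     * before handing them to the AllRAD decoder design, so the design
     * "sees" more vertical separation than the physical layout has.
     * elevDeg is modified in place; spread > 1 pushes layers apart. */
    void spread_elevations(float *elevDeg, int nLS, float spread)
    {
        for (int i = 0; i < nLS; i++) {
            float e = elevDeg[i] * spread;
            /* clamp to the valid elevation range */
            if (e >  90.0f) e =  90.0f;
            if (e < -90.0f) e = -90.0f;
            elevDeg[i] = e;
        }
    }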

My other thought has been to perform better speaker tuning, compensating
for the propagation delays and attenuation caused by each speaker's
distance from the listening position, although based on this paper
<https://iaem.at/Members/frank/files/2014_frank_howtomakeambisonicssoundgood.pdf>,
this may not be the correct approach.
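
(The tuning I have in mind is the usual per-speaker delay/gain alignment to
the farthest speaker, something like the sketch below; the 1/r gain
convention is an assumption on my part:)

    #define SPEED_OF_SOUND 343.0f /* m/s at roughly 20 degrees C */

    /* Hypothetical distance alignment: delay nearer speakers so all
     * arrivals coincide with the farthest one, and attenuate them by 1/r
     * relative to that reference radius. r[] holds each speaker's
     * distance in metres, fs is the sample rate. */
    void distance_align(const float *r, int nLS, float fs,
                        float *delaySamples, float *gain)
    {
        float rMax = r[0];
        for (int i = 1; i < nLS; i++)
            if (r[i] > rMax) rMax = r[i];

        for (int i = 0; i < nLS; i++) {
            delaySamples[i] = (rMax - r[i]) / SPEED_OF_SOUND * fs;
            gain[i] = r[i] / rMax; /* 1/r law vs. the farthest speaker */
        }
    }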

Any feedback, tips, tricks or advice would be greatly appreciated!

Thanks in advance everyone!

-- 
Sean Devonport