Hi Tom,

I don't have anything useful to contribute on your set of questions, but I did 
have a few for you:

When mixing for this project, did you lie down on the floor of your studio to 
check whether the mix works? Going from standing to lying down will effectively 
make the listeners inhabit 2pi space, with all the glories of proximity effect 
on bass/lower mids and a lack of rear reflections.

Also, do you use those KEF 107s in your normal workflow? :)

Thanks,
Mikhail

________________________________
From: Sursound <sursound-boun...@music.vt.edu> on behalf of Tom Slater 
<slater...@gmail.com>
Sent: May 24, 2021 07:52
To: sursound@music.vt.edu <sursound@music.vt.edu>
Subject: [Sursound] Ambisonics for larger spaces

Hi everyone,

I have a project coming up that requires me to create a spatial mix of some
music and then transfer it to a much larger space. The details are:

The space is circular with a 12m radius and a 3.5m ceiling.

We can use a large number of speakers (48 or more)

Budget is sufficient to consider all hardware processing options.

The audience will be lying down in concentric circles, facing the ceiling.
The outermost circle will be sitting with their backs against the wall,
tilted slightly up (there is a visual element to the show that requires
this audience placement).

I will be producing and mixing the music in my studio in which I have a
25.2 dome-shaped speaker layout (8, 8, 4, 4, 1.2). See images of the studio
here https://callandresponse.org.uk/

I use the Blue Ripple suite of plugins and Rapture 3D Advanced decoder.

When transferring mixes from my studio to other venues I usually build a 3D
model in Sketchup, design the speaker array for the venue and then extract
the cartesian speaker coordinates from the 3D Sketchup model, build a new
decoder in Rapture 3D Advanced and render a polywav for playback in the
venue.
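As an aside for anyone adopting a similar workflow: most decoder tools (Rapture 3D Advanced included) want speaker positions as azimuth/elevation/distance rather than raw Cartesian coordinates, so a small conversion step sits between the SketchUp export and the decoder build. A minimal sketch of that conversion, assuming the usual ambisonic convention (x forward, y left, z up, azimuth positive to the left) — not any specific tool's API:

```python
import math

def cart_to_sphere(x, y, z):
    """Convert a Cartesian speaker position (metres, x forward / y left
    / z up) to azimuth and elevation in degrees plus radius in metres."""
    r = math.sqrt(x * x + y * y + z * z)
    az = math.degrees(math.atan2(y, x))   # 0 deg = front, +90 deg = left
    el = math.degrees(math.asin(z / r))   # +90 deg = straight up
    return az, el, r

# Example: a speaker 3 m in front of the listener and 2 m up
print(cart_to_sphere(3.0, 0.0, 2.0))
```

Running the conversion over every exported speaker position gives a table that can be typed (or scripted) straight into the decoder configuration.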

This has always worked well but I just wanted to see if there was anything
I could do to improve this method, particularly when transferring to
larger spaces.

I read about this project at The Royal Danish Academy of Music
<https://www.digitalaudio.dk/page2169.aspx?recordid2169=1149> where they
used a DAD AX32 as a delay matrix to delay certain speaker channels to
create a virtual hemisphere. I guess they then build a bespoke decoder with
the same speaker positions as the virtual hemisphere and then render to
that.
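I don't know the details of the Danish Academy setup, but the geometry behind such a delay matrix is straightforward: speakers nearer the centre are delayed (and typically attenuated) so that their wavefronts arrive as if they sat on a common virtual sphere at the radius of the farthest speaker. A hedged sketch of just the delay calculation, assuming a simple equal-path-length model and ~343 m/s speed of sound:

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def virtual_sphere_delays(radii, target_radius=None):
    """Given each speaker's distance from the array centre (metres),
    return the per-speaker delay in milliseconds that places every
    speaker on a virtual sphere of target_radius (defaults to the
    farthest speaker, which gets zero delay)."""
    if target_radius is None:
        target_radius = max(radii)
    return [(target_radius - r) / SPEED_OF_SOUND * 1000.0 for r in radii]

# Speakers at 4 m, 6 m and 12 m from the centre: the nearer ones are
# delayed so all arrivals line up with the 12 m virtual sphere.
print(virtual_sphere_delays([4.0, 6.0, 12.0]))
```

In practice one would add distance-dependent gain compensation on top of the delays, but the delay matrix is the part that lets a decoder designed for a regular hemisphere drive an irregular physical array.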

I'm particularly interested in the community's experience using hardware
delay matrices in conjunction with ambisonics, OR instead of ambisonics:
systems such as Meyer Galaxy processors and their Space Map Go system, d&b
Soundscape, etc.

I’m also very interested to hear about experiences using systems like Space
Map Go or d&b Soundscape in conjunction with your favourite DAW, i.e. how
well did they fit into your creative sound design and composition workflows
as studio tools?

Thanks in advance everyone.

Best,

Tom
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.