[Sursound] A shared standard format for describing decoder loudspeaker setups?

2019-12-10 Thread Trond Lossius
Would it be an idea for the community of ambisonic developers to agree on a 
shared file format for describing loudspeaker setups as imported to or exported 
from various decoders?

I currently want to compare various decoders, such as Blue Ripple Rapture 3D, 
IEM AllRAD, Compass, Spat, etc. It would be really useful to be able to 
describe a loudspeaker setup only once, and afterwards be able to import it 
into all of these decoders.

If so, I believe it would be useful if the file format could contain both 
polar and Cartesian coordinates, gain adjustments for individual channels, and 
possibly also support virtual speakers (decoders that do not deal with virtual 
speakers could simply ignore these).
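
As a sketch of what such a shared format might look like, here is a hypothetical JSON layout carrying both coordinate systems, a per-channel gain, and a virtual-speaker flag. All field names are my own invention for illustration, not an existing standard; the helper fills in Cartesian coordinates from the polar ones so importers can use either representation:

```python
import json
import math

def polar_to_cartesian(azimuth_deg, elevation_deg, radius):
    """Convert polar coordinates (degrees, azimuth counter-clockwise
    from front) to Cartesian x (front), y (left), z (up)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.cos(az)
    y = radius * math.cos(el) * math.sin(az)
    z = radius * math.sin(el)
    return x, y, z

# Hypothetical layout description; field names are illustrative only.
layout = {
    "name": "quad",
    "speakers": [
        {"azimuth":   45.0, "elevation": 0.0, "radius": 2.0, "gain_db":  0.0, "virtual": False},
        {"azimuth":  -45.0, "elevation": 0.0, "radius": 2.0, "gain_db":  0.0, "virtual": False},
        {"azimuth":  135.0, "elevation": 0.0, "radius": 2.0, "gain_db": -1.5, "virtual": False},
        {"azimuth": -135.0, "elevation": 0.0, "radius": 2.0, "gain_db": -1.5, "virtual": False},
    ],
}

# Derive Cartesian coordinates from the polar ones before export.
for spk in layout["speakers"]:
    spk["x"], spk["y"], spk["z"] = polar_to_cartesian(
        spk["azimuth"], spk["elevation"], spk["radius"])

print(json.dumps(layout, indent=2))
```

A decoder that only understands one coordinate system could read whichever fields it needs and ignore the rest, in the same spirit as the virtual-speaker flag.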


Thanks,
Trond
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] FOA/HOA Impulse Responses

2019-10-10 Thread Trond Lossius
Hi,

> There are some in the ATK (Ambisonics Tool Kit) plugin pack for Reaper. I 
> haven't tried them so can't comment on how good they are, nor could I comment 
> on the legal/copyright issues but for some personal experimentation hopefully 
> they're usable.
> Regards,
> Paul.
> 
>> I was wondering if there were some freely available impulse response 
>> recordings online in FOA or HOA format that could be used for ambisonic 
>> convolution reverb experiments, before I head out and record my own?


Although a number of convolution kernels are included with ATK, these 
are not reverb impulse responses, but rather FIR filters used for various 
encodings and decodings. The kernels are maintained here:

https://github.com/ambisonictoolkit/atk-kernels

and they are licensed under a Creative Commons Attribution-ShareAlike 
3.0 Unported License:

https://github.com/ambisonictoolkit/atk-kernels/blob/master/LICENSE.md

Just a few days ago I saw a set of IRs announced by someone at 
Huddersfield; they will be presented at the upcoming AES conference in New 
York. There is also a set of impulse responses from the MUMUTH hall in 
Graz, but I’m not sure where to get them.

Cheers,
Trond


Re: [Sursound] multichannel VST recorder os x

2017-06-21 Thread Trond Lossius
Hi Oliver,

One option is to use Loopback to route audio from your not-able-to-record 
program to a program that is able.

Best,
Trond


> On 21 Jun 2017, at 11:40, Politis Archontis  
> wrote:
> 
> Hi Oliver,
> 
> A bit curious here, I’m not aware of any such VST, but isn’t the multichannel 
> recording usually a task for the host DAW or sample editor etc. ? For example 
> Reaper does super easy multichannel recording on OSX for up to 64ch on a 
> single bus, if I remember correctly.
> 
> Regards,
> Archontis
> 
> 
> 
> 
> 
> On 21 Jun 2017, at 00:14, Oliver Larkin 
> mailto:olilar...@googlemail.com>> wrote:
> 
> hello,
> 
> does anyone know of a VST plug-in that will record a valid 16 channel wav 
> file on os x? would rather not join mono files manually
> 
> thanks,
> 
> oli


[Sursound] Software plugin for SPS200

2017-03-20 Thread Trond Lossius
Hi,

A friend of mine is attempting to convert recordings made using the SoundField 
SPS200 mic on OS X, and is having major problems loading the AU and VST plugins 
for Mac. The plugin does not currently seem to be available from 
http://soundfield.com/, and I notice that my versions of the plugins are 
getting pretty old:

SPS200SurroundZone.component - 24 Feb 2010
SPS200SurroundZoneVSTi.vst - 01 Oct 2009

I’m on El Capitan. The AU works for me, but the VST does not.

- Is anyone aware of more recent versions of these plugins?
- Do they work with macOS Sierra?


Thanks,
Trond


Re: [Sursound] Multi-channel ambisonic audio in sync with video

2016-09-08 Thread Trond Lossius
AFAIR QuickTime supports up to 24 channels.

Best,
Trond

> On 09 Sep 2016, at 04:59, ByungJun Kwon  wrote:
> 
> Thanks for the information.
> Did you try high order ambisonic encoded audio in video file?
> What was the number of audio channels that you tried in VLC( or other 
> players)?
> 
> Best, byungjun
> 
> 
> On 09/09/2016 10:17 AM, Marc Lavallée wrote:
>> It's already possible to play a multichannel audio/video file using
>> VLC (or other players) while decoding to ambisonic; it's a matter of
>> configuring the system to redirect the multichannel audio of the video
>> player to an ambisonic decoder, using a sound server like Jack.
>> --
>> Marc
>> 
>> On Fri, 9 Sep 2016 09:57:07 +0900
>> ByungJun Kwon  wrote:
>> 
>>> Thank you very much for all your feedback and I think it's enough
>>> information to start with.
>>> Though if I were a professional visual person, I would prefer to play
>>> it from the dedicated media player like Quicktime or VLC rather than
>>> the one which resides inside DAW. I wish someone release media player
>>> with multi-channel audio support soon.
>>> Best, byungjun
>>> 
>>> 
>>> 
>>> On 09/08/2016 08:09 PM, Steven Boardman wrote:
 You really shouldn't need another machine. If your
 processor/graphics card and your media is fast enough then there
 shouldn't be a problem. If there is, it is most certainly drive
 speed,  so use ssd/raid or both. If you video codec/playback app
 won't play that many audio channels then just import video into
 Reaper. If it must be 2 machines then use the IAC bus in core midi
 for osx, or some other midi/time code networking protocol for
 others. I use IAC all the time and it works well.
 
 Best
 
 Steve
 
 Steve


Re: [Sursound] Multi-channel ambisonic audio in sync with video

2016-09-08 Thread Trond Lossius
In this blog post I demonstrate how to create QuickTime movies with 
multichannel sound, using QuickTime 7 Pro:


http://trondlossius.no/articles/1251-creating-quicktime-movies-with-surround-sound

I’ve used this in a number of installations with 6 to 8 channels of audio, and 
it works fine. This blog post documents an AppleScript to automatically play 
back QuickTime videos in fullscreen mode:


http://trondlossius.no/articles/1279-applescript-for-fullscreen-quicktime-video-playback---with-hack-to-avoid-pink-glow-mouse-cursor

My experience is that QT movies with multichannel audio play back slightly 
less reliably than QT movies with stereo only. With stereo I have several times 
set up a Mac mini, and it has run without a hitch and without any need for 
restarts in installations running for up to a month. With multichannel audio 
the movie tends to stall at some point after several hours of playing. For this 
reason it might be best to quit QT and shut the Mac down at the end of the day, 
and restart just in time for opening hours the next day. Doing so I have had 
playback with no problems in two projects with a total exhibition time of five 
months this year.

Best,
Trond



> On 08 Sep 2016, at 08:52, ByungJun Kwon  wrote:
> 
> Hello list,
> I have an exhibition of playing heavy duty video file together with 16ch 
> interleaved ambisonic wave file. Audio is rendered  through reaper using 
> ambisonic decoder so each channel is just feeding to relevant 16 speakers to 
> make audio playback simple. Since video file is very heavy I'd like to have 
> separate machine for processing video and audio.
> Could anyone on the list suggest cheap(or free) solution to sync 
> multi-channel audio and video?
> 
> Best, byungjun
> 
> 


[Sursound] Ambisonic Toolkit for Reaper - 1.0.0.b9

2016-07-24 Thread Trond Lossius
ATK for Reaper v.1.0.0.b9 is now available. Some key changes:

- Binaural decoders at all sample rates
- New encoders and decoders for bridging between FuMa and AmbiX (useful for VR 
authoring)

And two changes that break backwards compatibility:

- Azimuth is now positive in counter-clockwise direction
- Quad decoder now has 4 channels out only
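
Since the azimuth convention flipped, sessions automated against earlier betas may need their azimuth values negated. A minimal sketch (my own helper, not part of ATK) that negates an azimuth and wraps the result back into (-180, 180]:

```python
def flip_azimuth(az_deg):
    """Negate an azimuth (clockwise-positive -> counter-clockwise-positive)
    and wrap the result into the range (-180, 180] degrees."""
    az = -az_deg % 360.0          # negate, then wrap into [0, 360)
    return az - 360.0 if az > 180.0 else az

print(flip_azimuth(90.0))  # -90.0
```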

More details can be found here:

http://www.ambisonictoolkit.net/publications/2016/07/24/atk-reaper-1.0.0b9.html


Cheers,
Trond


Re: [Sursound] Conversion from FOA to TOA ? How to and why or not?

2016-07-02 Thread Trond Lossius
>> I use the Blue Ripple Harpex upsampler extensively in my installations, in 
>> fact it was me that asked Richard Furse if he could license the Harpex 
>> library and create this plugin. The benefit is that when I do installations 
>> with 16 or more speakers, distributed as two rings near the floor and high 
>> up at the wall, and use the Rapture 3D Advanced decoder, I tend to get much 
>> more stable sound field reproduction with more precise definition and 
>> localisation of sources within B-format field recordings in the space than a 
>> FOA would give on its own.
>> 
>> I got to try this briefly at BEAST FEAST in April in the venue used for 
>> presentations with 24 speakers as I demoed the ATK for Reaper plugins. That 
>> was probably the best setup that I have been able to get my hands on to date.
> 
> that’s interesting to hear, Trond, I was also wondering about how upsampling 
> would affect the reproduction of field recordings.
> 
> What microphone are you using for your recordings?

I’m using a SoundField SPS200. You’ll find documentation of two of the 
installations I’ve done in collaboration with Jeremy Welsh here:

http://trondlossius.no/works/56-the-atmospherics-ii---flags-flames-smoke-bridges
http://trondlossius.no/works/57-the-atmospherics-iii---till-it-rains-im-gonna-stay-inside

For both of these I used 16 speakers, 8 near the floor and 8 near the 
ceiling. It’s not enough vertical coverage to give full 3D (there are holes 
below and above), but I think of this approach more as a vertical “widening” 
of the horizon compared to 2D layouts.

We’ve just opened a new installation at the art museum in Førde, Sogn & 
Fjordane. I don’t have any audio/video documentation online, but Jeremy has 
posted a number of photos from all of the “Atmospherics” installations that 
we’ve done together, including this one:

http://jewelsh.blogspot.no/p/the-atmospherics.html

The latest work was installed in a large white cube at the museum, and the 
vertical dimension of the sound really comes into its own here. At the same 
time the rectangular shape of the space, the amount of reverb, and a number of 
acoustic oddities posed major challenges, and I ended up having to build a 
number of 4-channel filter and EQ JSFX effects for Reaper, which I used 
extensively to compensate. Still, the reverberation in this space makes 
localisation in the mid and low frequency range much blurrier than in the 
previous installations.

Cheers,
Trond




Re: [Sursound] Conversion from FOA to TOA ? How to and why or not?

2016-06-29 Thread Trond Lossius
Hi,

>> What is the simplest or best way to upscale FOA to TOA.
> 
> Harpex-based upsamplers yield good results.
> 
> Horizontal-only, you can use the Harpex-B plug-in. Full sphere, the Blue 
> Ripple Sound Harpex Upsampler.
> 
> http://harpex.net
> 
> http://www.blueripplesound.com/products/toa-harpex-upsampler-vst

I use the Blue Ripple Harpex upsampler extensively in my installations; in fact 
it was me who asked Richard Furse if he could license the Harpex library and 
create this plugin. The benefit is that when I do installations with 16 or more 
speakers, distributed as two rings near the floor and high up on the wall, and 
use the Rapture 3D Advanced decoder, I tend to get much more stable sound field 
reproduction, with more precise definition and localisation of sources within 
B-format field recordings, than FOA would give on its own.

I got to try this briefly at BEAST FEAST in April in the venue used for 
presentations with 24 speakers as I demoed the ATK for Reaper plugins. That was 
probably the best setup that I have been able to get my hands on to date.

Cheers,
Trond


Re: [Sursound] YouTube now supports Ambisonics (warning....part advertisement..)

2016-04-21 Thread Trond Lossius
> On 20 Apr 2016, at 21:16, Marc Lavallee  wrote:
> 
> I wonder why using uncompressed PCM instead of compressed AAC...

Is there a risk of compressed audio altering the phase between the channels, 
affecting the spatial image? 

Cheers,
Trond


Re: [Sursound] Dialogue in center channel,,, not always

2016-02-05 Thread Trond Lossius
For what it’s worth, I recently attended a Dolby Atmos screening of the most 
recent Star Wars movie. I ended up in the frontmost left seat, and I have to 
say I was surprised that nearly all of the sound of the screening appeared, for 
me, to come from one single speaker just off the screen to the left. I didn’t 
expect to get the same experience as if seated in the middle, but I still 
thought that Dolby Atmos mixing and reproduction would be much more resilient 
to off-centre listening.

But then again, I’m generally pretty underwhelmed by how the Dolby Atmos 
format is used in most blockbuster Hollywood productions. Apart from a few 
swishes here and there as mere audible spectacle, there seems to be a lack of 
understanding and imagination with respect to how spatial sound can help invite 
a deeper sense of immersion in the places where the story unfolds.

Cheers,
Trond


> On 05 Feb 2016, at 15:43, jim moses  wrote:
> 
> I the state the obvious - something I imagine everyone on this list
> understands..
> 
> The main reason to keep dialogue in the center channel is that panned
> phantom images are unstable for most of the audience in a theater. Panning
> to a center speaker fixes the location for everyone, instead of moving to
> the speaker you are sitting closest to.
> 
> jm
> 
> On Fri, Feb 5, 2016 at 8:29 AM,  wrote:
> 
>> Yes, in Gravity this is easily possible in the opening shot: it's a super
>> long
>> wide shot where Clooney is off picture in the beginning. There is plenty
>> of time to absorb the scene and the
>> position of everything. This is an obvious opportunity to pan dialogue as
>> it is really underlining the
>> dramaturgical intent. And this always is the criteria.
>> If the picture cuts are too fast (and this limit is reached soon),
>> following the
>> perspective panning-wise exaggerates the edits, makes them obvious
>> and potentially destroys the seamless flow of the narration. That's the
>> main reason
>> for keeping the dialogue in the center. If the shots are long enough,
>> if there are off-voices, if there is movement or something similar in a
>> dramaturgical sense,
>> then panning the dialogue to the position on (or off) screen may enhance
>> the sense of space
>> and the story. More than 90% of all dialogue is in the center, though. But
>> yes, sometimes
>> it is an improvement. And yes, in animation the voices are super-clean as
>> they are recorded
>> in a studio - and thus they can be panned easily if wanted. With location
>> sound, there may be considerable background sound
>> behind the voices - and if such a signal is panned, the (mono-) background
>> jumps around as well. Very noticable
>> and very disturbing. Location audio is very much used these days by many
>> directors (Tarantino, e.g.).
>> Robert Altman was famous for insisting on 100% location dialogue. This
>> makes panning dialogue almost impossible,
>> even if it would enhance the story.
>> 
>> Best, Florian
>> 
>> 
>> 
>> Von: Sursound [sursound-boun...@music.vt.edu]" im Auftrag von
>> "Courville, Daniel [courville.dan...@uqam.ca]
>> Gesendet: Donnerstag, 04. Februar 2016 18:15
>> An: sursound@music.vt.edu
>> Betreff: Re: [Sursound] Dialogue in center channel,,, not always
>> 
> And which director takes care about stereo compatible picture editing?
>>> 
>>> Alfonso Cuaron is possibly one such director. Both Children of Men and
>>> Gravity often panned the dialogue to match the position of the actor on
>>> screen. It's very noticeable right at the start of Gravity; first you hear
>>> George Clooney's voice coming from far right and as the shot zooms in and
>>> you start to see him appear on the far right of the screen, his dialogue
>>> moves across to match.
>> 
>> Every Pixar animation movies have panned dialogues.
>> 
>> Since the voices are recorded in individual booth, they start audio
>> post-production with separate tracks for every voices, making panning
>> easier and more effective, although making the mix more time consuming.
>> 
>> - Daniel
>> 
> 
> 
> 
> -- 
> Jim Moses
> Technical Director/Lecturer
> Brown University Music Department and M.E.M.E. (Multimedia and Electronic
> Music Experiments)

Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Trond Lossius
> On 25 Jan 2016, at 01:37, Marc Lavallée  wrote:
> 
>> As anything simpler but functional might be sufficient and even 
>> preferable in most cases:
>> 
>> - Does ATK define an HRTF interface which is sufficiently flexible to
>> be the base for a real < standard > ?
> 
> Not really, but you should ask the maintainers of ATK.

I don’t think ATK makes sense as a standard. The ATK sets are pretty 
application-specific: for each HRTF they contain 8 impulse responses, so that 
each of the W, X, Y and Z channels can be convolved to the left and right ear. 
These are calculated as a reduction from larger sets of HRTF measurements. A 
general HRTF measurement set contains much more information (measurements for 
multiple azimuths and elevations). As such, SOFA seems to me an interesting 
move towards standardisation.

What would be useful, though, is a standard solution for generating 
impulses for ATK from SOFA data.
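
The decode step those kernel sets serve can be sketched as follows: each ear signal is the sum of the four B-format channels, each convolved with its own FIR kernel. This is my own illustration with naive direct convolution and dummy single-tap kernels, not ATK code; real kernels are of course much longer:

```python
def convolve(signal, fir):
    """Naive direct-form FIR convolution (full output length)."""
    out = [0.0] * (len(signal) + len(fir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(fir):
            out[i + j] += s * h
    return out

def binaural_decode(bformat, kernels):
    """bformat: dict of W/X/Y/Z sample lists.
    kernels: {'left': {'W': fir, ...}, 'right': {...}} -- one FIR per
    B-format channel per ear, as in ATK's binaural kernel sets."""
    ears = {}
    for ear, firs in kernels.items():
        n = max(len(bformat[ch]) + len(firs[ch]) - 1 for ch in "WXYZ")
        mix = [0.0] * n
        for ch in "WXYZ":
            for i, v in enumerate(convolve(bformat[ch], firs[ch])):
                mix[i] += v
        ears[ear] = mix
    return ears

# Toy example: unit-impulse kernels, so each ear is just the channel sum.
b = {"W": [1.0, 0.5], "X": [0.0, 0.5], "Y": [0.0, 0.0], "Z": [0.0, 0.0]}
k = {ear: {ch: [1.0] for ch in "WXYZ"} for ear in ("left", "right")}
print(binaural_decode(b, k)["left"])  # [1.0, 1.0]
```

A SOFA-to-ATK generator would essentially produce the eight `firs` entries above from a full measurement grid.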

Best,
Trond


Re: [Sursound] Ambisonic recordings and noise reduction - best practice?

2016-01-10 Thread Trond Lossius
Thanks for all the replies to my initial question.

I have done some testing with all three approaches today. My conclusion so far 
is that running noise reduction on the B-format signal as two stereo pairs (WX 
and YZ) ends up distorting the sound field (spatial artefacts, less pronounced 
spatialisation). Doing it on the A-format signals sounds much better. I’ve not 
yet reached any conclusion on whether there are any differences between 
processing the original recorded A-format signals, or A-format signals decoded 
from B-format using ATK.

But workflow-wise it seems faster to do this using the VST/AU plugins rather 
than the standalone application, with all the splitting and linking of audio 
files required in that process.

Best,
Trond



[Sursound] Ambisonic recordings and noise reduction - best practice?

2016-01-04 Thread Trond Lossius
Hi,

I’ve done some ambisonic recordings of ambience in very quiet environments 
using the SoundField SPS200 mic and a Sound Devices 788T recorder. The 
resulting recordings are A-format, and I use the SoundField SPS200 plugin to 
convert them to B-format. The recordings have a bit of hiss that I suspect 
comes from the recorder; the sound is very similar to recordings done using the 
788T with no mic connected, at the same recording levels as the ambience 
recordings. I’d like to use iZotope RX4 to reduce the hiss, and I’m wondering 
what would be best practice when dealing with 4-channel A- and B-format 
signals.

RX4 is not able to deal with multichannel files, so I’ll have to split the 
4-channel recording into two stereo tracks, do noise reduction on them, and 
then merge them back into four channels again. However, I can envisage three 
different approaches to this:

1) I treat the original A-format recordings, and afterwards convert the 
processed files to B-format using the SPS200 plugin.

2) I treat the converted B-format recordings.

3) I decode the B-format files to A-format using the BtoA decoder from the 
Ambisonic Toolkit, remove noise, merge, and then re-encode to B-format using 
the ATK AtoB encoder.
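
For reference, with an idealized tetrahedral layout the A-to-B transform of option 3 is just a 4x4 matrix, and conveniently it is its own inverse. This is a sketch under assumptions: capsules ordered FLU, FRD, BLD, BRU, and no compensation for capsule spacing (which the SPS200 plugin and the ATK kernels do apply):

```python
# Idealized A<->B conversion matrix for a tetrahedral mic with capsules
# assumed ordered FLU, FRD, BLD, BRU. Real converters also equalize for
# capsule spacing; this matrix is the mixing step only.
M = [
    [0.5,  0.5,  0.5,  0.5],   # W (omni)
    [0.5,  0.5, -0.5, -0.5],   # X (front positive)
    [0.5, -0.5,  0.5, -0.5],   # Y (left positive)
    [0.5, -0.5, -0.5,  0.5],   # Z (up positive)
]

def apply(matrix, frame):
    """Multiply a 4x4 matrix with one 4-sample frame."""
    return [sum(m * f for m, f in zip(row, frame)) for row in matrix]

a_frame = [1.0, 0.2, -0.3, 0.4]   # one sample from each capsule
b_frame = apply(M, a_frame)       # A-format -> B-format
roundtrip = apply(M, b_frame)     # the matrix is its own inverse
print(roundtrip)                  # recovers a_frame (within float precision)
```

The involution property is why option 3 is lossless in principle: any spatial damage would come from the noise reduction itself, not from the matrixing.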

I’m sure the first option will be the best when considering noise reduction 
only, as it reduces hiss before the four channels get mixed. At the same time 
I’m concerned that it might offset phase information between channels, and 
hence interfere with how the SPS200 plugin compensates for the distance between 
the mic capsules. For the same reason I’m thinking that processing the B-format 
signal might interfere with phase information between the four channels and 
hence distort the spatial information.

Does anyone else have experience with this, and a recommendation for which 
workflow leads to the best results?


Thanks,
Trond


[Sursound] Ambisonic Toolkit for Reaper - 1.0.0.b6

2015-11-14 Thread Trond Lossius
Hi,

I’ve just uploaded a new version of the Ambisonic Toolkit for Reaper. Due to 
changes in the JSFX implementation for Reaper 5, the names of the plugins 
changed and became pretty useless. This is now fixed. The new version can be 
downloaded here:

http://www.ambisonictoolkit.net/wiki/tiki-index.php?page=Downloads#JS_FX_plugins_for_Reaper

Best,
Trond


[Sursound] Workshop – La Spatialisation Sonore @ SCRIME, Bordeaux

2015-09-14 Thread Trond Lossius
For anyone in or near Bordeaux tomorrow (Tuesday): I’m contributing to a 
workshop on spatial sound at SCRIME - Studio de Création et de Recherche en 
Informatique et Musique Électroacoustique. I plan to present documentation from 
some recent artistic projects, discuss my use of ambisonic field recordings, my 
artistic research into how this can be used as material for my sound 
installations, and my workflow when using such recordings. As part of this I’ll 
also give an introduction to and demonstration of the Ambisonic Toolkit for 
Reaper.

The workshop takes place in the Hémicyclia studio in the LaBRI building, and 
we’ll have a 16 channel 3D system available for listening demonstrations. More 
info here:

http://scrime.labri.fr/blog/workshop-la-spatialisation-sonore/


Best,
Trond


Re: [Sursound] Boids for Ambisonic Panning

2015-04-12 Thread Trond Lossius
Hi,

Boids and flocking algorithms have been used in quite a few works developed at 
ICST in Zurich. I have the impression that this has often been done by 
implementing boids/flocking in openFrameworks and sending OSC messages to Max, 
where ambisonic processing is done using the ICST ambisonic externals. You’ll 
find several publications on the topic here:

https://www.zhdk.ch/index.php?id=icst_publications_main

The ICST externals for Max use gain adjustments to emulate distance. When 
sources are located within the unit circle, the order of the encoding is 
modified so that a source located at the very centre is omni only.
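
A rough sketch of that idea (my own simplification for illustration, not the actual ICST implementation): outside the unit circle, attenuate with distance; inside it, progressively down-weight the higher-order components so that at the centre only the omni (order 0) component remains:

```python
def distance_gain(r):
    """Simple 1/r attenuation outside the unit circle, unity inside."""
    return 1.0 / r if r > 1.0 else 1.0

def order_weights(r, max_order):
    """Per-order gains: full weight at or outside the unit circle;
    inside, fade order m with r**m so only order 0 survives at r = 0."""
    if r >= 1.0:
        return [1.0] * (max_order + 1)
    return [r ** m for m in range(max_order + 1)]

print(order_weights(0.0, 3))  # [1.0, 0.0, 0.0, 0.0]
print(order_weights(1.0, 3))  # [1.0, 1.0, 1.0, 1.0]
```

Multiplying each order of the encoded signal by these weights gives the "source collapses to omni at the centre" behaviour described above; the exact fade curve is a design choice.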

Although not ambisonic, I would also look towards the work on spatialisation by 
John Chowning, and how he modifies gain, reverb and dry/wet balance to emulate 
distance. This is discussed in detail, e.g., in the book on computer music by 
Dodge and Jerse. A recent piece by John Chowning was presented in an outdoor 
setting during the joint ICMC / SMC 2014 conference in Athens, and I have to 
say that I was deeply impressed with how he managed to create illusions of 
sound objects moving in space using only four speakers.

Ircam Spat is a third option to check out. Spat can work with ambisonic as well 
as other spatialisation algorithms, and also emulates air filtering and reverb.


Best,
Trond




> On 12 Apr 2015, at 18:35, Ricky Graham  wrote:
> 
> Hi Dave,
> 
> I guess I’d like to hear how others have mapped the output from boids to 
> whatever ambisonic panner they’re using. I will have a look for the ICMC 
> paper. 
> 
> While I have your attention, I can use ambilib~ as an example. Would you 
> simply map the cartesian to polar output to the azimuth (phase) and distance 
> (amplitude) parameters of your externals? 
> 
> I hope you enjoyed the croissants.
> 
> Ricky
> 
>> On Apr 12, 2015, at 12:00 PM, sursound-requ...@music.vt.edu wrote:
>> 
>> Message: 4
>> Date: Sun, 12 Apr 2015 09:12:32 +0100
>> From: Dave Malham mailto:dave.mal...@york.ac.uk>>
>> To: Surround Sound discussion group > >
>> Subject: Re: [Sursound] Boids for Ambisonic Panning
>> Message-ID:
>>  > >
>> Content-Type: text/plain; charset="utf-8"
>> 
>> Hi Ricky,
>>   Boids has been around for a long time and I'm certain it's been
>> used quite a few times in electroacoustic compositions - in fact, I seem to
>> remember one of our students on the Music Technology course here at York
>> doing so. Trouble is, I'm darned if I can remember his or her name (which
>> will be no surprise to anyone who's been on that course - my apologies to
>> the person concerned if they are reading this :-).  Your best bet would be
>> to look through the Proceedings of the ICMC from around '87 and maybe the
>> Computer Music Journal.
>> 
>> I'm not sure what you mean by "...difficult to scale in terms of
>> distance".  Are you referring to the mapping of the notional distances in
>> the boid simulation to the things which we use perceptually to deduce the
>> distance of a sound source? That's opening up an interesting can of worms!
>> Do a search in the archives for "giant geese" to see the fun we had talking
>> (arguing) about it last time. A lot will depend on wether the sound sources
>> are "familiar" or not - we can easily tell that a thunder storm (or a jet) is
>> distant or nearby because we are familiar with them as "perceptual objects"
>> and can construe them within the acoustic space we are listening in but
>> with constructed sounds that we are not familiar with we are stuck with
>> "immediate" (and to some extent, unreliable) cues like direct to
>> reverberant ratios, the pattern of early reflections, HF rolloff and maybe
>> distortion (loud sounds have distortion which increases with distance). If
>> he's currently on the list, I suspect Peter Lennox will jump in here and
>> tell me I've got it all wrong :-).
>> 
>>  Anyway,  I'm sure much/all of this is old news for you but I had to have
>> something to occupy a Sunday morning whilst waiting for the croissants to
>> warm up 
>> 
>>   All the best
>> 
>> Dave
> 


Re: [Sursound] Most convincing B-Format to binaural transcoding

2015-02-11 Thread Trond Lossius
Hi,

I’m back and forth between Harpex and the ATK for Reaper binaural decoder. With 
the ATK plugin I find that the CIPIC 11 HRTF works well for me, better than any 
of the responses from the Listen set. With Harpex, I always have to set the “EQ 
method” to “None”, otherwise the sound gets badly coloured. Harpex has more 
distinct localisation, while ATK has better immersion. As I work towards 
installations that will be using speakers in a room, I find it useful to switch 
back and forth; it’s a bit like switching between different pairs of speakers 
while mixing.

As for the most convincing I have experienced: that was a demo at Ville 
Pulkki’s lab in Helsinki, using DirAC and head tracking. They played AB tests 
for me in an anechoic chamber, going back and forth between external speakers 
and binaural. Along the way I became convinced that they were playing tricks on 
me, playing all of it from the speakers. Gradually I started noticing that if I 
froze the position of my head, the sound would suck inwards after a second or 
so, disrupting the realistic illusion added by the head-tracking.

So, needless to say, I have jumped on the NEOH bandwagon on Kickstarter, and 
hope for an SDK.

Best,
Trond


> On 03 Feb 2015, at 17:01, Courville, Daniel  wrote:
> 
> Hi,
> 
> I'm looking for comments on B-Format to binaural conversion/transcoding:
> what's the most convincing you've heard? What was the software setup?
> 
> Thanks,
> 
> Daniel
> 



[Sursound] Ambisonic Toolkit for Reaper 1.0 - public beta 4

2014-10-08 Thread Trond Lossius
A new beta of ATK for Reaper is now available:


http://www.ambisonictoolkit.net/wiki/tiki-index.php?page=Downloads#JS_FX_plugins_for_Reaper

This is a set of JS plugins for encoding, processing and decoding first-order 
Ambisonics. The two main changes since beta 3 are:

- Fixed an issue with matrix-based encoders when tracks have an insufficient 
number of channels (thanks to Fergler at the Reaper forum for reporting this).
- A new utility plugin, 4channels, can be used when batch-processing 
recordings from portable hard disk recorders (such as the Tascam DR-680 or 
Sound Devices 788T) in order to extract the ambisonic channels as a 4-track 
sound file.

I have received several mails asking how I go about preparing B-format sound 
files myself. I have prepared a screencast demoing how I do this:

http://trondlossius.no/articles/1197-atk-reaper-4-and-a-screencast

Best,
Trond


Re: [Sursound] Ambisonic Toolkit (ATK) for Reaper - Public beta v.1.0.b3

2014-09-12 Thread Trond Lossius
Hi Marc,

Could by, I don’t know. I don’t work on Linux myself, but if the ReaJS plugin 
work on Linux with Wine I’m pretty sure that ATK for Reaper will work as well, 
as the plugins are all just text files processed on load by Reaper (or ReaJS).

If you try it, I would be curious to hear if it works out.

Best,
Trond


On 12 Sep 2014, at 20:28, Marc Lavallée  wrote:

> Hi Trond.
> 
> On the download page
> ( http://www.ambisonictoolkit.net/wiki/tiki-index.php?page=Downloads )
> it says:
> "Using the ReaJS plugin, part of the ReaPlugs VST FX Suite(external
> link), ATK for Reaper can also be used with other DAWs on Windows,
> provided that they support VST plugins."
> 
> On the ReaPlugs/ReaJS page ( http://reaper.fm/reaplugs/ ), it says:
> "General features of ReaPlugs:
> Support for Windows 98/ME/2K/XP/Vista/W7, WINE"
> 
> Is it possible to use "ATK for Reaper" on Linux with WINE?
> 
> --
> Marc
> 
> 
> On Fri, 12 Sep 2014 20:01:01 +0200, Trond Lossius wrote :
>> Hi,
>> 
>> Somehow I managed to get the link wrong in the previous mail. The
>> correct link is here:
>> 
>>  http://www.ambisonictoolkit.net/wiki/tiki-index.php
>> 
>> Best,
>> Trond
>> 
>> 
>> On 12 Sep 2014, at 11:54, Trond Lossius  wrote:
>> 
>>> We are pleased to announce that ATK for Reaper is now available as
>>> a public beta, and can be downloaded here:
>>> 
>>> http://www.ambisonictoolkit.net/wiki...page=Downloads
>>> 
>>> While there is a well-established workflow for stereo production in
>>> DAWs, options have been more limited when working with Ambisonics.
>>> The Ambisonic Toolkit (ATK) brings together a number of classic and
>>> novel tools for the artist working with Ambisonic surround sound.
>>> The toolset is intended to be both ergonomic and comprehensive,
>>> framed so that the user is enabled to ‘think Ambisonic’. By this,
>>> the ATK addresses the holistic problem of creatively controlling a
>>> complete soundfield, facilitating spatial composition beyond simple
>>> placement of sounds in a sound-scene. The artist is empowered to
>>> address the impression and imaging of a soundfield — taking
>>> advantage of the native soundfield-kernel paradigm the Ambisonic
>>> technique presents.
>>> 
>>> ATK has previously only been available for public release via the
>>> SuperCollider real-time processing environment. Using the JSFX
>>> text-based scripting language, the ATK has now been ported to
>>> plugins for Reaper. Several of the plugins, in particular the
>>> transforms, offer intuitive graphical user interfaces that help
>>> visualise the effect of the various transforms. The plugins work
>>> with all operating system and processor versions that Reaper
>>> supports. On the Windows platform the plugins can also be used with
>>> other VST-hosts by means of the ReaJS ReaPlugs plugin [1].
>>> 
>>> The ATK toolset has been gathered, authored and developed by
>>> Joseph Anderson over a number of years, and is now a collaborative
>>> open source project. The port of ATK to Reaper has been done by
>>> Trond Lossius of BEK, Bergen Center for Electronic Arts.
>>> 
>>> A paper on ATK for Reaper by Trond Lossius and Joseph Anderson will
>>> be presented at the upcoming joint ICMC | SMC conference in Athens.
>>> If you are attending the conference, please drop by during the
>>> poster session next Thursday!
>>> 
>>> 
>>> Best,
>>> Trond
>>> 
>>> 
>>> 
>>> [1] http://reaper.fm/reaplugs/index.php
>> 
> 



Re: [Sursound] Ambisonic Toolkit (ATK) for Reaper - Public beta v.1.0.b3

2014-09-12 Thread Trond Lossius
Hi,

Somehow I managed to get the link wrong in the previous mail. The correct link 
is here:

http://www.ambisonictoolkit.net/wiki/tiki-index.php

Best,
Trond


On 12 Sep 2014, at 11:54, Trond Lossius  wrote:

> We are pleased to announce that ATK for Reaper is now available as a public 
> beta, and can be downloaded here:
> 
>   http://www.ambisonictoolkit.net/wiki...page=Downloads
> 
> While there is a well-established workflow for stereo production in DAWs, 
> options have been more limited when working with Ambisonics. The Ambisonic 
> Toolkit (ATK) brings together a number of classic and novel tools for the 
> artist working with Ambisonic surround sound. The toolset is intended to be 
> both ergonomic and comprehensive, framed so that the user is enabled to 
> ‘think Ambisonic’. By this, the ATK addresses the holistic problem of 
> creatively controlling a complete soundfield, facilitating spatial 
> composition beyond simple placement of sounds in a sound-scene. The artist is 
> empowered to address the impression and imaging of a soundfield — taking 
> advantage of the native soundfield-kernel paradigm the Ambisonic technique 
> presents.
> 
> ATK has previously only been available for public release via the 
> SuperCollider real-time processing environment. Using the JSFX text-based 
> scripting language, the ATK has now been ported to plugins for Reaper. 
> Several of the plugins, in particular the transforms, offer intuitive 
> graphical user interfaces that help visualise the effect of the various 
> transforms. The plugins work with all operating system and processor versions 
> that Reaper supports. On the Windows platform the plugins can also be used 
> with other VST-hosts by means of the ReaJS ReaPlugs plugin [1].
> 
> The ATK toolset has been gathered, authored and developed by Joseph 
> Anderson over a number of years, and is now a collaborative open source 
> project. The port of ATK to Reaper has been done by Trond Lossius of BEK, 
> Bergen Center for Electronic Arts.
> 
> A paper on ATK for Reaper by Trond Lossius and Joseph Anderson will be 
> presented at the upcoming joint ICMC | SMC conference in Athens. If you are 
> attending the conference, please drop by during the poster session next 
> Thursday!
> 
> 
> Best,
> Trond
> 
> 
> 
> [1] http://reaper.fm/reaplugs/index.php



[Sursound] Ambisonic Toolkit (ATK) for Reaper - Public beta v.1.0.b3

2014-09-12 Thread Trond Lossius
We are pleased to announce that ATK for Reaper is now available as a public 
beta, and can be downloaded here:

http://www.ambisonictoolkit.net/wiki...page=Downloads

While there is a well-established workflow for stereo production in DAWs, 
options have been more limited when working with Ambisonics. The Ambisonic 
Toolkit (ATK) brings together a number of classic and novel tools for the 
artist working with Ambisonic surround sound. The toolset is intended to be 
both ergonomic and comprehensive, framed so that the user is enabled to ‘think 
Ambisonic’. By this, the ATK addresses the holistic problem of creatively 
controlling a complete soundfield, facilitating spatial composition beyond 
simple placement of sounds in a sound-scene. The artist is empowered to address 
the impression and imaging of a soundfield — taking advantage of the native 
soundfield-kernel paradigm the Ambisonic technique presents.

ATK has previously only been available for public release via the SuperCollider 
real-time processing environment. Using the JSFX text-based scripting language, 
the ATK has now been ported to plugins for Reaper. Several of the plugins, in 
particular the transforms, offer intuitive graphical user interfaces that 
help visualise the effect of the various transforms. The plugins work with all 
operating system and processor versions that Reaper supports. On the Windows 
platform the plugins can also be used with other VST-hosts by means of the 
ReaJS ReaPlugs plugin [1].

The ATK toolset has been gathered, authored and developed by Joseph 
Anderson over a number of years, and is now a collaborative open source 
project. The port of ATK to Reaper has been done by Trond Lossius of BEK, 
Bergen Center for Electronic Arts.

A paper on ATK for Reaper by Trond Lossius and Joseph Anderson will be 
presented at the upcoming joint ICMC | SMC conference in Athens. If you are 
attending the conference, please drop by during the poster session next 
Thursday!


Best,
Trond



[1] http://reaper.fm/reaplugs/index.php


Re: [Sursound] [OT] Unmixed multichannel spots

2014-08-21 Thread Trond Lossius
Hi,

The Norwegian artist Sondre Lerche has currently made all stems for “Bad 
law”, one of the tracks on his newest album, available for a remix competition:

https://soundcloud.com/sondrelerche/sets/bad-law-remix-stems/s-YrKOf

If you are looking for fewer tracks, there are some sample projects for mixing 
in Reaper. You’ll find the link to them in the user guide at 
http://reaper.fm/userguide.php 

Additionally, here’s a website that keeps track of ongoing remix competitions. 
If you are into endless reverbs, heavy compression and phat synths, this is 
the place; if not, you might have to scroll through quite a few of them to get 
to something more interesting:

http://www.remixcomps.com/

Cheers,
Trond



On 21 Aug 2014, at 14:34, Michael Chapman  wrote:

> 
> I have to give a demo of 'mixing' to a group of schoolchildren ... and
> don't have suitable raw material.
> 
> Does anyone know of any libraries of (say) four to eight-channel
> recordings of musical 'ensembles' (any style), that I could feed to a
> mixer desk and output as (e.g.) stereo ?
> (that is tracks from four or more spot mic's.)
> 
> Thanks,
> 
> Michael
> 
> 
> 
> 



[Sursound] Call for beta testers : Ambisonic Toolkit for Reaper

2014-08-21 Thread Trond Lossius
Hi,

Over the past year and a half I have gradually been porting the Ambisonic 
Toolkit by Joseph Anderson et al. [1] to a set of JS FX plugins for Reaper [2]. 
ATK for Reaper consists of a number of plugins for encoding, transforming and 
decoding FOA sound fields, and contains several functionalities that, to the 
best of my knowledge, are not available in existing plugin suites for FOA.

Joseph and I have written a paper on the subject that will be presented as a 
poster at the upcoming joint ICMC / SMC conference in Athens. I’d like to have 
an initial release of ATK for Reaper available for download by the time of the 
conference, but before letting it out into the wild, it would be useful to 
have some beta testing in order to ensure that there are no obvious 
malfunctions, omissions or bad interaction design decisions in the first 
release. Hence I’m looking for a few beta testers to give some initial 
feedback on the plugin suite.

I have made an installer for OSX that should make it easy to get up and going. 
For Windows, installation currently requires a bit of manual copying into the 
correct folders. I have not tested the plugins on Windows myself yet, but as 
it is all based on the JS FX scripting language, I would expect them to work 
the same there as on Mac. On Windows it would also be possible to test them 
with other VST hosts using ReaPlugs [3].

In particular I'm looking for feedback of the type:

- "This seems to work well"
- "This is just not working"
- "This is just plain wrong"
- "This is really nice and intuitive"
- "This is not intuitive at all"
- "I have a suggestion for an improvement: ..."
- etc.

Initial reactions to the plugins are particularly useful, as first reactions 
to user interfaces are often quite informative in terms of how the interaction 
design works (or not).

If you are (1) a regular user of Reaper, (2) have prior experience with 
first-order Ambisonics, and (3) have the interest and some available time over 
the next two weeks for testing and providing feedback, please contact me off 
list. If you have prior experience with ATK for SuperCollider, that would be 
useful as well. Please also let me know which OS you are using.

And if you have prior experience with building installers for Windows, that 
would be particularly useful; I might need some help when attempting to solve 
this one.

Best,
Trond



[1] http://www.ambisonictoolkit.net/
[2] http://reaper.fm/sdk/js/js.php
[3] http://reaper.fm/reaplugs/index.php


Re: [Sursound] POA/HOA vs 5.1

2012-04-03 Thread Trond Lossius

On Apr 1, 2012, at 9:51 PM, Jörn Nettingsmeier wrote:

> On 04/01/2012 09:05 PM, Augustine Leudar wrote:
>> again to anyone who says things like "ambisonics cant compete with 5.1
>> please bear in mind this is like saying "amplitude panning can't
>> compete with 5.1 - it doesnt make any sense at all. You mix your
>> tracks horizontally ,without elevation, using ambisonics plugins and
>> burn your ac3/dts file like any other surround mix.
> 
> higher order ambisonics can compete. first order cannot.

Using matrix-based decoding, I would agree with you about first order, but my 
experience is that decoding using Harpex leads to quite convincing and robust 
results.

Cheers,
Trond


Re: [Sursound] Can anyone help with my dissertation please?

2012-03-31 Thread Trond Lossius
In addition to everything else that has been stated in this thread already, I 
also believe we can think of consumer media technology as following two 
diverging strands, in particular from the 80s onwards. One is the high fidelity 
approach. High quality stereo reproduction systems, quadraphonic, 5.1 and 
ambisonics are all positioned somewhere along this trajectory.

The other is instead emphasizing mobility and an individualized media 
experience. The cassette, walkman, ghettoblaster, mp3 files, iPod, laptop, 
iPhone and streamed music are all parts of this tendency. The philosophy is 
that once a certain degree of audio quality has been reached, mobility is more 
important than further improvement of quality/fidelity.

With the iPod with a screen, iPhone, iPad and video on demand we can see the 
same tendency starting to unfold for moving image as well. My guess is that 
iTunes in the coming years will be more successful at distributing video 
content to the home market than BluRay disks, in spite of the latter having 
better quality by far. Cloud-based video content is more accessible than 
physical BluRay disks, which you have to head over to a shop or video rental 
place to fetch, and the quality of iTunes videos is, or eventually will get, 
"good enough". Similarly, I believe that the relative amount of video watched 
on iPads 
and other kinds of portable tablets will increase in the future as compared to 
HD TV. The Apple TV is already suggesting that in the future TV to a larger 
degree will be a supporting device for tablets, rather than remain the main 
device for controlling and watching TV and video content.

Cheers,
Trond


Re: [Sursound] ANN: Ambisonic Toolkit v1.0.0

2012-03-25 Thread Trond Lossius
On Mar 20, 2012, at 11:25 PM, Anthony Palomba wrote:

>> There's a good set of Max objects by ICST
>> 
>> http://www.icst.net/research/downloads/ambisonics-externals-for-maxmsp/
>> 
>> (snip)
>> 
>> There are also ambisonic and other spatial audio processors as part of 
>> Jamoma.


The Jamoma ambisonic modules indeed are wrappers around the ICST externals.

Cheers,
Trond


Re: [Sursound] mixing multichannel in Logic

2011-06-14 Thread Trond Lossius
Hi,

I believe you can set up a 7.1 surround sound project and work on 8-channel 
sound files in that. It should work as long as you are just doing simple 
mixing and are careful about which effects and plugins you apply.

But once you are working with non-standard multichannel configurations and 
using surround sound effects, you never know what the different channels are 
being interpreted as, or where they will be mapped...

Best,
Trond


On Jun 12, 2011, at 11:13 PM, Navid Navab wrote:

> Does anyone know a straightforward way to mix multichannel files (8 or
> 10 channels files) inside logic pro 9. I have a few already decoded and
> already panned multichannel files that I like to edite and then mix together
> into one file. So logic needs to be setup to read a few 8 channel files and
> and mix them into one 8 or 10 channel file.
> thanks,
> -Navid
> 
> -- 
> ___
> 
> Alkemist, Sound and Interaction Design
> navidnavab.net   alkemieatelier.com   514.432.6633
> 



Re: [Sursound] tetramic+ DR-680 question

2011-04-22 Thread Trond Lossius
Hi Justin,

we got an SPS200 at BEK last fall, and are really pleased with it.

As for HD recorders, we initially tried using a 4-track Edirol with it, but 
that proved difficult, as it was not possible to precisely set the input gains 
for all four channels to the same level. It was also vulnerable to having gain 
levels altered accidentally when recording while carrying the equipment. Maybe 
this is improved in more recent versions of the Edirol recorders. The Tascam 
DR-680 has worked really well for us, and with phantom powering from the 
Tascam, all of the equipment, with a Rycote shield for the mic, easily fits in 
a rucksack, making a lightweight solution for field recordings.

Best,
Trond


On Apr 21, 2011, at 6:18 PM, Justin Bennett wrote:

> Dear all,
> 
> I'm trying to persuade the institution where I sometimes work to
> shell out for a B-format recording kit. The budget doesn't run to
> an st350 + sounddevices, so I'm looking around for alternative
> setups. They have 4 track edirol's but we've already noticed
> how noisy their mic amps can be.
> 
> Has anyone been working with the tascam DR-680 with a
> TetraMic in a quiet field recording situation? anyone know
> how it compares to the Edirol in that context?
> 
> a Soundfield SPS 200 could also be a possibility. Again -
> the Edirol doesn't allow you to gang the gains and the
> new soundfield micamp only seems to work on 240v(!) so
> we'd probably need another recorded anyway.
> 
> any tips greatly appreciated,
> 
> best wishes, Justin
> 
> 
> 
> 
> Justin Bennett
> j...@bmbcon.demon.nl
> http://this.is/justin
> http://this.is/bmbcon
> 
> NEW RELEASES AND FREE DOWNLOADS FROM http://spore.soundscaper.com
> 
> 
> 



Re: [Sursound] Exciting news anyone?

2011-04-01 Thread Trond Lossius
On Apr 1, 2011, at 11:37 AM, Svein Berge wrote:

> 5. counter-productively expensive (compared to Reaper or Plogue)
> 
> As you all know, the marginal cost of software is close to zero, everything 
> is in the development. So, when you compare a specialized product to a 
> mass-market product, it makes no logical sense to compare their feature set 
> without at the same time dividing by the potential sales numbers. It should 
> be noted that I am not paid by anyone to do this work, so the business model 
> here is that the users must pay for all of the development. Since this plugin 
> in practice requires the use of a soundfield-type microphone, which is not 
> really a mass-market product, I don't expect to recoup any reasonable wage 
> for the time spent on it. From a customer's point of view, the price should 
> be considered in relation to the cost of all the other gear involved - the 
> microphone, recorder, computer, daw etc and the relative value added by the 
> plugin. I think it's a good deal, but don't expect much sympathy from 
> academia.

I don't know enough about software sales and pricing strategies to have any 
clear opinion on what the right price is to eventually cover the development 
costs. The reasoning above seems sensible, but it assumes that the same 
person/organization owns and uses the mic, recording hardware and 
equipment/software for playback. That might not always be the case.

I'm working at BEK, a media lab for artists in Bergen (NO). We have a 
SoundField mic, and it's getting increasingly popular among local artists. 
Depending on the project, they often get to borrow it and the hard disk 
recorder for free (or almost free). For many of them it will be interesting to 
be able to use Harpex for decoding later on. While BEK itself can and probably 
will get a plug-in license, I'm less sure how many of the artists we work with 
will be able to afford it. On the other hand, it might well be that they can 
do decoding when required at the BEK studio, and thus won't depend on having 
their own license.

Just my 5 cents. I've set aside a full day at the studio next week to try it 
out. Natasha Barrett has been very enthusiastic about Harpex, so I'm looking 
forward to playing with it.

Best,
Trond


Re: [Sursound] Another plugin inquiry...

2010-12-02 Thread Trond Lossius
I would imagine that you could build this in a pretty straightforward way 
using e.g. Max or Bidule, by combining one or more 4-channel delays (same 
delay time on all four channels) with a plugin for rotating the B-format 
signal, also adding feedback to the system.
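The combination described above (a ganged four-channel delay feeding back 
through a B-format rotation) can be sketched offline in a few lines of code. 
This is a hypothetical minimal sketch in Python rather than a Max/Bidule 
patch; it assumes plain first-order WXYZ material, a single feedback tap, and 
all function names are made up for illustration:

```python
import math

def rotate_bformat(frame, theta):
    # Rotate a first-order WXYZ frame by theta radians about the
    # vertical axis: W (omni) and Z (height) are unchanged, while
    # X and Y rotate like an ordinary 2-D vector.
    w, x, y, z = frame
    return [w,
            x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z]

def rotating_echo(sig, delay, theta, fb=0.5):
    # Same delay time on all four channels; every pass through the
    # feedback loop rotates the sound field by theta and scales it
    # by fb.  `sig` is a list of [W, X, Y, Z] sample frames.
    out = [list(frame) for frame in sig]
    for n in range(delay, len(out)):
        echo = rotate_bformat(out[n - delay], theta)
        for c in range(4):
            out[n][c] += fb * echo[c]
    return out
```

With theta set to 90 degrees, an impulse encoded in X comes back from the side 
after one delay, from behind after two, and so on, each repeat quieter than 
the last.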


Cheers,
Trond


On Dec 2, 2010, at 1:00 AM, George Kierstein wrote:

> I was thinking of one that would bounce point sources placed in a sound
> field around with a given timing and relative placement -- akin to stereo
> echo-delay.
> 
> I've found quite a few nice plugins that will place/pan a source into a
> conceptual field, but once you have sent it into the encoding chain the only
> way I can currently see to emulate a delay would be rather manual.
> 
> On Wed, Dec 1, 2010 at 6:31 PM, Paul Hodges wrote:
> 
>> --On 01 December 2010 18:20 -0500 George Kierstein
>>  wrote:
>> 
>>> ambisonic delay plugin
>> 
>> What would make a delay ambisonic?
>> 
>> Paul
>> 
>> --
>> Paul Hodges
>> 
>> 



Re: [Sursound] Universal Ambisonic 0.98

2010-11-24 Thread Trond Lossius
Hi,

> As the dust settles, I think it is clear that UA should look more like
> what it is slowly morphing into. Spec attached, summary below.
> Comments and suggestions welcome.
> 
> (...)
> 
> URL: 
> 


I don't feel qualified to respond to the decisions inherent in the proposal, 
but in terms of the layout of the document itself, I would like to suggest a 
few improvements that might render it more readable and self-contained, in 
particular helping readers/developers with limited prior knowledge of 
ambisonics to understand it correctly and in full:

- The term used for the normalisation scheme, N3D, should either be further 
explained or a reference provided.
- The same goes for "Gerzon Ambisonics".
- In relation to the main table, the meaning of the angles theta and delta 
should be defined.
- I would consider adding more parentheses to the equations to ensure that 
they can't be misunderstood.

E.g. is the equation for X20 to be read as:

cos(2*theta) * sqrt(15/4) * pow(cos(delta), 2)

or

sqrt( cos(2*theta) * (15/4) ) * pow(cos(delta), 2)

Would it be an idea to provide the equations in pseudo-code format (C/C++), so 
that programmers could copy and paste them directly into their code?
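As an illustration of the point about parentheses, here is one fully 
parenthesised reading of the X20 equation in the kind of copy-and-paste pseudo 
code suggested above. Note that this is my interpretation of the draft's 
intent, not the spec's definitive reading:

```python
import math

def x20(theta, delta):
    # One unambiguous, fully parenthesised reading of the draft's
    # X20 equation:  cos(2*theta) * sqrt(15/4) * cos(delta)^2.
    # This interpretation matches the usual N3D-style second-order
    # component, but it is an assumption, not a statement of what
    # the draft spec actually means.
    return math.cos(2.0 * theta) * math.sqrt(15.0 / 4.0) * math.cos(delta) ** 2
```

Written this way, neither reader nor compiler can mistake which terms the 
square root and the squared cosine apply to.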

I am also wondering if the conversions to B-format/FMH would be clearer if 
given as full equations, e.g.:

X = X1
R = X20 * sqrt(5/4)



And finally, once the ambisonic file has been stored in a correct way, it 
needs to be decoded again. Would it be an idea to provide equations for this 
as well? As a developer, I would expect the document not only to provide 
information on encoding and storing, but also on decoding the format.


Best.
Trond



Re: [Sursound] DBAP, VDP, VBAP, and Ambisonics

2010-11-15 Thread Trond Lossius
>> VDP and DBAP are based on the same idea, but DBAP as presented in the ICMC 
>> 2009 paper
>> 
>> http://www.trondlossius.no/system/fileattachments/30/original/icmc2009-dbap.pdf
>> 
>> has a number of additional features. The important difference of DBAP and 
>> ViMiC as compared to ambisonics and VBAP is that there are no restrictions 
>> on the positioning of loudspeakers or listener. Loudspeakers are not 
>> restricted to a ring/sphere surrounding the listener, but could e.g. be laid 
>> out as a regular or irregular grid in the space. This is what makes it 
>> useful for installations in one or more spaces, such as art galleries and 
>> museums, where rings and spheres of speakers might be impractical and the 
>> audience is free and expected to move about.
> 
> I really don't understand  "...ambisonics and VBAP is that there are no 
> restrictions on the positioning of loudspeakers or listener.", at least as 
> regards to VBAP. We are currently - and have been for over two years now - 
> using VBAP in The Morning Line sculpture 
> (http://www.worldarchitecturenews.com/index.php?fuseaction=wanappln.projectview&upload_id=14039)
>  and the reason we used it rather than Ambisonics was because it could cope 
> with extremely irregular arrays of speakers. Comments?


Thanks for the link, that seems to be a very interesting project!

It's difficult to figure out the loudspeaker setup in this sonic pavilion from 
the image, but reading the AES paper:

http://www.aes.org/e-lib/browse.cfm?elib=15370

the 41 speakers seem to be subdivided into 6 "rooms". Each of these rooms 
seems to have its speakers pointing towards the sweet spot of the room 
(appendix A). Did these rooms coincide architectonically with the positions 
that the audience was likely to be standing at in order to experience the work?

As the azimuth and elevation of each speaker relative to the sweet spot of the 
room seems to be irregular, I think it makes a lot of sense to use VBAP rather 
than ambisonics.

If you want a source to move all around the sculpture, how do you make it move 
from one room to the next? Do you cross-fade from one sphere to the next as it 
moves? Or was "room 7", the one used for spatialization into the structure as 
a whole, using a subset of the speakers at the margins of the sculpture to 
create yet another sphere?

With DBAP you wouldn't have to subdivide the array in order to enforce a 
"sphere surrounding a sweet spot" way of thinking onto the sculpture. You 
could just describe the positions of all of the speakers, and then freely move 
the sources around the sculpture. The theory of DBAP ideally assumes 
loudspeakers to radiate the same way in all directions, and I personally know 
very few speakers that come close to doing so, so the sounding results would 
be influenced by the direction of the loudspeakers. This would need to be 
accounted for in the compositional process, but could spatially and 
sculpturally be used in meaningful ways, leaving impressions of sound sources 
being directed towards or away from you depending on your and the speakers' 
positions.

DBAP was in fact first developed for an installation with a loudspeaker setup 
somewhat similar to that of The Morning Line, although in 2D instead of 
3D. I was working on a sound installation for Galleri KiT, the gallery space of 
the art academy in Trondheim, Norway. This gallery consists of a number of 
semi-connected rooms. I wanted to have a total of 16 loudspeakers distributed 
along the walls of the various rooms, and then have sound sources drifting 
around the space, from one room to the next.

I first tried setting up a set of VBAP rooms, with a mechanism for moving from 
one room/VBAP system to the next as sources drifted. However, I did not manage 
to find a way of doing so that felt smooth and convincing. Sound seemed to be 
glued to the walls of the rooms, sneaking from the wall of one room to the 
next in a way that I felt to be abrupt and unconvincing for what I was after.

DBAP was developed late the night before the opening, and helped create a much 
smoother drift. As the speakers were positioned along the walls, their 
direction was not experienced as problematic; it rather added to the feeling 
that the sound had moved on to the next room.

I hope this helps clarifying the difference.

Another way of explaining would be the following:

Imagine that you have a regular 3D matrix/grid of 4x4x4 speakers. The grid 
could be subdivided into 3x3x3 rooms, and ambisonic decoding for a cube of 8 
speakers used for each of the rooms. You would then need to find a way of 
switching or crossfading from one cube to the next as the source moves about. 
Alternatively, using DBAP, you would deal with all 64 loudspeakers in one go. 
DBAP would simply feed the highest amount of gain to the speakers closest to 
the virtual source, and next to none to the ones that are far away, all while 
keeping the total intensity across all the loudspeakers constant.
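The gain law described above can be illustrated in a few lines of Python. This is a hypothetical sketch based on the description in the ICMC 2009 paper (a 1/d^a rolloff with intensity normalisation and a spatial blur term), not the actual Jamoma implementation; function names and default values are my own:

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0, blur=0.1):
    """Distance-based amplitude panning gains (sketch).

    Gains fall off with distance as 1/d^a, where the exponent a is
    derived from the rolloff in dB per doubling of distance, and are
    normalised so the summed intensity (sum of squared gains) is 1.
    """
    a = rolloff_db / (20.0 * math.log10(2.0))  # ~1.0 for 6 dB per doubling
    dists = [
        math.sqrt(sum((s - p) ** 2 for s, p in zip(source, spk)) + blur ** 2)
        for spk in speakers
    ]
    # k normalises total intensity: sum over i of (k / d_i^a)^2 == 1
    k = 1.0 / math.sqrt(sum(1.0 / d ** (2 * a) for d in dists))
    return [k / d ** a for d in dists]

# Four speakers at the corners of a square, source close to the first one
speakers = [(0, 0), (4, 0), (4, 4), (0, 4)]
gains = dbap_gains((0.5, 0.5), speakers)
```

The blur term keeps the gain finite when the source coincides with a speaker position; the nearest speaker still receives the most gain, while the total intensity stays constant as the source moves.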

Re: [Sursound] DBAP, VDP, VBAP, and Ambisonics

2010-11-15 Thread Trond Lossius
>> Never heard of AEP.  Can anybody suggest a
>> reference for this?
> 
> ICMC 2007, Martin Neukom and Jan Schacher.
> 
> Basically it's the equivalent of adding an AMB decoder
> to each panner instead of keeping this factored out to
> the playback environment.
> While it's a valid approach, I can't see *why* anyone
> would want to do this. You can always use an AMB decoder
> on the mixed signal.

The paper is available here:

http://www.icst.net/research/publications/single-view/?tx_ttnews%5Btt_news%5D=165&tx_ttnews%5BbackPid%5D=77&cHash=01fce057e7ac12d78732aaf27f0a2193

AEP does in-phase decoding, and due to some math magic this enables one to 
dynamically change the encoding order for sources, also permitting fractional 
orders like 4.36. Furthermore the paper shows that fewer speakers are required 
per order, so a ring of 20 speakers is able to reproduce up to approx. 60th 
order.

As the source will become increasingly directional with increasing order, this 
can be thought of as pen size in e.g. Photoshop: you can decide to have a 
source encoded using a low order (diffuse location) and then dynamically 
increase the order, making the direction of the source increasingly 
articulated. The artistic potential of this is fairly attractive.
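The in-phase panning function underlying AEP can be sketched as follows. This is a hypothetical illustration after Neukom and Schacher's idea, not code from the paper, and it omits gain normalisation:

```python
import math

def aep_gains(source_azimuth, speaker_azimuths, order):
    """Ambisonics Equivalent Panning gains (sketch).

    Each speaker gain follows the in-phase panning function
    ((1 + cos(angle between source and speaker)) / 2) ** order,
    which stays non-negative for any (possibly fractional) order.
    """
    return [
        ((1.0 + math.cos(source_azimuth - az)) / 2.0) ** order
        for az in speaker_azimuths
    ]

ring = [2 * math.pi * i / 8 for i in range(8)]  # 8 speakers in a ring
low = aep_gains(0.0, ring, order=1.0)
high = aep_gains(0.0, ring, order=4.36)  # fractional orders are allowed
```

Raising the order narrows the gain lobe around the source direction, which is exactly the "pen size" effect described above.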


Best,
Trond
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] DBAP, VDP, VBAP, and Ambisonics

2010-11-14 Thread Trond Lossius
> (may I add that I would not consider WFS a panning technique?)

I guess ViMiC cannot be categorised as a panning technique either, as it also 
uses delays. For me the more interesting questions are: what techniques are 
available, what are their assumptions, how can I use them, and how/when do 
they work?

Jan Schacher once expressed the possibility of using different spatial audio 
techniques as having different brushes when painting, all with their particular 
characteristics and potentials. I find that a meaningful analogy.

>>> This brings the number of 3-D audio panning
>>> systems up to four:
>>> 
>>>   DBAP -- Distance-Based Amplitude Panning
>>>   VDP -- Vector Distance Panning
>>>   VBAP -- Vector-Based Amplitude Panning
>>>   Ambisonics
>> 
>> And we could add:
>> 
>> WFS - Wave Field Synthesis
>> ViMiC - Virtual Microphone Technique
>> AEP -  Ambisonics Equivalent Panning

Another (non-panning) technique I forgot the other day is

DirAC - Directional Audio Coding

http://www.acoustics.hut.fi/research/cat/DirAC/


Best,
Trond


Re: [Sursound] DBAP, VDP, VBAP, and Ambisonics

2010-11-13 Thread Trond Lossius
Hi Richard,

thanks for your interest and investigations, a more thorough examination of 
DBAP would be very welcome. It has been used by a number of composers and sound 
artists internationally, and the reports I get back have been positive, but 
apart from the work at Queen Mary University of London, no systematic 
examination has been done so far. 

> I've just done a quick analysis of DBAP as documented in the ICMC2009 paper.
> Two comments:
> * Equation (6) seems to be missing a square-root (not a big deal).

Thanks, this was pointed out by John Reiss as well, and I'll have to add an 
erratum to the pdf when I get the time.

> * The outputs are very unfamiliar! I've attached (hopefully) a
> map-of-the-world plot of the output of one speaker in a cube layout - which
> has a curious set of holes. The plots are a long way from what's produced by
> VBAP, Ambisonics etc (e.g. there doesn't seem to be any concern with energy
> vectors etc) so perceptual studies definitely seem like the next step...

The attachment was unfortunately scrubbed from the list and I didn't manage to 
download it. Could you either upload it somewhere and post the link, or mail 
it off-list so that I can upload it and post the link?

Generally I wouldn't use DBAP in situations where VBAP and ambisonics work 
well; where the audience is confined to a small area in the middle (the sweet 
spot) surrounded by a ring/sphere/cube of speakers. DBAP was designed for 
situations that defy the idea of the sweet spot, where it is not possible or 
desirable to position loudspeakers in a surrounding fashion.

Another limitation of DBAP that is important to be aware of is that it starts 
to break down if the source is moved outside the convex hull defined by the 
speakers. With a source at infinite distance, the levels will be the same for 
all speakers (because the relative difference in distance becomes vanishingly 
small), and perceived localization will instead be dominated by the proximity 
effect.
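This breakdown outside the convex hull can be seen numerically. With a plain 1/d^a gain law (a hypothetical two-speaker sketch, not the full DBAP algorithm), the gain ratio between the nearest and farthest speaker approaches 1 as the source recedes:

```python
import math

def dbap_gain_ratio(source, spk_near, spk_far, a=1.0):
    """Ratio of 1/d^a gains for two speakers as heard from `source`."""
    return (math.dist(source, spk_far) / math.dist(source, spk_near)) ** a

near, far = (0.0, 0.0), (4.0, 0.0)
close_ratio = dbap_gain_ratio((1.0, 0.0), near, far)        # source between them
distant_ratio = dbap_gain_ratio((1.0, -1000.0), near, far)  # source far outside
```

Close to the speakers the nearest one dominates by a factor of 3; from far away the two gains are nearly identical, so level differences no longer carry any directional information.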

So if you discover problems in a ring/sphere/cube layout, I won't be surprised. 
I have tested with a ring of 8 speakers a few times, and then moving sources 
around along the ring. My experience has been that you do get an idea of the 
location of the source, but it's less defined than e.g. 3rd order ambisonics or 
VBAP.

Analysis, perceptual studies and evaluation tests would all be very interesting 
to get done. If so, it would be interesting to do that not only for standard 
rigs where the other techniques excel, but also off-centre and in 
non-ring/sphere layouts, as this is what DBAP was made for. Unfortunately this 
is beyond my capacity; I don't have the institutional backing required for 
resources and for ensuring proper scientific rigour.

To get an idea of what kind of loudspeaker setups I tend to work on in my sound 
installations, you could e.g. view the documentation of a work I did at a 
fortress near Bergen in 2008. For this I used 32 speakers distributed along a 
line all the way down the corridor. I consider this my longest composition to 
date, lasting for close to 200 meters...;-)

For this I used the 1D version of DBAP. The documentation is available here.

http://trondlossius.no/works/18-imploding-spaces-2008

More bizarre speaker layouts can be seen here:

http://trondlossius.no/works/30-white-out-2005

I guess this might serve to illustrate that sound artists sometimes have needs 
in terms of spatialisation techniques that differ widely from the ones the 
Audio Engineering Society mainly seems to focus on, which appear geared 
towards concerts, cinema and home cinema.


Best,
Trond


Re: [Sursound] Audition goes Mac. And fully multi-channel!

2010-11-13 Thread Trond Lossius
> The first public beta of Audition for the Mac is now available - see the 
> press release with link to beta area here:
> 
> 
> 
> So Adobe has rewritten Audition to make it cross-platform.  Apparently one 
> thing they've done on the way is to make it fully multi-channel, and it has 
> been seen to work on 80-channel files. Read here:
> 
> 
> 
> I don't have a Mac to see if this is already exposed - but I discovered last 
> night that one of my sons has been testing it for Adobe, so I may try to fix 
> up a visit to his studio to investigate (he doesn't have any kind of surround 
> facilities, though, that I'm aware of).  Meanwhile, I guess some Macheads on 
> this list might be tempted to have a look...

Thanks, interesting news!

It's probably a long shot, but it would be fantastic if the Aurora plugins 
could be ported to Mac.

http://www.aurora-plugins.com/

Best,
Trond


Re: [Sursound] DBAP, VDP, VBAP, and Ambisonics

2010-11-13 Thread Trond Lossius
> This brings the number of 3-D audio panning
> systems up to four:
> 
>DBAP -- Distance-Based Amplitude Panning
>VDP -- Vector Distance Panning
>VBAP -- Vector-Based Amplitude Panning
>Ambisonics

And we could add:

WFS - Wave Field Synthesis
ViMiC - Virtual Microphone Technique
AEP -  Ambisonics Equivalent Panning

> Are these all distinct techniques, or are some
> of them different names for the same
> technique?

VDP and DBAP are based on the same idea, but DBAP as presented in the ICMC 
2009 paper

http://www.trondlossius.no/system/fileattachments/30/original/icmc2009-dbap.pdf

has a number of additional features. The important difference between DBAP and 
ViMiC on the one hand, and ambisonics and VBAP on the other, is that there are 
no restrictions on the positioning of loudspeakers or listener. Loudspeakers 
are not restricted to a ring/sphere surrounding the listener, but could e.g. 
be laid out as a regular or irregular grid in the space. This is what makes 
them useful for installations in one or more spaces, such as art galleries and 
museums, where rings and spheres of speakers might be impractical and the 
audience is free and expected to move about.

An evaluation of DBAP in listening tests as compared to VBAP and Ambisonics is 
reported in this paper:

D. Kostadinov, J. D. Reiss and V. Mladenov, "Evaluation of distance based 
amplitude panning for spatial audio", Proceedings of the IEEE International 
Conference on Acoustics, Speech, and Signal (ICASSP) , Dallas, March 2010

The DSP processing of ViMiC is more complex than that of matrix-based 
techniques such as DBAP, VBAP and ambisonics, as it uses filters and varying 
delays to emulate the direction of sources and speakers (= virtual 
microphones), early reflections and room damping. With proper tuning of 
parameters, ViMiC can be used to recreate several other spatialisation 
techniques, including DBAP. ViMiC is discussed extensively in the recent PhD 
thesis by Nils Peters:

http://www.music.mcgill.ca/~nils/PetersThesis-web.pdf
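To give a flavour of the virtual-microphone idea at the core of ViMiC, the standard first-order directivity pattern can be sketched as below. This is a generic textbook formula, not code from the ViMiC implementation, and it leaves out the delays, filters and early reflections mentioned above:

```python
import math

def virtual_mic_gain(pattern, angle):
    """First-order microphone directivity: pattern 0.0 is omni,
    0.5 is cardioid, 1.0 is figure-of-eight. `angle` is the angle
    in radians between the mic axis and the incoming sound."""
    return (1.0 - pattern) + pattern * math.cos(angle)

on_axis = virtual_mic_gain(0.5, 0.0)      # cardioid, sound from the front
rear = virtual_mic_gain(0.5, math.pi)     # cardioid, sound from behind
```

Each loudspeaker feed is then derived from such a virtual microphone aimed into the virtual room, which is why the choice of pattern shapes how sharply sources localise.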

> Also, there is a Wikipedia article on
> Ambisonics.  Could I encourage people who
> are familiar with the other techniques to create
> Wikipedia articles on them.

Yes, that would be useful. I'll see what I can do early next year. Does 
Wikipedia have any etiquette regarding whether you can (or cannot) write about 
subject areas that you have been a major contributor to? I know that Wikipedia 
articles should not contain original research.



Best,
Trond


Re: [Sursound] Largest ever indoor multichannel sound art installation in the world at Eden project, Cornwall, UK

2010-11-11 Thread Trond Lossius
Hi,

> I am writing to let you know about the largest multichannel sound art
> installation in the world at the Eden project. For those unfamiliar with the
> Eden Project it is the largest indoor tropical conservatory in the world
> covering several acres and presently has lots of speakers hidden in the
> bushes !  Hopefully this event is something that would interest you as well
> 
> http://www.edenproject.com/come-and-visit/whats-on/heart-of-darkness.php

Sounds exciting!

> hopefully somebody from this list will be able to get down and check it out,
> Although the installation is laid out with a very unusual speaker
> configuration which included height information as well as surround sound I
> have discovered several things during my application of large sound
> installations. Discrete sound sources (ie one speaker dedicated to one
> sound) work much better than trying to put the sound in a surround sound
> image using a surround sound panner in Nuendo or whatever. Leaves act as
> fantastic dispersers of sound, and the sound of, say, insects, sounds much
> more realistic if the speaker is hidden in the bushes.

Depending on how you are rendering and distributing material, DBAP 
(distance-based amplitude panning) might be of interest as well. It's so far 
available for MaxMSP as part of Jamoma, as part of Ircam Spatialisateur, and 
included in the Flux Ircam Spat plugin:

http://www.jamoma.org/
http://www.jamoma.org/papers/icmc2009-dbap.pdf
http://www.fluxhome.com/products/plug_ins/ircam_spat

DBAP doesn't make any assumptions about the positioning of loudspeakers or 
listener, and was developed for installation purposes.

> Although the installation is ambisonically laid out many of the recordings
> were only taken with surround sound microphones - I am shortly to return to
> the amazon with new equipment and would welcome any advice on a good
> multichannel recorder and low noise mics to take with me (I reckon I need at
> least six inputs for this - Sound Devices look good but are so expensive -
> what are people's thoughts on the Tascam 680?) any suggestions welcome. I
> also look forward to discussing the best way of recording ambisonically,

So far I'm quite happy with the Tascam 680. It's way better to use with the 
Soundfield 250 than the Edirol.


Cheers,
Trond