Re: [Sursound] Subject: Re: Why do you need to decode ambisonic/b format signals ?

2011-01-25 Thread Eero Aro

Augustine Leudar wrote:

Why the need for the W coordinate?


I am not a mathematician or a scientist. A sound designer's reply:
W is a "reference" signal. For example, when decoding:
+ W + X is the "front" direction (W and X in phase)
+ W - X is the "rear" direction (X phase-reversed relative to W)

Try to think of first-order B-Format (WXYZ) as a "three-dimensional
MS-stereo signal". That helped me in the beginning.
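
If it helps to see the arithmetic, here is a minimal first-order sketch in
Python (illustrative only; it assumes the traditional convention where W
carries the signal at -3 dB, and the function names are made up for the
example):

import math

def encode_bformat(sample, azimuth, elevation=0.0):
    # Traditional first-order encode: W is the omnidirectional "reference"
    # at -3 dB, X/Y/Z are figure-of-eight components along each axis.
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

def virtual_mic(w, x, y, z, azimuth, elevation=0.0):
    # Point a virtual microphone in a direction: the front feed is
    # essentially W + X, the rear feed W - X, as described above.
    return 0.5 * (math.sqrt(2.0) * w
                  + x * math.cos(azimuth) * math.cos(elevation)
                  + y * math.sin(azimuth) * math.cos(elevation)
                  + z * math.sin(elevation))

w, x, y, z = encode_bformat(1.0, azimuth=0.0)     # a source dead ahead
print(virtual_mic(w, x, y, z, 0.0))               # front mic: 1.0
print(virtual_mic(w, x, y, z, math.pi))           # rear mic: 0.0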


the soundscapes I am working on are large jungle
soundscapes in a large indoor tropical conservatory


Have you looked at Timax?
http://www.outboard.co.uk/pages/timax.htm


Where ambisonics could help in the installation is the insect
noises


Yep. A single mono sound is best localized for all listeners when you
play it back through one speaker only. It is a good idea to route that
kind of signal directly to the appropriate speaker. A combination of
different techniques is probably the best way to do it. Soundfields with
moving phantom images, ambiences and spatial imagery are easy
to control with B-Format Ambisonics. As somebody already said, it is
possible to rotate, tilt and tumble a full 3D soundfield, and much more.
You can, for example, "zoom in" on a certain direction.
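
For example, rotating the whole soundfield around the vertical axis only
touches X and Y, while W and Z pass through unchanged. A minimal sketch in
Python (illustrative only, not taken from any particular tool):

import math

def rotate(w, x, y, z, angle):
    # Rotate a first-order B-format frame by `angle` radians around the
    # vertical axis; W (pressure) and Z (height) pass through unchanged.
    xr = x * math.cos(angle) - y * math.sin(angle)
    yr = x * math.sin(angle) + y * math.cos(angle)
    return w, xr, yr, z

Tilt and tumble apply the same 2x2 rotation to the (Y, Z) and (X, Z) pairs
respectively.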


can the decoding be done with software and then burnt to wav files ?


Andrew gave you good pointers. Bruce Wiggins has described workflows
on his page. You might also like to check these sites:

Dave Malham:
http://www.dmalham.freeserve.co.uk/vst_ambisonics.html
http://www.york.ac.uk/inst/mustech/3d_audio/vst/welcome.html

Visual Virtual Mic:
http://mcgriffy.com/audio/ambisonic/vvmic/

Aristotel Digenis:
http://www.digenis.co.uk/

Acousmodules. No Ambisonics in there, but you might find other useful tools:
http://acousmodules.free.fr/acousmodules_en.htm

Eero


Re: [Sursound] Subject: Re: Why do you need to decode ambisonic/b format signals ?

2011-01-25 Thread Andrew Levine
Hi Augustine,

On 26.01.2011, at 06:33, Augustine Leudar wrote:
> I am now utterly intrigued by ambisonics and can't wait to try it out;
> the more I read, the more I get sucked in

I know what you're talking about :-)

> Gaps in the image are plugged with other speakers with, say, cicadas on
> them - despite the doubts expressed here it has also been effective, perhaps
> because insect noises are high frequency and the leaves on the bushes and
> trees disperse the sound, filling out the sound field. Generally the effect
> is pretty similar to being in the rainforest - except you don't get bitten.

I didn't know cicadas bit ;-)

> What I would like to know is: can the decoding be done with software and
> then burnt to wav files?

Sure. This is what some call G-format.
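
A rough sketch of the idea only - decode offline, then write the fixed
speaker feeds as a WAV. It uses numpy and the soundfile package; the file
names, the assumed W/X/Y/Z channel order and the square layout are just for
the example, and a real decoder would also apply shelf filters and
near-field compensation:

import numpy as np
import soundfile as sf

# Read a first-order B-format recording; channel order W, X, Y, Z assumed.
bfmt, sr = sf.read("scene_bformat.wav")      # shape: (frames, 4)
w, x, y, _z = bfmt.T

# Naive horizontal decode to a square of speakers at 45, 135, -135, -45
# degrees (front-left, back-left, back-right, front-right), each a virtual mic.
angles = np.radians([45.0, 135.0, -135.0, -45.0])
feeds = np.stack(
    [0.5 * (np.sqrt(2.0) * w + np.cos(a) * x + np.sin(a) * y) for a in angles],
    axis=1)

sf.write("scene_gformat_square.wav", feeds, sr)  # fixed speaker feeds = G-format

The resulting multichannel file is just fixed speaker feeds, so it plays
back without any decoder - which is exactly the point of G-format.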

> Are there any ambisonic panners that are VST compatible (I mainly use
> Nuendo and Max/MSP)? Could I design a horizontal surround sound DVD using
> ambisonic software for panning and localisation, and then burn 6 wav files
> and release it on a 5.1 DVD which could be played on a normal home system?

So far I have worked with plugins by Bruce Wiggins 
(http://www.brucewiggins.co.uk/?page_id=78) and Daniel Courville 
(http://www.radio.uqam.ca/ambisonic/b2x.html).

Regards,

Andrew


Re: [Sursound] Subject: Re: Why do you need to decode ambisonic/b format signals ?

2011-01-25 Thread Danny McCarty
Nice response, Gus, and certainly more gracious than I would have been.
Danny



On Jan 25, 2011, at 9:33 PM, Augustine Leudar wrote:

> [...]

Danny McCarty
Monolith Media, Inc.
4183 Summi

[Sursound] Subject: Re: Why do you need to decode ambisonic/b format signals ?

2011-01-25 Thread Augustine Leudar
Message: 6
Date: Sat, 22 Jan 2011 23:56:42 +0100
From: f...@kokkinizita.net
Subject: Re: [Sursound] Why do you need to decode ambisonic/b format


> First, don't try and send HTML to this list, as you can see it will
> be removed.

Sorry - I don't know what you mean; as far as I know, I haven't sent any
HTML to this list (at least not intentionally). I assume you don't mean
links, as there was no link in my original message, and there are also
many links in the messages on this list.


> Your question reveals that you have not even started to study and
> understand Ambisonics theory - the answer would be quite evident
> in the other case

Obviously - or I wouldn't be asking how it works. I do however have a
lot of experience creating 3D soundscapes (in fact it's my job) and have
spent a reasonable amount of time studying a wide range of psychoacoustic
topics and other areas pertaining to sound art. Now that I've read a bit
more, it is certainly something I will be pursuing.


> You could as well ask an engineer why he needs
> complex numbers while you can do your bookkeeping without.

Engineers? Bookkeeping? I think I know what you mean.

> Hoping you will eventually have a go at it,

I will absolutely be having a go at it - the replies I received here have
led me to a flurry of reading, and the result is that I am now utterly
intrigued by ambisonics and can't wait to try it out; the more I read, the
more I get sucked in. I even found myself trying to unravel the maths last
night - that might take a while. I only got so far, but as I understand it,
it uses sound pressure levels and phase differences to plot x, y, z
spherical coordinates, which are then reconstituted in the decoding. Out of
curiosity, why the need for the W coordinate - can't the sound pressure
level be gleaned from the X, Y and Z?

At the moment the soundscapes I am working on are large jungle soundscapes
in a large indoor tropical conservatory (covering several acres), perhaps
with twenty metres between speakers. Because of the problems of amplitude
panning and the sheer size of the installations, sounds are often localised
by using real-world object analogues (i.e. if a monkey is meant to sound
like it comes from behind a certain tree, there is a speaker with a monkey
noise behind that tree); a thunderstorm is represented by a stereo pair
high on a hillside - we even had neighbours thinking there was a real
thunderstorm happening, and it does sound, well, realistic (don't take my
word for it, you can read the public response here:
http://augustineleudar.110mb.com/Hd/Hod.html ). This type of localisation
has proved extremely effective, and I don't think that any system, no
matter how clever at fooling the human ear, can improve upon a sound
actually coming from the direction it is meant to (though recording
ambisonically probably would).

Where ambisonics could help in the installation is the insect noises - at
the moment there are large four-speaker areas with four-mic recorded insect
noises. Gaps in the image are plugged with other speakers with, say,
cicadas on them - despite the doubts expressed here it has also been
effective, perhaps because insect noises are high frequency and the leaves
on the bushes and trees disperse the sound, filling out the sound field.
Generally the effect is pretty similar to being in the rainforest - except
you don't get bitten. However, if what I have read about ambisonics is
true, it would make it sound even better, and there is always room for
improvement.

I am currently trying to translate some of these sound installations to a
format that can be listened to at home - I have to admit 5.1 is a bit
frustrating, so ambisonics might hold the key. What I would like to know
is: can the decoding be done with software and then burnt to wav files?
There is no way a physical decoder could be in the biome - we generally
have to throw speakers away or sell them on eBay after a few uses because
of the ants and humidity (the wav players are in sealed plastic boxes).
Are there any ambisonic panners that are VST compatible (I mainly use
Nuendo and Max/MSP)? Could I design a horizontal surround sound DVD using
ambisonic software for panning and localisation, and then burn 6 wav files
and release it on a 5.1 DVD which could be played on a normal home system?
best,
Gus




Re: [Sursound] Why do you need to decode ambisonic/b format signals

2011-01-25 Thread Steven Dive
One interesting, if odd-sounding, effect I found was using superstereo  
on a stereo channel-identification test track that had someone (Alan  
Wiltshire) speaking from positions full left, half left, centre, half  
right and full right. On my usual setting of 0.5 (the range is 0 to 1.5 on  
my decoder, in 0.1 increments), the speaking positions are perceived  
pretty much corresponding to the left-to-right spacing of my front  
speakers. On the 1.0 setting, the image from full left to full right is  
wrapped around like a horseshoe, with left as rear left and right as  
rear right, and with the centre and 'half' positions stretched around  
accordingly and somewhat broadened. But on the full 1.5 setting the  
apparent speaking positions were fully reversed, with right as left  
and left as right, yet sounding really quite focussed.


Any thoughts welcome. It darn well surprised me.

Steve

On 25 Jan 2011, at 20:53, Geoffrey Barton wrote:



> [...]


Re: [Sursound] Why do you need to decode ambisonic/b format signals

2011-01-25 Thread Geoffrey Barton

On 25 Jan 2011, at 17:00, sursound-requ...@music.vt.edu wrote:
> 
> Message: 8
> Date: Mon, 24 Jan 2011 12:03:11 -0700
> From: Martin Leese 
> Subject: Re: [Sursound] Why do you need to decode ambisonic/b format
>   signals ?
> To: sursound@music.vt.edu
> Message-ID:
>   
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Eero Aro  wrote:
> 
>> Jörn Nettingsmeier wrote:
>>> in theory, you can. in practice, you can't, because you'd have to know
>>> what stereo technique was used during recording
>> 
>> Yes you can.
>> 
>> Just one word: Trifield.
> 
> Steven Dive  wrote:
> ...
>> I understand that Trifield is derived from the same groundwork as
>> Ambisonic, which also gives us ambi superstereo. It's a matter of
>> personal judgement, I think, but do you more knowledgeable theory
>> folks know if Trifield is therefore as flexible in its use as
>> superstereo?
> 
> From memory, the theory behind Trifield
> assumes either Blumlein XY, or pan-potted
> multi-track mono.  Perhaps Geoffrey can chip
> in, or somebody can look at the paper
> (reference below).  Again from memory,
> SuperStereo assumes some sort of coincident
> mic technique so, in theory, is more flexible
> than Trifield.  I don't know of a reference for
> SuperStereo; this is a gap in Ambisonic
> theory.

It is not essential that the material is Blumlein or pan-potted; that is just 
the way MAG handled a virtual 'test signal' in the paper. Other recording 
techniques work OK too, but with varying results, much as they do over two 
speakers :-).

The main difference between 'Trifield' and 'Super-stereo' is that the former 
works over more than two speakers across the front sector, while the latter 
seeks to use an ambisonic array of speakers all around the listener to lock 
the front in place. There were a number of different alignments of 
'Super-stereo' in various decoder implementations, but in essence they all 
sought to bend the 'washing line' of the in-phase stereo image around the 
front arc with variable width, and to put anything substantially out of phase 
generally somewhere else, again rather dependent on the source material.
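
To make the 'washing line' picture concrete, here is a deliberately
simplified stereo-to-B-format sketch in Python (a toy illustration only,
not one of the historical Super-stereo alignments): the in-phase (mid)
content anchors the front through W and X, the difference content feeds Y,
and the width control trades front energy away so the image stretches
around the arc as width increases.

import numpy as np

def superstereo_toy(left, right, width=0.5):
    # Toy stereo-to-horizontal-B-format mapping, to illustrate the idea only.
    m = 0.5 * (left + right)                 # in-phase ("washing line") content
    s = 0.5 * (left - right)                 # out-of-phase content
    w = m / np.sqrt(2.0)                     # omni reference
    x = m * np.cos(width * np.pi / 2.0)      # width 0: pinned to the front;
                                             # width 1: folded to the sides;
                                             # width > 1: pushed towards the rear
    y = s
    return w, x, y

Feeding the resulting W, X, Y to an ordinary horizontal decoder shows the
image bending from a narrow frontal arc towards the sides and rear as the
width value grows.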

Geoffrey





