What I've always done is to keep the channel sets of different ambisonic
order separate, then design a decoder for each channel set, and mix the
speaker feeds. Perhaps that's overkill.
On Sun, Jun 11, 2023 at 4:25 PM Sampo Syreeni wrote:
> On 2023-06-01, Jan Jacob Hofmann wrote:
>
> > is it po
There's MPEG-H 3D Audio
https://en.wikipedia.org/wiki/MPEG-H_3D_Audio
and Google's Draft IETF proposal
https://datatracker.ietf.org/doc/rfc8486/
Aaron Heller
Menlo Park, CA US
I uploaded my copy of the CIPIC data and docs to
http://ambisonics.dreamhosters.com/CIPIC/
and hacked together a readme.md based on the original home page for the data
http://ambisonics.dreamhosters.com/CIPIC/cipic_readme.md
I apologize in advance for the lack of aviation content.
Aaron
It's a great resource, but it would certainly be better if the filenames
were more mnemonic and indicated the channel convention in use.
Aaron Heller
On Mon, May 23, 2022 at 5:43 AM Jack Reynolds
wrote:
> As far as I know, each file has the format in the file name.
>
Does anyone have instructions on how to set up the "three personal devices"
needed to stream Fastenow's piece "Vernal Periphony"? I assume some sort
of synchronization is needed across the devices.
Aaron Heller
On Thu, Mar 3, 2022 at 2:25 PM j...@bmbcon.demon.nl
. Soc., vol. 52, no. 11, pp. 1142-1156, Nov. 2004.
https://www.aes.org/e-lib/browse.cfm?elib=13028
Aaron Heller
Menlo Park, CA US
On Thu, Dec 2, 2021 at 10:33 AM eric benjamin wrote:
>
> I believe that Nando may have been thinking about reproduction with
> loudspeaker arrays. He
I'll add a couple of things that might be relevant.
The Calrec Soundfield MkIV has a capsule heater to drive off moisture: a 1k
resistor across the 50V supply, so 2.5W. It's labeled "Stem Htr" on this
schematic:
http://www.ai.sri.com/~heller/ambisonics/schematic-1.pdf
I'm in the US, and had
On Tue, Jun 4, 2019 at 2:48 AM Sean Devonport
wrote:
> Thanks so much Aaron! I have seen the ambisonic toolkit around but haven't
> played with it too much.
Just to be clear, there are two similarly named systems,
Ambisonic Toolkit (ATK) by Jo Anderson http://www.ambisonictoolkit.net
and
re 49
band-splitting filters and 336 NFC filters in the decoder. The design and
implementation of the filters is discussed in our IFC18 conference paper
http://www.ifc18.uni-mainz.de/papers/heller.pdf
Aaron
On Sun, Jun 2, 2019 at 1:22 PM Aaron Heller wrote:
>
> Hi Sean,
> Are all the s
Small Layouts (AllRAD2),” May 2018.
http://www.aes.org/e-lib/browse.cfm?elib=19460
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Sun, Jun 2, 2019 at 12:26 AM Sean Devonport
wrote:
>
> Thanks for the reply Florian.
>
> Yes, was aware of the imaginary loudspeakers. The a
The soundbars I've heard sound like they use crosstalk cancellation (aka
transaural stereo) to achieve surround effects. I believe the work of Edgar
Choueiri and his students at Princeton represents the state of the art in
that area.
https://www.princeton.edu/3D3A/index.html
Also Ralph Glasgal,
new 56-speaker
array.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Tue, May 14, 2019 at 12:51 PM Sean Devonport
wrote:
>
> Hey there everyone,
>
> Want to ask if anyone has any thoughts on decoding first order ambisonics
> to higher order decoders?
>
> I'
If it is any help, the script I wrote to make YouTube videos from AMB files
is here:
https://bitbucket.org/ambidecodertoolbox/amb2yt/src
Some samples that might help you reverse engineer the format
https://youtu.be/eY9DMn8pgGA
https://youtu.be/RC4ptd9B-NA
You could make a file with
A new site hosting Stephen Thornton's diaries and a new film about Michael
Gerzon and Ambisonics at Oxford
https://intothesoundfield.music.ox.ac.uk
All very nicely done.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
This is known as the "substitution method" in microphone calibration
literature.
On Sat, Sep 22, 2018 at 9:07 AM Fons Adriaensen wrote:
> On Sun, Sep 16, 2018 at 12:18:54PM -0600, Jonathan Kawchuk wrote:
>
> > I'm also curious if anyone has experimented with running an inverse EQ
> > calibration
de from Etienne Deleflie's Universal
Ambisonic work
https://tools.ietf.org/html/draft-ietf-codec-ambisonics-07
Martin Leese has posted pointers to it here from time to time. Early
versions had errors, which I reported and which were fixed in later
versions.
Aaron Heller (hel...@ai.sri
The radius of the tetrahedral array determines the frequency at which the
B-format polar patterns start to break down. The formula given by Gerzon is
c/(pi*r), where c is the speed of sound and r is the radius of the
array[1]. Depending on the design, the acoustic radius is about 10% larger
than the
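As a worked example of that formula (the 1 cm radius here is purely hypothetical):

```python
import math

def foa_upper_freq(radius_m, c=343.0):
    """Frequency (Hz) above which the B-format polar patterns start to
    break down, per Gerzon's rule of thumb f = c / (pi * r)."""
    return c / (math.pi * radius_m)

# A hypothetical 1 cm acoustic radius puts the limit near 10.9 kHz.
f_limit = foa_upper_freq(0.01)
```

Note that halving the radius doubles the frequency limit.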
One more detail... I had to create an Aggregate audio device to get it
working with Jack. I don't remember the details, but could refresh my
memory if anyone needs them.
On Thu, Mar 8, 2018 at 8:59 PM, Aaron Heller wrote:
> I have a "Mid-2010" Mac Mini connected to a Sherwo
with the Audio MIDI Setup
program.
Aaron Heller
Menlo Park, CA US
On Thu, Mar 8, 2018 at 5:54 AM, Augustine Leudar
wrote:
>
> Hi Marc,
> The processing involved here is not too CPU intensive; rather, it's whether
> the graphics card is capable of transmitting 8 channels of audio over HDMI
One correction: before you invert the A-to-B gain matrix, you have to
account for the -3 dB gain on W in FuMa B-format.
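A minimal numpy sketch of that correction, using an idealized tetrahedral A-to-B matrix (the capsule order and unit gains here are assumptions for illustration; a real array needs measured calibration filters, not just a gain matrix):

```python
import numpy as np

# Idealized A-to-B matrix for a tetrahedral array (capsules LFU, RFD, LBD,
# RBU); rows are W, X, Y, Z with unit gain on W.
A2B = np.array([[1,  1,  1,  1],
                [1,  1, -1, -1],
                [1, -1,  1, -1],
                [1, -1, -1,  1]], dtype=float)

# FuMa B-format carries W at -3 dB, so scale the W row *before* inverting.
A2B_fuma = np.diag([np.sqrt(0.5), 1.0, 1.0, 1.0]) @ A2B
B2A = np.linalg.inv(A2B_fuma)  # maps FuMa B-format back to capsule signals
```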
On Thu, Mar 8, 2018 at 8:35 PM, Aaron Heller wrote:
> A friend bought one, so I've been doing some reverse engineering.
>
> In the 96kHz stereo file, t
lters, the diffuse field response will be
wrong.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Thu, Mar 8, 2018 at 4:27 PM, Marc Lavallée wrote:
>
> Hi Steven, and thanks for the useful info.
>
> The fact that the Twirling720 presents itself as a stereo 96 kHz
> device and
I use it extensively as well. Easy to control from a midi control surface
if you need physical controls. Glad to help with any questions.
Aaron
On Wed, Oct 25, 2017 at 11:58 AM, Fons Adriaensen
wrote:
> On Wed, Oct 25, 2017 at 02:43:23PM -0400, len moskowitz wrote:
>
> > I'll try Plogue Bidule
ex.or DirAC.
Aaron
On Wed, Oct 25, 2017 at 8:09 AM, Marc Lavallée wrote:
> True! This is actually the assignment for this week. :-)
> I could post the solution here after I succeed,
> but I’m afraid that would be against the “Coursera honor code”...
> —
> Marc
>
> > On Oct
Doing this was one of the programming assignments in the course "Audio
Signal Processing for Music Applications"
https://www.coursera.org/learn/audio-signal-processing
Excellent course. Tools here:
https://www.upf.edu/web/mtg/sms-tools
https://github.com/MTG/sms-tools
On Tue, Oct 24,
I published a python script that does the encoding, mux'ing, and metadata
injection for ".amb" files to YouTube VR videos. Sox and ffmpeg do the
heavy lifting.
https://bitbucket.org/ambidecodertoolbox/amb2yt
Example output at:
https://youtu.be/eY9DMn8pgGA
You need to view it in Chrome on
On Thu, Jul 6, 2017 at 1:53 PM, Sampo Syreeni wrote:
>
> On 2017-07-05, Martin Dupras wrote:
>
>> I deployed a 21-speaker near-spherical array a few days ago, which I
think is working ok, but I'm having difficulty [...]
>
> Oh, and by the way, *please* compensate each speaker for 1) its
propaga
The decoders produced by my toolbox in FAUST (the ".dsp" files) have
distance, level, and near-field compensation up to 5th-order (and more
soon). Those can be compiled to a large number of plugin types, including
VST, AU, MaxMSP, ...
https://bitbucket.org/ambidecodertoolbox/adt
Aaron
On Thu,
Forgot the URL...
http://www.ai.sri.com/~heller/ambisonics/index.html#test-files
On Wed, Jul 5, 2017 at 6:15 PM, Aaron Heller wrote:
> I have some first-order test files that you can try. They're FuMa
> order/normalization. There's "eight directions" and some pink
Jul 5, 2017 at 5:53 PM, Aaron Heller wrote:
> Hi Martin,
>
> A few things...
>
> 1. You should use a first-order decoder to play first-order sources.
> That's not the same as playing a first-order file into the first-order
> inputs of a third-order decoder.
>
>
Hi Martin,
A few things...
1. You should use a first-order decoder to play first-order sources. That's
not the same as playing a first-order file into the first-order inputs of a
third-order decoder.
2. 1st-order periphonic (3D) ambisonics on a full 3D loudspeaker array gets
the energy correct,
> On Jun 21, 2017, at 1:13 PM, Oliver Larkin wrote:
>
>
> I had tried bidule, but it looked like it was going to record 16 mono files
The setting in Bidule's Audio File Recorder is a bit confusing here... if you
set "Channels per File" to "Stereo" it records all the channels in a single
file
Oli... Perhaps more than you wanted, but Plogue Bidule has a 16-channel
recorder (up to 32 ch) and is available as VST and AU plugins. It will also
host VST and AU plugins, and do many other types of processing. There is a
standalone version as well that interfaces to audio devices directly.
h
On Sat, Sep 17, 2016 at 11:24 AM, Bo-Erik Sandholm
wrote:
> Octathingy is probably 2 Tetra mics joined so we have
> Front Left Up, front left down, front right up, front right down
> Back Left Up, Back Left Down, Back Right Up, Back Right Down.
>
Not quite, there's a 45 deg twist between the top
My experience with FOA-to-binaural rendering is pretty much the same as
what Archontis says. I hear directional information and head-tracking
effects, but have never experienced the externalization and verisimilitude
that direct dummy-head or Algazi and Duda's motion-tracked binaural
recordings
On Tue, May 10, 2016 at 12:20 PM, Michael Chapman wrote:
> > Aaron Heller wrote:
> >> https://youtu.be/eY9DMn8pgGA
> > On Mon, 09 May 2016 00:02:49 +0100, Stefan Schreiber wrote:
> >> Well done.
> > Just one suggestion: As VR is a lot about 3D, you
MOV
with spatial metadata. You can find it here:
https://bitbucket.org/ambidecodertoolbox/amb2yt/src/master/amb2yt.py
--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
> On Tue, Apr 26, 2016 at 5:17 PM Dillon Cower wrote:
>
> > Hi Aaron,
> >
> > Currently the chan
Is there some trick to uploading these files? The files I make upload, but
then fail during processing.
I make the files like this:
$ /opt/local/bin/ffmpeg -loop 1 -framerate 30 -i cube-1920x960.png -i AJH_eight-positions-ambix.wav -map 0:v -map 1:a -c:a pcm_s16le -channel_layout quad -c:v
ametric decoder. No listening
tests, just visual comparison of plots of spatial energy.
We found very little spreading with low-complexity AAC, but a fair amount
with HE-AAC.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
bucket.org/ambidecodertoolbox/adt/src/6c6bdc2460352cd4b72d2dde6928fcb5141d1976/faust/ambi_panner_ambix_o5.dsp?fileviewer=file-view-default
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Mon, Apr 18, 2016 at 9:12 AM, Courville, Daniel wrote:
> Jan wrote:
>
> >how about http://a
tab, specify "windows" and "vst" and download the VST
plugin. (or AU, or LADSPA, or MaxMSP, or ...).
More on Faust at
http://faust.grame.fr/Documentation/
--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Tue, Apr 12, 2016 at 1:30 AM, Jörn Nettingsmeier <
Marc Lavallée, Eric Benjamin, and I put together a Trifield (three-speaker
stereo) plugin and demoed it at Burning Amp last fall. It is hosted at
https://bitbucket.org/ajheller/trifield/overview
It is written in Faust so can be compiled for a number of different hosts,
but we provide precompil
From an existing AMB file, any of these will work:
- flac --channel-map=none AJH_eight-positions.amb
- sox AJH_eight-positions.amb AJH_eight-positions.flac
- open the AMB file in Audacity and then export, selecting FLAC format
Note that FLAC is limited to eight channels, so this will work for
fir
AM, Aaron Heller wrote:
>
>
>
> On Thu, Mar 24, 2016 at 10:47 AM, Aaron Heller wrote:
>>
>> ISO/IEC 14496-11 "Information technology — Coding of audio-visual
objects —Part 11: Scene description and application engine" (second
edition, dated 11/2013; see Table 9 in
On Thu, Mar 24, 2016 at 10:47 AM, Aaron Heller wrote:
> ISO/IEC 14496-11 "Information technology — Coding of audio-visual objects
> —Part 11: Scene description and application engine" (second edition, dated
> 11/2013; see Table 9 in page 36)
>
> and
>
> ISO/IEC
formation
about
> > their HOA channel ordering and normalization scheme.
> > Could you please point me to these?
> >
> > Thank you,
> > Sönke
> >
> >
> > 2016-03-24 7:25 GMT+01:00 Aaron Heller :
> >
> > > Martin,
> > >
> > >
Make that "compiled to VST"...
http://faust.grame.fr/onlinecompiler/
On Wed, Mar 23, 2016 at 11:25 PM, Aaron Heller wrote:
> Martin,
>
> Note that while AmbDec can accommodate FuMa normalization on input, it
> still makes the connections to jack in ACN order so
Martin,
Note that while AmbDec can accommodate FuMa normalization on input, it
still makes the connections to jack in ACN order so the inputs will often
appear in Jack in ACN order (and never in FuMa order). I say "often"
because the Jack API does not have any notion of the 'order' of an
applicat
Hi Ricky,
I took a quick look at grambipan.c. For FuMa U you've written
189: (*APout4++) = sample1 * (2 * cosf(sample2)) * cosf(sample3) * cosf(sample3); // U
whereas the correct expression (found on Richard Furse's webpage, for
example) is
U: cos(2A) cos(E) cos(E)
Please note that cos(2x
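The difference is easy to check numerically (the identity is cos(2A) = 2 cos(A)^2 - 1, not 2 cos(A)); a quick sketch with an arbitrary test azimuth:

```python
import math

az, el = math.radians(30.0), 0.0  # arbitrary test direction

correct = math.cos(2 * az) * math.cos(el) ** 2  # U = cos(2A) cos(E) cos(E)
buggy = 2 * math.cos(az) * math.cos(el) ** 2    # what line 189 computes

# cos(2A) = 2*cos(A)**2 - 1, so the two only coincide where that holds.
identity = 2 * math.cos(az) ** 2 - 1
```

At 30 degrees azimuth the buggy value is about 1.73 while the correct U is 0.5, so the error is far from subtle.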
In BLaH11 (AES 137, 10/2014, Los Angeles), Eric and I compared horizontal
FOA over a 2-meter radius 4-speaker diamond vs. 8-speaker octagon with
binaural dummy head measurements and listening tests. (classic decoding:
2-band, rV=1 at LF, rE=sqrt(1/2) at HF, 400 Hz xover, NFC).
The TL;DR summary
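For reference, the HF target rE = sqrt(1/2) quoted above is the max-rE value for a first-order horizontal (2D) decoder, where rE = cos(pi/(2M+2)) for order M; a quick check:

```python
import math

def max_re_2d(order):
    """Max-rE figure for a 2D (horizontal) ambisonic decoder of order M:
    rE = cos(pi / (2M + 2))."""
    return math.cos(math.pi / (2 * order + 2))

# First order: cos(45 deg) = sqrt(1/2), the HF target used in the tests.
re1 = max_re_2d(1)
```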
der. It assumes that the speaker
coordinates are in the last three columns of the csv file.
If you are still having trouble, please send me the speaker coordinates (in
private email) and I will be happy to create the files for you.
Best...
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA, US
O
On Fri, Jan 15, 2016 at 2:58 PM, Jörn Nettingsmeier <
netti...@stackingdwarves.net> wrote:
> On 01/15/2016 10:00 PM, Martin Leese wrote:
>
>> Andres Cabrera wrote:
>>
>>> Very interesting.
>>>
>>> I'm wondering if it's worth considering separating the order for
>>> horizontal
>>> vs. vertical (ins
Thanks for pointing that out; just a series of bad (horrible, really!)
> typos from transcribing code. :) The equations should be corrected now.
>
> Dillon
>
> On Thu, Jan 14, 2016 at 9:13 AM Aaron Heller wrote:
>
> > Unfortunately, the definition of the spherical harmonics g
and
spherical coordinates. I'll post a link later today.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Thu, Jan 14, 2016 at 5:55 AM, Marc Lavallée wrote:
>
> It's about time!
>
> I made my little effort last year, with an experiment on
> http://ambisonic.x
acing capsules is a bit different from the downward-facing ones due
to diffraction and shadowing from the support structure and housing, so
some good taste and experience is needed.
--
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Sat, Sep 12, 2015 at 6:09 AM, Dave Malham wrote:
>
On Wed, Dec 31, 2014 at 3:20 AM, Dave Malham wrote:
>
> I wonder how closely this is related to the paper he was one of the
authors
> of at the 2010 Ambisonics Symposium? Anyone have it handy?
Here's the URL:
http://ambisonics10.ircam.fr/drupal/files/proceedings/poster/P6_41.pdf
Also an IEEE p
On Sat, Jul 19, 2014 at 9:47 AM, wrote:
On 19 Jul 2014, at 10:13, Ralf R Radermacher wrote:
Am 18.07.14 15:21, schrieb Rupert Brun:
> The BBC will make the BBC Proms Concerts available in 4.0 using
MPEG-DASH. The stream will be available internationally.
Does anyone have the first idea how to r
I recall this camera cluster being used for the Lincoln Sound and Vision
production with that odd four faced binaural dummy head.
http://now.lincoln.com/2013/02/an-entirely-new-sound-and-vision-2/
On Thu, Apr 10, 2014 at 10:39 AM, wrote:
>
> While at NAB2014 in Las Vegas earlier this week I
What happens when you attach cameras to every conceivable member of the
orchestra?
http://www.classicfm.com/discover/music/gopro-cameras-orchestra/
I especially like #5, the Trombone Cam.
Aaron (hel...@ai.sri.com)
Menlo Park, CA US
I downloaded the MPD file on the FAQ page with
wget http://rdmedia.bbc.co.uk/dash/ondemand/channel_test/1/5.mpd
If I'm reading it correctly, the channel announcements are 320 kbits/sec,
48k sample rate.
Aaron
On Wed, Mar 19, 2014 at 1:01 AM, Kees de Visser wrote:
> On 19 Mar 2014, at 07:33
groundbreaking to me. But,
> as always, I'm happy to be proved wrong.
>
> Worth keeping an eye on, though.
>
> Regards,
>
> John
>
>
> On 18 Mar 2014, at 16:56, Aaron Heller wrote:
>
> > There were three 'with height' workshops at the 2012 AES meeti
There were three 'with height' workshops at the 2012 AES meeting in San
Francisco* that featured an Aura 3D playback setup. David Bowles
(Swineshead) and Paul Geluso (NYU) played some recordings that were quite
nice -- height in front, with conventional surrounds -- but the remainder,
including rec
is harmless, because the WbinVec variable is used only if
> WBIN is considered True, and WBIN is actually a constant set to 0.
>
> --
> Marc
>
>
> Le Tue, 18 Mar 2014 01:01:25 +,
> Fons Adriaensen a écrit :
>
> > On Mon, Mar 17, 2014 at 05:06:32PM -07
I used a PCM-F1/SL-2000 combo extensively in the 1980s; about 150 concert
recordings total. Roughly half with my Soundfield MkIV used as a stereo
mic (and a few in B-format to 4-track analog). I'd do very rudimentary
'pause-button editing' to take out long pauses; anything fancier was with
analo
On Mon, Mar 17, 2014 at 1:09 PM, Fons Adriaensen wrote:
> On Mon, Mar 17, 2014 at 06:05:11PM +0100, /dav/random wrote:
>
> > The project is called IDHOA and the code is hosted here [1] under GPL .
>
> (after automatic conversion to python3)
>
> Traceback (most recent call last):
> File "./main.
I have a GigaPort AG connected by USB to my MacBook Pro and it works
correctly with Chrome Version 33.0.1750.152
I used Audio MIDI Setup (in /Applications/Utilities) and selected
"Multichannel/5.1 Surround" for the speaker configuration of the GigaPort
device.
Aaron Heller (hel...@
Steve,
I'm not sure I follow everything you're saying about angle errors, but
there are a few installations that work well here in the SF Bay area that I
have personal experience with. The Listening Room at Stanford's CCRMA
is a 3rd-order periphonic facility, described here
https://ccrma.s
On Wed, Mar 5, 2014 at 1:52 PM, David Pickett wrote:
> At 21:06 05-03-14, Dave Hunt wrote:
>
>> 1. 4 D sound (!) (Dave Malham)
>> 2. Re: 4 D sound (!) (Ronald C.F. Antony)
>> 3. Re: 4 D sound (!) (Dave Malham)
>> 4. Re: 4 D sound (!) (Eero Aro)
>> 5. Re: 4 D sound (!) (Michael C
/journal/poma/19/1/10.1121/1.4799424
On Fri, Jan 17, 2014 at 12:28 PM, Jörn Nettingsmeier <
netti...@stackingdwarves.net> wrote:
>
> On 01/17/2014 07:29 PM, Aaron Heller wrote:
>>
>> Jan 2014 Physics Today just landed on my desk and the cover article is by
>> Tapio
Jan 2014 Physics Today just landed on my desk and the cover article is by
Tapio Lokki on evaluation of concert halls.
"Tasting music like wine: Sensory evaluation of concert halls"
http://scitation.aip.org/content/aip/magazine/physicstoday/article/67/1/10.1063/PT.3.2242
It is labeled "free conte
Dick Duda and Ralph Algazi gave a talk and demo at a San Francisco AES
meeting at Dolby Labs a few years ago. At that time, they were recording
with a head-sized sphere with either 8 or 16 microphones around the
equator. They imagined that 8 would be used for teleconferencing and 16
for music rec
On Fri, Dec 20, 2013 at 7:40 PM, David Worrall wrote:
> I remember reading that, with exposure, humans' audio-processing
> "hardware" can adapt to/learn how to use a non-optimal HRTF, given a bit of
> time.
> Does anyone have a reference for this?
>
>
I don't know about 'non-optimal', but we can l
The wikipedia article on Ambisonics has the following paper under Source
texts on Ambisonics – basic theory. I can't seem to find a copy from my
usual sources. Do any of you have a copy?
W.C.Clarck, K.Alimi, B.Spendor: Ambisonic depending Aural recognition,
International Institute of Inuitive Audio
Not cheap, but the Smyth Research Realizer does do head tracking with any
headphones. It also measures your HRTFs and your headphones' response. I
played with it at Burning Amp this year and it was quite effective at
externalizing the sound from headphones.
http://www.smyth-research.com/products.ht
I hate to contradict you, but my experience playing my first-order
recordings (e.g. Pulcinella, LvB 4th Sym, available on Ambisonia) on the
2nd or 3rd order presets in Ambdec results in decoding errors. The most
obvious artifact is that frontal sources sound too close, sometimes right
in front of
I took the liberty of merging them into 4-channel files and putting them on
my server, which might be easier to access than the SkyDrive (the UI was in
Japanese for me; fortunately I recognized the character for 'down')
http://ambisonics.dreamhosters.com/01-Birds_WXYZ-110425_0119.wav
http://amb
I encountered this a while back and modified my copy of Ambdec 0.5.1 to
register its inputs with jack in FuMa order rather than ACN, specifically
to get the mplayer-jack-ambdec chain to work as expected (which I suspect
is one of the more common use cases). Ambdec's functioning is completely
unaff
Interesting paper in PNAS, from July. I believe it is open access, so
anyone can read/download. Aaron
http://www.pnas.org/content/110/30/12186.short
The supplemental information (SI) shows some of the equipment and more math.
I. Dokmanić, R. Parhizkar, A. Walther, Y. M. Lu, and M. Vetterli, “A
[1] P. Power, C. Dunn, B. Davies, and J. Hirst, “Localisation of Elevated
Sources in Higher-Order Ambisonics,” BBC R&D, WHP 261, Oct. 2013.
http://www.bbc.co.uk/rd/publications/whitepaper261
[2] D. Satongar, C. Dunn, Y. Lan, and F. Li, “Localisation Performance of
Higher-Order Ambisonics for Off
On Thu, Sep 26, 2013 at 12:15 PM, Eero Aro wrote:
The Soundfield microphone directional controls aren't exactly panning.
Sorry to quibble, but if I feed a signal into the W and X inputs (with
appropriate scaling) and ground Y and Z, then the soundfield controls on a
Mk4 behave like a B-format
A first-order B-format panner needs to implement the equations
W = S * sqrt(1/2)
X = S * cos(az) * cos(el)
Y = S * sin(az) * cos(el)
Z = S * sin(el)
where S is the signal being panned and az and el define the direction.
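Those equations drop straight into code; a minimal sketch (FuMa channel ordering, with the conventional -3 dB gain on W as written above):

```python
import numpy as np

def foa_pan(signal, az, el):
    """Pan a mono signal to first-order B-format (W, X, Y, Z).

    az and el are in radians; W carries the sqrt(1/2) (-3 dB) FuMa gain.
    """
    return np.stack([signal * np.sqrt(0.5),
                     signal * np.cos(az) * np.cos(el),
                     signal * np.sin(az) * np.cos(el),
                     signal * np.sin(el)])

# A source straight ahead (az = el = 0) lands entirely in W and X.
wxyz = foa_pan(np.ones(8), 0.0, 0.0)
```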
The Calrec Soundfield MkIV controller box has analog circuitry for
s
Not specifically Ambisonic, but Trinnov has technology for automatically
tuning a surround setup. I recall them using it during the "with height"
demo sessions at Recombinant Media Labs during the 2006 AES convention in
San Francisco.
http://www.trinnov.com/technologies/loudspeaker-room-optimiza
On Mon, Sep 23, 2013 at 3:55 PM, Andy Furniss wrote:
>
>
> I don't quite understand the "in phase" though, are you saying that they
> artificially adjust phase for the same sound that comes out of more than
> one speaker to affect the mixdown?
The Recording Academy recommendations for surround
Also here, with PDF
http://www.google.com/patents/US20110135098
(I see Google Patents, now has a "Find prior art" button)
On Sat, Sep 14, 2013 at 8:36 AM, Aaron Heller wrote:
> I don't know if it is relevant, but I found this patent application
>
> http://
I don't know if it is relevant, but I found this patent application
http://www.faqs.org/patents/app/20110135098
Patent application title: METHODS AND DEVICES FOR REPRODUCING SURROUND
AUDIO SIGNALS
Inventors: Markus Kuhr (Wedemark, DE) Jurgen Peissig (Wedemark, DE) Axel
Grell (Wedemark, DE)
y (hel...@ai.sri.com) if you want to try it out.
Aaron Heller
Menlo Park, CA US
Can it be turned off in the encoder?
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Fri, Aug 2, 2013 at 9:39 AM, Stefan Schreiber wrote:
> Martin Leese wrote:
>
> Stefan Schreiber wrote:
>> ...
>>
>>
>>> To offer a backward-compatible extension of a <
Regarding 'even' distribution of points on a sphere:
Most of the spherical microphone arrays use the Fliege points
http://www.personal.soton.ac.uk/jf1w07/nodes/nodes.html
For numerical integration on the sphere (both for ambisonic decoder
optimization and mic array simulation), I use Lebedev-La
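Lebedev-Laikov nodes and weights come from published tables rather than a closed form, but any exact spherical quadrature works the same way; a sketch using a Gauss-Legendre x uniform-azimuth product rule as a stand-in:

```python
import numpy as np

def sphere_quadrature(n):
    """Product quadrature on the unit sphere: Gauss-Legendre in cos(zenith),
    uniform (trapezoidal) in azimuth. Exact for spherical harmonics up to
    roughly degree 2n-1."""
    mu, w_mu = np.polynomial.legendre.leggauss(n)  # mu = cos(zenith)
    az = 2 * np.pi * np.arange(2 * n) / (2 * n)
    MU, AZ = np.meshgrid(mu, az)                   # shape (2n, n)
    W = np.broadcast_to(w_mu, MU.shape) * (2 * np.pi / (2 * n))
    return MU.ravel(), AZ.ravel(), W.ravel()

mu, az, w = sphere_quadrature(8)
area = w.sum()  # integral of 1 over the sphere = 4*pi
```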
Faust here:
http://faust.grame.fr
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Wed, Jul 3, 2013 at 7:33 AM, Moritz Fehr wrote:
> hi everyone,
>
> thank you very much for your replies -- what i would like to achieve is
> playing a mix of a b-format recording combined with
heard it on
orchestral recordings at his studio in Stockton and it sharpens up the
orchestra image nicely.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Wed, Jun 26, 2013 at 10:02 AM, Eric Carmichel wrote:
> Greetings All,
> I have a friend who's an advocate of th
from Fellgett in the May 3, 1973 issue; digging that up, we find him
using the term Ambisonic in this letter dated March 23, 1973. So that
narrows the window to 9/72 to 3/73.
> On 23 June 2013 21:56, Aaron Heller wrote:
>
> > The term "Ambisonics" does not appear at all in Fellge
The term "Ambisonics" does not appear at all in Fellgett's 9/72 article
[1], but is in the title in 11/73 [2].
[1] P. Fellgett, “Directional information in reproduced sound,” Wireless
World, vol. 78, no. 1443, pp. 413–417, 1972.
[2] P. Fellgett, “Ambisonic reproduction of sound,” Electronics and
On Sat, May 18, 2013 at 3:01 AM, Rev Tony Newnham <
revtonynewn...@blueyonder.co.uk> wrote:
> Indeed - there's a picture or two in one of his books, which I have here
> somewhere. I don't think he tried to mimic the positions of instruments
> within an ensemble though - except maybe the piano.
On Sun, May 12, 2013 at 3:59 AM, Iain Mott wrote:
>
> The Earfilms link is interesting. I wonder if people on the list have
> other references or links on ambisonics applied in theatrical
> productions, either traditional theatre or theatrical installation? Your
> own productions or the work of ot
Helmut Wittek gave a paper at Tonmeistertagung 2006 that discusses the
relationship between Schoeps Double-MS and Ambisonic B-format.
http://hauptmikrofon.de/HW/TMT2006_Wittek_DoubleMS_neutral.pdf
English translation here:
http://www.schoeps.de/documents/SCHOEPS_Double_MS_paper_E_2010.pdf
---
Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International
Conference on, 2002, vol. 2.
[4] A. O'Donovan, R. Duraiswami, and D. Zotkin, “Imaging concert hall
acoustics using visual and audio cameras,” *Proc IEEE Int Conf Acoust
Speech Signal Process*, pp. 5284–5287, Jan. 2008.
have
done the listening tests, acknowledge that a six-speaker hexagon is an
improvement over a square. Some people have reported good results with 8,
others report spectral contamination. Mathematical analysis supports the
latter.
Aaron Heller
Menlo Park, CA US
have access to (vs. pathological examples), so you can listen and
report back on what you hear.
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
cond
mic at right angles and you have a natural horizontal B-format array (WXY).
Martin Kantola's Panphonic microphone works this way:
http://www.panphonic.com/
Aaron Heller (hel...@ai.sri.com)
Menlo Park, CA US
On Tue, Apr 23, 2013 at 12:45 PM, Eric Carmichel wrote:
> This po
ker locations, but it is quite usable. I have NFC
and phase-matched crossover filters working in Faust, but not integrated
yet.
If anyone like to try it, contact me off list (hel...@ai.sri.com) and I'll
send you a beta copy. I'll be doing a general release as soon as I get
some