Re: [Sursound] Ambisonics with self-contained headphones head tracking

2022-01-06 Thread David McGriffy
Jun,

Sounds like a great project.  I have long been playing with Teensys and
ambisonics, including porting both encoding and decoding to that platform.
My code won't directly help you, since one of the first things I did was
convert the whole audio chain in the Teensy Audio Library to use floats
instead of integers.  Not only was my own code already in floats, but I find
DSP programming in floats much easier than in integers.  And since the
Teensys have hardware single-precision floating point, it all works fine.

A Teensy 4 will certainly be able to run a basic decoder.  The most basic
version is nothing but sums and differences.  The next step up might be to
add shelf filters.  However, for the application you envision, most people
would want a conversion to binaural, and there I expect you will run into
trouble.  Binaural conversion normally requires running multiple
convolutions.  The filters are usually short, but I suspect even eight
512-point convolutions are beyond the Teensy 4's ability, though perhaps
you could manage with integer math and truncated filters.
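To make the "sums and differences" point concrete, here is a minimal first-order decode to a square of virtual speakers. This is an illustrative sketch in plain Python (a Teensy version would be C++ against the Audio Library); the channel weighting and azimuth choices are assumptions, not a reference decoder.

```python
import math

def decode_square(w, x, y):
    """Decode first-order B-format (W, X, Y) to four speakers on a square
    at azimuths 45, 135, 225, 315 degrees, using a simple velocity decode:
    S = 0.5 * (sqrt(2)*W + X*cos(az) + Y*sin(az))."""
    feeds = []
    for az_deg in (45.0, 135.0, 225.0, 315.0):
        a = math.radians(az_deg)
        feeds.append(0.5 * (math.sqrt(2.0) * w
                            + x * math.cos(a)
                            + y * math.sin(a)))
    return feeds

# Encode a unit source at 45 degrees (W carries the usual 1/sqrt(2) weight)
# and decode it; the front-left feed dominates.
az = math.radians(45.0)
front_left, back_left, back_right, front_right = decode_square(
    1.0 / math.sqrt(2.0), math.cos(az), math.sin(az))
```

For the square, the cosines and sines of all four azimuths are ±1/√2, so each feed really does reduce to scaled sums and differences of W, X, and Y.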

That said, I'm sure you can create a basic decoder and run a simple XY
stereo output.  I have seen this used for VR playback with head tracking,
and the results were quite usable.  You will also need an ambisonic rotator,
ideally with smoothing to avoid zipper noise.  The rotation itself would
then be an interpolation between two 4x4 matrix multiplies, but I expect you
will need to set up table-driven trig functions to build the matrices.
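A sketch of that rotation stage, yaw only and with an assumed sign convention; a real implementation would ramp the interpolation fraction across each audio block rather than jump:

```python
import math

def yaw_matrix(theta):
    """4x4 rotation of B-format (W, X, Y, Z) about the vertical axis.
    Convention assumed here: X' = X*cos(t) - Y*sin(t), Y' = X*sin(t) + Y*cos(t)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0,  0, 0],
            [0, c, -s, 0],
            [0, s,  c, 0],
            [0, 0,  0, 1]]

def apply(m, b):
    """Multiply a 4x4 matrix by a B-format sample vector."""
    return [sum(m[i][j] * b[j] for j in range(4)) for i in range(4)]

def interp(m0, m1, frac):
    """Linear crossfade between two rotation matrices; ramping frac from
    0 to 1 across a block smooths head-tracker updates (zipper noise)."""
    return [[(1 - frac) * m0[i][j] + frac * m1[i][j] for j in range(4)]
            for i in range(4)]

m0 = yaw_matrix(0.0)
m1 = yaw_matrix(math.pi / 2)               # head turned 90 degrees
rotated = apply(m1, [0.7, 1.0, 0.0, 0.0])  # X-only sound field
halfway = apply(interp(m0, m1, 0.5), [0.7, 1.0, 0.0, 0.0])
```

Linearly interpolated matrices are not themselves exact rotations, but for the small per-block angle changes of a head tracker the error is negligible, and the crossfade is what kills the zipper noise. On a Teensy the cos/sin calls would come from a lookup table.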

I'd love to hear more about how your project develops.  You can contact me
at dmcgri...@vvaudio.com if the discussion gets too detailed for this forum.

Thanks,
David


On Wed, Jan 5, 2022 at 7:58 PM byungjun kwon  wrote:

> Hello,
>
> I'm a media artist and have been playing around with headphones with
> custom-built circuitry
> <https://drive.google.com/drive/folders/1C2FX65AsQ9YsBrL4eVJvjhEDyWnPpXSt?usp=sharing>.
>
> It's a similar topic to the ongoing AirPods thread, but I opened a new
> thread since it involves new hardware.
>
> The circuit is based on a Teensy 4.0 with an audio codec (SGTL5000), an
> SD card, indoor positioning (DW1000), GPS, an accel/gyro (LSM6DS33), an
> on-board mic, etc.
>
> Combined with the Teensy audio library, I have been using it in various
> art projects, which offer quite accurate site-specific audio in the space
> and user interaction based on distance and head tracking.
>
> FOA playback with head tracking has long been on my list, and I'd like to
> have your opinion.
>
> It can play multichannel audio from the SD card and is fast enough (600 MHz)
> to run a decoder, I think.
>
> Can anyone guide me on how to develop an FOA decoder in the Teensy Audio
> environment to achieve self-contained headphones with head tracking?
>
> Any comments would be appreciated!
>
> Regards, Jun
>
>
>
>
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> https://mail.music.vt.edu/mailman/private/sursound/attachments/20220106/9755ea74/attachment.htm
> >
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] Funding opportunity for small business

2020-10-28 Thread David McGriffy
Oops, pardon to the list  I meant to send that privately, of course.  Well,
there's a little more about me if anyone is interested.

Thanks,
David McGriffy

On Wed, Oct 28, 2020 at 11:05 AM David McGriffy  wrote:


Re: [Sursound] Funding opportunity for small business

2020-10-28 Thread David McGriffy
Anne,

To see a John Muir quote on a SurSound post definitely caught my
attention.  I run the audio software company VVAudio and I've just become a
Texas Master Naturalist.  I believe I am uniquely qualified to be a part of
your program.

My company, VVAudio, has been building software tools for surround
sound since 2001.  My primary focus today is ambisonics, especially
ambisonic microphones.  I provide the plugins that come with both Core
Sound and Brahma brand mics.  I have been lucky enough to work with some of
the world's top experts in microphone array calibration and learned how
much I enjoy MatLab programming along the way.  As virtual reality took
ambisonics from niche to mainstream, I was able to sell my code and
services to several large VR companies.  I have ported my code to the Unity
game development environment and to mobile devices.  And, perhaps
significantly for your mission, the two V's in my company name stand for
'Visual Virtual', and as such many of my plugins feature sophisticated,
OpenGL-based graphics.  I believe that being able to visualize what's going
on helps tremendously when working with sound, especially surround sound.

I also have many skills outside of audio that pertain to this opportunity.
I have long held a day job as a programmer or IT manager where I've worked
on everything from GPS systems to oil rigs to ecommerce websites.  I've
written low level firmware and I've done a lot of development process
consulting.  I know how to build and manage a great team.  I believe in
communicating as well as coding and as such I have published one book
(Make: Drones) and have a good start on one about VR/AR audio.  I have a
life-long love of photography and I do all my own graphics.  And, as I
mentioned, I just completed my training as a Texas Master Naturalist, an
education and volunteer program mostly aimed at our state parks.

Just as writing a book about drones combined many, diverse threads in my
life, like signal processing and flying (I am a licensed private pilot),
your opportunity looks like the perfect combination of everything I want to
be doing right now.  I want to write more and better audio code.  I want to
pay more attention to audio in nature.  And I want to communicate my
excitement about these things to others.

I would love to talk more about how your grant proposal process works and
how I can best help your program.

Thanks,
David McGriffy


On Tue, Oct 27, 2020 at 7:50 PM Anne Simonis  wrote:

>  Hi All,
>
> I'm an acoustic ecologist with NOAA, and my research uses underwater
> acoustic recordings to study marine mammal ecology, behavior, and the
> impacts of human noise.
>
> NOAA recently announced a small business grant opportunity, with a subtopic
> related to citizen science and education (subtopic 9.5 in the full
> announcement):
> https://www.grants.gov/web/grants/view-opportunity.html?oppId=329444
>
> There's a strong interest among many NOAA acoustics researchers (including
> myself!) to develop acoustic analysis tools for applications in education
> and citizen science. For example, a web-based spectrogram annotation tool
> would be very useful not only for us, but also the greater community of
> bioacoustics, speech recognition and machine learning. I imagine all of you
> likely have a bunch of creative ideas that this funding might also be able
> to support!
>
> The grant is a multi-phase program, with funding available for $150k for
> the first 6 months and potentially $500k for 2 years after that.
> Application period closes Jan 13, 2021.
>
> If you are part of a small business that might be interested in submitting
> a proposal, I'd be happy to talk more with you about your ideas and our
> needs.
>
> Feel free to reach out to me at anne.simonis at noaa.gov to set up a time
> to chat.
>
> Best wishes,
> Anne
>
> --
> "All things make music with their lives" -John Muir
>
> @AnneListens <https://twitter.com/AnneListens>


Re: [Sursound] tetrahedral mic record

2018-09-11 Thread David McGriffy
Actually, directly on axis of one of the capsules may be the worst
direction for a tetrahedral array.  While it's true that otherwise you are
slightly off-axis for a given capsule, with whatever coloration that might
cause, I believe the non-coincidence problems of the complete array
overwhelm this effect.  When pointing directly at one capsule, the distance
to each of the other three is the same, thus the peaks and notches caused
by phase differences all line up at the same frequency and become one
bigger peak or notch.  When a source comes in between two capsules, the
distances in the array don't match so the peaks and notches spread out in
frequency and help to smooth each other out a bit.  Of course, the
calibration filters for tetrahedral mics attempt to correct for this
effect, but those filters more closely match the diffuse field average of
all directions than the more extreme results in the direction of one
capsule.
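To put rough numbers on the comb filtering: two summed arrivals whose path lengths differ by d metres produce notches at odd multiples of c/(2d). The 12 mm path difference below is purely illustrative, not the geometry of any particular mic:

```python
def comb_notches(path_diff_m, count=3, c=343.0):
    """First few notch frequencies (Hz) from summing two signals whose
    arrival paths differ by path_diff_m metres: f = (2k+1) * c / (2*d)."""
    d = path_diff_m
    return [(2 * k + 1) * c / (2 * d) for k in range(count)]

# Hypothetical 12 mm path-length difference between capsule arrivals.
notches = comb_notches(0.012)
```

With matched inter-capsule distances, the notches from all the capsule pairs land at the same frequencies and reinforce; mismatched distances spread them across frequency, as described above.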

David McGriffy
VVAudio


On Tue, Sep 11, 2018 at 8:21 AM James Mastracco 
wrote:

> I'd love to see pictures of the chapel and the mic placement.
>
> JM
>
> > Hi Folks
>
> > Is it better to align performers to capsule axis, to get the best
> > frequency, and noise performance from those directions, then rotate the
> b-format encode afterwards?
>
> > I am recording at Union Chapel, with tetrahedral mics, and spot
> microphones.
> > The final output will be stereo and HOA.
> > I usually just position tetra mics as normal (upside down and
> > end-fire), but it occurred to me that it may be better to align the
> > capsules to performers (where possible).
> > All performers will be spread out horizontally on an equilateral
> > triangle, with the main tetra on a corner.
> > The main decode will be stereo, but I want to capture the acoustic of
> the place, and do a HOA mix.
>
>
> > Best
>
> > Steve
> > ___
> > Sursound mailing list
> > Sursound@music.vt.edu
> > https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe
> > here, edit account or options, view archives and so on.
>
>
>
> --
> Best regards,
>  Jamesmailto:acoustic...@verizon.net
>
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] [allowed] Re: Strange 'buzz' in Ambisonic recording

2018-05-11 Thread David McGriffy
Given that this is a Brahma, it uses FFT-based processing.  My suspicion is
that the artifact is actually something in the FFTs, like a windowing
problem or a bug in handling the first FFT bins.  At 48 kHz, 1K block
boundaries would recur at about 48 Hz.
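The arithmetic behind that guess, for a couple of plausible block lengths (the actual FFT size inside the encoder is an assumption here):

```python
def block_rate(sample_rate_hz, block_len):
    """Rate at which FFT block boundaries recur; a per-block glitch shows
    up as a buzz at this frequency and its harmonics."""
    return sample_rate_hz / block_len

rate_1024 = block_rate(48000, 1024)   # ~46.9 Hz for 1024-sample blocks
rate_1000 = block_rate(48000, 1000)   # exactly 48 Hz for 1000-sample blocks
```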

Does using Xvolver give the same artifact?  IIRC I read that code and tried
to do the FFT filters in a compatible way in VVEncode so it could be they
both have the same trouble or it could just be a bug in my code.

David
VVAudio

On Fri, May 11, 2018 at 9:13 AM Fons Adriaensen  wrote:

> On Fri, May 11, 2018 at 01:03:57AM +0100, Gerard Lardner wrote:
>
> > Actually really only when the organ is playing; the brass is usually with
> > the organ, but not always. The buzz is present when the organ is playing
> > loudly.
>
> Could you make available a small part (20 seconds or so) of the original
> A-format file and the encoded B-format one for the part where you hear
> the 'buzz' ??
>
> --
> FA
>


[Sursound] Mac/Win support for OctoMic

2018-03-29 Thread David McGriffy
Not wanting the Windows and Macintosh users to get left out of the OctoMic
party, I have been working with Len and Fons to produce VST and AAX plugins
as well as command line tools for both operating systems.  The new encoder
will be ready to ship with the OctoMic, of course, and I should have a
matching higher order decoder ready too, but only if I get back to
programming ...

David McGriffy


Re: [Sursound] [allowed] brahma

2017-01-05 Thread David McGriffy
I recently developed a MatLab script to create calibrations based on
Angelo's method.  Set up equations for each measurement's expected results,
find the LMS solution in each frequency bin, then build the resulting
filter matrix.  A little engineering around the low and high frequencies
and it seems to work pretty well.
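The per-bin step can be sketched as an ordinary least-squares solve. This toy version has two unknowns and real-valued data so it stays checkable; real calibration data would be complex-valued, with one such solve per frequency bin per output channel:

```python
def lstsq2(rows, rhs):
    """Least-squares fit of two unknowns h to rows @ h ~= rhs via the
    normal equations (A^T A) h = A^T b.  A calibration script would run
    one such solve per frequency bin and output channel."""
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)]
           for i in range(2)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# Three "measurements" constraining two filter coefficients in one bin
# (numbers invented for illustration):
h = lstsq2([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [2.0, 3.0, 5.0])
```

In MatLab the same per-bin solve is just a backslash (`A\b`) inside the loop over bins.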

I'd also like to point out that VVEncode (alas not VVTetraVST or VVMic)
will process anyone's 4x4 matrix of filters if they are put in wav files
like Angelo and Brahma do.  I also have matrix calibration processing in my
command line tool and I've done it in pure MatLab.  Contact me if you want
to try this and have any questions.

I mention all this mostly because I'd be interested in processing anyone's
custom measurements.  For me this is test data for my scripts.  So far I've
only run data out of one lab and with one type of
excitation/reference/etc.  I'd like to make sure I'm not dependent on those
things.  And I can return a working calibration, I hope.

David
VVAudio

On Thu, Jan 5, 2017 at 1:21 PM, Fernando Lopez-Lezcano <
na...@ccrma.stanford.edu> wrote:

> On 01/05/2017 02:19 AM, umashankar manthravadi wrote:
>
>> I use an earthworks calibration microphone, but any good quality six mm
>> Omni microphone will be okay. Noise is not important, but flat frequency
>> response is. You could use a tetramic and take the W signal after
>> processing, but it might be overcomplicating things. Coaxial speakers sound
>> ideal, but many of them have problems. A small five inch full range, or 5
>> inch plus tweeter mounted close together, is best. I use an Adam 5 after
>> first sealing off the bass port. Choice of microphone is more important
>> than the speaker.
>>
>
> Indeed. I have an (if memory serves) emm6 "reference microphone" which is
> not very expensive but comes with its own calibration curve. Flatness is
> important as that will define how true is the frequency response of the
> calibration. For the excitation I use a single driver small speaker, so far
> it has been fine. I record is our rather small concert hall (the Stage) and
> I managed to get about 5 mSecs of useful data before the first reflection
> arrives. That seems to be enough. Completely agree with sealing the bass
> port if you have one like that, that's what Eric Benjamin recommended doing.
>
> I'll probably be offline for a few days, sorry (vacation, no internet,
> bliss)...
> Good luck!
> -- Fernando
>
>
>
> Sent from Mail for
>> Windows 10
>>
>> From: Bo-Erik Sandholm
>> Sent: Thursday, January 5, 2017 3:31 PM
>> To: sursound
>> Subject: Re: [Sursound] [allowed] brahma
>>
>> To get a good calibration, I expect I will need at least one
>> known/reference source or sensor?
>>
>> The reference for me will probably be my coresound Tetramic.
>>
>> I will probably use Kef eggs as speakers as they are coaxial transducers.
>> Only at low frequencies will the port be off-center.
>>
>> Have I understood the basics?
>> I will probably measure outside on a calibration day or borrow an anechoic
>> room at Swedish Radio.
>>
>> Bo-Erik
>>
>> Den 5 jan. 2017 11:53 fm skrev "umashankar manthravadi" <
>> umasha...@hotmail.com>:
>>
>> Dear fernando
>>>
>>>
>>>
>>> I forwarded your link to marc Lavallée when you first posted them. My
>>> intent is that we should have multiple compatible systems for
>>> calibrating A
>>> format mics
>>>
>>>
>>>
>>> Sent from my Windows 10 phone
>>>
>>>
>>>
>>> From: Fernando Lopez-Lezcano
>>> Sent: 05 January 2017 00:28
>>> To: Surround Sound discussion group;
>>> glard...@iol.ie
>>> Subject: Re: [Sursound] [allowed] brahma
>>>
>>>
>>>
>>> On 01/02/2017 09:59 PM, Bo-Erik Sandholm wrote:
>>>
 Good luck in your continued effort.

 By the way, I have acquired one of your tetra mic 3D Shapeways prints for
 14 mm capsules.
 I have not yet assembled it.
 I wonder, is there any software package and description of how to create
 calibration files for the mic?
 I will probably have the possibility to get temporary access to an

>>> anechoic
>>>
 room.

>>>
>>> Hi Bo-Erik,
>>> You could take a look at my project here (I posted about this a while
>>> back):
>>>
>>> https://cm-gitlab.stanford.edu/ambisonics/SpHEAR/
>>> (there is mailing list with 0 posts so far, ha ha..)
>>>
>>> See paper here (been working on this for over a year):
>>> https://ccrma.stanford.edu/~nando/publications/sphear.pdf
>>>
>>> The project includes free software (open source, GPL) that runs in
>>> Matlab and derives a calibration matrix (4x4 matrix of FIR filters) that
>>> can be used directly with TetraProc (by Fons Adriansen) - or any other
>>> software that can run the filters.
>>>
>>> Just a few days ago I _finally_ managed to get the software into
>>> somewhat usable shape in that I think (and I'm pretty much wrong every

Re: [Sursound] leftover xmas ornaments?

2016-09-17 Thread David McGriffy
I have a friend who has contributed to the OpenSCAD project, so I know a
bit about it. It's a 3D modeling program/language that works by writing
lines of code instead of pushing things around with a mouse. Because of
this, you can have parameters for things like size, the number of teeth on
a gear, or whether to add a certain feature. This way one model can produce
a whole family of objects. OpenSCAD then outputs the same G-code as other
3D printer programs, so it can be used with most printers. It suits old
programmers like me who still count their productivity in lines of code, no
matter how useless this measure has become (or ever was).

David
VVAudio

On Sat, Sep 17, 2016 at 12:56 PM, Stefan Schreiber 
wrote:

> Fernando Lopez-Lezcano wrote:
>
> Hi all,
>>
>> I've been working on this project since November 2015, and at the time I
>> thought I would be done by Christmas, hence the subject line... (I was very
>> naive). The main motivation of the project was to have easy to build and
>> cheap microphone arrays for my students to use in class (@ CCRMA,
>> Stanford)...
>>
>> So, you can choose what you can use this for: Ambisonics themed Xmas tree
>> ornaments, 3d puzzles of platonic solids, big earrings for your loved ones
>> or, perhaps, microphone arrays.
>>
>> I've been working on designs that are 3D printable as flat pieces on
>> cheap or medium priced printers and are assembled and glued together like
>> 3d puzzles, starting with a regular tetrahedral first order microphone and
>> then moving on to Eric Benjamin and Aaron Heller's Octathingy (8 capsules)
>> and a few more "platonic solid" designs (12 and 20 capsules, these last
>> just to test the concept of even bigger 3d puzzles - it works).
>>
>
>
> It is a bit frustrating to read about (probably) interesting projects
> which are < completely unintroduced >.
>
> https://ccrma.stanford.edu/search/node/Octathingy
>
> Didn't find any real info. Is this supposed to be some 2D design? If so,
> Orange Research Labs also had a prototype, and also a 12-capsule (2nd
> order, 3D) prototype, which seems to be too big. (Capsule distances on the
> sphere implied a spatial aliasing limit of about 4.5 kHz, which would be
> (too) low for any serious recording.)
>
> So, < before > talking about 3D printed holders: What actually IS an
> 8-capsule Octathingy?
>
> Waiting for some enlightening infos... :-)
>
> Best
>
> Stefan
>
>
>
>> All models are written in Openscad (a 3d modeling programming language),
>> with most of the dimensions being parametric - the models are, after all,
>> just software.
>>
>
> What does this mean? "Parametric dimensions" for some 3D printer???
>
>
> I spent a couple of weeks doing plain old geometry on paper to try to get
>> everything to fit just right...
>>
>> I wrote a paper on the progress of the project so far for AES SFC (which
>> I regretfully was unable to attend), you can find it for now in my web page
>> - jump to the publications link[*]. I have a first working prototype
>> (calibration and measurements in the paper), I'm currently working on two
>> more and looking forward to testing the 8 capsule design.
>>
>> A lot of work ahead (coding and hardware design, documentation, etc).
>> This turned out of be a black hole for any time I can throw at it.
>> Contributions welcome...
>>
>> GPL Openscad code and Creative Commons licensed 3D models are available
>> here:
>>
>>https://cm-gitlab.stanford.edu/ambisonics/SpHEAR/
>>
>> (there is also a low volume mailing list available, so far 0 messages :-)
>>
>> You can also find a Kicad PCB design for the phantom power interface for
>> each capsule (they fit into the body of the latest design) and the
>> preliminary calibration software (GPL, written in Octave) for the
>> tetrahedral design. But of course everything needs better documentation.
>> Take a look, I included a few more pictures...
>>
>> If you are tempted to build one be forewarned that it is a LOT of work :-)
>>
>> Many in this list helped a lot (you know who you are, thanks!!), I would
>> not have gotten this far by walking alone.
>>
>> Enjoy!
>> -- Fernando
>>
>> [*] https://ccrma.stanford.edu/~nando/
>>

Re: [Sursound] New FOA recorder

2016-09-06 Thread David McGriffy
You can sync two Tascam DR-680s and get 12 usable channels with matching
pre-amps and completely digital gain.

I wonder if the new Zoom has truly digitally controlled gain or just a
digital display of an analog value. You could find out by recording a tone,
moving the gain knob slowly, and then looking for a smooth or step-wise
response in the recording.
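Offline, that test could look like the following sketch; the block size and step threshold are arbitrary choices, and the two synthetic signals stand in for recordings made with stepped versus smoothly ramped gain:

```python
import math

def rms_envelope(samples, block=1000):
    """RMS level of each non-overlapping block of samples."""
    return [math.sqrt(sum(s * s for s in samples[i:i + block]) / block)
            for i in range(0, len(samples) - block + 1, block)]

def count_gain_steps(samples, block=1000, threshold_db=0.8):
    """Count abrupt block-to-block level jumps; digitally switched gain
    shows discrete steps, a true analog pot ramps smoothly."""
    env = rms_envelope(samples, block)
    return sum(1 for prev, cur in zip(env, env[1:])
               if abs(20.0 * math.log10(cur / prev)) > threshold_db)

# Synthetic 1 kHz tone at 48 kHz: one copy with a discrete gain step,
# one with a smooth gain ramp over the same range.
sr = 48000
tone = [math.sin(2 * math.pi * 1000.0 * n / sr) for n in range(10000)]
stepped = [s * (1.0 if n < 5000 else 1.5) for n, s in enumerate(tone)]
ramped = [s * (1.0 + 0.5 * n / 10000) for n, s in enumerate(tone)]
```

On these signals, `count_gain_steps` finds the single discrete jump in `stepped`, while the smooth ramp in `ramped` stays under the threshold.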

David McGriffy
VVAudio

On Tue, Sep 6, 2016 at 10:37 AM, umashankar manthravadi <
umasha...@hotmail.com> wrote:

> My eight-channel microphone is meant to be 2nd order with height. I know
> that needs nine channels, but Angelo Farina has told me it will be possible
> to create a filter matrix so the 8 channels of A-format can be processed
> into 9 channels of B-format.
> The reason for eight channels was of course recorder availability. The F8
> can do 10 channels if you connect an adaptor, but I am not sure those two
> channels will be an exact match for the other eight. And A/D converters
> like the MOTU 8pre are only 8 channels.
>
> Umashankar
>
> Ps. Who from the sursound group will be at the Los Angeles AES ? We are
> exhibiting various new Brahma microphones (including a large diaphragm
> version.)
> Umashankar
>
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
> From: Stefan Schreiber<mailto:st...@mail.telepac.pt>
> Sent: Tuesday, September 6, 2016 9:01 PM
> To: Surround Sound discussion group<mailto:sursound@music.vt.edu>
> Subject: Re: [Sursound] New FOA recorder
>
> umashankar manthravadi wrote:
>
> > I am building a second order microphone with 8 capsules,
> >umashankar
> >
> >
>
> Why don't you consider to build some 3D version with 12 capsules?
>
> The VR people might be quite interested in such a "simple" HOA mike.
>
> Best regards,
>
> Stefan
>
> P.S.: It seems obvious that your mike is a 2D version.
>
> P.S. 2: The associated 12-track recorder would be a secondary
> problem...   :-)
>
>
>
> >From: Courville, Daniel<mailto:courville.dan...@uqam.ca>
> >Sent: Tuesday, September 6, 2016 6:54 PM
> >To: Sursound<mailto:sursound@music.vt.edu>
> >Subject: [Sursound] New FOA recorder
> >
> >New FOA recorder
> >
> >https://www.zoom.co.jp/products/field-recording/zoom-
> f4-multitrack-field-recorder
> >
> >
> >
> >
> >
>


Re: [Sursound] B format and atoms

2016-06-15 Thread David McGriffy
Certainly the easiest workflow would be to simply decode your B-format to
your bed format, 9.1 or 7.1.4, and mix into the bed directly. I don't know
of any (released) decoder that does these layouts in a ProTools world, but
you can always just use 7.1 and ignore height to get things working.

Another approach starts by using a nice symmetrical or semi-symmetrical
layout, for which the ambisonic decode works better, then uses ATMOS panners
to put the outputs of this decoder into the proper positions. This uses up
objects for however many virtual speakers you use, of course. I have a
sneaking suspicion that a properly done square (or maybe 5 channels with
height) done this way would sound good on the widest possible variety of
playback systems.
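As a sketch of that virtual-speaker route, assuming a cube layout and a naive first-order decode (the gain constant and the exact corner elevation are illustrative): each of the eight decoded feeds would then be panned as a static object at its corner's position.

```python
import math

# Cube corners as (azimuth, elevation) in degrees; 35.26 ~= atan(1/sqrt(2)).
CUBE = [(az, el) for el in (35.26, -35.26) for az in (45, 135, 225, 315)]

def direction(az_deg, el_deg):
    """Unit direction vector for an azimuth/elevation pair."""
    a, e = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(a) * math.cos(e), math.sin(a) * math.cos(e), math.sin(e))

def decode_cube(w, x, y, z, k=0.25):
    """Naive first-order velocity decode to the cube; k is an
    illustrative gain, not a calibrated normalization."""
    feeds = []
    for az, el in CUBE:
        dx, dy, dz = direction(az, el)
        feeds.append(k * (math.sqrt(2.0) * w + x * dx + y * dy + z * dz))
    return feeds

# Encode a unit source at the first cube corner and decode: the matching
# corner's feed dominates, the opposite corner's feed vanishes.
sx, sy, sz = direction(*CUBE[0])
feeds = decode_cube(1.0 / math.sqrt(2.0), sx, sy, sz)
```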

The first approach isn't ideal because, well, 7.1, etc. aren't ideal. The
second approach would be even more prone to have ATMOS mess it up somehow,
though things that are normal in large theaters, like front-to-back delay
lines, would play havoc with ambisonics either way, and I don't know if this
is sometimes or always present in ATMOS theaters. I've heard enough real
feedback from folks trying this to know that it can sound fabulous, but not
enough to tell you how to get good results every time.

I would also suggest that, by being much more directional in its virtual
speakers, a parametric decoder is likely to suffer less from either the
multi-speaker or the delay issues, at least for highly directional sources.
I would highly recommend a parametric decoder for this application.

David
VVAudio

On Wed, Jun 15, 2016 at 1:18 PM, David Pickett  wrote:

> Has a sufficient spec of Atmos been published to allow one to know what is
> needed to convert it to ambisonics?
>
> If so, I'd be pleased to be pointed to it.
>
> David
>
>
> At 18:30 15-06-16, Jon Honeyball wrote:
> >Are there tools out there for putting b format into atmos, and then
> >doing atmos authoring?
> >
> >What are my options?
> >
> >Ta much
> >
> >jon


Re: [Sursound] Never do electronic in public.

2016-02-01 Thread David McGriffy
Right, 10DOF is with altitude. I'm building a drone controller out of piece
parts as a demonstration.

Yes, there are certainly ways to achieve low latency convolution, but all
at the expense of CPU, which is already at a premium when running on phones.

I first heard the 20ms figure from some folks working in the VR biz.
Admittedly, they are more camera and content producers and not headset
makers, so perhaps this is just a dream goal of theirs. It does make some
sense to me as that's 50Hz, or the same as some TV refresh rates. Being one
frame off there can certainly be noticeable, especially with speech vs
moving lips.

It does seem quite reasonable to me that an audio only app would not be as
critical. And, of course, the video and audio frame rates don't really
match anyway, or we'd be getting 882 or 960 sample blocks instead of 128 or
512.
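The arithmetic behind those numbers is easy to verify (plain Python, just restating the figures above):

```python
# One video frame at 50 Hz, in samples, for the two common audio rates:
assert 44100 // 50 == 882
assert 48000 // 50 == 960

def block_latency_ms(block_size, sample_rate):
    """Latency contributed by buffering one audio block, in ms."""
    return 1000.0 * block_size / sample_rate

# Typical DSP block sizes versus a 20 ms motion-to-photons budget:
print(round(block_latency_ms(128, 48000), 2))   # 2.67 ms
print(round(block_latency_ms(512, 48000), 2))   # 10.67 ms
```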

I got a GearVR recently and it does work better than Google Cardboard. Much
of this is just comfort, I think, but I understand that it has its own
gyros, mostly because they are faster. Hearsay is that the Oculus Rift
samples at 1000Hz or more. I have actually written an audio-rate rotate that
could handle this, but it does seem like overkill. At normal head turning
rates, I find interpolating the rotation within each block to be enough.
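A sketch of what such a block-interpolated rotate might look like for the horizontal (yaw) case. This is my own illustration, not David's actual code; the rotation sign convention depends on your coordinate handedness, and W and Z pass through a pure yaw unchanged.

```python
import math

def rotate_yaw_block(w, x, y, z, theta_start, theta_end):
    """First-order B-format yaw rotate with the angle linearly
    interpolated across the block to suppress zipper noise at
    block boundaries. W and Z are untouched by yaw."""
    n = len(x)
    x_out, y_out = [], []
    for i in range(n):
        theta = theta_start + (theta_end - theta_start) * i / n
        c, s = math.cos(theta), math.sin(theta)
        x_out.append(c * x[i] - s * y[i])
        y_out.append(s * x[i] + c * y[i])
    return w, x_out, y_out, z
```

Interpolating theta sample by sample inside the block is what avoids the zipper noise a per-block step change in the matrix would cause.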

Any modern gyro and processor will run fast enough. My little 8-bit
controllers run their complete flight control loop in under 2ms. The limit
on update rate, and latency for that matter, will be the wireless link.
Wifi will be fastest but highest power. Bluetooth lower power but lower
bandwidth and still wireless. Wired would not only be very fast but could
provide power.

If you are thinking wireless headphones, remember the latency that the link
introduces. I find my everyday bluetooth headset, built for music, is
useless for VR because of latency. And if your headphones are going to be
wired, then running that extra USB might not be too bad.

David McGriffy


On Mon, Feb 1, 2016 at 12:49 PM, Bo-Erik Sandholm 
wrote:

> Apparently there exists a head-tracking specialized version of the bno055
> called bno070 - but I cannot find a place to buy this.
> You can get up to 250 samples / second from that version.
>
> Matthias did not complain about the update rate with the DIY headtracker
> http://www.rcgroups.com/forums/showthread.php?t=1677559
>
>
> http://electronicspurchasingstrategies.com/2014/03/06/hillcrest-bosch-sensortec-collaborate-sensor-hub-solution/
>
> http://docsfiles.com/pdf_bno070.html
>
>
> http://ae-bst.resource.bosch.com/media/_tech/media/product_flyer/Mobile-Hillcrest-Labs-Sensor-Hub-Product-Brief-BNO0701.pdf
>
>
> https://www.eiseverywhere.com/file_uploads/5c881a2f6eaaa90ed3f3d30fb20852db_Newsflash_BNO077.pdf
>
>
> maybe it is better to not think about that and try and use what I have ?
>
> It is easy to find videos about using the bno055 in different projects but
> i do not want videos... have not found links to the software used...
>
>
>
> Better luck when google github bno055 :-)
>
>
> https://www.reddit.com/r/virtualreality/comments/3faq11/why_we_need_to_move_to_ambisonic_sound_in_the/
>
> http://vriscoming.com/daydream-vr/
>
> Bo-Erik
>
> 2016-02-01 18:56 GMT+01:00 Marc Lavallee :
>
> > On Mon, 01 Feb 2016 17:46:47 +
> > Stefan Schreiber  wrote:
> > > What is a 10 DOF motion sensor??? (Didn't you mean 9DOF?)
> >
> > It's 9DOF + barometric pressure, so 9DOF is enough. For more info:
> >
> http://playground.arduino.cc/Main/WhatIsDegreesOfFreedom6DOF9DOF10DOF11DOF
> >
> > --
> > Marc


Re: [Sursound] Never do electronic in public.

2016-02-01 Thread David McGriffy
Oh how I wish I had time to play with this stuff. At this moment on my
breadboard I have a Teensy 3.2, a 6DOF gyro module, etc. Bluetooth modules
should be in today. 10DOF modules on order. I think the magnetometer is
important for HT to prevent drift. The altitude, perhaps not so much. Of
course, this is for a drone project (I'm writing a book about building
drones). And I have audio projects backed up ...

For audio and head tracking, I have been playing in Unity to Google
Cardboard and now GearVR. I have some demo code for anyone interested in
Unity development.

The standard I hear in the VR world is 20ms "motion to photons". Minimum
frame rate of 60Hz, preferably 90-120Hz. These faster rates do not allow
convolution with the full block size of the LISTEN database, though careful
truncation should be OK.

David McGriffy
VVAudio

On Mon, Feb 1, 2016 at 5:52 AM, Michael Chapman  wrote:

> > I just looked back at sursound mails and michael chapman had pointed out
> > that the teensy has four channel audio support. >
> > http://www.pjrc.com/teensy/td_libs_Audio.html
> >
> > umashankar
> >
> >
> Must refuse the credit.
> I just participated in that thread.
>
> Michael
>


Re: [Sursound] YouTube adds ambisonics support

2016-01-15 Thread David McGriffy
Google's spatial audio is in the latest version of their Unity plugins,
available now.  I haven't found any doc on how to make a youtube 360 video
with spatial audio, but that might be there too.

I have plugged their audio widgets into my Unity demo in place of my own
audio widgets and after a couple of tweaks it all seems to work. I don't
see a way to play a B-format file directly.  Also, it looks like this is
targeted only at headphones.   I have always thought that one of the
beauties of using ambisonic processing is that one can switch from
headphones to speakers.

My audio sources can be direct binaural, ambisonic to binaural, or pure
ambisonic just by flicking a couple of switches. Also, even when in direct
binaural mode, I built my spatializer to fade from binaural to ambisonic as
you get more distant since once you are awash in reverb the CPU cost of the
direct binaural isn't worth it.  You can still do binaural for the whole
ambisonic bus and reverb returns. I haven't figured out how to get 5.1 out
of Unity, but I think it's possible.
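The distance-based fade described above can be as simple as an equal-power crossfade over a transition band. Here is a sketch of the idea; the 2 m and 10 m band edges are arbitrary values of mine, not taken from the actual spatializer.

```python
import math

def binaural_ambi_gains(distance, near=2.0, far=10.0):
    """Equal-power crossfade from direct binaural (near sources) to
    the cheaper ambisonic bus (distant, reverb-dominated sources).
    Returns (g_binaural, g_ambisonic)."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)
```

The equal-power (cos/sin) pair keeps total energy roughly constant through the transition, so the hand-off is inaudible even though the rendering method changes.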

Also, as someone mentioned back there, having non-rotated streams can be
useful.  Not only do I have a B-format clip player, but you can switch it
to a second ambisonic bus that is not rotated for things like background
music that shouldn't track with head movements.  If I remember right,
MPEG-H supports two ambisonic streams for just such reasons.

David McGriffy
VVAudio

On Fri, Jan 15, 2016 at 3:29 PM, Albert Leusink 
wrote:

> Marc Lavallée  writes:
>
> >
> > Albert,
> >
> > What you described is exactly the project I started.
> > What's missing is the zoom feature and support for HOA.
> > Should I continue? And make it open source?
> > I don't have much time now... (sigh).
> >
> > --
> > Marc
> >
>
> Hi Marc,
>
> Yes, I've been to your site. Awesome job!, I was really pleasantly
> surprised
> how good it sounded and how well it localized considering it is just
> browser
> based
>
> Please continue! This is something that all of us newbies, who flocked to
> this list because of 360/VR, want to see happening.
>
> Not sure how long it will be before Youtube/Cardboard will put this into
> reality, but as far as I know, apart from proprietary apps by big players
> such as Jaunt and IM360 etc, it does not currently exist yet, so people
> settle for stereo, binaural, 5.1 or game engine plugins that don't sound
> very good for music..
>
> If this could be integrated somehow into a web based spherical video
> player,
> there should be a way for you to recoup some of your valuable time spent on
> this by making it subscription based...I'll be one of the first to sign
> up...
>
> (Or maybe Google will recruit you for $$M/year...:-))
>
> Bests,
>
> Albert


Re: [Sursound] vertical precendence and summing localisation (wallis and lee 2015)

2015-12-13 Thread David McGriffy
I've been working for a couple of years now on a synthetic binaural
system.  My first reason was to be able to interpolate cleanly between
HRTFs, which I think is working well.  I can move sounds around at silly
speeds and get smooth responses.  And it all sounds good to me, but of
course, I may have just built my own, custom HRTFs.

The second goal is to parameterize it for different people.  I control the
dozen or more filters and delay with three parameters, head size that
controls ITD, torso size or the cutoff above which there are ILDs, and a
frequency scaling of all the other stuff, which is mostly the stuff above
4K, which I'm calling ear size.  So far my experience is that the delay
definitely makes a difference.  The ILDs help, but changing the cutoff
frequency is not as dramatic as I might have thought.
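To make the "head size controls ITD" parameter concrete, the commonly used Woodworth-style spherical-head approximation looks like this. This is a generic textbook formula for illustration, not necessarily the one used in David's system.

```python
import math

def itd_seconds(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Woodworth approximation for interaural time difference:
        ITD = (r / c) * (theta + sin(theta))
    with theta the source azimuth from straight ahead (0 .. pi/2),
    r the head radius, and c the speed of sound in m/s."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))
```

With the default 8.75 cm radius this tops out around 0.66 ms at 90 degrees, which is the right ballpark for an average head; scaling head_radius_m scales the whole ITD curve, which is exactly the kind of single-parameter control described above.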

The top stuff, I assume mostly pinnae effects, is what interests me right
now.  I don't find that scaling the frequencies up and down makes much
difference, but, I wonder if having some notches and peaks there doesn't
help anyway.  Consider a sound in front.  When you turn your head to
disambiguate front/back, these notches and peaks sweep up in one ear and
down in the other.  A sound in the rear would get very little of this
effect.  Could our ears not be very sensitive to the sweep, even if we
could not reliably tell one static state from the other?

Here's a page showing graphics of my results so far.  I'll post a better
link to my google cardboard demo when I manage to get it up to the app
store, but here's an .apk file for those of you who can get that to your
phone on your own.

http://www.vvaudio.com/synthbinaural
http://www.vvaudio.com/sites/vvaudio7/files/VVUnityDemo.apk_.zip

David McGriffy
VVAudio


On Sun, Dec 13, 2015 at 9:57 AM, Augustine Leudar  wrote:

> Thanks for clarifying Bo-Erik
>
> On 13 December 2015 at 15:42, Bo-Erik Sandholm 
> wrote:
>
> > Tried to formulate it like this exactly:
> > Above 4 kHz individual hrtf's are very difficult to create.
> > Generic HRTF transformation can (should) be used for 4kHz and upwards.
> >
> > Best Regards
> > Bo-Erik
> > On Dec 12, 2015 10:19 PM, "Stefan Schreiber" 
> > wrote:
> >
> > > Bo-Erik Sandholm wrote:
> > >
> > > Sadly this info is not published as far as I know,
> > >>
> > >> I have it from Ingvar Öhman
> http://lmgtfy.com/?q=ingvar+%C3%B6hman&l=1
> > >> who
> > >> had a research company for researching how the human hearing work
> > together
> > >> with the 2 loudspeaker  stereo system.
> > >>
> > >> He did a demo recording showing how you could do stereo with height
> > mixing
> > >> For The Swedish Radio (Sveriges Radio) a long time ago with very low
> > >> interest from them.
> > >>
> > >>
> > >
> > >
> > > Stereo with height - (reflecting)
> > >
> > > This is a truly audiophile approach...;-)
> > >
> > > But I don't want to be too ironic: interesting anyway. And such
> > techniques
> > > could be applied to 5.1 etc., as well.  (What the Mpeg seems to offer
> as
> > > "tool"  in the case of Mpeg-H 3DA, BTW. )
> > >
> > > Stefan
> > >
> > > P.S.:
> > >
> > > I have talked to him about this, He has never been much of a publisher
> > >> other than in discussion forums in swedish.
> > >>
> > >>
> > > A pity?!
> > >
> > > And of course is using it in his business as consultanta and designer,
> > >>
> > >> So I have only the principal description that individual HRTF's over
> > 4kHz
> > >> makes things worse for 90% of the population and that is frequency
> bands
> > >> that for the majority of people gives an impression of height when
> > raised
> > >> in amplitude.
> > >>
> > >>
> > >
> > > Must read:
> > >
> > > "... that  < generic > HRTF's over 4kHz makes things worse for 90% of
> the
> > > population"  ?
> > >
> > > We must keep precise, in this context.
> > >
> > >
> > >

Re: [Sursound] Brahma first experience

2015-12-07 Thread David McGriffy
Emanuele,

As you may be aware, I have recently released VVEncode with Brahma support
that will run in ProTools.  This hardly makes even a minimal suite,
though.  I know of no rotate unit for ProTools, for example.  It's my goal
to produce encode, decode, pan, and rotate for VST/AAX/OSX/WIN, but fun
things in the VR world keep distracting me.

David
VVAudio

On Mon, Dec 7, 2015 at 11:40 AM, Emanuele Costantini <
lamacchiaco...@yahoo.it> wrote:

> At the moment I am using it in quite controlled environments, where it is
> just myself or another person with me, recording atmos tracks, so we can
> leave our smartphones down for a few minutes.
> Someone is enquiring about some hypothetical VR work where I fear the Brahma
> won't cope with the amount of RF sprays from cell phones and electric,
> electronic devices and cameras coming from a crew of people.
> :-)
>
> Emanuele
>
>
> On 07/12/2015 17:05, Jean-Pascal Beaudoin wrote:
>
>> Thank you for sharing your experience, Emanuele.
>>
>> We bought one earlier this year but it turned out to a real RF magnet,
>> which for us made the microphone unusable in the context commercial
>> productions.
>>
>> Nakul at Embrace Cinema Gear did his best to help us but as you pointed in
>> your article, it doesn't have a RF protection circuit so ultimately we had
>> no choice but to return it.
>>
>> Cheers,
>>
>> J-P
>>
>> Jean-Pascal Beaudoin | *Headspace Studio*
>> Co-founder / Head of Sound
>> +1 514-746-8461
>>
>> 2015-12-07 11:48 GMT-05:00 Emanuele Costantini :
>>
>> Hello everyone,
>>>
>>> I would like to share with you my first experience with Ambisonics
>>> technique through the Brahma microphone:
>>>
>>> http://tinyurl.com/z6pm5ur
>>>
>>> Any feedback is really welcome.
>>>
>>> Many thanks.
>>>
>>> Emanuele


Re: [Sursound] Ambisonic on Matlab

2015-11-09 Thread David McGriffy
Right, the folks I have worked with who are using MatLab are mostly doing
acoustic measurement work and want to process TetraMic signals natively in
the environment where they do everything else.

David
VVAudio

On Mon, Nov 9, 2015 at 12:12 PM, Dave Malham  wrote:

> Depends, of course, what the facility is to be used for. If it's for
> artistic purposes, I would definitely agree with Augustine, go with ICST or
> Spat or something similar. If it's for scientific investigation, then
> Matlab or Octave or Scilab would probably be best.
>
>  Dave
>
> On 9 November 2015 at 15:32, Augustine Leudar 
> wrote:
>
> > You could use Ircam's SPAT or ICST ambisonics plugins in Max MSP to do
> this
> > for Windows or Osx machines - if I'm not wrong Soundscape renderer will
> do
> > it in Linux. Personally I wouldn't use Matlab to do this - but only
> > because
> > I don't know the program very well.
> >
> > On 9 November 2015 at 14:22, David McGriffy  wrote:
> >
> > > Seba,
> > >
> > > I have ported my first order ambisonic code to MatLab before but
> nothing
> > > higher order yet.
> > >
> > > David
> > > VVAudio
> > >
> > > On Mon, Nov 9, 2015 at 1:55 AM, Ausili, S.A. 
> > > wrote:
> > >
> > > > Dear all,
> > > >
> > > > We would like to include an Ambisonic lab in our facilities and we
> > don't
> > > > have experience in applying it so far.
> > > >
> > > > We are thinking to use a big number of speakers (4th or 5th order)
> but
> > we
> > > > are still looking for the best or most comfortable software to handle
> > > this.
> > > >
> > > > Because of that, is there any Matlab toolbox that directly provides
> you
> > > > the audio output for a certain order of Ambisonic?
> > > > Which are your recommendation actually to create a new setup?
> > > >
> > > > Thanks in advance and looking forward to your comments.
> > > >
> > > > Regards,
> > > > Seba.
> >
> >
> >
> > --
> > www.augustineleudar.com
>
>
>
> --
>
> As of 1st October 2012, I have retired from the University.
>
> These are my own views and may or may not be shared by the University
>
> Dave Malham
> Honorary Fellow, Department of Music
> The University of York
> York YO10 5DD
> UK
>
> 'Ambisonics - Component Imaging for Audio'


Re: [Sursound] Ambisonic on Matlab

2015-11-09 Thread David McGriffy
Seba,

I have ported my first order ambisonic code to MatLab before but nothing
higher order yet.

David
VVAudio

On Mon, Nov 9, 2015 at 1:55 AM, Ausili, S.A.  wrote:

> Dear all,
>
> We would like to include an Ambisonic lab in our facilities and we don't
> have experience in applying it so far.
>
> We are thinking to use a big number of speakers (4th or 5th order) but we
> are still looking for the best or most comfortable software to handle this.
>
> Because of that, is there any Matlab toolbox that directly provides you
> the audio output for a certain order of Ambisonic?
> Which are your recommendation actually to create a new setup?
>
> Thanks in advance and looking forward to your comments.
>
> Regards,
> Seba.


Re: [Sursound] "Spatial Workstation" for 360º (VR) audio mixing/edition

2015-10-27 Thread David McGriffy
I've downloaded Unity and 3Dception's existing stuff to play with and my
first comments are: Unity is fun!  Everyone I know that has tried it thinks
it's a great environment to work in.  And 3Dception is well integrated into
the Unity world.

This new VR audio workflow looks to me to be about monitoring in the target
environment, i.e. VR, while you are creating your audio elements.  This is
probably important in this new world that everyone is still learning about.
 (Though I might counter that ambisonics probably scales from small to
large monitoring systems more gracefully than most.)  Looks like a plugin
for the DAW's that does the rotation and rendering and then a synchronized
video player to feed the VR headset and send the headtracking data back to
the plugin.

I believe the target runtime is the same as their current stuff.  You can
place individual audio objects in the scene and/or a B-format player.
Connect a widget to the camera for head position and the audio engine takes
care of mixing, rotating and rendering.  Note that the render code is
binary and different for each platform so they could potentially be using a
different method on phones where cycles are limited.  Even with unlimited
power, the required low latency means that even 512 sample blocks are too
long at the highest frame rates and some optimization is required.

Note that native Unity audio can do most of this, but I believe 3Dception
has a proprietary binaural system they think is better, and of course they
are one of the very few applications that take B-format directly, even in
this enlightened age.

David
VVAudio



On Tue, Oct 27, 2015 at 7:27 AM, Dave Hunt 
wrote:

> Hi,
>
> This doesn't yet seem to be available.
>
> Part of it is not too radical: the use of binaural plug-ins for a DAW.
> These have been around in various guises for quite some time, but have not
> been widely taken up despite the huge number of listeners using headphones
> (especially on mobile devices) and the widespread availability of stereo
> (as opposed to truly multi-channel other than  5.1/7.1) audio DAWs. Two
> channel plug-ins are generally compatible with all of them, multi-channel
> plug-ins are not.
>
> They do seem to offer some sort of binaural surface reflection algorithm,
> which must be on a per source basis. A binaural global reverb algorithm is
> harder to implement, and binaural plug-ins that I have seen that have a
> reverb implementation has a reverb per instance. This can get DSP
> intensive, and makes it very tedious to make global changes.
>
> They do mention incorporation of head tracking, but details are vague.
>
> The whole scheme seems to be a re-working of their games engine plug-ins,
> where sound sources and the listener can move freely,  to standard DAW
> plug-in formats. Using a DAW would be, at best, a simulation, with audio
> file delivery to games developers being discrete mono, stereo or surround
> stems as at present. The final binaural coding would be on an audio object
> or stem basis in the game engine.
>
> As far as I know you cannot undo binaural coding or spatially manipulate
> it (at least not easily and only with static sources and listener).
>
> Could be useful for binaural audio production (not exactly a huge market)
> and producing demonstrations for games audio using their existing products,
> but is not an answer to every problem. The prospect of communication with
> more interactive and non-linear audio software (e.g. Max, pd,
> SuperCollider, Live ??) would be much more interesting.
>
> Ciao,
>
> Dave Hunt
>
>
>   1. "Spatial Workstation" for 360º (VR) audio mixing/edition
>>   (Stefan Schreiber)
>>
>> From: Stefan Schreiber 
>> Date: 25 October 2015 19:40:41 GMT
>> To: Surround Sound discussion group 
>> Subject: [Sursound] "Spatial Workstation" for 360º (VR) audio
>> mixing/edition
>>
>>
>> FYI...
>>
>>
>> http://www.roadtovr.com/two-big-ears-spatial-workstation-delivers-realtime-cross-platform-3d-vr-audio-mixing/
>>
>>
>> 3D spatial audio specialists Two Big Ears have launched a new 'Spatial
>>> Workstation', a platform for audio engineers to mix and edit immersive
>>> audio with realtime feedback leveraging VR headset head-tracking
>>> information.
>>>
>>
>>
>> See "Workflow" picture...
>>
>>
>> The API is designed to provide an interface for developers to deliver
>>> accurate and compelling spatial audio, that is - audio which recreates a
>>> realistic sound-stage akin to the real world. This is particularly
>>> important for virtual reality as what you hear is a key trigger point for
>>> presence
>>>
>>
>>
>> Best,
>>
>> Stefam
>>
>

Re: [Sursound] Release of VVEncode

2015-10-23 Thread David McGriffy
(Missed this message as it came in.)

I'm not likely to port the whole plugin to ALSA.  But the core signal
processing library, VVSDK, is running today everywhere from Linux cloud
servers to Android.

David

On Thu, Oct 8, 2015 at 6:36 AM, Marc Lavallée  wrote:

> On Wed, 7 Oct 2015 23:20:27 -0500,
> David McGriffy  wrote :
>
> > To all who requested AU support, I have a beta ready.  See:
> > http://www.vvaudio.com/downloads
>
> Can we expect Linux and Android versions?
>
> --
> Marc


Re: [Sursound] Advice on new loudspeaker array... Genelec 8010 speakers?

2015-10-16 Thread David McGriffy
I certainly didn't mean to discount everything that happens after the
decoder: driver design, room acoustics, and such. Those are probably more
important and should be taken care of first; then maybe the decoder can
take into account whatever can't be fixed there. Encoders and decoders are just
the things that I happen to work on every day.  Sadly no budget for 32
Genelecs around here regardless of the number of subs.

David

On Fri, Oct 16, 2015 at 12:19 PM, Ilpo Martikainen <
ilpo.martikai...@genelec.com> wrote:

> Different dispersion patterns of different drivers may not be a decoder
> question only, but should be solved in the acoustic domain. That has been
> the whole idea of so called waveguides, to match the directivity of a
> higher frequency driver - MF or HF - to the directivity of the lower
> frequency driver around crossover. Otherwise there will be a bump in power
> response as the higher frequency driver is practically omnidirectional at
> its low end while the lower frequency driver gets directive in its upper
> end.
>
> Ilpo
>

Re: [Sursound] Advice on new loudspeaker array... Genelec 8010 speakers?

2015-10-16 Thread David McGriffy
I've been thinking about how this discussion might apply to a couple of
things I'm working on and it seems to me there are two different problems
here.

First, there is the issue of higher order mics often not really being
higher order at low frequencies.  But isn't this really a problem of
encoding and not decoding?  It seems like we shouldn't have to know
anything about the mic once we are in B-format.  And such considerations
would not apply to synthetically panned higher order signals, right?

The other issues around cost of low frequency drivers, the ability to place
the physically larger, heavier speakers, dispersion patterns of different
drivers, etc. are certainly decoder questions.  With simple, linear
decoders, one would have an 'ambisonic crossover' that not only divides
frequency bands but takes care of scaling the different order components,
followed by two or more decoders.  Since a parametric decoder is already
working in the frequency domain, one could just vary the decode parameters
on a bin-by-bin basis, thus generating all the outputs from a single set of
input FFTs.
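A minimal sketch of the 'ambisonic crossover' idea, assuming first-order B-format and a toy one-pole band split (a real implementation would likely use matched Linkwitz-Riley filters and feed the two bands to genuinely separate decoders; here they are summed for brevity):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// One-pole complementary band split: low + high == input exactly.
struct OnePoleSplit {
    float a;             // smoothing coefficient
    float state = 0.0f;
    OnePoleSplit(float cutoffHz, float sampleRate)
        : a(1.0f - std::exp(-6.2831853f * cutoffHz / sampleRate)) {}
    std::array<float, 2> process(float x) {
        state += a * (x - state);
        return { state, x - state };   // {low band, high band}
    }
};

// Crossover in the B-format domain: below the cutoff the directional
// components (X, Y, Z) are scaled down, i.e. the low band falls back
// toward a lower-order decode; the high band passes unchanged.
struct AmbiCrossover {
    std::array<OnePoleSplit, 4> split;
    float lowBandDirGain;   // gain on X/Y/Z in the low band
    AmbiCrossover(float cutoffHz, float sr, float g)
        : split{ OnePoleSplit(cutoffHz, sr), OnePoleSplit(cutoffHz, sr),
                 OnePoleSplit(cutoffHz, sr), OnePoleSplit(cutoffHz, sr) },
          lowBandDirGain(g) {}
    // bf = {W, X, Y, Z}.
    std::array<float, 4> process(const std::array<float, 4>& bf) {
        std::array<float, 4> out{};
        for (int ch = 0; ch < 4; ++ch) {
            auto bands = split[ch].process(bf[ch]);
            float g = (ch == 0) ? 1.0f : lowBandDirGain;  // W untouched
            out[ch] = g * bands[0] + bands[1];
        }
        return out;
    }
};
```

With the directional gain at 1.0 the split is transparent (the bands sum back to the input); lowering it reduces the effective order below the cutoff.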

But these issues are really about decoding to speakers, which would not
apply to creating, say, 7.1 stems to be later mixed and sent to a
bass-managed theater system, right?  Or to put it another way, crossed-over
decoders are doing the bass management in the ambisonic domain.

David
VVAudio

On Fri, Oct 16, 2015 at 8:19 AM, Peter Lennox  wrote:

> plus, with the large numbers of speakers, it's cheaper and easier to cross
> over in the b-format 'pinchpoint' than in the speaker feeds.
> ppl
>
> Dr. Peter Lennox
> Senior Fellow of the Higher Education Academy
> Senior Lecturer in Perception
> College of Arts
> University of Derby
>
> Tel: 01332 593155
> 
> From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Steven
> Boardman [boardroomout...@gmail.com]
> Sent: 16 October 2015 14:14
> To: Surround Sound discussion group
> Subject: Re: [Sursound] Advice on new loudspeaker array... Genelec 8010
> speakers?
>
> This is something I have alluded to before.
> There is no need to have multiple transducers in the same box with
> ambisonics. In fact it should be more accurate, economical, aesthetic, and
> space saving to have each transducer at its own separate point in space.
> Tweeters are more directional, so more are needed; they are also cheaper.
> Going all the way down the frequency range to the least directional and
> most expensive: woofers and subs.
> Separate decodes could be done for each transducer bandwidth, rotating the
> soundfield to align. This would also benefit the transducer crossover
> point, as there would be less interference from proximity. It would also
> help room response, as crossovers could be tweaked and balanced on the fly.
> Which is also a lot of fun!
>
> Steve
> On 16 Oct 2015 12:47, "Peter Lennox"  wrote:
>
> > we used a much cruder version of this back in 2002 - decoding a
> > hemispherical 32 speaker array to second order, but crossed over the
> > B-format at 90Hz (I think) to a horizontal-only 8-sub array, decoded in
> 1st
> > order. This was on the basis that we couldn't fly the subs, and anyway,
> > elevation discernment, being largely due to pinnae effects, was not
> > appealed to by the subs anyway. Had to work on the time alignment (the
> sub
> > decoder was analogue, the mid'n'tops 32 speaker array done in software)
> and
> > spatial alignment (rotating the subfield to match t'other, in the
> b-format
> > feed). It worked well, though could have been further refined; it was a
> > one-off installation.
> > But the principle of using decreasing order with decreasing frequency
> made
> > sense from the point of view of efficient use of transducers.
> >
> > It made me wonder whether the same principle extends the other way -
> > increasing order with increasing frequency, to make up for the
> deficiencies
> > in spatial resolution of lower orders at HF.
> > Given that it should now be reasonably 'easy' to align the fields of
> > multiple cells - even having different numbers of speakers for each
> > frequency band, there might be less reason to assume that point source
> > speakers are strictly necessary.
> > We're still using speakers designed as stereo projection systems, and it
> > could even be that starting again, thinking about real-world usages of
> > ambisonics, that one could revisit the speaker design theories.
> >
> > Going off on a tangent, it might be that (as others have experimented
> > with, before) the transducer design for the programme material which
> > is 'ambient' (reflected sound, from no particular source, and therefore
> > not requiring precision in phantom imagery) might differ from that for
> > the 'virtual sources' ('images')
> >
> > So I experimented with 12 very modest nxt-type flat panels which were
> > rotated thru' 90 deg. to what you'd expect, as it were - that is, they
> > didn't 'face' the centre but were at 

Re: [Sursound] Release of VVEncode

2015-10-07 Thread David McGriffy
To all who requested AU support, I have a beta ready.  See:
http://www.vvaudio.com/downloads

D

On Wed, Oct 7, 2015 at 9:45 AM, John Abram  wrote:

> I am interested in an AU version as well as VST.
>
> I’m also curious what advantages VVEncode and VVDecode will have over
> VVTetraVst and VVMicVst.
>
> best,
> John
>
> On Oct 7, 2015, at 7:21 AM, David McGriffy  wrote:
>
> > So my not so subtle plea for an AU beta tester has worked!
> >
> > Seriously, it's easy for me to produce an AU binary.  I'm working in the
> > wonderful Juce C++ framework which manages not just common source, but
> > common binary across all the plugin formats (VST3 too, if anyone cares?).
> > In theory, all I have to do is set a switch or two and recompile.  In
> > reality, I have to figure out how to package, test, install, document and
> > support it.  If there doesn't turn out to be enough interest for all
> that,
> > perhaps I could produce special versions for users who either don't need
> > much support or whose feedback is worth it.
> >
> > David
> >
> > On Tue, Oct 6, 2015 at 4:28 PM, Ronald C.F. Antony 
> > wrote:
> >
> >>
> >>
> >> Sent from a crippled mobile device
> >>
> >>> On 6 Oct 2015, at 17:08, David McGriffy  wrote:
> >>>
> >>> (and AU if there is demand)
> >>
> >> Consider this me declaring demand :)
> >> ___
> >> Sursound mailing list
> >> Sursound@music.vt.edu
> >> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> >> edit account or options, view archives and so on.
> >>


Re: [Sursound] Release of VVEncode

2015-10-07 Thread David McGriffy
Charles,

I don't actually sell either the TetraMic or Brahma so I'm not really the
one to ask about availability, but I'm pretty sure that they would love to
sell one to you.

Yes, VVDecode will support arbitrary playback systems in that you can set
the speaker positions to whatever you like.  It is limited to 8 outputs,
but you can use two copies to get more if you need to.  I might add,
however, that VVDecode will be but a first-order linear decoder and as such
probably isn't the right choice much past 8 speakers.  Parametric decoders
or HOA are a better answer for large arrays.

David

On Wed, Oct 7, 2015 at 10:22 AM, Charles Veasey 
wrote:

> This is great news! I'm putting a budget together for a 3D sound course at
> my school. I've budgeted in at least one TetraMic.
>
> Is it still possible to buy the Brahma mic as well?
>
> Will the decoder support arbitrary playback systems?
>
> thanks,
> Charles
>
> On Wed, Oct 7, 2015 at 8:49 AM, Paul Hodges 
> wrote:
>
> > --On 06 October 2015 16:08 -0500 David McGriffy 
> > wrote:
> >
> > > I'm sure you will let me
> > > know if I've made any silly errors or how it might be improved.
> >
> > Hard to tell; I can't download it, even though I can see it in my login
> > area and have been sent a link.  Maybe it's because I pretended to be
> > in the US, because only US and Canada can be specified as countries.
> >
> > Paul
> >
> > --
> > Paul Hodges
> >


Re: [Sursound] Release of VVEncode

2015-10-07 Thread David McGriffy
So my not so subtle plea for an AU beta tester has worked!

Seriously, it's easy for me to produce an AU binary.  I'm working in the
wonderful Juce C++ framework which manages not just common source, but
common binary across all the plugin formats (VST3 too, if anyone cares?).
In theory, all I have to do is set a switch or two and recompile.  In
reality, I have to figure out how to package, test, install, document and
support it.  If there doesn't turn out to be enough interest for all that,
perhaps I could produce special versions for users who either don't need
much support or whose feedback is worth it.

David

On Tue, Oct 6, 2015 at 4:28 PM, Ronald C.F. Antony 
wrote:

>
>
> Sent from a crippled mobile device
>
> > On 6 Oct 2015, at 17:08, David McGriffy  wrote:
> >
> >  (and AU if there is demand)
>
> Consider this me declaring demand :)


[Sursound] Release of VVEncode

2015-10-06 Thread David McGriffy
Dear SurSounders,

I am pleased to announce the release of VVEncode. It supports both TetraMic
and Brahma and it is available for VST & ProTools.  I hope that this is the
first of several new plugins from VVAudio, the next being VVDecode.  My
goal is to produce a complete ambisonic workflow for VST and AAX (and AU if
there is demand), which I think means at least encode, decode, rotate and
pan.

This release comes with a redesign of my website including a bunch of new
background material.  So please, take a look at the site and consider
trying VVEncode if you have a TetraMic or Brahma.  I'm sure you will let me
know if I've made any silly errors or how it might be improved.

Thanks,
David McGriffy
VVAudio


Re: [Sursound] ambisonics audio for 360 film (VR)

2015-07-22 Thread David McGriffy
Henk,

Sounds like a fun project.  I have run into some other folks using the
Kolor Eyes player as well.

I may have just the thing for your new player.  I have been working for a
while on a new ambisonic library I'm calling VVSDK.  It does all the things
you need - rotate, decode, binaural - and can include my latest A to B-format
conversion code.

Early versions are already in use in the VR and acoustic measurement
communities and I use the same library as the back end for my own new
plugins.  All this and more should be generally available as soon as I can
do the tedious bits like choosing license types, upgrading websites, etc.,
but I am ready to talk about individual projects such as yours immediately.

David McGriffy
vvaudio.com

On Wed, Jul 22, 2015 at 9:30 AM, Marc Lavallée  wrote:

> Hello Henk.
>
> On Wed, 22 Jul 2015 12:38:53 +0200, "Henk | Spook.fm" wrote :
> > My question is: Who can help us find the best libraries for
> > ambisonics audio to use in the 360 Video player.
>
> I started a similar project, for the web: http://ambisonic.xyz/
> A new version (working, but not yet published) will be "full sphere".
>
> > So far we have come up with the following:
> >
> > you can seem to play ambisonics files with these:
> >
> https://github.com/ComposersDesktop/CDP7/tree/b085aa141fa93c98ae137feae5e7c58d656ddfdd/dev/externals/mctools
> > and
> > http://www.mega-nerd.com/libsndfile/
> >
> > Are these the same?
>
> Short answer: no.
>
> > which one would be best for this project?
>
> I created a batch processor to convert and compress ambisonic files,
> using a combination of command-line utilities from the mctools and
> libsndfile, and others. To be published, eventually...
>
> > Are there other options for rotation? People have experience with
> > these libraries? Which one would be best for this project?
>
> I use the h1b_rotate.dsp Faust processor from ADT (by Aaron Heller):
> https://bitbucket.org/ambidecodertoolbox/adt/
> It provides a rotation transformation for each axis (yaw, pitch, roll).
>
> > Where can I find the best HRTFs?
>
> Try the ones from the ATK "kernels".
>
> > Which one is a good starting point for the “average” head?
>
> From the ATK kernels, there's the HRIRs based on the "spherical" model
> by Richard O. Duda (number 4 is the average, other variations
> correspond to smaller and bigger spheres).
>
> Try my player on ambisonic.xyz...
>
> There's also the hb1_to_binaural.dsp Faust processor from ADT, also
> based on the spherical model. The spherical model is for "horizontal
> only" decoding, but for interactive media it's not a serious problem.
>
> > If anyone has some suggestions or tips for this project it is very
> > much appreciated.
>
> I hope it helps.
>
> > I will keep this group informed about future progress.
>
> Please do!
>
> Marc
>
> >
> > thank you,
> >
> > Henk van Engelen
> >
> > Rapenburgerstraat 109
> > 1011 VL Amsterdam
> >
> > T: +3161694
> > M: h...@spook.fm
> >
> > Spook.fm | Facebook
> >


Re: [Sursound] [allowed] Re: Directional confusion between different B-format players

2015-06-16 Thread David McGriffy
The rear lobes of a figure-8 decode in VVMic should be inverted just like
those of a physical figure-8 mic.  It's a linear decoder and not really
capable of anything else.  Harpex, being a parametric decoder, could be
producing a figure-8 pattern with non-inverting rear lobes, or with the
rear lobes swapped, but I doubt they do.  Someone with the Harpex plugin
could test this by panning an impulse into one of the rear lobes and
checking the results.

I'd say it's more likely that some subtle combination of the spatial
aliasing from the large array with the parametric decoder in Harpex is
producing different results than a linear decode of the same spatial
aliasing.  Neither can really be correct in the upper, aliased part of the
spectrum and so it's not surprising to me that they would sound different.
This could be tested by filtering out frequencies above the aliasing point,
though with an array that large this will be pretty low (~2kHz?).
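As a rough sanity check on that estimate (a rule of thumb, not from the thread): spatial aliasing sets in roughly where half a wavelength equals the capsule spacing, and taking the array diameter of about 96 mm as the effective spacing lands right around the guessed figure:

```cpp
#include <cassert>

// Rough spatial-aliasing estimate for a capsule array: aliasing sets
// in roughly where half a wavelength equals the capsule spacing,
// f ~ c / (2 d).  Order-of-magnitude rule of thumb only; the true
// onset depends on capsule geometry and processing.
double aliasingFrequencyHz(double spacingMeters) {
    const double c = 343.0;   // speed of sound in air, m/s
    return c / (2.0 * spacingMeters);
}
```

For a 96 mm spacing this comes out near 1.8 kHz, consistent with the ~2 kHz guess above.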

I might also suggest, as a point of comparison, testing against a simple
mix of the inputs.  Ignoring fine points like shelf filters and NFC, a
figure-8 decode is simply L=X+Y, R=X-Y.
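That sum/difference decode, written out (W is unused for a crossed figure-8 pair, and shelf filters and NFC are ignored, as stated):

```cpp
#include <array>
#include <cassert>

// Crossed figure-8 (Blumlein-style) stereo decode from first-order
// B-format, ignoring shelf filters and NFC:  L = X + Y,  R = X - Y.
std::array<float, 2> blumleinDecode(float /*W*/, float X, float Y) {
    return { X + Y, X - Y };
}
```

Note the polarity: a source panned into a rear lobe (negative X) emerges inverted in both channels, matching the physical figure-8 behavior described above.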

David McGriffy
VVAudio.com

On Mon, Jun 15, 2015 at 5:51 PM, Gerard Lardner  wrote:

> That occurred to me too. The Oktava array is really huge (mine is an old
> one; the newer ones may be smaller); it places the capsules on a sphere of
> radius 48 mm - nearly 4" diameter! But the problem, the swapping of left
> and right in the front soundstage, exists only for one decoder (of those
> that I have tried) and only for one virtual mic pattern - VVmic decoding
> B-format to Blumlein stereo pair; it doesn't occur if I use Harpex B Player
> to decode the B-format to the same Blumlein stereo pair. That's what is
> puzzling. If they were swapped in more cases I could perhaps understand it
> better. I was wondering if VVMic and Harpex treat the 'back' lobes of the
> crossed figure-8s differently.
>
> When I have some more time, I will try to record something using both the
> Oktava array and a Brahma Ambisonic mic and to see if there is the same
> difference. The Brahma array is much smaller than the Oktava array.
>
> Gerard Lardner
>
>
> On 14/06/2015 17:34, Paul Hodges wrote:
>
>> --On 13 June 2015 20:15 +0100 Gerard Lardner  wrote:
>>
>>  I recorded a concert using my Oktava MK-012 4D ambisonic microphone
>>> and encoded it to B-format using Brahmavolver. Yesterday, while
>>> playing back the recording with the conductor we noticed that under
>>> certain circumstances left and right appeared to be swapped.
>>>
>> My first thought is that the microphone used has its capsules more
>> widely spaced than other tetrahedral mics, which could be leading to
>> some directivity confusion at lower frequencies than usual, which in
>> turn this material might be sensitive to.  Certainly I've never noticed
>> such an effect with VVmic, whether using a TetraMic or my previous
>> native B-format arrangement.
>>
>> Paul
>>
>>


Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio Available?

2014-11-12 Thread David McGriffy
There does seem to be a lot of interest in ambisonics to binaural these days.  
It certainly seems that way to me since I’ve been immersed in it of late.

Stefan, if your suggestion to the folks at JauntVR is what turned them on to 
ambisonics, then I must thank you as I have been having a great time working 
with them to provide custom ambisonics code, including binaural conversion.  I 
expect that I will also have some level of binaural conversion out in general 
purpose plugins eventually.

David McGriffy
VVAudio.com

On Nov 12, 2014, at 5:10 PM, Stefan Schreiber  wrote:

> Sampo Syreeni wrote:
> 
>> On 2014-11-12, Braxton Boren wrote:
>> 
>>> Our lab is working on some audio spatialization techniques that we'd like 
>>> to sync up with visual content over the Oculus Rift. We would like to 
>>> process the Ambisonic audio content separately (using head-tracking data 
>>> from the Oculus) for binaural rendering while the interactive VR visuals 
>>> are running on the Oculus.
>> 
>> 
>> What is this, a sudden landslide? I *just* got a demo from Ville Pulkki on 
>> this precise stuff at Aalto, here. I'm betting you'd want to contact him 
>> directly on this one.
> 
> 
> Well, but I < did > write to Ville Pulkki during last week about VR and other 
> applications of Ambisonics and DirAC  (or in general: Ambisonics and 
> parametric Ambisonics decoders). Which is probably just a further 
> coincidence...
> 
> Still waiting for the master's answer, though...:-D
> 
> --
> "
> 
>> See Bo-Erik Sandholm's recent posting on sursound (on "VR applications")
>> 
>> http://permalink.gmane.org/gmane.comp.audio.sursound/6227
> 
> 
> ...
> 
>> ... The combination of Ambisonics, DirAC and head-tracking (for binaural 
>> decoders) could be a real game-change development.
> 
> "
> 
> In any case, the VR people already discovered Ambisonics a while ago 
> -  I remember that I myself suggested to a representative of Jaunt VR 
> (obviously Jaunt VR is connected with the Oculus Rift train) to consider the 
> use of Harpex. (thread retrievable via sursound archives)
> 
> 
> The bigger picture is of course that head-tracked Ambisonics seems to be a 
> natural fit for VR audio - VR includes HT "by definition", and obviously 
> needs some 3D audio infrastructure for the sound. Ambisonics offers both 
> an established 3D audio framework and (importantly, too...) 3D audio capture!
> Rotation of sound fields according to positional head data can be realized in 
> a straightforward and simple fashion -  before the decoding stage.
> 
> 
> Best,
> 
> Stefan
> 
> 
>> 
>>> I was just wondering if there are any freely available demos that include 
>>> both VR visuals for Oculus as well as Ambisonic sound files.
>> 
>> 
>> Dunno how freely available they are, they they exist. *Oh* do they exist! :)
>> 
>>>  - VR visual content *integrated *with the Oculus Rift
>>>  - *Raw *Ambisonic recordings that go along with the VR simulation
>> 
>> 
>> The latter were played, at first order. Multiple clips too. I'm betting the 
>> A-format has been entombed as well, because I seem to remember at least one 
>> paper might be hanging on it. But let Ville fill out the details. And of 
>> course Archontis Politis, who's working on his PhD at Aalto -- and who 
>> demonstrated even wilder ambisonic minded stuff to me on the same visit... 8)
> 
> 


Re: [Sursound] TetraMic and Jaunt VR in Time, Gizmodo and Engadget (Virtual Reality Recording System)

2014-05-17 Thread David McGriffy

> 
> I don't want to appear overly pedantic, but wasn't it me who wrote this?   ;-)
> 

Oh dear, I seem to have gotten my indentation levels mixed up.  Quite a sin for a 
C++ programmer.  But I am reminded of what someone said in a C++ forum once, ‘… 
aren’t all C++ programmers pedantic?’  No relation to the current list, I’m 
sure.

> 
> Why not simply call this a dual-band decoder?
> 
> 

Certainly, shelf filters are a form of dual-band decoder.  Doing a real 
dual-band decoder is still down there on my to-do list somewhere.


David
VVAudio


Re: [Sursound] TetraMic and Jaunt VR in Time, Gizmodo and Engadget (Virtual Reality Recording System)

2014-05-16 Thread David McGriffy

On May 16, 2014, at 7:54 AM, Stefan Schreiber  wrote:

> 
> Different people keep telling me that anechoic HRIR/HRTF sets don't sound 
> very good (if applied to deliver virtual surround via headphones); 
> preferably you should use BRIRs (reverberant, or < non-anechoic >  O:-) ). This 
> is not what film sound people would like to do, because the room acoustics 
> and "soundscapes" in different movie scenes usually change a lot. The 
> different acoustical environments would have to be captured via the SFM, or 
> to be synthesized (DAW).
> 
> It might well be the case that "dry" HRIR sets don't work well for normal 
> headphones.
> 
> In < your > case, things seem to work:
> 
> 

I have also read in multiple sources, particularly Bruce Wiggins, that adding 
room impulses helps with the perception.  Perhaps it’s easier to convince 
someone that they are sitting in a small room listening to a set of surround 
sound speakers when they are, in fact, sitting in a small room listening to a 
pair of headphones.  But add the VR visuals?  I think everyone expected the 
visuals to help lock in the audio and vice versa, but I do wonder if the room 
impulses would still be better or not?

Mostly, I didn’t end up adding any room responses for reasons more like what 
Adam said, that given the different environments the system is attempting to 
convey, it’s not clear what room you would use.  However, I plan to make this 
same library available for other purposes, both open source and commercial, and 
I expect that adding room responses would be useful to many applications so I 
will probably add that feature.  Perhaps when I do, the kind folks at Jaunt 
will let us know how it works when combined with visuals matching the original 
environment.

> 
>> The decoder consists of four
>> virtual cardioids spaced at 90º offsets, and the HRIR at the appropriate
>> angle & ear is applied to each.  I'm essentially describing his code though
>> so he can chime in with additional details.
>> 
> 
> This is actually not quite the case... Nevertheless, this is a smart attempt to 
> understand how a soundfield mike might work, and how a SFM recording might be 
> decoded to binaural representation/headphones.   ;-)

Actually that’s just about exactly what my decoder is doing and how some of the 
other b-format to binaural systems I read about seem to work.  There are 
details, of course.  For example, the decoder can easily be changed to use more 
and different virtual speakers as long as it can find a matching HRIR to use; 
and it is possible to read any of the samples from the Listen database.  But in 
my own listening tests, such as they are, I never found that adding more than 
four speakers helped.  Just seems to get muddy.  I really wanted to add height, 
in fact the preset is in the code, but I don’t think it sounds better, at least 
the way I’m doing it so far.
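The virtual-speaker stage of such a decoder can be sketched as follows. This assumes "plain" unity-gain W (FuMa-scaled W, which carries -3 dB, would need a sqrt(2) correction first), and the per-feed HRIR convolution is omitted; the function name is illustrative, not VVAudio's actual API:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Four virtual cardioids at 0, 90, 180 and 270 degrees azimuth,
// derived from first-order B-format.  Pattern for a cardioid aimed
// at azimuth az:  s = 0.5 * (W + X*cos(az) + Y*sin(az)).
// Assumes "plain" unity-gain W; FuMa W would need a sqrt(2) factor.
// Each feed would then be convolved with the left/right HRIRs
// measured at that virtual speaker's direction and summed.
std::array<float, 4> squareCardioidFeeds(float W, float X, float Y) {
    const float halfPi = 1.5707963f;
    std::array<float, 4> feeds{};
    for (int i = 0; i < 4; ++i) {
        float az = halfPi * float(i);
        feeds[i] = 0.5f * (W + X * std::cos(az) + Y * std::sin(az));
    }
    return feeds;
}
```

For a plane wave from the front (W = X = 1, Y = 0) the front feed gets unity gain, the sides get half, and the rear feed nulls out, as expected of a cardioid.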

My questions are around the details of a square decode.  Cardioids, of course, 
are not max rE or rV.  I suspect a more optimized decode might be better in any 
situation.  Easy to try, but I have no subjective answer yet myself.  Then 
there are shelf filters and NFC.  I think shelf filters, as a way to optimize 
rE/rV will probably help in most cases too.  NFC I think will depend on the 
situation.  If you are using room responses, then NFC at the distance of the 
speakers in that room might help, otherwise, I don’t know what distance to use. 
 We could choose a default as is sometimes done, but I think that would still 
be correcting for something that isn’t happening in a binaural system.  
If others have experience or especially analytical answers to any of these 
questions, I’d be very interested as it’s something I’m actively working on.


David
VVAudio


> 
>> It should be stated that our current implementation is very much a
>> prototype and will require a good deal of refinement and personalization.
>> 
> 
> It might be both worthwhile and feasible for you to investigate the use of 
> individual HRIRs/HRTFs. (Measured OR derived from 3D scans/photographs, as I 
> have suggested. The latter method is definitely more complicated, but way 
> faster.)



> 
> 
> Best regards,
> 
> Stefan
> 
> 
> 
>> 
>> 
>>> Len Moskowitz wrote:
>>> 
>>> Jaunt VR has developed a virtual reality camera. They're using TetraMic
>>>   
 for recording audio, decoding with headtracking for playback over
 headphones and speakers. For video playback they're using the Oculus Rift.
 
 http://time.com/49228/jaunt-wants-to-help-hollywood-make-
 virtual-reality-movies/
 http://gizmodo.com/meet-the-crazy-camera-that-could-make-
 movies-for-the-oc-1557318674
 
 
>>> Citing from this link:
>>> 
>>> A close-up of the 3D microphone that allows for 3D spacialized audio. If
>>>   
 you're wearing headphones, there's actually headtracking for the Oculus to
 tell which direction you're looking--when you change your view, the sound
 mix will al

Re: [Sursound] VR and cheap headtracking in 2013...

2013-12-02 Thread David McGriffy
You probably don't want to control the decoder directly.  Better to 
control a rotate unit in front of it.  This would be easy to rig up in 
Plogue Bidule or Max/MSP with MIDI or OSC input.


Some decoders, like my own VVMicVST, have the right parameters to 
control the direction, but I doubt any are built with real time changes 
in mind.  I know, for example, that VVMicVST would produce nasty zipper 
noise if you change the azimuth too fast.


On the other hand, some rotate units have either interpolation or even 
audio rate control inputs to smooth this out.  I know the one in the 
Ambisonic Toolkit does, and I recently coded one up myself for a project 
that can do it either way.  I'm not sure about B-Proc, anyone?
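As a rough sketch of the interpolated-rotation idea (an editor's illustration with invented function names, not code from the Ambisonic Toolkit or any plugin named here), a first-order rotator can cross-fade between the old and new rotation matrices across each block:

```python
import numpy as np

def yaw_matrix(az):
    # Rotation of first-order B-format (W, X, Y, Z) about the vertical
    # axis; W and Z are unchanged, X/Y rotate like a 2D vector.
    c, s = np.cos(az), np.sin(az)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rotate_block(bfmt, az_old, az_new):
    # Rotate a (4, n) block of B-format samples, cross-fading linearly
    # from the old rotation matrix to the new one across the block so
    # that a head-tracker update does not cause zipper noise.
    n = bfmt.shape[1]
    t = np.linspace(0.0, 1.0, n)        # per-sample interpolation weight
    old = yaw_matrix(az_old) @ bfmt
    new = yaw_matrix(az_new) @ bfmt
    return (1.0 - t) * old + t * new
```

A more careful version would wrap the angle difference to take the shortest path, or ramp the angle itself per sample; the simple two-matrix cross-fade shown here is already enough to tame the zipper noise for block-rate head-tracking updates.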


David McGriffy
VVAudio

On 12/2/2013 2:47 AM, Bo-Erik Sandholm wrote:

Can you assist by pointing to a VST or standalone decoder that will accept for 
example MIDI input and give us a good headphone decoding of AMB files ?
There are several ways of doing head tracking that we might be able to get to 
work if we have the back end already done ;-)

Best Regards
Bo-Erik




___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Tetramic Question

2013-11-15 Thread David McGriffy

David,

I've also got an ambisonic visualizer plugin developed but not published.  
Aside from the kind of interface cleanup it would need for public 
release, it uses what I eventually realized is a horribly inefficient 
algorithm.  I've mostly built it for my own testing purposes - to see 
what my other plugins are doing.  To that end I most often test using 
simple tones, perhaps multiple tones or noise, panned with Bruce 
Wiggins' plugins.  I also built a little surround signal generator, but 
I like to use other people's software in there to guard against a 
shared-misunderstanding bug, where the same error in generator, 
processor, and decoder could produce an apparently correct result.


As to determining the direction of a sound from a soundfield recording, 
I've read a bunch of papers and patents on this sort of thing, and it's 
certainly doable, with caveats.  Both the SIRR and Harpex decoders 
determine sound direction by solving the equations directly, and use the 
result to build their decodes.  There are also techniques, many from the 
conference-phone world, for determining where a sound is coming from 
with non-ambisonic arrays, often used to point cameras.  Then there are 
ways to track a sound once you know its source, as Fons discussed.  The 
best way to go depends a lot on what your goals are.
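One classic estimator from that literature, offered here as an editor's illustrative sketch (not what SIRR or Harpex actually do internally), derives direction from the time-averaged active intensity vector of a first-order B-format signal:

```python
import numpy as np

def intensity_doa(w, x, y):
    # Estimate the horizontal direction of arrival of the dominant
    # source from first-order B-format, using the time-averaged active
    # intensity vector, which is proportional to (mean(W*X), mean(W*Y)).
    return np.arctan2(np.mean(w * y), np.mean(w * x))

# Synthetic check: a 440 Hz plane wave from 60 degrees, FuMa-style
# encoding with W = signal / sqrt(2).
az = np.deg2rad(60.0)
s = np.sin(2.0 * np.pi * 440.0 * np.arange(4800) / 48000.0)
w, x, y = s / np.sqrt(2.0), s * np.cos(az), s * np.sin(az)
print(np.rad2deg(intensity_doa(w, x, y)))   # approximately 60
```

The usual caveats apply: with multiple simultaneous sources or strong reverberation the intensity vector points at an energy-weighted average rather than any single source, which is why the fancier decoders do this per frequency band and per time frame.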


David McGriffy
vvaudio.com

On 11/14/2013 6:12 PM, David Cindric wrote:

Well... It's not published yet :D I'm testing it. That's why I need that
software to determine sound direction and compare it with visualiser.



___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Questions about support of .amb file (Wave based)

2013-10-02 Thread David McGriffy

Yvan,

I would say yes, definitely, to both parts of question 1, both input and 
output capabilities would be useful.


A normal workflow might go something like this:
- record A-format (direct mic signals) in the field
- Main project:
  + read the field recording in its native format, multi-mono, etc.
  + edit into tracks, do heads and tails, assemble tracks into a whole
  + apply some kinds of processing, either surround (like rotate) or not 
(like EQ)

  + possibly mix in other material via ambisonic panners
  + render to a master B-Format track, to be stored in an .amb file
- Decoding project:
  + read the .amb file from the main project
  + decode to the delivery format(s) like stereo or 5.1
  + repeat for other delivery formats

If everything is in the same workstation, like Nuendo, then it doesn't 
really matter if the intermediate storage is a single .amb file, 
multiple mono .wav files, etc.  However, if we use .amb then we can play 
that file in other software in more predictable ways, for example my 
VVMic will switch out of A-Format mode if it detects the GUID on input.  
It will also write the GUID if set to B-Format output.


In Nuendo, on input files it might be nice to tell the user that the 
GUID was detected, but it's probably not important as long as Nuendo 
will go ahead and use what is also a pretty normal multi-channel .wav 
file.  I believe this works fine today, though putting .amb in the 
extensions list would be a nice touch.


On output, unless Nuendo were modified to have B-Format be a bus type or 
something like that, then I can't see how you would automatically detect 
that the output was supposed to be B-Format.  There could be some kind of 
option in the render dialogs to let the user say they want the GUID 
included, or it could be included automatically when the user names the 
file with the .amb extension.  You might also need logic to include it 
only for multi-channel files.
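For anyone implementing the detection side, here is a minimal sketch of checking the WAVEFORMATEXTENSIBLE SubFormat GUID in a WAV "fmt " chunk.  This is an editor's illustration; the function name is invented, and the GUID values are written from memory of the Bath .amb proposal, so verify them against the spec before relying on them.

```python
import struct
import uuid

# Ambisonic B-format SubFormat GUIDs as given (from memory) in the
# .amb proposal at dream.cs.bath.ac.uk -- verify against the spec.
AMB_PCM   = uuid.UUID("00000001-0721-11d3-8644-c8c1ca000000")
AMB_FLOAT = uuid.UUID("00000003-0721-11d3-8644-c8c1ca000000")

def amb_subformat(fmt_chunk: bytes):
    # Given the body of a WAV 'fmt ' chunk, return 'pcm', 'float', or
    # None depending on whether the WAVEFORMATEXTENSIBLE SubFormat GUID
    # marks the file as ambisonic B-format.
    if len(fmt_chunk) < 40:
        return None                         # too short to be extensible
    tag, = struct.unpack_from("<H", fmt_chunk, 0)
    if tag != 0xFFFE:                       # not WAVE_FORMAT_EXTENSIBLE
        return None
    guid = uuid.UUID(bytes_le=fmt_chunk[24:40])  # SubFormat at offset 24
    return {AMB_PCM: "pcm", AMB_FLOAT: "float"}.get(guid)
```

A plain multi-channel .wav without the extensible header simply falls through to None, which matches the behavior described above: hosts that ignore the GUID can still treat an .amb file as an ordinary multi-channel .wav.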


Question 2:  Yes there are some VST's that do B-Format panning and work 
in Nuendo, in particular I have used Bruce's and the Swiss Center B-Pan.


David McGriffy
vvaudio.com

 On 10/2/2013 5:11 AM, Yvan Grabit wrote:

Hi

1)
I want to know if it makes sense for an application like Nuendo (Steinberg):
- to import AMB (Ambisonic) files directly, without decoding (described in 
http://dream.cs.bath.ac.uk/researchdev/wave-ex/bformat.html)
- to allow exporting them directly (no encoding)

This means that the Nuendo file selector will display files with the 
extension ".amb" as readable, and likewise ".amb" will be selectable as a 
format on export.

It would be great to have some use cases, if that makes sense...

2) Are there some Ambisonic plugins which could be used as panners in Nuendo?


Thanks a lot for your time


YVan


Yvan Grabit  mailto:y.gra...@steinberg.de
Technical Manager - Technology Group Phone: +49-40-21035125
Steinberg Media Technologies GmbHFax  : +49-40-21035300
Frankenstr. 18 b ,D-20004 Hamburg/Germany
  http://www.steinberg.net

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - -
Steinberg Media Technologies GmbH, Frankenstrasse 18b, D-20097 Hamburg, Germany
Phone: +49 (40) 21035-0 | Fax: +49 (40) 21035-300 | www.steinberg.net
Managing Director: Andreas Stelling, Kazunori Kobayashi
Registration Court: Hamburg HRB 86534
  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
- - - - - - - - - - - - - - - - - - - - - - - - - - - -

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] A-format panner.

2013-09-26 Thread David McGriffy
If you have any kind of calibration involved, rotating before running 
the A to B process makes that calibration meaningless.


In fact, I'd say that one of the main reasons B-format exists is 
precisely that it makes processing like rotations a reasonable and 
general thing to do.


David McGriffy
VVAudio

On 9/26/2013 2:15 PM, Eero Aro wrote:

Well, if there is some kind of B-format analogue panner, we would also
like to learn about its design




___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] Ambisonics - Decoding 16 channels in DAW

2013-07-03 Thread David McGriffy
"as i can see now, adding a second instance of vvmic or harpex might not 
be suitable as it would generate two separate soundfields. (not sure if 
i am right here...)"


I can't speak for Harpex (and I can imagine reasons why it might not 
work), but it should be no problem to use two copies of VVMicVST. Each 
output depends only on the inputs and the parameters for that output.  A 
given output will not be affected by the azi/elevation/etc. of the other 
mics. Of course, what is correct is a question of the whole set, but 
that set can span several instances of the plugin.


David

P.S. the standalone Windows program VVMic supports 32 outputs if this helps
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


Re: [Sursound] HARPEX-B Plus First Order Mic vs. Third Order

2011-04-08 Thread David McGriffy

Brian,

It has always been my intention to support OSX.  My plugins are 
released identically for Windows and Mac.  I also have the standalone 
player for Windows only, but now the Harpex player can do a similar job 
on OSX.  Yes, mine will do A-format conversion, but neither will record, 
so other software must be used for that.


The recording options I use day to day are all pretty much bi-platform.  
I use Reaper, Bidule, Nuendo - my own plugins (including unreleased 
stuff) and the other free ambisonic VST's from York, etc.  All cheap to 
free and all nearly identical on both platforms.  (OK, I actually record 
almost everything on my DR-680 now.  The old MIO-2882 is sadly neglected.)


I would love to support ProTools or even SoundBlade, but so far I have 
not managed to invest $5-10K in an audio system purely for plugin 
development.  When and if I do, you can bet I'll be charging real money 
for those versions.


Meanwhile, as others have mentioned, there are some cheap and functional 
options available today for OSX.  The plugins and templates for the 
platforms I mentioned are all available at vvaudio.com/downloads


I'll be happy to help you get one of these options working or continue 
to work with you on some other option, though debugging platforms I 
don't have is going to go a little slower.


David McGriffy


On 4/7/2011 10:01 AM, Brian C. Peters wrote:

Yes, if I had Bidule or Nuendo, or a compiled version! Neither of which I have.
I have the PC tools running in emulation or Boot Camp, but decoding while I 
record has never been possible in OSX.
I use a Sonic 305 and/or a ULN8 with the record panel for sessions/concerts and 
edit in SoundBlade. Reaper may be a choice. But tools that run directly in OSX 
(not plugins)?


Best regards,

Brian C. Peters
Tech Valley Audio
b...@ieee.org


On Apr 7, 2011, at 10:27 AM, Fons Adriaensen wrote:


On Thu, Apr 07, 2011 at 06:39:22AM -0400, Brian C. Peters wrote:


I've been waiting two years or more for your mic to become usable in OSX.

That must be your choice then, for it has been usable in OSX all the time.

--
FA

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound


[Sursound] Nuendo compatibility for VVMicVST

2010-11-05 Thread David McGriffy
I am pleased to announce the availability of VVMicVST version 1.2 for 
Mac and Windows.  This version has some changes that allow it to work 
happily in Nuendo (and I suspect Cubase, though that hasn't been tested 
yet.  I would appreciate any reports.)  Together with the latest version 
(1.4.2) of VVTetraVST, this makes complete processing of TetraMic 
recordings possible in Nuendo.


VVMicVST is my 1st order ambisonic decoder.  It is freely available to 
anyone.  It is also Open Source software.  The version at SourceForge is 
rather out of date but I hope to fix that soon.  VVTetraVST, the encoder 
for TetraMic to b-format, is available to all TetraMic users.


Both plugins, as well as a couple of Nuendo template projects, are 
downloadable from: http://vvaudio.com/downloads


David McGriffy

P.S. for any coders out there who are curious about what it took, it was 
simple once I managed to borrow a copy of Nuendo to debug in.  I now 
answer 8 outputs and 7.1cine as the speaker arrangement every time, no 
matter what the settings.  Changing the number of outputs and telling 
Nuendo about the actual speaker positions was only confusing it, and I 
suspect no other host even looks.  If you'd like to see code snippets, 
write me off list.

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound