Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-10-05 Thread Hector Centeno
I wonder whether, when talking about IMUs, they should call it something
different, like "9 degrees of measurement" or "9 degrees of sensing", since
it's not freedom related to spatial movement. Particularly now that
spatial tracking is being used more prominently in consumer technology.



Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-10-05 Thread Stefan Schreiber

To avoid some potential confusion:

Most modern IMU sensors are supposed to offer “9 degrees of freedom”.

(Directional 3-axis information from accelerometer, gyroscope and
magnetometer.)

When referring to tracking, you speak “just” of 3DoF (orientation) or
6DoF tracking (orientation + positional data).


Further reading

https://developers.google.com/vr/discover/degrees-of-freedom

(for example)

Best,

Stefan

P.S.: 9 or 10 degrees of freedom certainly sounds better than 6DoF...

(Of course this is the raw sensor data before fusion, which will probably
give 3DoF orientation data as a result. It could be discussed whether
quaternion orientation is 3DoF or 4DoF, though. If 4DoF, an OHTI 6DoF
pose would rather be 7DoF... I refuse to take part in the ensuing
discussion about 7DoF tracking, which means once more: I’m out!)
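To make the 3DoF-versus-4DoF question concrete: a unit quaternion carries
four numbers, but the unit-norm constraint removes one degree of freedom,
so the orientation it describes is still 3DoF. A minimal sketch in plain
Python, converting a quaternion to yaw/pitch/roll:

    import math

    def quat_to_euler(w, x, y, z):
        # Normalize first: the unit-norm constraint (w^2+x^2+y^2+z^2 = 1)
        # is what removes the fourth degree of freedom.
        n = math.sqrt(w*w + x*x + y*y + z*z)
        w, x, y, z = w/n, x/n, y/n, z/n
        yaw = math.atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
        pitch = math.asin(max(-1.0, min(1.0, 2*(w*y - z*x))))
        roll = math.atan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
        return tuple(math.degrees(a) for a in (yaw, pitch, roll))

    # 90-degree rotation about the vertical axis:
    print(quat_to_euler(0.7071, 0.0, 0.0, 0.7071))  # -> (~90.0, 0.0, 0.0)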




Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-10-03 Thread Bo-Erik Sandholm
The OHTI project for an open headtracker has been slow but will hopefully
be updated soon.
It only offers so-called 9 degrees of freedom: no position data, only
directional info with 2-degree precision.

Data is currently sent as quaternions over serial communication to control the
Omnitone binaural decoder.
The host software, written in JavaScript, could with a bit of help gain the
added functionality to send OSC.
An OSC library for JavaScript already exists.
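The OHTI host software is JavaScript, but the idea — read quaternion frames
from the serial port and re-send them as OSC — is easy to sketch in Python
with pyserial and python-osc. The comma-separated serial framing and the
/head/quat address here are illustrative assumptions, not the actual OHTI
protocol:

    import serial                                      # pip install pyserial
    from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

    port = serial.Serial("/dev/ttyUSB0", 115200)       # tracker port (assumed)
    osc = SimpleUDPClient("127.0.0.1", 9000)           # decoder host/port (assumed)

    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        try:
            w, x, y, z = (float(v) for v in line.split(","))
        except ValueError:
            continue                                   # skip malformed frames
        osc.send_message("/head/quat", [w, x, y, z])   # hypothetical address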


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-10-01 Thread Hector Centeno
Hi Jack,

Yes, I saw those Bose, but inertial sensors won't work for precise tracking.
I've done a lot of ARKit/ARCore work with phones and tablets and it works great,
but I don't want to be looking through a screen at a 2D representation of
the world while experiencing a sound-only environment (unless you strap the
device to headphones with the camera facing forward, but that would not be
very convenient).

Cheers,

Hector


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-10-01 Thread jack reynolds
The new Bose headphones have 6DoF tracking, but the accelerometers are not
terribly accurate, so they are really more like 3DoF, unless you implement ARKit
or similar.




Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-30 Thread Hector Centeno
I've been wanting to create work in this way for a while now (6DOF
audio-only augmented reality). This audio augmentation is what I found
appealing when I tried the Magic Leap AR headset for the first time, since
it's very well implemented there (as opposed to the disappointing visual
quality). I wish someone would soon produce headphones with 6DOF tracking
(it will require cameras to perform SLAM). In the meantime, I'm also waiting
for Intel to add Android support to their RealSense Tracking Camera T265
[ https://www.intelrealsense.com/tracking-camera-t265 ], which they claim is
in the works. This would allow strapping one of those sensors to a pair of
headphones and running an Android app that does the spatialization (I agree
with Przemyslaw that using a game engine such as Unity or Unreal would make
things very easy, even in this scenario or in a server-based one).
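On a desktop, the T265 already exposes its 6DOF pose through librealsense;
a minimal sketch using the official pyrealsense2 Python bindings (the
Android support is exactly what is still missing here):

    import pyrealsense2 as rs   # pip install pyrealsense2

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.pose)   # the T265's 6DOF pose stream
    pipe.start(cfg)
    try:
        while True:
            frames = pipe.wait_for_frames()
            pose = frames.get_pose_frame()
            if pose:
                d = pose.get_pose_data()  # translation (m) + rotation quaternion
                print("pos", d.translation.x, d.translation.y, d.translation.z,
                      "quat", d.rotation.w, d.rotation.x, d.rotation.y, d.rotation.z)
    finally:
        pipe.stop()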

Cheers,

Hector




Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-30 Thread Przemysław Danowski
Michelle,

You could look at game engines like Unreal or Unity; if you can get the
headtrackers to send OSC to the engine, you will get coordinates for the position
of each player's head. You can use the engines' native multiplayer features. You
could employ a Kinect camera to track the positions of the players in the room.

It would be very easy to implement using VR/AR 6DoF headsets, where you have
room position tracking and headtracking combined, and the SDK ready to use.
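On the receiving end (engine-side this would be C# or Blueprint, but the
logic is the same), a minimal Python sketch with python-osc that merges
headtracker orientation and room-tracking position into one pose per player;
the /player/<id>/quat and /player/<id>/pos addresses are assumptions for
illustration:

    from pythonosc.dispatcher import Dispatcher           # pip install python-osc
    from pythonosc.osc_server import BlockingOSCUDPServer

    poses = {}  # player id -> {"quat": [w, x, y, z], "pos": [x, y, z]}

    def handle(address, *args):
        # Expected (assumed) addresses: /player/<id>/quat and /player/<id>/pos
        parts = address.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "player" and parts[2] in ("quat", "pos"):
            poses.setdefault(parts[1], {})[parts[2]] = list(args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(handle)
    BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()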

best,


Przemyslaw Danowski
Sound Engineering Department
Fryderyk Chopin University of Music
mob.+48603700626
www.chopin.edu.pl 

UMFC VR : virtual exhibition
http://fb.me/umfcvr 


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-29 Thread Picinali, Lorenzo
Hello Michelle,

If you don't absolutely need to work with Ambisonics, then you could use our 3D
Tune-In Toolkit test application (compatible with macOS, Windows and Linux),
which works with "direct" binaural rendering (i.e. convolution with HRTFs) for the
direct path, and with a virtual-loudspeaker ambisonic-based rendering for the
reverberation (through Binaural Room Impulse Responses - BRIRs). This "blended"
approach allows you to have a very realistic simulation of distance (excellent
in the near field), and a "true" binaural reverb (not just stereo).

I've made a quick video showing how the basic functions work (wear headphones 
when you listen)

https://imperialcollegelondon.box.com/s/t8pza3xazaqwpzh1gr5az1v1o8ulv5m9

You can run multiple instances of the app on macOS, each outputting to a
different channel pair on a multichannel audio interface, from which you could
then stream (using, for example, radio transmitters/receivers) to the different
headphones. The 3DTI Toolkit test application can be controlled via OSC (6DOF
for the listener position and orientation, plus you can individually control
the playback and position of each source). It might, though, be problematic to
use the head-tracker of the Mobius headphones, as I'm not sure you can receive
data from multiple headphones on a single computer. One solution we have
successfully tried in the past is to just use mobile phones as head-trackers,
using apps such as GyrOSC, which can all be directed to the main computer,
using a different UDP port for each user/spatialisation.
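A minimal sketch of that fan-out in Python with python-osc — one renderer
instance per user, each listening on its own UDP port. The port numbering and
the /listener/... addresses are assumptions; check the test application's
documentation for its actual OSC address space:

    from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

    BASE_PORT = 9000
    # one client per user/instance, on consecutive ports (assumed scheme)
    users = {uid: SimpleUDPClient("127.0.0.1", BASE_PORT + uid) for uid in range(4)}

    def send_pose(uid, pos, ypr):
        # pos = (x, y, z) in metres, ypr = (yaw, pitch, roll) in degrees
        users[uid].send_message("/listener/position", list(pos))      # hypothetical
        users[uid].send_message("/listener/orientation", list(ypr))   # hypothetical

    send_pose(0, (1.0, 0.0, 1.7), (90.0, 0.0, 0.0))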

We have developed an iOS 'wrapper' of the Toolkit, and a "prototype" iOS 
application which uses Apple ARKit and can allow the user to navigate with 5DOF 
(no up-down - Z) around a space. It works well if there are no 
objects/furniture in the room, and if the floor is not too dark. It runs only 
on new iPhones or iPad Pro, but...it's just a prototype at the moment.

If you want to simulate different rooms (e.g. a different size), you just
need to import into the Toolkit the BRIR of the room you want to emulate (only 6
positions are needed: 0°, 90°, 180° and 270° of azimuth at 0° elevation, plus ±90°
of elevation). You can either measure these using a dummy head, or simulate
them using image-source, ray-tracing, or other techniques. With the 3DTI
Toolkit we have also released a "resources" package with a few BRIRs (and a few
HRTFs).

You can download the 3DTI Toolkit test app from the GitHub repo:

https://github.com/3DTune-In/3dti_AudioToolkit/releases

You'll find the resources package in the "Common Resources" section.

I hope it helps!
Lorenzo


--
Dr Lorenzo Picinali
Senior Lecturer in Audio Experience Design
Director of Undergraduate Studies
Dyson School of Design Engineering
Imperial College London
Dyson Building
Imperial College Road
South Kensington, SW7 2DB, London
T: 0044 (0)20 7594 8158
E: l.picin...@imperial.ac.uk

http://www.imperial.ac.uk/people/l.picinali

www.imperial.ac.uk/design-engineering-school


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-29 Thread Bo-Erik Sandholm
https://proximi.io/accurate-indoor-positioning-bluetooth-beacons/

Principles and solutions for BLE localization.


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-29 Thread Marc Lavallée
Hi Michelle,

A master computer could remotely control portable computers (probably
phones) to render the sounds using either a customized embedded version
of the SSR software (http://spatialaudio.net/ssr/), or to play
personalized binaural streams rendered in real time from the master
computer using either SSR, Panoramix (from IRCAM) or some other software
solution (as described in previous answers).

Then there's the question of tracking... For interactive installations,
the easiest I have used was a Kinect (and now there are alternative products).
The unknown part is how to link one portable computer to a detected
person; maybe Bluetooth beacons and triangulation could be used to
detect the computers and report their approximate positions.
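A minimal sketch of the geometry behind beacon positioning (trilateration
by least squares); real RSSI-derived distances are quite noisy, so treat
this as the idealized version:

    import numpy as np

    def trilaterate(beacons, dists):
        # beacons: (n, 2) known positions; dists: (n,) measured distances.
        b, d = np.asarray(beacons, float), np.asarray(dists, float)
        # Subtracting the first sphere equation from the rest linearizes it:
        # 2*(b_i - b_0) . p = |b_i|^2 - |b_0|^2 - (d_i^2 - d_0^2)
        A = 2 * (b[1:] - b[0])
        rhs = np.sum(b[1:]**2, axis=1) - np.sum(b[0]**2) - (d[1:]**2 - d[0]**2)
        p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return p

    beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
    true_pos = np.array([2.0, 3.0])
    dists = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
    print(trilaterate(beacons, dists))   # -> approximately [2. 3.]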

Marc



Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-29 Thread Daniel Rudrich
Hi there,

it’s indeed very CPU-intensive, especially as each source has to be encoded
together with its reflections.

I wrote a VST plug-in called RoomEncoder, which renders a source in a
shoebox room and adds reflections, which are also filtered depending on the
wall absorption and the image-source order. Source and listener can be freely
placed within the room, and the source can also have a directivity, which can be
frequency-dependent. The output is higher-order Ambisonics, so for playback
all sources for one listener have to be summed up, rotated (head-tracking),
and binaurally decoded.

Several instances of the plug-in can be linked, so if you change the room
properties in one of them, all of them change. The plug-in renders up to 236
reflections; however, a handful (or two) of them is enough to give a
convincing room impression, especially when combined with an FDN (feedback
delay network) reverb. The good thing is that you’ll need only one FDN for all
listeners and sources, so at least that part is not so CPU-demanding. The
FdnReverb plug-in also has a fade-in feature, which helps it stay out of the
way of the early reflections of the RoomEncoder.
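For reference, the geometric core of the image-source idea is tiny; a
minimal numpy sketch of the six first-order images of a source in a shoebox
room (RoomEncoder of course goes much further, filtering each reflection by
wall absorption):

    import numpy as np

    def first_order_images(src, room):
        # src: (x, y, z) source position; room: (Lx, Ly, Lz) dimensions,
        # with walls at 0 and L on each axis. Returns a (6, 3) array.
        src, room = np.asarray(src, float), np.asarray(room, float)
        images = []
        for axis in range(3):
            for wall in (0.0, room[axis]):
                img = src.copy()
                img[axis] = 2 * wall - src[axis]   # mirror across the wall plane
                images.append(img)
        return np.array(images)

    print(first_order_images((1.0, 2.0, 1.5), (6.0, 4.0, 3.0)))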

We used this setup in an interactive audio game, tracked with an optical
tracking system, with the logic implemented in Pd and rendering done in Reaper.

Both plug-ins can be found in the IEM Plug-in Suite: https://plugins.iem.at.
As they are open source, you could compile them yourself to get the most out of
the CPU architecture you are using, e.g. AVX-512 SIMD extensions.

Best
Daniel


Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-29 Thread Dave Hunt
Hi Michelle,

I believe that what you want to do is possible, but not easy.

It is possible to move the listener in a totally synthetic ambisonic sound
field. You have to build in “distance modelling”, as well as the differing
direction of each source, as the listener moves. Adding room simulation or
reverberation brings an extra layer of complexity, as the nature of the
reflected, delayed and diffused sound from each source is different at every
listening position.

This soon becomes rather processor-intensive. I have made Max patches that take
this approach, and they do “work”, but there are problems. Although basic
ambisonic source encoding is mathematically relatively simple, multiple sources
with multiple reflections, each of which has to be encoded, become appreciably
more involved. Moving sources require constant recalculation, preferably at
near audio sampling rate. Even with just first-order ambisonics, this gets
pretty demanding to do well.
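A minimal sketch of the per-source, per-listener work this implies —
inverse-distance gain, propagation delay, and first-order B-format panning
(FuMa-style W weighting assumed) — all of which must be recomputed as the
listener moves:

    import numpy as np

    C = 343.0  # speed of sound, m/s

    def encode_first_order(sample, src_pos, listener_pos):
        # Returns (distance-weighted B-format [W, X, Y, Z], delay in seconds).
        v = np.asarray(src_pos, float) - np.asarray(listener_pos, float)
        r = max(float(np.linalg.norm(v)), 0.1)   # clamp to avoid infinite gain
        az = np.arctan2(v[1], v[0])
        el = np.arcsin(v[2] / r)
        gain = 1.0 / r                           # simple inverse-distance law
        w = sample / np.sqrt(2.0)                # FuMa W weighting (assumed)
        x = sample * np.cos(az) * np.cos(el)
        y = sample * np.sin(az) * np.cos(el)
        z = sample * np.sin(el)
        return gain * np.array([w, x, y, z]), r / C

    bformat, delay = encode_first_order(1.0, (3.0, 4.0, 0.0), (0.0, 0.0, 0.0))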

For what you are proposing, you would have to deliver a unique audio stream to 
each pair of headphones, binaurally encoded to match the position of the 
headphones in the “room”. Thus you need spatial tracking of each pair, as well 
as the head movement data for each. This could control the ambisonic encoding 
and decoding to binaural of each individual headphone signal.

For one listener this already involves a lot of data and processing, and at
least two wireless transmission channels. For several listeners the technology
and resources required become uneconomic. Currently you would probably require
a computer for each listener, or possibly a very powerful computer or two. Then
a lot of programming, engineering and material expense.

Perhaps you would be better off considering a loudspeaker-based approach?

For this it may also be better to consider an amplitude/delay-based approach
(Delta Stereophony, or basic wave field synthesis), rather than ambisonics.
This is what TiMax, L-ISA, d&b's Soundscape, Astro and other similar systems
appear to be based on. Again, not easy to do well. Not perfect for everything,
but then no algorithm is.


Ciao,

Dave Hunt




Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-28 Thread ByungJun

Hello,

I've been doing my headphones project with my own custom circuitry.

https://drive.google.com/open?id=1jXUxTZdPJi4AP_uGeMuksiLMCkq5u-yM

It's based on a Teensy board and a DWM1000 module; it plays multiple
stereo files (4 or more), gets a precise position in the space (10 cm
precision) and does headtracking.

B-format decoding is not implemented yet, though.

I use it for site-specific audio playback, or audio mixing depending on
the distance between users, etc.
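A minimal sketch of that distance-dependent mixing: each site-specific file
gets a zone centre and radius, and its gain fades linearly to zero as the
tracked position moves away (the actual logic here is of course Teensy-side;
this is just the idea):

    import numpy as np

    def zone_gains(user_pos, zones):
        # zones: list of ((x, y) centre, radius in metres) -> one gain per zone
        p = np.asarray(user_pos, float)
        gains = []
        for centre, radius in zones:
            d = np.linalg.norm(p - np.asarray(centre, float))
            gains.append(max(0.0, 1.0 - d / radius))   # linear fade, clipped at 0
        return gains

    # a user standing between two zones of 3 m radius
    print(zone_gains((2.0, 1.0), [((0.0, 0.0), 3.0), ((4.0, 0.0), 3.0)]))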


Best, Byungjun




Re: [Sursound] Ambisonic Audio - Interactive Installation

2019-09-28 Thread Douglas Murray
Hi Michelle,

I believe the Waves NX toolkit allows only for head tracking around a point.
Its accelerometer technology doesn't allow for movement within a room, only
head turns and tilts. So you wouldn't be able to use the Mobius headphones
to track variable proximity to different virtual sound sources with that
technology. Each listener would be in the same virtual position no matter where
they stand or move, while the headphones would only track how they tilt and turn
their heads.

I’m afraid I am ignorant enough to know not to suggest working alternatives, 
but I know they exist.

All the best,

Doug



[Sursound] Ambisonic Audio - Interactive Installation

2019-09-27 Thread Michelle Irving
Hi,

I'm working with an artist who wants to explore Ambisonic audio
and use the Audeze Mobius headphones in an audio installation.
The soundscape will consist of recordings of various individual vocals
spatialized throughout the "room". There is a video projection overhead.
Hard sync is not required.

Questions:
1. Is it possible to exploit the headtracking of the Mobius headphones to
give each person an individualized experience of the audio composition?
I.e. Person A is in the far left front corner hearing a particular voice
in close proximity, while Person B is in the far back right corner, barely
hearing what Person A is hearing.

2. If the answer to 1 is YES: would you recommend using Max/MSP or Arduino
for configuring the individual playbacks (mappings between headphones and
some sort of player)?

3. I've looked at the Waves NX toolkit and I don't see a feature to
determine virtual room size. Am I missing something, or is there other tech
that could allow me to map the headtracker to a specific room size?

4. Open to better ideas on how to achieve an interactive Ambisonic audio
soundscape that works with multiple headsets.

thanks!
Michelle

-- 

Michelle Irving

Post-Audio Supervisor

416-500-1631

507 King St. East

Toronto, Ontario

www.soleilsound.com