Re: [Sursound] AES London Lecture

2015-01-14 Thread Dave Hunt

Hi,

Yes, it was interesting and enjoyable, mostly due to interactions with others. It attracted a good turnout, with interesting and interested attendees.


Since hearing of this presentation I've played with the ideas in Max/MSP, helped by the manual for ViMic and a bit of web-based maths education, so I had a pretty good idea of what to expect.
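For anyone who wants to try the same thing outside Max/MSP, the core of such a virtual-microphone experiment fits in a few lines of code. The sketch below is only a rough model of the idea, not ViMic itself: it assumes a circular array of virtual first-order cardioids and computes the gain (directivity times a simple 1/r attenuation) and delay each one would apply to a source at a given position.

import math

SPEED_OF_SOUND = 343.0  # metres per second

def virtual_mic_feeds(source_xy, n_mics=8, radius=1.5, face_inwards=False):
    """Per-microphone (gain, delay) for a circular array of virtual cardioids.

    Hypothetical sketch: the mics sit on a circle of the given radius, facing
    outwards by default, each with a first-order cardioid response
    0.5 * (1 + cos(angle between the mic axis and the source direction)).
    """
    sx, sy = source_xy
    feeds = []
    for i in range(n_mics):
        az = 2.0 * math.pi * i / n_mics                  # microphone position angle
        mx, my = radius * math.cos(az), radius * math.sin(az)
        axis = az + (math.pi if face_inwards else 0.0)   # pointing direction
        dx, dy = sx - mx, sy - my
        dist = math.hypot(dx, dy)
        incidence = math.atan2(dy, dx) - axis            # source angle off the mic axis
        directivity = 0.5 * (1.0 + math.cos(incidence))  # cardioid pattern
        gain = directivity / max(dist, 0.1)              # crude 1/r distance attenuation
        delay = dist / SPEED_OF_SOUND                    # propagation delay in seconds
        feeds.append((gain, delay))
    return feeds

Feeding each loudspeaker from the correspondingly placed virtual microphone gives a basic spaced-array panner to experiment with.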


I didn't bother trying to sit in the centre and operate it myself; there were too many people for that, but I could clearly hear it working from the fringes of the speaker circle and outside it. There seems to be no more precision to it than first-order Ambisonics, which can also be a bit lumpy when panning around a circle of loudspeakers unless they are well set up in a good environment.


It can obviously be extended to include height. Although higher-order microphones are nearly unachievable physically, they are achievable in software, so more channels and more loudspeakers are possible for spatial synthesis. It does use spaced microphones to produce something related to head acoustics, something I've always had reservations about, largely because one is going to listen with one's own head.


I had noticed that panning a sound to the centre results in it being at the back of all the microphones, and suggested to Zoran that this could be fixed by pointing the array inwards, which works nicely in synthesis, where the microphones cannot mask each other.
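Using the hypothetical virtual_mic_feeds sketch above, the effect is easy to check for a source panned to the exact centre of the array:

# Source at the exact centre of the array:
outward = virtual_mic_feeds((0.0, 0.0), face_inwards=False)
inward = virtual_mic_feeds((0.0, 0.0), face_inwards=True)

print(max(g for g, _ in outward))  # ~0.0: the source sits behind every cardioid
print(max(g for g, _ in inward))   # ~0.67: the source is on-axis for every microphone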


I've also experimented with widely spaced and irregular microphone arrays, as microphone and loudspeaker are basically analogous in this case. Ideally one should also be able to control the order of the loudspeaker radiation pattern, which is not easy: very difficult to make smooth, but there are possibilities.
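As a toy model of what "order" means here, one can raise a first-order polar pattern to an integer power. This is only an illustration of the idea (higher order gives a narrower lobe), not anything presented at the lecture, and realising such a pattern as a physical loudspeaker is another matter entirely.

import math

def directivity(theta, pattern=0.5, order=1):
    """Axisymmetric polar response: (pattern + (1 - pattern) * cos(theta)) ** order.

    pattern = 1.0 is omni, 0.5 a cardioid, 0.0 a figure-of-eight. 'order' is
    assumed to be a positive integer; larger values narrow the main lobe,
    which is trivial for a virtual microphone but hard to build as hardware.
    """
    base = pattern + (1.0 - pattern) * math.cos(theta)
    return base ** order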


Ciao,

Dave Hunt





Re: [Sursound] AES London Lecture

2015-01-14 Thread Jon Honeyball
It was an interesting presentation. The demonstration (a bongo drum sound that you could manually steer around the horizontal sound field) was not wholly convincing, I have to say; I think I have heard better surround from normal Ambisonics. A claim for this system is that it is more convincing away from the “sweet spot”. Again, on the demo I wasn’t particularly convinced, with notable collapse and fail-over from one speaker to the next, especially from 90 degrees to 180 degrees. In fairness, it wasn’t a particularly easy demo environment, with a lot of people in the room; in a more purist test it might well do better.

As I said, interesting and well worth attending, despite the limitations. 

And, my goodness, doesn’t King’s College have *dreadful* internal signage?
Trying to get out turned into an episode of a maze game. 

jon






Re: [Sursound] AES London Lecture

2014-12-31 Thread Pierre Alexandre Tremblay
It does indeed - especially if it delivers for real ;-)





On 31 December 2014, at 00:08, John Leonard j...@johnleonard.co.uk wrote:

 This looks interesting:
 
 Upcoming Lectures
 
 London: Tuesday 13th January
 
 Perceptual Sound Field Reconstruction and Coherent Synthesis
 
 Zoran Cvetkovic, Professor of Signal Processing at King’s College London
 


Re: [Sursound] AES London Lecture

2014-12-31 Thread Dave Malham
Now that's annoying - I'm actually in London the Tuesday before that AND
the Tuesday the week after, but not that Tuesday. Arghh.

  Dave

On 30 December 2014 at 23:08, John Leonard j...@johnleonard.co.uk wrote:

 This looks interesting:

 Upcoming Lectures

 London: Tuesday 13th January

 Perceptual Sound Field Reconstruction and Coherent Synthesis

 Zoran Cvetkovic, Professor of Signal Processing at King’s College London





-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] AES London Lecture

2014-12-31 Thread Dave Malham
I wonder how closely this is related to the paper he was one of the authors
of at the 2010 Ambisonics Symposium? Anyone have it handy?

   Dave

On 31 December 2014 at 08:47, Pierre Alexandre Tremblay tremb...@gmail.com
wrote:

 It does indeed - especially if it delivers for real ;-)









-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] AES London Lecture

2014-12-31 Thread Aaron Heller
On Wed, Dec 31, 2014 at 3:20 AM, Dave Malham dave.mal...@york.ac.uk wrote:

 I wonder how closely this is related to the paper he was one of the authors of at the 2010 Ambisonics Symposium? Anyone have it handy?

Here's the URL:
  http://ambisonics10.ircam.fr/drupal/files/proceedings/poster/P6_41.pdf

There is also an IEEE paper from 2013:

 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6508825

This paper presents a systematic framework for the analysis and design of
circular multichannel surround sound systems. Objective analysis based on
the concept of active intensity fields shows that for stable rendition of
monochromatic plane waves it is beneficial to render each such wave by no
more than two channels. Based on that finding, we propose a methodology for
the design of circular microphone arrays, in the same configuration as the
corresponding loudspeaker system, which aims to capture inter-channel time
and intensity differences that ensure accurate rendition of the auditory
perspective. The methodology is applicable to regular and irregular
microphone/speaker layouts, and a wide range of microphone array radii,
including the special case of coincident arrays which corresponds to
intensity-based systems. Several design examples, involving first and
higher-order microphones are presented. Results of formal listening tests
suggest that the proposed design methodology achieves a performance
comparable to prior art in the center of the loudspeaker array and a more
graceful degradation away from the center.
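The finding that each plane wave should be rendered by no more than two channels is in the spirit of pairwise (tangent-law) amplitude panning. Purely as an illustration of that idea, and not the paper's actual design method, a tangent-law panner for a regular circular layout might look like this:

import math

def pairwise_pan(source_az_deg, n_speakers=8):
    """Tangent-law gains over a regular circular loudspeaker layout.

    Illustrative sketch only: exactly the two loudspeakers straddling the
    source direction get non-zero gains, normalised so the sum of squared
    gains equals one.
    """
    spacing = 360.0 / n_speakers
    az = source_az_deg % 360.0
    lower = int(az // spacing)              # speaker at or just below the source
    upper = (lower + 1) % n_speakers        # next speaker round the circle
    half = math.radians(spacing / 2.0)      # half the angle subtended by the pair
    phi = math.radians(az - lower * spacing) - half  # source angle from pair centre
    t = math.tan(phi) / math.tan(half)      # tangent law: -1 at lower, +1 at upper
    norm = math.sqrt(2.0 * (1.0 + t * t))
    gains = [0.0] * n_speakers
    gains[lower] = (1.0 - t) / norm
    gains[upper] = (1.0 + t) / norm
    return gains

With eight loudspeakers at 45-degree spacing, pairwise_pan(100.0) returns non-zero gains only for the speakers at 90 and 135 degrees.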


  Le 31 déc. 2014 à 00:08, John Leonard j...@johnleonard.co.uk a écrit :
 
   This looks interesting:
  
   Upcoming Lectures
  
   London: Tuesday 13th January
  
   Perceptual Sound Field Reconstruction and Coherent Synthesis
  
   Zoran Cvetkovic, Professor of Signal Processing at King's College
London
  
   Imagine a group of fans cheering their team at the Olympics from a
local
  pub, who want to feel transposed to the arena by experiencing a faithful
  and convincing auditory perspective of the scene they see on the screen.
  They hear the punch of the player kicking the ball and are immersed in
the
  atmosphere as if they are watching from the sideline. Alternatively,
  imagine a small group of classical music aficionados following a
broadcast
  from the Royal Opera at home, who want to have the experience of
listening
  to it from best seats at the opera house. Imagine having finally a
surround
  sound system with room simulators that actually sound like the spaces
they
  are supposed to synthesise, or watching a 3D nature film in a home
theatre
  where the sound closely follows the movements one sees on the screen.
  Imagine also a video game capable of providing a convincing dynamic
  auditory perspective that tracks a moving game player and responds to
his
  actions, with virtual objects moving and acoustic environments changing.
  Finally, place all this in the context of visual technology that is
moving
  firmly in the direction of 3D capture and rendering, where enhanced
  spatial accuracy and detail are key features. In this talk we will
present
  a technology that enables all these spatial sound applications using
  low-count multichannel systems.
   This month's lecture is being held at King's College London, Nash
  Lecture Theatre, K2.31, Strand, London, WC2R 2LS. 6:30pm for 7:00pm
start.
  
   I'll be there if I can.
  
   John
-- next part --
An HTML attachment was scrubbed...
URL: 
https://mail.music.vt.edu/mailman/private/sursound/attachments/20141231/b118ab2e/attachment.html
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


[Sursound] AES London Lecture

2014-12-30 Thread John Leonard
This looks interesting:

Upcoming Lectures

London: Tuesday 13th January

Perceptual Sound Field Reconstruction and Coherent Synthesis

Zoran Cvetkovic, Professor of Signal Processing at King’s College London

Imagine a group of fans cheering their team at the Olympics from a local pub, who want to feel transported to the arena by experiencing a faithful and convincing auditory perspective of the scene they see on the screen. They hear the punch of the player kicking the ball and are immersed in the atmosphere as if they are watching from the sideline. Alternatively, imagine a small group of classical music aficionados following a broadcast from the Royal Opera at home, who want the experience of listening to it from the best seats at the opera house. Imagine finally having a surround sound system with room simulators that actually sound like the spaces they are supposed to synthesise, or watching a 3D nature film in a home theatre where the sound closely follows the movements one sees on the screen. Imagine also a video game capable of providing a convincing dynamic auditory perspective that tracks a moving game player and responds to his actions, with virtual objects moving and acoustic environments changing. Finally, place all this in the context of visual technology that is moving firmly in the direction of 3D capture and rendering, where enhanced spatial accuracy and detail are key features. In this talk we will present a technology that enables all these spatial sound applications using low-count multichannel systems.
This month's lecture is being held at King’s College London, Nash Lecture 
Theatre, K2.31, Strand, London, WC2R 2LS. 6:30pm for 7:00pm start. 

I'll be there if I can.

John



Please note new email address & direct-line phone number
email: j...@johnleonard.uk
phone +44 (0)20 3286 5942

