Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Augustine Leudar wrote:


As far as I know the KEMAR head is the average human head shape taken from
thousands of measurements - http://kemar.us/   I'm not quite sure why an HRTF
from a KEMAR head would be less desirable than averaging 50 separate
measurements of real human heads. I don't know how well KEMAR captures
the delicate folds of the pinna, though.
 



Interesting thoughts...

It could very well be that HRTFs correspond (in a rough form) to some 
simple anthropometric features, such as head size ("sphere diameter"), 
ear distance, etc. You could argue that ITD and ILD values depend 
on these parameters in a fairly linear way.
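For what it's worth, the near-linear dependence of ITD on head size can be illustrated with the classic Woodworth rigid-sphere approximation (a textbook model, not anything specific to Amber or KEMAR; the 8.75 cm "average" radius and 343 m/s speed of sound are assumed round figures):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def woodworth_itd(head_radius_m, azimuth_deg):
    """Interaural time difference (seconds) for a rigid spherical head.

    Woodworth's approximation: ITD = (a/c) * (sin(theta) + theta),
    for source azimuths between 0 and 90 degrees off the median plane.
    Note ITD scales exactly linearly with head radius a.
    """
    theta = math.radians(azimuth_deg)
    return head_radius_m / SPEED_OF_SOUND * (math.sin(theta) + theta)

# A source fully to the side, for an assumed "average" ~8.75 cm radius:
itd = woodworth_itd(0.0875, 90.0)
print(round(itd * 1e6))  # -> 656 (microseconds)
```

A 10% larger head radius gives a 10% larger ITD at every azimuth, which is the "pretty linear" dependence on head size mentioned above; pinna effects are not captured by any such sphere model.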


The pinna's influence on HRTFs should be more chaotic, in my view. 
(Resonances and "shadows", for example.)


In any case a binaural decoder has to use HRTFs, which will depend on 
anthropometric features - but in a complicated way. It is therefore a 
better approach to average an HRTF base (as Amber HRTF does) than to 
average across some anthropometric base (the KEMAR mannequin).


In the end you could compare Amber HRTFs (or similarly derived 
universal HRTF sets...) directly to KEMAR HRTFs, in practical 
performance tests.


For the reasons given, I would assume that MIT's KEMAR model (as a 
"generic reference model") is finally beaten.


Best regards,

Stefan

P.S.: The only thing missing now is to present some form of (open 
source) Amber HRTF set.


See 1st thread posting...  





On 24 January 2016 at 21:47, Marc Lavallée  wrote:

 


On Sun, 24 Jan 2016 16:35:03 -0500, I wrote:
   


There could be a way to find a "best match" from sets of measured
heads, maybe using 3D scanning, or with simple measurements. For
example, spherical HRTF models have different diameters, which is a
good start.

There's a paper on "Rapid Generation of Personalized HRTFs":
http://www.aes.org/e-lib/browse.cfm?elib=17365
 


A quick Google search for "hrtf 3d scanning" returns more interesting results:


http://www.technologyreview.com/news/527826/microsofts-3-d-audio-gives-virtual-objects-a-voice/

http://dsp.eng.fiu.edu/HRTFDB/main.htm


http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/169.pdf

http://www.computer.org/csdl/proceedings/icisce/2015/6850/00/6850a225.pdf

https://hal.inria.fr/inria-00606814/document

http://www-users.york.ac.uk/~ait1/head.htm

--
Marc
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
edit account or options, view archives and so on.

   





 





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber
This is all back to an academic discussion by now, even if this thread 
started with some practical implementations, and some reflections from 
Richard Furse about HRTFs:


We provide "generic" HRTF models (for instance, our Amber HRTF 
<http://www.blueripplesound.com/hrtf-amber>) which work well for many 
people, but even better results can be achieved using personalized 
HRTF measurements. 


I have asked some clear, practical, application-centric questions.

Anyway, I will go back to the top...

Best,

Stefan



Peter Lennox wrote:


Told you Brian was doing something in this line.


Brian, can you say more about learning effects / training periods in respect of 
hrtf sets?
Dr. Peter Lennox
Senior Fellow of the Higher Education Academy
Senior Lecturer in Perception
College of Arts
University of Derby

Tel: 01332 593155

From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Brian FG Katz 
[brian.k...@limsi.fr]
Sent: 25 January 2016 16:45
To: sursound@music.vt.edu
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

   B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
   G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 HRTFs from the above
study and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings.

   A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095.

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex
France
Phone: +33 (0)1 69 85 80 67 - Fax: +33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/


The University of Derby has a published policy regarding email and reserves the 
right to monitor email traffic. If you believe this was sent to you in error, 
please select unsubscribe.

Unsubscribe and Security information contact:   info...@derby.ac.uk
For all FOI requests please contact:   f...@derby.ac.uk
All other Contacts are at http://www.derby.ac.uk/its/contacts/

 





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Augustine Leudar
I'm pretty sure we recalibrate all the time - I mean, it's not like we
suddenly lose all ability to localise in the median plane just because we
get a haircut or clean our ears out. Also, I'd be willing to bet that
boxers and wrestlers who get cauliflower ear do not have significantly
impaired localisation. Still waiting on a grant for that one...

On 25 January 2016 at 12:45, umashankar manthravadi 
wrote:

> I may be wrong, but about 50 years ago in Scientific American, I read an
> article about how Békésy measured the inter-diaphragm spacing of various
> mammals (including an elephant) and found it surprisingly constant. If the
> spacing between diaphragms is constant, wouldn't that simplify the design
> of synthesized HRTFs?
>
> umashankar
>
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
> From: Bo-Erik Sandholm<mailto:bosses...@gmail.com>
> Sent: Monday, January 25, 2016 5:15 PM
> To: sursound<mailto:sursound@music.vt.edu>
> Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?
>
> Just a short note: my wish list for what I think could be a good way of
> doing binaural encoding is to use these parameters:
>
> - the distance between the ears (head size) is the most important factor,
> so maybe 5 sizes to choose from. (I have a larger inter-ear distance than
> the norm)
>
> - use only simple generic compensation for ear shape above ~4kHz.
>
> - the shoulder reflection controlled by head-tracking data; the simplest
> way is to assume the listener is stationary and only turns his head. Could
> this be implemented as a parametrically controlled filter set?
>
> Can anyone create a binaural encoding using this?
>
> I think the shoulder compensation is something that has not been done.
> As far as I know, all binaural encodings are done using data sets with a
> fixed head and shoulder position.
>
> Best regards
> Bo-Erik Sandholm
> Stockholm



-- 
www.augustineleudar.com


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Joseph Anderson
Hi All,

Just an echo... the ATK web page is down at the moment. (Has been since
before Christmas. Argh!)

The problem appears to be something just a bit more than 'trivial'. We're
working on it, and hopefully will have the site live soon. Apologies to
everyone for the inconvenience!


My kind regards,


*Joseph Anderson*



*http://www.ambisonictoolkit.net/ <http://www.ambisonictoolkit.net/>*


On Sun, Jan 24, 2016 at 5:24 PM, Marc Lavallée  wrote:

> On Mon, 25 Jan 2016 01:04:30 +,
> Stefan Schreiber  wrote :
>
> > >ATK provides HRTF data as wav files, ready to be used with
> > >convolvers. It's a "ready to use", application-specific format.
> > >
> > In which (angular) resolution, etc.? Azimuth/elevation resolution?
>
> The wav files are impulse response files, to be used with convolvers
> in order to render FOA streams to binaural. That's all I know.
>
> Unfortunately, http://www.ambisonictoolkit.net/ is down.
>
> --
> Marc
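The quoted description - FOA streams rendered to binaural by convolving B-format components with impulse-response wav files - can be sketched roughly as below. The `(4, 2, L)` array layout (component x ear x IR length) is illustrative only, not ATK's actual file organization:

```python
import numpy as np

def foa_to_binaural(foa, hrirs):
    """Render a first-order Ambisonic (B-format) signal to binaural.

    foa   : (4, n) array of W, X, Y, Z sample frames.
    hrirs : (4, 2, L) array -- one left/right impulse-response pair per
            B-format component (assumed layout, for illustration).

    Each component is convolved with its per-ear IR; the four results
    are summed into a 2-channel (left, right) output.
    """
    n_out = foa.shape[1] + hrirs.shape[2] - 1  # full convolution length
    out = np.zeros((2, n_out))
    for comp in range(4):          # W, X, Y, Z
        for ear in range(2):       # left, right
            out[ear] += np.convolve(foa[comp], hrirs[comp, ear])
    return out
```

Note that the angular-resolution question asked above then concerns how the IR set was derived; once the HRTF data is baked into per-component IRs like this, the decoder itself has no explicit azimuth/elevation grid.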


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Marc Lavallée wrote:


Current consumer-grade VR systems are technically imperfect; we can see
the pixels, there are lens aberrations and color distortions, the field of
view is limited, etc. Paired with imperfect headphones to listen
to imperfect HRTF-rendered content, the experience should be good
enough, because it is new. We may care more about "hi-fi" VR audio in a
few years, because most of the early efforts will be spent on enhancing
the visual experience. So, yes: head-tracked audio is, at this point,
much more important than perfectly matched HRTFs, and it could fit the
bill for a long time.
--
Marc
 



But Sennheiser's aim is also to present surround music via headphones, 
so we still need quality solutions...


St.


On Mon, 25 Jan 2016 13:15:10 +,
Dave Malham  wrote :

 


At this stage, one has to wonder just how much need there really is
for matching HRTFs. I'm not so convinced these days as I once was, at
least if head tracking is properly implemented, the few "fixed"
parameters are matched (like inter-ear distances) and there are good
visual cues, as there are in modern VR systems.

   Dave

On 25 January 2016 at 12:58, Steven Boardman
 wrote:

   


We can definitely re-learn/reprogram our perception of stimulus, as
long as there are other cues to back it up.
This has been proven via glasses that alter the orientation of the
visual field. After a while the brain adapts, and presents this as
normal. I think that once learned, your brain can remember this too…

https://en.wikipedia.org/wiki/Perceptual_adaptation

Indeed, I have found this to be true of different HRTFs too,
using my visual panning cues to learn the HRTF.
This is even easier with head-tracked VR video.

Steve
 




 





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Dave Malham wrote:


At this stage, one has to wonder just how much need there really is for
matching HRTFs. I'm not so convinced these days as I once was, at least if
head tracking is properly implemented, the few "fixed" parameters are
matched (like inter-ear distances) and there are good visual cues, as there
are in modern VR systems.

   Dave
 



But this assumption could actually be investigated/studied, couldn't it?

Best,

Stefan

P.S.: So, the VR companies will solve the questions for which there is 
no university or AES paper?



On 25 January 2016 at 12:58, Steven Boardman 
wrote:

 


We can definitely re-learn/reprogram our perception of stimulus, as long
as there are other cues to back it up.
This has been proven via glasses that alter the orientation of the visual
field. After a while the brain adapts, and presents this as normal.
I think that once learned, your brain can remember this too…

https://en.wikipedia.org/wiki/Perceptual_adaptation

Indeed, I have found this to be true of different HRTFs too, using my
visual panning cues to learn the HRTF.
This is even easier with head-tracked VR video.

Steve

On 25 Jan 2016, at 12:40, Chris  wrote:

   


Maybe a silly question...

But how much work has been done on the self-consistency of HRTFs? I'm
aware that ear-wax, colds, which way round I sleep, etc. can affect the
level and HF response of one ear to another. And clothing, haircuts etc.
must significantly change the acoustic signal round our heads.

So are measured HRTFs consistent over time? Or do we re-calibrate
ourselves on a continuous basis?

If the latter is true, then I can see that a generic HRTF could work if
we were given some method (and time for) calibration.

Chris Woolf

On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
 



   





 





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Bo-Erik Sandholm wrote:



I think the shoulder compensation is something that has not been done.
As far as I know, all binaural encodings are done using data sets with a
fixed head and shoulder position.
 



I would call the general case (Sandholm) HRTFs of 2nd order...   ;-)


Best,

St.



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Chris

"I would like to (shamelessly) promote two studies..."

Nothing shameful about answering a question very usefully! Thank you.

Chris Woolf


On 25-Jan-16 16:45, Brian FG Katz wrote:

Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 HRTFs from the above
study and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings.

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095.

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex
France
Phone: +33 (0)1 69 85 80 67 - Fax: +33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/








Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Peter Lennox
Told you Brian was doing something in this line.


Brian, can you say more about learning effects / training periods in respect of 
hrtf sets?
Dr. Peter Lennox
Senior Fellow of the Higher Education Academy
Senior Lecturer in Perception
College of Arts
University of Derby

Tel: 01332 593155

From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Brian FG Katz 
[brian.k...@limsi.fr]
Sent: 25 January 2016 16:45
To: sursound@music.vt.edu
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 HRTFs from the above
study and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings.

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095.

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex
France
Phone: +33 (0)1 69 85 80 67 - Fax: +33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Brian FG Katz
Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 HRTFs from the above
study and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings. 

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095. 

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex 
France
Phone: +33 (0)1 69 85 80 67 - Fax: +33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/



Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread Courville, Daniel
On 16-01-25 08:00, Courville, Daniel wrote:

>http://content.jwplatform.com/previews/C05CXB7o-ERPbx32c

On YouTube: https://www.youtube.com/watch?v=mmYHgzCdXi8



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread David Pickett

At 15:55 25-01-16, Bo-Erik Sandholm wrote:

>Does recording with in ear mics work for you?

Binaural with headphones never really works for me, though sometimes 
fig-8 Blumlein stuff does on ambient noises; but I had an experience 
with in-ear mics once -- c. 1990.  The mics were in the ears of 
somebody else, who was sitting in front of me while I conducted an 
orchestra rehearsal.  This was an experiment associated with SynAudCon.  
It was played back to me through loudspeakers on either side of 
me.  I had the impression of "being there" and could hear first 
violins in a line to the left and seconds in a line to the right, etc.


>Have you tried listening for a long time with closed eyes or in the dark?
>Stereo listening is known to work better in the dark, when sight does
>not interfere with the soundscape.

No. I have no problems with stereo at all, particularly Blumlein fig-8 
stereo.  However, I notice that I often turn my eyes off when 
listening -- particularly when listening for a specific edit point.


David



Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread mgraves
 
 
- Original Message - Subject: Re: [Sursound] Sennheiser Easy 3D 
Recording and Modeling
From: "David Pickett" 
Date: 1/25/16 8:08 am
To: "Surround Sound discussion group" 

At 14:21 25-01-16, Courville, Daniel wrote:
 >
 >http://www.videomaker.com/videonews/2016/01/best-microphone-of-ces-2016-sennheiser-vr-mic
 >
 >This says four caps. OK.
 >

 If that's the case, it looks like a copy of the SF Mike. How can the 
 article say: "This is the first VR Mic we've seen..." ?



 Perhaps because it's written by someone very young and inexperienced?
 
Or perhaps because the lay person doesn't equate "VR" with "surround."
 
Michael Graves
 mgra...@mstvp.com
http://www.mgraves.org
o(713) 861-4005
 c(713) 201-1262
 sip:mgra...@mjg.onsip.com
 skype mjgraves


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Bo-Erik Sandholm
Does recording with in-ear mics work for you?
Have you tried listening for a long time with closed eyes or in the dark?
Stereo listening is known to work better in the dark, when sight does
not interfere with the soundscape.
On 25 Jan 2016 15:11, "David Pickett"  wrote:

> At 13:40 25-01-16, Chris wrote:
>
>> Maybe a silly question...
>>
>> But how much work has been done on the self-consistency of HRTFs? I'm
>> aware that ear-wax, colds, which way round I sleep, etc can affect the
>> level and HF response of one ear to another. And clothing, haircuts etc
>> must significantly change the acoustic signal round our heads.
>>
>> So are measured HRTFs consistent over time? Or do we re-calibrate
>> ourselves on a continuous basis?
>>
>> If the latter is true, then I can see that a generic HRTF could work if
>> we were given some method (and time for) calibration.
>>
>
> Forgive me if I have missed something, but I thought the whole point was
> that generic HRTFs don't work -- they don't work for me...
>
> David


Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread David Pickett

At 14:26 25-01-16, umashankar manthravadi wrote:
>
>But a tetrahedron set up as top and three sides - I built one like
>that too, on the surface of a hemisphere.

Yes, and although the signals can fairly simply 
be converted to A-format, there is perhaps a 
difference in practical pick-up, and maybe this 
configuration actually gives better Ambisonic 
results than the SF mike.  Unlike the SF mike, it 
is completely "symmetrical" in the horizontal and vertical planes.


David



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread David Pickett

At 13:40 25-01-16, Chris wrote:

Maybe a silly question...

But how much work has been done on the self-consistency of HRTFs? 
I'm aware that ear-wax, colds, which way round I sleep, etc can 
affect the level and HF response of one ear to another. And 
clothing, haircuts etc must significantly change the acoustic signal 
round our heads.


So are measured HRTFs consistent over time? Or do we re-calibrate 
ourselves on a continuous basis?


If the latter is true, then I can see that a generic HRTF could work 
if we were given some method (and time for) calibration.


Forgive me if I have missed something, but I thought the whole point 
was that generic HRTFs don't work -- they don't work for me...


David 




Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread David Pickett

At 14:21 25-01-16, Courville, Daniel wrote:
>
>http://www.videomaker.com/videonews/2016/01/best-microphone-of-ces-2016-sennheiser-vr-mic
>
>This says four caps. OK.
>

If that's the case, it looks like a copy of the SF Mike.  How can the 
article say: "This is the first VR Mic we've seen..." ?


David



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Marc Lavallée

Current consumer-grade VR systems are technically imperfect; we can see
the pixels, there are lens aberrations and color distortions, the field of
view is limited, etc. Paired with imperfect headphones to listen
to imperfect HRTF-rendered content, the experience should be good
enough, because it is new. We may care more about "hi-fi" VR audio in a
few years, because most of the early efforts will be spent on enhancing
the visual experience. So, yes: head-tracked audio is, at this point,
much more important than perfectly matched HRTFs, and it could fit the
bill for a long time.
--
Marc

On Mon, 25 Jan 2016 13:15:10 +,
Dave Malham  wrote :

> At this stage, one has to wonder just how much need there really is
> for matching HRTFs. I'm not so convinced these days as I once was, at
> least if head tracking is properly implemented, the few "fixed"
> parameters are matched (like inter-ear distances) and there are good
> visual cues as there are in modern VR systems.
> 
> Dave
> 
> On 25 January 2016 at 12:58, Steven Boardman
>  wrote:
> 
> > We can definitely re-learn/reprogram our perception of stimulus, as
> > long as there are other cues to back it up.
> > This has been proven via glasses that alter the orientation of the
> > visual field. After a while the brain adapts, and presents this as
> > normal. I think that once learned, your brain can remember this too…
> >
> > https://en.wikipedia.org/wiki/Perceptual_adaptation
> >
> > Indeed I have found this to be true of different HRTFs too,
> > using my visual panning cues to learn the HRTF.
> > This is even easier with head tracked VR video.
> >
> > Steve




Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread Dave Malham
Yeah - I just pulled up the last image Daniel posted in Gimp and it's
clearly a tetrahedron, albeit perhaps a bit more widely spaced than is ideal.

   Dave

On 25 January 2016 at 13:24, umashankar manthravadi 
wrote:

> That is a tetrahedron! After all my struggles making one, I can
> recognize it in my sleep, in pitch darkness.
>
> umashankar
>
> Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for
> Windows 10
>
> From: Courville, Daniel<mailto:courville.dan...@uqam.ca>
> Sent: Monday, January 25, 2016 6:47 PM
> To: Sursound<mailto:sursound@music.vt.edu>
> Subject: Re: [Sursound] Sennheiser Easy 3D Recording and Modeling
>
> Le 16-01-25 08:00, Courville, Daniel a écrit :
>
> >From what we're seeing, the Ambeo mic looks like it has more than four
> capsules.
>
> Still not sure:
>
>
> http://static.videomaker.com/sites/videomaker.com/files/styles/blog_post_primary/public/videonews/2016/01/IMG_1524%20Cropped.jpg



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Peter Lennox
Dave - yes - you could have made that point; you were the first person I 
observed to exhibit this 'training period', in that you could hear more detail 
in an ambisonic recording than most, because (I think) of prolonged exposure.

Training periods are well known in psychology experimentation, and Kopco and 
Shinn-Cunningham did quite a bit on listening in different rooms, finding 
(amongst other things) that auditory spatial perception showed performance 
improvement over a period (a couple of hours to reach asymptote, I think), and 
these improvements carried over to the next day and, oddly, to quite different 
locations in the same room.

I suspect it's the same principle as the 'golden pinnae' experiments, where 
subjects can (after a training period) achieve results with others' pinnae 
equivalent to their own - and, on occasion, better results!

So a small range of well-chosen HRTFs ought to suffice for the majority of the 
population, provided there is opportunity for appropriate training periods.

Isn't Brian Katz doing something on this?
cheers (must get back to my marking, and stop prevaricating)

Dr. Peter Lennox
Senior Fellow of the Higher Education Academy
Senior Lecturer in Perception
College of Arts
University of Derby

Tel: 01332 593155

From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Dave Malham 
[dave.mal...@york.ac.uk]
Sent: 25 January 2016 13:04
To: ch...@chriswoolf.co.uk; Surround Sound discussion group
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Chris makes some very good points, ones that I wish I'd made myself! We
must be continuously recalibrating our hearing to be able to deal with all
the effects Chris mentions, otherwise the conflict between the physical
sense of hearing and our internal perceptual models would become too
great.

   Dave

On 25 January 2016 at 12:40, Chris  wrote:

> Maybe a silly question...
>
> But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
>
> So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
>
> If the latter is true, then I can see that a generic HRTF could work if we
> were given some method (and time for) calibration.
>
> Chris Woolf
>
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>
>> Just a short note, my wish list for what I think could be a good way of
>> doing binaural coding is to use these parameters:
>>
>> - the distance between the ears (head size) is the most important factor so
>> maybe 5 sizes to choose from. ( I have a larger inter ear distance than
>> the
>> norm)
>>
>> - use only simple generic compensation for ear shape above ~4kHz.
>>
>> - the shoulder reflection controlled by head tracking data, the simplest
>> way is to assume the listener is stationary and only turns his head. Could
>> this be implemented as a parametrically controlled filter set?
>>
>> Can anyone create a binaural encoding using this?
>>
>> I think the shoulder compensation is something that has not been done.
>> As far as I know all binaural encodings are done using data sets with a
>> fixed head and shoulder position.
>>
>> Best regards
>> Bo-Erik Sandholm
>> Stockholm



--

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'

Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread umashankar manthravadi
But a tetrahedron set up with one capsule on top and three at the sides – I 
built one like that too, on the surface of a hemisphere.

umashankar

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Courville, Daniel<mailto:courville.dan...@uqam.ca>
Sent: Monday, January 25, 2016 6:52 PM
To: Sursound<mailto:sursound@music.vt.edu>
Subject: Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

http://www.videomaker.com/videonews/2016/01/best-microphone-of-ces-2016-sennheiser-vr-mic

This says four caps. OK.

Le 16-01-25 08:17, Courville, Daniel a écrit :

>Le 16-01-25 08:00, Courville, Daniel a écrit :
>
>>From what we're seeing, the Ambeo mic looks like it has more than four
>>capsules.
>
>Still not sure:
>
>http://static.videomaker.com/sites/videomaker.com/files/styles/blog_post_primary/public/videonews/2016/01/IMG_1524%20Cropped.jpg




Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread umashankar manthravadi
That is a tetrahedron! After all my struggles making one, I can recognize it 
in my sleep, in pitch darkness.

umashankar

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Courville, Daniel<mailto:courville.dan...@uqam.ca>
Sent: Monday, January 25, 2016 6:47 PM
To: Sursound<mailto:sursound@music.vt.edu>
Subject: Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

Le 16-01-25 08:00, Courville, Daniel a écrit :

>From what we're seeing, the Ambeo mic looks like it has more than four
>capsules.

Still not sure:

http://static.videomaker.com/sites/videomaker.com/files/styles/blog_post_primary/public/videonews/2016/01/IMG_1524%20Cropped.jpg


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Steven Boardman
Me too.

Personally, if all the HRTF databases contained this individual data, i.e. 
head width, height, ear distance from shoulders, and a picture of the pinnae, 
one could select the nearest to one's own, or have a system that suggests a 
few to choose from. Listen for 24 hours, then done…
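The selection step described here is essentially a nearest-neighbour match over anthropometric features. A minimal sketch, assuming a hypothetical database (the subject names, measurements, and plain unweighted Euclidean distance are all illustrative, not taken from any real HRTF set):

```python
import math

# hypothetical entries: (name, head_width_m, head_height_m, ear_to_shoulder_m)
DATABASE = [
    ("subject_003", 0.145, 0.220, 0.165),
    ("subject_017", 0.155, 0.235, 0.175),
    ("subject_042", 0.160, 0.225, 0.180),
]

def nearest_hrtf(head_width, head_height, ear_to_shoulder, k=3):
    """Return the k database subjects whose measurements are closest to
    the listener's (plain Euclidean distance over the three features)."""
    def dist(entry):
        _, w, h, s = entry
        return math.sqrt((w - head_width) ** 2 +
                         (h - head_height) ** 2 +
                         (s - ear_to_shoulder) ** 2)
    return [name for name, *_ in sorted(DATABASE, key=dist)[:k]]
```

A real system would presumably weight the features by perceptual importance (inter-ear distance mattering most) and present the top few candidates for a short listening test, as suggested above.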

Steve




On 25 Jan 2016, at 13:15, Dave Malham  wrote:

> At this stage, one has to wonder just how much need there really is for
> matching HRTFs. I'm not so convinced these days as I once was, at least if
> head tracking is properly implemented, the few "fixed" parameters are
> matched (like inter-ear distances) and there are good visual cues as there
> are in modern VR systems.
> 
>Dave
> 
> On 25 January 2016 at 12:58, Steven Boardman 
> wrote:
> 
>> We can definitely re-learn/reprogram our perception of stimulus, as long
>> as there are other cues to back it up.
>> This has been proven via glasses that alter the orientation of the visual
>> field. After a while the brain adapts, and presents this as normal.
>> I think that once learned, your brain can remember this too…
>> 
>> https://en.wikipedia.org/wiki/Perceptual_adaptation
>> 
>> Indeed I have found this to be true of different HRTFs too, using my
>> visual panning cues to learn the HRTF.
>> This is even easier with head tracked VR video.
>> 
>> Steve
>> 
>> On 25 Jan 2016, at 12:40, Chris  wrote:
>> 
>>> Maybe a silly question...
>>> 
>>> But how much work has been done on the self-consistency of HRTFs? I'm
>> aware that ear-wax, colds, which way round I sleep, etc can affect the
>> level and HF response of one ear to another. And clothing, haircuts etc
>> must significantly change the acoustic signal round our heads.
>>> 
>>> So are measured HRTFs consistent over time? Or do we re-calibrate
>> ourselves on a continuous basis?
>>> 
>>> If the latter is true, then I can see that a generic HRTF could work if
>> we were given some method (and time for) calibration.
>>> 
>>> Chris Woolf
>>> 
>>> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>> 
> 
> 
> 
> -- 
> 
> As of 1st October 2012, I have retired from the University.
> 
> These are my own views and may or may not be shared by the University
> 
> Dave Malham
> Honorary Fellow, Department of Music
> The University of York
> York YO10 5DD
> UK
> 
> 'Ambisonics - Component Imaging for Audio'



Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread Courville, Daniel
http://www.videomaker.com/videonews/2016/01/best-microphone-of-ces-2016-sennheiser-vr-mic

This says four caps. OK.

Le 16-01-25 08:17, Courville, Daniel a écrit :

>Le 16-01-25 08:00, Courville, Daniel a écrit :
>
>>From what we're seeing, the Ambeo mic looks like it has more than four
>>capsules.
>
>Still not sure:
>
>http://static.videomaker.com/sites/videomaker.com/files/styles/blog_post_primary/public/videonews/2016/01/IMG_1524%20Cropped.jpg




Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread Courville, Daniel
Le 16-01-25 08:00, Courville, Daniel a écrit :

>From what we're seeing, the Ambeo mic looks like it has more than four
>capsules.

Still not sure:

http://static.videomaker.com/sites/videomaker.com/files/styles/blog_post_primary/public/videonews/2016/01/IMG_1524%20Cropped.jpg


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Dave Malham
At this stage, one has to wonder just how much need there really is for
matching HRTFs. I'm not so convinced these days as I once was, at least if
head tracking is properly implemented, the few "fixed" parameters are
matched (like inter-ear distances) and there are good visual cues as there
are in modern VR systems.

Dave

On 25 January 2016 at 12:58, Steven Boardman 
wrote:

> We can definitely re-learn/reprogram our perception of stimulus, as long
> as there are other cues to back it up.
> This has been proven via glasses that alter the orientation of the visual
> field. After a while the brain adapts, and presents this as normal.
> I think that once learned, your brain can remember this too…
>
> https://en.wikipedia.org/wiki/Perceptual_adaptation
>
> Indeed I have found this to be true of different HRTFs too, using my
> visual panning cues to learn the HRTF.
> This is even easier with head tracked VR video.
>
> Steve
>
> On 25 Jan 2016, at 12:40, Chris  wrote:
>
> > Maybe a silly question...
> >
> > But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
> >
> > So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
> >
> > If the latter is true, then I can see that a generic HRTF could work if
> we were given some method (and time for) calibration.
> >
> > Chris Woolf
> >
> > On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Dave Malham
Chris makes some very good points, ones that I wish I'd made myself! We
must be continuously recalibrating our hearing to be able to deal with all
the effects Chris mentions, otherwise the conflict between the physical
sense of hearing and our internal perceptual models would become too
great.

   Dave

On 25 January 2016 at 12:40, Chris  wrote:

> Maybe a silly question...
>
> But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
>
> So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
>
> If the latter is true, then I can see that a generic HRTF could work if we
> were given some method (and time for) calibration.
>
> Chris Woolf
>
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>
>> Just a short note, my wish list for what I think could be a good way of
>> doing binaural coding is to use these parameters:
>>
>> - the distance between the ears (head size) is the most important factor so
>> maybe 5 sizes to choose from. ( I have a larger inter ear distance than
>> the
>> norm)
>>
>> - use only simple generic compensation for ear shape above ~4kHz.
>>
>> - the shoulder reflection controlled by head tracking data, the simplest
>> way is to assume the listener is stationary and only turns his head. Could
>> this be implemented as a parametrically controlled filter set?
>>
>> Can anyone create a binaural encoding using this?
>>
>> I think the shoulder compensation is something that has not been done.
>> As far as I know all binaural encodings are done using data sets with a
>> fixed head and shoulder position.
>>
>> Best regards
>> Bo-Erik Sandholm
>> Stockholm
>



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] Sennheiser Easy 3D Recording and Modeling

2016-01-25 Thread Courville, Daniel
http://content.jwplatform.com/previews/C05CXB7o-ERPbx32c




Interesting that in the video, the first word out of the Sennheiser 
representative is "Ambisonics".

From what we're seeing, the Ambeo mic looks like it has more than four 
capsules. One thing is sure: Sennheiser kept their Ambisonics cards close to 
the chest...

Maybe now AKG will reconsider making their Ambisonics mic with triangular 
capsules...


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Steven Boardman
We can definitely re-learn/reprogram our perception of stimulus, as long as 
there are other cues to back it up.
This has been proven via glasses that alter the orientation of the visual 
field. After a while the brain adapts, and presents this as normal.
I think that once learned, your brain can remember this too…

https://en.wikipedia.org/wiki/Perceptual_adaptation

Indeed I have found this to be true of different HRTFs too, using my visual 
panning cues to learn the HRTF.
This is even easier with head tracked VR video.

Steve

On 25 Jan 2016, at 12:40, Chris  wrote:

> Maybe a silly question...
> 
> But how much work has been done on the self-consistency of HRTFs? I'm aware 
> that ear-wax, colds, which way round I sleep, etc can affect the level and HF 
> response of one ear to another. And clothing, haircuts etc must significantly 
> change the acoustic signal round our heads.
> 
> So are measured HRTFs consistent over time? Or do we re-calibrate ourselves 
> on a continuous basis?
> 
> If the latter is true, then I can see that a generic HRTF could work if we 
> were given some method (and time for) calibration.
> 
> Chris Woolf
> 
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread umashankar manthravadi
I may be wrong, but about 50 years ago, in Scientific American, I read an 
article about how Békésy measured the inter-diaphragm spacing of various 
mammals (including an elephant) and found it surprisingly constant. If the 
spacing between diaphragms is constant, would that not simplify the design of 
synthesized HRTFs?

umashankar

Sent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10

From: Bo-Erik Sandholm<mailto:bosses...@gmail.com>
Sent: Monday, January 25, 2016 5:15 PM
To: sursound<mailto:sursound@music.vt.edu>
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Just a short note, my wish list for what I think could be a good way of
doing binaural coding is to use these parameters:

- the distance between the ears (head size) is the most important factor so
maybe 5 sizes to choose from. ( I have a larger inter ear distance than the
norm)

- use only simple generic compensation for ear shape above ~4kHz.

- the shoulder reflection controlled by head tracking data, the simplest
way is to assume the listener is stationary and only turns his head. Could
this be implemented as a parametrically controlled filter set?

Can anyone create a binaural encoding using this?

I think the shoulder compensation is something that has not been done.
As far as I know all binaural encodings are done using data sets with a
fixed head and shoulder position.

Best regards
Bo-Erik Sandholm
Stockholm


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Chris

Maybe a silly question...

But how much work has been done on the self-consistency of HRTFs? I'm 
aware that ear-wax, colds, which way round I sleep, etc. can affect the 
level and HF response of one ear relative to the other. And clothing, 
haircuts etc. must significantly change the acoustic signal round our heads.


So are measured HRTFs consistent over time? Or do we re-calibrate 
ourselves on a continuous basis?


If the latter is true, then I can see that a generic HRTF could work if 
we were given some method (and time for) calibration.


Chris Woolf

On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:

Just a short note, my wish list for what I think could be a good way of
doing binaural coding is to use these parameters:

- the distance between the ears (head size) is the most important factor so
maybe 5 sizes to choose from. ( I have a larger inter ear distance than the
norm)

- use only simple generic compensation for ear shape above ~4kHz.

- the shoulder reflection controlled by head tracking data, the simplest
way is to assume the listener is stationary and only turns his head. Could
this be implemented as a parametrically controlled filter set?

Can anyone create a binaural encoding using this?

I think the shoulder compensation is something that has not been done.
As far as I know all binaural encodings are done using data sets with a
fixed head and shoulder position.

Best regards
Bo-Erik Sandholm
Stockholm







Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Bo-Erik Sandholm
Just a short note: my wish list for what I think could be a good way of
doing binaural coding is to use these parameters:

- the distance between the ears (head size) is the most important factor, so
maybe 5 sizes to choose from (I have a larger inter-ear distance than the
norm);

- use only simple generic compensation for ear shape above ~4 kHz;

- the shoulder reflection controlled by head-tracking data; the simplest
way is to assume the listener is stationary and only turns his head. Could
this be implemented as a parametrically controlled filter set?

Can anyone create a binaural encoding using this?

I think the shoulder compensation is something that has not been done.
As far as I know, all binaural encodings are done using data sets with a
fixed head and shoulder position.

Best regards
Bo-Erik Sandholm
Stockholm
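The first and third items on this wish list can be sketched numerically: an ITD that scales with head size (here via Woodworth's spherical-head approximation) and a shoulder-reflection delay modulated by head-tracker yaw. This is a rough illustration only; the list of head radii and the 0.1 modulation depth are assumptions, not measured values:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def itd_woodworth(azimuth_deg, head_radius_m=0.0875):
    """Interaural time difference (s) from the Woodworth spherical-head
    model: ITD = (a / c) * (theta + sin(theta)), theta = source azimuth."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / SPEED_OF_SOUND * (theta + math.sin(theta))

# the suggested "5 sizes to choose from" could simply be five head radii
HEAD_RADII_M = [0.080, 0.0875, 0.095, 0.1025, 0.110]

def shoulder_reflection_delay(head_yaw_deg, ear_to_shoulder_m=0.17):
    """Crude extra-path delay (s) for a shoulder reflection, assuming a
    stationary torso and a head that only yaws, so the ear-to-shoulder
    path length is modulated by the head-tracking angle.  The 0.1
    modulation depth is a placeholder, not a measured value."""
    base = 2.0 * ear_to_shoulder_m / SPEED_OF_SOUND
    return base * (1.0 + 0.1 * math.sin(math.radians(head_yaw_deg)))
```

For an average head (radius about 8.75 cm), a source at 90 degrees gives an ITD of roughly 0.66 ms, which is within the usually quoted range; a larger-than-norm inter-ear distance simply picks a bigger radius from the list.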