Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-30 Thread umashankar manthravadi
This might interest some
http://patents.stackexchange.com/questions/4569/virtual-loudspeakers-over-headphones-using-head-tracking-issued-patent-prior

umashankar


From: Markus Noisternig<mailto:markus.noister...@ircam.fr>
Sent: Tuesday, January 26, 2016 7:02 PM
To: Surround Sound discussion group<mailto:sursound@music.vt.edu>
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Dear All,

I would like to add a few words to the discussion on the AES69-2015 / SOFA 
format:

AES69 standardizes the SOFA file format to exchange space-related acoustic 
data. The format is designed to be sufficiently flexible to include source 
materials from different databases and for different use cases (e.g., HRIRs, 
MIMO-RIRs, etc.).

AES69 is split into two parts: (1) the main body of the text, which defines 
dimensions and general rules for creating so-called conventions; (2) 
'conventions' for a consistent description of particular setups in the annex.

A ‘convention’ defines recommendations on the naming of AES69 attributes, 
variables, and dimensions for particular application fields. In other standards 
a set of ‘conventions’ is often referred to as a ‘profile’. Conventions are 
discussed on the SOFA website (http://www.sofaconventions.org/). As soon as a 
new convention is considered as being consistent and stable, it will be added 
to the annex of the AES69 standard through the normal revision process.

In other words, if you want AES69 / SOFA to support ATK, feel free to open the 
discussion on a new set of conventions.

Open source APIs for Matlab, Octave, and C++ are available at 
http://sourceforge.net/projects/sofacoustics/. The API provides functionality 
to create, read, and write AES69 ‘.sofa’ files. You can freely download and use 
the APIs, in whole or in part, for personal or commercial purposes.

Best regards,

Markus

--
Markus Noisternig
Acoustics and Cognition Research Group
IRCAM, CNRS, Sorbonne Universities, UPMC
Paris, France

> On 26 Jan 2016, at 11:30, Trond Lossius <trond.loss...@bek.no> wrote:
>
>> On 25 Jan 2016, at 01:37, Marc Lavallée <m...@hacklava.net> wrote:
>>
>>> As anything simpler but functional might be sufficient and even
>>> preferable in most cases:
>>>
>>> - Does ATK define an HRTF interface which is sufficiently flexible to
>>> be the base for a real < standard > ?
>>
>> Not really, but you should ask the maintainers of ATK.
>
> I don’t think ATK makes sense as a standard. The ATK sets are pretty 
> application-specific: for each HRTF it contains 8 impulses so that each of 
> the WXYZ channels can be convolved to the left and right ear. These are 
> calculated as a reduction from larger sets of HRTF measurements. A general 
> HRTF measurement contains much more information (measurements for multiple 
> azimuths and elevations). As such SOFA seems to me an interesting move 
> towards standardisation.
>
> What would be useful, though, would be a standard solution for generating 
> impulses for ATK from SOFA data.





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Trond Lossius
> On 25 Jan 2016, at 01:37, Marc Lavallée  wrote:
> 
>> As anything simpler but functional might be sufficient and even 
>> preferable in most cases:
>> 
>> - Does ATK define an HRTF interface which is sufficiently flexible to
>> be the base for a real < standard > ?
> 
> Not really, but you should ask the maintainers of ATK.

I don’t think ATK makes sense as a standard. The ATK sets are pretty 
application-specific: for each HRTF it contains 8 impulses so that each of the 
WXYZ channels can be convolved to the left and right ear. These are calculated 
as a reduction from larger sets of HRTF measurements. A general HRTF measurement 
contains much more information (measurements for multiple azimuths and 
elevations). As such SOFA seems to me an interesting move towards 
standardisation.

What would be useful, though, would be a standard solution for generating 
impulses for ATK from SOFA data.
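
For concreteness, a minimal numpy sketch of one way such a reduction could 
work (the virtual-speaker layout and decode gains below are illustrative 
assumptions of mine, not ATK's actual ones):

    import numpy as np

    def nearest_hrir(pos, ir, az, el):
        # pick the measured HRIR pair whose direction is closest to (az, el);
        # pos: (M, 3) azimuth/elevation/distance, ir: (M, 2, N), e.g. from SOFA
        d = np.hypot((pos[:, 0] - az + 180.0) % 360.0 - 180.0, pos[:, 1] - el)
        return ir[np.argmin(d)]                   # (2, N): left and right ear

    def foa_binaural_kernels(pos, ir, speakers):
        # render W/X/Y/Z through virtual loudspeakers and fold the speaker
        # HRIRs back per ear; the result is the "8 impulses" described above
        kernels = np.zeros((4, 2, ir.shape[-1]))  # (channel, ear, samples)
        for az, el in speakers:
            a, e = np.radians(az), np.radians(el)
            # basic projection decode gains of W/X/Y/Z towards this speaker
            g = np.array([np.sqrt(0.5), np.cos(a) * np.cos(e),
                          np.sin(a) * np.cos(e), np.sin(e)]) / len(speakers)
            kernels += g[:, None, None] * nearest_hrir(pos, ir, az, el)[None]
        return kernels

    # hypothetical tetrahedral virtual array (not necessarily ATK's layout):
    # kernels = foa_binaural_kernels(pos, ir,
    #     [(45, 35.3), (135, -35.3), (-135, 35.3), (-45, -35.3)])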

Best,
Trond


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Markus Noisternig
Dear All, 

I would like to add a few words to the discussion on the AES69-2015 / SOFA 
format:

AES69 standardizes the SOFA file format to exchange space-related acoustic 
data. The format is designed to be sufficiently flexible to include source 
materials from different databases and for different use cases (e.g., HRIRs, 
MIMO-RIRs, etc.).  

AES69 is split into two parts: (1) the main body of the text, which defines 
dimensions and general rules for creating so-called conventions; (2) 
'conventions' for a consistent description of particular setups in the annex.

A ‘convention’ defines recommendations on the naming of AES69 attributes, 
variables, and dimensions for particular application fields. In other standards 
a set of ‘conventions’ is often referred to as a ‘profile’. Conventions are 
discussed on the SOFA website (http://www.sofaconventions.org/). As soon as a 
new convention is considered as being consistent and stable, it will be added 
to the annex of the AES69 standard through the normal revision process. 

In other words, if you want AES69 / SOFA to support ATK, feel free to open the 
discussion on a new set of conventions.

Open source APIs for Matlab, Octave, and C++ are available at 
http://sourceforge.net/projects/sofacoustics/. The API provides functionality 
to create, read, and write AES69 ‘.sofa’ files. You can freely download and use 
the APIs, in whole or in part, for personal or commercial purposes.
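
For anyone without Matlab or C++ at hand: AES69 builds on netCDF-4, so a 
'.sofa' file can also be read with a generic netCDF library. A minimal Python 
sketch (the file name is hypothetical; the variable names are those of the 
'SimpleFreeFieldHRIR' convention):

    from netCDF4 import Dataset

    sofa = Dataset('subject_001.sofa', 'r')      # hypothetical file name
    ir = sofa.variables['Data.IR'][:]            # (M measurements, R ears, N samples)
    pos = sofa.variables['SourcePosition'][:]    # (M, 3): azimuth, elevation, distance
    fs = float(sofa.variables['Data.SamplingRate'][0])
    print(ir.shape, pos.shape, fs)
    sofa.close()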

Best regards, 

Markus

-- 
Markus Noisternig 
Acoustics and Cognition Research Group 
IRCAM, CNRS, Sorbonne Universities, UPMC 
Paris, France 
 
> On 26 Jan 2016, at 11:30, Trond Lossius  wrote:
> 
>> On 25 Jan 2016, at 01:37, Marc Lavallée  wrote:
>> 
>>> As anything simpler but functional might be sufficient and even 
>>> preferable in most cases:
>>> 
>>> - Does ATK define an HRTF interface which is sufficiently flexible to
>>> be the base for a real < standard > ?
>> 
>> Not really, but you should ask the maintainers of ATK.
> 
> I don’t think ATK makes sense as a standard. The ATK sets are pretty 
> application-specific: for each HRTF it contains 8 impulses so that each of 
> the WXYZ channels can be convolved to the left and right ear. These are 
> calculated as a reduction from larger sets of HRTF measurements. A general 
> HRTF measurement contains much more information (measurements for multiple 
> azimuths and elevations). As such SOFA seems to me an interesting move 
> towards standardisation.
> 
> What would be useful, though, would be a standard solution for generating 
> impulses for ATK from SOFA data.





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Stefan Schreiber

Trond Lossius wrote:

>> On 25 Jan 2016, at 01:37, Marc Lavallée  wrote:
>>
>>> As anything simpler but functional might be sufficient and even
>>> preferable in most cases:
>>>
>>> - Does ATK define an HRTF interface which is sufficiently flexible to
>>> be the base for a real < standard > ?
>>
>> Not really, but you should ask the maintainers of ATK.
>
> I don’t think ATK makes sense as a standard. The ATK sets are pretty
> application-specific: for each HRTF it contains 8 impulses so that each of
> the WXYZ channels can be convolved to the left and right ear. These are
> calculated as a reduction from larger sets of HRTF measurements. A general
> HRTF measurement contains much more information (measurements for multiple
> azimuths and elevations). As such SOFA seems to me an interesting move
> towards standardisation.
>
> What would be useful, though, would be a standard solution for generating
> impulses for ATK from SOFA data.
>
> Best,
> Trond

Two questions:

1. Doesn't ATK support binaural HOA decoders?

2. < 8 > impulses (for 4 virtual speakers) implies that you don't 
support 3D decoders(?). If so, why? (Immersive/3D audio is on the 
requirement list for VR. It wouldn't make much sense if all sound 
sources followed your gaze when looking upwards or downwards.)


Best,

Stefan








Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-26 Thread Joseph Anderson
Hello Stefan,


On Tue, Jan 26, 2016 at 9:36 AM, Stefan Schreiber 
 wrote:

>
> Two questions:
>
> 1. Doesn't ATK support binaural HOA decoders?
>
> 2. < 8 > impulses (for 4 virtual speakers) implies that you don't support
> 3D decoders(?). If so, why? (Immersive/3D audio is on the
> requirement list for VR. It wouldn't make much sense if all sound
> sources followed your gaze when looking upwards or downwards.)
>

Two answers:

1. At the moment, the distributed versions (SuperCollider and JSFX
plug-ins) don't support HOA. That includes binaural HOA decoders.

An HOA version is in development, and we have run some concerts here in
Seattle using some of these features.

2. The FOA ATK HRTF decoders are:

- 3D for the decoders generated from the measured IRCAM Listen and CIPIC
HRTF sets

- 2D for the decoders generated from the simple Duda Spherical Head model

You can read about these decoders in the documentation for the
SuperCollider version of the ATK. Repeated from that
page regarding the Spherical Head model:

With no pinnae and no body, elevation cues are not present.


No pinnae + No body = 2D

You'll need to remember that this is only the final rendering (decoding).
B-format soundfields are 3D; the ATK's Duda model decoder just doesn't
reproduce elevation cues. You can rotate the soundfield with VR head
tracking as you normally would/should.
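
If it helps to see that signal flow, a minimal sketch of the final rendering
step (shapes and names are my assumptions for illustration, not the ATK API):
each B-format channel is convolved with its left-ear and right-ear kernel,
and the results are summed per ear.

    import numpy as np
    from scipy.signal import fftconvolve

    def foa_to_binaural(bformat, kernels):
        # bformat: (4, S) W/X/Y/Z signals; kernels: (4, 2, N) per-channel,
        # per-ear impulse responses (the "8 impulses" mentioned above)
        return np.stack([
            sum(fftconvolve(bformat[c], kernels[c, ear]) for c in range(4))
            for ear in range(2)])                # (2, S + N - 1): left, right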

Clear?


My best,



Joseph Anderson

http://www.ambisonictoolkit.net/


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread umashankar manthravadi
I may be wrong, but about 50 years ago, in Scientific American, I read an 
article about how Bekesy measured the inter-diaphragm spacing of various 
mammals (including an elephant) and found it surprisingly constant. If the 
spacing between diaphragms is constant, would that not simplify the design of 
synthesized HRTFs?
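
(For scale: with a roughly constant spacing of d = 17.5 cm between the ears,
the largest interaural time difference would be about d/c = 0.175 m / 343 m/s
= 0.5 ms - a single fixed ITD budget for the synthesis. My back-of-the-envelope
figures, not Bekesy's.)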

umashankar


From: Bo-Erik Sandholm<mailto:bosses...@gmail.com>
Sent: Monday, January 25, 2016 5:15 PM
To: sursound<mailto:sursound@music.vt.edu>
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Just a short note, my wish list for what I think could be a good way of
doing binaural coding is to use these parameters:

- the distance between the ears (head size) is the most important factor, so
maybe 5 sizes to choose from. (I have a larger inter-ear distance than the
norm)

- use only simple generic compensation for ear shape above ~4kHz.

- the shoulder reflection controlled by head-tracking data; the simplest
way is to assume the listener is stationary and only turns his head. Could
this be implemented as a parametrically controlled filter set?

Can anyone create a binaural encoding using this?

I think the shoulder compensation is something that has not been done.
As far as I know, all binaural encodings are done using data sets with a
fixed head and shoulder position.

Best regards
Bo-Erik Sandholm
Stockholm


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Steven Boardman
We can definitely re-learn/reprogram our perception of stimulus, as long as 
there are other cues to back it up.
This has been proven via glasses that alter the orientation of the visual 
field. After a while the brain adapts, and presents this as normal.
I think that once learned, your brain can remember this too…

https://en.wikipedia.org/wiki/Perceptual_adaptation

Indeed I have found this to be true of different HRTFs too, using my visual 
panning cues to learn the HRTF.
This is even easier with head tracked VR video.

Steve

On 25 Jan 2016, at 12:40, Chris  wrote:

> Maybe a silly question...
> 
> But how much work has been done on the self-consistency of HRTFs? I'm aware 
> that ear-wax, colds, which way round I sleep, etc can affect the level and HF 
> response of one ear to another. And clothing, haircuts etc must significantly 
> change the acoustic signal round our heads.
> 
> So are measured HRTFs consistent over time? Or do we re-calibrate ourselves 
> on a continuous basis?
> 
> If the latter is true, then I can see that a generic HRTF could work if we 
> were given some method (and time for) calibration.
> 
> Chris Woolf
> 
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Dave Malham
At this stage, one has to wonder just how much need there really is for
matching HRTFs. I'm not so convinced these days as I once was, at least if
head-tracking is properly implemented, the few "fixed" parameters are
matched (like inter-ear distances) and there are good visual cues, as there
are in modern VR systems.

Dave

On 25 January 2016 at 12:58, Steven Boardman 
wrote:

> We can definitely re-learn/reprogram our perception of stimulus, as long
> as there are other cues to back it up.
> This has been proven via glasses that alter the orientation of the visual
> field. After a while the brain adapts, and presents this as normal.
> I think that once learned, your brain can remember this too…
>
> https://en.wikipedia.org/wiki/Perceptual_adaptation
>
> Indeed I have found this to be true of different HRTFs too, using my
> visual panning cues to learn the HRTF.
> This is even easier with head tracked VR video.
>
> Steve
>
> On 25 Jan 2016, at 12:40, Chris  wrote:
>
> > Maybe a silly question...
> >
> > But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
> >
> > So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
> >
> > If the latter is true, then I can see that a generic HRTF could work if
> we were given some method (and time for) calibration.
> >
> > Chris Woolf
> >
> > On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread David Pickett

At 13:40 25-01-16, Chris wrote:

> Maybe a silly question...
>
> But how much work has been done on the self-consistency of HRTFs?
> I'm aware that ear-wax, colds, which way round I sleep, etc can
> affect the level and HF response of one ear to another. And
> clothing, haircuts etc must significantly change the acoustic signal
> round our heads.
>
> So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
>
> If the latter is true, then I can see that a generic HRTF could work
> if we were given some method (and time for) calibration.


Forgive me if I have missed something, but I thought the whole point 
was that generic HRTFs don't work -- they don't work for me...


David 




Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Chris

Maybe a silly question...

But how much work has been done on the self-consistency of HRTFs? I'm 
aware that ear-wax, colds, which way round I sleep, etc can affect the 
level and HF response of one ear to another. And clothing, haircuts etc 
must significantly change the acoustic signal round our heads.


So are measured HRTFs consistent over time? Or do we re-calibrate 
ourselves on a continuous basis?


If the latter is true, then I can see that a generic HRTF could work if 
we were given some method (and time for) calibration.


Chris Woolf

On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:

> Just a short note, my wish list for what I think could be a good way of
> doing binaural coding is to use these parameters:
>
> - the distance between the ears (head size) is the most important factor, so
> maybe 5 sizes to choose from. (I have a larger inter-ear distance than the
> norm)
>
> - use only simple generic compensation for ear shape above ~4kHz.
>
> - the shoulder reflection controlled by head-tracking data; the simplest
> way is to assume the listener is stationary and only turns his head. Could
> this be implemented as a parametrically controlled filter set?
>
> Can anyone create a binaural encoding using this?
>
> I think the shoulder compensation is something that has not been done.
> As far as I know, all binaural encodings are done using data sets with a
> fixed head and shoulder position.
>
> Best regards
> Bo-Erik Sandholm
> Stockholm







Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Dave Malham
Chris makes some very good points, ones that I wish I'd made myself! We
must be continuously recalibrating our hearing to be able to deal with all
the effects Chris mentions; otherwise the conflict between the physical
sense of hearing and our internal perceptual models would become
excessive.

   Dave

On 25 January 2016 at 12:40, Chris  wrote:

> Maybe a silly question...
>
> But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
>
> So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
>
> If the latter is true, then I can see that a generic HRTF could work if we
> were given some method (and time for) calibration.
>
> Chris Woolf
>
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>
>> Just a short note, my wish list for what I think could be a good way of
>> doing binaural coding is to use these parameters:
>>
>> - the distance between the ears (head size) is the most important factor, so
>> maybe 5 sizes to choose from. (I have a larger inter-ear distance than the
>> norm)
>>
>> - use only simple generic compensation for ear shape above ~4kHz.
>>
>> - the shoulder reflection controlled by head-tracking data; the simplest
>> way is to assume the listener is stationary and only turns his head. Could
>> this be implemented as a parametrically controlled filter set?
>>
>> Can anyone create a binaural encoding using this?
>>
>> I think the shoulder compensation is something that has not been done.
>> As far as I know, all binaural encodings are done using data sets with a
>> fixed head and shoulder position.
>>
>> Best regards
>> Bo-Erik Sandholm
>> Stockholm



-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Steven Boardman
Me too.

Personally, if all the HRTF databases contained this individual data (i.e. 
head width, height, ear distance from shoulders, and a picture of the pinnae), 
one could select the nearest to one's own, or have a system that throws up a 
few to choose from. Listen for 24 hours, then done…

Steve




On 25 Jan 2016, at 13:15, Dave Malham  wrote:

> At this stage, one has to wonder just how much need there really is for
> matching HRTFs. I'm not so convinced these days as I once was, at least if
> head-tracking is properly implemented, the few "fixed" parameters are
> matched (like inter-ear distances) and there are good visual cues, as there
> are in modern VR systems.
> 
>Dave
> 
> On 25 January 2016 at 12:58, Steven Boardman 
> wrote:
> 
>> We can definitely re-learn/reprogram our perception of stimulus, as long
>> as there are other cues to back it up.
>> This has been proven via glasses that alter the orientation of the visual
>> field. After a while the brain adapts, and presents this as normal.
>> I think that once learned, your brain can remember this too…
>> 
>> https://en.wikipedia.org/wiki/Perceptual_adaptation
>> 
>> Indeed I have found this to be true of different HRTFs too, using my
>> visual panning cues to learn the HRTF.
>> This is even easier with head tracked VR video.
>> 
>> Steve
>> 
>> On 25 Jan 2016, at 12:40, Chris  wrote:
>> 
>>> Maybe a silly question...
>>> 
>>> But how much work has been done on the self-consistency of HRTFs? I'm
>> aware that ear-wax, colds, which way round I sleep, etc can affect the
>> level and HF response of one ear to another. And clothing, haircuts etc
>> must significantly change the acoustic signal round our heads.
>>> 
>>> So are measured HRTFs consistent over time? Or do we re-calibrate
>> ourselves on a continuous basis?
>>> 
>>> If the latter is true, then I can see that a generic HRTF could work if
>> we were given some method (and time for) calibration.
>>> 
>>> Chris Woolf
>>> 
>>> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>> 
> 
> 
> 
> -- 
> 
> As of 1st October 2012, I have retired from the University.
> 
> These are my own views and may or may not be shared by the University
> 
> Dave Malham
> Honorary Fellow, Department of Music
> The University of York
> York YO10 5DD
> UK
> 
> 'Ambisonics - Component Imaging for Audio'


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Bo-Erik Sandholm
Does recording with in-ear mics work for you?
Have you tried listening for a long time with closed eyes or in the dark?
Stereo listening is known to work better in the dark, when sight does
not interfere with the soundscape.
On 25 Jan 2016 15:11, "David Pickett"  wrote:

> At 13:40 25-01-16, Chris wrote:
>
>> Maybe a silly question...
>>
>> But how much work has been done on the self-consistency of HRTFs? I'm
>> aware that ear-wax, colds, which way round I sleep, etc can affect the
>> level and HF response of one ear to another. And clothing, haircuts etc
>> must significantly change the acoustic signal round our heads.
>>
>> So are measured HRTFs consistent over time? Or do we re-calibrate
>> ourselves on a continuous basis?
>>
>> If the latter is true, then I can see that a generic HRTF could work if
>> we were given some method (and time for) calibration.
>>
>
> Forgive me if I have missed something, but I thought the whole point was
> that generic HRTFs don't work -- they don't work for me...
>
> David
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Marc Lavallée

Current consumer-grade VR systems are technically imperfect; we can see
the pixels, there are lens aberrations and color distortions, the field of
view is limited, etc. Paired with imperfect headphones to listen
to imperfect HRTF-rendered content, the experience should be good
enough, because it is new. We may care more about "hi-fi" VR audio in a
few years, because most of the early efforts will be spent on enhancing
the visual experience. So, yes: head-tracked audio is, at this point,
much more important than perfectly matched HRTFs, and it could fit the
bill for a long time.
--
Marc

On Mon, 25 Jan 2016 13:15:10 +,
Dave Malham  wrote :

> At this stage, one has to wonder just how much need there really is
> for matching HRTFs. I'm not so convinced these days as I once was, at
> least if head-tracking is properly implemented, the few "fixed"
> parameters are matched (like inter-ear distances) and there are good
> visual cues, as there are in modern VR systems.
> 
> Dave
> 
> On 25 January 2016 at 12:58, Steven Boardman
>  wrote:
> 
> > We can definitely re-learn/reprogram our perception of stimulus, as
> > long as there are other cues to back it up.
> > This has been proven via glasses that alter the orientation of the
> > visual field. After a while the brain adapts, and presents this as
> > normal. I think that once learned, your brain can remember this too…
> >
> > https://en.wikipedia.org/wiki/Perceptual_adaptation
> >
> > Indeed I have found this to be true of different HRTFs too,
> > using my visual panning cues to learn the HRTF.
> > This is even easier with head tracked VR video.
> >
> > Steve




Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Peter Lennox
Dave - yes - you could have made that point; you were the first person I 
observed to exhibit this 'training period' in that you could hear more detail 
in an ambisonic recording than most, because (I think) of prolonged exposure.

Training periods are known in psychology experimentation, and Kopco and 
Shinn-Cunningham did quite a bit on listening in different rooms, finding 
(amongst other things) that auditory spatial perception showed performance 
improvement over a period (a couple of hours to reach asymptote, I think); 
these improvements carried over to the next day and, oddly, to quite 
different locations in the same room.

I suspect it's the same principle as the 'golden pinnae' experiments, where 
subjects can (after a training period) achieve results with others' pinnae 
equivalent to their own - and, on occasion, better results!

So a small range of well-chosen HRTFs ought to suffice for the majority of the 
population, providing there is opportunity for appropriate training periods.

Isn't Brian Katz doing something on this?
cheers (get back to my marking, and stop prevaricating)

Dr. Peter Lennox
Senior Fellow of the Higher Education Academy
Senior Lecturer in Perception
College of Arts
University of Derby

Tel: 01332 593155

From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Dave Malham 
[dave.mal...@york.ac.uk]
Sent: 25 January 2016 13:04
To: ch...@chriswoolf.co.uk; Surround Sound discussion group
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Chris makes some very good points, ones that I wish I'd made myself! We
must be continuously recalibrating our hearing to be able to deal with all
the effects Chris mentions; otherwise the conflict between the physical
sense of hearing and our internal perceptual models would become
excessive.

   Dave

On 25 January 2016 at 12:40, Chris <ch...@chriswoolf.co.uk> wrote:

> Maybe a silly question...
>
> But how much work has been done on the self-consistency of HRTFs? I'm
> aware that ear-wax, colds, which way round I sleep, etc can affect the
> level and HF response of one ear to another. And clothing, haircuts etc
> must significantly change the acoustic signal round our heads.
>
> So are measured HRTFs consistent over time? Or do we re-calibrate
> ourselves on a continuous basis?
>
> If the latter is true, then I can see that a generic HRTF could work if we
> were given some method (and time for) calibration.
>
> Chris Woolf
>
> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:
>
>> Just a short note, my wish list for what I think could be a good way of
>> doing binaural coding is to use these parameters:
>>
>> - the distance between the ears (head size) is the most important factor, so
>> maybe 5 sizes to choose from. (I have a larger inter-ear distance than the
>> norm)
>>
>> - use only simple generic compensation for ear shape above ~4kHz.
>>
>> - the shoulder reflection controlled by head-tracking data; the simplest
>> way is to assume the listener is stationary and only turns his head. Could
>> this be implemented as a parametrically controlled filter set?
>>
>> Can anyone create a binaural encoding using this?
>>
>> I think the shoulder compensation is something that has not been done.
>> As far as I know, all binaural encodings are done using data sets with a
>> fixed head and shoulder position.
>>
>> Best regards
>> Bo-Erik Sandholm
>> Stockholm



--

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'

Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Brian FG Katz
Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 from the above study
and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings. 

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095. 

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex 
France
Phone. +  33 (0)1 69 85 80 67 - Fax.  +  33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread David Pickett

At 15:55 25-01-16, Bo-Erik Sandholm wrote:

>Does recording with in-ear mics work for you?

Binaural with headphones never really works for me, though sometimes 
fig-8 Blumlein stuff does on ambient noises; but I had an experience 
with in-ear mics once -- c. 1990.  The mics were in somebody else's 
ears, who was sitting in front of me while I conducted an orchestra 
rehearsal.  This was an experiment associated with SynAudCon.  It was 
played back to me through loudspeakers that were either side of 
me.  I had the impression of "being there" and could hear the first 
violins in a line to the left and the seconds in a line to the right, etc.


>Have you tried listening for a long time with closed eyes or in the dark?
>Stereo listening is known to work better in the dark, when sight does
>not interfere with the soundscape.

No. I have no problems with stereo at all, particularly Blumlein fig 
8 stereo.  However, I notice that I often turn my eyes off when 
listening -- particularly when listening for a specific edit point.


David



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Peter Lennox
Told you Brian was doing something in this line.


Brian, can you say more about learning effects / training periods in respect of 
HRTF sets?
Dr. Peter Lennox
Senior Fellow of the Higher Education Academy
Senior Lecturer in Perception
College of Arts
University of Derby

Tel: 01332 593155

From: Sursound [sursound-boun...@music.vt.edu] On Behalf Of Brian FG Katz 
[brian.k...@limsi.fr]
Sent: 25 January 2016 16:45
To: sursound@music.vt.edu
Subject: Re: [Sursound] How to derive a good "universal" HRTF data set?

Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 from the above study
and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings.

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095.

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex
France
Phone. +  33 (0)1 69 85 80 67 - Fax.  +  33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Chris

"I would like to (shamelessly) promote two studies...

Nothing shameful about answering a question very usefully! Thank you.

Chris Woolf


On 25-Jan-16 16:45, Brian FG Katz wrote:

Dear list,

On the topic of creating a reduced set of HRTFs from a large database and on
learning non-individual HRTFs, I would like to (shamelessly) promote two
studies we carried out a few years ago looking at exactly these questions:

B. Katz and G. Parseihian, “Perceptually based head-related transfer
function database optimization,” J. Acoust. Soc. Am., vol. 131, no. 2, pp.
EL99–EL105, 2012, doi:10.1121/1.3672641. (free on-line)
G. Parseihian and B. Katz, “Rapid head-related transfer function
adaptation using a virtual auditory environment,” J. Acoust. Soc. Am., vol.
131, no. 4, pp. 2948–2957, 2012, doi:10.1121/1.3687448.

I can also point you towards a recent direction of interest with regards to
HRTF ratings. In that experiment, in addition to the 7 from the above study
and some other HRTFs, there were also 2 HRTF pairs of the same people,
measured several years apart. The similarity of the ratings of these HRTFs
gives some insight, and we are currently extending this study on general
repeatability of HRTF perceptual ratings.

A. Andreopoulou and B. F. G. Katz, “On the use of subjective HRTF
evaluations for creating global perceptual similarity metrics of assessors
and assessees,” in 21st International Conference on Auditory Display (ICAD),
pp. 13–20, 2015, http://hdl.handle.net/1853/54095.

Best regards,

-Brian FG Katz
--
Brian FG Katz, Ph.D, HDR
Research Director, Resp. Groupe Audio & Acoustique
LIMSI, CNRS, Université Paris-Saclay
Rue John von Neumann
Campus Universitaire d'Orsay, Bât 508
91405 Orsay cedex
France
Phone. +  33 (0)1 69 85 80 67 - Fax.  +  33 (0)1 69 85 80 88
http://www.limsi.fr
web_group: https://www.limsi.fr/fr/recherche/aa
web_theme: http://www.limsi.fr/Scientifique/aa/thmsonesp/








Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Dave Malham wrote:

> At this stage, one has to wonder just how much need there really is for
> matching HRTFs. I'm not so convinced these days as I once was, at least if
> head-tracking is properly implemented, the few "fixed" parameters are
> matched (like inter-ear distances) and there are good visual cues, as there
> are in modern VR systems.
>
>    Dave



But this assumption could actually be investigated/studied, couldn't it?

Best,

Stefan

P.S.: So, the VR companies will solve the questions for which there is 
no university or AES paper?



> On 25 January 2016 at 12:58, Steven Boardman  wrote:
>
>> We can definitely re-learn/reprogram our perception of stimulus, as long
>> as there are other cues to back it up.
>> This has been proven via glasses that alter the orientation of the visual
>> field. After a while the brain adapts, and presents this as normal.
>> I think that once learned, your brain can remember this too…
>>
>> https://en.wikipedia.org/wiki/Perceptual_adaptation
>>
>> Indeed I have found this to be true of different HRTFs too, using my
>> visual panning cues to learn the HRTF.
>> This is even easier with head tracked VR video.
>>
>> Steve
>>
>> On 25 Jan 2016, at 12:40, Chris  wrote:
>>
>>> Maybe a silly question...
>>>
>>> But how much work has been done on the self-consistency of HRTFs? I'm
>>> aware that ear-wax, colds, which way round I sleep, etc can affect the
>>> level and HF response of one ear to another. And clothing, haircuts etc
>>> must significantly change the acoustic signal round our heads.
>>>
>>> So are measured HRTFs consistent over time? Or do we re-calibrate
>>> ourselves on a continuous basis?
>>>
>>> If the latter is true, then I can see that a generic HRTF could work if
>>> we were given some method (and time for) calibration.
>>>
>>> Chris Woolf
>>>
>>> On 25-Jan-16 11:45, Bo-Erik Sandholm wrote:





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Marc Lavallée wrote:

> Current consumer-grade VR systems are technically imperfect; we can see
> the pixels, there are lens aberrations and color distortions, the field of
> view is limited, etc. Paired with imperfect headphones to listen
> to imperfect HRTF-rendered content, the experience should be good
> enough, because it is new. We may care more about "hi-fi" VR audio in a
> few years, because most of the early efforts will be spent on enhancing
> the visual experience. So, yes: head-tracked audio is, at this point,
> much more important than perfectly matched HRTFs, and it could fit the
> bill for a long time.
> --
> Marc



But Sennheiser's aim is also to present surround music via headphones, 
so we still need quality solutions...


St.


> On Mon, 25 Jan 2016 13:15:10 +,
> Dave Malham  wrote :
>
>> At this stage, one has to wonder just how much need there really is
>> for matching HRTFs. I'm not so convinced these days as I once was, at
>> least if head-tracking is properly implemented, the few "fixed"
>> parameters are matched (like inter-ear distances) and there are good
>> visual cues, as there are in modern VR systems.
>>
>>    Dave
>>
>> On 25 January 2016 at 12:58, Steven Boardman
>>  wrote:
>>
>>> We can definitely re-learn/reprogram our perception of stimulus, as
>>> long as there are other cues to back it up.
>>> This has been proven via glasses that alter the orientation of the
>>> visual field. After a while the brain adapts, and presents this as
>>> normal. I think that once learned, your brain can remember this too…
>>>
>>> https://en.wikipedia.org/wiki/Perceptual_adaptation
>>>
>>> Indeed I have found this to be true of different HRTFs too,
>>> using my visual panning cues to learn the HRTF.
>>> This is even easier with head tracked VR video.
>>>
>>> Steve



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Joseph Anderson
Hi All,

Just an echo... the ATK web page is down at the moment. (Has been since
before Christmas. Argh!)

The problem appears to be something just a bit more than 'trivial'. We're
working on it, and hopefully will have the site live soon. Apologies to
everyone for the inconvenience!


My kind regards,


Joseph Anderson

http://www.ambisonictoolkit.net/


On Sun, Jan 24, 2016 at 5:24 PM, Marc Lavallée  wrote:

> On Mon, 25 Jan 2016 01:04:30 +,
> Stefan Schreiber  wrote :
>
> > >ATK provides HRTF data as wav files, ready to be used with
> > >convolvers. It's a "ready to use", application-specific format.
> > >
> > In which (angular) resolution, etc.? Azimuth/elevation resolution?
>
> The wav files are impulse response files, to be used with convolvers
> in order to render FOA streams to binaural. That's all I know.
>
> Unfortunately, http://www.ambisonictoolkit.net/ is down.
>
> --
> Marc
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Bo-Erik Sandholm
Just a short note, my wish list for what I think could be a good way of
doing binaural coding is to use these parameters:

- the distance between the ears (head size) is the most important factor, so
maybe 5 sizes to choose from. (I have a larger inter-ear distance than the
norm)

- use only simple generic compensation for ear shape above ~4kHz.

- the shoulder reflection controlled by head-tracking data; the simplest
way is to assume the listener is stationary and only turns his head. Could
this be implemented as a parametrically controlled filter set?

Can anyone create a binaural encoding using this?
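
A rough numpy sketch of one possible starting point (the ITD is the standard
Woodworth spherical-head formula; the head-shadow filter and the shoulder-echo
geometry are crude assumptions of mine, just to make the idea concrete):

    import numpy as np
    from scipy.signal import lfilter

    C = 343.0                                    # speed of sound, m/s

    def render(mono, fs, az_deg, head_radius=0.0875, shoulder_gain=0.2):
        az = np.radians(az_deg)                  # positive azimuth = source to the left
        # Woodworth spherical-head ITD: (a/c) * (theta + sin(theta))
        lag = int(round((head_radius / C) * (abs(az) + np.sin(abs(az))) * fs))
        near = np.concatenate([mono, np.zeros(lag)])
        far = np.concatenate([np.zeros(lag), mono])
        # crude head shadow on the far ear: first-order low-pass near 4 kHz,
        # standing in for "simple generic compensation for ear shape above ~4kHz"
        b = np.exp(-2.0 * np.pi * 4000.0 / fs)
        far = lfilter([1.0 - b], [1.0, -b], far)
        left, right = (near, far) if az >= 0 else (far, near)
        out = np.stack([left, right])
        # one shoulder reflection; with a fixed torso and a turning head, the
        # path length (hence this lag) would be steered by head-tracker azimuth
        sh = int(round(0.25 * (1.0 + 0.3 * np.cos(az)) / C * fs))  # assumed geometry
        out[:, sh:] += shoulder_gain * out[:, :-sh].copy()
        return out                               # (2, len(mono) + lag)

The "5 head sizes" would then just be a small table of head_radius values to
choose from.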

I think the shoulder compensation is something that has not been done.
As far as I know, all binaural encodings are done using data sets with a
fixed head and shoulder position.

Best regards
Bo-Erik Sandholm
Stockholm


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-25 Thread Stefan Schreiber

Augustine Leudar wrote:

> As far as I know the Kemar head is the average human head shape taken from
> thousands of measurements - http://kemar.us/   I'm not quite sure why an HRTF
> from a Kemar head would be less desirable than averaging of 50 separate
> head measurements in a real human head. I don't know how well Kemar averages
> the delicate folds of the pinna though.


Interesting thoughts...

It could very well be that HRTFs correspond (in a rough form) to some 
simple anthropometric features, such as head size ("sphere diameter"), 
ear distance, etc. You could argue that ITD and ILD values would depend 
on these parameters in a fairly linear way.


The pinna influence on HRTFs should be more chaotic, in my view. 
(Resonances and "shadows", for example.)


In any case a binaural decoder has to use HRTFs, which will depend on 
anthropometric features - but in a complicated way. It is therefore a 
better approach to average an HRTF base (like Amber HRTF does), and not 
to average across some anthropometric base (KEMAR mannequin).
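
To make "average an HRTF base" concrete, one plausible recipe (my assumption -
not necessarily what Blue Ripple actually did for Amber) is: average the
log-magnitude spectra of the measured HRIRs per direction and ear, rebuild a
minimum-phase impulse response from the averaged magnitude, and reapply a
representative (e.g. median) ITD as a pure delay. A numpy sketch:

    import numpy as np

    def min_phase(mag, n):
        # minimum-phase IR from a magnitude spectrum, via the real cepstrum
        cep = np.fft.irfft(np.log(np.maximum(mag, 1e-9)), n)
        w = np.zeros(n)                          # cepstrum folding window (n even)
        w[0], w[1:n // 2], w[n // 2] = 1.0, 2.0, 1.0
        return np.fft.irfft(np.exp(np.fft.rfft(w * cep)), n)

    def average_hrir(irs):
        # irs: (subjects, N) time-aligned HRIRs for one direction and one ear;
        # ITDs are assumed removed beforehand and reapplied afterwards
        n = irs.shape[-1]
        logmag = np.log(np.maximum(np.abs(np.fft.rfft(irs, axis=-1)), 1e-9))
        return min_phase(np.exp(logmag.mean(axis=0)), n)

Averaging magnitudes (rather than complex responses) avoids the comb-filter
artefacts that phase differences between subjects would otherwise produce.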


In the end you could compare Amber HRTFs (or similarly derived 
universal HRTF sets...) directly to KEMAR HRTFs, in practical 
performance tests.

For the reasons given I would assume that MIT's KEMAR model (as a 
< generic reference model >) is finally beaten.


Best regards,

Stefan

P.S.: The only thing missing now is to present some form of (open-source) 
Amber HRTF set. See the first posting of this thread...





> On 24 January 2016 at 21:47, Marc Lavallée  wrote:
>
>> Le Sun, 24 Jan 2016 16:35:03 -0500, I wrote:
>>
>>> There could be a way to find a "best match" from sets of measured
>>> heads, maybe using 3D scanning, or with simple measurements. For
>>> example, the spherical HRTF models have different diameters, which is a
>>> good start.
>>>
>>> There's a paper on "Rapid Generation of Personalized HRTFs":
>>> http://www.aes.org/e-lib/browse.cfm?elib=17365
>>
>> A quick googling on "hrtf 3d scanning" returns more interesting results:
>>
>> http://www.technologyreview.com/news/527826/microsofts-3-d-audio-gives-virtual-objects-a-voice/
>> http://dsp.eng.fiu.edu/HRTFDB/main.htm
>> http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/169.pdf
>> http://www.computer.org/csdl/proceedings/icisce/2015/6850/00/6850a225.pdf
>> https://hal.inria.fr/inria-00606814/document
>> http://www-users.york.ac.uk/~ait1/head.htm
>>
>> --
>> Marc





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

Marc Lavallée wrote:

> Formats like 5.1 to 22.2 are application specific, so I think they
> should not be part of a standard.

No, but any 5.1 to binaural decoder (application, as you say) would have 
to use some HRTF data brought into some specific shape/format. You would 
have to match loudspeakers/rotational data to HRTF positions/data.

> ATK provides HRTF data as wav files, ready to be used with
> convolvers. It's a "ready to use", application-specific format.

In which (angular) resolution, etc.? Azimuth/elevation resolution?

> Creating *.* to binaural renderers
> are special cases that can use the SOFA specifications and data.
> Adding *.* formats to the SOFA specifications would simply add to the
> confusion about "spatial audio".

No, I meant really an HRTF standard (such as SOFA/AES-69).

Citing myself:

> Does anyone want to write down such a standard (maybe in RFC form),
> which would be usable for every type of binaural decoder? (say also
> 5.1, 9.1, 7+4H and 22.2 to binaural decoders)

Decoder != HRTF interface/standard.

I also didn't try to teach informatics, BTW.

Best,

Stefan


 


> On Sun, 24 Jan 2016 19:31:33 +,
> Stefan Schreiber  wrote :
>
>> http://www.blueripplesound.com/hrtf-amber
>>
>>> The IRCAM AKG "Listen" HRTF data contains measured HRTFs from about
>>> 50 different people - this must have taken a lot of effort and
>>> we're very grateful to the good folk of IRCAM for doing the work
>>> and making the results available to the world! What we've done is
>>> analyse this data and come up with an 'average' HRTF that is a
>>> sensible compromise, using some new work. As it's an average, it
>>> wouldn't be perfect for any of the people actually measured, but
>>> hopefully not awful for any of them either! It's certainly much
>>> better than conventional "panning" techniques.
>>
>> (See also:
>> http://www.blueripplesound.com/personalized-hrtfs )
>>
>>> We provide "generic" HRTF models (for instance, our Amber HRTF)
>>> which work well for many people, but even better results can be
>>> achieved using personalized HRTF measurements.
>>
>> Could any people, companies or institutions on this list provide
>> access to such a practical and < usable > generic HRTF model?
>>
>> If not: I believe that some essential theses and papers should have
>> been done in the academic world, but don't exist anyway.
>>
>> Richard Furse basically states that a "good" generic HRTF is
>> derived from many HRTF measurements (data sets) via some form of
>> averaging, as a "sensible compromise". I doubt that this is a
>> trivial process, though...
>>
>> Best regards,
>>
>> Stefan
>>
>> P.S.: VR companies will currently have to look into these issues,
>> and to find solutions which are practical at least < for most >
>> people. If some proposed HRTF data set doesn't fit an individual
>> listener it should be pretty hard to distinguish between front/back
>> sources, for example. (Even with head-tracking.)
>>
>> Don't tell me that I didn't present a paper to prove my point...
>> Instead, give me the link to a paper which delivers some kind of
>> optimized generic HRTF data set. If such a paper doesn't exist
>> (yet), I don't see any reason why something like "Amber HRTF" can't
>> be re-engineered.
>> (Amber HRTF itself is derived from the IRCAM AKG "Listen" HRTF data,
>> a publicly available set. And even IRCAM should be interested to
>> provide a good universal HRTF based on its own and public HRTF
>> research!)



___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Augustine Leudar
As far as I know the KEMAR head is the average human head shape taken from
thousands of measurements - http://kemar.us/   I'm not quite sure why an
HRTF from a KEMAR head would be less desirable than averaging 50 separate
measurements of real human heads. I don't know how well KEMAR captures the
delicate folds of the pinna, though.

-- 
www.augustineleudar.com

Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Marc Lavallée
On Sun, 24 Jan 2016 23:57:37, Stefan Schreiber wrote:

> Marc, just some idea:
> 
> If the ATK includes different HRTF sets, doesn't it already provide
> an open interface/standard for exchangeable HRTF data?

ATK provides HRTF data as wav files, ready to be used with
convolvers. It's a "ready to use", application-specific format.

> The "competition" would be AES-69, a non-open standard provided by
> AES and BiLi project. (They needed a looong time to agree on AES-69.)

It's not a case of competition; the AES-69 format can be used to
create application-specific formats (or, in web-developer parlance,
"micro-formats"). For example, on ambisonic.xyz, I had to
change the ATK files a little bit.
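Repackaging such data is only a few lines, since AES69 ".sofa" files are
netCDF-4 containers. A sketch using the Python netCDF4 package (the file
name is hypothetical; a SimpleFreeFieldHRIR set is assumed):

    import numpy as np
    from netCDF4 import Dataset  # SOFA files are netCDF-4 containers

    # Hypothetical file; any SimpleFreeFieldHRIR set should work.
    with Dataset("subject_003.sofa", "r") as sofa:
        fs  = float(np.ravel(sofa.variables["Data.SamplingRate"][:])[0])
        irs = np.asarray(sofa.variables["Data.IR"][:])        # (M, R, N)
        pos = np.asarray(sofa.variables["SourcePosition"][:]) # (M, C)

    print(f"{irs.shape[0]} measurements, {irs.shape[1]} receivers, "
          f"{irs.shape[2]} taps at {fs:.0f} Hz")

    # Application-specific subset: keep horizontal-plane measurements only.
    horizontal = np.abs(pos[:, 1]) < 1e-6   # column 1 = elevation (degrees)
    subset_irs, subset_pos = irs[horizontal], pos[horizontal]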

> As anything simpler but functional might be sufficient and even 
> preferable in most cases:
> 
> - Does ATK define an HRTF interface which is sufficiently flexible to
> be the base for a real < standard > ?

Not really, but you should ask the maintainers of ATK.

> - Does anyone want to write down such a standard (maybe in RFC form), 
> which would be usable for every type of binaural decoder? (say also
> 5.1, 9.1, 7+4H and 22.2 to binaural decoders)

Formats like 5.1 to 22.2 are application specific, so I think they
should not be part of a standard. Creating *.*-to-binaural renderers
is a special case that can use the SOFA specifications and data.
Adding *.* formats to the SOFA specifications would simply add to the
confusion about "spatial audio".



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Marc Lavallée
On Mon, 25 Jan 2016 01:04:30, Stefan Schreiber wrote:

> > ATK provides HRTF data as wav files, ready to be used with
> > convolvers. It's a "ready to use", application-specific format.
>
> In which (angular) resolution, etc.? Azimuth/elevation resolution?

The wav files are impulse response files, to be used with convolvers
in order to render FOA streams to binaural. That's all I know.
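The convolution step itself is simple, though. A minimal sketch, assuming
one left/right IR pair per B-format channel (eight mono IRs in total) and
using noise as a stand-in for real signals and filters:

    import numpy as np
    from scipy.signal import fftconvolve

    fs, n_taps = 48000, 256
    foa = np.random.randn(4, fs)           # W, X, Y, Z (stand-in data)
    hrir = np.random.randn(4, 2, n_taps)   # per channel: L/R IRs (stand-in)

    # Convolve each B-format channel with its L/R IR pair and sum per ear.
    out = np.zeros((2, foa.shape[1] + n_taps - 1))
    for ch in range(4):        # W, X, Y, Z
        for ear in range(2):   # left, right
            out[ear] += fftconvolve(foa[ch], hrir[ch, ear])

    out /= np.max(np.abs(out))  # crude normalization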

Unfortunately, http://www.ambisonictoolkit.net/ is down.

--
Marc


[Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

http://www.blueripplesound.com/hrtf-amber

> The IRCAM AKG "Listen" HRTF data contains measured HRTFs from about 50
> different people - this must have taken a lot of effort and we're very
> grateful to the good folk of IRCAM for doing the work and making the
> results available to the world! What we've done is analyse this data
> and come up with an 'average' HRTF that is a sensible compromise,
> using some new work. As it's an average, it wouldn't be perfect for
> any of the people actually measured, but hopefully not awful for any
> of them either! It's certainly much better than conventional "panning"
> techniques.



(See also:

http://www.blueripplesound.com/personalized-hrtfs
)

> We provide "generic" HRTF models (for instance, our Amber HRTF
> <http://www.blueripplesound.com/hrtf-amber>) which work well for many
> people, but even better results can be achieved using personalized
> HRTF measurements.



Could any people, companies or institutions on this list provide access 
to such a practical and < usable > generic HRTF model?


If not: I believe that some essential theses and papers should have been
done in the academic world, but apparently don't exist.


Richard Furse basically states that a "good" generic HRTF is derived
from many HRTF measurements (data sets) via some form of averaging, as
a "sensible compromise". I doubt that this is a trivial process, though...
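One reason it isn't trivial: a naive time-domain average largely cancels
itself out, because the subjects' delays don't line up; averaging
magnitude spectra per direction and modelling phase separately is the
more plausible route - and the hard part. A toy illustration with
synthetic data (emphatically not Blue Ripple's actual method):

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_taps = 50, 256
    # Stand-in: 50 subjects' left-ear HRIRs for ONE direction, each with
    # a slightly different time of arrival.
    hrirs = np.zeros((n_subjects, n_taps))
    for s in range(n_subjects):
        delay = 20 + rng.integers(-5, 6)   # inter-subject delay jitter
        hrirs[s, delay:delay + 64] = rng.standard_normal(64)

    # Naive time-domain mean: phase misalignment smears the response.
    naive = hrirs.mean(axis=0)

    # Better: average magnitude spectra; phase (e.g. minimum phase plus
    # a per-direction delay) has to be modelled separately.
    mag = np.abs(np.fft.rfft(hrirs, axis=1)).mean(axis=0)

    print("naive mean peak:", np.abs(naive).max())
    print("typical single-subject peak:", np.abs(hrirs).max(axis=1).mean())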


Best regards,

Stefan


P.S.: VR companies will currently have to look into these issues, and
to find solutions which are practical at least < for most > people. If
some proposed HRTF data set doesn't fit an individual listener it
should be pretty hard to distinguish between front/back sources, for
example. (Even with head-tracking.)


Don't tell me that I didn't present a paper to prove my point...
Instead, give me the link to a paper which delivers some kind of
optimized generic HRTF data set. If such a paper doesn't exist (yet), I
don't see any reason why something like the "Amber HRTF" can't be
re-engineered.
(Amber HRTF itself is derived from the IRCAM AKG "Listen" HRTF data, a
publicly available set. And even IRCAM should be interested in providing
a good universal HRTF based on its own and public HRTF research!)



Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Marc Lavallée
On Sun, 24 Jan 2016 16:35:03 -0500, I wrote:
> There could be a way to find a "best match" from sets of measured
> heads, maybe using 3D scanning, or with simple measurements. For
> example, the spherical HRTF models have different diameters, which is
> a good start.
> 
> There's a paper on "Rapid Generation of Personalized HRTFs" :
> http://www.aes.org/e-lib/browse.cfm?elib=17365

A quick Google search for "hrtf 3d scanning" returns more interesting results:

http://www.technologyreview.com/news/527826/microsofts-3-d-audio-gives-virtual-objects-a-voice/

http://dsp.eng.fiu.edu/HRTFDB/main.htm

http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/169.pdf

http://www.computer.org/csdl/proceedings/icisce/2015/6850/00/6850a225.pdf

https://hal.inria.fr/inria-00606814/document

http://www-users.york.ac.uk/~ait1/head.htm

--
Marc


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Augustine Leudar
In reply to your original question - I believe there are available data
sets for KEMAR head measurements - not quite what you're asking for, but
it is supposed to be an "average" head:

http://sound.media.mit.edu/resources/KEMAR.html

-- 
www.augustineleudar.com


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

Augustine Leudar wrote:


> In reply to your original question - I believe there are available
> data sets for KEMAR head measurements - not quite what you're asking
> for, but it is supposed to be an "average" head:
>
> http://sound.media.mit.edu/resources/KEMAR.html

I was aware of the fact that dummy-head HRTFs are often used as generic
HRTFs.


But are they more < objective > than the HRTFs of any real person?

If I understood Richard Furse well, it is better to derive a generic
HRTF set by averaging across some HRTF measurement database (containing
HRTFs of many individuals).


Whereas KEMAR is supposed to be a standard (= averaged) dummy head, the
Amber HRTF is derived in a different way.


Should we "normalize" anthropometric data or (measured) HRTF data sets?

In any case: Methods and results will differ!
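Even in the crudest spherical model the two averages disagree: quantities
like the head-shadow corner frequency scale as 1/radius, so the response
of the averaged head is not the average of the responses (Jensen's
inequality). A small illustration with made-up radii:

    import numpy as np

    c = 343.0                               # speed of sound, m/s
    radii = np.array([0.075, 0.085, 0.095]) # made-up head radii, m
    fc = c / (2 * np.pi * radii)            # rough shadow corner (ka = 1)

    print("f_c of the averaged head:", c / (2 * np.pi * radii.mean()))  # ~642 Hz
    print("average of the f_c values:", fc.mean())                      # ~648 Hz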

Best,

St.

P.S.: Alas, Richard Furse didn't use KEMAR measurements as standard
HRTF... :-)








Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Marc Lavallée
On Sun, 24 Jan 2016 21:00:37, Stefan Schreiber wrote:

> Should we "normalize" anthropometric data or (measured) HRTF data
> sets?

That would discriminate against potential users at the tails of the
normal curve...

There could be a way to find a "best match" from sets of measured
heads, maybe using 3D scanning, or with simple measurements. For
example, the spherical HRTF models come in different diameters, which is
a good start.
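A minimal sketch of that kind of matching (the catalogue of sphere
diameters is hypothetical; the classic Woodworth rigid-sphere formula
shows what the diameter controls):

    import numpy as np

    c = 343.0  # speed of sound, m/s

    def woodworth_itd(radius_m, az_deg):
        """Rigid-sphere ITD: a/c * (sin(az) + az), az in [0, 90] degrees."""
        az = np.radians(az_deg)
        return radius_m / c * (np.sin(az) + az)

    # Hypothetical catalogue of spherical HRTF sets, by head diameter (m).
    available = [0.14, 0.16, 0.18]

    measured_width = 0.155  # a simple caliper measurement of the user, m
    best = min(available, key=lambda d: abs(d - measured_width))
    print(f"best-match set: {best * 100:.0f} cm sphere")
    print(f"ITD at 90 degrees: {woodworth_itd(best / 2, 90) * 1e6:.0f} us")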

There's a paper on "Rapid Generation of Personalized HRTFs":
http://www.aes.org/e-lib/browse.cfm?elib=17365
(unfortunately, it's not free).

--
Marc 


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Augustine Leudar
I have an idea of how HRTFs could be individually measured that could
work on a commercial scale with minimum inconvenience to the public (so
they might actually use it). Not sure who to talk to about this, though.

-- 
www.augustineleudar.com


Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

Marc Lavallée wrote:


> The LISTEN set from IRCAM, the KEMAR set from MIT and the spherical set
> by R. Duda are included in the Ambisonic Toolkit. I use them on
> http://ambisonic.xyz/ . The spherical set is probably a good enough
> compromise for VR applications, because perfection is not required for
> a good experience. What seems to be missing is a practical method to
> provide personalized HRTFs to users.


This is not completely logical.


> The spherical set is probably a good enough
> compromise for VR applications, because perfection is not required for
> a good experience.

vs.

> What seems to be missing is a practical method to
> provide personalized HRTFs to users.



If the 2nd option would be ideal (I very obviously do agree!), you could
also try to improve on "good enough" (the 1st option).


Best,

Stefan





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

Marc Lavallée wrote:


> On Sun, 24 Jan 2016 21:00:37, Stefan Schreiber wrote:
>
> > Should we "normalize" anthropometric data or (measured) HRTF data
> > sets?
>
> That would discriminate against potential users at the tails of the
> normal curve...
>
> There could be a way to find a "best match" from sets of measured
> heads, maybe using 3D scanning, or with simple measurements. For
> example, the spherical HRTF models come in different diameters, which
> is a good start.


Yes, I wrote about this.

Actually several times, and quite some time before "Oculus Audio".

(See http://www.vrvis.at/projects/locaphoto, an "older" but ongoing
project which I had mentioned.)


But please, let's not mix up things again.

The thread subject is clear enough! What happens if you can't 
personalise HRTF data, which is (from any practical point of view) still 
the standard case?


(So, just a suggestion: Combine a "good universal HRTF data set" with
the possibility to load another < closely matched > or even
< personalised > HRTF set, maybe in the AES-69 format. If you provide
such a solution you will be future-proof. I hope this makes some
sense... ;-) )
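In code that suggestion is just a lookup order: try a personalised (or
closely matched) SOFA file first, then fall back to the bundled generic
set. A sketch with hypothetical paths (loading itself could reuse the
netCDF4 snippet earlier in the thread):

    from pathlib import Path

    # Hypothetical locations, in order of preference.
    CANDIDATES = [
        Path.home() / ".config" / "renderer" / "personal.sofa",  # personalised
        Path.home() / ".config" / "renderer" / "matched.sofa",   # close match
        Path("data") / "generic_default.sofa",                   # generic set
    ]

    def pick_hrtf_set():
        """First existing set wins: personal > matched > generic."""
        for path in CANDIDATES:
            if path.exists():
                return path
        raise FileNotFoundError("no HRTF set found, not even the generic one")

    print("using:", pick_hrtf_set())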


Best,

Stefan





Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Marc Lavallée

You are right, the SOFA effort is important.
Here I see that many more data sets are available now:
http://www.sofaconventions.org/mediawiki/index.php/Files
I suspect that there will be many different solutions based on
the available sets.

I don't think there could be a "universal" or optimal HRTF data set,
but I'm not a scientist, so maybe such a thing could exist. I don't
know if the publicly available data sets are good enough, if many more
measurements are required, or if a simple matching method could help
listeners find the best one. Or whether HRTF synthesis (LocaPhoto) has
a future.

But I'm sure that we can use existing HRTFs and adapt the experience in
creative ways, as with any other audio technology, because audio
producers never waited for the ultimate technology to appear and be
universally adopted.

--
Marc




Re: [Sursound] How to derive a good "universal" HRTF data set?

2016-01-24 Thread Stefan Schreiber

Marc Lavallée wrote:


> The LISTEN set from IRCAM, the KEMAR set from MIT and the spherical set
> by R. Duda are included in the Ambisonic Toolkit. I use them on
> http://ambisonic.xyz/ . The spherical set is probably a good enough
> compromise for VR applications, because perfection is not required for
> a good experience. What seems to be missing is a practical method to
> provide personalized HRTFs to users.



Marc, just some idea:

If the ATK includes different HRTF sets, doesn't it already provide an
open interface/standard for exchangeable HRTF data?

The "competition" would be AES-69, a non-open standard provided by the
AES and the BiLi project. (They needed a looong time to agree on AES-69.)


As anything simpler but functional might be sufficient and even
preferable in most cases:

- Does ATK define an HRTF interface which is sufficiently flexible to be
the base for a real < standard >?

- Does anyone want to write down such a standard (maybe in RFC form),
which would be usable for every type of binaural decoder? (say also 5.1,
9.1, 7+4H and 22.2 to binaural decoders)


Best,

Stefan





