[Sursound] Natural sound source to calibrate playback level

2014-11-25 Thread Kees de Visser
Dear list,

I'm looking for easy-to-use natural sound sources for calibrating a playback 
system, especially headphones and in-ear monitors, without specialized 
equipment.
This website uses a neat trick, but I wonder if there are more precise ways:

http://www.audiocheck.net/testtones_hearingtestaudiogram.php

"rub your hands together, in front of your nose, quickly and firmly, and try 
producing the same sound as our calibration file. You are now generating a 
reference sound that is approximately 65 dBSPL. As you play back our 
calibration file, adjust your computer's volume to match the sound level you 
just heard from your hands. Proceed back and forth - preferably with your eyes 
closed, to increase concentration - until both levels match. Then, do not touch 
your computer's volume knob anymore. Calibration is done: your computer's 
volume knob has been set to match 65 dBSPL. This procedure should give us a 
confidence of approximately 10 dBHL in the next hearing test."
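
For reference, here is a minimal sketch (mine, not audiocheck.net's actual 
file) of how one might generate such a calibration signal at a known digital 
level; the -20 dBFS peak level, duration, and file name are assumptions:

    # Hedged sketch: generate a calibration noise file at an assumed
    # -20 dBFS peak level. Once its loudness has been matched to the
    # ~65 dB SPL hand-rub reference, material peaking at the same dBFS
    # should play back at roughly the same SPL on that system.
    import numpy as np
    from scipy.io import wavfile

    FS = 44100            # sample rate, Hz
    LEVEL_DBFS = -20.0    # assumed peak level of the calibration signal
    DURATION_S = 5.0

    noise = np.random.default_rng(0).standard_normal(int(FS * DURATION_S))
    noise /= np.max(np.abs(noise))            # normalize to full scale
    noise *= 10.0 ** (LEVEL_DBFS / 20.0)      # scale peak to -20 dBFS

    wavfile.write("calibration_noise.wav", FS, noise.astype(np.float32))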

Thanks,
Kees de Visser

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio Available?

2014-11-25 Thread Sampo Syreeni

On 2014-11-19, dw wrote:

There are numerous examples where the predictions of HRTF localisation 
are falsified by observations. What is one to think of the science?


So now you'd need to define what you mean by HRTFs. I at least take it to 
mean "the full, static, anechoic impulse response from a certain source 
to your brain". That is, my idea of an HRTF is the full, optimal, 
linearly and time-invariantly modelable subset, in the L^2 norm, of any 
and all phenomena in both time and space/angle that our two ears appear 
to be able to hear.


So how could it be falsified? Tell me. At least as far as all of the 
linear acoustics happening around the head, pinna, shoulders, and ear 
canal goes, it's a tautology that a full set of HRTFs captures it all.
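
To make the operational content of that concrete: under the LTI reading, 
binauralization is nothing but convolution. A minimal sketch, with toy 
stand-ins in place of a measured HRIR pair:

    # Hedged sketch: the LTI reading of an HRTF means binauralizing a
    # mono source is a convolution with the left/right head-related
    # impulse responses. These HRIRs are toy stand-ins, not measured data.
    import numpy as np
    from scipy.signal import fftconvolve

    hrir_l = np.zeros(256); hrir_l[10] = 1.0   # stand-in: short delay
    hrir_r = np.zeros(256); hrir_r[40] = 0.7   # stand-in: longer, quieter

    mono = np.random.default_rng(1).standard_normal(48000)  # 1 s of noise
    left = fftconvolve(mono, hrir_l)    # everything the linear acoustics
    right = fftconvolve(mono, hrir_r)   # of head/pinna/canal do lives here
    binaural = np.stack([left, right], axis=1)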


So what are you talking about, really? Something other than linear 
acoustics governed by the usual wave equation, for sure. But what, 
precisely? I'd really like to hear.

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio Available?

2014-11-25 Thread Sampo Syreeni

On 2014-11-21, dw wrote:

The state-of-the-art finds it very difficult to render sounds below 
the listener.


True. But then, at the same time, have you ever truly heard sounds from 
right below yourself? Does even the human auditory system *really* know 
what it means to "hear something from below"?


Think about it for a while. In the psychoacoustic sense there actually 
might not even *be* such a thing as "due below".

--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio Available?

2014-11-25 Thread Sampo Syreeni

On 2014-11-12, Adam Somers wrote:

VR video (or as we call it, Cinematic VR) is in some ways the perfect 
use-case for ambisonics.  This year we've created hundreds, if not 
thousands, of b-format recordings with accompanying 360º 3D video.


It really is, because of the basic, most old-fashioned ambisonic 
principle: fully uncompromised isotropy in encoding. (Note, I'm not 
saying anything about decoding.) In that respect the technology fits 
*abominably* well with stereoscopy, and especially with looking around 
from a fixed viewpoint, in optics.


Then what ambisonics might *not* be so good at is virtual environments 
where you move about. That's because of the centred, angularly 
parametrized framework, which pretty much only lends itself to a fixed 
"view"point into the acoustic field.


You can then make it work in synthetic and even recorded-and-recreated 
acoustic environments, but not by direct recording and playback; you 
have to do something extra in between. You have to somehow abstract away 
from your B-format recording, so that the auditory cues still match: 
reverberation falloff, the auditory parallax of close sources as you 
move, and the mutual, directional correlation coefficients of whatever 
you perceive as being part of the space and envelopment.
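
To pin down one such cue, here's a minimal sketch of that beam-to-beam 
correlation, pulled from horizontal B-format with the standard 
virtual-cardioid trick (FuMa normalization assumed):

    # Hedged sketch: correlation between two virtual cardioids derived
    # from horizontal B-format (FuMa normalization assumed).
    import numpy as np

    def virtual_cardioid(W, X, Y, az):
        # first-order virtual cardioid steered to azimuth az (radians)
        return 0.5 * (np.sqrt(2.0) * W + np.cos(az) * X + np.sin(az) * Y)

    def beam_correlation(W, X, Y, az1, az2):
        a = virtual_cardioid(W, X, Y, az1)
        b = virtual_cardioid(W, X, Y, az2)
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))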


Ville's and Archontis's work can't yet do everything of the sort. But 
still, given how nice their demos sound and how they do that said kind 
of abstraction along the way, I'd say they are at the forefront of the 
stuff which could eventually become a Gaming/Movie/Department-store 
miracle development.


Still, I've yet to find a solution for b-to-binaural which is as 
convincing as some of the BRIR-based object-sound spatialization 
packages (e.g. DTS HeadphoneX and Visisonics Realspace).


I believe I know where the problem is, or at least I believe I can 
participate meaningfully in a process which leads to a nigh-optimal 
solution.


And this one I do mean, for real. I have some real ideas here, my only 
problem being that I'm lazy, poor, already well underway into hard 
alcoholism... and short of hands who'd take my ideas seriously. The 
spherical-surface-harmonic kind of ideas.


Just give me the usual doctorand, starved for life and a scholarship, 
even on-list, and I'll tell you how B-to-binaural is done. If not as a 
final solution, then as a bunch of processes and guidelines. À la Gerzon 
Himself. :D


I think what's primarily lacking is externalization, which perhaps can 
be 'faked' with BRIRs.


Ville Pulkki's work with DirAC, and the two demonstrations by him and 
his workgroup, have me convinced that even fourth-order ambisonics 
leaves too much artificial correlation in the soundfield, at the scale 
of a human head, to sound natural. That then also means that you can't 
just naïvely, linearly, statically matrix down from any extant order of 
(periphonic or otherwise) ambisonics to binaural, even with full head 
tracking, and expect it to sound as good as the best object-panning 
format.
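
To be concrete about what "statically matrix down" means, here is a 
sketch of that naive pipeline, with an assumed square of virtual 
loudspeakers and placeholder HRIRs:

    # Hedged sketch of a naive, static, linear B-format-to-binaural
    # decode. The decode matrix and HRIRs are illustrative placeholders.
    import numpy as np
    from scipy.signal import fftconvolve

    az = np.radians([0.0, 90.0, 180.0, 270.0])  # virtual speaker azimuths
    # one row per speaker: s_i = W/sqrt(2) + X*cos(az_i) + Y*sin(az_i)
    D = np.stack([np.full(4, 1.0 / np.sqrt(2.0)), np.cos(az), np.sin(az)],
                 axis=1)

    def decode_binaural(W, X, Y, hrirs):
        """hrirs: four (hrir_left, hrir_right) pairs, one per speaker."""
        feeds = D @ np.stack([W, X, Y])     # static matrix, no adaptation
        left = sum(fftconvolve(f, h[0]) for f, h in zip(feeds, hrirs))
        right = sum(fftconvolve(f, h[1]) for f, h in zip(feeds, hrirs))
        return np.stack([left, right], axis=1)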


So you need some active decoding magic in between. The Gerzon-era 
problem with active decoding was actually that it was used for the wrong 
purposes, too aggressively, and at such a low level of electronic 
sophistication that it just couldn't work. Gerzon never touched 
Mathematica or MATLAB either, so much of his analysis was that of an 
engineer, not of an all-knowing AI-mathematician. Now we can do a bit 
better in all those regards. We even have things like Bruce's tabu 
search results well entrenched in the open literature; something the old 
days could not have dared to dream of.


So, why *not* go with active decoding once again? It's no blasphemy if 
all of the original counterarguments have been answered, and we have 
psychoacoustic data (and, in my case, stupendously obvious anecdotal 
evidence) showing we can simply do better with active processing. I say 
there is no reason not to.


I.e. let's not talk about faking at all, anywhere. Let's talk only about 
capturing the simplest, most processable auditory data on the scene, and 
then about how to make the best of that captured data when replaying it. 
Via any means at all, aiming at transparency.


That's then why I'm once again such a fanboy of the Aalto people's work: 
even the ideal, synthetic fourth-order playback sounded like shit 
compared to the reference. After the newfangled DirAC processing, lo and 
behold, it came pretty close to the original. So if you think -- like I 
do -- of the outcomes, you too would have to beg for Eigenmikes, Aalto's 
software for them, and then home AV gear cognizant of decorrelation, 
B-format-minded sound intensity analysis, and that newfangled variety of 
infinite-order decoder we call Higher Order DirAC.
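
For the record, my reading of the first-order DirAC analysis step, as a 
sketch from the published idea rather than Aalto's actual code: direction 
of arrival from the active intensity vector, diffuseness from its 
magnitude against the energy density, per time-frequency tile:

    # Hedged sketch of DirAC-style B-format analysis (first order);
    # scaling conventions and the time averaging of I and E are
    # glossed over here.
    import numpy as np

    def dirac_analysis(W, X, Y, Z, eps=1e-12):
        """W..Z: complex STFT tiles of a B-format signal, same shape."""
        # active intensity ~ Re{p * conj(v)}: W as pressure, XYZ as velocity
        I = np.stack([np.real(W * np.conj(C)) for C in (X, Y, Z)])
        E = 0.5 * (np.abs(W) ** 2 + np.abs(X) ** 2
                   + np.abs(Y) ** 2 + np.abs(Z) ** 2)
        azimuth = np.arctan2(I[1], I[0])
        elevation = np.arctan2(I[2], np.hypot(I[0], I[1]))
        diffuseness = np.clip(1.0 - np.linalg.norm(I, axis=0) / (E + eps),
                              0.0, 1.0)
        return azimuth, elevation, diffuseness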


Let me coin a shorthand: as opposed to HOA, it's HODAC.
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.

Re: [Sursound] Oculus Rift Visual Demo + Ambisonic Audio Available?

2014-11-25 Thread dw
I have many molds and casts of my ears and head in different materials. 
I can, for instance, put my face (in plaster) on the back of my head. 
Once you start with something that works for you (your own natural 
hearing, for example), it is remarkably difficult to break, or to 
understand. How would you know, when you tilt your head, whether the 
image stays in place with respect to the room because your brain has 
adjusted to the changed HRTF, or because it does not care? Having a 
working model to play with is quite useful.


On 25/11/2014 00:23, Paul Hodges wrote:

--On 24 November 2014 21:16 + John Leonard wrote:


The MDR-7506 headphones are effectively the same as the MDR-V6, so
that's probably it.

I don't find binaural demos very effective; and guess what - I use
MDR-7506 headphones!

Paul



___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.