On 2021-12-01, Jens Ahrens wrote:

> We would like to make you aware of the concept of equatorial microphone arrays, which use a spherical scattering body and microphones along the equator of that body. Here’s a 3-minute video of a binaural rendering of the signals from such array: https://youtu.be/95qDd13pVVY

Impressive, but of course 7th order horizontal-to-binaural always is. The auditory parallax you demonstrate by speaking close to the array is especially convincing.

What isn't so convincing is the performance to the rear. Something wonky is going on there, because the soundscape can easily be heard pulling inward. Are you sure your HRTFs are symmetrical, as befits a hard spherical array? Or did you instead start from something like KEMAR data, which would make the array inherently asymmetrical?

I also ask this because in the dual test, where you turn the array and move the source at the same time, I can *definitely* hear skew in the field. Maybe even a quadrupole moment. That might suggest your algorithms are turning the spherical harmonics the wrong way around: not dually, as they should be, but something else.
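To make the duality point concrete with circular (horizontal-only) harmonics: in one common convention, which I'm assuming here and which need not match the processing in the video, rotating the *field* by an azimuth α multiplies the order-m coefficient by e^{imα}, while rotating the *array* must apply the dual, inverse phase. A minimal sketch:

```python
import numpy as np

# Sketch under an assumed convention: circular-harmonic coefficients of a
# horizontal plane wave from azimuth phi are exp(1j*m*phi), m = -N..N.
N = 7
m = np.arange(-N, N + 1)

def encode(phi):
    return np.exp(1j * m * phi)

def rotate_field(coeffs, alpha):
    # Rotating the *field* by +alpha multiplies order m by exp(1j*m*alpha);
    # rotating the *array* by +alpha is the dual operation, exp(-1j*m*alpha).
    # Mixing the two up skews the rendered field.
    return coeffs * np.exp(1j * m * alpha)

phi, alpha = 0.3, 0.5
# The rotated coefficients must match a plane wave encoded at phi + alpha.
assert np.allclose(rotate_field(encode(phi), alpha), encode(phi + alpha))
```

If the two phase directions are swapped anywhere in the chain, source motion and array motion no longer cancel in the dual test, which would sound exactly like a skewed field.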

Finally, I'd like to hear this done with extremely wideband and impulsive sources as well. A soft speaking voice ain't gonna cut it as a test. I'd like to hear balloons being popped, triangles being rung, small tambourines being pounded. And at the other end of the spectrum, the lowest notes of an organ, to measure how the sensor fares in reactive fields. In particular, what happens above the equatorial plane: does the sensor adequately and reliably mitigate, over the entire frequency range, the spatial aliasing which *necessarily* comes with off-plane waves?

It *can* theoretically do something like that, even in the linear regime. That's just a matter of joint spatial-temporal regularization: taming the off-sightline, aliasing modes over time-frequency, enough to make the thing seem isotropic over a wide band. You blur directionally where there would be aliasing, and leave things sharp where there's enough symmetry not to need it.
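To be explicit about what "taming" can mean in the linear regime, here is a toy Tikhonov-style regularized inversion; the numbers and the per-mode framing are illustrative assumptions, not the actual processing chain:

```python
import numpy as np

# Toy illustration: naive inversion of per-mode strengths b blows up
# wherever a mode is weak (exactly where aliasing dominates). A
# Tikhonov-regularized inverse attenuates ("blurs") those modes instead.
def regularized_inverse(b, lam):
    return np.conj(b) / (np.abs(b) ** 2 + lam)

b = np.array([1.0, 0.3, 0.01, 0.001])   # assumed mode strengths, strong to weak
naive = 1.0 / b                          # huge gain on the weakest mode
reg = regularized_inverse(b, lam=1e-2)   # strong modes pass, weak ones are tamed

print(naive[-1])   # enormous
print(reg[-1])     # well below 1
```

Making λ depend on direction and frequency is what turns this into the joint spatial-temporal regularization I mean above.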

Now, if you do something besides this, tell us what it is. There's all sorts of work to be done in the nonlinear, machine-learning vein. "Superresolution" it's called, and it includes all of the active decoders of the analogue era, like Dolby Surround's Active Matrix, and the lot. All of that could be done much better now, using AI methods, statistical learning, target tracking algorithms, and whatnot. If you're doing any of those, do tell; I'd be highly interested.

But right now, I don't really see what is so special about the array. It sounds like a conventional horizontal array on a rigid sphere, followed by just conventional processing into binaural playback. That sort of thing has been done in the ambisonic ambit for decades. I've even heard better simulacra of duality in free air, with POA.

So, why not publish your equations and methodology? Let's read back from there. :)

> Their main advantage over conventional spherical microphone arrays is the fact that they require only 2N+1 microphones for Nth spherical harmonic order (conventional arrays require (N+1)^2 microphones). The price to pay is the circumstance that the array does not capture the actual sound field but a horizontal projection of it.
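For concreteness, the saving quoted here works out as follows; this is just a trivial check of the two formulas above:

```python
# Microphone counts at spherical harmonic order N: 2N+1 for the
# equatorial array versus (N+1)**2 for a conventional spherical one.
for N in range(1, 8):
    print(N, 2 * N + 1, (N + 1) ** 2)
# At N = 7, the order demonstrated in the video: 15 microphones vs. 64.
```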

In fact you don't capture a horizontal projection. The only way you could do that is with an infinite vertical stack of microphones on a rigid cylinder, sensitive only to plane waves. Here, the sound field around the spherical sensor will very much be sensitive to sound from above -- as you showed in the video.

That sound will be subject to directional aliasing. We just don't hear it, because you talk to the mic array in a muffled speaking tone. Wideband signals of the kind I suggested would spatially alias widely, leading to a *lot* of audible artifacts, direction reversals, and the like, when the source and/or the array is made to revolve.
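How wide "widely" is can be estimated with the usual kR ≤ N rule of thumb for rigid-sphere arrays; the radius below is my assumption, since the actual array's dimensions weren't stated:

```python
import math

# Rule of thumb: an order-N rigid-sphere array is alias-free only up to
# roughly kR = N, i.e. f_alias ~ N*c / (2*pi*R). Radius R is an assumption.
c = 343.0   # speed of sound, m/s
R = 0.05    # assumed 5 cm sphere radius
N = 7
f_alias = N * c / (2 * math.pi * R)
print(round(f_alias))   # a few kHz: a popped balloon has plenty of energy above this
```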

> This poses the question of what it may sound like if the array captures sound that originates from outside of the horizontal plane?!?

> The video is going to demonstrate this!

Now test it the way I wanted, and give us the results. It's going to fail, because it's not isotropic. ;)

Though, maybe it's not *supposed* to be isotropic. Seen that way, it's a kind of pantophonic, horizontal-only array, and in that role it probably performs well. It's just that you can't use that sort of array in anything but a free field. In confined spaces sound does not propagate in two dimensions but in three, with coupled modes between the 2D and 3D fields. If you try to capture those with any anisotropic probe, there will be interplay across dimensions. For instance, reverberation will be diminished, because it spreads out into the third dimension, which isn't being captured; and if there's a standing oblique mode in the 3D room, any excitation of it will spatially alias onto the equatorial array, sometimes leading to nasty troughs or peaks in the amplitude response.
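The oblique-mode point can be quantified with the standard rectangular-room mode formula; the room dimensions below are assumed purely for illustration:

```python
import math

# f(nx, ny, nz) = (c/2) * sqrt((nx/Lx)**2 + (ny/Ly)**2 + (nz/Lz)**2)
c = 343.0
Lx, Ly, Lz = 5.0, 4.0, 3.0   # assumed room dimensions in metres

def mode_freq(nx, ny, nz):
    return (c / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Any mode with nz > 0 has vertical structure an equatorial array cannot
# resolve; exciting it aliases onto the horizontal orders.
print(round(mode_freq(1, 1, 1), 1))   # lowest oblique mode of this assumed room
```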

Which in fact happened when you talked into the mic at close range: the proximity effect from above wasn't as it's supposed to sound, even if it was pretty close in the ecliptic. Above the equatorial plane the sound was indistinct in direction, and I'm perfectly confident that with a wider-band test signal it would have sounded even more amorphous, because of spatial aliasing.
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound
