On 2022-04-06, Fons Adriaensen wrote:

>> With near field sources, the outwards radiating field from a point source
>> is reactive at each point. Pressure and velocity are *not* in phase,

> Even a pantophonic mic will pick up that phase difference.

It will, but it will also miss out on the energy radiating off the pantophonic plane. Conversely, a purely planar, pantophonic analysis at the reconstruction end will also miss out on the convergent energy from the rig.

This was in fact why Christof Faller, when he lectured at Aalto University back in the day, thought "Ambisonics couldn't work". He seized on the fact that a pantophonic rig of point-source speakers necessarily leads to a 1/r falloff in amplitude, measured from the rim inwards. What he didn't take into account was that reconstructing that same basic WXY field using a full periphonic rig (or, if you will, an idealized cylindrical pantophonic setup, reproducing not point but line sources) takes the problem away entirely.
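(Just to spell the spreading laws out; this is textbook free-field behaviour, nothing specific to Faller's talk. With pressure amplitude p and intensity I:

p_point(r) \propto 1/r         I_point(r) \propto 1/r^2    (spherical spreading)
p_line(r)  \propto 1/\sqrt{r}  I_line(r)  \propto 1/r      (cylindrical spreading)

A ring of real, point-like speakers driven by a theory that implicitly assumes line sources thus has its interior field decaying faster than the 2D math intends; idealized line sources, or a genuinely periphonic reconstruction, are what make the decay law come out right.)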

Now, the counterpoint for a microphone is a bit different. A soundfield mic, being a central, coincident one, doesn't have quite this problem, so in theory you can do without Z and have a coincident pantophonic setup, at first order, POA. That's because the acoustic field is pointwise four-dimensional: pressure, W, plus three velocity components, XYZ, all independent. It's all orthogonal, so you can just drop Z with no harm done to the rest, energy- or propagation-direction-wise.
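(For concreteness, with azimuth \theta and elevation \phi, and leaving out the traditional -3 dB scaling on W, a plane wave encodes at first order as

W = 1
X = \cos\theta \cos\phi
Y = \sin\theta \cos\phi
Z = \sin\phi

so for purely horizontal sources \phi = 0, Z vanishes identically, and dropping it loses nothing at this order.)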

Not so with anything above first order, because then you'll be dealing with non-local features of the acoustic field. And once you do that, once you parametrize the field by direction over a region, your Fourier series won't terminate. They attenuate with frequency, but they never truly terminate. Which is why the first-order Gerzonian soundfield mic is so easy to derive, while even a second-order mic is necessarily an engineering feat, imperfect in its attenuation of higher-order spherical harmonic contributions.

From second order up, you *cannot* just neglect the vertical components the way you can with POA soundfields; it's the same thing as with Faller's analysis of pantophonic playback rigs and their losing energy to the third dimension. As soon as you deal with a field non-locally, this sort of thing necessarily happens, and since you can only deal with the acoustic field locally using four coordinates, WXYZ, anything above that becomes more complicated.
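(In one common convention, the interior expansion about the array centre reads

p(r, \theta, \phi, k) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} b_{nm}(k) \, j_n(kr) \, Y_{nm}(\theta, \phi)

and for a point source at finite distance r_s the coefficients b_{nm}(k) go as spherical Hankel functions of k r_s: nonzero for every n, with no finite order at which the series stops. What saves us in practice is only that j_n(kr) gets small once n exceeds roughly kr, so over a small region and at low frequencies the higher orders contribute little; they just never contribute exactly nothing.)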

This is even the reason the dominance/boost transformation of early yore doesn't readily generalize: it's basically a local Lorentz boost, akin to what is done in special relativity. It doesn't generalize to non-local things, which is what anything beyond first order in ambisonics necessarily is. It could generalize if the acoustic field pointwise had more degrees of freedom; say, if it *pointwise* had a second-order spherical harmonic symmetry, if it were a tensor field of that kind. But it isn't.
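(Concretely, in a normalization where a frontal plane wave encodes as W = X = 1, so again without the traditional W gain, dominance along X is just a hyperbolic rotation,

W' = W \cosh\mu + X \sinh\mu
X' = W \sinh\mu + X \cosh\mu
Y' = Y,   Z' = Z

with the usual dominance parameter \lambda = e^{\mu}: a frontal plane wave comes out as a frontal plane wave boosted by \lambda, a rear one attenuated by 1/\lambda. The algebra closes because (W, X, Y, Z) is a local, four-component object; try the analogous warping on a higher-order field and it spills across orders without terminating, which is the sense in which it doesn't readily generalize.)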

Also, with less structure than we actually have in sound, even this much symmetry could in theory be missing. I mean, if acoustics were fully described by a scalar W field, with no velocity part XYZ to go with it, we couldn't have even a workable soundfield mic. We'd hit the same problems of spatial extension we now hit with second-order and higher mics already at first order. That's how it is with, e.g., the heat equation.

>> the vector describing energy transfer (in EM I think the Poynting vector)
>> is *not* in the plane, but outwards from the source, all round.

> Which means that whatever you try to describe here is NOT a vector.

Of course it is. We might not yet agree where it points, but a vector it is.

> The velocity vector will be in the horizontal plane, and is represented correctly by X,Y. It has no Z component.

This is not true. Suppose you have two point sources of sound, one a metre to your left and one a metre to your right. They radiate in 3D at equal amplitude, but in opposite polarity.

When you vectorially sum their contributions at points just off the horizontal plane, you'll discover there is a vertical velocity component. It's counter-intuitive to be sure, but one way of seeing why it happens is that in such a near field, the pressure oscillates not just in the plane, but above and below it too. That leads to an oscillating, vertical pressure gradient off the plane, which forces a vertical (transverse) oscillation in velocity as well.
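A minimal numerical sketch of this, my own illustration with ideal monopoles and an exp(+i \omega t) time convention, in Python:

import numpy as np

# Two ideal monopoles in antiphase, one metre left and right of the origin.
# Each radiates p(d) = A * exp(-1j*k*d) / d  (time convention exp(+1j*omega*t)).
# Particle velocity from Euler's equation: u = -grad(p) / (1j*omega*rho).

rho = 1.2          # air density, kg/m^3
c = 343.0          # speed of sound, m/s
f = 200.0          # frequency, Hz
omega = 2.0 * np.pi * f
k = omega / c

sources = [(np.array([-1.0, 0.0, 0.0]), +1.0),   # left source, positive polarity
           (np.array([+1.0, 0.0, 0.0]), -1.0)]   # right source, negative polarity

def velocity(r_obs):
    """Complex particle velocity vector at the observation point r_obs."""
    u = np.zeros(3, dtype=complex)
    for r_src, amp in sources:
        rel = r_obs - r_src
        d = np.linalg.norm(rel)
        # grad of exp(-1j*k*d)/d is (-1j*k - 1/d) * exp(-1j*k*d)/d along rel/d
        grad_p = amp * (-1j*k - 1.0/d) * np.exp(-1j*k*d) / d * (rel / d)
        u += -grad_p / (1j * omega * rho)
    return u

print(velocity(np.array([0.3, 0.2, 0.0])))   # in the plane: z component is zero
print(velocity(np.array([0.3, 0.2, 0.4])))   # off the plane: z component is not

The first velocity comes out purely horizontal, its z term cancelling by symmetry; the second has a clearly nonzero vertical part.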

This wouldn't happen if the sources were infinitely tall cylindrical ones, since then there wouldn't be any pressure variation in the vertical direction at all. Nor would there be any such phenomena if the equations were 2D to begin with; though if they were, we also wouldn't have an inverse *square* law to deal with, but just a straight inverse.

But given even two monopole sources in 3D, in antiphase, at equal distances, you do get these off-plane velocity components. Different kinds and amounts too, depending on frequency and distance.

So what happens is that while the pressure field is mirror-symmetric about the horizontal plane, there still has to be a Z component in order to recreate the field to full first order.

> Not for it to be correct for a listener in the same horizontal plane
> as the speakers.

But yes. Faller's analysis was that you miss out by a 1/r factor in intensity over distance, and I think that analysis holds as well. So pantophony doesn't cut it: even if you only want to recreate a well-recorded pantophonic field, you actually need to recreate it using periphony, in order to avoid that extra fall-off in level.

> Of course if the listener moves up or down the sound field he/she senses will be incorrect. What do you expect?

Well that'd just be stupid. What do you expect.

No, no, I'm talking about a fully distinct phenomenon. About something far more interesting and intricate.

Or if I'm perchance poking holes in my head, at least I'm doing so in good company. Analysing stuff rationally, instead of just shouting into the wind. So do poke me. Let's see where this gets us. :)

>> ...and that's precisely why pantophony is an idea born dead. We don't have infinite vertical line sources, nor microphone arrays which mimic their directional patterns. The only thing we really have is 3D mic arrays and 3D rigs.

> Indeed. But we also have situations in which most sources are in the horizontal plane or close to it, and as listeners we tend to stay on the ground and not fly around.

Then I think the most interesting thing is to adapt our mic arrays *to* this situation, and to adapt our mathematical machinery, our reconstruction machinery, to it as well.

The HOA machinery, and WFS too, is pretty good at analysing what then happens; it just hasn't been used much to deal with such uneven, anisotropic kinds of problems.

Maybe we/someone ought to take the theory towards those kinds of problems as well?
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2