On 2022-06-27, Fons Adriaensen wrote:

1. Scale the optical instrument by the ratio of wavelengths and
  then consider if the size is practical.

But it really is practical. An occulter a metre or so across is right in that range, and doable.
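Doing the wavelength scaling explicitly makes the point. A quick back-of-the-envelope sketch (the speed of sound and test frequencies are my illustrative assumptions, not from the discussion above):

```python
# How many wavelengths does a metre-scale acoustic occulter span?
c_sound = 343.0   # m/s, speed of sound in air at room temperature (assumed)
occulter = 1.0    # m, the metre-scale occulter mentioned above

for f in (100.0, 1000.0, 10000.0):
    lam = c_sound / f
    print(f"{f:7.0f} Hz: wavelength {lam:6.3f} m -> "
          f"occulter spans {occulter / lam:5.1f} wavelengths")
```

At 1 kHz the wavelength is about 34 cm, so a metre occulter is a few wavelengths across, which is exactly the size regime of optical coronagraph masks measured in their own wavelengths.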

2. The required optical processors may not exist for sound waves.
  And if they existed, they probably wouldn't work over a frequency
  range of several decades.

In the audio range we already have acoustic metamaterials, and nonlinear ultrasonic processing is proven to work. We even have continuously varying, graded-index acoustic lenses in nature: the melon of toothed whales.

3. Pressure waves do not work in the same way as EM waves. For
  example, they are not transversal, so there is no concept of
  polarisation.

Sure, but much of the optical work on wavefield sensing doesn't rely on such concepts either. They are a plus, but not needed. For instance "closure phase" in optical interferometry exists precisely to recover usable phase information when the absolute phase at each aperture is scrambled: summing the measured phases around a triangle of baselines cancels the per-station errors. If we work with pure longitudinal pressure waves, as we do in audio, phase can be measured directly, so there's nothing to be got around; we just use what is there.

And yet the technique works, either in the Fourier-optics or the ray-optics regime, depending on the ratio of wavelength to aperture and on the apparatus.
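The closure-phase cancellation is easy to demonstrate numerically. A minimal sketch, with arbitrary made-up phases for three stations (all numbers are my assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True source phases on the three baselines 1-2, 2-3, 3-1 (radians).
true12, true23, true31 = 0.4, -1.1, 0.7

# Per-station phase errors, e.g. the medium above each receiver.
e = rng.uniform(-np.pi, np.pi, size=3)

# Each measured baseline phase picks up a difference of station errors.
m12 = true12 + e[0] - e[1]
m23 = true23 + e[1] - e[2]
m31 = true31 + e[2] - e[0]

# Closure phase: summing around the triangle cancels every e[i] term.
closure = m12 + m23 + m31
print(closure, true12 + true23 + true31)  # equal up to rounding
```

Nothing in this depends on transverse waves or polarisation, which is the point: the trick carries over to pressure fields unchanged.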

Interestingly, I just bumped into this one, from Las Vegas, on one of my engineering channels. https://www.youtube.com/watch?v=ydOn8qwLJzA

Towards the end, it's noted that the Sphere will beamform content towards individual audience members. If there's any veracity to that claim, it would have to come from something like an HOA system of order ~1000. More likely, it does something like a few nonlinear-acoustics pencil beams from an active ultrasonic array, blending AESA with https://en.wikipedia.org/wiki/Sound_from_ultrasound , leading to a smaller emitter, yet an active one, so that you can "shoot" even sound.
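The sound-from-ultrasound trick itself is just a quadratic nonlinearity demodulating two ultrasonic carriers into their audible difference tone. A toy sketch (carrier frequencies and the squaring stand-in for air's nonlinearity are my illustrative assumptions):

```python
import numpy as np

fs = 192_000                        # sample rate, Hz
t = np.arange(fs) / fs              # one second of signal
f1, f2 = 40_000.0, 41_000.0         # two ultrasonic carriers

s = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
demod = s**2                        # crude stand-in for air's nonlinearity

spectrum = np.abs(np.fft.rfft(demod))
freqs = np.fft.rfftfreq(len(demod), 1 / fs)
band = (freqs > 20) & (freqs < 20_000)
peak = freqs[band][np.argmax(spectrum[band])]
print(peak)  # 1000.0 Hz: the audible difference tone f2 - f1
```

The audible beam inherits the directivity of the ultrasonic carriers, which is why the emitter can stay small and still "shoot" sound.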

The point is that there are plenty of active acoustical techniques which can mimic or even outdo (within their bandwidth) optical techniques. It's the same wave equation we're working with, after all. So I wonder, why aren't optical techniques borrowed more often onto the acoustic side? All of them could be; the same equation governs all, and on the acoustic side we even have nonlinearity to help with further continuous-time processing before the DSP machinery. Why?
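"Same wave equation" can be taken literally: a discretised solver serves both regimes unchanged, with only the propagation speed swapped. A minimal sketch (grid sizes, time steps and initial condition are my assumptions for illustration):

```python
import numpy as np

def simulate(c, dx, dt, steps, n=200):
    """Leapfrog update for u_tt = c^2 u_xx with fixed ends."""
    u_prev = np.zeros(n)
    u = np.zeros(n)
    u[n // 2] = 1.0                      # initial pulse in the middle
    r2 = (c * dt / dx) ** 2              # squared Courant number (<= 1)
    for _ in range(steps):
        u_next = 2 * u - u_prev
        u_next[1:-1] += r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u_prev, u = u, u_next
    return u

# Same solver, different medium: dt chosen so c*dt/dx is 0.5 in both.
acoustic = simulate(c=343.0, dx=1.0, dt=1 / 686, steps=50)
optical = simulate(c=3e8, dx=1.0, dt=1 / 6e8, steps=50)
print(np.allclose(acoustic, optical))  # True: identical dynamics
```

Once the grid is expressed in wavelengths, the physics no longer knows which field it is propagating, which is exactly why the optical toolbox should port over.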

I'd guess nobody has really needed something like acoustic coronagraphy. We don't surveil the surrounding acoustic field the way we do the more photogenic one. Also, the normal acoustic field around us is pretty uniform and dominated by low-level reverberation plus noise; as them astronomers would put it, "it has a high background level". And an acoustic field typically doesn't contain exceedingly intense, high-interest point sources which you'd want to track or take a detailed spectrum of. So no, there's not a lot of interest, for a reason, in doing this sort of thing on the auditory, receiving end.

At the same time, there apparently *is* real interest in solving the reciprocal problem: how to project sound to a single person from a limited array.

That one is at least a second- or even third-order nonlinear optimisation problem, constrained by a partial differential equation. Highly nontrivial, non-convex, and quite possibly NP-hard even when discretised.
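The linearised core of that projection problem is tractable, though: pick complex driving weights for an array so that the summed free-field Green's functions deliver unit pressure at one listener and (near) silence at the others, a regularised least-squares "pressure matching". A sketch under assumed geometry and frequency (array layout, listener positions and 1 kHz are all my illustrative choices):

```python
import numpy as np

k = 2 * np.pi * 1000.0 / 343.0             # wavenumber at 1 kHz in air

# 16-element line array along x, listeners 5 m away.
sources = np.stack([np.linspace(-2, 2, 16), np.zeros(16)], axis=1)
listeners = np.array([[0.0, 5.0],          # the target listener
                      [-3.0, 5.0],
                      [3.0, 5.0]])         # neighbours to keep quiet

d = np.linalg.norm(listeners[:, None, :] - sources[None, :, :], axis=2)
G = np.exp(-1j * k * d) / (4 * np.pi * d)  # free-field Green's functions

target = np.array([1.0 + 0j, 0.0, 0.0])    # unit pressure at listener 0 only
w, *_ = np.linalg.lstsq(G, target, rcond=None)

p = G @ w
print(np.abs(p))  # ~[1, 0, 0]: loud at the target, quiet at the neighbours
```

With more listeners than degrees of freedom, or with the nonlinear ultrasonic demodulation in the loop, this stops being a clean least-squares problem, which is where the hard optimisation above comes in.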
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
_______________________________________________
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.