2011/4/26 Dave Malham <d...@york.ac.uk>

  On 24/04/2011 19:11, Helmut Oellers wrote:

  ...modern computers are also clever. Today nothing is incalculable if we
know the formula and all the variables.

That's a BIG assumption - and given the essentially chaotic (in the
mathematical sense) nature of the Universe, wrong. We are now pretty certain
that nothing is that predictable and that that idea's basically (old)
Science Fiction - we have moved from  E. E. "Doc" Smith's "Lensman" universe
( where ultimately intelligent beings could predict everything because they
knew the complete starting conditions and laws of the Universe) to the
Discworld universe of Terry Pratchett where one flap of a Quantum Weather
Butterfly's *** wings can change the course of the entire Universe (and
confound even the Gods).



Hello Dave,

What you are describing I would consider to be the “Heisenberg uncertainty
principle”, which says that the closer we look at things, the less we can
discover. Accordingly, in the quantum world true randomness exists and is
really not computable. However, in the macro world of whole air molecules, the
conditions are describable.


On 24/04/2011 19:11, Helmut Oellers wrote: However, with WFS we have a chance
of avoiding that problem. All we need is to include the playback room
properties in the synthesis. In this way it becomes possible to subtract the
additional detours of the individual wave fronts in the playback room.

You wrote: Even in theory, this can only be done to a limited extent as you
don't have sufficient foreknowledge of the state of the room acoustics, only
that of a model with significant simplifications on things like surface
properties of materials and, perhaps more importantly, air movements in the
space - which cannot be predicted given the chaotic nature of the system.



I agree we have to simplify. However, at least for the direct wave and its
first-order reflections a model calculation would be sufficient. The first
reflections cause deep comb filter effects with the direct wave, especially at
lower frequencies. That is simple to calculate, so simple that we are able to
construct a common model of the recording and the playback room. In that way,
we can really subtract the additional detours and level changes of the
playback room as far as the first reflections are concerned. The playback wall
reflection thus becomes able to fake the first recording room reflection in
time, level and direction.
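
Just to illustrate the kind of calculation I mean (this is only a sketch, not
the actual procedure of my patent), a first-order image-source model for one
wall could look like the Python example below; the geometry, the positions and
the simple 1/r level law are assumptions for the illustration.

import numpy as np

C = 343.0  # speed of sound in air, m/s

def first_reflection(source, listener, wall_x):
    """Delay and level of the first-order reflection from a wall at x = wall_x,
    relative to the direct wave (image-source method)."""
    source = np.asarray(source, dtype=float)
    listener = np.asarray(listener, dtype=float)
    image = source.copy()
    image[0] = 2.0 * wall_x - source[0]        # mirror the source at the wall
    d_direct = np.linalg.norm(listener - source)
    d_refl = np.linalg.norm(listener - image)
    delay = (d_refl - d_direct) / C            # additional travel time ("detour"), s
    level = d_direct / d_refl                  # 1/r spreading, relative to the direct wave
    return delay, level

# Assumed geometry: recording room wall 5 m behind the source,
# playback room wall only 2 m behind the (virtual) source.
rec_delay, rec_level = first_reflection([1, 0, 0], [4, 0, 0], wall_x=-5.0)
pb_delay, pb_level = first_reflection([1, 0, 0], [4, 0, 0], wall_x=-2.0)

# Correction that lets the playback wall fake the recording room reflection:
extra_delay = rec_delay - pb_delay             # detour still to be added, s
gain = rec_level / pb_level                    # level change to be applied

# Direct wave plus one reflection forms a comb filter; the first notch lies at
# f = 1 / (2 * delay), and the notches are deepest where the reflection level
# comes close to the direct level, i.e. towards low frequencies.
print("first comb filter notch: %.1f Hz" % (1.0 / (2.0 * rec_delay)))
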
The timbre of the recording room, caused by the fine structure of the
reflecting surfaces, is mainly contained in the reverberation. That complex
reflection pattern really cannot be calculated with a simple model. However,
there is no need to restore the correct starting point of every single
reflection in the reverberation tail. We only need the enveloping
distribution. Convolution with a simple impulse response is a reliable way to
do this.
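
As a minimal sketch of that convolution idea (with an exponentially decaying
noise tail standing in for a measured recording room response; the sample
rate, the reverberation time and the 80 ms boundary between early reflections
and tail are assumed values):

import numpy as np
from scipy.signal import fftconvolve

fs = 48000                                    # sample rate, Hz
rt60 = 1.8                                    # assumed reverberation time, s
t = np.arange(int(rt60 * fs)) / fs

# Synthetic late tail: noise with an exponential decay reaching -60 dB at RT60.
rng = np.random.default_rng(0)
tail = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / rt60)

# Keep only the part after the early reflections (here: after 80 ms); the
# direct wave and the first reflections are synthesised geometrically instead.
tail[: int(0.080 * fs)] = 0.0

dry = rng.standard_normal(fs)                 # 1 s of placeholder source signal
wet = fftconvolve(dry, tail)                  # reverberation tail by convolution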

Nevertheless, a true spatial distribution of the starting points, independent
of the listener position, is indispensable for the direct wave and its first
reflections. Air movement, I admit, is hardly predictable. That will cause
some gaps in the perfect recreation of the first reflections. Nevertheless, as
you have written, there fortunately exist
“capabilities of the human brain to fill in missing details and/or ignore
incorrect/inconsistent cues just so long as a sufficient subset of the cues
presented are both (adequately) correct and consistent.” Obviously, we can
fill in those gaps.
However, from a physical point of view, with the conventional procedures we
don't just have to fill in some gaps; we are forced to recreate the audio
event from a very crude fragment!


You wrote: In practice, the limitations caused by finite speaker separation
and the two dimensional nature of most extant WFS systems mean that the
cancellation would only be valid at low frequencies. The perceptual effects
of having the disruptive effects of playback space reflections cancelled at
low frequencies and not at higher ones would need to be subject to listening
tests, as would the 2D/3D conflict before we could give any kind of
definitive answers - there might well be situations (or source materials)
for which attempting cancellation of playback space acoustics could actually
be worse than the effects of the playback space acoustics themselves.



In theory, the Wave field synthesis principle is not limited to the horizontal
plane. The attempt to cancel the playback room reflections by means of the
differently shaped cylinder waves of the loudspeaker rows seems really
dilettantish. As you know, problems cannot be solved by the same kind of
thinking that created them. Much cleverer than fighting against the
reflections is to include those playback room reflections purposefully in the
synthesis.



You wrote: On the other hand, if "true spatial audio" is intended to mean
the recreation of an original real acoustic event (concert, airshow
recording, whatever...) as it would have existed as a percept by a human
attending that event, well, that's not going to happen with any of our
systems until (and if) we can do direct brain recording and stimulation to a
level of detail that current fMRI working is only barely beginning to hint
at.

That's because that percept does not just consist of the stimulation of our
ear drums by the sound waves but also all the inputs from the other senses
and, moreover, not just at that point in time but also previous to that.



Well, the sound waves cannot recreate the visual impression or the smell of
the concert hall. All we can reach is true spatial audio. In my view that is
defined by the fact that a change of the listener position in the playback
room causes the same changes in perception as the corresponding movement of a
listener in the recording room.

That would be possible today from a technical point of view. On my
www.holophony.net website I have described a possible way. It was patented
in Germany in 2005, but at that time sufficient computing power was not
available. The procedure is described with simple animations in the fourth
chapter of the website. In the end, the listener at home is even able to move
across the virtual recording room by means of a joystick. :)

If the method is of interest, I can describe the main principle as briefly as
possible in a next mail.



Regards, Helmut