I think the one really important thing to do when applying any kind of
processing, but especially processing with any sort of adaptive element,
is to always LISTEN to the results, because even the best will fall down
on some sounds. One problem we face these days is the sheer amount of
processed (mp3, etc.) sound younger people are exposed to - from birth in
many cases. The resulting desensitization of their hearing was something
I was becoming really concerned about towards the end of my days at the
uni. It was becoming increasingly difficult to find students who could
hear things that, even with my age-degraded hearing, I found profoundly
disturbing (quantisation noise, aliasing, etc.) - recoverable with
training, of course, but even so..... :-(
     Dave

On Tue, 18 Dec 2018 at 08:35, David Pickett <d...@fugato.com> wrote:

> At 12:15 17-12-18, Politis Archontis wrote:
>
>  >Another very sensible approach was presented by Christof Faller and
>  >Illusonics at the same conference, in which a simpler adaptive filter
>  >is used to align the microphone signals to the phase of one of the
>  >capsules, making them again in essence coincident.
>
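As an aside, here is a minimal sketch (my own Python illustration, not
Faller's actual algorithm; window length and so on are arbitrary) of the
kind of per-bin alignment being described: each capsule keeps its own
magnitude but takes the phase of a reference capsule in every STFT bin.

    import numpy as np
    from scipy.signal import stft, istft

    def phase_align(signals, fs, ref=0, nperseg=1024):
        # signals: (n_capsules, n_samples) array of capsule recordings.
        # Each capsule keeps its own bin magnitudes but carries the phase
        # of capsule `ref`, so the array behaves as if phase-coincident.
        _, _, S = stft(signals, fs=fs, nperseg=nperseg)  # (caps, bins, frames)
        S_aligned = np.abs(S) * np.exp(1j * np.angle(S[ref]))
        _, y = istft(S_aligned, fs=fs, nperseg=nperseg)
        return y

A real implementation would presumably track this with a smoothed
adaptive filter rather than substituting the phase outright.
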
> I am interested to know how this approach gets around the problem
> that, with any pair of capsules, the difference in phase is a function
> of the angle of incidence, and how it distinguishes between identical
> frequency components that are actually generated by different acoustic
> sources.
>
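To make the geometry concrete: for a far-field plane wave arriving at two
capsules spaced d apart, the inter-capsule delay is d*cos(theta)/c, so
the phase difference at frequency f is 2*pi*f*d*cos(theta)/c. A one-liner,
with illustrative numbers of my own choosing:

    import numpy as np

    def phase_diff(f_hz, theta_rad, d_m, c=343.0):
        # Plane-wave inter-capsule phase difference: 2*pi*f*d*cos(theta)/c
        return 2 * np.pi * f_hz * d_m * np.cos(theta_rad) / c

    # 1 kHz at 5 cm spacing: about +/-0.92 rad as theta sweeps 0..180 deg
    print(phase_diff(1000.0, np.deg2rad(0.0), 0.05))

Since the phase depends only on cos(theta), one reading is consistent
with a whole cone of directions - which is exactly the ambiguity in
question.
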
> It seems to me that some assumptions must be involved, e.g.:
>
> -- that the microphone array is stationary during the time of the
> capture window and subsequent computation.
>
> -- that the filtered frequency components used for the computation
> contain only signals from a unique direction.
>
> The first of these assumptions has been mentioned here by reference
> to the generation of spurious signals when the mic is in motion. The
> implication would be that the time taken to acquire and compute the
> data is too long to satisfy the condition.
>
> The second is demonstrably untrue when we are talking about tonal
> music in a reverberant acoustic. How does the system distinguish
> between a 1kHz partial from a source or reflecting surface on one
> side of the array and one from a different source or reflecting
> surface arriving from another direction? (A moment's thought shows
> that a C major chord, distributed among various performers, would
> have a large number of Cs, Es, Gs, etc. coming from all around.)
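
This is easy to demonstrate numerically: add two 1 kHz plane waves from
different directions at a capsule pair, and the per-bin phase difference
of the mixture implies a single incidence angle matching neither true
arrival. A toy example with numbers of my own choosing:

    import numpy as np

    c, d, f = 343.0, 0.05, 1000.0          # speed of sound, spacing, 1 kHz
    w = 2 * np.pi * f

    def pair(theta, amp, phi0):
        # Complex 1 kHz phasors at two capsules for one plane-wave arrival
        dphi = w * d * np.cos(theta) / c
        return amp * np.exp(1j * (phi0 + np.array([+dphi / 2, -dphi / 2])))

    mix = pair(np.deg2rad(30), 1.0, 0.0) + pair(np.deg2rad(150), 0.7, 1.0)
    dphi_mix = np.angle(mix[0] * np.conj(mix[1]))
    theta_est = np.arccos(np.clip(dphi_mix * c / (w * d), -1.0, 1.0))
    print(np.rad2deg(theta_est))           # ~77 deg: neither 30 nor 150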
>
> David


-- 

As of 1st October 2012, I have retired from the University.

These are my own views and may or may not be shared by the University

Dave Malham
Honorary Fellow, Department of Music
The University of York
York YO10 5DD
UK

'Ambisonics - Component Imaging for Audio'