Garth,

I wonder why your recordings are so afflicted by noise.  The self-noise 
spec for the SPS200 is 12 dBA, which is similar to that of Soundfield's 
other soundfield microphones.  While 12 dBA isn't noise free, it should be 
pretty quiet.  As a reference, the average threshold of detectability for 
microphone noise is about 6 dBA, assuming a natural recording scenario, 
that is, one in which the sounds are replayed at the same level at which 
they occurred in the recording environment.

Of course, it may be that the microphone doesn't meet specifications.

I'm a bit confused by the recordings that you placed at
http://listen.ame.asu.edu/sonic_events.php


The first recording is labeled as "no audio".  The second recording is labeled 
as "you can hear Garth open his canteen and move some things around."  There's 
certainly a lot more noise in that second recording, about 46 dB more, 
unweighted.  It would be interesting to make some more controlled 
recordings to find out whether the noise is coming from the mic and, if 
so, whether the mic meets its specification.
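
For what it's worth, the quickest sanity check is to compare the unweighted 
RMS levels of the two files directly.  A minimal sketch in Python (the 
filenames are placeholders, not the actual files on that page, and it uses 
the soundfile package; any WAV reader would do):

# Sketch: unweighted RMS level of a recording, in dB re full scale.
# Filenames below are hypothetical placeholders.
import numpy as np
import soundfile as sf

def rms_dbfs(path):
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)          # fold to mono for a single figure
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

quiet = rms_dbfs("recording1_no_audio.wav")
noisy = rms_dbfs("recording2_canteen.wav")
print("difference: %.1f dB (unweighted)" % (noisy - quiet))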

Do you ever get to the SF bay area?

Eric Benjamin


On Wednesday, August 6, 2014 3:12 PM, Sampo Syreeni <de...@iki.fi> wrote:
 


On 2014-08-06, Joseph Anderson wrote:

> I take the noise profile from each individual A-format channel...

At the risk of sounding trite, what is noise? I'd argue that it isn't 
one thing, and that it's pretty difficult to define with mathematical 
precision. If you're talking about environmental background, then 
approaches like gating the A-format channels, or some other suitable 
directional representation of the sound, are a good idea.

If you're talking about tape noise instead, that isn't directional at 
all, at least until you get into directional masking calculations over 
what you can throw away without getting caught. In that case you'd want 
to operationalise what you consider noise, find an optimal way of 
extending that idea to B-format, and then do the kind of joint processing 
Eero suggests.

Probably the easiest way is to key the sidechain from W alone and apply an 
equal gate gain to all the channels in the main path. The next step would 
be to do the same per frequency band, and so on. However, in the ambisonic 
world, 
you'll then bump into a third source: the mic. Since the Soundfield 
works on differencing principles, W has a totally different noise 
profile from XYZ, and typically it only gets worse from there as the 
order goes up. (Or it doesn't; that depends wholly on the mic geometry.)
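
A minimal sketch of that first step, a broadband downward expander keyed 
off W and applied with one common gain to all four channels (the 
threshold, ratio and time constants here are made-up illustration values, 
not a recommendation):

# Sketch: W-keyed downward expander, identical gain on W, X, Y, Z.
# All parameter values are illustrative only.
import numpy as np

def w_keyed_expander(b, fs, thresh_db=-60.0, ratio=2.0,
                     attack=0.005, release=0.100):
    # b: array of shape (n, 4) holding W, X, Y, Z
    w = np.abs(b[:, 0])
    a_att = np.exp(-1.0 / (attack * fs))
    a_rel = np.exp(-1.0 / (release * fs))
    env = np.zeros(len(w))
    level = 0.0
    for i, s in enumerate(w):           # envelope follower on W (sidechain)
        coef = a_att if s > level else a_rel
        level = coef * level + (1.0 - coef) * s
        env[i] = level
    env_db = 20 * np.log10(np.maximum(env, 1e-12))
    # below threshold, add (ratio - 1) dB of attenuation per dB of shortfall
    gain_db = np.minimum(0.0, (env_db - thresh_db) * (ratio - 1.0))
    gain = 10.0 ** (gain_db / 20.0)
    return b * gain[:, None]            # same gain on every channel

Doing the same per band just means splitting into a filter bank first and 
running the same gain law on each band's W.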

The point is, I don't think there is a monolithic thing called "noise" 
which can be just blindly "reduced". There never was even in monophonic 
recordings, and the degrees of freedom in your signal chain only multiply 
as you go from stereo up to ambisonic. So you need to be careful about 
which sources of unwanted hiss, distortion or spurious signal you're 
talking about; you'll have to develop computationally tractable models of 
both your utility signal and the noise, and only then can you really start 
to combine all of the machinery into something which actually works and 
sounds good.

E.g. when you expand/limit A-format, implicitly your noise model is a 
hiss which is directional to first order and your model of the utility 
signal is something like a strong, wideband directional signal near it, 
which makes directional sine-to-noise masking statistics relevant. Break 
those conditions and bad things will most likely happen.

So, try your approach on a two-sine test signal, separated in frequency by 
more than a critical band. Pan one of the sines due front, and revolve the 
other one around at about 1 Hz and, say, -6 dB. Then add pink noise at 
about -10 dB to each of the B-format channels independently. I'm 
rather sure that while your approach will work beautifully for the front 
signal alone when adjusted right, it'll lead to nasty, anisotropic noise 
pumping with the dynamic signal in place.
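
Something like the following would generate that test signal; the sine 
frequencies, the horizontal-only panning and the FuMa-style W weighting 
are my assumptions, only the levels and the 1 Hz rotation come from the 
description above:

# Sketch of the test signal described above: a static front sine, a second
# sine revolving at ~1 Hz and -6 dB, plus independent pink noise at -10 dB
# on each B-format channel.  Frequencies and panning convention are assumed.
import numpy as np

fs, dur = 48000, 10.0
t = np.arange(int(fs * dur)) / fs

def pan_horizontal(sig, az):
    # first-order B-format, FuMa weighting: W carries the signal at -3 dB
    return np.stack([sig / np.sqrt(2.0),
                     sig * np.cos(az),
                     sig * np.sin(az),
                     np.zeros_like(sig)], axis=1)      # W, X, Y, Z

s1 = np.sin(2 * np.pi * 500.0 * t)                     # due front, 0 dB
s2 = 10 ** (-6 / 20) * np.sin(2 * np.pi * 2000.0 * t)  # > a critical band away
az = 2 * np.pi * 1.0 * t                               # one revolution per second

b = pan_horizontal(s1, np.zeros_like(t)) + pan_horizontal(s2, az)

def pink(n):
    # rough 1/f shaping of white noise in the frequency domain, for illustration
    spec = np.fft.rfft(np.random.randn(n))
    spec /= np.sqrt(np.maximum(np.arange(len(spec)), 1))
    p = np.fft.irfft(spec, n)
    return p / np.std(p)

noise = 10 ** (-10 / 20) * np.column_stack([pink(len(t)) for _ in range(4)])
b_noisy = b + noise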

(Oh, and by the way, which A-format? As long as you're dealing with a 
perfect mic and a linear, time-invariant filtering operation, you don't 
have to think about that, because you can go willy-nilly between A and B. 
But once you start applying this kind of processing, every possible 
orientation of the mic gives rise to a separate A-format. Which one 
should it be? The above example presumes one of the capsules is facing 
towards the reference. It gets much worse if you place the source 
directly between three adjacent capsules, in angle space.)
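
To make the orientation point concrete, here's the textbook tetrahedral 
A/B relationship; rotating the sound field and converting back gives a 
different, equally valid A-format for the same material. (Capsule labels 
follow the usual FLU/FRD/BLD/BRU convention; this sketch ignores the 
capsule equalisation a real mic needs.)

# Sketch: standard tetrahedral A-format <-> B-format relationship.
# FLU = front-left-up, FRD = front-right-down, BLD = back-left-down,
# BRU = back-right-up.  Real mics also need per-capsule EQ; ignored here.
import numpy as np

A2B = 0.5 * np.array([[ 1,  1,  1,  1],    # W
                      [ 1,  1, -1, -1],    # X
                      [ 1, -1,  1, -1],    # Y
                      [ 1, -1, -1,  1]])   # Z

def a_to_b(a):        # a: (n, 4) columns FLU, FRD, BLD, BRU
    return a @ A2B.T

def b_to_a(b):        # the inverse: one specific A-format orientation
    return b @ np.linalg.inv(A2B).T

def rotate_z(b, angle):
    # rotate the sound field about Z; X and Y mix, W and Z are unchanged
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[1, 0,  0, 0],
                  [0, c, -s, 0],
                  [0, s,  c, 0],
                  [0, 0,  0, 1]])
    return b @ R.T

# b_to_a(b) and b_to_a(rotate_z(b, any_angle)) describe the same sound
# field, but give different A-format signals -- hence "which A-format?".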
-- 
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3255353, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
