Ok, the subject title is a take on the Robbie Robertson/The Band/Dylan song The 
Weight (Take a load off Annie... and you can put the load right on me). So what 
does this have to do with sursound? Answer: Native processing (Intel) versus 
dedicated hardware control (via a collection of BB PGA2311 ICs).
One of my early concerns regarding research and Ambisonics had to do with 
simultaneous control of 8 or more channels. I'm guessing (and it's really a 
guess) that many Ambisonic Aficionados (AA?) use either a surround 
preamp/receiver or the Master fader of a DAW--so long as the fader can provide 
control for a 4-channel (or greater) buss. I don't like using a mouse-n-DAW 
because this requires being at the computer. Surround controllers, on the other 
hand, are generally limited in their number of channels or become expensive. 
One solution to my 'dilemma' was to use a DAW surface controller. The simplest 
implementation of this idea was an attempt to use a MIDI volume controller to remotely control the Master fader. A kit available from midikits.net23.net provided an easy-to-build and flexible solution: a hardware device with a USB interface that controls the (software) Master fader.
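The idea is nothing exotic. As a minimal sketch of the concept (Python with the mido library, not the kit's actual firmware; the port names here are made up), forwarding a hardware volume knob to the DAW as MIDI CC#7 looks like this:

import mido

inport = mido.open_input('MIDI Volume Kit')    # hypothetical controller port
outport = mido.open_output('DAW Remote In')    # hypothetical virtual port

for msg in inport:
    # CC#7 is the standard channel-volume controller; the DAW is
    # configured to map it to the Master fader.
    if msg.type == 'control_change' and msg.control == 7:
        outport.send(msg)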
Another solution, for anyone dealing with large channel counts, is to use programmable gain amplifiers. This is probably what the
majority of modern surround receivers use. By building my own preamp around serially connected Burr-Brown PGA2311 ICs, I achieved a large channel count. A single rotary pulse encoder controls all channels, with the added benefit of software control, which pays off when automating data collection and stimuli presentation. Attempts at mixing DAQ applications and DAWs via ReWire weren't so successful, but this hybrid solution works well.
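To give the flavor of the software side, here is a rough sketch (Python with the spidev library, assuming a Raspberry Pi-style SPI master; my actual build differs, and the bus numbers and chip count are illustrative) of loading one gain value into a chain of PGA2311s:

import spidev

N_CHIPS = 4  # four stereo PGA2311s daisy-chained = 8 channels

def gain_to_code(gain_db):
    # Datasheet relation: gain(dB) = 31.5 - 0.5 * (255 - N); N = 0 mutes.
    if gain_db is None:
        return 0                          # mute
    n = round(255 - 2 * (31.5 - gain_db))
    return max(1, min(255, n))

def set_volume(spi, gain_db):
    # Each chip latches two bytes (right channel, then left); with the
    # chips chained SDO-to-SDI, one 2*N_CHIPS-byte shift sets every
    # channel, and the rising chip-select latches them all at once.
    code = gain_to_code(gain_db)
    spi.xfer2([code, code] * N_CHIPS)

spi = spidev.SpiDev()
spi.open(0, 0)                            # SPI bus 0, device 0
spi.max_speed_hz = 1_000_000
set_volume(spi, -20.0)                    # all 8 channels to -20 dB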
This brings me to my recent post regarding hyfi...
Thanks to all who wrote. The info on Richard Furse's site helped immensely. 
Regarding my 6th (or roaming) speaker: this channel stands alone for a few reasons that I didn't explain but will comment on here. First, my current study
involves SNRs in reverberant environments. The primary noise source is talkers 
and room reflections... specifically, talkers at a distance. The signal is 
speech from a nearby talker. This represents a scenario found in restaurants, 
and a listening condition that is difficult for cochlear implant users. In the 
absence of other noise or talkers, the SPL of direct sound coming from the 
talker (signal) is considerably greater than the resulting echoes that follow. 
This may not be the case for extremely reverberant spaces such as the Hamilton 
Mausoleum, but it does apply to a typical classroom or restaurant. In fact, the 
signal-to-reverb ratio of the talker's voice gives a clue as to his/her 
distance. This ratio changes depending on the noise source's distance from the listener.
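As a back-of-the-envelope illustration (not my test software; the room numbers are made up), the textbook critical-distance approximation, d_c = 0.057 * sqrt(V / RT60) for an omnidirectional source, shows how fast the direct-to-reverberant ratio falls with talker distance:

import math

V, RT60 = 200.0, 0.6               # classroom-ish volume (m^3), reverb time (s)
d_c = 0.057 * math.sqrt(V / RT60)  # critical distance, about 1 m here

# Direct level falls 6 dB per doubling of distance; the reverberant
# level is roughly constant, so DRR(dB) ~ 20 * log10(d_c / r).
for r in (0.5, 1.0, 2.0, 4.0):     # talker distance in meters
    print(f"talker at {r} m: DRR = {20 * math.log10(d_c / r):+.1f} dB")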
I could use a minimal wet/dry mix to create 'some' reverberation for the nearby talker, but it isn't really necessary. Another consideration (the real story) is that the speech signal
emanating from the lone speaker is created on the fly. I use a fixed number of 
words to make data collection simple, but software randomly mixes the word 
order (grammatically correct sentences aren't required). This way, I use a 
handheld response box containing, say, 8 words written on push-buttons, and the 
subject simply pushes the buttons in the order the words are heard. (Keyboards 
or word recognition software to collect responses becomes unwieldy and 
unreliable.) When the listener makes x consecutive mistakes, the SNR is automatically improved to make listening easier (or decreased to make it more difficult in the case of consecutive correct responses).
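In outline, it's a simple up-down staircase. Here's a minimal sketch (Python; the word list, step size, and present_trial function are hypothetical stand-ins, not my actual software):

import random

WORDS = ["boat", "dog", "five", "green", "hat", "moon", "red", "two"]  # 8 buttons
X = 3          # run length that triggers an SNR change
STEP_DB = 2.0  # staircase step size

def run_block(present_trial, snr_db=0.0, n_trials=30):
    # present_trial(words, snr_db) plays the shuffled words in noise and
    # returns True if the buttons were pressed in the right order.
    streak = 0  # >0 counts consecutive hits, <0 consecutive misses
    for _ in range(n_trials):
        words = random.sample(WORDS, len(WORDS))  # random word order
        correct = present_trial(words, snr_db)
        streak = max(streak, 0) + 1 if correct else min(streak, 0) - 1
        if streak <= -X:       # X misses in a row: make listening easier
            snr_db += STEP_DB
            streak = 0
        elif streak >= X:      # X hits in a row: make it harder
            snr_db -= STEP_DB
            streak = 0
    return snr_db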
The noise is surround noise via an Ambisonic setup and auralization and/or live recordings of restaurant noise. Although reverberant noise is generally diffuse, localization cues and "glimpsing" aid the listener in segregating and understanding the signal. At least that's the idea.
I'm fortunate that, for the time being, I have access to a controlled listening environment. You can see photos of my gear and the room by going to
elcaudio.com/research/page_001.htm
elcaudio.com/research/page_002.htm
elcaudio.com/research/page_003.htm
The room isn't my own, and my thanks go to Dr. William (Bill) Yost for letting 
me use his research space.
I have an array of speakers at home, but data collected from a living room 
hardly qualifies as "controlled" or scientific. This was why I was wondering 
whether an Ambiophonic-Ambisonic hybrid system might be possible. Ideally, I'd 
like to construct a system that is portable. Gobos and flats may work, 
particularly if they are constructed of materials that provide absorption 
across the speech frequency range. Low-frequency absorption via a gobo would be a more daunting task, though the right combination of mass and compliance could yield a low-Q absorber.
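For a sense of scale (my numbers here are purely illustrative), the usual panel-absorber approximation puts the resonance of a limp skin over a sealed air gap at f0 = 60 / sqrt(m * d), with m the surface mass in kg/m^2 and d the gap depth in meters; absorption peaks near f0:

import math

def panel_f0(surface_mass_kg_m2, gap_m):
    # Classic membrane-absorber estimate: f0 = 60 / sqrt(m * d)
    return 60.0 / math.sqrt(surface_mass_kg_m2 * gap_m)

# e.g. a 6 kg/m^2 plywood skin over a 0.1 m gap built into a gobo:
print(f"f0 = {panel_f0(6.0, 0.1):.0f} Hz")  # ~77 Hz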
Just ideas... and I appreciate the plethora of wisdom others on this site have provided. In the meantime, I continue to enjoy making Ambisonic field recordings solely for the fun of it. My last recording was
an attempt to record a bald eagle at a nearby lake. Most of what I captured was 
an agitated squirrel and a flying insect that found the mic's windscreen to be 
a good landing pad. Fun effects!
Best,
Eric