Thanks to both David and Richard for posting this. As Richard knows, I use
something akin to DBAP a lot in my installations and am always keen to see
ways of improving panning between speakers that are far apart in irregular
arrays, etc. I will be reading your papers over the coming weeks
and following your research closely. I'm always up for beta testing
software too!!
All the best
Gus

On Sunday, 15 November 2020, Richard Foss <rich...@immersivedsp.com> wrote:

>
> Thanks for the Delta Stereophony history, Dave, interesting!
>
> > Current products do not allow progress to true Delta Stereophony (DBADP)
>
>
> Well conceptually it should be possible if, beyond aux mixes, you have a
> further layer of mixes that can comprise aux bus sends (with controllable
> delays/filtering/volumes) as well as input channels. A possible problem is
> not having sufficiently small delay increments, and not having smoothing
> within the device. Anyway, it's worth doing some experimentation!
> Implementing DBAP or VBAP is fine.
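>
> To make the smoothing point concrete, here is a minimal sketch (Python,
> purely illustrative; the class and parameter names are my own, not taken
> from any particular product) of a single matrix crosspoint with a
> fractional delay and per-block ramping of gain and delay. The fractional
> read avoids coarse delay increments, and the ramps are the smoothing:
>
> import numpy as np
>
> class SmoothedCrosspoint:
>     """One crosspoint: gain and delay, both ramped to avoid zipper noise."""
>     def __init__(self, sample_rate=48000, max_delay_s=0.5):
>         self.sr = sample_rate
>         self.buf = np.zeros(int(max_delay_s * sample_rate) + 2)
>         self.write_pos = 0
>         self.gain = 0.0           # current (smoothed) values
>         self.delay_samps = 0.0
>         self.target_gain = 0.0    # targets set by the panner
>         self.target_delay = 0.0
>
>     def set_targets(self, gain, delay_s):
>         self.target_gain = gain
>         self.target_delay = delay_s * self.sr  # fractional samples
>
>     def process(self, block):
>         n = len(block)
>         out = np.empty(n)
>         gains = np.linspace(self.gain, self.target_gain, n)   # linear ramps
>         delays = np.linspace(self.delay_samps, self.target_delay, n)
>         size = len(self.buf)
>         for i, x in enumerate(block):
>             self.buf[self.write_pos] = x
>             read = (self.write_pos - delays[i]) % size  # fractional read point
>             i0 = int(read)
>             frac = read - i0
>             y = (1 - frac) * self.buf[i0] + frac * self.buf[(i0 + 1) % size]
>             out[i] = gains[i] * y
>             self.write_pos = (self.write_pos + 1) % size
>         self.gain, self.delay_samps = gains[-1], delays[-1]
>         return out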
>
> > DSP chips are now capable of providing it
>
>
> Yes, there is a SHARC DSP in the miniDSP speakers we use, and a
> controllable 32x2 matrix with delays/attenuation at the cross points.
>
> As you say, running Spat and a DAW is processor intensive. This was one of
> the reasons we have turned to using the processors in current devices to do
> the post-render mixing/delays. Having this capability in a speaker is
> great, because your processing capability grows with each speaker. Having
> it in an audio interface/mixing desk means that all the inputs
> (analog/USB/ADAT/…) can have spatialisation applied to them.
>
> > On 15 Nov 2020, at 13:56, Dave Hunt <davehuntau...@btinternet.com>
> wrote:
> >
> > Hi Richard,
> >
> > I’ve changed the title of this topic to something more relevant.
> >
> > I still prefer the term Delta Stereophony to describe this. It seems to
> date back to the mid-1980s, and was described by Gerhard Steinke and
> Wolfgang Ahnert. They were working in East Germany behind the Iron Curtain,
> reputedly using Sinclair ZX Spectrum computers and expensive AKG
> delay lines somehow imported from Austria.
> >
> > It does make a great deal of sense. When digital delay lines became more
> generally available and affordable (the 1990s?), they were increasingly
> used in public address systems to improve coverage over a greater area,
> using speakers down the length of an auditorium to augment the usual
> left/right or LCR main frontal system. The feed to these was delayed by an
> amount that caused the time of arrival of sound from them to match that of
> the main frontal system. Sometimes the feed to a “front fill” system,
> arrayed along the front of the stage to increase clarity in the rows of
> seating near the stage, was also delayed to match the time of arrival of
> sound from its source. Amplitudes were usually adjusted by ear, as indeed
> were delay times after an initial calculation.
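>
> As a rough illustration of that calculation (my own numbers, not from any
> particular installation): the delay is just the extra path length divided
> by the speed of sound, and in practice a few extra milliseconds are often
> added so the main system still arrives first and keeps hold of the
> apparent localisation (the precedence effect):
>
> SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
>
> def fill_delay_ms(extra_path_m, precedence_offset_ms=10.0):
>     """Delay for a fill/delay speaker so the mains still arrive first.
>     extra_path_m: how much closer the fill speaker is to the listener
>     than the mains; the 10 ms precedence offset is a typical by-ear
>     value, assumed here rather than prescribed."""
>     return 1000.0 * extra_path_m / SPEED_OF_SOUND + precedence_offset_ms
>
> # A delay tower 20 m closer to the back rows than the main PA:
> print(round(fill_delay_ms(20.0), 1))  # about 68 ms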
> >
> > These systems were more “appropriately distributed mono” than spatial.
> It is impossible to get the delay/amplitude combination correct for every
> position in the space with a finite number of speakers and output channels,
> so compromises are inevitable. This became common practice, especially for
> large scale stadium events. Digital mixing desks now commonly incorporate
> delays on each output, making this simpler to implement.
> >
> > Current products do not allow progress to true Delta Stereophony
> (DBADP), as the architecture does not provide delay as well as amplitude
> control on each matrix crosspoint, and the market doesn’t expect or demand
> it. DSP chips are now capable of providing it, as proved by TiMax, LISA,
> d&b’s Soundscape, Iosono, Astro, and Meyer’s relaunched system. The market
> is small, and the DSP boxes pricey. It becomes relatively more affordable
> for large multi-speaker systems with large budgets.
> >
> > For the rest of us, it’s down to software. Ircam have a basic
> implementation of DBAP in Spat~ for Max/MSP (or you can roll your own), and
> adding the delay component is relatively simple. You can then scale the
> amplitude and delay separately for each source, as seems appropriate. Using
> delay alone is surprisingly effective. The variation of amplitude between
> widely spaced speakers can be excessive.
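>
> As a rough sketch of what adding the delay component to DBAP can look
> like (this is not Ircam's Spat~ code; the 6 dB rolloff, the spatial blur
> and the speed of sound are assumptions to be tuned by ear):
>
> import numpy as np
>
> SPEED_OF_SOUND = 343.0  # m/s
>
> def dbadp(source_xy, speakers_xy, rolloff_db=6.0, blur_m=0.1):
>     """DBAP gains plus distance-derived delays for one source.
>     source_xy: (2,) source position in metres.
>     speakers_xy: (N, 2) speaker positions in metres.
>     Returns per-speaker gains (unit total power) and delays in seconds."""
>     spk = np.asarray(speakers_xy, dtype=float)
>     d = np.linalg.norm(spk - np.asarray(source_xy, dtype=float), axis=1)
>     d = np.sqrt(d ** 2 + blur_m ** 2)        # blur avoids a singularity
>     a = rolloff_db / (20.0 * np.log10(2.0))  # 6 dB/doubling -> exponent ~1
>     gains = 1.0 / d ** a
>     gains /= np.sqrt(np.sum(gains ** 2))     # constant total power
>     delays = d / SPEED_OF_SOUND              # longer path arrives later
>     delays -= delays.min()                   # only relative delay matters
>     return gains, delays
>
> # Four speakers on a 10 m square, source near the front-left corner:
> g, t = dbadp((2.0, 1.0), [(0, 0), (10, 0), (0, 10), (10, 10)])
> print(np.round(g, 3), np.round(t * 1000, 1))  # gains, delays in ms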
> >
> > Of course you need a fast and powerful computer, and efficient
> programming to do this, but that is also true with any of the alternative
> algorithms (Ambisonics, VBAP, DBAP, WFS, etc.). None of these are perfect
> for every situation, and it is hard to envisage a combination of them that
> would work.
> >
> > Ciao,
> >
> > Dave Hunt
> >
> >
> >> On 14 Nov 2020, at 17:00, sursound-requ...@music.vt.edu wrote:
> >>
> >> From: Richard Foss <rich...@immersivedsp.com>
> >> Subject: Re: [Sursound] Was: Recorder for ORTF-3D OUTDOOR SET
> >> Date: 14 November 2020 at 16:48:36 GMT
> >> To: sursound@music.vt.edu
> >>
> >>
> >> Dave, I have been meaning to follow up on your message for some time,
> because your ideas match what I am currently busy with - at last getting to it!
> >>
> >> Our first immersive audio implementation uses networked PoE miniDSP
> speakers which each incorporate a matrix mixer with volume and delay
> control at the cross points. The delays were a later addition, and I
> certainly found that the localization was enhanced by incorporating delays.
> We implemented DBAP for the amplitude panning, but we have also
> implemented and experimented with VBAP. Given that our targeted
> applications will need
> irregular speaker configurations, we have settled on DBAP for now.
> >>
> >> We had an idea, similar to yours, to utilize the signal processing
> capabilities of audio interfaces/mixers. Because we owned MOTU devices, we
> tried this first on three of the MOTU devices, and have updated our ImmerGo
> software to work with these interfaces. However, it was not possible to
> implement a delay matrix on the MOTU devices, so they just have a DBAP
> implementation, not DBADP (your innovative label:)).
> >>
> >> I am now working on a mixing console implementation where I believe I
> can have delay/EQ at matrix cross points for a few channels; there is an
> inverse relationship between the number of speakers and the number of
> channels with delay/EQ, although all channels can have DBAP. One does need
> mix buses to enable this, and there is often a timing constraint, because
> a lot of messages go to the mixer as the sound sources are spatialized. I
> have found that the MOTU devices are very responsive in this regard.
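>
> One generic way to ease that timing constraint (a sketch of a common
> approach, not a description of ImmerGo or of any particular console) is
> to coalesce the crosspoint updates and send them at a fixed rate, so the
> device only ever sees the latest value of each parameter:
>
> import time
>
> def run_update_loop(get_latest_params, send_to_device, rate_hz=50.0):
>     """get_latest_params() returns {parameter_address: value} with the most
>     recent gain/delay targets; send_to_device(addr, value) is whatever
>     transport the device uses. Both are hypothetical placeholders."""
>     last_sent = {}
>     period = 1.0 / rate_hz
>     while True:
>         for addr, value in get_latest_params().items():
>             if last_sent.get(addr) != value:  # send only what changed
>                 send_to_device(addr, value)
>                 last_sent[addr] = value
>         time.sleep(period)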
> >>
> >> Anyway, good to have a fellow DBADP enthusiast.
> >>
> >> Regards,
> >>
> >> Richard.
> >>
> >> —
> >> Richard Foss (PhD)
> >> Software engineer/director
> >>
> >> ImmersiveDSP
> >> 46 Silver Ranch Estate
> >> Keurbooms River Road
> >> Plettenberg Bay 6600
> >> South Africa
> >


-- 
Artist website: www.augustineleudar.com
Business website: www.magikdoor.net