Re: [Sursound] DBADP

2020-11-17 Thread Dave Hunt
Hi Richard,

In the interests of avoiding too much repetition, I’ve edited the previous 
conversations.

By “current products” I meant hardware audio mixers and DSP processors with 
multiple inputs and outputs that are readily available, rather than what can be 
built using things like SHARC DSP chips or cards. This is certainly beyond 
my capabilities, but seems to be what you and others are doing.

Having looked at your website, I’ve been trying to understand your approach to 
this, and how it may agree with or differ from mine and others'. 

A large number of inputs needs to be mixed to a large number of outputs in a 
controlled way. A 32-channel in/out matrix mixer with independent amplitude, 
delay (and possibly other processing) at the crosspoints for each input to each 
output is indeed desirable. The inputs are mixed to the outputs in a way that 
depends on the spatial algorithm (ambisonics, Dolby Atmos, VBAP, DBAP, WFS 
etc.). This requires computer control, as the number of parameter updates is 
large. If an input sound source needs to move spatially, the updates must be 
smoothed with a time-driven ramp or curve so that the sound does not jump 
between positions.
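As a minimal illustration (Python/numpy, invented names, integer-sample delays 
and a fixed linear ramp; a real engine would use fractional delays and block 
processing), a per-crosspoint gain/delay matrix with smoothing might look like 
this:

    import numpy as np

    class CrosspointMatrix:
        # n_in inputs to n_out outputs, with gain and delay at every crosspoint
        def __init__(self, n_in, n_out, sr=48000, max_delay_s=0.5):
            self.buf_len = int(max_delay_s * sr)
            self.bufs = np.zeros((n_in, self.buf_len))   # one ring buffer per input
            self.w = 0                                   # shared write index
            self.gain = np.zeros((n_in, n_out))          # current crosspoint gains
            self.target = np.zeros((n_in, n_out))        # gains being ramped towards
            self.step = 1.0 / (0.05 * sr)                # 50 ms linear ramp
            self.delay = np.zeros((n_in, n_out), int)    # crosspoint delays, samples

        def process(self, x):                            # x: one sample per input
            self.bufs[:, self.w] = x
            # smoothing: move each gain at most one ramp step towards its target
            self.gain += np.clip(self.target - self.gain, -self.step, self.step)
            taps = (self.w - self.delay) % self.buf_len  # (n_in, n_out) read points
            delayed = self.bufs[np.arange(x.size)[:, None], taps]
            self.w = (self.w + 1) % self.buf_len
            return (self.gain * delayed).sum(axis=0)     # mixed (n_out,) frame

Moving a source is then just a matter of writing new targets (and delays) into 
its row of the matrix and letting the ramps do the smoothing.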

All this could be built into a digital mixer, but incorporating a good user 
interface is far from trivial, and it is unlikely to happen: general demand 
is low, and the result would be expensive.

So, we end up with a separate DSP spatial audio engine (SAE for short) that 
sits between a large mixer or DAW sending many channels, and a large number of 
amplifiers and speakers. The connections are best made over a digital audio 
network (AVB, Dante, MADI or other). The mixer and DAW are used relatively 
normally, though mixes are made to several “stems”, which are then 
“spatialised”, rather than to a stereo or 5.1 output. This avoids adding extra 
processing load to the mixer or DAW. The SAE could be software running on 
another, or even the same, computer, though processing load and latency would 
then be problematic; less so when using a DAW (even with video) than with 
real-time events.

I presume that each of your speakers receives the 32 output channels from the 
matrix, and that the channel it uses can be remotely selected. The DSP in the 
speaker is used to modify the speaker's response (EQ, dynamic processing, 
delay, etc.), and I presume this too can be remotely controlled.
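Something like this per-speaker output stage is what I am picturing (a sketch 
only: the class, the control hook and the flat placeholder EQ are invented, 
and scipy is assumed):

    import numpy as np
    from scipy.signal import lfilter

    class SpeakerDSP:
        # pick one of the 32 matrix channels, then apply delay and EQ
        def __init__(self, sr=48000, delay_samples=0):
            self.channel = 0                        # remotely selectable channel
            self.fifo = np.zeros(delay_samples)     # alignment delay line
            self.b = np.array([1.0, 0.0, 0.0])      # placeholder biquad (flat EQ)
            self.a = np.array([1.0, 0.0, 0.0])
            self.zi = np.zeros(2)                   # filter state across blocks

        def set_channel(self, ch):                  # e.g. driven by a network command
            self.channel = ch

        def process(self, block):                   # block: (32, n) from the matrix
            x = block[self.channel]
            self.fifo = np.concatenate([self.fifo, x])
            x, self.fifo = self.fifo[:x.size], self.fifo[x.size:]  # fixed delay
            y, self.zi = lfilter(self.b, self.a, x, zi=self.zi)    # EQ stage
            return y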

The challenge then moves to linking the spatial audio engine to the DAW or to 
controls on the mixer. In the case of a DAW, this can be done with plug-ins 
that send messages (OSC or something similar) to the SAE, or by using the 
timeline to recall (with smoothing) stored states in the SAE. On a mixer it 
would involve reallocating controls for this purpose, or adding extra controls 
(e.g. one or more joysticks).
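For example, with the python-osc package, a plug-in or controller script could 
drive a (purely hypothetical) SAE address space like this:

    from pythonosc.udp_client import SimpleUDPClient

    sae = SimpleUDPClient("192.168.1.50", 9000)  # SAE address and port: examples

    # move source 3 to (x, y, z) in metres; the SAE smooths the ramp
    sae.send_message("/source/3/xyz", [2.0, -1.5, 0.0])

    # recall stored state 12 with a 2-second interpolation time
    sae.send_message("/state/recall", [12, 2.0])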

There seems to be a general consensus on this approach.

I’ve looked again at the available MOTU products, which are excellent audio 
interfaces, but they alone do not seem to provide what is needed for an SAE.

Unfortunately Covid has put a huge damper on progress in this direction, as 
large-scale public events are untenable and the world economy is being 
severely damaged. These are indeed “interesting times”.

I wish you good luck with your products, though at this stage of my life I am 
unlikely ever to be able to use them.

Ciao,

Dave Hunt




Re: [Sursound] DBADP

2020-11-16 Thread Augustine Leudar
Thanks to both David and Richard for posting this. As Richard knows, I use
something akin to DBAP a lot in my installations, and am always keen to see
ways of improving panning between speakers that are far apart in irregular
arrays, etc. I will be reading your papers over the coming weeks and following
your research closely. I'm always up for beta testing software too!!
All the best
Gus


Re: [Sursound] DBADP

2020-11-15 Thread Richard Foss

Thanks for the delta stereophony history, Dave. Interesting!

> Current products do not allow progress to true Delta Stereophony (DBADP)


Well, conceptually it should be possible if, beyond aux mixes, you have a 
further layer of mixes that can comprise aux bus sends (with controllable 
delays/filtering/volumes) as well as input channels. A possible problem is not 
having sufficiently small delay increments, and not having smoothing within the 
device. Anyway, it's worth doing some experimentation! Implementing DBAP or 
VBAP is fine.
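On the increment question, a quick back-of-envelope check (taking the speed of 
sound as 343 m/s) shows what one-sample steps amount to; stepped like that, 
without smoothing, a moving source is audible as clicks or zipper noise:

    sr = 48000             # sample rate, Hz
    c = 343.0              # speed of sound, m/s
    print(1e6 / sr)        # ~20.8 microseconds per one-sample delay step
    print(100 * c / sr)    # ~0.71 cm of equivalent path length per step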

> DSP chips are now capable of providing it


Yes, there is a SHARC DSP in the miniDSP speakers we use, and a controllable 
32x2 matrix with delays/attenuation at the crosspoints.

As you say, running Spat and a DAW is processor-intensive. This was one of the 
reasons we have turned to using the processors in current devices to do the 
post-render mixing/delays. Having this capability in a speaker is great, 
because your processing capability grows with each speaker. Having it in an 
audio interface/mixing desk means that all the inputs (analog, USB, ADAT, …) 
can have spatialisation applied to them.

> On 15 Nov 2020, at 13:56, Dave Hunt  wrote:
> 
> Hi Richard,
> 
> I’ve changed the title of this topic to something more relevant. 
> 
> I still prefer the term Delta Stereophony to describe this. It seems to date 
> back to the mid-1980s, and was described by Gerhard Steinke and Wolfgang 
> Ahnert. They were working in East Germany behind the Iron Curtain, reputedly 
> with Sinclair ZX Spectrum computers and expensive AKG delay lines somehow 
> imported from Austria.
> 
> It does make a great deal of sense. When digital delay lines became more 
> generally available and affordable (in the 1990s?) they were increasingly used 
> in public address systems to improve coverage over a greater area, using 
> speakers down the length of an auditorium to augment the usual left/right or 
> LCR main frontal system. The feed to these was delayed by an amount that 
> caused the time of arrival of sound from them to match that of the main 
> frontal system. Sometimes the feed to a “front fill” system, arrayed along 
> the front of the stage to increase clarity in the rows of seating near the 
> stage, was also delayed to match the time of arrival of sound from its 
> source. Amplitudes were usually adjusted by ear, as indeed were delay times 
> after an initial calculation.
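> 
> The required delay is just the extra path length divided by the speed of 
> sound. For example, for a delay speaker 17 m further down the hall than the 
> main system (17 m and 343 m/s are illustrative values):
> 
>     c = 343.0                       # speed of sound, m/s
>     extra_path = 17.0               # m from main system to delay speaker
>     print(1000 * extra_path / c)    # ~49.6 ms of feed delay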
> 
> These systems were more “appropriately distributed mono” than spatial. It is 
> impossible to get the delay/amplitude combination correct for every position 
> in the space with a finite number of speakers and output channels, so 
> compromises are inevitable. This became common practice, especially for 
> large-scale stadium events. Digital mixing desks now commonly incorporate delays on 
> each output, making this simpler to implement.
> 
> Current products do not allow progress to true Delta Stereophony (DBADP), as 
> the architecture does not provide delay as well as amplitude control on each 
> matrix crosspoint, and the market doesn’t expect or demand it. DSP chips are 
> now capable of providing it, as proved by TiMax, L-ISA, d&b’s Soundscape, 
> Iosono, Astro, and Meyer’s relaunched system. The market is small, and the 
> DSP boxes pricey. It becomes relatively more affordable for large 
> multi-speaker systems with large budgets.
> 
> For the rest of us, it’s down to software. Ircam have a basic implementation 
> of DBAP in Spat~ for Max/MSP (or you can roll your own), and adding the delay 
> component is relatively simple. You can then scale the amplitude and delay 
> separately for each source, as seems appropriate. Using delay alone is 
> surprisingly effective. The variation of amplitude between widely spaced 
> speakers can be excessive.
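> 
> Rolling your own really is only a few lines. A sketch (numpy; 6 dB amplitude 
> rolloff per doubling of distance, with the delay component included):
> 
>     import numpy as np
> 
>     def dbap(src, spk, rolloff_db=6.0, c=343.0):
>         # src: source position; spk: (N, 2 or 3) speaker positions
>         d = np.linalg.norm(spk - src, axis=1) + 1e-6  # guard against d = 0
>         a = rolloff_db / (20.0 * np.log10(2.0))       # rolloff exponent
>         g = d ** (-a / 2.0)
>         g /= np.sqrt(np.sum(g ** 2))                  # unity-power normalisation
>         return g, d / c                               # gains and delays (seconds)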
> 
> Of course you need a fast and powerful computer, and efficient programming to 
> do this, but that is also true with any of the alternative algorithms 
> (ambisonics, VBAP, DBAP, WFS etc.). None of these are perfect for every 
> situation, and it is hard to envisage a combination of them that would work.
> 
> Ciao,
> 
> Dave Hunt
> 
> 
>> On 14 Nov 2020, at 17:00, sursound-requ...@music.vt.edu wrote:
>> 
>> From: Richard Foss 
>> Subject: Re: [Sursound] Was: Recorder for ORTF-3D OUTDOOR SET
>> Date: 14 November 2020 at 16:48:36 GMT
>> To: sursound@music.vt.edu
>> 
>> 
>> Dave, I have been meaning to follow up on your message for some time, because your 
>> ideas match what I am currently busy with - at last getting to it!
>> 
>> Our first immersive audio implementation uses networked PoE miniDSP speakers, 
>> each of which incorporates a matrix mixer with volume and delay control at 
>> the crosspoints. The delays were a later addition, and I certainly found that 
>> the localization was enhanced by incorporating them. We implemented DBAP for 
>> the amplitude panning, but we have also implemented and experimented with 
>> VBAP. Given that our targeted applications will n