Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-25 Thread Politis Archontis
Hi Jorn,

Yes, you’re right. To summarize the process:

- Measure the HRIRs at Q directions around the listener
- Take the FFT of all measurements
- For each frequency bin perform the SHT to the complex HRTFs, up to maximum 
order that Q directions permit (and their arrangement: for equiangular 
measurement grids the order is N<=4*Q^2). You end up with (N+1)^2 coefficients 
per bin per ear
- Take the IFFT for each of the (N+1)^2 coefficients. You end up with 2x(N+1)^2 
FIR filters that can be used to binauralize your HOA recordings directly.
- To binauralize, convolve each HOA signal with the respective SH coefficient 
filter of the HRTF, for each ear, and sum the outputs per ear.
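The steps above can be sketched in numpy. This is a minimal illustration restricted to first order (4 SH channels in ACN order, real SH with N3D-style weights, and a least-squares SHT); all function names and array shapes are my own assumptions, not anything prescribed in the thread:

```python
import numpy as np

def real_sh_order1(azi, zen):
    """Real spherical harmonics up to order 1 (ACN order: W, Y, Z, X)."""
    x = np.sin(zen) * np.cos(azi)
    y = np.sin(zen) * np.sin(azi)
    z = np.cos(zen)
    return np.stack([np.ones_like(azi),
                     np.sqrt(3) * y, np.sqrt(3) * z, np.sqrt(3) * x], axis=-1)

def hrirs_to_sh_filters(hrirs, azi, zen):
    """hrirs: (Q, 2, L) time-domain measurements at Q directions, 2 ears.
    Returns ((N+1)^2, 2, L) FIR filters mapping HOA channels to the ears."""
    H = np.fft.rfft(hrirs, axis=-1)                       # complex HRTFs per bin
    Y = real_sh_order1(azi, zen)                          # (Q, (N+1)^2) SH matrix
    Hnm = np.einsum('cq,qek->cek', np.linalg.pinv(Y), H)  # per-bin SHT
    return np.fft.irfft(Hnm, n=hrirs.shape[-1], axis=-1)  # back to FIR filters

def binauralize(hoa, filters):
    """Convolve each HOA signal with its SH filter and sum per ear."""
    out = np.zeros((2, hoa.shape[-1] + filters.shape[-1] - 1))
    for c in range(hoa.shape[0]):
        for e in range(2):
            out[e] += np.convolve(hoa[c], filters[c, e])
    return out
```

A quick sanity check is to build HRIRs from known SH filters and confirm the pipeline recovers them; since the SHT here is a least-squares fit, Q must be at least the number of SH channels.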

Thinking about it now, it should be very easy to set up an HOA-to-binaural 
conversion like that in REAPER, after getting the filters: use the ambiX 
plugins, set the decoder matrix to an identity matrix, and then convolve the 
outputs with the FIR filters using the multichannel convolver plugin.

If you would like to experiment, I’ll be happy to process some HRTFs and send 
you the filters.

There are various papers on the expansion of HRTFs into SHs; Duraiswami, Dylan 
Menzies, Brungart, Abhayapala, and Evans are some of the names that spring to 
mind. It is also a very convenient way of interpolating HRTFs at any direction, 
so two birds with one stone..

About the inter-aural time delay: the complex HRTFs should include that 
automatically, since they are measured at the ears, “off-centre” for the 
measuring setup. You can also handle it explicitly, however, by the common 
factorization that splits each HRTF into a (directional) inter-aural time 
difference and a minimum-phase filter: expand the minimum-phase HRTFs, and 
then reintroduce the inter-aural time difference afterwards.

About the inter-aural time delay and its frequency dependence, I think it has 
been shown that for most practical purposes replacing it with a 
frequency-independent one does not affect the result much. You can also express 
the ITD in SHs; it is a very convenient representation, since the ITD closely 
approximates a slightly elongated dipole on the interaural axis, and hence only 
the first 2-3 orders are enough to describe it well.
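That low-order claim is easy to test with a toy model: take a Woodworth-style spherical-head ITD, which depends only on the angle from the interaural axis, and fit it with Legendre polynomials up to degree 3 (the axisymmetric SH terms about that axis). The model and the error threshold are my own assumptions for this sketch:

```python
import numpy as np
from numpy.polynomial import legendre

u = np.linspace(-1.0, 1.0, 2001)     # cos(angle from the interaural axis)
itd = u + np.arcsin(u)               # Woodworth-style ITD, in units of a/c

coef = legendre.legfit(u, itd, deg=3)     # keep orders 0..3 only
fit = legendre.legval(u, coef)
rel_rms = (np.sqrt(np.mean((itd - fit) ** 2))
           / np.sqrt(np.mean(itd ** 2)))  # small: low orders suffice
```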

Regards,
Archontis


From: Sursound [sursound-boun...@music.vt.edu] on behalf of Jörn Nettingsmeier 
[netti...@stackingdwarves.net]
Sent: 25 February 2016 22:48
To: sursound@music.vt.edu
Subject: [Sursound] expressing HRTFs in spherical harmonics

On 01/27/2016 01:56 PM, Jörn Nettingsmeier wrote:
> On 01/26/2016 11:05 PM, Politis Archontis wrote:
>> Hi Jorn,
>>
>> yes that is correct. I think however that the virtual loudspeaker
>> stage is unnecessary. It is equivalent if you expand the left and
>> right HRTFs into spherical harmonics and multiply their coefficients
>> (in the frequency domain) directly with the coefficients of the sound
>> scene (which in the 1st-order case is the B-format recording). This
>> is simpler and more elegant I think. Taking the IFFT of each
>> coefficient of the HRTFs, you end up with an FIR filter that maps the
>> respective HOA signal to its binaural output, hence as you said it's
>> always 2*(HOA channels) no matter what. Arbitrary rotations can be
>> done on the HOA signals before the HOA-to-binaural filters, so
>> head-tracking is perfectly possible.
>
> Wow. That sounds intriguing, thanks! I'll try to wrap my head around the
> SH expression of an HRTF set in the coming months, hopefully with the
> help of Rozenn Nicol's book.

Sorry to revive such an old thread, but the AES monograph on binaural
technology has arrived, and I've begun to study it. Definitely a great
resource, recommended:

http://www.aes.org/publications/monographs/

Archontis, I'm still trying to understand how to express a set of HRTFs
as an SH series.
If I understand correctly, all HRTFs for a given ear can be expressed as
a function on the sphere, but that function would be frequency dependent.
So we'd need an extra degree of freedom there; how does that tie in with
Ambisonics? One HRTF "balloon" per frequency bin?
Also, how do you express the inter-aural time delay conveniently (which,
as I've learned from Rozenn Nicol, depends not only on direction, but
also on frequency)?

Are there papers out there that describe this in detail?

Best,


Jörn

___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Fons Adriaensen
On Thu, Feb 25, 2016 at 09:25:48PM +, Politis Archontis wrote:

> - Measure the HRIRs at Q directions around the listener
> - Take the FFT of all measurements
> - For each frequency bin perform the SHT to the complex HRTFs,
>   up to maximum order that Q directions permit (and their arrangement:
>   for equiangular measurement grids the order is N<=4*Q^2).

The ^2 probably should be on N, not Q.

>   You end up with (N+1)^2 coefficients per bin per ear.
> - Take the IFFT for each of the (N+1)^2 coefficients.
>   You end up with 2x(N+1)^2 FIR filters that can be used
>   to binauralize your HOA recordings directly.
> - To binauralize, convolve each HOA signal with the respective
>   SH coefficient filter of the HRTF, for each ear, and sum the
>   outputs per ear.

To me it looks like the FFT/IFFT can be factored out. Both FFT and SHT
are linear transforms, so their order can be swapped. With H (t,q) the
HRIR in direction q:

   IFFT (SHT (FFT (H (t,q)))) = IFFT (FFT (SHT (H (t,q)))) = SHT (H (t,q))
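The commutation in that line is easy to verify numerically: the SHT mixes the direction axis with frequency-independent weights, while the FFT acts along time, so any matrix can stand in for the SHT weights in a quick check:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((10, 16))    # HRIRs: 10 directions, 16 taps
W = rng.standard_normal((4, 10))     # stand-in for the real SHT matrix

# transform per frequency bin, then back to the time domain...
via_freq = np.fft.ifft(W @ np.fft.fft(H, axis=-1), axis=-1).real
# ...equals applying the SHT straight to the HRIRs
direct = W @ H
```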

Now if Q is a set of more or less uniformly distributed directions,
the coefficients of a systematic decoder will be very near to just
the SH evaluated in the directions Q. So summing the convolution
of the decoder outputs with the HRIR is equivalent to the SHT on
the HRIR.

In other words, this method is just the same as the 'decoding to
virtual speakers' one, with Q the set of speaker directions and
using a systematic decoder. 

Ciao,


-- 
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Politis Archontis
Hi Fons,

True, slight mistake! For equiangular grids it should be (N+1)^2 < 4*Q.

You are absolutely correct about the linearity and the exchange of the order of 
the transforms. The virtual loudspeakers approach should indeed be exactly 
equivalent, and that's exactly why I don't understand why one would use it. At 
best, assuming the decoding is done properly, it will just give the same result 
as doing the conversion directly in the SH domain.

The virtual loudspeaker approach essentially takes the HOA signals back to the 
spatial domain via decoding, convolves each plane wave with the respective 
HRTF, and integrates across all directions. Exactly that is done directly in 
the SH domain by convolving the HOA signals with the HRTF SH coefficients and 
summing the results, so I don't see the reason for the extra intermediate 
inverse SHT (decoding) in this case.
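The equivalence can be demonstrated numerically, for one ear and first order, if the "systematic" decoder is taken to be the transpose of the least-squares SHT matrix (that particular pairing is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
Q = 8
azi = rng.uniform(0, 2 * np.pi, Q)
zen = rng.uniform(0.2, np.pi - 0.2, Q)
x = np.sin(zen) * np.cos(azi)
y = np.sin(zen) * np.sin(azi)
z = np.cos(zen)
# first-order real SH matrix at the measurement directions (ACN: W, Y, Z, X)
Y = np.stack([np.ones(Q), np.sqrt(3) * y, np.sqrt(3) * z, np.sqrt(3) * x],
             axis=-1)

hrirs = rng.standard_normal((Q, 16))     # one ear, Q measured directions
hoa = rng.standard_normal((4, 64))       # a first-order scene

# direct method: SH-domain filters, convolve per HOA channel, sum
F = np.linalg.pinv(Y) @ hrirs
direct = sum(np.convolve(hoa[c], F[c]) for c in range(4))

# virtual loudspeakers: decode, convolve each feed with its HRIR, sum
D = np.linalg.pinv(Y).T                  # decoder matched to the SHT
virtual = sum(np.convolve((D @ hoa)[q], hrirs[q]) for q in range(Q))
```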

Regards,
Archontis



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Politis Archontis
And I forgot to mention in the previous message that, while I don't see any 
benefit in the virtual loudspeaker approach, I do see benefits in the direct 
approach. For the virtual loudspeaker decoding you'll need some uniform 
arrangement of decoding directions, most likely with more points than there are 
harmonics, so you'll need more convolutions than the direct approach's minimum 
of (N+1)^2.

I also personally find it useful that storing the HRTFs in an SH format gives 
an efficient, high-resolution interpolation for non-ambisonic binaural panning, 
one that uses all the measurement data to interpolate rather than only the 2-3 
surrounding data points (not relevant, though, to the ambisonic conversion 
question..)

BR,
Archontis




Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Fons Adriaensen
On Sat, Feb 27, 2016 at 11:54:39AM +, Politis Archontis wrote:

> And I forgot to mention in the previous message that, while I don't
> see any benefit in the virtual loudspeaker approach, I see benefits
> in the direct approach. Doing the virtual loudspeaker decoding, you'll
> need some uniform arrangement of decoding directions that will be most
> likely of more points than the harmonics, so you'll need more convolutions
> compared to the minimum of the direct approach (N+1)^2. 

No, this is not true. The decoder and convolution matrix can be combined
into a (N+1)^2 * 2 convolution matrix.
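Fons's folding of the two stages can be sketched as follows: pre-multiplying the HRIR matrix by the decoder yields (N+1)^2 filters per ear, so Q convolutions per ear collapse to (N+1)^2 per ear, with identical output (shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, C, L, T = 20, 4, 16, 64               # Q speakers, C = (N+1)^2 channels
D = rng.standard_normal((Q, C))          # any decoder matrix
hrirs = rng.standard_normal((Q, 2, L))   # HRIRs at the speaker directions
hoa = rng.standard_normal((C, T))

# explicit virtual speakers: Q convolutions per ear
spk = D @ hoa
full = np.stack([sum(np.convolve(spk[q], hrirs[q, e]) for q in range(Q))
                 for e in range(2)])

# folded: combine decoder and HRIRs first, only C convolutions per ear
F = np.einsum('qc,qel->cel', D, hrirs)   # (C, 2, L) convolution matrix
folded = np.stack([sum(np.convolve(hoa[c], F[c, e]) for c in range(C))
                   for e in range(2)])
```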

The only remaining difference is the set of directions. And it is known
that using too many speakers for a given order is suboptimal. 

Ciao,

-- 
FA



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Politis Archontis
> No, this is not true. The decoder and convolution matrix can be combined
> into a (N+1)^2 * 2 convolution matrix.

Ah, true! By summing the terms..

> The only remaining difference is the set of directions. And it is known
> that using too many speakers for a given order is suboptimal.

So what is the benefit then of adding a decoding stage in the middle?

BR,
Archontis


Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Fons Adriaensen
On Sat, Feb 27, 2016 at 12:21:30PM +, Politis Archontis wrote:

> > No, this is not true. The decoder and convolution matrix can be combined
> > into a (N+1)^2 * 2 convolution matrix.
> 
> Ah true! by summing the terms..
> 
> > The only remaining difference is the set of directions. And it is known
> > that using too many speakers for a given order is suboptimal.
> 
> So what is the benefit then of adding a decoding stage in the middle?

The advantage of having an explicit decoder stage is that you 
can tweak the decoder for optimum results. For example it can
be dual-band [1], or have some front preference, etc.

Note that when doing the SHT for the direct method, you need
to apply a correction matrix (depending on the set of directions)
to ortho-normalise the transform. When doing that, the result is
*exactly* equivalent to a systematic decoder for the set of
directions used in the SHT.
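For a least-squares SHT, the correction matrix mentioned here is the inverse Gram matrix of the SH vectors sampled at the measurement directions: applying it to the plain transpose gives exactly the pseudoinverse. A quick check with an arbitrary full-column-rank stand-in for the SH matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
Y = rng.standard_normal((12, 4))     # stand-in SH matrix: 12 dirs, 4 channels

corr = np.linalg.inv(Y.T @ Y)        # correction (inverse Gram) matrix
sht = corr @ Y.T                     # ortho-normalised transform
```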

[1] For the direct method, you could merge the convolution
matrix with shelf filters for the same result.

Ciao,

-- 
FA



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Politis Archontis
>> So what is the benefit then of adding a decoding stage in the middle?

> The advantage of having an explicit decoder stage is that you
> can tweak the decoder for optimum results. For example it can
> be dual-band [1], or have some front preference, etc.

I see. I find that more a matter of preference than a benefit, though, or of 
whatever one finds more intuitive! I tend to think of these operations not at 
the decoder stage but before it. I mean, it is possible to smooth, apply rE 
weighting, warp, or modulate the HOA signals at the stage before decoding or 
binauralization. But I can see how it can be useful to think of it as a 
decoding operation.

One case I can think of where it is really necessary is when one needs to 
auralize binaurally the effect of a certain decoder, e.g. due to a non-uniform 
arrangement of virtual loudspeakers, compared to an ideal case..

BR,
Archontis


Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Fons Adriaensen
On Sat, Feb 27, 2016 at 12:52:20PM +, Politis Archontis wrote:
 
> One case I can think of when it is really necessary is when one needs
> to auralize binaurally the effect of a certain decoder, e.g. due to
> non-uniform arrangement of virtual loudspeakers, compared to an ideal case..

The point is that, since the direct method is equivalent to 
a decoding for the set of directions used to compute the SHT
(which will be the set for which you have HRIR), there is 
nothing 'ideal' or special to it. It is just one specific case
of the decoder + virtual speakers method in disguise, and one
that is very probably using way too many speakers.

Ciao,

-- 
FA



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Politis Archontis
> The point is that, since the direct method is equivalent to
> a decoding for the set of directions used to compute the SHT
> (which will be the set for which you have HRIR), there is
> nothing 'ideal' or special to it. It is just one specific case
> of the decoder + virtual speakers method in disguise, and one
> that is very probably using way too many speakers.

Heh, you see, that's exactly the opposite of how I would put it: I'd say the 
virtual loudspeaker + decoding method corresponds to the direct method "in 
disguise".

The order of the HRTFs is limited by their spatial/angular variability, and 
that would be estimated correctly by orthonormalization over the measurement 
directions at the SHT stage (as you mentioned). It has nothing to do with too 
many speakers. The limiting happens inherently through the number of HOA 
channels: even if the SH coefficients of the HRTFs are of higher order, only 
the ones corresponding to the HOA components would be used. That would be 
equivalent to decoding with the minimum number of speakers for the given HOA 
order and then doing the binaural conversion.

I will try to show with some plots that there is no error in the direct 
approach compared to a decoding stage. But it has to wait for a bit...

BR,
Archontis



Re: [Sursound] expressing HRTFs in spherical harmonics

2016-02-27 Thread Fons Adriaensen
On Sat, Feb 27, 2016 at 01:19:40PM +, Politis Archontis wrote:
> > The point is that, since the direct method is equivalent to
> > a decoding for the set of directions used to compute the SHT
> > (which will be the set for which you have HRIR), there is
> > nothing 'ideal' or special to it. It is just one specific case
> > of the decoder + virtual speakers method in disguise, and one
> > that is very probably using way too many speakers.
> 
> The order of the HRTFs is limited by their spatial/angular variability,
> and that would be estimated correctly by orthonormalization of the
> measurement directions on the SHT stage (as you mentioned). It has
> nothing to do with too many speakers etc.. 

What I mean by too many speakers is that it is known that using too many
speakers for any given order (for normal HOA reproduction via speakers)
doesn't give the best results.

The decoder + virtual speaker method means you are placing the
listener in the sound field as it would be reproduced by the decoder
and real speakers. So the direct method, if based on a large set
of HRIRs, can be expected to have the same problems as HOA reproduction
via too many speakers.

> I will try to show with some plots that there is no error in the
> direct approach compared to a decoding stage. But it has to wait
> for a bit...

Not necessary, I'm not claiming there are any 'errors'!

All I am saying is that there is no reason to assume that using the
particular set of directions for which you happen to have HRIRs
as speaker positions (which is what the direct method amounts to)
is any better or more 'ideal' than using a classic set as would
be used for real speaker reproduction.

Ciao,

-- 
FA
