Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-04-13 Thread Spencer Russell



On Mon, Apr 13, 2020, at 1:36 PM, Spencer Russell wrote:
> 
> Andreas - is this the general approach you use for Gaborator?
> 

Whoops, just clicked through to the documentation and it looks like this is the 
track you're on as well. I'm curious whether you have any insight into the 
window selection for the analysis and synthesis process. It seems like the NSGT 
framework forces you to be a bit smarter with windows than just sticking to 
COLA, but the dual-frame techniques should apply to regular STFT processing, 
right?
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Sliding Phase Vocoder (was FIR blog post & interactive demo)

2020-04-13 Thread Spencer Russell
On Fri, Mar 20, 2020, at 4:58 PM, Andreas Gustafsson wrote:
> robert bristow-johnson wrote:
> > but i would be excited to see a good
> > implementation of constant Q filterbank that is very close to
> > perfect reconstruction if the modification in the frequency domain
> > is null. 
> 
> Isn't this pretty much what my Gaborator library (gaborator.com) does?
> It performs constant Q analysis using Gaussian windows, and resynthesis
> that reconstructs the original signal to within about -115 dB using
> single precision floats.

A while ago I read through some of the literature [1] on implementing an 
invertible CQT as a special case of the Nonstationary Gabor Transform. It's 
implemented by the essentia library [2], probably among other places.

The main idea is that you take the FFT of your whole signal, then apply the 
filter bank in the frequency domain (just multiplication). Then you IFFT each 
filtered signal, which gives you the time-domain samples for each band of the 
filter bank. Each frequency-domain filter has a different bandwidth, so the 
IFFT length differs per band, which gives each band its own sample rate. They 
also give an "online" version where you do the processing in chunks, but for 
this to work I think you'd need large-ish chunks, so the latency would be 
pretty bad.
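
For concreteness, here's a toy numpy sketch of that forward transform - my own 
illustration with made-up band placement, not the paper's (or anyone's) actual 
implementation, and it glosses over negative frequencies and the frame 
condition you'd need for invertibility:

```python
import numpy as np

def nsgt_analyze(x, bands):
    """Toy NSGT-style analysis. `bands` is a list of (start_bin, w)
    pairs: w is one band's frequency-domain window, placed starting at
    bin `start_bin` of the full-signal FFT. Each band's IFFT length
    equals its window length, so each band comes back at its own
    (lower) sample rate."""
    X = np.fft.fft(x)
    coeffs = []
    for start, w in bands:
        Xk = X[start:start + len(w)] * w  # filter bank = multiplication
        coeffs.append(np.fft.ifft(Xk))    # short IFFT -> decimated band
    return coeffs
```

Getting the original signal back is then a matter of undoing each step with 
dual windows, which is where the frame machinery in [1] comes in.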

The whole process is in some ways dual to the usual STFT process, where we 
first window and then FFT. In the NSGT you first FFT and then window, and then 
IFFT each band to get a time-frequency representation.

For resynthesis you end up with a window-overlap constraint similar to the 
STFT's, except now the windows are in the frequency domain. It's a little 
trickier because the window centers aren't evenly spaced, so constructing COLA 
windows takes more care. There are also fancier approaches that design a set of 
synthesis windows complementary (dual) to the analysis windows, which is what 
the frame-theory folks like that Austrian group seem to favor.

One of the nice things about the NSGT is that it lets you be really flexible in 
your filterbank design while still giving you invertibility.

Andreas - is this the general approach you use for Gaborator?


[1]: Balazs, P., Dörfler, M., Jaillet, F., Holighaus, N., & Velasco, G. (2011). 
Theory, implementation and applications of nonstationary Gabor frames. Journal 
of Computational and Applied Mathematics, 236(6), 1481–1496.
[2]: https://mtg.github.io/essentia-labs/news/2019/02/07/invertible-constant-q/
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-10 Thread Spencer Russell
Thanks for your expanded notes, RBJ. I haven't found anything that I disagree 
with or that contradicts what I was saying earlier - I'm not sure if they were 
intended as expanded context or if there was something you were disagreeing 
with.

On March 8, 2020 7:55 PM Ethan Duni  wrote:
> 
> Fast FIR is a different thing than an FFT filter bank.
> 
> You can combine the two approaches but I don’t think that’s what is being 
> done here?

The point I'm making here is that overlap-add fast FIR is a special case of 
STFT-domain multiplication and resynthesis. I'm defining the standard STFT 
pipeline here as:

1. slice your signal into frames
2. pointwise-multiply an analysis window by each frame
3. perform `rfft` on each frame to get the STFT-domain representation
4. modify the STFT representation
5. perform `irfft` on each frame
6. pointwise-multiply a synthesis window by each frame
7. overlap-add the frames to get the resulting time-domain signal
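
In numpy that pipeline looks something like this - a minimal sketch with names 
of my own invention, a rectangular synthesis window, and trailing partial 
frames simply dropped:

```python
import numpy as np

def stft_process(x, window, hop, nfft, modify):
    """Steps 1-7 above. `modify` is a per-frame callback implementing
    step 4; it gets the frame index so time-variant filtering fits."""
    F = len(window)
    nframes = 1 + (len(x) - F) // hop
    y = np.zeros(len(x) + nfft)                # room for the last tail
    for m in range(nframes):
        frame = x[m * hop : m * hop + F]       # 1. slice
        frame = frame * window                 # 2. analysis window
        spec = np.fft.rfft(frame, n=nfft)      # 3. rfft (pads to nfft)
        spec = modify(spec, m)                 # 4. modify
        out = np.fft.irfft(spec, n=nfft)       # 5. irfft
        # 6. synthesis window is all-ones here, so nothing to multiply
        y[m * hop : m * hop + nfft] += out     # 7. overlap-add
    return y[:len(x)]
```

With `window = np.ones(H)`, `hop = H`, `nfft = H + L - 1`, and `modify` 
multiplying each frame by the `rfft` of a length-L filter, this reduces to 
exactly the fast-FIR case discussed below.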

See below for more.

On Mon, Mar 9, 2020, at 5:44 PM, robert bristow-johnson wrote:
> 
> > On March 9, 2020 10:15 AM Spencer Russell  wrote:
> > 
> > 
> > I think we're mostly on the same page, Ethan.
> 
> well, i think that i am on the same page as Ethan.
> 
> > Though even with STFT-domain time-variant filtering (such as with noise 
> > reduction, or mask-based source separation) it would seem you could still 
> > zero-pad each input frame to eliminate any issues due to time-aliasing.
> 
> zero-padding is the sole technique that gets rid of time-aliasing.

Right - we're together here.

> let's say your FIR is of length L.  let's say that your frame hop is H 
> and frame length is F ≥ H and we're doing overlap-add.  then your F 
> samples of input (H samples are *new* samples in the current frame, F-H 
> samples are remaining from the previous frame) are considered 
> zero-padded out to infinity in both directions.  then the length of the 
> result of linear convolution is L+F-1.  now if you can guarantee that 
> the size of the DFT, which we'll call "N" (and most of the time is a 
> power of 2) is at least as large as the non-zero length of the linear 
> convolution, then the result of circular convolution of the zero-padded 
> FIR and the zero-padded frame of samples will be exactly the same.  
> that means
> 
>N ≥ L + F - 1

You are completely correct, and as far as I can tell we're in agreement here 
(again please correct me if this was meant to be a rebuttal). Specifically I'm 
talking about the case where F=H. You then perform a standard STFT with these 
parameters (Hop size H, rectangular window of size F = H, FFT length H+L-1), 
multiply each frame by the (r)FFT of your filter, then do the standard ISTFT 
with overlap-add.

Your STFT will have a height of `N/2+1` bins (integer division). You do the standard 
ISTFT with overlap-add, and the same hop size H. The frame size is now the full 
N. You use a "synthesis window" that's the full length N (in practice just 
taking each chunk with no windowing). Within the ISTFT process you took the 
`irfft` of each frame, which is now nonzero for some length longer than H, but 
not more than N (so there's no time aliasing).

That should be exactly the same thing as fast FIR convolution with a chunk size 
of F, but in the framework of STFT->multiply->ISTFT. The only thing that's not 
standard STFT processing is the zero-padding (to remove aliasing due to 
circular convolution, a now much-belabored point).

This is just to make the point that fast FIR is a special case of STFT 
processing. From a compute perspective this should be no less efficient than 
fast FIR (I mean, it's doing the same thing). If you do the whole STFT off-line 
then you wasted some memory materializing the whole STFT, but you could 
consider a streaming version, and at that point the implementation would look 
very similar to what you'd code up for fast FIR.
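
If it helps, here's a quick numerical check of that claim (arbitrary sizes of 
my own choosing; `scipy.signal.fftconvolve` is just a trusted reference for the 
linear convolution):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
H = 64                        # hop = rectangular frame length, F = H
L = 17                        # FIR length
N = H + L - 1                 # FFT size, satisfying N >= L + F - 1
h = rng.standard_normal(L)
x = rng.standard_normal(H * 50)   # length a multiple of H, for brevity

Hf = np.fft.rfft(h, n=N)      # filter spectrum at the padded size
y = np.zeros(len(x) + N)
for m in range(len(x) // H):
    frame = x[m*H : (m+1)*H]                     # rectangular window
    spec = np.fft.rfft(frame, n=N) * Hf          # multiply in STFT domain
    y[m*H : m*H + N] += np.fft.irfft(spec, n=N)  # ISTFT overlap-add
y = y[:len(x)]

print(np.max(np.abs(y - fftconvolve(x, h)[:len(x)])))  # ~1e-13
```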

Are we all together here?

= Time-Variant Filtering =

So this seems like it's the really interesting part, and usually why people 
work in the STFT domain in the first place. As RBJ mentioned, padding (ensuring 
N >= L+F-1) completely resolves time-aliasing, and that's true whether the 
filter is stationary or time-varying.

> if it is a rectangular window, the frame length and frame hop are the 
> same, F=H, and the number of generated output samples that are valid is 
> H, and the most you can hope to get is:
> 
> H = F = N - L + 1

Right, this is the Fast FIR situation I described above.

> 
> if you cut your frame hop size, H, from F to nearly half (F+1)/2 (and 
> use a complementary window such as Hann), it is half as efficient, but 
> the crossfade is even smoother (and the frame rate is faster, so the 
> filter definition can change more often).
> 
> all of this is well-established knowledge regarding

Re: [music-dsp] FIR blog post & interactive demo

2020-03-09 Thread Spencer Russell
I think we're mostly on the same page, Ethan. Though even with STFT-domain 
time-variant filtering (such as with noise reduction, or mask-based source 
separation) it would seem you could still zero-pad each input frame to 
eliminate any issues due to time-aliasing. As you mention (paraphrasing), you 
can smooth out the mask, which will reduce the amount of zero-padding you need, 
but if you have a KxN STFT (K frequency components and N frames) then 
zero-padding each frame by K-1 should still eliminate any time-aliasing even if 
your filter has hard edges in the frequency domain, right?

I understand the role of time-domain windowing in STFT processing to be mostly:
1. Reduce frequency-domain ripple (side-lobes in each band)
2. Provide a sort of cross-fade from frame-to-frame to smooth out framing 
effects

In my mind doing STFT-domain masking/filtering is _roughly_ equivalent to a 
filter bank with time-varying response. In the STFT case though you're keeping 
things invariant within each frame and then cross-fading from frame to frame. 
This is a pretty intuitive/ad-hoc way of thinking on my part though - I'd love 
to see some literature that gives a more formal treatment.

-s

On Mon, Mar 9, 2020, at 12:52 AM, Ethan Duni wrote:
> 
> 
> On Sun, Mar 8, 2020 at 8:02 PM Spencer Russell  wrote:
>> In fact, the standard STFT analysis/synthesis pipeline is the same thing 
>> as overlap-add "fast convolution" if you:
>> 
>> 1. use a rectangular window with a length equal to your hop size
>> 2. zero-pad each input frame by the length of your FIR kernel minus 1
> 
> Indeed, the two ideas are closely related and can be combined. It's more a 
> difference in the larger approach. 
> 
> If you can specify the desired response in terms of an FIR of some fixed 
> length, then you can account for the circular effects and use fast FIR. Note 
> that this is a time-variant MIMO system constructed to be equivalent to a 
> time-invariant SISO system (modulo finite word length effects, as you say). 
> 
> Alternatively, the desired response can be specified in the STFT domain. This 
> comes up naturally in situations where it is estimated in the frequency 
> domain to begin with, such as noise suppression or channel equalization. 
> Then, circular convolution effects are controlled through a combination of 
> pre/post windowing and smoothing/conditioning of the frequency response. 
> Unlike the fast FIR case, the time-variant effects are only approximately 
> suppressed: this is a time-variant MIMO system that is *not* equivalent to 
> any time-invariant SISO system. 
> 
> So there is an extra layer of engineering needed in STFT systems to ensure 
> that time domain aliasing is adequately suppressed. With fast FIR, you just 
> calculate the correct size to zero-pad (or delete), and then there is no 
> aliasing to worry about. 
> 
> Ethan
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-08 Thread Spencer Russell
On Sun, Mar 8, 2020, at 7:41 PM, Ethan Duni wrote:
> FFT filterbanks are time variant due to framing effects and the circular 
> convolution property. They exhibit “perfect reconstruction” if you design the 
> windows correctly, but this only applies if the FFT coefficients are not 
> altered between analysis and synthesis. If you alter the FFT coefficients 
> (i.e., “filtering”), it causes time domain aliasing. 

But you can avoid this by zero-padding before you do the FFT. In effect this 
turns circular convolution into linear convolution - the "tail" ends up in the 
zero-padding rather than wrapping around and causing time-aliasing. This is what 
overlap-add FFT convolution does.

In fact, the standard STFT analysis/synthesis pipeline is the same thing as 
overlap-add "fast convolution" if you:

1. use a rectangular window with a length equal to your hop size
2. zero-pad each input frame by the length of your FIR kernel minus 1

Then the regular overlap-add STFT resynthesis is the same as "fast 
convolution", and will give you the same thing (to numerical precision) you 
would get with a time-domain FIR implementation.
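
To make that concrete, here's a tiny numpy illustration (toy frame and kernel 
of my own choosing) of where the tail goes:

```python
import numpy as np

x = np.ones(8)                    # one input frame
h = np.array([1.0, 0.5, 0.25])    # FIR kernel, length 3

# No padding: length-8 circular convolution; the 2-sample tail wraps
# around and corrupts the start of the frame.
circ = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h, n=8))

# Pad to N = 8 + 3 - 1: circular convolution now equals linear.
N = len(x) + len(h) - 1
lin = np.fft.irfft(np.fft.rfft(x, n=N) * np.fft.rfft(h, n=N), n=N)

print(circ[:2])                   # [1.75 1.75] - wrapped tail added in
print(lin[:2], lin[-2:])          # clean onset; tail sits in the padding
print(np.allclose(lin, np.convolve(x, h)))  # True
```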

>> On Mar 8, 2020, at 2:04 PM, zhiguang zhang  wrote:
>> but bringing up traditional FIR/IIR filtering terminology to describe FFT 
>> filtering doesn't make sense in my mind. I'm not in the audio field. but 
>> yes, I do believe that the system is time invariant, but I don't have time 
>> to prove myself to you on this forum at this time, nor do I have any 
>> interest in meeting Dr Bosi at AES.

I don't really understand this perspective - there's a tremendous amount of 
conceptual overlap between these ideas, and regimes where they are completely 
equivalent (e.g. implementing a time-invariant FIR filter in the frequency 
domain using block-by-block "fast convolution"). Certainly when you're doing 
time-variant filtering things are somewhat different (e.g. multiplying in the 
STFT domain with changing coefficients is doing some kind of frame-by-frame 
cross-fading, which will not give the same result as varying the parameters of 
an FIR filter on a sample-by-sample basis, in a pure time-domain 
implementation). That said, using the same terminology where we can helps 
highlight the places where these concepts are related.

-s
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-07 Thread Spencer Russell
On Sat, Mar 7, 2020, at 6:00 AM, Zhiguang Eric Zhang wrote:
> Traditional FIR/IIR filtering is ubiquitous but actually does suffer from 
> drawbacks such as phase distortion and the inherent delay involved. FFT 
> filtering is essentially zero-phase, but instead of delays due to samples, 
> you get delays due to FFT computational complexity instead.

I wouldn’t say the delay when using FFT processing is fundamentally due to 
computational complexity. Compute affects your max throughput more than your 
latency. In other words, even if you had an infinitely fast computer you would 
still have to deal with latency. The issue is just that you need at least 1 
block of input before you can do anything. It’s the same as with FIR filters: 
they need to be causal, so they can’t be zero-phase. In fact you could 
interchange the FFT processing with a bank of FIR band-pass filters that you 
sample from whenever you want to get your DFT frame. (That’s basically just a 
restatement of what I said before about the STFT.)
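
To put made-up but typical numbers on that:

```python
fs = 48000                 # sample rate, Hz
block = 1024               # FFT block length, samples
print(1000 * block / fs)   # ~21.3 ms: the first block simply doesn't
                           # exist any sooner, however fast the FFT runs
```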

-s
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] FIR blog post & interactive demo

2020-03-04 Thread Spencer Russell
On Tue, Mar 3, 2020, at 4:21 PM, robert bristow-johnson wrote:
> 
> Like a lotta things, sometimes people use the same term to mean something 
> different. A "phase vocoder" (an STFT thing a la Portnoff) is not the same as 
> a "channel vocoder" (which is a filter bank thing).

It’s maybe worth noting that the STFT _is_ a filter bank, where each channel is 
sampled at the hop size and each band-pass filter’s frequency response is just 
the Fourier transform of the windowing function (shifted to the bin’s center 
frequency). There’s some phase twiddling you can do to convert between the 
modulated and demodulated filter-bank forms.
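
A quick numpy check of that equivalence for a single bin (sizes are arbitrary 
and my own; this uses the per-frame phase convention, and the twiddle noted at 
the end converts to the absolute-time one):

```python
import numpy as np

rng = np.random.default_rng(1)
F, H, k, m = 32, 8, 5, 7           # frame length, hop, bin, frame index
x = rng.standard_normal(400)
w = np.hanning(F)

# STFT view: window the m-th frame, take DFT bin k.
stft_bin = np.fft.fft(x[m*H : m*H + F] * w)[k]

# Filterbank view: a band-pass filter whose impulse response is the
# time-reversed window modulated to bin k's center frequency, with its
# output sampled once per hop.
g = w * np.exp(-2j * np.pi * k * np.arange(F) / F)
fb = np.convolve(x, g[::-1])
print(np.allclose(stft_bin, fb[m*H + F - 1]))    # True
# (Multiplying by exp(-2j*pi*k*m*H/F) would convert this frame-referenced
# phase to the absolute-time convention - that's the "twiddling".)
```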

-s
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] phase vocoder

2018-10-15 Thread Spencer Russell
Alex,

A number of experienced DSP engineers have spent considerable time
trying to help you understand the problem you're describing, yet it
doesn't seem like you've made much progress. Your questions often seem
to end up with asking folks to basically write your code for you. I
don't want to be unkind here, but at some point it just feels like
you're trolling the list.

I think the most effective next steps for you are:

1. if you are working on a commercial product, hire a DSP consultant
   to write this for you
2. if you are working on a personal project, post the code you have
   so far on GitHub (or similar) and point to the specific parts
   you're having trouble with

I'm not sure if there are still folks on the list willing to spend
time moving you forward, but that definitely seems like the most
productive and concrete way to try to ask for help.

-s


On Mon, Oct 15, 2018, at 12:31 PM, Alex Dashevski wrote:
> I need the code for pitch shifting in real time. Could you help?
> 
> On Mon, Oct 15, 2018 at 15:15, Eder Souza <ederwan...@gmail.com> wrote:
>> Some time ago I posted some code in matlab. It is not real-time, but
>> you can see the basic math of a standard phase vocoder:
>> 
>> https://dsp.stackexchange.com/questions/40101/audio-time-stretching-without-pitch-shifting/40367#40367
>> 
>> This code just changes the time; you can pitch shift using a
>> combination of the code above and a resampler...
>> 
>> The core code is the same for real-time or off-line. The only thing
>> you need to worry about is how to deal with the input and output
>> buffers (you need to ensure output buffer continuity - learn about
>> Circular Buffers).
>> 
>> PS: Prepare for latency lol
>> 
>> Eder de Souza
>> ♪♫♫♪
>> ▇ ▅ █ ▅ ▇ ▂ ▃ ▁ ▁ ▅ ▃ ▅ ▅ ▄ ▅ ▇
>> Sent From The Moon and Written With My Thumbs !
>> 
>> On Sun, Oct 14, 2018 at 2:11 AM, Alex Dashevski wrote:
>>> Hi,
>>> Which library?
>>> Thanks,
>>> Alex
>>> 
>>> On Sat, Oct 13, 2018 at 22:42, he can jog wrote:
>>>> The library is easy to work with. Having enough of a working
>>>> knowledge of phase vocoders to make use of an existing
>>>> implementation and implementing one from scratch are two different
>>>> levels of complexity.
>>>> 
>>>> On Sat, Oct 13, 2018 at 2:31 PM, Daniel Varela wrote:
>>>>> Complex stuff has no easy fix.
>>>>> 
>>>>> On Sat, Oct 13, 2018 at 20:43, he can jog wrote:
>>>>>> Paul Batchelor has a great port of the csound 'mincer' phase
>>>>>> vocoder in his SoundPipe library:
>>>>>> https://github.com/PaulBatchelor/Soundpipe/blob/master/modules/mincer.c
>>>>>> 
>>>>>> That's definitely beyond my understanding to re-implement, but
>>>>>> his library is designed to be embedded and has a really nice
>>>>>> API; I've found it easy to work with in my own projects.
>>>>>> 
>>>>>> On Sat, Oct 13, 2018 at 1:31 PM, Alex Dashevski wrote:
>>>>>>> Hi,
>>>>>>> 
>>>>>>> Where can I find a simple explanation and code example
>>>>>>> (supporting real time and multi-threading)? I found
>>>>>>> https://breakfastquay.com/rubberband/index.html but it is very
>>>>>>> difficult to understand how it works. I need to integrate this
>>>>>>> code into Android so that it runs on an audio buffer.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Alex

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Finding discontinuity in a sine wave.

2018-01-10 Thread Spencer Russell
I think the PLL approach will be much more robust, and will let you
detect phase changes.
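
For what it's worth, here's a toy numpy sketch of that idea - the loop gains 
are placeholders I picked for illustration, not a tuned design:

```python
import numpy as np

def pll_phase_error(x, f0, fs, alpha=0.01):
    """Toy second-order PLL tracking a sine of roughly known frequency
    f0. Returns the phase-detector error: spikes flag phase jumps,
    sustained offsets flag frequency drift. (Amplitude drops would
    need a separate envelope check.)"""
    beta = alpha * alpha / 4.0     # integral gain of the PI loop filter
    freq = 2.0 * np.pi * f0 / fs   # NCO increment, rad/sample
    phase, lpf = 0.0, 0.0
    err = np.empty(len(x))
    for n, s in enumerate(x):
        pd = s * np.cos(phase)     # mix: ~0.5*sin(theta-phase) + 2*f0 term
        lpf += 0.02 * (pd - lpf)   # one-pole LPF knocks down the 2*f0 term
        err[n] = lpf
        freq += beta * lpf         # PI loop filter steers the NCO
        phase += freq + alpha * lpf
    return err

# e.g. a 1 kHz tone at 48 kHz with a phase jump halfway through:
fs, f0 = 48000, 1000.0
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t)
x[fs // 2:] = np.sin(2 * np.pi * f0 * t[fs // 2:] + np.pi / 3)
e = pll_phase_error(x, f0, fs)
# |e| spikes just after n = fs//2, then decays as the loop re-locks.
```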
-s


On Wed, Jan 10, 2018, at 11:51 AM, Benny Alexandar wrote:
> Here is what I was planning. The sine wave frequency is known.
> 
> For example, if the sine wave has a frequency of 1 kHz and the sampling
> rate is 48 kHz, then every 48 samples make one full cycle. Find the norm
> of these 48 samples. It should remain constant, so any fading, mute,
> etc. will be detected by comparing against this threshold value. But if
> there is a phase discontinuity it will be hard to detect.
> 
> -ben
> 
> On Wed, Jan 10, 2018 at 10:04 PM, Spencer Jackson wrote:
>> If the sine frequency is known, perhaps you could use a Goertzel
>> filter and compare an average signal power calculation to measure the
>> power of the error signal.
>> 
>> That doesn't identify the nature of the error, but strikes me as an
>> interesting approach.
>> _spencer
>> 
>> On Wed, Jan 10, 2018 at 9:23 AM, Eric Brombaugh wrote:
>>> Maybe try locking a PLL to the sine wave to get the expected
>>> frequency and phase, then look for differences between them?
>>> 
>>> Eric
>>> 
>>> On 01/10/2018 09:08 AM, Benny Alexandar wrote:
>>>> Hi,
>>>> 
>>>> I want to do some time domain analysis on a sine wave signal
>>>> which is continuously streaming. My objective is to detect any
>>>> discontinuities such as audio gaps, fading, phase discontinuity,
>>>> etc.
>>>> 
>>>> Any algorithms available in the time domain other than an
>>>> FFT-based approach?
>>>> 
>>>> -ben

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Seminar: Listening and Learning Systems for Composition and Live Performance (by Nick Collins)

2012-05-30 Thread Spencer Russell
Looks like an extremely interesting seminar. Unfortunately I'm
US-based and won't be able to make it (though 90 EUR would be a
bargain!).

Will any slides/materials/videos be posted online for the general public?

Thanks,
Spencer

On Wed, May 30, 2012 at 9:54 PM, Charlie Morrow c...@cmorrow.com wrote:
 What a wonderful seminar. Nick Collins is an inspiring artist and a gifted 
 teacher. His tech literacy is contagious. And Barcelona is amazing.
 Charlie Morrow
 Sent via BlackBerry from T-Mobile

 -Original Message-
 From: Sam Roig s...@lullcec.org
 Sender: music-dsp-bounces@music.columbia.edu
 Date: Thu, 31 May 2012 03:39:11
 To: music-dsp@music.columbia.edu
 Reply-To: A discussion list for music-related DSP
        music-dsp@music.columbia.edu
 Subject: [music-dsp] Seminar: Listening and Learning Systems for Composition
  and Live Performance (by Nick Collins)

 8/9/10.06.2012

 LISTENING AND LEARNING SYSTEMS FOR COMPOSITION AND LIVE PERFORMANCE
 (a 3-day seminar by Nick Collins)

 This seminar will explore practical machine listening and machine
 learning methods within the SuperCollider environment, alongside
 associated technical and musical issues. Applications for such
 techniques range from autonomous concert systems, through novel musical
 controllers, to sound analysis projects outside of realtime informing
 musical composition. We will investigate built-in and third party UGens
 and classes for listening and learning, including the SCMIR library for
 music information retrieval in SuperCollider.

 Level: intermediate

 Tutor: Nick Collins [ http://www.sussex.ac.uk/Users/nc81/ ]

 Nick Collins is a composer, performer and researcher in the field of
 computer music. He lectures at the University of Sussex, running the
 music informatics degree programmes and research group. Research
 interests include machine listening, interactive and generative music,
 and audiovisual performance. He co-edited the Cambridge Companion to
 Electronic Music (Cambridge University Press 2007) and The SuperCollider
 Book (MIT Press, 2011) and wrote the Introduction to Computer Music
 (Wiley 2009). iPhone apps include RISCy, TOPLAPapp, Concat, BBCut and
 PhotoNoise for iPad.

 Dates:
 Friday 08.06.2012, 18:00-22.00h.
 Saturday 09.06.2012, 11:00–14:00h, 16:00-19:00h
 Sunday 10.06.2012, 11:00–14:00h, 16:00-19:00h

 Location: Fabra i Coats – Fàbrica de Creació. Sant Adrià, 20. Barcelona.
 Metro Sant Andreu.

 Price: 90€

 To sign up please send an email to i...@lullcec.org.

 +info: [
 http://lullcec.org/en/2012/workshops/sistemes-daudicio-i-aprenentatge-artificial-per-a-la-composicio-i-la-interpretacio-en-viu/
 ]

 This activity is organized by l'ull cec with the collaboration of
 Consell Nacional de la Cultura i les Arts, Institut de Cultura de
 Barcelona and Fabra Coats – Fábrica de Creació.

 

 web: [ http://lullcec.org ]
 facebook: [ http://facebook.com/lullcec ]
 twitter: [ http://twitter.com/lullcec ]
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp