[music-dsp] R: R: Anyone using unums?

2016-04-15 Thread Marco Lo Monaco
Ethan, I haven't read the 2016 slides yet, but I was referring to the funnier
one here:

http://www.slideshare.net/insideHPC/unum-computing-an-energy-efficient-and-massively-parallel-approach-to-valid-numerics

 

M.

 

From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Ethan Fenn
Sent: Friday, 15 April 2016 15:03
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] R: Anyone using unums?

 

I really don't think there's a serious idea here. Pure snake oil and conspiracy 
theory.

 

Notice how he never really pins down one precise encoding of unums... doing so 
would make it too easy to poke holes in the idea.

 

For example, this idea of SORNs is presented, wherein one bit represents the 
presence or absence of a particular value or interval. Which is fine if you're 
dealing with 8 possible values. But if you want a number system that represents 
256 different values -- seems like a reasonable requirement to me! -- you need 
2^256 bits to represent a general SORN. Whoops! But of course he bounces on to 
a different topic before the obvious problem comes up.

 

-Ethan

 

 

 

On Fri, Apr 15, 2016 at 4:38 AM, Marco Lo Monaco wrote:

I read his slides. Great ideas, but the best part is when he challenges Dr.
Kahan with the Star Trek teasing/kidding. That made my day.

Thanks for sharing, Alan

 

 

 

Sent from my Samsung device

 

 Original message 
From: Alan Wolfe
Date: 14/04/2016 23:30 (GMT+01:00)
To: A discussion list for music-related DSP
Subject: [music-dsp] Anyone using unums?

Apologies if this is a double post.  I believe my last email was in
HTML format so was likely rejected.  I checked the list archives but
they seem to have stopped updating as of last year, so posting again
in plain text mode!

I came across unums a couple weeks back; they seem to be a plausible
replacement for floating point (with pros and cons versus floating point).

One interesting thing is that addition, subtraction, multiplication and
division are all single-flop operations and are on "equal footing".

To get a glimpse: to do a division, you do a one's-complement-style
operation (flip all bits except the leading 1, then add 1) and you now have
the reciprocal, which you can multiply by.

Another interesting thing is that the accuracy concerns are different. You
basically always know that you are either on an exact answer, or strictly
between two exact answers. Depending on how you set it up, you could have
the exact answers be integer multiples of some fraction of pi, or whatever
else you want.

Interesting stuff, so I was curious if anyone here on the list has
heard of them, has used them for DSP, etc.?

Fast division and the lack of denormals seem pretty attractive.

http://www.johngustafson.net/presentations/Multicore2016-JLG.pdf
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp





 


[music-dsp] R: Anyone using unums?

2016-04-15 Thread Marco Lo Monaco


I read his slides. Great ideas, but the best part is when he challenges Dr.
Kahan with the Star Trek teasing/kidding. That made my day.
Thanks for sharing, Alan



[music-dsp] R: Re: NAMM Meetup?

2016-01-23 Thread Marco Lo Monaco


I am at room 210b for the whole morning. Will get back to you guys in the
afternoon.
M.


Sent from my Samsung device

 Original message 
From: Stefan Stenzel
Date: 22/01/2016 21:30 (GMT-08:00)
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] NAMM Meetup?

My booth #6009 is about 5 metres away from #6100, way too much for walking 
unfortunately.

I’ll be there most of the afternoon and happy to meet all of you there.

> On 22 Jan 2016, at 20:54 , Christian Luther  wrote:
> 
> Sorry I didn't get back to this thread earlier. I didn't anticipate how 
> intense these days are. ;)
> 
> I'm at the kemper booth #6100 most of the time. After 5pm is always a good 
> time to drop by.
> 
> Guess I won't be able to make it to the juce meetup unfortunately.
> 
> Cheers
> Christian
> 
> On 19.01.2016 at 00:41, Marco Lo Monaco wrote:
> 
>> I will be there by Friday. Where do we gather exactly?
>> 
>> Marco
>> 
>> 
>> 
>> Sent from my Samsung device
>> 
>> 
>>  Original message 
>> From: Nigel Redmon
>> Date: 19/01/2016 07:03 (GMT+01:00)
>> To: A discussion list for music-related DSP
>> Subject: Re: [music-dsp] NAMM Meetup?
>> 
>> Nice blog, Christian, good job.
>> 
>> Gee, I think I may have been to them all (Anaheim Winter NAMM)…anyway, too 
>> busy to make a weekday, but I plan to go Saturday.
>> 
>> > On Jan 18, 2016, at 2:42 AM, Christian Luther  wrote:
>> > 
>> > Hey everyone!
>> > 
>> > who’ll be there and who’s in for a little music-dsp meetup?
>> > 
>> > Cheers
>> > Christian
>> > 
>> > P.S.: I just started a new blog, might be interesting for you guys. Have a 
>> > look:
>> > http://science-of-sound.net
>> 


[music-dsp] R: Re: NAMM Meetup?

2016-01-19 Thread Marco Lo Monaco


I will be there by Friday. Where do we gather exactly?
Marco



[music-dsp] R: Compensate for interpolation high frequency signal loss

2015-08-17 Thread Marco Lo Monaco
Talking about the all-pass modulation (and reconnecting to a past hot
post about time-varying filters), has anyone tried the following:
1) check whether the allpass passes the minimum-norm criterion that Laroche
introduced (maybe the 5-sample transient found in the paper for a in the range
0.618-1.618 comes out of that)
2) use the trick of the linear combination of the SVF outputs to model (at the
cost of a more complex state-space implementation) _any_ 2nd-order filter (thus
also the modulated allpass) while keeping it stable in the minimum-norm sense
(see the DAFx-14 paper "Time-Varying Filters for Musical Applications" by
A. Wishnick from iZotope)

M.

> -Original message-
> From: music-dsp [mailto:music-dsp-boun...@music.columbia.edu] On behalf of
> Nigel Redmon
> Sent: Sunday, 16 August 2015 19:57
> To: Sham Beam
> Cc: music-dsp@music.columbia.edu
> Subject: Re: [music-dsp] Compensate for interpolation high frequency signal
> loss
> 
> As far as compensation: taking linear as an example, we know that the
> response rolls off (“sinc^2”). Would you compensate by boosting the highs?
> Consider that for a linearly interpolated delay line, a delay of an integer
> number of samples, i, has no high-frequency loss at all, but the error is
> maximal if you need a delay of i + 0.5 samples. More difficult to compensate
> for would be such a delay line where the delay time is modulated.
> 
> A well-published way of getting around the fractional problem is allpass
> compensation. But a lot of people seem to miss that this method doesn’t
> lend itself to modulation—it’s ideally suited for a fixed fractional delay.
> Here’s a paper that shows one possible solution, crossfading two allpass
> filters:
> 
> http://scandalis.com/jarrah/Documents/DelayLine.pdf
> 
> Obviously, the most straightforward way to avoid the problem is to convert
> to a higher sample rate going into the delay line (using windowed sinc, etc.),
> then use linear, hermite, etc.
> 
> 
> > On Aug 16, 2015, at 1:09 AM, Sham Beam  wrote:
> >
> > Hi,
> >
> > Is it possible to use a filter to compensate for high frequency signal
> > loss due to interpolation? For example linear or hermite interpolation.
> >
> > Are there any papers that detail what such a filter might look like?
> >
> >
> > Thanks
> > Shannon



[music-dsp] R: Re: The Art of VA Filter Design book revision 1.1.0

2015-07-25 Thread Marco Lo Monaco


Hello Andrew. Yes, as I already said, I know the Mitra paper from '75, but it
relates to linear networks. Of course I know the Italian guys (personally)
and, to my knowledge, they were the pioneers as far as nonlinear systems are
concerned. I thought there were references in the 70s talking about nonlinear
systems and delay-free loops, which would have been quite surprising to me.
Thanks for the other links anyway, which I was also aware of.
Marco


Sent from my Samsung device

 Original message 
From: Andrew Leary
Date: 25/07/2015 12:42 (GMT-08:00)
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0

On Jul 25, 2015, at 12:25 AM, Marco Lo Monaco  wrote:

> Also you tell us that the delay free loop problem was initially faced in the 
> 70s, which is something I didn't know. Do you have please any citation about 
> it?

An important early reference is:
Szczupak, J. and Mitra, S.K., “Detection, Location and Removal of Delay-free 
Loops in Digital Filter Configurations,” IEEE Transactions on Acoustics, Speech 
and Signal Processing, vol. 23, issue 6, pp. 558-562 (1975 December)

See also the later paper by Aki Härmä:
Härmä, Aki, “Implementation of Recursive Filters Having Delay Free Loops,”
Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and
Signal Processing, vol. 3, pp. 1261–1264 (1998 May)

and recently, these papers by Will Pirkle at University of Miami which have an 
approach slightly different from Zavalishin.
Pirkle, Will, “Resolving Delay-Free Loops in Recursive Filters using the 
Modified Härmä Method,” paper Number 9194, Audio Engineering Society Convention 
137, 2014 October 8

Pirkle, Will, “Novel Hybrid Virtual Analog Filters Based on the Sallen-Key 
Architecture,” paper Number 9195, Audio Engineering Society Convention 137, 
2014 October 8

Also:
G. Borin, G. De Poli, and D. Rocchesso, “Elimination of delay-free loops in
discrete-time models of nonlinear acoustic systems,” IEEE Trans. on Speech
and Audio Processing, vol. 8, no. 5, pp. 597–605, 2000

There are some other papers by Rocchesso and Federico Fontana in the early
2000s.


Andrew Leary
Korg Research & Development
a...@korgrd.com





[music-dsp] R: R: Re: The Art of VA Filter Design book revision 1.1.0

2015-07-25 Thread Marco Lo Monaco
Unless you consider the papers by Mitra et al., which deal only with the
computability of linear graphs… those are indeed from the 70s.

 

M.

 

From: music-dsp [mailto:music-dsp-boun...@music.columbia.edu] On behalf of
Marco Lo Monaco
Sent: Saturday, 25 July 2015 01:31
To: music-dsp@music.columbia.edu
Subject: [music-dsp] R: Re: The Art of VA Filter Design book revision 1.1.0

 

Ciao Vadim,

On page viii, the name of my dear friend and coworker Federico Avanzini is the
correct one.

 

Also, you tell us that the delay-free loop problem was initially faced in the
70s, which is something I didn't know. Do you have any citation for it, please?

 

Cheers

 

Marco

 

 

Sent from my Samsung device



 Original message 
From: robert bristow-johnson
Date: 24/07/2015 12:48 (GMT-08:00)
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] The Art of VA Filter Design book revision 1.1.0


hey Vadim,

i love the rigor in your paper.  i'm still looking through it.

in the 2nd-order analog filters, i might suggest replacing "2R" with 1/Q 
in all of your equations, text, and figures because Q is a notation and 
parameter much more commonly used and referred to in either the EE or 
audio/music-dsp contexts.

in section 3.2, i would replace n0-1 with n0 (which means replacing n0 
with n0+1 in the bottom limit of the summation).  let t0 correspond 
directly with n0.

now even though it is ostensibly obvious on page 40, somewhere (and 
maybe i just missed it) you should be explicit in identifying the 
"trapezoidal integrator" with the "BLT integrator".  you intimate that 
such is the case, but i can't see where you say so directly.

section 3.9 is about pre-warping the cutoff frequency, which is of 
course oft treated in textbooks regarding the BLT.  it turns out that 
any *single* frequency (for each degree of freedom or "knob") can be 
prewarped, not only or specifically the cutoff.  in a 2nd-order system, 
you have two independent degrees of freedom that can, in a BPF, be 
expressed as two frequencies (both left and right bandedges).  you might 
want to consider pre-warping both, or alternatively, pre-warping the 
bandwidth defined by both bandedges.

lastly, i know this was a little bit of a sore point before (i can't 
remember if it was you also that was involved with the little tiff i had 
with Andrew Simper), but as depicted on Fig. 3-18, any purported 
"zero-delay" feedback using this trapezoidal or BLT integrator does get 
"resolved" (as you put it) into a form where there truly is no 
zero-delay feedback.  a "resolved" zero-delay feedback really isn't a 
zero-delay feedback at all.  the paths that actually feedback come from 
the output end of a delay element.  the structure in Fig 3-18 can be 
transposed into a simple 1st-order direct form that would be clear *not* 
having zero-delay feedback (but there is some zero-delay feedforward, 
which has never been a problem).

i'll be looking this over more closely, but these are my first 
impressions.  i hope you don't mind the review (that was not explicitly 
asked for).

L8r,

r b-j


On 7/24/15 6:58 AM, Vadim Zavalishin wrote:
> Released the promised bugfix
> http://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf
>  
>
>
> On 22-Jun-15 10:51, Vadim Zavalishin wrote:
>> Didn't realize I was answering a personal rather than a list email, so
>> I'm forwarding here the piece of information which was supposed to go to
>> the list:
>>
>> While we are on the topic of the book, I have to mention that I found
>> the bug in the Hilbert transformer cutoff formulas 7.42 and 7.43. Tried
>> to merge odd and even orders into a more simple formula and introduced
>> several mistakes. The necessary corrections are (if I didn't do another
>> mistake again ;) )
>> - the sign in front of each occurrence of sn must be flipped
>> - x=(4n+2+(-1)^N)*K(k)/N
>> - the stable poles are given by n> odd.
>>
>> I plan to release a bugfix update, but want to wait for possibly more
>> bugs being discovered.
>>
>> Regards,
>> Vadim
>>
>>
>


-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




[music-dsp] R: Re: The Art of VA Filter Design book revision 1.1.0

2015-07-24 Thread Marco Lo Monaco


Ciao Vadim,
On page viii, the name of my dear friend and coworker Federico Avanzini is the
correct one.
Also, you tell us that the delay-free loop problem was initially faced in the
70s, which is something I didn't know. Do you have any citation for it, please?
Cheers
Marco


[music-dsp] R: R: Comb filter decay wrt. feedback

2015-05-12 Thread Marco Lo Monaco
Ahaha, OK... I often go to California, so if you hang out there, just lemme
know!

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of robert
bristow-johnson
Sent: Tuesday, 12 May 2015 20:04
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] R: Comb filter decay wrt. feedback

On 5/11/15 3:25 AM, Marco Lo Monaco wrote:
>
>> we should have a drink together (do you come to U.S. AES shows?) and 
>> all you do is mention one of several political topics and the 
>> hyperbole coming outa me will be worse than this.
> Hey, I also wanna have a drink with RBJ and talk about life, politics, 
> women and DSP?!?!?!!
>

as if i know shit about any of those topics.

i've also never made it across the pond.  so you'll have to come to the U.S.
(or maybe Montreal which is as close to Europe as i can get).


-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: Comb filter decay wrt. feedback

2015-05-11 Thread Marco Lo Monaco
 
> we should have a drink together (do you come to U.S. AES shows?) and all
> you do is mention one of several political topics and the hyperbole coming
> outa me will be worse than this.

Hey, I also wanna have a drink with RBJ and talk about life, politics, women
and DSP?!?!?!!

Even if there is the risk that the hyperbolic conversation could last
asymptotically forever…


M.



[music-dsp] R: Geographical visualization of music-dsp list participants

2015-04-23 Thread Marco Lo Monaco
Hello Peter,
I remember I clicked on your links when I was in San Diego, but actually my
place is near Venice, which is missing on the maps.
Just to let you know about other weird cases not listed in your emails.

M.

> -Original message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On behalf of Peter S
> Sent: Thursday, 23 April 2015 17:18
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Geographical visualization of music-dsp list
> participants
> Hi List,
> 
> As I promised earlier, here's an updated geographical visualization of
> music-dsp mailing list participants. I analyzed visitors who clicked on
> links that I posted earlier to this mailing list (sound demos, log2
> approximation, biquad filter).
> 
> Some info:
> - 29 IP addresses were clearly associated with bots (GoogleBot, BingBot
>   etc.); these were removed from the data.
> - The data included a total of 347 unique IP addresses, which correspond to
>   227 unique geolocations. Some nearby locations (for example, nearby IP
>   addresses within the same organization) resolved to the same location.
> - Some of these addresses are disjoint: they visited one link but not the
>   other, or vice versa. The visualization includes all IPs from the logs.
> - Multiple IP addresses may correspond to the same person (browsing from
>   multiple locations, etc.).
> - The list excludes people who are not interested in what I say and send my
>   mail to /dev/null; their number is unknown.
> - I tried to make the data set as accurate as possible, though some minor
>   errors cannot be ruled out.
> 
> Here are the visualizations (click on the images to zoom):
> http://morpheus.spectralhead.com/musicdsp-geo/
> 
> Best regards,
> Peter Schoffhauzer


[music-dsp] R: recursive SIMD?

2015-04-15 Thread Marco Lo Monaco
If you are looking for an example of an IIR filter implementation, it is
actually possible under some restrictions, although the benefit arises only
for large filter orders (which are rarely used in audio).
The main problem is that AFAIK there is no single instruction that can sum up
all four floats in an SSE register; you must do it by shuffling, which
sometimes cancels out the benefit.

You may look at these two papers about IIR filtering with SIMD and get some
ideas:

http://saluc.engr.uconn.edu/refs/processors/intel/mmx_sse/iir_fir.pdf (the
classic Intel application note AP598)
http://www.cosy.sbg.ac.at/~rkutil/publication/Kutil08b.pdf

Have fun!
M.

> -Original message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On behalf of Peter S
> Sent: Wednesday, 15 April 2015 15:00
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] recursive SIMD?
> On 11/04/2015, Eric Christiansen  wrote:
> > I haven't done much with SIMD in the past, so my experience is pretty
> > low, but my understanding is that each data piece must be defined
> > prior to the operation, correct? Meaning that you can't use the result of
> > the operation of one piece of data as the source data for the next
> > operation, right?
> >
> > This came up in thinking about how to optimize an anti-aliasing routine.
> > If, for example, the process is oversampling by 4 and running each
> > through a low pass filter and then averaging the results, I was
> > wondering if there's some way of using some SIMD process to speed this
> > up, specifically the part sending each sample through the filter.
> > Since each piece has to go through sequentially, I would need to use
> > the result of the first filter tick as the input for the second filter
> > tick.
> >
> > But that's not possible, right?
> 
> Technically you can do it, by keeping the previous result in some temporary
> register, but since you cannot parallelize the recursion (unless you have
> actual parallel filters), it rarely gives much speedup for IIR filters, if
> any. So it's probably not worth the hassle for recursive filters.
> 
> Quote from a post from 2007:
> 
> - Original Message -
> From: "Eric Brombaugh" 
To: "A discussion list for music-related DSP"
> Sent: Friday, October 05, 2007 11:38 PM
> Subject: Re: [music-dsp] Cascaded biquad filter structures
> 
> > You can vectorize a cascade of biquads if you're willing to accept
> > some transport delay - just insert pipelines between the stages to
> > hold the previous results.
> >
> > A few years back I coded up an FIR and an IIR biquad in SSE. I got
> > about 2x speed improvement in the FIR version over plain optimized GCC
> > with floats, but the IIR implementation was about 70% slower in SSE
> > than plain optimized GCC.


[music-dsp] R: Glitch/Alias free modulated delay

2015-03-21 Thread Marco Lo Monaco
That's exactly what I would also suggest!
Linear interpolation is used in commercial products more often than one might
think.

> -Original message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On behalf of Nigel Redmon
> Sent: Saturday, 21 March 2015 00:07
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Glitch/Alias free modulated delay
> 
> Suggestion:
> 
> Make it work with linear interpolation first.
> 
> The implementation is extremely simple—it won’t take much of your time to
> try it—and you’ll eliminate most of the problems (buffer wrap, etc.) without
> getting confused about whether your interpolation scheme is the fault.
> 
> Plus, you’ll have a baseline to compare higher-order improvements with.
> Linear interpolation sounds better than most people would guess, with
> typical musical input (musical instruments usually have weaker upper
> harmonics), so you’ll have a better idea of whether you’re getting your
> money’s worth with more elaborate methods.
> 
> 
> 
> > On Mar 20, 2015, at 2:10 PM, Nuno Santos 
> wrote:
> >
> >
> >> On 20 Mar 2015, at 18:58, Alan Wolfe  wrote:
> >>
> >> One thing to watch out for is to make sure you are not looking
> >> backwards AND forwards in time, but only looking BACK in time.
> >
> > This is how I calculate the read index:
> >
> > float t=_writeIndex-_time-_modulation*_modulationRange;
> >
> > if(t<0)
> >t+=_size;
> >
> >>
> >> When you say you have an LFO going from -1 to 1 that makes me think
> >> you might be going FORWARD in the buffer as well as backwards, which
> >> would definitely cause audible problems.
> >
> > I have tried to rescale the LFO to fit between 0 and 1 and it produces the same
> artefacts:
> >
> >
> > // this is where the delay gets updated with the lfo
> > float lfo = (_lfo.step()-1.f)/2.f;
> >
> > delay.setModulation(lfo);
> >
> >>
> >> your LFO really should go between -1 and 0, you then multiply that
> >> value by the number of samples in your buffer (minus 1 if needed,
> >> depending on your design and timing in your code), and then subtract
> >> that value from your "write index" into the buffer, making sure to
> >> handle the case of going negative, where your subtracted offset is
> >> greater than your current write index.
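[Editor's note: Alan's recipe above reads as code almost directly. A hedged sketch; the function and parameter names are mine, not from the thread.]

```cpp
#include <cassert>
#include <cmath>

// Map an LFO in [-1, 0] to a fractional read position that always
// stays behind the write index, wrapping negative values.
float delayReadPos(float lfo, int writeIndex, int bufferSize)
{
    // lfo is assumed to be in [-1, 0]; -lfo * (bufferSize - 1) is the
    // look-back offset in samples.
    float offset = -lfo * static_cast<float>(bufferSize - 1);
    float pos = static_cast<float>(writeIndex) - offset;
    if (pos < 0.0f)
        pos += static_cast<float>(bufferSize);     // handle going negative
    return pos;                                    // always behind the write index
}
```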
> >
> > I even tried to change from
> >
> > _time+_modulation*_modulationRange
> >
> > to
> >
> > _time-_modulation*_modulationRange
> >
> > Exactly the same issues….
> >
> > :/
> 


[music-dsp] R: Glitch/Alias free modulated delay

2015-03-20 Thread Marco Lo Monaco
How often do you update the LFO? Every buffersize (32/64 samples)?

M.

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Nuno Santos
> Sent: Friday, 20 March 2015 19:06
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Glitch/Alias free modulated delay
> 
> Hi,
> 
> Today I have used a piece of code which is on musicdsp for testing this out
> again.
> 
> http://musicdsp.org/archive.php?classid=4#98
> 
> 
> I was able to have a delay changing in time without any kind of artefact or
> glitch. However, I have only managed to get these results by changing the
> parameter by myself.
> 
> By manually moving the parameter myself, I mean that I update a
> property which is linearly interpolated over time (500 ms).
> 
> When I try to apply the modulation which is a value between -1 and 1, that
> comes from an LFO, I always end up with artefacts and noise
> 
> I don’t understand why it works so well when I move the parameter value
> (which is also changing constantly due to interpolation), and it doesn’t work
> when I apply the modulation with the lo…
> 
> Any ideas?
> 
> This is my current code
> 
> void IDelay::read(IAudioSample *output)
> {
> double t=double(_writeIndex)-_time; // works perfectly moving the
> handle manually with the value being interpolated before getting the
> variable _time;
> //double t=double(_writeIndex)-_time+_modulation*_modulationRange;
> 
> // clip lookback buffer-bound
> if(t<0.0)
> t=_size+t;
> 
> // compute interpolation left-floor
> int const index0=int(t);
> 
> // compute interpolation right-floor
> int index_1=index0-1;
> int index1=index0+1;
> int index2=index0+2;
> 
> // clip interp. buffer-bound
> if(index_1<0)index_1=_size-1;
> if(index1>=_size)index1=0;
> if(index2>=_size)index2=0;
> 
> // get neighbour samples
> float const y_1= _buffer[index_1];
> float const y0 = _buffer[index0];
> float const y1 = _buffer[index1];
> float const y2 = _buffer[index2];
> 
> // compute interpolation x
> float const x=(float)t-(float)index0;
> 
> // calculate
> float const c0 = y0;
> float const c1 = 0.5f*(y1-y_1);
> float const c2 = y_1 - 2.5f*y0 + 2.0f*y1 - 0.5f*y2;
> float const c3 = 0.5f*(y2-y_1) + 1.5f*(y0-y1);
> 
> *output=((c3*x+c2)*x+c1)*x+c0;
> }
> > On 20 Mar 2015, at 14:20, Bjorn Roche  > wrote:
> >
> > Interpolating the sample value is not sufficient to eliminate artifacts.
> > You also need to eliminate glitches that occur when jumping from one
> > time value to another. In other words: no matter how good your
> > sample-value interpolation is, you will still introduce artifacts when
> > changing the delay time. A steep low-pass filter going into the delay
> > line would be one way to solve this. (this is the idea of
> > "bandlimiting" alluded to earlier in this discussion.)
> >
> > I can say from experience that you absolutely must take this into
> > account, but, if memory serves (which it may not), the quality of
> > interpolation and filtering is not that important. I am pretty sure
> > I've written code to handle both cases using something super simple
> > and efficient like linear interpolation and it sounded surprisingly
> > good, which is to say everyone else on the project thought it sounded
> > great, and that was enough to consider it done on that particular project.
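[Editor's note: Bjorn's point about taming jumps in the delay-time control can be sketched with a one-pole smoother. This is an illustrative assumption on my part (the struct name and coefficient are tuning choices), not code from the project he describes.]

```cpp
#include <cassert>
#include <cmath>

// One-pole lowpass on the delay-time control signal: the delay time
// glides toward `target` instead of jumping, which removes clicks.
struct SmoothedTime {
    float state = 0.0f;
    float coeff = 0.999f;   // closer to 1.0 = slower, smoother glide

    float process(float target) {
        state = coeff * state + (1.0f - coeff) * target;
        return state;        // use this as the delay in samples
    }
};
```

Calling `process` once per sample with the raw (possibly stepped) delay time yields a continuously gliding delay, which is the "filter the control signal" idea in its simplest form.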
> >
> > HTH
> >
> >
> >
> > On Fri, Mar 20, 2015 at 6:43 AM, Steven Cook
> > <stevenpaulc...@tiscali.co.uk> wrote:
> >
> >>
> >> Let suppose that I fix the errors In the algorithm. Is this
> >> sufficient
> >>> for a quality delay time
> >>> Modulation? Or will I need more advance technics?
> >>>
> >>
> >> That's a matter of opinion :-) My opinion is that the hermite
> >> interpolation you're using here (I didn't check to see if it's
> >> implemented
> >> correctly!) is more than adequate for modulated delay effects like
> >> chorus - I suspect a lot of commercial effects have used linear
> interpolation.
> >>
> >> Steven Cook.
> >>
> >>
> >>
> >> -Original Message- From: Nuno Santos
> >>> Sent: Thursday, March 19, 2015 6:28 PM
> >>> To: A discussion list for music-related DSP
> >>> Subject: Re: [music-dsp] Glitch/Alias free modulated delay
> >>>
> >>> Hi,
> >>>
> >>> Thanks for your replies.
> >>>
> >>> What I hear is definitely related with the modulation. The artefacts
> >>> are audible every time the modulation is applied: manually or
> >>> automatically (please not that I have an interpolator for manual
> >>> parameter changes to avoid abrupt changes). I think I was already
> >>> applying an Hermit interpolation. This is my delay read function.
> >>>
> >>> void IDelay::read(IAudioSample *output) {  float t =
> >>> _time+_modulation*_modulationRange;
> >>>
> >>>  if (t>(_size-1)

[music-dsp] R: R: Geographical visualization of music-dsp list participants

2015-02-11 Thread Marco Lo Monaco
Ok thanks for saying that.
I will just add the reference here for the Massberg LPF for those on the list
who don’t know it.

A quick glance at it via Pirkle's book:
https://books.google.com/books?id=v0ulUYdhgXYC&pg=PA201&lpg=PA201&dq=massberg+lpf&source=bl&ots=WTivtRhrn-&sig=m3s6AKN02Qy4C48D7YVvhZQ8TnI&hl=it&sa=X&ei=qa3bVOCKHcSzoQT35YCICg&ved=0CDwQ6AEwBA#v=onepage&q=massberg%20lpf&f=false

The original AES 2011 paper (for those who have the AES eLib subscription):
http://www.aes.org/e-lib/browse.cfm?elib=16077

The IR looks very similar to yours, and the trick is to compensate with
additional zeros in the digital LPF biquad in order to stretch the frequency
response around Nyquist and the cutoff/reso (and compensate the warping). So it
ends up being a 2nd-order shelf rather than a pure LPF.

Hope this can help

Marco


> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Peter S
> Sent: Wednesday, 11 February 2015 20:21
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] R: Geographical visualization of music-dsp list
> participants
> 
> Hi Marco,
> 
> On 11/02/2015, Marco Lo Monaco  wrote:
> > my hit was from San Diego, but it actually should be from Italy (near
> > Venice). When I will be back home in a week or so I will click again
> > to update the map.
> > :)
> 
> No problem :) I'll update the map in 1-2 weeks, including all the data points
> that I skipped the first time.
> 
> > Thanks for sharing.
> 
> You're welcome :)
> 
> > PS: I responded to your LPF biquad post but I got no feedback, don’t
> > know if you missed it or not.
> 
> I saw it, and thanks for your reply. I just didn't have anything else to add 
> at
> the moment.
> 
> - Peter


[music-dsp] R: Geographical visualization of music-dsp list participants

2015-02-11 Thread Marco Lo Monaco
Hi Peter,
my hit was from San Diego, but it actually should be from Italy (near
Venice). When I will be back home in a week or so I will click again to
update the map.
:)
It's also nice to see how, depending on the location in the world, I can
recognize some authors from the DSP literature living in those places.
Thanks for sharing.

M.

PS: I responded to your LPF biquad post but I got no feedback, don’t know if
you missed it or not.

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Peter S
> Sent: Wednesday, 11 February 2015 03:05
> To: A discussion list for music-related DSP
> Subject: [music-dsp] Geographical visualization of music-dsp list
participants
> 
> Hi All,
> 
> Since I like to know who I am communicating with, I did an analysis on the
> visitors who clicked on the links that I posted here. This should
represent a
> large part of the people who actively read this mailing list.
> 
> My tracker registered a total of 133 visitors: 75 from Europe, 47 from
North
> America, 7 from Asia and 4 from South America.
> 
> Geographical visualization of visitors on a world map:
> http://morpheus.spectralhead.com/img/musicdsp/world.png
> 
> Visualization of visitors from EU area:
> http://morpheus.spectralhead.com/img/musicdsp/eu.png
> 
> Visitors from EU area, close-up:
> http://morpheus.spectralhead.com/img/musicdsp/eu-zoom.png
> 
> Close-up of EU North-West region:
> http://morpheus.spectralhead.com/img/musicdsp/eu-nw.png
> 
> Visitors from USA:
> http://morpheus.spectralhead.com/img/musicdsp/usa.png
> 
> Close-up of San Francisco - Los Angeles area:
> http://morpheus.spectralhead.com/img/musicdsp/usa-west.png
> 
> Close-up of USA East region:
> http://morpheus.spectralhead.com/img/musicdsp/usa-east.png
> 
> Visitor classification based on operating system:
> 44.36% - OS X
> 33.83% - Windows
> 14.23% - Linux
> 6.77%  - iOS
> 0.75%  - Android
> 
> Best regards,
> Peter



[music-dsp] R: Two pole one zero biquad filter

2015-02-06 Thread Marco Lo Monaco
Nicely done, Peter, and thanks for sharing.
I was the one who suggested being super-clear, to avoid any doubt when we
judge your results, because via email, with no published paper, that can
never be the ultimate argument in the discussion. I never thought that you
could be cheating on your proof.

To me, without going into any math and by just giving a thought/quick glance
at the SPAN screenshot, they look like the Massberg analog-matched, or
something very similar.

Ciao

Marco



> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Peter S
> Sent: Friday, 6 February 2015 23:18
> To: A discussion list for music-related DSP
> Subject: [music-dsp] Two pole one zero biquad filter
> 
> Hi All,
> 
> I'm going to show some transfer curves of what I call "scp biquad v1",
which is
> my first and simplest digital approximation of an s-domain analog 2-pole
> resonant lowpass filter, using the following time domain
> function:
> 
> y[n] = a0*x[n] + a1*x[n-1] - b1*y[n-1] - b2*y[n-2]
> 
> Some expressed doubt that I might be "faking" the transfer curves, so I
> actually implemented this as an audio plugin, and grabbed and merged
> several screenshots from a spectrum analyzer plugin. Here are the
> graphs:
> 
> http://morpheus.spectralhead.com/img/scp-biquad.png
> 
> Parameters are: q = 10
> w = 0.013, 0.025, 0.05, 0.1, 0.2, 0.3, 0.4, 0.45 (573, 1100, 2205, 4410,
8820,
> 13230, 17640, 19845 Hz)
> 
> As you see, there's still some "misbehaving" near Nyquist, but this is
still
> work-in-progress. I have two ideas on how to improve this further.
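[Editor's note: for reference, the time-domain function quoted above maps directly to code. A hedged sketch; the coefficient values here are placeholders, since Peter's design formulas are not disclosed in the post.]

```cpp
#include <cassert>

// Direct implementation of y[n] = a0*x[n] + a1*x[n-1] - b1*y[n-1] - b2*y[n-2]:
// two poles (b1, b2) and one zero (a0, a1), i.e. the second zero sits at the origin.
struct TwoPoleOneZero {
    float a0 = 1.0f, a1 = 0.0f, b1 = 0.0f, b2 = 0.0f;  // placeholder coefficients
    float x1 = 0.0f, y1 = 0.0f, y2 = 0.0f;             // filter state

    float process(float x) {
        const float y = a0 * x + a1 * x1 - b1 * y1 - b2 * y2;
        x1 = x;
        y2 = y1;
        y1 = y;
        return y;
    }
};
```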
> 
> Your homework:
> --
> 
> "Implement the filter with the above transfer function using 2 poles and 1
> zero. In other words, implement a two pole biquadratic lowpass filter
> formula with the 2nd zero fixed at origin."
> 
> You guys are masters of formal and symbolic computation, right? So I'm
sure
> it's going to be child's play for you. I won't disturb your thinking with
my
> boring explanations ;)
> 
> Good luck! ;)
> 
> Best regards,
> Peter Schoffhauzer
> Prof. Bitflip



[music-dsp] R: Thoughts on DSP books and neural networks

2015-02-05 Thread Marco Lo Monaco
Hello Peter,
I would rather get an IR from a SPAN plugin like the Voxengo one, using a real
stimulus signal (say, noise or an impulse). That gives more of a sense that it's
a real context and not just a formula with possible cheating (i.e. if you have
discovered the perfect filter, with no warping and no Q-narrowing near Nyquist,
and you just plot a Bode diagram, how can I know that you are not cheating by
just using the s-domain formula? :P )

Hope this helps.

M.

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Peter S
> Sent: Friday, 6 February 2015 01:56
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Thoughts on DSP books and neural networks
> 
> Ian,
> 
> Thanks for the suggestion. I'll try to make some fancy graphs.
> I think I have Octave and Scilab installed (hm, even Mathematica).
> 
> - Peter
> 
> 
> On 06/02/2015, Ian Esten  wrote:
> > Octave or Matlab. Or even Mathematica. It would be very interesting to
> > see the transfer function of your filter on the same graph as the
'ideal'
> > analog filter.
> >
> > Ian
> >
> > On Thursday, February 5, 2015, Peter S 
> > wrote:
> >
> >> What do you guys use to turn your impulse responses into fancy FFT
> >> diagrams? If you can recommend some software, I'll post some transfer
> >> curves of the 2 pole 1 zero biquad filter.
> >>
> >> - P



[music-dsp] R: R: Thoughts on DSP books and neural networks

2015-02-05 Thread Marco Lo Monaco
Well, of course there are many ways to do that, but it is simply more
difficult. One, of course, is to claim that you are correcting an error, in
which case you may also re-engineer and keep a private copy of the object
code. It's then up to you to find a convincing proof that you corrected an
error, and in which sense.

You cannot use a disassembler, because that is what we call "a translation
from object code" (mere numbers) to human-readable opcodes (nor do ASM
debuggers help). Translating is a copyrighted action and you cannot do it.
Of course enforcement is the problem, but it puts you in a situation
where you know you are violating your license agreement and
committing an illegal action.
Then, you reasonably say: who is ever going to find me as long as I do that
in a dark sweat room under my desk with a blanket covering me and my pc so
that no spy-webcam can see me? You are right, no one and being in Italy that
is 100% sure :)

The principle is that decompiling and reverse engineering violate trade
secrets if done in the "translation"/"modify" way. This simply puts you in a
guilty position, whatever you may want to do to trick your competitor and
milk the system. But again, you will probably never be caught, and yes, then
you can claim that you independently reached/improved the math, at which
point you reverse the "burden of proof" onto your competitor.

Of course there are methods like Clean Room Design/Chinese wall by which
this can easily be circumvented... as I said, it is only a matter of letting
you know that you are misbehaving. As I understand it, in the States you can
freely decompile and debug the ASM code and it is legal. Am I wrong?

M.

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of robert bristow-johnson
> Sent: Friday, 6 February 2015 01:10
> To: music-dsp@music.columbia.edu
> Subject: Re: [music-dsp] R: Thoughts on DSP books and neural networks
> 
> On 2/5/15 6:48 PM, Marco Lo Monaco wrote:
> > I don't know very much about US IP laws, but in Italy (EU) reverse
> > engineering is illegal unless for interoperability issues.
> 
> > You can understand an algorithm only by measuring in-out relationships
> > (very difficult to understand the details of an algo by only doing
> > this),
> 
> maybe, if it's LTI.
> 
> >   but no
> > decompiling or "modify-the-code-and-see-what-happens" techniques.
> 
> how do they enforce that?  if someone is really good at it, and with a
good
> logic-state analyzer and/or disassembler (remember MacNosy?), gets to look
> at the code after it's been decrypted, and figgers out what the math that
> gets done is, and then writes that math in his company notebook, how can
> they differentiate that from if the mathematical ideas didn't just pop
into
> that someone's head and he or she wrote it down?
> 
> even if, in litigating the reverse engineering, they say "the math is the
same,
> he or she must have reverse engineered it", usually you can find something
> different (and often better) to do to it that ostensibly makes it not
quite the
> same.
> 
> people don't volunteer evidence that is contrary to their own interest.
> 
> > But actually you are also allowed to reverse engineer to fix some
> > errors in the code.
> 
> how about making something better in design?  not quite the same as fixing
> errors, even though both make the product better.
> 
> --
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> 
> 



[music-dsp] R: Thoughts on DSP books and neural networks

2015-02-05 Thread Marco Lo Monaco
Ethan,
I don't know very much about US IP laws, but in Italy (EU) reverse
engineering is illegal unless for interoperability issues.
You can understand an algorithm only by measuring in-out relationships (very
difficult to understand the details of an algo by only doing this), but no
decompiling or "modify-the-code-and-see-what-happens" techniques.
But actually you are also allowed to reverse engineer to fix some errors in
the code.

Indeed, a trade secret is automatically protected, but that doesn’t prevent
anyone else from discovering the same knowledge independently.
IMHO, seeing what has been happening on the list over the last two
years, we are (more or less) at the same level of knowledge, so yes, trade
secrets nowadays count for very little in this highly biased MI tech sector.
Maybe some of us know a trick that can save much more CPU, some others know
a well-known problem more formally/rigorously, and others have built up a
high-quality bag of tricks.

M.

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Ethan Duni
> Sent: Thursday, 5 February 2015 23:01
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Thoughts on DSP books and neural networks
> 
> >P.S. Anyone who knows how to effectively turn ideas into money while
> >everyone can benefit, let me know. Patenting stuff doesn't sound like a
> >viable means to me.
> 
> Well, that's exactly what patents are for. I'm not sure why you don't
consider
> that viable. Is it to do with the costs and time required to file a
patent?
> 
> Absent a patent filing, you're stuck choosing between keeping your work
> undisclosed as a trade secret, or publicizing it without any means of
collecting
> licensing or other revenues. That's why patents were introduced, to cut
> through that knot and allow inventors to profit while still disclosing
their
> findings.
> 
> Note that if you decide to retain your work as a trade secret, there is no
legal
> barrier for others to reverse engineer it and use it without paying you a
dime.
> IANAL but someone could even come up with it independently and patent it
> themselves.
> 
> E
> 
> On Thu, Feb 5, 2015 at 5:15 AM, Peter S 
> wrote:
> 
> > The sad fact is
> >
> > 1) I really don't have time to write papers or books. I know from
> > experience, that both takes a lot of time, even writing a single DSP
> > paper properly will take days to complete.
> >
> > 2) Writing a book to a very small audience is simply not worth it
> > financially (again, I know this from experience). Small niche markets
> > are not profitable, and realistically, the DSP market is maybe just a
> > few hundred people (or a few thousand, at max). So it's very time
> > consuming but gives you very little profit.
> >
> > 3) What I would effectively be doing, is giving away my algorithms to
> > all my competitors. Sadly, I am not an academic who gets paid an
> > hourly rate to write papers and books, so I also have to keep business
> > considerations in mind (that's the sad reality).
> >
> > Best,
> > Peter
> >
> > P.S. Anyone who knows how to effectively turn ideas into money while
> > everyone can benefit, let me know. Patenting stuff doesn't sound like
> > a viable means to me.



[music-dsp] R: R: Sallen Key with sin only coefficient computation

2014-12-24 Thread Marco Lo Monaco
Agreed. And also, let's measure lengths in
inches/yards/miles. Ehhm... ooops! :)))

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Stefan Stenzel
> Sent: Wednesday, 24 December 2014 09:51
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] R: Sallen Key with sin only coefficient
computation
> 
> Time to stop this tragedy, let's also measure frequency in dnoces
> 
> 
> > On 24 Dec 2014, at 3:40 , Nigel Redmon  wrote:
> >
> >> On Dec 23, 2014, at 4:45 AM, r...@audioimagination.com wrote:
> >>
> >> in units of mhos (reciprocal of ohms)?
> >
> > Tragically, the formal name for the mho is Siemens, in keeping with
naming
> units after the principal scientists involved. (Also, it follows from the
> "Siemens mercury unit".) The tragedy is not only in having such a clever
and
> descriptive term replaced by a non-descriptive one, but also the problem
> with the trailing "s" on the latter. 10 mhos = 10 Siemens; 1 mho = 1, er,
> Siemens...
> 



[music-dsp] R: R: Sallen Key with sin only coefficient computation

2014-12-23 Thread Marco Lo Monaco
Hi Robert,

> okay, that acronym could use a little bit of definition:
> http://en.wikipedia.org/wiki/Operational_transconductance_amplifier .
> we used to just call them "transconductance amplifiers"
> (or, more fundamentally, a "voltage-controlled current source") back in the
> olden daze.  the symbol with the two little circles is new to me (we didn't 
> use
> it in the 70s). 

Did you use the trapezoidal one?

> so is "g" marked on the OTA is in units of mhos (reciprocal of
> ohms)?  what is the value of that "g"?  then the voltage gain would be the
> transconductance (which i am assuming is "g") times the impedance of the
> load (which is 1/(jwC)).  and the voltage on the top of the cap is offset by 
> the
> voltage on the bottom.
> 

Yes, all OTAs have a programmable gm set via an Iabc current, and that's why they
are almost always used in VCFs.

> > the unity/k/mo0/m1/m2 gains as having infinite input impedance and
> > zero
> 
> > output impedance,
> 
> i don't think that transconductance amplifiers have zero output impedance.
> they ideally have infinite output impedance.  the current delivered is 
> (ideally)
> independent of the load connected or (ideally) of the voltage necessary to
> deliver the prescribed current.

In this case I am talking about the summing nodes and various voltage gains 
which are not related at all to OTAs, but they are simply ideal voltage 
amplifiers/mixers with a gain and zero output/infinite input impedance.

> 
> > the summing nodes as having infinite input impedence on each addend
> > input and zero output on the summed voltage node.
> 
> > By doing this the analog counterpart of Andrews scheme works.
> 
> 
> well, at least i can begin to model the circuit.

Good,  it will be much easier if you use a CAS tool.

> 
> that said, and this will make Andrew unhappy, unless there are nonlinear
> components put into this analysis using trapezoidal integration, there is
> nothing new to be discovered.  if the entire circuit remains as an LTI (with 
> op-
> amps or "OTAs" operating in linear mode, resistors, capacitors, even an
> occasional coil), then an H(s) will pop out with order equal to the number of
> reactive elements (now sometimes the order of the input-output transfer
> function is less than the number of caps because of pole-zero cancellation
> built into the design, but if it were modeled using state-variable convention,
> the internal order will *always* be equal to the number of reactive
> elements).
> so then you get your H(s) and substitute for every integrator (which is 
> s^(-1))
> the
> following:
> 
> s^(-1) = T/2 (z+1)/(z-1)
> or, if you prefer unit delays,
> s^(-1) = T/2 (1 + z^(-1))/(1 - z^(-1))   that is what you will get for 
> modeling
> that continuous-time system with a discrete-time model using trapezoidal
> integration rule for every continuous-time integrator in the system.  and that
> is also what you will get when applying the bilinear transform, without
> compensation for frequency warping, to H(s).  they will and they must come
> out the same.  2nd-order analog system gets transformed to a 2nd-order
> digital system.
>
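[Editor's note: the substitution above can be checked numerically. A hedged sketch (mine, not from the thread): applying the bilinear transform, without frequency pre-warping, to a 2nd-order analog lowpass H(s) = w0^2 / (s^2 + (w0/Q)s + w0^2), with T the sample period.]

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Returns {b0, b1, b2, a0, a1, a2} of H(z), obtained by substituting
// s = (2/T)(z-1)/(z+1) into H(s) and normalizing so that a0 = 1.
std::array<double, 6> bilinearLowpass(double w0, double Q, double T)
{
    const double K = 2.0 / T;
    const double w2 = w0 * w0;
    const double norm = K * K + K * w0 / Q + w2;  // z^2 coefficient of the denominator
    return {
        w2 / norm,                                // b0
        2.0 * w2 / norm,                          // b1
        w2 / norm,                                // b2
        1.0,                                      // a0 (normalized)
        (2.0 * w2 - 2.0 * K * K) / norm,          // a1
        (K * K - K * w0 / Q + w2) / norm          // a2
    };
}
```

As expected, a 2nd-order analog system comes out as a 2nd-order digital one; without pre-warping the digital cutoff lands slightly below w0.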

Yes, nothing new in the discretizing approach; Andrew was simply stating that 
the analog topology is quite new and interesting compared to the usual ones used 
in analog synthesizers.
Having said that, he just proposed the corresponding discretization analysis, more 
as an exercise or a case study I guess.
 
> now not all biquad filter topologies are the same, even if they have the same
> H(s).  perhaps the topology you end up with using Andrew's analysis will be
> better than others (like the Direct Forms or the Lattice or Normallized 
> Lattice
> or the Rader-Gold form or Hal's SVF) for changing (slewing or modulating)
> coefficients.  perhaps Andrew's topology will have better roundoff noise
> behavior at the nodes where quantization must be done.  perhaps there will
> be better decoupling of coefficients from the user knobs (or modulating
> waveforms) that change them (like in the lattice, there is one coefficient 
> that
> solely determines the resonant frequency).
> 

What he does is basically a state-space representation of the analog system 
discretized via bilinear (like Matlab's bilinear() can do, given the ABCD 
s-domain matrices).
He is not using any of the topologies you mention, AFAIK. The nice trick he is 
using is a simplification of the ABCD matrix coefficients in terms of sin(.) instead of 
tan(.), which I expect yields a gain in efficiency in a time-varying filter 
context.

> but to determine the biquad coefficients from the user parameters is a
> solved problem.  to change the simple biquad coefficients (Direct Form
> x) to Lattice or Normalized Ladder is also not a new thing.  if the purpose 
> is to
> model the non-linear components in the position in the circuit where they
> actually exist, then i can see something coming out of this.  otherwise it's
> really nothing new.
> 

Even in the case of non-linearities whic

[music-dsp] R: Sallen Key with sin only coefficient computation

2014-12-21 Thread Marco Lo Monaco
Hello Robert,
I did a similar analysis months ago on the SVF topology Andrew posted at
that time.
The implicit (and most logical) convention is to consider the OTAs as output
current generators (as they should be, so that a current flows into the cap),
the unity/k/mo0/m1/m2 gains as having infinite input impedance and zero
output impedance, and the summing nodes as having infinite input impedance on
each addend input and zero output impedance on the summed voltage node.
By doing this, the analog counterpart of Andrew's scheme works.

Ciao

Marco

> -Original Message-
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of robert bristow-johnson
> Sent: Sunday, 21 December 2014 20:25
> To: music-dsp@music.columbia.edu
> Subject: Re: [music-dsp] Sallen Key with sin only coefficient computation
> 
> On 12/21/14 1:01 PM, Andrew Simper wrote:
> > I've updated the diagram of the filter to be a little prettier in the
> > full pdf, and I've also uploaded it as a jpg here:
> >
> > http://cytomic.com/files/dsp/SkfInputMixing.jpg
> >
> 
> i don't see how one analyzes that circuit since c1 and c2 are not
> connected to any other impedances.  there is no way to determine what
> the two capacitors do.  it's really a signal flow diagram (like we do
> with DSP) but with two mysterious elements added.
> 
> --
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> 
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
> dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] Re: Introducing: Axoloti

2014-12-09 Thread Marco Lo Monaco
Very cool project, my compliments Johannes!
Do you think it would be a big effort to make it compatible with the NXP
LPC4330 like this http://www.nxp.com/demoboard/OM13027.html#overview ?

(Just asking because I have it here with me and I need an excuse to start
playing with it :) )

Thanks for sharing.

Marco

> -----Original Message-----
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Johannes Taelman
> Sent: Friday, December 5, 2014 21:06
> To: music-dsp@music.columbia.edu
> Subject: [music-dsp] Introducing: Axoloti
> 
> Hi,
> 
> I'm pleased to announce the open source release of Axoloti.
> 
> Axoloti is a platform for sketching music-DSP algorithms running on
> standalone hardware built around an ARM Cortex M4F microcontroller.
> Axoloti has a graphical patcher that generates C++ code, and also manages
> compilation and upload to the microcontroller. The GUI runs on Windows,
> OSX and Linux.
> 
> It's still in alpha stage; many improvements are left to be made at all
> layers.
> But it's already complete enough to support a variety of techniques and
> applications.
> 
> Axoloti Core boards are not available currently, but with a few documented
> changes in the code, the editor runs with the STM32F4Discovery kit.
> 
> Website: www.axoloti.be
> Source code: https://github.com/JohannesTaelman/axoloti
> 
> You're invited to comment, test, report bugs, contribute, etc...
> 
> thanks,
> Johannes Taelman



[music-dsp] Re: LLVM or GCC for DSP Architectures

2014-12-09 Thread Marco Lo Monaco
Hello Stefan,
if you want to make your own audio DSP toolkit compatible with the SHARC
family, you should consider either a spinoff of your project or not
supporting that compatibility at all (it is, at the least, a mess to support
without headaches and cross-compilation issues), because the Harvard
architecture is really a different thing compared to the Von Neumann
architecture in ARMs.
Even if most of the SIMD functions can be switched via #ifdef inside a
higher-level SIMD function call (like add/mul of packed floats, or even basic
FIR/IIR/biquad implementations, so that CMSIS-DSP could be used as well as
native SHARC SIMD funcs), you will mainly have to deal with different
memory-segment allocation for coefficients and states when implementing a
filter.
Moreover, AFAIK there is no malloc support for PM (program memory) on the
SHARC, so you would need a custom allocator if you want runtime allocation of
filters.

I have never worked with TIs, but since they are Harvard-based I guess the
problems that arise are very similar.
Or maybe you could use the same memory segment and accept a performance
penalty, but that would defeat the purpose of using SHARCs.

Ciao

Marco


> -----Original Message-----
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Stefan Sullivan
> Sent: Monday, December 8, 2014 22:35
> To: A discussion list for music-related DSP
> Subject: [music-dsp] LLVM or GCC for DSP Architectures
> 
> Hey music DSP folks,
> 
> I'm wondering if anybody knows much about using these open source
> compilers to compile to various DSP architectures (e.g. SHARC, ARM, TI,
> etc). To be honest I don't know so much about the compilers/toolchains
> for these architectures (they are mostly proprietary compilers, right?).
> I'm just wondering if anybody has hooked these architectures up as
> back-ends to a more widely used compiler.
> 
> The reason I ask is because I've done quite a bit of development lately
> with C++ and template programming. I'm always struggling with being able
> to develop more advanced, widely applicable audio code while still being
> able to address lower-level DSP architectures. I am assuming that the
> more advanced feature set of C++11 (and eventually C++14) would be
> slower to appear in these proprietary compilers.
> 
> Thanks all,
> Stefan



[music-dsp] Re: magic formulae

2014-11-27 Thread Marco Lo Monaco
Yes, I agree with Tito.
It looks like "A New Kind of Music" :P

M.

> -----Original Message-----
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Tito Latini
> Sent: Thursday, November 27, 2014 17:09
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] magic formulae
> 
> On Thu, Nov 27, 2014 at 01:54:15PM +, Victor Lazzarini wrote:
> > Thanks everyone for the links. Apart from an article in arXiv written
> > by viznut, I had no further luck finding papers on the subject (the
> > article was from 2011, so I thought that by now there would have been
> > something somewhere, beyond the code examples and overviews etc.).
> 
> It seems like a 1D cellular automaton with a loop of rules for each
> cell, where a rule is determined by a bitwise operation. A generic
> example with only one byte (an &mask could fix the number of states
> with an int):
> 
> start   1 1 0 1 0 0 1 0
> rule 1  x x x x x x x x
> rule 2  x x x x x x x x
> ...
> rule n  0 1 0 0 1 0 1 1
> rule 1  x x x x x x x x
> rule 2  x x x x x x x x
> ...
> rule n  0 1 1 1 0 1 1 0
> rule 1  x x x x x x x x
> ...
> 
> and I presume the possible effects follow Wolfram's four classes
> (limit points, cyclic patterns, chaotic, and more complex behaviour).
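For context, the "magic formulae" under discussion are bytebeat one-liners: a
single bitwise expression of the sample counter t, evaluated once per sample,
with the low 8 bits taken as audio. A minimal sketch (the particular formula
below is just an arbitrary example; these are traditionally written in C
rather than Python):

```python
def bytebeat(t):
    # One bytebeat-style expression: bitwise/arithmetic ops on the
    # sample counter t; the low 8 bits become an unsigned sample.
    return t * (t >> 10 | t >> 8) & 0xFF

# Render one second at the traditional 8 kHz rate.
samples = bytes(bytebeat(t) for t in range(8000))
```

Writing `samples` to a raw 8-bit unsigned mono file at 8 kHz makes the
pattern audible.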



[music-dsp] Re: Release pyo 0.7.2 (Python dsp library)

2014-10-18 Thread Marco Lo Monaco
Hello Sampo, Oliver, RBJ

> -----Original Message-----
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of Sampo Syreeni
> Sent: Friday, October 17, 2014 23:51
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Release pyo 0.7.2 (Python dsp library)
> ... 
> input to the delay... How do you deal with a ten second delay commanded
> down to a one second delay, if that's done within two seconds in total?
> How do you keep it from glitching or pitch shifting during those two
> seconds?
>
Using crossfading to change the delay time in that way (2 playheads) should
produce a time-stretch-like effect and not a pitch-shift one.
Or am I misunderstanding this implementation?

Cheers,
Marco
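The two-playhead crossfade described above can be sketched as follows (a
minimal offline Python sketch with made-up parameter names; a real-time
version would use a circular buffer and typically an equal-power fade):

```python
import numpy as np

def crossfade_delay_change(x, d_old, d_new, fade_len):
    """Switch a delay line from d_old to d_new samples by crossfading two
    read heads over fade_len samples, instead of slewing the delay time
    (which would glide the pitch).  Offline, whole-signal sketch."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        a = x[i - d_old] if i >= d_old else 0.0   # old playhead
        b = x[i - d_new] if i >= d_new else 0.0   # new playhead
        g = min(1.0, i / fade_len)                # linear crossfade gain
        y[i] = (1.0 - g) * a + g * b
    return y

# Once the fade completes, the output is simply the new delay tap.
y = crossfade_delay_change(np.arange(100.0), d_old=10, d_new=3, fade_len=20)
```

Neither head ever changes speed, so there is no pitch shift; during the fade
the two taps are simply mixed, which is where the time-stretch-like artifact
comes from.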



[music-dsp] Re: Re: Re: Simulating Valve Amps

2014-06-27 Thread Marco Lo Monaco
> True, but wait until we get to "silky smooth treble response"!!. Maybe we
> need a meaty explanation for these words, especially for "sinky, spongy
> grid". ;-)
> 
Ahaha, Steffan, that's funny!
Actually, I don't like to use those subjective terms either (which you have
to deal with if you are working with guitarists!).
Mesa Boogie uses the word "spongy" to basically indicate to guitarists (and
here I go into the EE realm) that the (power-amp) bias shift changes pretty
fast, meaning that the equivalent envelope follower driving the bias has a
fast release time.
"Sinky" (I admit I invented it just to match the tone of the conversation to
those gnarly words) to me means that the grid is going to sink "a lot" of
current, thus changing the bias of the grid pretty quickly once you have
enough swing. Of course the time constant depends mainly on the decoupling
caps, which on tube amps are pretty small, around a dozen nF. A more correct
term would be "leakage": back in the days of the Fuzz Face, leakage was the
main cause that drove the sound into a fuzzed one instead of a distorted one
(due to bias shift), since leakage is itself a feature of germanium-transistor
base currents.

Finally, as RBJ says, I dunno about "silky smooth treble response", but I bet
it came from a guitar player!

Ciao

M. 




[music-dsp] Re: Re: Re: Simulating Valve Amps

2014-06-26 Thread Marco Lo Monaco
About bias shifts: actually, grid bias shifts tend to make the sound fuzzier
rather than compressed, especially when the decoupling cap is connected to a
sinky, spongy grid (like the 6L6), with the bias roughly changing with the
"envelope" of the input signal. In a push-pull you won't notice it (because
the effects on both swings compensate), but you can really notice excessive
grid sink even in a 12AX7 for large voltage swings. For the sake of clarity:
if you put a sine tone across the decoupling cap and ground, you will get a
square-ish signal on the plate whose duty cycle is not 50% and which changes
the more the swing rises.
My experience with bias shifts is that they are caused by the decoupling cap
90% of the time, and about 10% by the bypass caps (in terms of quantity).
Then there is sag, but it occurs mostly in power amps rather than preamps,
and it causes a blend of crossover distortion and reduced gain/increased
saturation.

M.

> -----Original Message-----
> From: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] On Behalf Of STEFFAN DIEDRICHSEN
> Sent: Thursday, June 26, 2014 05:39
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Re: Re: Simulating Valve Amps
> 
> 
> On 26 Jun 2014, at 14:13, robert bristow-johnson
>  wrote:
> 
> > grid current is zero only if V_gc < 0, which is normally how it's
> > biased, but large signal swings can change that.  when V_gc > 0, the
> > grid is like a miniature plate, it will draw some electrons off.
> >
> > looks a little like a diode.
> 
> 
> Exactly. Power tubes like the 6L6 can deliver a large grid current that
> causes clipping of the _input_ signal. This shifts the bias point
> substantially and contributes to the compression / distortion effect.
> The grid current “operating area” is used in some applications to
> produce a juicy (meaty, … [insert your suggestion here]) distortion
> with a lot of sustain.
> 
> Steffan
> 



[music-dsp] Re: Re: Re: Simulating Valve Amps

2014-06-26 Thread Marco Lo Monaco
Hi Steffan,
> 
> A model is as good as you understand your subject.
> 
> The problem with the tube equations from Norman Koren is that they don't
> account for grid current. Having done some live investigation in tube
> amps, my conclusion is that grid currents contribute largely to the
> operation of a tube amp if you drive it into distortion.
> Once you have seen this and understand how it works, it's easy to model
> it with pleasing results.
> 

Well, that's what I meant when I was talking a few posts ago about Koren's
incompleteness... Actually, for grid current he uses a diode plus a 2K series
resistor in almost every tube model, which is a good thing for computation
and Newton-Raphson (iG depends only on vG and not on vP), but I still don't
know if it's enough or not.

What I wanted to say is that even if you understand your subject and
everything is set up with that good understanding, the model could behave
well 95% of the time, leaving that 5% modeled incorrectly; at that point all
the tradeoffs may already have been made and there is nothing more you can
do.

M. 




[music-dsp] Re: Re: Re: Simulating Valve Amps

2014-06-25 Thread Marco Lo Monaco
> i am most concerned about Miller capacitance and the bypass capacitance in
> parallel with the cathode resistor.

All the paths leading to the bipole ports not directly connected to
independent electrical sources imply analog feedback, which is dynamic if
there are reactances (bypass/decoupling/miller caps).

> >   Actually a DC load line and it's relative nonlinear equation always
> > result by a sheared version of the nonlinear eqs thru some resistor
> > values,
> 

> not sure what you mean by a "a sheared version of the nonlinear eqs".
> do you mean the hard clipping at either the Ip or Vpc axis (what we would
call
> "saturation" or "cutoff" on the corresponding transistor curves)?

Consider a classic nonlinear system y = Fnl(x) with negative feedback at the
output through a gain K (no reactances, so everything is memoryless).
Due to the feedback connection we have:
x_d = x - K*y, the differential signal that goes into Fnl: the difference
between the actual input x and the fed-back output y.

Overall you will have

y = Fnl(x - K*y) (oh look, y is on both sides; you have to solve it via a
numerical scheme).

If you set p = x - K*y you have a shear transform of the xy plane (for a
graphical interpretation see http://en.wikipedia.org/wiki/Shear_mapping)
through the scalar K.

So if you have an Fnl which is typically exp-like (a diode), due to the
shearing (and its stretching) it becomes almost a piecewise-linear curve in
the p-domain, with a sharpened knee at the forward voltage.
What you can do in this SISO case, using your beloved LUT, is to compute a
set of x and y values like (x0,y0), (x1,y1), ..., (xn,yn), where yn = Fnl(xn),
and then remap them by means of shearing into the curve (x0 - K*y0, y0),
(x1 - K*y1, y1), ..., (xn - K*yn, yn), basically leading to a y = G(p)
function (its existence is guaranteed by Dini's implicit function theorem, of
which I spoke months ago on the list).

This is why every feedback system is always tied to shearing its
nonlinearities (which also has the known consequence of reducing the gain but
hardening the static transfer characteristic of the nonlinear component).
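The shear-then-tabulate idea can be sketched like this (a minimal NumPy
sketch, not Marco's actual code; note it uses the convention x = u + K*y for
the closed-loop abscissa, where the sign of K absorbs the feedback polarity,
so signs may differ from the K*y terms written above):

```python
import numpy as np

def sheared_lut(fnl, K, u_min, u_max, n):
    """Tabulate the solved feedback nonlinearity y = fnl(x - K*y) by
    shearing: sample the open-loop curve y_i = fnl(u_i); the closed-loop
    input that produces that output is x_i = u_i + K*y_i."""
    u = np.linspace(u_min, u_max, n)
    y = fnl(u)
    return u + K * y, y   # sheared abscissa is monotonic while 1 + K*fnl' > 0

def lut_solve(x_tab, y_tab, x):
    # One linear interpolation replaces the per-sample iteration.
    return np.interp(x, x_tab, y_tab)

# Solve y = tanh(x - K*y) for x = 1.0 by table lookup:
K = 0.5
x_tab, y_tab = sheared_lut(np.tanh, K, -6.0, 6.0, 4096)
y = lut_solve(x_tab, y_tab, 1.0)
```

The table is built once offline; at runtime each sample costs a binary
search (or index computation) plus one interpolation, with no iteration.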

> 
> 
> >   and this is something that I have learn 10 years
> > ago and it is not very often taught in EE course: the concept of shear
is
> > tightly tied with analog feedback involving nonlinear amplifiers.
> 
> okay, i better look up "shear".  dunno what you're referring to.  like
> the "fixed-point iteration", i am afraid it might just be another
> semantic to something we've been dealing with all along.
> 
> but reading the Wikipedia articles, i *did* learn that "the Brouwer
> fixed-point theorem ... says that any continuous function from the
> closed unit ball in n-dimensional Euclidean space to itself must have
> a fixed point" which seems to make sense.  and i am fully aware of
> the relationship this has with using the Newton-Raphson method.  being
> an EE and not a mathematician, i would prefer to call them "self-mapping
> points" instead of "fixed points", but i understand the semantic.
>

Not sure this makes sense: AFAIK the theorem works for those continuous
functions that map a set _TO ITSELF_.
And of course this is not a feature of every nonlinear law in electronic
components, so the theorem applies only to a few fortunate classes of
functions.
 
> > Using a load line (AC doesn't help, because it is used only for sine
> > like tones
> 
> no, it can be any waveform that has all frequency components high enough
> that the output coupling capacitor becomes negligible impedance and the
> load resistance is effectively in parallel with the plate resistance
> (which changes the load line).  doesn't have to be a sine nor even
> periodic.  but it can't be slow, and if we're trying to model the stage
> well, we can't make any assumptions about it.
> 
I am referring to the many (infinite) AC load lines that can exist when you
consider different frequencies, because of the AC impedance. You are probably
referring to the limiting load line where you neglect the caps (shorts) and
inductors (opens).
http://en.wikipedia.org/wiki/Load_line_(electronics)

> thing is, often between stages there *is* no output coupling cap.  the
> bias voltages are known and they can bring the DC bias down to be
> suitable for the following grid with a little B- added in.  if the
> grid-cathode remains negatively biased, there is little current draw,
> but i don't imagine that we can count on that for a guitar amp
> emulation, especially when the amp is cranked up to "11".
> 
> >   and give a ball park of what a preamp stage is gonna rail and
> > eventually distort at some freq) is a method of finding a clipper
> > representing the DC
> > nonlinearity assuming no dynamic feedback  in the path (i.e. due to
> > decoupling caps or miller caps) just like a .DC sweep does in spice
> > (feed the grid wit

[music-dsp] Re: Re: Simulating Valve Amps

2014-06-25 Thread Marco Lo Monaco


> sorry Marco, i never said a word about that.  you are projecting and
> you are mistaken.  (more precisely, you are misrepresenting another's
> position).
> 
Ok, apologies. What is important to understand about the "new machinery" is
that the nonlinear processing and equations that are needed DON'T come
straight out of any load-line concept, because they are sheared by an amount
that depends on the _dynamic properties_ of the system (inductors and
capacitors). Actually, a DC load line and its related nonlinear equation
always result from a sheared version of the nonlinear equations through some
resistor values; this is something I learned 10 years ago and it is not very
often taught in EE courses: the concept of shear is tightly tied to analog
feedback involving nonlinear amplifiers.

Using a load line (AC load lines don't help, because they are used only for
sine-like tones and give a ballpark of where a preamp stage is going to rail
and eventually distort at some frequency) is a method of finding a clipper
representing the DC nonlinearity, assuming no dynamic feedback in the path
(i.e. due to decoupling caps or Miller caps), just like a .DC sweep does in
SPICE (feed the grid with a voltage and measure how the plate voltage clips).
Of course you get a SISO (1-dimensional) table, but you won't be considering
any real dynamic/nonlinear interaction due to feedback. This old-style
approach (as you may know) was used (plus oversampling) by LINE6 and all the
other competitors back in the 1990s, because it was the only method
affordable on fixed-point, low-resource DSPs (but I know you know this).

> i *am* "clinging on an old-school concept" of meaningfully sectioning
> different stages of the amp for simulation.
> 
> 

Actually, there is no meaningful method besides "neglecting-or-keeping" the
interaction among stages. Macak's work (the Czech guy referenced by Andrew)
analyzed (through "connection current" usage) a way to simplify it using the
DK method.
This is actually one of the hardest choices to make, because sometimes the
audible differences are really subtle and not perceivable, so the big effort
of dealing with it might be useless.
When the guys here were talking about quality and CPU hogging, they meant
just this. Considering the entire circuit holistically, as a whole, is CPU
intensive and gives the best quality you can get (once oversampling is also
applied), supposing you can really hear the difference. (D)K-methods are
HOLISTIC methods, where you take "everything or nothing". It's really hard to
tune and adjust things other than by changing the values of components and
finding the best nonlinear law (Koren's, actually). This is, btw, one of the
known features/drawbacks of general physical modeling (in contrast to signal
modeling): if something doesn't match, you can't really do anything about the
model itself once you are sure that everything has been
done/implemented/analyzed properly.

> i got that, and i am still reading (and wondering what the specific
> relevance is).  again, never heard of "fixed-point iteration" before.

What you suggested is not quite a fixed-point iteration, but it looks like
one.

M.



[music-dsp] Re: Simulating Valve Amps

2014-06-25 Thread Marco Lo Monaco
RBJ:
> > for the specific examples discussed in this thread, and for a modest
> > sampling grid resolution. If that number seems preposterous, perhaps
> > consider the curse of dimensionality. Things that work great in 2 or 3
> > dimensions can become astronomically complex in 10+ dimensions.
> 
> well, for modeling the mapping function from grid to plate, with a load
> line of known slope (but the V_B+ might sag, so that's a variable) and
> with a possible reactive component in feedback, how many dimensions are
> we gonna end up with?  not the whole damn amp; *one* stage of the amp.

Sorry Robert, but here you are clinging to an old-school concept of modeling,
which is filter-clipper-filter.

Don't cling to load lines: those are methods to deal with the DC operating
point and find a static clipper.
Using the (D)K-methods basically means solving a nonlinear differential set
of equations, and the dimensions are really, and often, more than 1 due to
the theory; again, see the papers.

For one stage, whether you assume sag or not (it doesn't matter, actually),
the dimension is R2->R2, as I already said.
Interconnecting more than one stage scales the dimensions up by 2, 4, 6, 8,
etc.

> >   Then we can directly solve the fixed-point equations
> 
> what "fixed-point equations"?  i am not precluding either floating-point
nor
> fixed-point implementation.  why do you keep bringing that up?
> 
> whether this is implemented as floating or fixed point is a separate (and
later
> question).  i would probably first simulate with floating-point and when
that
> starts working to our satisfaction, then start thinking about how to do
this in a
> stomp box with a fixed-point DSP.
> 

 See my last post.

M. 




[music-dsp] Re: [admin] Re: Simulating Valve Amps

2014-06-25 Thread Marco Lo Monaco
I will try to give my contribution to these latest results in the discussion,
to make things clear, even if I could hardly follow all the details in the
super-lengthy thread:

> which is the point.  so these iterations don't have to happen *during*
> runtime.

They actually do, and that is desirable if the nonlinearity depends on the
parametric behavior of the circuit (knobs/pots/switches/etc.). See below for
more details.

> i hadn't figured out the points about fixed-point.  seems to be
> unrelated to the issue.

We are talking about the fixed-point iteration numerical scheme, not
fixed-point precision: I know you got it, but this is just for the sake of
clarity. The scheme you suggested (assuming always 2-3 iterations, which is
not the case, as already said) looks like a fixed-point iteration, which
works only if the function is Lipschitz continuous with constant < 1 (see
http://en.wikipedia.org/wiki/Fixed-point_iteration), and these are the
conditions that must be met (though I have never come across analog
nonlinearities that were Lipschitz continuous in that sense).

> 
> you have a known function g() with arguments y0, x1, x2, x3... and you
> are implementing this procedure:
> 
>   float f(float x1, float x2, float x3, ...)
>   {
>   y = some_initial_estimate;
> 
>   do
>   {
>   y0 = y;
>   y = g(y0, x1, x2, x3,...);
>   }
>   while ( abs(y - y0) > epsilon );
> 
>   return y;
>   }
> 
> you enter the top of the procedure with given arguments, x1, x2, x3
> ...  and you come out with a result y.
> 
> all's i am saying is to explore representing and implementing f()
> directly in terms of *actual* independent arguments.  not just for the
> simplification of the runtime algorithm, but also to explore the nature
> of the *actual* mapping (so maybe you have some idea of how bad the
> aliasing might be) which is obscured when iterating on g().
> 

Yes, it's a memoryless function and you are right that numerical schemes are
not mandatory (also, they need a lid, as you say, on the max number of
iterations), BUT please keep in mind these facts:
1) tanh() is a simple example; most real-world nonlinearities are MIMO
(meaning R^n->R^m), and interpolating with your lookup tables is simply
impractical for dimensions > 2. AFAIK, some still use bicubic splines to
avoid the NR scheme, but you need the memory, and the overall cost can be
comparable to NR iterations at the end of the day...
2) the actual nonlinearity used in the DK-method (K-method), which
Ethan/Urs/Andrew are all basically using (aware of it or not), is not the one
you might imagine for a diode or triode law, because it is a SHEARED
nonlinearity. So let's say your differential amplifier (like the one found in
the Moog ladder) is tanh-like:
(*) Inl = tanh(Vd)
Your actual nonlinear memoryless function (due to the elimination of the
delay-free loop when going from the analog domain to the digital domain via
implicit methods) IS NOT THAT ONE but a SHEARED version via the K matrix.
Urs used:
y = g * ( x - tanh( y ) ) + s  (g, x, s known; y unknown)
which is basically rewritable in this form:
y = -g * tanh(y) + g*x + s = K*Fnl(y) + p, where K = -g and p = g*x + s,
which is (*) sheared by a factor K. ONLY in the SISO case (R->R) can you
avoid the iterative scheme and use the geometrical properties of shearing to
build a lookup table with the desired resolution (see the paper reference
below for more details). As I said, some use tables also for R^2->R^2 MIMO,
but mostly everyone prefers NR because:
3) memory and resolution are an issue when the shearing K matrix (a scalar in
the SISO system above) DEPENDS on KNOBS. That means you would have to rebuild
the sheared table every time a knob is changed, which is not bad for SISO but
impractical for MIMO, because you would need to offload the LUT
precomputation (using, of course, NR again) and store all the values in the
bunch of memory you need.
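For illustration, the implicit equation y = g*(x - tanh(y)) + s quoted above
can be solved per sample with Newton-Raphson like this (a minimal sketch, not
anyone's production code; in a real filter the starting point would be the
previous sample's y rather than 0):

```python
import math

def solve_implicit(g, x, s, tol=1e-12, max_iter=50):
    """Newton-Raphson on the residual r(y) = y - g*(x - tanh(y)) - s,
    whose derivative is r'(y) = 1 + g*(1 - tanh(y)^2)."""
    y = 0.0   # a real filter would warm-start from the previous sample
    for _ in range(max_iter):
        t = math.tanh(y)
        r = y - g * (x - t) - s
        y_new = y - r / (1.0 + g * (1.0 - t * t))
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

y = solve_implicit(0.5, 1.0, 0.0)   # solves y = 0.5*(1 - tanh(y))
```

For g >= 0 the residual's derivative is at least 1, so the iteration is well
behaved; the knob-dependent g enters only through p and K, which is exactly
why a precomputed table has to be rebuilt when g changes.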

Please refer to the original De Poli/Borin/Rocchesso paper, which back in the
late 1990s worked out all this machinery that basically everybody is now
starting to use (aware of it or not). It contains all the basic concepts
behind what we have been discussing:

G. Borin, G. De Poli, and D. Rocchesso, "Elimination of delay-free loops in
discrete-time models of nonlinear acoustic systems," in Proc. 1997 IEEE ASSP
Workshop on Applications of Signal Processing to Audio and Acoustics, Oct.
1997.

A note on aliasing: I don't know how easy it is to understand how bad
aliasing is in a (sheared) MIMO memoryless nonlinearity. You shouldn't reason
only from the SISO (tanh-like) case, because real circuits are much more
complicated.

> approximations happen, as they do in obtaining the original g() anyway,
> unless you're using a theoretical tube curve model like
> 
>  I_p  =  P * (V_gc + V_pc/mu)^(3/2)
> 
We all use Koren's model, which is actually a MIMO R^2->R^2 (considering also
the grid current) and needs Newton-Raphson to be solved, because most of the
time its shearing fac

[music-dsp] Re: Simulating Valve Amps

2014-06-23 Thread Marco Lo Monaco
My experience is that an average of 2-3 iterations is very likely for simple
circuits (overdrives), but the problem is that the max number of iterations
can easily worsen as the nonlinear component transitions from the linear to
the off/saturated region.
Given the generally accepted rule of using as a starting point the previous
sample's value of the nonlinear current, care must be taken when the
component starts saturating, because the i_NL(n-1) value may no longer be
close enough to the final solution. It is better, at that point, to start
from a saturated value (like 1 for a tanh-ish function), but such a heuristic
always wastes some iterations, since there is no way to predict what is going
to happen, AFAIK.

Things get more complicated when you have not a simple SISO tanh-ish function
(which the K-method and the DK-method - a derived application of it - solve
very easily for SISO nonlinearities with the trick of SHEARING a PWL lookup
table, the method I prefer in terms of efficiency where memory access is not
costly: with only a few muls/adds you reach the final value), BUT a MIMO one
like a triode or tetrode, especially if blocks are then interconnected (a
3-stage preamp is technically an R6->R6 nonlinearity to be solved with NR,
not exactly a painless task, mainly due to the potential CPU hog of inverting
the Jacobian).
A simple cathode-biased stage with Miller cap using Koren's equations can
reach a max of 30 iterations in the worst-case analysis (which in my case was
a high voltage on the grid at 10 kHz), even considering bisection or other
heuristics to get a good initial value.

The thing about oversampling, then, is that it lowers the average iteration
count, but it does not fix the choice of the initial guess, as someone here
implied. This should be cleared up.
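The effect of the starting guess can be illustrated on the toy implicit
equation y + g*tanh(y) = p (a sketch, not a measurement of any real circuit):
warm-starting Newton-Raphson from the previous solution needs fewer
worst-case iterations when the input moves in small steps, as it does under
oversampling, than when it moves in large ones.

```python
import math

def nr_iters(p, g, y_start, tol=1e-9, max_iter=50):
    """Solve y + g*tanh(y) = p by Newton-Raphson from y_start,
    returning (solution, iteration count)."""
    y, n = y_start, 0
    while n < max_iter:
        t = math.tanh(y)
        r = y + g * t - p
        if abs(r) < tol:
            break
        y -= r / (1.0 + g * (1.0 - t * t))
        n += 1
    return y, n

g = 5.0
# Ramp the input to p = 5.0 in fine steps (oversampled) vs coarse steps,
# always warm-starting from the previous solution.
y, fine_worst = 0.0, 0
for k in range(1, 101):
    y, n = nr_iters(0.05 * k, g, y)
    fine_worst = max(fine_worst, n)
y2, coarse_worst = 0.0, 0
for k in range(1, 11):
    y2, n = nr_iters(0.5 * k, g, y2)
    coarse_worst = max(coarse_worst, n)
```

Both ramps end at the same p, so the two worst-case counts compare the step
size alone; the saturation-region behavior Marco describes is exactly where
the warm start degrades most.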

So, I would use a sheared LUT for SISO systems (all diode-based overdrives)
and Newton-Raphson for MIMO systems. I have also failed at using lookup
tables in an R2->R2 MIMO case, like the Czech guys are doing, because the
requirements for good resolution cost too much memory, and the LUT is useless
if it requires recomputation of the K matrix (i.e. if it depends on the knobs
of the circuit): at that point it is much better to use iterative solvers.
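
For what it's worth, the shearing idea for the SISO case can be sketched like
this (a hedged toy, not the K-method proper: tanh stands in for the PWL
nonlinearity and K = 2.0 is an arbitrary scalar). Tabulate p = v + K*f(v)
once, resample it uniformly in p, and the per-sample solve collapses to an
index plus one lerp:

```python
import math

# Sheared-LUT sketch for a SISO nonlinearity i = f(v) coupled to the
# linear part through v + K*f(v) = p.  K and the tanh shape are assumed
# placeholders for illustration.
K = 2.0

def f(v):
    return math.tanh(v)          # increasing, so p_of_v is monotonic

def p_of_v(v):
    return v + K * f(v)          # the "sheared" axis

def v_of_p(p, lo=-10.0, hi=10.0):
    # offline inversion by bisection, used only while building the table
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_of_v(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 1024
P_MIN, P_MAX = p_of_v(-5.0), p_of_v(5.0)
TABLE = [f(v_of_p(P_MIN + (P_MAX - P_MIN) * m / (N - 1))) for m in range(N)]

def i_from_p(p):
    """Per-sample solve: clamp, one index computation, one lerp."""
    x = (min(max(p, P_MIN), P_MAX) - P_MIN) / (P_MAX - P_MIN) * (N - 1)
    m = min(int(x), N - 2)
    t = x - m
    return TABLE[m] + t * (TABLE[m + 1] - TABLE[m])
```

No iteration at runtime, which is the efficiency argument made above; the
cost moves into table size and the one-time build.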

M.

> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di Urs Heckmann
> Inviato: lunedì 23 giugno 2014 08:37
> A: A discussion list for music-related DSP
> Oggetto: Re: [music-dsp] Simulating Valve Amps
> 
> 
> On 23.06.2014, at 16:37, robert bristow-johnson
>  wrote:
> 
> > because it was claimed that a finite (and small) number of iterations
was
> sufficient.
> 
> Well, to be precise, all I claimed was an *average* of 2 iterations for a
given
> purpose, and with given means to optimise (e.g. vector registers). I did
so to
> underline that an implementation for real time use is possible. I had no
> intention of saying that any finite (and small) number of iterations was
> sufficient in any arbitrary case and condition - I can only speak about
the
> models that we have implemented and observed.
> 
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
dsp
> links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] R: Simulating Valve Amps

2014-06-18 Thread Marco Lo Monaco
Thanks a lot, Nigel, for your info.
It is in my plans to improve my current Leslie simulator, and I had heard a
lot of rumors about this VENTILATOR and also BURN (by Scognamiglio, another
Italian whom you may know). I was impressed, because I also think that
nowadays there can't be any incredible technology under the hood of a Leslie
sim, but maybe a big bag of tricks.

Ciao

M.

> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di Nigel Redmon
> Inviato: mercoledì 18 giugno 2014 11:25
> A: A discussion list for music-related DSP
> Oggetto: Re: [music-dsp] Simulating Valve Amps
> 
> BTW, I do know that it was developed with the Sonic Core SCOPE SDK, and I
> suspect it’s just using fairly routine DSP blocks, with a lot of care in
tweaking
> the sound. (It runs in my mind that I might have seen some block diagrams
> on a forum back when he was developing it—the point is that I don’t think
> there is any cutting-edge tech involved.) But that’s all I know.
> 
> On Jun 18, 2014, at 10:40 AM, Nigel Redmon  wrote:
> 
> > No, Marco, sorry. I wish I did. The result is very good, and a huge
> > leap from the sim in the CX-3. Very annoying that they don’t support
> > MIDI switching of the speed…
> >
> > http://www.earlevel.com/main/2013/02/16/ventilator-adapter-in-a-mint-t
> > in/
> >
> >
> > On Jun 18, 2014, at 10:11 AM, Marco Lo Monaco
>  wrote:
> >
> >> Ciao Nigel, talking about the VENTILATOR, do you know something more
> >> about its secrets/internals?
> >> :)
> >>
> >> M.
> >>
> >>> -Messaggio originale-
> >>> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> >>> boun...@music.columbia.edu] Per conto di Nigel Redmon
> >>> Inviato: mercoledì 18 giugno 2014 08:22
> >>> A: A discussion list for music-related DSP
> >>> Oggetto: Re: [music-dsp] Simulating Valve Amps
> >>>
> >>> Well, some people think it’s close enough for rock n rock (amp
> >>> sims),
> >> others
> >>> don’t. It’s the same with analog synths and virtual analog. But
> >>> there’s
> >> also
> >>> the comfort of tube amps, and there’s the comfort of the limited
> >>> sound palette of using the amp that you know and love. Amp sims are
> >>> really about variety (can’t afford a Plexi, a Twin Reverb, SLO,
> >>> AC-30, and a few
> >> boutique
> >>> amps? Now you can).
> >>>
> >>> I appreciate the old stuff, but I appreciate the convenience and
> >> flexibility of
> >>> the new stuff—to me it *is* close enough for rock n roll (new stuff
> >>> in general—I don’t play much guitar). I play a B3 clone because I
> >>> hauled a Hammond decades ago, and I hauled and still have (needs
> >>> work) a Leslie,
> >> but
> >>> I’d just as soon use my Ventilator pedal on the CX-3 (yes, with
> >> programmable
> >>> leakiness and aging of the tone wheels, etc.)—more convenient, and
> >>> gets the sound I want. Others would shudder at the thought. Well,
> >>> until their backs start giving out…I know a hardcore, old-time B3
> >>> blues player
> >> (“Mule”—
> >>> a nickname he earning for hauling around his B3 and Leslies) who
> >>> picked up
> >> a
> >>> clone for his aging back after hearing my CX-3 though the
> >>> Ventilator. Not
> >> for
> >>> all gigs, mind you, but as an option to go with for some gigs. The
> >>> point
> >> is that
> >>> if the tradeoffs are attractive enough, it’s easier to let yourself
> >>> try
> >> new things
> >>> even if you feel that it falls ever so slightly short of what you’re
> >>> used
> >> to, or
> >>> strays from your comfort zone.
> >>>
> >>> So while some might feel that amp sims haven’t arrived yet, other
> >>> might
> >> feel,
> >>> “where the heck have you been the past decade?" ;-)
> >>>
> >>>
> >>> On Jun 17, 2014, at 6:59 PM, robert bristow-johnson
> >>>  wrote:
> >>>
> >>>> On 6/17/14 8:24 PM, Nigel Redmon wrote:
> >>>>> (Thinking outside the nest…)
> >>>>>
> >>>>>> (...maybe that means opening up the LPF as the gain knob setting
> >>>>>> is
> >>>>>> reduced)
> >>>>>

[music-dsp] R: Simulating Valve Amps

2014-06-18 Thread Marco Lo Monaco
Ciao Nigel, talking about the VENTILATOR, do you know something more about
its secrets/internals?
:)

M.

> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di Nigel Redmon
> Inviato: mercoledì 18 giugno 2014 08:22
> A: A discussion list for music-related DSP
> Oggetto: Re: [music-dsp] Simulating Valve Amps
> 
> Well, some people think it’s close enough for rock n rock (amp sims),
others
> don’t. It’s the same with analog synths and virtual analog. But there’s
also
> the comfort of tube amps, and there’s the comfort of the limited sound
> palette of using the amp that you know and love. Amp sims are really about
> variety (can’t afford a Plexi, a Twin Reverb, SLO, AC-30, and a few
boutique
> amps? Now you can).
> 
> I appreciate the old stuff, but I appreciate the convenience and
flexibility of
> the new stuff—to me it *is* close enough for rock n roll (new stuff in
> general—I don’t play much guitar). I play a B3 clone because I hauled a
> Hammond decades ago, and I hauled and still have (needs work) a Leslie,
but
> I’d just as soon use my Ventilator pedal on the CX-3 (yes, with
programmable
> leakiness and aging of the tone wheels, etc.)—more convenient, and gets
> the sound I want. Others would shudder at the thought. Well, until their
> backs start giving out…I know a hardcore, old-time B3 blues player
(“Mule”—
> a nickname he earning for hauling around his B3 and Leslies) who picked up
a
> clone for his aging back after hearing my CX-3 though the Ventilator. Not
for
> all gigs, mind you, but as an option to go with for some gigs. The point
is that
> if the tradeoffs are attractive enough, it’s easier to let yourself try
new things
> even if you feel that it falls ever so slightly short of what you’re used
to, or
> strays from your comfort zone.
> 
> So while some might feel that amp sims haven’t arrived yet, other might
feel,
> “where the heck have you been the past decade?" ;-)
> 
> 
> On Jun 17, 2014, at 6:59 PM, robert bristow-johnson
>  wrote:
> 
> > On 6/17/14 8:24 PM, Nigel Redmon wrote:
> >> (Thinking outside the nest…)
> >>
> >>> (...maybe that means opening up the LPF as the gain knob setting is
> >>> reduced)
> >> Yes
> >>
> >> And good discussion elsewhere in there, thanks Robert.
> >>
> > yer welcome, i guess.
> >
> > you may be thinking outside the nest; i'm just thinking out loud.
> >
> > i think, like a multieffects box, we oughta be able to simulate all
these amps
> (don't forget the Mesa Boogie) and their different settings in a single
DSP
> box with enough MIPS and a lotta oversampling.  dunno if simulating the
> 50/60 Hz hum and shot noise would be good or not (i know of a B3 emulation
> that simulates the "din" of all 60-whatever keys leaking into the mix even
> when they're all key-up).  but they oughta be able to model each
> deterministic thing: the power supply sag, changing bias points,
hysteresis in
> transformers, capacitance in feedback around a non-linear element (might
> use Euler's forward differences in doing that), whatever.  whatever it is,
if
> you take out the hum and shot noise, it's a deterministic function of
solely
> the guitar input and the knob settings, and if we can land human beings on
> the moon, we oughta be able to figure out what that deterministic function
> is.  for each amp model.  it shouldn't be more mystical than that (but
there
> *is* a sorta mysticism with musicians about this old analog gear that we
just
> cannot adequately mimic).
> >
> > and thanks to you, Nigel.
> >
> > L8r,
> >
> > r b-j
> >
> >
> >> On Jun 17, 2014, at 4:07 PM, robert bristow-
> johnson  wrote:
> >>> On 6/17/14 3:30 PM, Nigel Redmon wrote:
>  This is getting…nesty...
> >>> yah 'vell, vot 'r ya gonna do?  :-)
> >>>
>  On Jun 17, 2014, at 10:42 AM, robert bristow-
> johnson   wrote:
> 
> > On 6/17/14 12:57 PM, Nigel Redmon wrote:
> >> On Jun 17, 2014, at 9:09 AM, robert bristow-
> johnsonwrote:
> >>
> >>> On 6/17/14 5:30 AM, Nigel Redmon wrote:
> >>>
> > ...
>  Anyway, just keep in mind that the particular classic amps
>  don’t sound "better" simply because they are analog. They sound
>  better because over the decades they’ve been around, they
>  survived—because they do sound good. There are plenty of awful
>  sounding analog guitar amps (and compressors, and preamps,
>  and…) that didn’t last because they didn’t sound particularly
>  good. Then, the modeling amp has the disadvantage that they are
>  usually employed to recreate a classic amp exactly. So the best
>  they can do is break even in sound, then win in versatility.
>  And an AC-30 or Matchless preset on a modeler that doesn’t
>  sound exactly like the amp it models loses automatically—even
>  if it sounds better— because it failed to hit the target. (And
>  it doesn’t helped that amps of t

[music-dsp] R: Simulating Valve Amps

2014-06-18 Thread Marco Lo Monaco
Wow, what a subject! It seems that everyone here has been involved with
analog modeling and guitar amp simulation in the past :)))

I agree with Robert (except that I actually use implicit methods, and
nowadays I think everyone is using BLN and/or multistep methods even in
nonlinear modeling, as I know Andrew is doing): there is a lot of mysticism,
and even if the model compares dB-to-dB to the original golden unit,
musicians (well, guitarists!) still psychologically prefer the analog, and
the digital is never close enough to reality (maybe they need real hardware
with the same knob colors to get the same feel). This is at least my
experience from doing analog modeling professionally for over a decade and
having to deal with guitarists' opinions!

I think that it's only a matter of finding the sweet spots and features to
model (we don't need to model each passive component's nonlinearity unless it
proves to be very meaningful), and I don't even think that Volterra
kernels/NARMAX are the panacea (like Kemper is doing). I may sound a bit old
school, but if we deterministically know everything about a system (i.e. its
equations, especially the nonlinear ones), it's only a matter of designing
good algorithms (and the theory is quite strong nowadays, and has been for
over 15 years) and having enough CPU to push oversampling to the right
amount. So it's a matter of time at this point, because the tricks and the
knowledge are already there! Koren has done a nice job with his
phenomenological model, but the formulas still lack a bit of something, which
I guess could be a future improvement. The main issue is that Koren's
equations are already hard to solve even with explicit methods and need a lot
of heuristics even for a simple cathode-biased triode stage!
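
For reference, Koren's plate-current expression for a triode is compact
enough to sketch; the 12AX7-style parameter values below are commonly quoted
ones and should be treated as assumptions rather than gospel:

```python
import math

# Koren's phenomenological triode model (plate current only).
# MU, EX, KG1, KP, KVB are commonly quoted 12AX7-style values (assumed).
MU, EX, KG1, KP, KVB = 100.0, 1.4, 1060.0, 600.0, 300.0

def plate_current(ep, eg):
    """Plate current (amps) for plate voltage ep and grid voltage eg."""
    x = KP * (1.0 / MU + eg / math.sqrt(KVB + ep * ep))
    # E1 via a softplus, with a guard against exp() overflow for large x
    e1 = (ep / KP) * (x if x > 50.0 else math.log1p(math.exp(x)))
    if e1 <= 0.0:
        return 0.0
    return 2.0 * (e1 ** EX) / KG1   # the (1 + sign(E1)) factor for E1 > 0
```

The non-integer exponent and the exp-inside-log structure are what make these
equations unpleasant inside an iterative solver, as the text says.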

And yes, as someone already said, analog is all about feedback, and feedback
creates delay-loop problems in digital, hence explicit methods, etc.
I think the new trend is to finally model loudspeaker nonlinearities and,
most of all, to have a method for parametrizing them (which requires good
instrumentation and at least a laser vibrometer).
Klippel has worked on this for more than 10 years and has reached a good
point. Also, some works by Balázs Bank, Yeh and the Finnish guys are
interesting and could be a simpler/more effective approach for modern CPUs
(basically using clippers and pre/post filtering).
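
The clipper-plus-pre/post-filtering structure mentioned in the last sentence
can be sketched very simply (the one-pole coefficients and the tanh shape are
arbitrary placeholders, not anyone's published model):

```python
import math

def onepole_lp(x, state, a):
    """y[n] = y[n-1] + a*(x[n] - y[n-1]); returns (output list, final state)."""
    y = []
    for s in x:
        state += a * (s - state)
        y.append(state)
    return y, state

def clip(x):
    return [math.tanh(2.0 * s) for s in x]   # arbitrary static clipper

def process(x):
    # pre-filter -> memoryless clipper -> post-filter; coefficients are
    # made-up placeholders chosen only to show the structure
    y, _ = onepole_lp(x, 0.0, 0.25)
    y = clip(y)
    y, _ = onepole_lp(y, 0.0, 0.6)
    return y
```

The appeal for modern CPUs is clear: two cheap filters and one memoryless
table-friendly nonlinearity, no iteration anywhere.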

My 2 eurocents!

Marco



> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di robert bristow-johnson
> Inviato: mercoledì 18 giugno 2014 04:00
> A: music-dsp@music.columbia.edu
> Oggetto: Re: [music-dsp] Simulating Valve Amps
> 
> On 6/17/14 8:24 PM, Nigel Redmon wrote:
> > (Thinking outside the nest…)
> >
> >> (...maybe that means opening up the LPF as the gain knob setting is
> >> reduced)
> > Yes
> >
> > And good discussion elsewhere in there, thanks Robert.
> >
> yer welcome, i guess.
> 
> you may be thinking outside the nest; i'm just thinking out loud.
> 
> i think, like a multieffects box, we oughta be able to simulate all these
amps
> (don't forget the Mesa Boogie) and their different settings in a single
DSP
> box with enough MIPS and a lotta oversampling.  dunno if simulating the
> 50/60 Hz hum and shot noise would be good or not (i know of a B3 emulation
> that simulates the "din" of all 60-whatever keys leaking into the mix even
> when they're all key-up).  but they oughta be able to model each
> deterministic thing: the power supply sag, changing bias points,
hysteresis in
> transformers, capacitance in feedback around a non-linear element (might
> use Euler's forward differences in doing that), whatever.  whatever it is,
if
> you take out the hum and shot noise, it's a deterministic function of
solely
> the guitar input and the knob settings, and if we can land human beings on
> the moon, we oughta be able to figure out what that deterministic function
> is.  for each amp model.  it shouldn't be more mystical than that (but
there
> *is* a sorta mysticism with musicians about this old analog gear that we
just
> cannot adequately mimic).
> 
> and thanks to you, Nigel.
> 
> L8r,
> 
> r b-j
> 
> 
> > On Jun 17, 2014, at 4:07 PM, robert bristow-
> johnson  wrote:
> >> On 6/17/14 3:30 PM, Nigel Redmon wrote:
> >>> This is getting…nesty...
> >> yah 'vell, vot 'r ya gonna do?  :-)
> >>
> >>> On Jun 17, 2014, at 10:42 AM, robert bristow-
> johnson   wrote:
> >>>
>  On 6/17/14 12:57 PM, Nigel Redmon wrote:
> > On Jun 17, 2014, at 9:09 AM, robert bristow-
> johnsonwrote:
> >
> >> On 6/17/14 5:30 AM, Nigel Redmon wrote:
> >>
>  ...
> >>> Anyway, just keep in mind that the particular classic amps don’t
> >>> sound "better" simply because they are analog. They sound better
> >>> because over the decades they’ve been around, they
> >>> survived—because they do sound good. There are plenty of awful
> >>> sounding analog guitar amps (and compressors, a

[music-dsp] R: R: Dither video and articles

2014-03-29 Thread Marco Lo Monaco
Yes, but let's remember that some "obsolete" architectures (like x87)
supported an internal 80-bit format for floats (IEEE 32-bit), and moreover,
if you were clever enough to dump as few of your temporary results to memory
as possible (where the 80- to 32-bit truncation had to happen), you could
benefit from the 80 bits in successive computations.
Unfortunately, with SSE this doesn't seem to happen, and results can be quite
different from the old x87 extended precision.
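
The effect of truncating every intermediate result to 32 bits versus keeping
the accumulator in a wider format is easy to demonstrate; a small pure-Python
sketch (doubles standing in for the wide x87-style temporaries, a struct
round-trip emulating the store-to-float truncation):

```python
import struct

def to_f32(x):
    """Round a Python double to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Catastrophic case: a small term is lost entirely once the intermediate
# sum is stored in 32 bits, but survives in the wider format.
lost = to_f32(10000.0 + 1e-4) - 10000.0   # 1e-4 is below half an ulp at 10000
kept = (10000.0 + 1e-4) - 10000.0         # the double keeps it

# Accumulation: truncating the accumulator after every add (storing each
# temp to a float variable) versus keeping it wide across the whole sum.
acc32, acc64 = 0.0, 0.0
for _ in range(10_000):
    acc32 = to_f32(acc32 + 0.1)
    acc64 += 0.1
```

Comparing `acc32` and `acc64` against the exact sum 1000 shows the truncated
chain drifting far more than the wide one, which is the x87-vs-SSE point
above in miniature.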

Thanks Nigel and RBJ, I want to look closer at the details you exposed as
soon as I have a bit of time.

M.

> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di Nigel Redmon
> Inviato: sabato 29 marzo 2014 17:37
> A: A discussion list for music-related DSP
> Oggetto: Re: [music-dsp] R: Dither video and articles
> 
> (Not address to you, Robert, because you know it well...)
> 
> One thing people don't realize is that integer processors like the 56k
family
> had a full-precision accumulator for 24-bit multiply results (48-bit),
plus 8 bits
> of headroom (56 bit accumulator). Floating point, in general, truncates on
> every operation.
> 
> Of course if you've got double precision floats, which are just about free
for
> native (host based) DSP), life is pretty easy...
> 
> 
> On Mar 29, 2014, at 7:55 AM, robert bristow-johnson
>  wrote:
> 
> > On 3/29/14 4:43 AM, Nigel Redmon wrote:
> >> 20 * log10(2^num_bits)
> >>
> >
> > which is about 6.0206 * num_bits.
> >
> >> So, 32 bits is 192.7 dB.
> >
> > no headroom.
> >
> >> 32-bit floating point has 23 bits for mantissa, plus a hidden bit from
> normalization, plus a sign bit, for 25 bits of precision, so that works
out to
> 150.5 dB.
> >
> > and all the headroom in the world (i think about 750 dB, i wonder what
750
> dB above my threshold of hearing might sound like :-).
> >
> >
> > actually, depending on the pdf of the signal, you get a little bit
> > more S/N with float, because when the signal sample values are much
> > smaller than the max signal (which is the rails minus the dB
> > headroom), so also is the quantization error reduced.  if you assume
> > uniform pdf up to the max signal level (which is what i do with fixed,
> > therefore no annoying 1.76 dB constant added as when it's a sine
> > wave), this additional S/N you get from float is 10*log10(7/4) = 2.43
> > dB and you get this number from
> >
> >   +inf
> >   SUM{ (1/2 * 2^(-n))  *  (2^(-n))^2 }=   4/7
> >   n=0
> >
> >
> > the first factor is the percentage of time the signal is in that
particular range
> (assuming uniform pdf) and the latter factor is the relative power of the
> quantization error.
> >
> > so that 150.5 dB is really closer to 153 dB and the difference very
close to 40
> dB.
> >
> >
> >
> >> On Mar 29, 2014, at 1:06 AM, Marco Lo
> Monaco  wrote:
> >>
> >>> Hey Robert, can you give a quick detailed computation about the
> >>> 32bit fixed vs floating (or where the 40dB limit headroom just comes
> from)?
> >>> Fixed point to me is kind of the dark side of the force...
> >
> > in general (well, assuming uniform pdf up to the max signal level), to
> compare the tradeoff between fixed and float of the same word-width, it's:
> >
> >   8.45 dB  +  (headroom in dB) >   (6.02 dB) * (num bits in exponent)
> >
> >
> > if that inequality is satisfied, float is better (has higher S/N).  if
not, fixed
> has lower S/N.
> >
> > so, if you're doing something with an ASIC or FPGA or something, if you
> decide you only need 10 dB of headroom, then 3 bits of exponent is all you
> need.  any more is a waste of bits in the binary word.  when i was at a
not-to-
> be-named music synth company (with their own ASIC development), they
> had a format with 5 bits of exponent.  probably all they would ever need.
> sometimes those were 16-bit numbers so there were only 11 bits left for
the
> sign and mantissa.  counting the "hidden 1 bit" it would be like nicely
> normalized 12-bit linear.  a 74 dB S/N for all levels and a nearly
unlimited
> dynamic range (192 dB).
> >
> >
> > --
> >
> > r b-j  r...@audioimagination.com
> >
> > "Imagination is more important than knowledge."
> >
> >
> >
> > --
> > dupswapdrop -- the music-dsp mailing list and website:
> > subscription info, FAQ, source code archive, list archive, book
> > reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> > http://music.columbia.edu/mailman/listinfo/music-dsp
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
dsp
> links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: Dither video and articles

2014-03-29 Thread Marco Lo Monaco
Hey Robert, can you give a quick detailed computation about the 32bit fixed
vs floating (or where the 40dB limit headroom just comes from)?
Fixed point to me is kind of the dark side of the force...

:)

M.


> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di robert bristow-johnson
> Inviato: venerdì 28 marzo 2014 18:04
> A: music-dsp@music.columbia.edu
> Oggetto: Re: [music-dsp] Dither video and articles
> 
> On 3/28/14 12:25 PM, Didier Dambrin wrote:
> > my opinion is: above 14bit, dithering is pointless (other than for
> > marketing reasons),
> 
> 14 bits???  i seriously disagree.  i dunno about you, but i still listen
to red-
> book CDs (which are 2-channel, uncompressed 16-bit fixed-point).
> they would sound like excrement if not well dithered when mastered to the
> 16-bit medium.
> 
> in fact, i think that in a very real manner, Stan Lipshitz and John
Vanderkooy
> and maybe their grad student, Robert Wannamaker, did no less than *save*
> the red-book CD format in the late 80s, early 90s.  and they did it
without
> touching the actual format.  same 44.1 kHz, same 2-channels, same 16-bit
> fixed-point PCM words.  they did it with optimizing the quantization to 16
bits
> and they did that with (1) dithering the quantization and (2)
noise-shaping
> the quantization.
> 
> the idea is to get the very best 16-bit words you can outa audio that has
been
> recorded, synthesized, processed, and mixed to a much higher precision.
i'm
> still sorta agnostic about float v. fixed except that i had shown that for
the
> standard IEEE 32-bit floating format (which has 8 exponent bits), that you
do
> better with 32-bit fixed as long as the headroom you need is less than 40
dB.
> if all you need is 12 dB headroom (and why would anyone need more than
> that?) you will have 28 dB better S/N ratio with 32-bit fixed-point.
> 
> > and all of the "demonstrations" will always make you hear 10bit worth
> > of audio in a 16bit file & tell you to crank the volume to death
> 
> to *hear* a difference non-subtly, you may have to go down to as few as
> 7 bits.  in 2008 i presented a side-by-side comparison between
floating-point
> and fixed-point quantization (
> http://www.aes.org/events/125/tutorials/session.cfm?code=T19 ) trying to
> compare apples-to-apples.  and i wanted people to readily hear
differences.
> in order to do that i had to go down to 7 bits (the floats had 3 exponent
bits, 1
> sign bit, 3 additional mantissa bits and a hidden leading "1").
> 
> --
> 
> r b-j  r...@audioimagination.com
> 
> "Imagination is more important than knowledge."
> 
> 
> 
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
dsp
> links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: The Uncertainties in Frequency Recognition

2014-03-12 Thread Marco Lo Monaco
 
> Cool!  I hadn't seen that name before--also it has a nice formula :) I'll
read
> more later.

Hey Robert, that's where the sincM formula (the one I told you about) comes
from, and that's how BLIT was worked out by Stilson using that approach. The
idea of convolving the Dirichlet kernel with many filtering shapes came out,
I think, for the first time with his paper.
Other formulations are the Discrete Summation Formulae by Moorer, which use
the same ideas in the proofs based on trigonometric identities.

So funny how everything is (and must be) linked in the theory.
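
For anyone following along, the sincM in question (the periodic-sinc /
Dirichlet-kernel form used in Stilson's BLIT work) is nearly a one-liner; a
small sketch showing that sampled at integers it reproduces an impulse train
(using an odd period, where the limit at multiples of the period is exactly 1):

```python
import math

def sinc_m(x, m):
    """Periodic sinc: sin(pi*x) / (m * sin(pi*x/m)); m assumed odd here."""
    d = math.sin(math.pi * x / m)
    if abs(d) < 1e-12:
        return 1.0        # limit at multiples of m (exact for odd m)
    return math.sin(math.pi * x) / (m * d)

# Sampled at the integers this is a bandlimited impulse train: 1 at
# multiples of the period, (numerically) zero in between.
train = [sinc_m(n, 9) for n in range(18)]
```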

M.



[music-dsp] R: Best way to do sine hard sync?

2014-03-10 Thread Marco Lo Monaco
Hello Tobias,
You should also have a look at the BLOO method, explained in this thread a
long time ago:
http://music.columbia.edu/pipermail/music-dsp/2009-june/067853.html
That thread is quite long and the discussion pretty animated, but you could
get some new ideas from George's paper at the link
http://s1gnals.blogspot.it/2008/12/bloo_6897.html and look for additional
interpretations of it in the thread itself.

Hope this helps

Marco

> -Messaggio originale-
> Da: music-dsp-boun...@music.columbia.edu [mailto:music-dsp-
> boun...@music.columbia.edu] Per conto di Tobias Münzer
> Inviato: martedì 25 febbraio 2014 15:54
> A: music-dsp@music.columbia.edu
> Oggetto: [music-dsp] Best way to do sine hard sync?
> 
> Hi,
> 
> I would like to implement a hard-synced sine oscillator in my synth and I
am
> wondering which is the best way to do so.
> I read the paper 'Generation of bandlimited sync transitions for sine
> waveforms' by Vadim Zavalishin which compares several approaches.
> Are there any better ways then the 'frequency shifting method' described
in
> the paper?  (Better in terms of less aliasing, faster,..)
> 
> Thanks a lot
> 
> Best Regards
> Tobias
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book reviews,
dsp
> links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: music-dsp Digest, Vol 123, Issue 9

2014-03-05 Thread Marco Lo Monaco
Ahahahah FLSD!

-Messaggio originale-
Da: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] Per conto di robert
bristow-johnson
Inviato: mercoledì 5 marzo 2014 16:04
A: A discussion list for music-related DSP
Oggetto: Re: [music-dsp] music-dsp Digest, Vol 123, Issue 9

On 3/4/14 4:57 PM, robert bristow-johnson wrote:
> On 3/4/14 11:53 AM, Ethan Duni wrote:
>> LDS, LSD... you do the math...
>
> ya.  too much of the latter for me, i'm afraid.
>
> i've risked death on it.
>
> on my motorcycle in the '80s.
>
> https://maps.google.com/maps?f=q&source=s_q&hl=en&sll=41.87985,-87.617
> 34&sspn=0.020002,0.045447&vpsrc=6&t=h&ie=UTF8&ll=41.87985,-87.61734&sp
> n=0.020002,0.045447&z=15&ei=vEsWU-zYOcHMsgTvhIKIBw&pw=2
>

for some reason, the link didn't show the labels which was supposed to make
my pun clear.  the "LSD" here was s'posed to be Chicago's Lake Shore Drive.

i have risked death on LSD on my motorcycle in Chicago during the Reagan
years.

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-05 Thread Marco Lo Monaco
Ciao Greg,
any chance to download your paper somewhere? I am also interested in it :)
Thanks
Marco

-Messaggio originale-
Da: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] Per conto di
gjberc...@charter.net
Inviato: martedì 4 marzo 2014 20:14
A: music-dsp@music.columbia.edu
Oggetto: Re: [music-dsp] Iterative decomposition of an arbitrary frequency
response by biquad IIR

On Tue, 04 Mar 2014 13:49:43 -0500, Uli Brueggemann wrote:

>>Hello Greg,
>>
>>I've unsuccessfully tried to find more about FDLS.
>>Can you please give me a tip or even send me some info by PM?
>>
>>- Uli

PM sent.

- Greg

=

Everybody has their moment of great opportunity in life.
If you happen to miss the one you care about, then everything else in life
becomes eerily easy.

-- Douglas Adams
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-03 Thread Marco Lo Monaco
Stefan/Uli,
I use Scilab, and there is not as much there as in Matlab for filter
estimation. My experience with Scilab's invfreqz (which is based on the Levi
paper, I think) was always disappointing for practical analog filter
identification, but I bet I was unlucky, or I didn't get the point of how to
use it effectively.
What I personally don't like about these methods is that they also need a
weight vector on the data set, which adds a new degree of freedom that you
must guess (not only the order).
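
To make the weighting issue concrete, here is a hedged numpy sketch of a
Levi-style linearized least-squares fit (the kind of thing invfreqz does
under the hood), on a synthetic first-order target so the fit is exact; the
weight vector W is the extra degree of freedom complained about above:

```python
import numpy as np

# Target H(z) = b0 / (1 + a1 z^-1), "measured" on a frequency grid.
b0_t, a1_t = 0.3, -0.7
w = np.linspace(0.01, np.pi - 0.01, 200)
z1 = np.exp(-1j * w)                       # z^-1 on the unit circle
H = b0_t / (1 + a1_t * z1)

W = np.ones_like(w)                        # weight vector (user's guess!)

# Levi linearization: b0 + b1*z^-1 - H*a1*z^-1 = H, unknowns [b0, b1, a1]
A = np.column_stack([np.ones_like(z1), z1, -H * z1]) * W[:, None]
rhs = H * W
# stack real and imaginary parts so lstsq solves over the reals
AA = np.vstack([A.real, A.imag])
bb = np.concatenate([rhs.real, rhs.imag])
b0, b1, a1 = np.linalg.lstsq(AA, bb, rcond=None)[0]
```

With the model exactly in the fit class the recovery is essentially exact;
on real measured data the choice of W (and the order) changes the answer,
which is exactly the guesswork mentioned above.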

Let me know how your experience is/was then...

Marco

-Messaggio originale-
Da: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] Per conto di Stefan Sullivan
Inviato: lunedì 3 marzo 2014 12:17
A: A discussion list for music-related DSP
Oggetto: Re: [music-dsp] Iterative decomposition of an arbitrary frequency
response by biquad IIR

For matching just the magnitude response, MATLAB has a built-in function for
it:
http://www.mathworks.com/help/signal/ref/yulewalk.html

And maybehaps some more parametric modelling techniques will be useful for
you http://www.mathworks.com/help/signal/ug/parametric-modeling.html

-Stefan

On Mon, Mar 3, 2014 at 12:00 PM, Uli Brueggemann 
wrote:
> Hello music-dsp,
>
> I like to decompose an arbitrary frequency response by biquads. So I'm 
> searching for an algorithm or paper on how to run an iterative 
> decomposition. In my imagination it should be possible to
> a) find a first set of biquad parameters with a best fit frequency 
> response in comparison to the given response
> b) create a IIR filter with inverse gain
> c) apply the filter to the given response to get a new one
> d) repeat a)-d) until some end criteria is reached
>
> a) should include the different filter types like peaking filter, 
> lowpass, highpass, lowshelf, highshelf...
>
> Is there any good information for such an approach around? Is there a 
> downside for such an approach?
>
> Uli
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] R: Iterative decomposition of an arbitrary frequency response by biquad IIR

2014-03-03 Thread Marco Lo Monaco
Thanks Peter, that sounds interesting, have you ever tried DE on filter
estimation?

M.

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Peter S
Sent: Monday, 3 March 2014 13:04
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Iterative decomposition of an arbitrary frequency
response by biquad IIR

On 03/03/2014, Uli Brueggemann  wrote:
> I'd like to decompose an arbitrary frequency response into biquads. So I'm
> searching for an algorithm or paper on how to run an iterative 
> decomposition.
...
> Is there any good information for such an approach around? Is there a 
> downside for such an approach?

I guess you can always use a general iterative problem-solving approach like
differential evolution and apply it to audio filters:

https://www.google.com/search?q=differential+evolution

Such genetic algorithms are great for iteratively finding a 'best fit' set of
parameters.

- Peter
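For what it's worth, SciPy ships a differential-evolution optimizer, so this suggestion can be tried directly. Below is a sketch of mine that fits one peaking biquad's (f0, gain, Q) to a synthetic target response; the bounds, seed, grid, and cost function are all arbitrary choices, not anything from the thread.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import freqz

fs = 48000.0
w = np.linspace(0.05, 0.9 * np.pi, 128)          # evaluation grid, rad/sample

def peq_mag_db(f0, gain_db, q):
    """Magnitude response (dB) of one RBJ peaking EQ on the grid."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    al = np.sin(w0) / (2.0 * q)
    b = np.array([1 + al * A, -2 * np.cos(w0), 1 - al * A])
    a = np.array([1 + al / A, -2 * np.cos(w0), 1 - al / A])
    h = freqz(b / a[0], a / a[0], worN=w)[1]
    return 20.0 * np.log10(np.abs(h) + 1e-12)

target = peq_mag_db(2000.0, 5.0, 3.0)            # synthetic "measurement"

# evolve (f0, gain, Q) to minimize the mean squared dB error
result = differential_evolution(
    lambda p: np.mean((peq_mag_db(*p) - target) ** 2),
    bounds=[(100.0, 10000.0), (-12.0, 12.0), (0.5, 10.0)],
    seed=1, tol=1e-10)
```

With a synthetic target the optimizer should essentially recover the generating parameters; on measured data the residual tells you how many sections you still need.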


[music-dsp] R: Introductory literature for loudspeaker predistortion

2014-03-03 Thread Marco Lo Monaco
Hello Jerry,
Klippel is one of the most experienced people in the field, and I believe that
looking through his literature (papers) you will find a lot of inspiration.
AFAIK he uses a generic nonlinear dynamic model and performs various
identification techniques, among others Volterra series (which can actually
emulate only mild nonlinearities).
I remember there is a paper by him using the NARMAX method which could be
interesting for a university project.
Generally speaking, once you have an identified model, applying inversion can
in theory compensate all the nonlinearities as well: inverting a linear
system is pretty standard; it is not so easy for a nonlinear one.

For all of your questions:
1) How dependent on the signal is a nonlinear model for a speaker?
A: Well, quite a lot for large signals (speaker breakup), not so much for
small ones (where only the frequency response is taken into account). I would
call it a nonlinear process, so the question may be too generic.

2) Is it possible to do some good by measuring a single nonlinearity curve
for the loudspeaker under some condition and (assuming it is invertible)
applying the inverse nonlinearity as predistortion?
A: Technically yes, but AFAIK a static nonlinearity is too simple, partly
because of the difficulty of keeping the cone on axis for large applied
voltages. Under dynamic conditions the nonlinearity is thus different (take
that as an intuition).

3) Surely providing motion information (e.g. an accelerometer attached to
the cone) into a feedback loop would help to linearize things. Some
commercial subwoofers do this.
A: Klippel uses a laser vibrometer, which I believe is quite standard for
parameter estimation of the model. I guess an accelerometer is good for
low-bandwidth signals (subwoofers) but not for midranges/tweeters.

4) The first thing I would try is drive a loudspeaker with a sine or
triangle and look at the input-output curve on an x-y oscilloscope. If the
line isn't straight then there is distortion and if the "line" opens up then
there is hysteresis -> memory. Right?
A: Yes you would certainly see some memory effect for large signals applied,
I bet.
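As a toy numeric illustration of the static-curve idea in question 2 above (not a speaker model: the tanh curve is invented, and a real driver has memory), a monotone memoryless curve can be inverted by table lookup and pre-applied, and in the memoryless case the round trip cancels exactly:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)
curve = np.tanh(1.5 * x) / np.tanh(1.5)        # assumed "measured" static curve

def predistort(y):
    """Invert the measured curve by table lookup (valid: curve is monotone)."""
    return np.interp(y, curve, x)

sig = 0.8 * np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
out = np.interp(predistort(sig), x, curve)     # predistortion, then "speaker"
```

The interesting failure modes start exactly where the answer above says: as soon as the nonlinearity has memory (hysteresis), a single static inverse like this stops being enough.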

Please take my thoughts for what they are, because I have never worked on
nonlinear speaker emulation, even if it is something I would love to do in the
future.

Ciao

Marco

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Jerry
Sent: Friday, 28 February 2014 00:50
To: A discussion list for music-related DSP
Subject: [music-dsp] Introductory literature for loudspeaker predistortion

Does anyone know the literature for loudspeaker predistortion--literature
appropriate for senior-year electrical engineering students? (That's not
me.) I suppose this would rule out fancy stuff like Volterra series
inversion and use of psychoacoustic metrics.

How dependent on the signal is a nonlinear model for a speaker?

Is it possible to do some good by measuring a single nonlinearity curve for
the loudspeaker under some condition and (assuming it is invertible)
applying the inverse nonlinearity as predistortion?

Surely providing motion information (e.g. an accelerometer attached to the
cone) into a feedback loop would help to linearize things. Some commercial
subwoofers do this.

The first thing I would try is drive a loudspeaker with a sine or triangle
and look at the input-output curve on an x-y oscilloscope. If the line isn't
straight then there is distortion and if the "line" opens up then there is
hysteresis -> memory. Right?

I'm vaguely aware of the work of Klippel http://www.klippel.de/ but not at
all familiar with it.

I'm just looking for some information to feed the senior projects, not
change the world.

Jerry


[music-dsp] R: R: R: R: R: R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
>TIIR and resampling might both be JOS, but otherwise they're not the same
thing.  resampling is Julius and Gossett and TIIR is Julius and Wang.

Oh man. It was a long time ago that I looked at TIIR, and I have been more
used to JOS resampling in recent years. I basically confused the idea of
TIIR with the windowed (truncated, in my mind) band-limited impulse. 8-O Sorry!

>> if memory is no problem, you can have a whole bunch of windowed sincs
>> stored in a table with different fractional delays ready to rock-n-roll.
>> if the window is good (like a Kaiser) and the length is long enough (i
>> think 16 samples is pretty good), i don't think there is a practical
>> aliasing issue at all.  a grain that is a windowed sinc need not have
>> *any* practical aliasing issues at all.  and if you have enough windowed
>> sincs at various fractional delays, you can construct a BLIT of any
>> fundamental frequency.
>>
>> because of what i know about resampling (for sample rate conversion and
>> for fractional delay filters), if the windowed sinc was 32 samples long
>> and you had a version of it at 512 different equally-spaced fractional
>> delays and you linearly interpolated between two adjacent windowed sincs,
>> any aliasing issues are down 120 dB.  i don't worry about -120 dB in a
>> music synthesis alg.  i still think that a BLI that is 16 samples long
>> and at, maybe, 64 different fractional delays, will be more than good
>> enough using linear interpolation.

That is BLIT-SWS: a windowed sinc, linearly interpolated. So we are clear on
this and agree on everything. Just know that at the beginning I was on a
quest for a perfect zero-alias algorithm, so accepting aliasing, even
reduced, was a turn-off for me (but I am a perfectionist, so apologies :D ).

>> If you want no aliasing in BLIT you must use DSF or sincM who suffer 
>> of what I already explained.

sorry, i'm clueless about the alfabet soup.  i know "BLIT" and i know
"sinc".

DSF is the Discrete Summation Formula (originally by Moorer, I guess) and
sincM(x) = sin(pi x) / (M sin(pi x / M)): both provide a bandlimited periodic
pulse train (NO ALIAS).
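For readers following along, here is a sketch of a pulse train built from the sincM definition above. It is my own code; the choice of an odd harmonic count M and the M/P output scaling follow the Stilson BLIT paper conventions as I recall them, so treat it as a sketch rather than a reference implementation.

```python
import numpy as np

def blit(n, f0, fs):
    """Bandlimited impulse train via sincM; n samples at fundamental f0."""
    p = fs / f0                           # period in samples
    m = 2 * int(np.floor(p / 2.0)) + 1    # odd harmonic count, kept <= Nyquist
    x = m * np.arange(n) / p
    num = np.sin(np.pi * x)
    den = m * np.sin(np.pi * x / m)
    out = np.empty(n)
    small = np.abs(den) < 1e-9            # 0/0 points: the sincM limit is 1
    out[small] = 1.0
    out[~small] = num[~small] / den[~small]
    return (m / p) * out                  # unit-area pulses, DC about 1/P... 1
```

The 0/0 guard is exactly the periodic peak of the train; everywhere else the closed form is well behaved, which is why this construction has no aliasing at a fixed frequency.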

>> Using non-leaky integrators (ideal) leads to what you say (basically 
>> roundoff errors which are never forgot) and AFAIK no one uses them.

>noise shaping deals with roundoff errors.

Yes, but... I don't follow you here (I meant that no one uses ideal
integrators in BLIT). Ideal integration to me means, as a trivial example,
the simple running sum: a running sum of past values has roundoff errors that
at some point will blow the integrator up. More generally, the DC gain of an
integrator is infinite, so any roundoff resulting in DC can be amplified
without bound. That's why everybody uses leaky ones, and the resulting
TRI/SAW/SQR waveforms are not "textbook-like".
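The DC-blowup argument can be seen numerically in a few lines. This toy is mine, not from the thread: it feeds a constant 1e-6 "roundoff" input into an ideal and a leaky integrator.

```python
import numpy as np

def integrate(x, leak=0.0):
    """y[n] = (1 - leak) * y[n-1] + x[n]; leak=0 is the ideal running sum."""
    y = np.empty(len(x))
    acc = 0.0
    for i, v in enumerate(x):
        acc = (1.0 - leak) * acc + v
        y[i] = acc
    return y

# one second of a constant 1e-6 DC error at 48 kHz
err = np.full(48000, 1e-6)
ideal = integrate(err)             # grows without bound: 0.048 and climbing
leaky = integrate(err, leak=1e-3)  # settles near err/leak = 1e-3
```

The ideal sum's output is proportional to elapsed time, i.e. it "never forgets"; the leaky one's steady state is bounded by err/leak, which is the whole point of the leak.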


>take care of your kids.  i'm doing similarly (but my baby is now 14 years
old).

Thanks, same to you.

If you are interested in the subject I suggest reading the Stilson paper
about BLIT (if you haven't yet). In this thread I have basically recounted my
experience implementing all of his techniques.
https://ccrma.stanford.edu/~stilti/papers/blit.pdf

Marco



[music-dsp] R: R: R: R: R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
I don't think the literature contains proofs of artifact-free time-varying
filtering at audio rate: AFAIK there is the minimum-norm class and other
techniques for understanding the minimum requirements a topology must meet to
be changed every N samples at most. If I am wrong I would love to have the
details and read the papers.
Yes, I am talking about IIR and feedback of course, and I actually have no
problem converting any analog network that changes at audio rate (thus
modulation of filter parameters etc.). In my case I used a simple leaky
integrator, time-varying at audio rate, with its cutoff tuned proportionally
to the BLIT frequency.

If you use a BLI to make it periodic you need the SWS approach, which
contains aliasing (creating a grain from a BLI by windowing it [thus
aliasing], something similar to Julius's resampling/TIIR method).

If you want no aliasing in BLIT you must use DSF or sincM, which suffer from
what I already explained.

Using non-leaky (ideal) integrators leads to what you say (basically roundoff
errors that are never forgotten) and AFAIK no one uses them.

Bottom line: yes, you can use BLIT, but to avoid all the annoying side
effects you must resort to hacks that produce some aliasing, which to me
sounds like a fail, because in theory the method is clean and robust. So if I
must have aliasing and fix all the side effects anyway, it's better for me to
switch to a wavetable approach with some acceptable aliasing, which is easy
to modulate.

Talking about the zero-crossing was a possible hack only because BLIT has NO
aliasing if you use DSF/sincM, but you get clicks. It's not needed if you use
BLIT-SWS, but then you have aliasing and much-smoothed clicks (for free, due
to the windowing).

Maybe not so clearly told, but I am in a rush cooking for my babies :)

Ciao

Marco


-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of robert
bristow-johnson
Sent: Wednesday, 26 February 2014 20:28
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: R: R: Best way to do sine hard sync?

On 2/26/14 1:55 PM, Marco Lo Monaco wrote:
> Actually I can make a time-varying filter at audio rate with _NO_ glitches
> or artifacts, stable (essentially behaving like an analog one).

you're not the only one that can do that.  but, with an IIR, there are
problems that arise and must be dealt with.  it's because of the feedback.

> The point is that the BLIT harmonic content changes in modulation 
> (because you have to cut harmonics as you are sweeping hi)

that is the case with wavetable or non-BLIT-like methods, but i don't see
this at all with BLIT.  the single BLI is bandlimited sufficiently below
Nyquist and it doesn't matter if there is one BLI or a BLIT at a high rate
or low rate as long as you overlap and add them correctly.  
there remains no harmonics above Nyquist, no matter how many bandlimited
impulses that find themselves in your output.

and the integrator will not introduce new frequency components, since it is
only a filter.

but the integrator has feedback.  to keep the short-term DC of the
integrator output from blowing up, for every positive value going in, there
must be an equal negative value.  but, when a parameter changes, that might
not be the case and similar to a TIIR filter (which is a form of FIR like a
moving-average filter) you might be stuck with a turd in the integrator that
will not go away because the anti-turd never gets in there and integrated
because of the parameter change.  and, especially if it is not leaky (or not
very leaky), it *never* forgets.

>   and that results in spikes
> in amplitudes that are before the integrator (so the BLIT itself in 
> modulation clicks and pops). Since freq change could not happen at 
> perfect zerocrossings this effect is unavoidable (unless keeping the 
> same harmonic content for BLIT thus aliasing in sweeps which is awful 
> to hear). Also trying to change freq at zero crossing fails as your 
> freq reaches high values where periods are of a handful of samples.

i don't really see it working that way.  a BLIT oscillator should not be
designed around zero-crossings and such.  in fact, i don't think any
"linear" process should be designed around zero-crossings.  i *might* design
a pitch shifter or a time scaler using zero-crossings, but not a linear
process.  and, since there might be a lot more than a single pair of
zero-crossings per cycle, i almost never pay any attention to them; they
cannot be depended on.

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."




[music-dsp] R: R: R: R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
Actually I can make a time-varying filter at audio rate with _NO_ glitches or
artifacts, stable (essentially behaving like an analog one).
The point is that the BLIT harmonic content changes under modulation (because
you have to cut harmonics as you sweep high), and that results in amplitude
spikes before the integrator (so the BLIT itself clicks and pops under
modulation). Since frequency changes cannot always happen at perfect
zero-crossings, this effect is unavoidable (unless you keep the same harmonic
content for the BLIT, thus aliasing in sweeps, which is awful to hear). Also,
trying to change frequency at zero-crossings fails as the frequency reaches
high values where periods are only a handful of samples.

M.

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of robert
bristow-johnson
Sent: Wednesday, 26 February 2014 19:15
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: R: Best way to do sine hard sync?

On 2/26/14 12:37 PM, Marco Lo Monaco wrote:
> Moreover in my experience BLIT with leaky integrators fails on 
> frequency modulations,

i can imagine why.  it's sorta like how some IIR filter topologies fail with
coefficient modulation.

this is another reason that i am a proponent of wavetable synthesis in all
contexts where memory resources allow.  wavetable synthesis is more like a
basic FIR filter: nothing to blow up when parameters are modulated.

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





[music-dsp] R: R: R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
Agreed 100%.
Moreover, in my experience BLIT with leaky integrators fails under frequency
modulation; other approaches like BLIT-SWS are more complicated, but if
memory is not an issue wavetables are the choice.

BTW, a smarter approach is needed for hard sync, because users want to change
the slave frequency continuously at process time, and of course it looks
unfeasible to tabulate samples for all possible hard-sync ratios (even with a
lot of memory it sounds like overkill to me).

BLIT BLEP BLAMP sounds like a quote from Mel Brooks's "Spaceballs" :)

M.

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of robert
bristow-johnson
Sent: Wednesday, 26 February 2014 17:44
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: Best way to do sine hard sync?

On 2/26/14 10:49 AM, Marco Lo Monaco wrote:
> Yes it is. And it is true even in analog domain...if you only could 
> have dirac pulse realized on a circuit :) Mathematically and in 
> continuous time they are the same: it is the basic starting concept of 
> BLIT (see also Stilson paper) Hope to have helped...and sorry if I 
> misunderstood your words Robert :-)

well, i was wondering if i misunderstood something, which happens often.

just to be clear:  while i *have* implemented some alias-suppressed saws and
squares (with adjustable duty cycles) and an alias-suppressed sync saw, i
have never implemented any BLIT or BLEP or BLAP or whatever.  i have only
read about them.

if memory available to the oscillator is no problem, then i would synthesize
*any* periodic or quasi-periodic waveform with wavetable synthesis and
interpolate (crossfade) between wavetables.  that includes these sync saws
or sync squares or sync whatever.  but sometimes the hardware one must work
in does *not* have very much memory (like a half-dozen registers at most),
and then you have to do this algorithmically.  at this point i gotta keep my
mouth shut.

but if you're doing this in a plugin or with a general-purpose CPU, i
wouldn't bother with any of this BLIT or BLAP stuff.  or some other
algorithmic approach.  i would just do it with wavetables and interpolate.
to define the wavetables to be sufficiently bandlimited, you might need to
write a few MATLAB scripts.
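A minimal sketch of that mipmapped-wavetable scheme, in Python rather than MATLAB (table size, octave spacing, and the crossfade scheme are my choices, not rbj's): one sawtooth table per octave, each truncated below Nyquist, with linear interpolation inside a table and a crossfade between adjacent tables.

```python
import numpy as np

def saw_tables(size=2048, fs=48000.0, f_low=20.0, octaves=10):
    """One bandlimited sawtooth table per octave, harmonics kept below Nyquist."""
    tables, f0 = [], f_low
    for _ in range(octaves):
        n_harm = max(1, int(fs / (2.0 * f0)) - 1)
        k = np.arange(1, n_harm + 1)
        phase = 2.0 * np.pi * np.outer(np.arange(size) / size, k)
        tables.append((2.0 / np.pi) * (np.sin(phase) / k).sum(axis=1))
        f0 *= 2.0
    return tables

def play(tables, which, frac, phase):
    """One output sample: linear interpolation within a table, plus a
    crossfade (frac in [0,1]) between octave tables `which` and `which+1`."""
    size = len(tables[0])
    idx = phase * size
    i0 = int(idx) % size
    i1 = (i0 + 1) % size
    a = idx - int(idx)
    def tap(t):
        return (1 - a) * t[i0] + a * t[i1]
    return (1 - frac) * tap(tables[which]) + frac * tap(tables[which + 1])
```

Because each table is a fixed FIR-like lookup, modulating pitch only moves the phase increment and the crossfade weight; nothing recursive can blow up, which is the property being argued for above.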

-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





[music-dsp] R: R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
Yes it is. And it is true even in the analog domain... if only you could
realize a Dirac pulse in a circuit :)
Mathematically, in continuous time they are the same: it is the basic
starting concept of BLIT (see also the Stilson paper).
Hope to have helped... and sorry if I misunderstood your words, Robert :-)

M.

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of robert
bristow-johnson
Sent: Wednesday, 26 February 2014 15:16
To: music-dsp@music.columbia.edu
Subject: Re: [music-dsp] R: Best way to do sine hard sync?

On 2/26/14 4:03 AM, Marco Lo Monaco wrote:
>>> yup that was the BLIT stuff, i think, so a sawtooth is the integral of
>>> this BandLimited Impulse Train (with a little DC added).
>
> Ahaha, funny! Did you set sarcasm mode = on? :)))
>

i guess i hadn't.  a bandlimited sawtooth is not the integral of a BLIT
(with a touch of DC)??


-- 

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."





[music-dsp] R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
>> yup that was the BLIT stuff, i think, so a sawtooth is the integral of
>> this BandLimited Impulse Train (with a little DC added).

Ahaha, funny! Did you set sarcasm mode = on? :))) 

Ciao Robert

Marco




[music-dsp] R: Best way to do sine hard sync?

2014-02-26 Thread Marco Lo Monaco
Hello Tobias,
following Ross's advice, one of the drawbacks you will have to deal with is
the CPU usage at high master frequencies. Placing a grain in an overlap-add
fashion is very convenient at low frequencies but not so much at high ones.
Moreover, you will have to deal with some DC offsets along the way: OLA-ing
the minBLEP grain (which is min-phase, not zero-phase, so you should also
consider some sort of look-ahead) with small hop sizes, and maybe for short
times because the master frequency is being modulated, will result in
unpredictable offsets on top of your sine.
Unfortunately Eli Brandt does not address this problem in the paper.

Hard sync is hard, so expect aliasing to be reduced but not completely
eliminated.

Ciao

Marco

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Ross Bencina
Sent: Tuesday, 25 February 2014 20:49
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Best way to do sine hard sync?

On 26/02/2014 2:25 AM, robert bristow-johnson wrote:
> are you trying to do multiple cycles of the sine and then have a 
> discontinuity as it snaps back in sync with the side-chain waveform?  
> if so, that doesn't sound very "bandlimited" to me.

As I understand it, the question is how to make such snap-back band limited.

The approach that I am familiar with is the "corrective grains" approach
(AKA BLIT/BLEP/BLAMP etc) where you basically run a granulator that
generates grains that cancel the aliasing caused by the phase discontinuity.
The exact grain needed is dependent on the derivatives of the signal (doable
for sine waves). The original paper for this technique is Eli Brandt (2001),
"Hard sync without aliasing", Proc. ICMC
2001.: http://www.cs.cmu.edu/~eli/L/icmc01/hardsync.html

I have not read Vadim's paper so I am not familiar with the alternatives.

Ross.
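For the archive, here is a minimal member of the corrective-grain family: a 2-point polyBLEP residual on a naive sawtooth. It is *not* Brandt's sine-sync method, just the same idea in its simplest common form; the polynomial and names are my own rendering of the widely used textbook version.

```python
import numpy as np

def polyblep(t, dt):
    """2-point polynomial BLEP residual; t = phase in [0,1), dt = f0/fs."""
    if t < dt:                       # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                 # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0                       # grain is zero away from the wrap

def saw(n, f0, fs):
    """Naive sawtooth minus the corrective grain at each phase wrap."""
    dt = f0 / fs
    phase = 0.0
    out = np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out
```

For sine hard sync the discontinuity (and its derivatives) depends on where the slave phase is reset, which is exactly why Brandt's grains are tabulated rather than a fixed two-sample polynomial like this one.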


[music-dsp] R: R: R: R: R: Implicit integration is an important term, ZDF is not

2013-11-16 Thread Marco Lo Monaco
Hi Max,
you are welcome. You can start with David Yeh's PhD thesis (someone posted
the link here in this thread), which is a good summary of the state of the
art. Then you will likely go through the references to find the master papers
from the late 1990s by Borin and De Poli (my professors, btw).
Just wanted to tell you that implicit FD is intriguing and is nowadays almost
everywhere in pro audio.
When I was at university I simulated the Van der Pol and Chua-Felderhoff
circuits in _realtime_, leading to chaotic behaviors. I don't think you can
get those behaviors with explicit methods, because they become unstable
pretty quickly.
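The instability of explicit methods on stiff problems is easy to see on the scalar test equation y' = -lam*y (a toy of mine, not one of the circuits above): forward Euler needs h*lam < 2, while backward Euler is stable for any step size.

```python
# lam*h = 10, far beyond forward Euler's stability limit of lam*h < 2
lam, h, steps = 1000.0, 0.01, 50
ye = yi = 1.0
for _ in range(steps):
    ye = ye + h * (-lam * ye)     # explicit (forward) Euler: gain of -9/step
    yi = yi / (1.0 + h * lam)     # implicit (backward) Euler, solved exactly
```

Here the implicit update can be solved in closed form; for a nonlinear circuit you solve it with a root finder at every step, which is the extra cost implicit FD pays for that stability.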

Have fun!

M.

-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
Sent: Saturday, 16 November 2013 16:06
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: R: R: Implicit integration is an important
term, ZDF is not

Hi Marco
Yes, you're right, you can simulate at the lower component level which ought
then to be monotonic. It is a trap for the unwary though: if you try to use
implicit FD at the 'functional' level you can get non-monotonicity even if
there is monotonicity at the component level, or so it would seem from our
discussion here.
I think using explicit FD is hard enough as it is; my expectation would be
that implicit FD is much more difficult to use in practice - considering that
we haven't even begun to look at all the other issues, such as those that
arise in numerical methods for root finding, which don't arise in explicit FD.
But that's another topic of course!
Anyway, thanks for bringing up this interesting theory about uniqueness; if
you have any academic references I'd be grateful!
Cheers
Max
On 16 Nov 2013 12:55, "Marco Lo Monaco"  wrote:

> Hi Max,
> yes you can do that, of course it is quite typical. But a multiplier 
> itself is made of a lot of "monotone" components that arranged in that 
> way create a quadratic law. I was referring to basic discrete 
> circuits.
> BTW the Van der Pole was made by him, with tetrodes in a fancy 
> feedback
> topology: I never investigated on that analog model (only studied the 
> general form). I guess that monotonicity is likely lost when building 
> in oscillators.
>
> In my experience choosing the branch is always done on physical 
> considerations about signals. I will investigate further when it will 
> be the time to face those problem again.
> Btw, in the Wikipedia example you choose the right positive branch of 
> the sqrt because you have constraints on the signal y belonging to 
> [0,a]. What happen if you wouldn’t have such a constraint?
>
> M.
> -Original message-
> From: music-dsp-boun...@music.columbia.edu
> [mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
> Sent: Saturday, 16 November 2013 12:49
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] R: R: R: Implicit integration is an important 
> term, ZDF is not
>
> Hi Marco
>
> Thanks - yes, OK with monotonicity that makes sense. So, I picked out 
> the squaring example and the van der Pol equations to try to prompt 
> whether this was required or not. For the rather trivial square 
> example, can't you just rig up an analogue multiplier with standard to 
> square the input? I picked up my old copy of Horowitz and Hill and on 
> page 140, they describe analogue multipliers 'exploiting the 
> g_m-versus-I_C characteristics of bipolar transistors, using matched 
> arrays to circumvent problems of offset and bias shifts'. Sounds like 
> this can be built using standard components. So, I remain to be 
> convinced that implicit FD, coupled with this uniqueness theorem, will 
> always be a good choice for FD simulations of all analogue audio 
> circuitry, regardless. I think you still have to pay careful attention 
> to what the circuit actually implements.
>
> In any case, as you say, you can always choose one branch or the other 
> if monotonicity doesn't hold, but I'm not sure if you can always make 
> the 'right' choice of branch in every case - there may be situations 
> where you have to 'hop branches' so to speak. I think it gets quite
difficult then.
>
> Cheers
> Max
>
> On 16 November 2013 10:40, Marco Lo Monaco 
> wrote:
> > Yes Max, it has been for 10 years that I have been intrigued by this 
> > approach :)
> >
> > AFAIK there is no electronic component among the ones I mentioned 
> > that can realize a Fnl(x) = x^2 nonlinearity (meaning no bipole in
> > "nature" can do that).
> > To tell the truth, I forgot to say that monotonicity is a required 
> > thing to make things work for sure :)))

[music-dsp] R: R: R: R: Implicit integration is an important term, ZDF is not

2013-11-16 Thread Marco Lo Monaco
Hi Max,
yes, you can do that, of course; it is quite typical. But a multiplier itself
is made of many "monotone" components which, arranged that way, create a
quadratic law. I was referring to basic discrete circuits.
BTW, the Van der Pol circuit was built by van der Pol himself with tetrodes
in a fancy feedback topology: I never investigated that analog model (I only
studied the general form). I guess that monotonicity is likely lost when
building oscillators.

In my experience choosing the branch is always based on physical
considerations about the signals. I will investigate further when it is time
to face those problems again.
Btw, in the Wikipedia example you choose the positive branch of the sqrt
because you have the constraint that the signal y belongs to [0, a]. What
would happen if you didn't have such a constraint?

M.
-Original message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
Sent: Saturday, 16 November 2013 12:49
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: R: Implicit integration is an important term,
ZDF is not

Hi Marco

Thanks - yes, OK with monotonicity that makes sense. So, I picked out the
squaring example and the van der Pol equations to try to prompt whether this
was required or not. For the rather trivial square example, can't you just
rig up an analogue multiplier with standard components to square the input? I picked up
my old copy of Horowitz and Hill and on page 140, they describe analogue
multipliers 'exploiting the g_m-versus-I_C characteristics of bipolar
transistors, using matched arrays to circumvent problems of offset and bias
shifts'. Sounds like this can be built using standard components. So, I
remain to be convinced that implicit FD, coupled with this uniqueness
theorem, will always be a good choice for FD simulations of all analogue
audio circuitry, regardless. I think you still have to pay careful attention
to what the circuit actually implements.

In any case, as you say, you can always choose one branch or the other if
monotonicity doesn't hold, but I'm not sure if you can always make the
'right' choice of branch in every case - there may be situations where you
have to 'hop branches' so to speak. I think it gets quite difficult then.

Cheers
Max

On 16 November 2013 10:40, Marco Lo Monaco  wrote:
> Yes Max, it has been for 10 years that I have been intrigued by this 
> approach :)
>
> AFAIK there is no electronic component among the ones I mentioned that 
> can realize a Fnl(x) = x^2 nonlinearity (meaning no bipole in "nature" 
> can do that).
> To tell the truth, I forgot to say that monotonicity is a required thing 
> to make things work for sure :) (almost all nonlinear electronic 
> devices are monotone in some sense, a diode is an example! On the 
> contrary a tunnel diode or a neon bulb have nonlinearities that are 
> not monotone and can be trickier to handle, but they are rarely (never?)
> used in audio).
> Also the K matrix systems has always practical values that lead to 
> Dini condition to be satisfied easily.
>
> In the ODE you suggested the implicit function G(p, y) = (K*y+p)^2 - y 
> = 0 is only locally explicitable when:
>
> G'(y) = 2*K*(p + K*y) - 1 != 0
>
> Setting up the system:
> G'(y) = 0,
> G(p,y) = 0
>
> And solving for p0,y0 leads to
>
> (p0, y0) = [1/(4*K), 1/(4*K^2)]
>
> thus that point is the only point where Dini's implicit function 
> theorem fails to apply.
>
> For every point other than (p0, y0) there can be more than one solution 
> (branches of the implicit function) and an appropriate choice must be made 
> (exactly the same thing that happens when choosing the sign of the square 
> root in the Wikipedia example). Nonetheless the solution is guaranteed 
> and unique. Note that this kind of non-monotone function is typically 
> involved in deterministic chaos (the Chua RLC circuit and the Van der Pol 
> oscillator are two famous examples, so very likely period doubling and 
> bifurcation will happen when a driving force is applied).
>
> So, to answer your question I would say that the solutions are always 
> unique by appropriate choice of the branch of implicit function G. 
> Nonetheless such a quadratic nonlinearity AFAIK is not met in any 
> practical electronic nonlinear device (at least in my experience).
>
> Hope to have helped again.
>
> Ciao
>
> Marco
>
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu
> [mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
> Sent: Thursday, 14 November 2013 21:34
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] R: R: Implicit integration is an important 
> term, ZDF is not
>
> Hi Marco
>
> Thanks - sound

[music-dsp] R: R: R: Implicit integration is an important term, ZDF is not

2013-11-16 Thread Marco Lo Monaco
Yes Max, I have been intrigued by this approach for 10 years :)

AFAIK there is no electronic component among the ones I mentioned that can
realize a Fnl(x) = x^2 nonlinearity (meaning no two-terminal device in
"nature" can do that).
To tell the truth, I forgot to say that monotonicity is required to make
things work for sure (almost all nonlinear electronic devices are monotone
in some sense, a diode is an example! On the contrary a tunnel diode or a
neon bulb have nonlinearities that are not monotone and can be more tricky
to handle, but they are rarely (never?) used in audio).
Also the K matrix always has practical values that lead to the Dini
condition being satisfied easily.

In the ODE you suggested the implicit function G(p, y) = (K*y+p)^2 - y = 0
is only locally explicitable when:

G'(y) = 2*K*(p + K*y) - 1 != 0 

Setting up the system:
G'(y) = 0,
G(p,y) = 0

And solving for p0,y0 leads to

(p0, y0) = [1/(4*K), 1/(4*K^2)]

thus that point is the only point where Dini's implicit function theorem
fails to apply.
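A minimal numerical check of this tangency point, in plain Python with an arbitrary illustrative value of K (not taken from the thread); both G and its y-derivative must vanish there:

```python
# Check of the point where Dini's theorem fails for
# G(p, y) = (K*y + p)**2 - y.  K = 0.7 is an arbitrary illustrative value.
def G(p, y, K):
    return (K * y + p) ** 2 - y

def dG_dy(p, y, K):
    return 2.0 * K * (p + K * y) - 1.0

K = 0.7
p0, y0 = 1.0 / (4.0 * K), 1.0 / (4.0 * K ** 2)

# Both the function and its y-derivative vanish at (p0, y0), so the
# implicit function is not locally explicitable exactly there.
print(G(p0, y0, K), dG_dy(p0, y0, K))  # both ~0.0
```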

For every point other than (p0, y0) there can be more than one solution
(branches of the implicit function) and an appropriate choice must be made
(exactly the same thing that happens when choosing the sign of the square
root in the Wikipedia example). Nonetheless the solution is guaranteed and
unique. Note that this kind of non-monotone function is typically involved
in deterministic chaos (the Chua RLC circuit and the Van der Pol oscillator
are two famous examples, so very likely period doubling and bifurcation will
happen when a driving force is applied).

So, to answer your question I would say that the solutions are always unique
by appropriate choice of the branch of implicit function G. Nonetheless such
a quadratic nonlinearity AFAIK is not met in any practical electronic
nonlinear device (at least in my experience).

Hope to have helped again. 

Ciao

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
Sent: Thursday, 14 November 2013 21:34
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: Implicit integration is an important term,
ZDF is not

Hi Marco

Thanks - sounds intriguing although I don't really follow your argument. So,
to simplify, let us consider this ODE:

dy/dt=-y^2

If you can build a circuit using your component examples, then you would
have non-uniqueness with the backward Euler method. Or, are you effectively
saying that you can't possibly build such a circuit out of the components
you mention?

Max
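To make the non-uniqueness Max is pointing at concrete: one backward Euler step for dy/dt = -y^2 requires solving y[n+1] = y[n] - h*y[n+1]^2, a quadratic in y[n+1] with two roots, so the solver must choose a branch. A minimal sketch (step size illustrative):

```python
import math

def backward_euler_step(y_n, h):
    """One backward Euler step for dy/dt = -y**2.

    The implicit update y_next = y_n - h*y_next**2 is a quadratic in
    y_next, so it has two roots; only the '+' branch is consistent
    with the limit h -> 0.
    """
    disc = math.sqrt(1.0 + 4.0 * h * y_n)
    root_plus = (-1.0 + disc) / (2.0 * h)   # physical branch
    root_minus = (-1.0 - disc) / (2.0 * h)  # spurious branch
    return root_plus, root_minus

good, spurious = backward_euler_step(1.0, 0.01)
print(good, spurious)  # ~0.990 (tracks the ODE) vs ~-100.99 (spurious)
```

Both roots satisfy the implicit update exactly, which is why branch selection (or monotonicity, per Marco) is needed to make the step well defined.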

On 14 November 2013 20:20, Marco Lo Monaco  wrote:
> Hi Max,
> you need to convert your nonlinear system into a state space model 
> with some additions for dealing with non-linearity, reaching a set of 6 
> matrices instead of the usual 4 (see Yeh's PhD thesis for a good 
> starting point).
> Once you use an implicit integration scheme (like bilinear) you will 
> always arrive (after all the calculations) at a nonlinear system of 
> eqs of the form y - Fnl(Ky + p) = 0, where p is a historical 
> contribution (= memory) of the system known at time n. You will have to 
> solve that nonlinear system in some way (typically Newton-Raphson).
> Given the type of Fnl in normal audio circuits (diodes, tubes, 
> transistors and opamps) the uniqueness of the solution of that eq is 
> guaranteed under the conditions I told you, and it is a straight 
> consequence of Dini's implicit function theorem 
> (http://en.wikipedia.org/wiki/Implicit_function_theorem).
> So if you let
> y - Fnl(Ky + p) = G(p,y)
> then, since you want to solve the implicit function G(p, y) = 0, the 
> condition to do this locally (around some y0) is dG/dy != 0. Given 
> the form of G in the multivariate case (MIMO) you will reach the 
> condition I explained in my last post.
> Note that differentiability of Fnl is _not_ required. What is 
> required is that dG/dy != 0, which if met for all y implies global 
> explicitability of G, so only one solution is possible and it is 
> unique. This condition is less restrictive than you might expect; 
> for instance, it works also for simplified PWL (non-differentiable) 
> saturator characteristics like opamps.
> In my 10 yrs of experience I've never found switched-capacitor audio 
> circuits that needed modeling with prominent nonlinearity (I only dealt 
> with bucket brigade analog delay lines, but it's a simplified story for 
> them). I think nonetheless it is possible to model switching circuits 
> by exchanging the state variables among different nonlinear state 
> space representations (samplerate is then a limit). I don't know if it 
> is realistic or worth doing, but as a matter of principle it probably 
> should be at least possible.
>
> Hope this helped
>
> Marco
>
>

[music-dsp] R: R: Implicit integration is an important term, ZDF is not

2013-11-14 Thread Marco Lo Monaco
Hi Max,
you need to convert your nonlinear system into a state space model with some
additions for dealing with non-linearity, reaching a set of 6 matrices instead
of the usual 4 (see Yeh's PhD thesis for a good starting point).
Once you use an implicit integration scheme (like bilinear) you will always
arrive (after all the calculations) at a nonlinear system of eqs of the form
y - Fnl(Ky + p) = 0, where p is a historical contribution (= memory) of the
system known at time n. You will have to solve that nonlinear system in some
way (typically Newton-Raphson).
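A minimal sketch of that Newton-Raphson solve for a scalar instance of y - Fnl(Ky + p) = 0, using tanh as a hypothetical stand-in for a monotone diode/tube-style Fnl (the K and p values are illustrative, not taken from any circuit):

```python
import math

def solve_implicit(p, K=0.5, tol=1e-12, max_iter=50):
    """Newton-Raphson on G(y) = y - tanh(K*y + p) = 0 (scalar sketch)."""
    y = 0.0
    for _ in range(max_iter):
        g = y - math.tanh(K * y + p)
        # dG/dy = 1 - K*tanh'(K*y + p); nonzero here since K*tanh' < 1,
        # so the Newton step is always well defined
        dg = 1.0 - K * (1.0 - math.tanh(K * y + p) ** 2)
        step = g / dg
        y -= step
        if abs(step) < tol:
            break
    return y

y = solve_implicit(p=0.3)
print(y, y - math.tanh(0.5 * y + 0.3))  # residual ~0: unique solution found
```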
Given the type of Fnl in normal audio circuits (diodes, tubes, transistors
and opamps) the uniqueness of the solution of that eq is guaranteed under the
conditions I told you, and it is a straight consequence of Dini's implicit
function theorem (http://en.wikipedia.org/wiki/Implicit_function_theorem).
So if you let
y - Fnl(Ky + p) = G(p,y)
then, since you want to solve the implicit function G(p, y) = 0, the condition
to do this locally (around some y0) is dG/dy != 0. Given the form of G in the
multivariate case (MIMO) you will reach the condition I explained in my last
post.
Note that differentiability of Fnl is _not_ required. What is required is
that dG/dy != 0, which if met for all y implies global explicitability
of G, so only one solution is possible and it is unique. This condition is
less restrictive than you might expect; for instance, it works also for
simplified PWL (non-differentiable) saturator characteristics like opamps.
In my 10 yrs of experience I've never found switched-capacitor audio circuits
that needed modeling with prominent nonlinearity (I only dealt with bucket
brigade analog delay lines, but it's a simplified story for them). I think
nonetheless it is possible to model switching circuits by exchanging the
state variables among different nonlinear state space representations
(samplerate is then a limit). I don't know if it is realistic or worth doing,
but as a matter of principle it probably should be at least possible.

Hope this helped

Marco



-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
Sent: Thursday, 14 November 2013 19:07
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: Implicit integration is an important term, ZDF
is not

Hi Marco

I don't know, in this way you're ruling out rather simple non-invertible
nonlinearities such as the humble quadratic which can occur in 'working'
nonlinear circuits:
http://en.wikipedia.org/wiki/Van_der_Pol_equation

The rather trivial example here shows the problem quite clearly (see
backward Euler method):
http://en.wikipedia.org/wiki/Explicit_and_implicit_methods

Also this analysis only works if you have differentiability and there are
some rather ubiquitous, practical nonlinear circuits (switched capacitors
for example) where this doesn't hold.

Max

On 14 November 2013 14:52, Marco Lo Monaco  wrote:
> Hi Max,
> the uniqueness is granted by Dini's theorem, which is always satisfied 
> in common practical analog schematics (unless you are dealing 
> with some esoteric Chua-style multiple DC operating points).
>
> That condition is met if
>
> det(Jnl*K - I) != 0
>
> where Jnl is the Jacobian of the MIMO nonlinearity and K is the K 
> matrix (see De Poli et al.).
> This is when the implicit method becomes "explicit", only locally (or 
> globally), and you can break the uncomputability of the graph, defeating 
> the instantaneous delay in the z-domain.
> In all practical cases the nonlinear implicit function is 
> guaranteed to be globally "explicitable". That also makes sense since 
> you are modeling a physical system that is actually working and must 
> have a unique solution.
>
> Marco
>
> -Original Message-
> From: music-dsp-boun...@music.columbia.edu
> [mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
> Sent: Thursday, 14 November 2013 15:14
> To: A discussion list for music-related DSP
> Subject: Re: [music-dsp] Implicit integration is an important term, 
> ZDF is not
>
> Thanks Ross.
>
> Good point about the practical utility of implicit FD and increasing 
> computational power. There are also all the issues about uniqueness of 
> implicit FDs arising from nonlinear IVPs, and then there's stability, 
> convergence, whether the resulting method is essentially 
> non-oscillatory, etc. I suppose there are additional issues to do with 
> frequency response, which may be what matters most in audio DSP.
>
> Max
>
>
> On 14 November 2013 14:06, Ross Bencina 
wrote:
>> On 14/11/2013 11:41 PM, Max Little wrote:
>>>
>>> I may have misread, but the discussion seems to suggest that this 
>>> disc

[music-dsp] R: Implicit integration is an important term, ZDF is not

2013-11-14 Thread Marco Lo Monaco
Hi Max,
the uniqueness is granted by Dini's theorem, which is always satisfied in
common practical analog schematics (unless you are dealing with some
esoteric Chua-style multiple DC operating points).

That condition is met if

det(Jnl*K - I) != 0

where Jnl is the Jacobian of the MIMO nonlinearity and K is the K matrix
(see De Poli et al.).
This is when the implicit method becomes "explicit", only locally (or
globally), and you can break the uncomputability of the graph, defeating the
instantaneous delay in the z-domain.
In all practical cases the nonlinear implicit function is guaranteed to
be globally "explicitable". That also makes sense since you are modeling a
physical system that is actually working and must have a unique solution.
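A quick numerical illustration of the determinant condition, with a hypothetical 2x2 K matrix (values chosen only for illustration) and a componentwise tanh nonlinearity; since tanh' lies in (0, 1] and the entries of K are small, det(Jnl*K - I) stays away from zero at every sampled operating point:

```python
import math

K = [[0.4, 0.1],
     [0.2, 0.3]]  # hypothetical K matrix with practically-sized entries

def dini_det(v1, v2):
    """det(Jnl*K - I) at operating point (v1, v2) for Fnl = tanh."""
    j1 = 1.0 - math.tanh(v1) ** 2   # Jacobian of componentwise tanh
    j2 = 1.0 - math.tanh(v2) ** 2   # is diagonal: diag(1 - tanh**2)
    m = [[j1 * K[0][0] - 1.0, j1 * K[0][1]],
         [j2 * K[1][0], j2 * K[1][1] - 1.0]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

dets = [dini_det(a / 2.0, b / 2.0)
        for a in range(-10, 11) for b in range(-10, 11)]
print(min(abs(d) for d in dets))  # bounded away from 0: solution is unique
```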

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Max Little
Sent: Thursday, 14 November 2013 15:14
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Implicit integration is an important term, ZDF is
not

Thanks Ross.

Good point about the practical utility of implicit FD and increasing
computational power. There are also all the issues about uniqueness of
implicit FDs arising from nonlinear IVPs, and then there's stability,
convergence, whether the resulting method is essentially non-oscillatory,
etc. I suppose there are additional issues to do with frequency response,
which may be what matters most in audio DSP.

Max


On 14 November 2013 14:06, Ross Bencina  wrote:
> On 14/11/2013 11:41 PM, Max Little wrote:
>>
>> I may have misread, but the discussion seems to suggest that this 
>> discipline is just discovering implicit finite differencing! Is that 
>> really the case? If so, that would be odd, because implicit methods 
>> have been around for a very long time in numerical analysis.
>
>
> Hi Max,
>
> I think you would be extrapolating too far to say that a few people 
> tossing around ideas on a mailing list are representative of the 
> trends of an entire discipline. On this mailing list I would struggle 
> to guess which "this discipline" you are referring to. Suffice to say 
> that a lot of the people discussing things in this thread are developers
not research scientists.
>
> Some practitioners are just "discovering" new practicable applications 
> of implicit finite differencing in the last 10 years or so. One good 
> reason for this is that in the past these techniques were completely 
> irrelevant because they were too expensive to apply in real time at 
> the required scale (100+ synthesizer voices, 100+ DAW channels). It 
> also seems that the market has changed such that people will pay for a 
> monophonic synth that burns a whole
> i7 CPU core.
>
> Cheers,
>
> Ross.
>
> --
> dupswapdrop -- the music-dsp mailing list and website:
> subscription info, FAQ, source code archive, list archive, book 
> reviews, dsp links http://music.columbia.edu/cmc/music-dsp
> http://music.columbia.edu/mailman/listinfo/music-dsp



-- 
Max Little (www.maxlittle.net)
Wellcome Trust/MIT Fellow and Assistant Professor, Aston University
TED Fellow (fellows.ted.com/profiles/max-little)
Visiting Assistant Professor, MIT
Room MB318A, Aston University
Aston Triangle, Birmingham, B4 7ET, UK
UK +44 7710 609564/+44 121 204 5327
Skype dr.max.little
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: R: R: Trapezoidal integrated optimised SVF v2

2013-11-13 Thread Marco Lo Monaco
Agreed 100% Vadim. Only a human can detect the design principle, thus focusing 
on the core points. 

Moreover the engineer must also know why that gear sounds so good and then 
investigate its history. I remember the Fuzz Face was very difficult to 
produce because the germanium transistors had a lot of parameter dispersion 
at that time. Jimi Hendrix himself tried a dozen at the shop and chose the one 
which sounded best.
Nonetheless germanium transistors had leakage, but they were the only option 
at that time: afaik that design was completely random (the fuzz fx was not 
intended and probably they simply wanted to distort). 
This happens very often in analog gear design, and that's why I think a 
dsp engineer must also have a musical background to understand how important 
these aspects are.

My concern is what will happen to us when all the vintage analog stuff has been 
modeled muahahahah!
I guess (hope) I will be retired! :)

M.

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Vadim Zavalishin
Sent: Wednesday, 13 November 2013 12:54
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

On 13-Nov-13 11:56, Marco Lo Monaco wrote:
> I personally don’t think that automatic systems (DK) will be the 
> panacea of nonlinear modeling (even if everybody here is dreaming of a 
> realtime spice). Very often only a human can see patterns in circuits 
> and find shortcuts to simplify things.

+1

Besides the shortcuts, only a human can judge the critical aspects of the 
analog model being discretized. Such as

- how precise the component models should be (e.g. whether the Ebers-Moll 
transistor model is sufficient or not), where in principle this question should be 
answered for each component separately

- whether the difference between parameter values of identically marked 
components is having any critical effect

- whether the effect caused by a certain element of the model (e.g. a
nonlinearity) is musically insignificant (so that the element may be
dropped)

- to which extent we can assume independence of different parts of the device 
(ignore the current leakage and other crosstalk)

and so on.

Perhaps, if in future we have computational powers several orders of magnitude 
higher than the currently available ones, such an automatic system would be more 
realistic, as we will be able to afford ridiculously precise and detailed analog 
component models as the basis of our discretization. But from my feeling it's still a 
long long way. And then, how important is being able to automatically convert 
from analog schematics to digital? I mean there has been some amount of 
brilliant engineering work to design those analog devices, but it's not 
happening much more. So, after we have modelled them all, we are not gonna need 
any further modelling.

OTOH, the lessons we learned from attempting to model those things (and you 
learn more, if you do this "by hand" rather than by some automated
toolkit) should form an invaluable basis for the development of future 
software. We can design *new* filters, effects, etc, which all are gonna have 
"that analog sound". For that purpose of new designs (rather than modelling the 
old stuff), I believe the *continuous-time* block-diagram based approaches are 
more useful than the differential equations, as they are offering a more 
intuitive view of the signal processing (YMMV). 
The discrete-time block-diagrams are not that intuitive, in my opinion, but 
then again, you don't need to use them, if you implicitly understand the 
discretized version of the same analog block-diagram.

Regards,
Vadim

-- 
Vadim Zavalishin
Reaktor Application Architect
Native Instruments GmbH
+49-30-611035-0

www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


[music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-13 Thread Marco Lo Monaco
Andy, 
FYI, besides the K-method, De Poli/Borin/Sarti et al., also in the 1990s, 
formulated the W-method, which is the dual of the K-method but uses the wave 
digital filter theory by Fettweis.
AFAIK it is not so easily used because of the increased complexity of the 
nonlinear adaptors involved (you have shear AND rotation transformations).
To me the K-method is much more attractive, and it is the biggest step forward 
of the last 2 decades in approaching nonlinear simulation, especially regarding 
its robust formalism and the elegant math in it.
Moreover using KCL/KVL is naturally straightforward when dealing with analog 
schematics.
Using tables seems attractive, but for MIMO systems (with order greater than 2) 
it is not feasible (memory and lookup algorithms are an issue). Unfortunately, 
if you have a time-varying element (e.g. a potentiometer) you have to invert a 
matrix (H), which is affordable only at control rate, not audio rate.
Pls note that when modeling a tone stack, the K-method degenerates into the 
classic state-space approach, which is equivalent to yours (C=F=0).
I have considered Yeh's work very good since the first time I read it years 
ago. I personally don't think that automatic systems (DK) will be the panacea 
of nonlinear modeling (even if everybody here is dreaming of a realtime SPICE). 
Very often only a human can see patterns in circuits and find shortcuts to 
simplify things. Moreover there are so many other discretization schemes to be 
investigated (multistep).
IMHO the real "big leap ahead" was the K-method formalism/theory by 
Borin/De Poli, an extension of the state-space one, and I guess that in the 
future we will see more papers to come, even if I admit that lately the 
computer music group of the University of Padova has been changing its 
research targets.

Regards,
Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu 
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Andrew Simper
Sent: Wednesday, 13 November 2013 07:12
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: Trapezoidal integrated optimised SVF v2

On 10 November 2013 18:12, Dominique Würtz  wrote:
> Am Freitag, den 08.11.2013, 11:03 +0100 schrieb Marco Lo Monaco:
> I think a crucial point is that besides replicating steady state 
> response of your analog system, you also want to preserve the 
> time-varying behavior (modulating cutoff frequency) in digital domain.
> To achieve the latter, your digital system must use a state space 
> representation equivalent to the original circuit, or, as Vadim puts 
> it, "preserve the topology". By starting from an s-TF, however, all 
> this information is lost. This is in particular visible from the fact 
> that implementing different direct forms yields different modulation 
> behavior.

Yes, modulation behaviour is a very important point to me.


> BTW, in case you all aren't aware: a work probably relevant to this 
> discussion is the thesis of David Yeh found here:
>
> https://ccrma.stanford.edu/~dtyeh/papers/pubs.html
>
> When digging through it, in particular the so-called "DK method", you 
> will find many familiar concepts incorporated in a more systematic and 
> general way of discretizing circuits, including nonlinear ones. Can't 
> say how novel all this really is, still it's an interesting read anyway.
>
> Dominique

Thanks very much for this link! I have read most of these papers in isolation 
previously, but missed David Yeh's dissertation 
https://ccrma.stanford.edu/~dtyeh/papers/DavidYehThesissinglesided.pdf
which contains a great description of MNA and how it relates to the DK-method. 
I highly recommend everyone read it, thanks David!!

I really hope that an improved DK-method emerges that handles multiple 
nonlinearities more elegantly than the current one does. A couple of things to 
note here: in general, this method uses multi-dimensional tables to 
pre-calculate the difficult implicit equations that solve the non-linearities, 
but as the number of non-linearities increases so does the size of your table, 
as noted in 6.2.2:

"The dimension of the table lookup for the stored nonlinearity in K-method 
grows with the number of nonlinear devices in the circuit. A straightforward 
table lookup is thus impractical for circuits with more than two transistors or 
vacuum tubes. However, function approximation approaches such as neural 
networks or nonlinear regression may hold promise for efficiently providing 
means to implement these high-dimensional lookup functions."
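The growth Yeh describes is stark even at modest resolutions; with a hypothetical N grid points per table axis and one axis per nonlinear device:

```python
# Entries in a straightforward multi-dimensional lookup table:
# N points per axis, one axis per nonlinear device (illustrative numbers).
def table_entries(points_per_axis, num_nonlinearities):
    return points_per_axis ** num_nonlinearities

for devices in (1, 2, 3):
    print(devices, table_entries(1024, devices))
# 1 device -> 1024 entries; 2 -> ~1e6; 3 -> ~1e9, already impractical,
# matching the remark quoted above about two transistors or tubes.
```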

Also note that also in section 2.2 some basic "tone stack" circuits are 
discussed, which contain 3 capacitors 2 pots and a resistor, which are trivial 
enough to solve using direct integration methods. Yeh notes that WDF can only 
handle serial or parallel connections of components, not arbitrary ones like in 
the tonestack, an

[music-dsp] R: A rephrasing of some of my sampling theory related concerns

2013-11-13 Thread Marco Lo Monaco
Dave, that image of you yelling "FEEDBACK" in such a funny way made my day,
thanks!
:)))

M.

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of vadim.zavalishin
Sent: Tuesday, 12 November 2013 18:22
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] A rephrasing of some of my sampling theory related
concerns

 On Tue, 12 Nov 2013 17:10:15 +, Dave Gamble wrote:
> As soon as I see a
> pole in the transfer function, I yell "FEEDBACK" and run around the 
> room waving my arms. ;)

 In doing this, are you trying to compute the residue at the pole by using
the Cauchy integral formula?
 Sorry, couldn't resist the pun :-D :-D :-D

--
 Vadim Zavalishin
 Reaktor Application Architect | R&D
 Native Instruments GmbH
 +49-30-611035-0

 www.native-instruments.com
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Marco Lo Monaco
Hi Dave,
Agreed on Vadim's interpretation of the marketing gimmick about the ZDF/whatever
acronyms (btw: it took me a while to understand that 0df wasn't a hex
number!!! :) )
As I already said in my previous post, to my knowledge the "delay-free
loop" (which implies instantaneous feedback) term was coined in the 1970s by
Mitra.
We all know that the Serafini/Simper/Vadim method for the linear case is
equivalent to the classic SS formulation. The classic approach uses voltages on
capacitors, passing through an ABCD system of difference equations, while
straight integration (via the BLT) uses capacitor currents, but the ABCD
formalism in discrete time is equivalent.
As far as the nonlinear case is concerned, maybe Serafini was the first to
use the same technique (straight discretization from the diff eqs).
But (again, I want to restate this for the sake of completeness) the formalism
was designed well before him, by De Poli/Rocchesso et al. in the mid
1990s.
Those techniques (recently extended by Yeh in his PhD thesis) have a strong
formalism and theory behind them (as state space does): it is a rich
technique, and there are some theorems (not very well known in the community,
I guess) that let the overall theory be used in a very wide application field
(i.e. even for systems with nonlinearities _with_ memory).

The fictitious delay inserted in the loop is still used around - I think -
especially in those cases where the bandwidth of the feedback signal is low
compared to the audio rate (i.e. envelope followers, and NOT the case of the
Moog ladder). Adding a unit delay in the feedback path of a filter, when it is
not intended to be a delay fx, is to me a big mistake, and I am not surprised
that the final magnitude responses end up so far off as to be worthless.
In the other cases, where low bandwidth is practical, even if it is not the
best theoretical solution, I guess it is a good trade-off, since in some
circumstances the signal can really be considered quasi-static between two
samples, and it gives a benefit in CPU cost.

Ciaoo
Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Dave Gamble
Sent: Monday, 11 November 2013 17:33
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Trapezoidal integrated optimised SVF v2


On Monday, November 11, 2013, robert bristow-johnson wrote:
>
>
>> So "delay-free" is a pointless expression to me,
>
>
> it has been used to discuss or advertise delay-free feedback which, to me,
still remains an impossibility for discrete-time systems.
>
>
> and i've seen the papers.  when it all boils down to it in a real-time
filter, you are defining your current output sample in terms of the current
input sample and previous input samples and, if it's recursive, the previous
output samples.  but you cannot define your current output sample in terms
of the current output sample.
>
Sorry to interject, but I believe I can clarify this.
Obviously you're correct in your assertion (RBJ), but the phrase
"delay-free" is something akin to a marketing term that was being bandied
about a year or so ago.

Evidently it has not been clearly explained if you're none the wiser to
this.

It stems from this, and its ilk:
http://musicdsp.org/showArchiveComment.php?ArchiveID=24
which is an implementation of Stilson and Smith's (now classic?) analysis of
the Moog diode-ladder VCF.
As you can verify by inspection, a single-sample delay has been added in to
the implementation to approximate the feedback path.
It is *this* entirely spurious delay that is being used in the phrase
"delay-free". 

Above is the entire point. I'm now going to provide what I believe to be the
history of usage of the phrase, which I don't expect to be very interesting
to anyone besides myself, I'm just noting it down here.

What appears to have happened is that the musicdsp.org sample code came into
common usage for synth filters. If you quickly check the maths, you'll see
that the xfer function of the filter is rather destroyed by this "tweak for
computability". 
I am aware of this "trick" (insert a sample delay into a feedback path)
being used in quite a few places where the alternative proved too costly to
compute. By my reckoning, it died out a few years ago, though I suspect it's
still around somewhere.
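How badly such an inserted delay can wreck a transfer function shows up even in the simplest possible loop: y[n] = x[n] - k*y[n] solved algebraically is flat, while the same loop with a spurious z^-1 becomes 1/(1 + k*z^-1). A sketch with an illustrative loop gain:

```python
import cmath
import math

k = 0.9  # illustrative loop gain

def mag_solved():
    # Instantaneous loop y[n] = x[n] - k*y[n], solved algebraically:
    # H(z) = 1/(1 + k), flat at every frequency
    return 1.0 / (1.0 + k)

def mag_delayed(w):
    # Same loop with a spurious unit delay: H(z) = 1/(1 + k*z^-1)
    z = cmath.exp(1j * w)
    return abs(1.0 / (1.0 + k / z))

print(mag_solved(), mag_delayed(0.0), mag_delayed(math.pi))
# ~0.526 everywhere vs ~0.526 at DC but ~10.0 at Nyquist:
# the inserted delay turns a flat response into a sharp peak
```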

While my memory is absolutely not to be trusted, I think I remember chatting
with Andy about this, and I'm sure I have this wrong, but at the time, he
wasn't convinced that things such as the RBJ filters were technically "delay
free". I think I made the argument that if someone were to go about
inserting arbitrary unit delays into circuit models, the magnitude responses
would be so far off as to be worthless. In any case, I'm extremely pleased
to see Andy clearly state the case as above.

At some point, the process of using algebraic rearrangements such as Andy
and Vadim (as progenitor of this phase. The previous phase obvious to me
being Serafini, but I'm

[music-dsp] R: Time Varying BIBO Stability Analysis of Trapezoidal integrated optimised SVF v2

2013-11-11 Thread Marco Lo Monaco
Hi Ross/Andrew,
I couldn't wait to put my hands on it :) (the creativity/curiosity spark is
always lit)

Here is my MuPad notebook :

https://www.dropbox.com/s/er1s0aeheuv3igg/VCF-StateVariableFilter.pdf

I basically demonstrate what I already said in my previous posts.
The standard state-space approach leads to results identical to your
algorithm, I would say even without the trick of the TPT, because of course
we are talking about an instantaneous _linear_ feedback.
That makes sense: "my" ABCD state-space matrices are identical to "yours",
and the same applies to the CPU load in terms of MULs/ADDs.

Of course the main purpose of my analysis was to keep in mind that you will
_always_ have to deal with an "implicit"/hidden inversion of a matrix A of
the analog system (actually (I-A*h/2)) of the same order as your system,
which in this fortunate SVF case is 2 and also easy to invert. For a moment
think about an analog system that has 10 capacitors and that you want to
change at audio rate: you will have to deal with a0,a1,a2... coeffs that are
rational polynomials, meaning lots of DIVs at runtime. Generally speaking,
inverting a matrix at audio rate is not a good idea :)
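The (I - A*h/2) inversion mentioned above can be made concrete with a small sketch (my own illustration, not from the MuPad notebook; the 2x2 inverse is written out explicitly):

```python
import cmath
import math

def trap_discretize_2x2(A, h):
    """Trapezoidal ("bilinear") discretization of x' = A x for a 2x2 A:
    Ad = inv(I - A*h/2) * (I + A*h/2) -- note the implicit matrix inversion."""
    a, b = A[0]
    c, d = A[1]
    m = [[1.0 - a * h / 2, -b * h / 2],       # M = I - A*h/2
         [-c * h / 2, 1.0 - d * h / 2]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],   # explicit 2x2 inverse
           [-m[1][0] / det, m[0][0] / det]]
    n = [[1.0 + a * h / 2, b * h / 2],        # N = I + A*h/2
         [c * h / 2, 1.0 + d * h / 2]]
    return [[sum(inv[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# damped analog resonator x'' + 2*zeta*w*x' + w^2*x = 0 in companion form
w, zeta, h = 2.0 * math.pi * 2000.0, 0.5, 1.0 / 48000.0
Ad = trap_discretize_2x2([[0.0, 1.0], [-w * w, -2.0 * zeta * w]], h)

# eigenvalues of the 2x2 discrete matrix via the characteristic polynomial
tr = Ad[0][0] + Ad[1][1]
dt = Ad[0][0] * Ad[1][1] - Ad[0][1] * Ad[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * dt)
eig_mags = [abs((tr + disc) / 2.0), abs((tr - disc) / 2.0)]
```

Since the trapezoidal rule is the bilinear map, the stable analog poles (left half plane) land strictly inside the unit circle; at order 10 the same inverse becomes the runtime burden described above.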

As I already said, we agree that these approaches date back a long way.

Ross showed the same results for matrices A and B (he didn't show C and D
because they are not needed for the Laroche BIBO analysis). His results
match mine.

Sorry if I omitted some code in collecting state/in/out variables from your
original algo, but I already had 6 pages of pdf this way and I thought it
would be a good idea to keep it simple.

Hope this helps clarify my point of view.

Ciao

Marco


-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Ross Bencina
Sent: Sunday, 10 November 2013 16:58
To: A discussion list for music-related DSP
Subject: [music-dsp] Time Varying BIBO Stability Analysis of Trapezoidal
integrated optimised SVF v2

Hi Everyone,

I took a stab at converting Andrew's SVF derivation [1] to a state space
representation and followed Laroche's paper to perform a time varying BIBO
stability analysis [2]. Please feel free to review and give feedback. I only
started learning Linear Algebra recently.

Here's a slightly formatted html file:

http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.html

And the corresponding Maxima worksheet:

http://www.rossbencina.com/static/junk/SimperSVF_BIBO_Analysis.wxm

I had to prove a number of the inequalities by cut and paste to Wolfram
Alpha, if anyone knows how to coax Maxima into proving the inequalities I'm
all ears. Perhaps there are some shortcuts to inequalities on rational
functions that I'm not aware of. Anyway...

The state matrix X:

[ic1eq]
[ic2eq]

The state transition matrix P:

[-(g*k+g^2-1)/(g*k+g^2+1), -(2*g)/(g*k+g^2+1)     ]
[ (2*g)/(g*k+g^2+1),        (g*k-g^2+1)/(g*k+g^2+1)]

(g > 0, 0 < k <= 2)

Laroche's method proposes two time varying stability criteria both using the
induced Euclidian (p2?) norm of the state transition matrix:

Either:

Criterion 1: norm(P) < 1 for all possible state transition matrices.

Or:

Criterion 2: norm(TPT^-1) < 1 for all possible state transition matrices,
for some fixed constant change of basis matrix T.

norm(P) can be computed as the maximum singular value or the positive square
root of the maximum eigenvalue of P.transpose(P). I've taken a shortcut and
not taken square roots since we're testing for norm(P) strictly less than 1
and the square root doesn't change that.

From what I can tell norm(P) is 1, so the trapezoidal SVF filter fails to
meet Criterion 1.

The problem with Criterion 2 is that Laroche doesn't tell you how to find
the change of basis matrix T. I don't know enough about SVD, induced p2 norm
or eigenvalues of P.P' to know whether it would even be possible to cook up
a T that will reduce norm(P) for all possible transition matrices. Is it
even possible to reduce the norm of a unit-norm matrix by changing basis?

From reading Laroche's paper it's not really clear whether there is any way
to prove Criterion 2 for a norm-1 matrix. He kind-of side steps the issue
with the norm=1 Normalized Ladder and ends up proving that norm(P^2)<1. This
means that the Normalized Ladder is time-varying BIBO stable for parameter
update every second sample.

Using Laroche's method I was able to show that Andrew's trapezoidal SVF
(state transition matrix P above) is also BIBO stable for parameter updates
every second sample. This is the final section of the linked file above.
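The two claims above are easy to sanity-check numerically. A quick sketch (pure Python, my own code, independent of the Maxima worksheet):

```python
import math

def transition(g, k):
    """State transition matrix P of the trapezoidal SVF, as given above."""
    d = g * k + g * g + 1.0
    return [[-(g * k + g * g - 1.0) / d, -2.0 * g / d],
            [2.0 * g / d, (g * k - g * g + 1.0) / d]]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm2(P):
    """Induced Euclidean norm = sqrt(max eigenvalue of P^T P), 2x2 closed form."""
    a = P[0][0] ** 2 + P[1][0] ** 2
    b = P[0][0] * P[0][1] + P[1][0] * P[1][1]
    c = P[0][1] ** 2 + P[1][1] ** 2
    lam_max = ((a + c) + math.sqrt((a - c) ** 2 + 4.0 * b * b)) / 2.0
    return math.sqrt(lam_max)

params = [(g, k) for g in (0.1, 0.5, 1.0, 10.0) for k in (0.5, 1.0, 2.0)]
norm_P = [norm2(transition(g, k)) for g, k in params]
norm_P2 = [norm2(mmul(transition(g, k), transition(g, k))) for g, k in params]
```

On this grid the norm of P comes out as exactly 1 (Criterion 1 fails), while the norm of P^2 stays strictly below 1, consistent with the every-second-sample result.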

If anyone has any further insights on Criterion 2 (is it possible that T
could exist?) I'd be really interested to hear about it.

Constructive feedback welcome :)

Thanks,

Ross


[1] Andrew Simper trapezoidal integrated SVF v2
http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf

[2] On the Stability of Time-Varying Recursive Filters
http://www.aes.org/e-lib/browse.cfm?elib=14168

[music-dsp] R: R: R: Trapezoidal integrated optimised SVF v2

2013-11-10 Thread Marco Lo Monaco
Hi Andrew,
you misinterpreted my words :) I know you are not "intentionally" hiding
anything.
The computations are intrinsically hidden because of the nature of your
approach (solving the differential eqs directly instead of using the ABCD
matrices). All the coeffs involved in the input, output and state vars
belong to some matrix, which is why I say it's a statespace-like algo.

AFAIK the "transient suppressor" technique is a nice analysis of the problem
and suggests a fix (which, btw, is expensive). I am not aware of anything
that works well enough to keep a given filter time-varying with
theoretically no artifacts, though I have been working on something related
recently (just got some ideas and I can't wait to test them).
Laroche suggested an analysis to understand whether a filter is BIBO stable
during transients, but I remember in his paper he says that's not the same
as a glitch-free test (of course).

M.

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Andrew Simper
Sent: Sunday, 10 November 2013 13:18
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: R: Trapezoidal integrated optimised SVF v2

On 10 November 2013 18:43, Marco Lo Monaco  wrote:
> if you look at Yeh's work you can have an idea. The (D)KMethod is a 
> generalization/extension of the state space ABCD approach to analog
systems.
> Vadim's and Andrew are basically the same thing and the inversion is 
> hidden in the calculation of the coeffs and also takes benefit of the 
> order 2 size of matrix A (which is very simple to invert).

I didn't mean to hide anything from you :) I have mentioned MNA or modified
nodal analysis, and in all the links to qucs it shows how to add entries to
the matrices involved in the solution of the circuit equations. Here is a
direct link to the MNA matrix formulation:

http://qucs.sourceforge.net/tech/node14.html


> There is also a lot of good work made by the finnish guys (Valimaki et 
> al) about the usage of the so called "transient suppressors".

Transient suppressors just scream to me of an underlying problem that
should be fixed.


> Without telling too much (sorry I cant :) ) if I have time I will show 
> the similarity and the matrix inversion problem analyzing the SVF via 
> a statespace approach similar to Andrew's. I am unfortunately fully 
> loaded of work, but as I get some free time I will try to publish a pdf.
>
> My 0.02EUR >;-)
>
> Marco

No rush on any of this, whenever you get a chance it would be appreciated.

All the best,

Andy
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp



[music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-10 Thread Marco Lo Monaco
Hi Andrew/Dominique,
The DK-Method is a systematic way to implement automatically the much
earlier breakthrough approach called the K-Method, which (how fun) was
discovered by my professor in the late 1990s.
I can't tell the whole story either (it would require a lot of time, because
it's a well defined and robust theory analogous to the state-space one), but
if you look at Yeh's work you can get an idea. The (D)K-Method is a
generalization/extension of the state-space ABCD approach to analog systems.
Vadim's and Andrew's are basically the same thing, and the inversion is
hidden in the calculation of the coeffs and also benefits from the order-2
size of matrix A (which is very simple to invert). What I want to point out
is that it is intrinsically a state-space matrix formulation where time
variance of the system demands a matrix inversion either at audio or control
rate, much the same as how you convert the analog ABCD matrices to their
digital equivalent in Matlab via bilinear().
Beware that using state space for time-varying systems is better but not the
best. In ANY system you of course preserve the values of the state variables
(which is good compared to a TF), but that doesn't mean that you won't have
artifacts or transients at all.
There is also a lot of good work made by the finnish guys (Valimaki et al)
about the usage of the so called "transient suppressors".

Without telling too much (sorry, I can't :) ), if I have time I will show
the similarity and the matrix-inversion problem by analyzing the SVF via a
state-space approach similar to Andrew's. I am unfortunately fully loaded
with work, but as I get some free time I will try to publish a pdf.

My 0.02EUR >;-)

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Dominique Würtz
Sent: Sunday, 10 November 2013 11:13
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: Trapezoidal integrated optimised SVF v2

On Friday, 08.11.2013, at 11:03 +0100, Marco Lo Monaco wrote:
> Being in the linear modeling field, I would rather have analyzed the 
> filter in the classic virtual analog way, reaching an s-domain 
> transfer function, which has the main advantage that it is amenable to 
> many discretization
> techniques: bilinear (trapezoidal), Euler back/fwd, but also multi-step 
> like Adams-Moulton etc. Once you have the s-domain TF you just 
> need to push into s the correct formula involving z and simplify the new 
> H(z), which is ready to be implemented in DF1/2.

I think a crucial point is that besides replicating the steady-state
response of your analog system, you also want to preserve the time-varying
behavior (modulating cutoff frequency) in the digital domain.
To achieve the latter, your digital system must use a state-space
representation equivalent to the original circuit, or, as Vadim puts it,
"preserve the topology". By starting from an s-TF, however, all this
information is lost. This is particularly visible in the fact that
implementing different direct forms yields different modulation behavior.

BTW, in case you all aren't aware: a work probably relevant to this
discussion is the thesis of David Yeh found here:

https://ccrma.stanford.edu/~dtyeh/papers/pubs.html

When digging through it, in particular the so-called "DK method", you will
find many familiar concepts incorporated into a more systematic and general
way of discretizing circuits, including nonlinear ones. I can't say how
novel all this really is, but it's an interesting read anyway.

Dominique




[music-dsp] R: R: Trapezoidal integrated optimised SVF v2

2013-11-09 Thread Marco Lo Monaco
Hi Andrew,

>>I think it's useful for everyone, and especially those wanting to handle
non-linearities or other music behaviour.

Yes, but people working in this field and doing virtual analog have
known these tricks for at least 10 years. :)

> Being in the linear modeling field, I would rather have analyzed the 
> filter in the classic virtual analog way, reaching an s-domain 
> transfer function, which has the main advantage that it is amenable to 
> many discretization
> techniques: bilinear (trapezoidal), Euler back/fwd, but also multi-step 
> like Adams-Moulton etc. Once you have the s-domain TF you just 
> need to push into s the correct formula involving z and simplify the new 
> H(z), which is ready to be implemented in DF1/2.

>> You don't need the Laplace space to apply all these different
discretization (numerical integration) methods, they all come from the time
domain and can be derived pretty quickly from first principles.

Well, of course the s = (2/T)(z-1)/(z+1) conversion comes from discretizing
a differential equation. Using Laplace is simply more handy IMHO. Mainly
because once you have your state-space model (or TF) representation, you can
choose (via a CAS tool) how best to discretize it. Moreover, modeling an
analog network with s-domain impedances is sometimes quicker/easier. But I
understand it's only a routine habit and a matter of taste. I remember I
once modeled a 50-equation linear system via KVL/KCL, staying entirely in
the continuous domain, and because I wanted to see different behaviors I
chose the Laplace representation. If I had to solve those linear systems
several times with different integration rules I would have gone nuts
:)
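The bilinear substitution s = (2/T)(z-1)/(z+1) and its tan() frequency warping can be checked numerically in a few lines (my own sketch, not from any of the workbooks discussed here):

```python
import cmath
import math

fs = 48000.0
T = 1.0 / fs

def s_of_z(z):
    """Bilinear substitution: s = (2/T) * (z - 1) / (z + 1)."""
    return (2.0 / T) * (z - 1.0) / (z + 1.0)

# Points on the unit circle map onto the imaginary (analog frequency) axis,
# with the familiar warping w_analog = (2/T) * tan(w_digital * T / 2).
checks = []
for f in (100.0, 1000.0, 10000.0, 20000.0):
    w = 2.0 * math.pi * f
    s = s_of_z(cmath.exp(1j * w * T))
    warped = (2.0 / T) * math.tan(w * T / 2.0)
    checks.append((abs(s.real), abs(s.imag - warped) / warped))
```

The vanishing real part confirms the unit circle maps to the jw axis, which is why prewarping the cutoff with tan() is needed in the first place.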


> What I would say more about this method is that, since it is 
> intrinsically a biquad, you not only have to prewarp the cutoff fc but 
> also the Q. In such

>> Are you talking about bell filters here? For a low pass resonant filter
it is hard to warp the Q since there is no extra degree of freedom to keep
its gain down, so I'm not sure how prewarping the Q is possible in this
case, but I'd love to hear if it can be done.

Not only bell filters, though I could be wrong on this. I always took the
RBJ cookbook as a bible, and he doesn't really say that the Q can't be
prewarped for LPF/HPF starting from the analog Q. Maybe RBJ can correct me :)

>> You can do all the same warping no matter if you go through the Laplace
space or directly integrate the circuits, and it doesn't matter what
realisation you are using; in particular you can have a look at my previous
workbook where I matched the SVF to Robert's (RBJ's) shapes:

I would love to see it, if you have the chance.

>> The basic idea in most circuit simulators is to linearise the non-linear
bits, then iterate to converge on a solution (find the zero) of the
equations, which also include handling your integration method (but a final
single step after convergence is needed to update the states). This is all
done by turning everything into y = m x + b form, since then if "y" is on
both sides, like y = m (x - y) + b, then you can easily solve it:
y + m y = m x + b, y (1 + m) = m x + b, y = (m x + b)/(1 + m), and that is
about as hard as it gets. For each implicit dependency you have a division
to eliminate it, and sometimes you can group the divisions. Note that the
implicit dependency could be either a linear one (which means you can solve
it in one step) or a non-linear one (which means you need to iterate).

>> I'll post another workbook showing how straightforward this is when I get
a chance.
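The quoted y = m(x - y) + b algebra, and the iteration needed in the non-linear case, can be sketched in a few lines (my own illustration; the tanh example is a stand-in non-linearity, not code from any workbook mentioned here):

```python
import math

# Linear implicit dependency:  y = m*(x - y) + b  =>  y = (m*x + b)/(1 + m)
def solve_linear(m, x, b):
    return (m * x + b) / (1.0 + m)

# Non-linear implicit dependency:  y = tanh(m*(x - y)),
# solved by Newton iteration on f(y) = y - tanh(m*(x - y)).
def solve_tanh(m, x, iters=50):
    y = 0.0
    for _ in range(iters):
        t = math.tanh(m * (x - y))
        f = y - t
        fp = 1.0 + m * (1.0 - t * t)  # f'(y) >= 1, so the step is well defined
        y -= f / fp
    return y

m, x, b = 2.0, 0.7, 0.1
y_lin = solve_linear(m, x, b)
resid_lin = y_lin - (m * (x - y_lin) + b)   # should be ~0
y_nl = solve_tanh(m, x)
resid_nl = y_nl - math.tanh(m * (x - y_nl)) # should be ~0 after iterating
```

The linear case needs exactly one division; the non-linear case pays one division per Newton step, which is the cost difference the quoted text is pointing at.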

I guess that this method is very similar to Vadim's, and it's basically the
very famous "delay-free loop problem", first faced AFAIK in 1975 by
Szczupak and Mitra in "Detection, location and removal of delay-free loops
in digital filter configurations". So to me it's nothing new, but I work in
the field, so I guess I am more used to this stuff than others who are not
involved in virtual analog plus nonlinearity. Others have enhanced and
generalized it in the following years.

Btw, your approach (and Vadim's) looks more like a state-space one than a
TF one. It can easily be generalized to the MIMO case (your SVF is SIMO, 1
in, 3 outs). Note that if you try to generalize, you will end up with a
matrix representation where the transition matrix has to be inverted. So my
guess is that for time-varying parameters at audio rate that might not be so
CPU-friendly (inversion is always a pain in the ass). Nonetheless the SVF is
quite a fortunate topology that simplifies the inversion problem.

Take care

Marco



[music-dsp] R: Trapezoidal integrated optimised SVF v2

2013-11-08 Thread Marco Lo Monaco
Hi guys,
the work that Andrew did is of course a classic way to implement
discretization of any analog filter without any s-domain analysis,
discretizing directly from the time-domain differential eqs. I think it is
useful for people here without a Master's-level education to review these
concepts from time to time, and that's why I think it should be well
received.
Being in the linear modeling field, I would rather have analyzed the filter
in the classic virtual analog way, reaching an s-domain transfer function,
which has the main advantage that it is amenable to many discretization
techniques: bilinear (trapezoidal), Euler back/fwd, but also multi-step like
Adams-Moulton etc. Once you have the s-domain TF you just need to push into
s the correct formula involving z and simplify the new H(z), which is ready
to be implemented in DF1/2.

What I would say more about this method is that, since it is intrinsically a
biquad, you not only have to prewarp the cutoff fc but also the Q. In such
cases I typically use the analog s-domain TF and then also compensate the Q
via the very famous RBJ cookbook (compute the analog Q and redesign the
digital biquad with the fc, Q and gain params). Compensating the Q is
important not only because you prevent the stretching as your cutoff reaches
Nyquist, but also because it minimizes the same stretch at different
sampling frequencies.

Nonetheless I would like to ask Andrew whether he has time to show how he
deals with a tanh-like nonlinearity in his approach: I think that would be
very interesting and would raise the discussion to a higher level.

Ciaoo

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Andrew Simper
Sent: Wednesday, 6 November 2013 10:46
To: A discussion list for music-related DSP
Subject: [music-dsp] Trapezoidal integrated optimised SVF v2

Here is an updated version of the optimised trapezoidal integrated SVF,
which bundles up all previous state into equivalent currents for the
capacitors, which is how I solve non-linear circuits (although the solution
I'm posting here is just the linear one). The only thing to note is that
with trapezoidal integration you have gain terms of g =
tan(pi*cutoff/samplerate), which become very large at high cutoffs, so care
needs to be taken wherever these "g" terms stand alone, since the scaling
can get large and could impact numerical performance:

http://www.cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf
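In code, the linear update might be sketched roughly as follows (my own variable names and coefficient grouping, loosely following the structure described in the linked PDF; treat the details as an assumption and check the workbook itself):

```python
import math

def make_svf(cutoff, q, fs):
    """Linear trapezoidal SVF sketch: state kept as capacitor
    "equivalent currents" ic1eq/ic2eq, returning the low-pass output."""
    g = math.tan(math.pi * cutoff / fs)  # the large-at-high-cutoff gain term
    k = 1.0 / q
    a1 = 1.0 / (1.0 + g * (g + k))
    a2 = g * a1
    a3 = g * a2
    ic1eq = ic2eq = 0.0

    def tick(v0):
        nonlocal ic1eq, ic2eq
        v3 = v0 - ic2eq
        v1 = a1 * ic1eq + a2 * v3           # band-pass-ish node
        v2 = ic2eq + a2 * ic1eq + a3 * v3   # low-pass node
        ic1eq = 2.0 * v1 - ic1eq            # trapezoidal state updates
        ic2eq = 2.0 * v2 - ic2eq
        return v2

    return tick

svf = make_svf(1000.0, 0.707, 48000.0)
out = 0.0
for _ in range(4000):
    out = svf(1.0)  # DC input: the low-pass output should settle to 1.0
```

A DC step settling exactly to the input value is a cheap sanity check that the implicit trapezoidal feedback was solved correctly.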

Here is a similar thing done for a Sallen Key low pass:

http://www.cytomic.com/files/dsp/SkfLinearTrapOptimised2.pdf

Please note there is absolutely nothing new here; this is all standard
circuit maths that has been around for ages, and all the maths behind it was
invented by people like Newton, Leibniz, and Euler. They deserve all the
credit, and they came up with ways of solving not just this linear case but
also the non-linear case. Depending on what you are doing, trapezoidal may
not be the best integrator to use, so most systems for solving these
equations support several types of integrator. Here are some handy
references:

http://en.wikipedia.org/wiki/Capacitor
http://en.wikipedia.org/wiki/Nodal_analysis
http://qucs.sourceforge.net/tech/node26.html
http://www.ecircuitcenter.com/SPICEtopics.htm

Please let me know if there are any mistakes. Enjoy!

Andy
--
cytomic - sound music software


[music-dsp] R: Effects paradigms

2013-10-10 Thread Marco Lo Monaco
Hi Stephen, I would also suggest the articles by Dattorro in which he
explains modulation effects. Be warned that there is a mistake in the block
diagram of the comb filtering used within, related to the dry and wet
signal paths.
https://ccrma.stanford.edu/~dattorro/EffectDesignPart1.pdf
https://ccrma.stanford.edu/~dattorro/EffectDesignPart2.pdf

The error is in Fig. 36 of Part II, where the signal to be blended should be
taken before the summing node, not after it.

Have fun

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of ChordWizard
Software
Sent: Friday, 4 October 2013 06:58
To: music-dsp@music.columbia.edu
Subject: [music-dsp] Effects paradigms

Hi all,

I'm wondering if someone can point me to some good background articles that
illuminate what is happening to the signal with common effects such as
reverb, chorus, flanger, etc.

I'm not specifically talking about algorithm strategies here - although I am
also interested in them, if you can recommend anything.

Rather I am generally curious about the conceptual processes that are being
implemented with these effects.

For example, I imagine that reverb is adding a series of duplicates of the
original signal at regular delays with descending amplitudes.  Or is there
something more to it?

Obviously overdrive involves clipping, but there are so many varieties
around, there must be a lot more to it than that.

And I have very little idea what is happening with chorus, flanger, phaser,
etc.  Why does chorus often have a stereo output, does this naturally arise
from the effect design?

Any pointers much appreciated.

Regards,

Stephen Clarke
Managing Director
ChordWizard Software Pty Ltd
corpor...@chordwizard.com
http://www.chordwizard.com
ph: (+61) 2 4960 9520
fax: (+61) 2 4960 9580




[music-dsp] R: R: Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-05-18 Thread Marco Lo Monaco
Thank you Theo for sharing your point of view.
Of course I cannot take into account real DA reconstruction filter effects,
which of course are not ideal. We settle for judging quality by inspecting
the signals in the digital time domain.
As for integration methods, there is a nice one which is basically a
minimum-phase integrator, taken as a weighted average of two discretizations
of 1/s: the bilinear one and the backward Euler difference. The paper is
titled "Novel Digital Integrator and Differentiator" (M.A. Al-Alaoui). It
behaves much better than either classical integrator on its own.
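As a sketch of the construction (from memory; the commonly quoted weights are 3/4 rectangular and 1/4 trapezoidal, but treat that as an assumption and check the paper for the exact derivation):

```python
import cmath
import math

T = 1.0 / 48000.0

def rect(z):
    """Backward rectangular (Euler) integrator: T*z/(z - 1)."""
    return T * z / (z - 1.0)

def trap(z):
    """Trapezoidal (bilinear) integrator: (T/2)*(z + 1)/(z - 1)."""
    return (T / 2.0) * (z + 1.0) / (z - 1.0)

def alaoui(z):
    """Assumed Al-Alaoui-style blend: 3/4 rectangular + 1/4 trapezoidal."""
    return 0.75 * rect(z) + 0.25 * trap(z)

def alaoui_closed(z):
    """The same blend simplified algebraically: (T/8)*(7z + 1)/(z - 1)."""
    return (T / 8.0) * (7.0 * z + 1.0) / (z - 1.0)

errs = []
for f in (100.0, 1000.0, 10000.0):
    z = cmath.exp(2j * math.pi * f * T)
    errs.append(abs(alaoui(z) - alaoui_closed(z)))
```

The closed form shows why the result is minimum phase: the blend moves the trapezoidal zero from z = -1 inside the unit circle (to z = -1/7 under these weights).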

Hope this sounds interesting to you

Marco

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Theo Verelst
Sent: Friday, 17 May 2013 15:20
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] R: Sweeping tones via alias-free BLIT synthesis and
TRI gain adjust formula

Marco Lo Monaco wrote:
> Thank you Nigel.
> Well my synth has strict specs, and of course wavetable is a solution.
[...]

The sinc function is a solution for preparing Perfect Reconstruction
(meaning getting theoretically back to the analog waveform which was sampled
or virtually sampled). When you do integration in the sample domain, you
need to take into account that most integration methods do not honor perfect
reconstruction. Also, *any* FM modulation in theory has unlimited spectrum,
so there should in principle be methods built in to deal with this, if you
want to dot the i's. Also, when you use (like almost everybody) a far from
perfectly reconstructing DA conversion (or serious upsampling) -- which you
can tell is the case when the DA conversion or upsampling takes at most
milliseconds of delay (it could well take seconds of throughput delay to do
Perfect Reconstruction DA conversion) -- the more nominal DA converters have
their own impulse response/averaging logic, which can be used to make
medium-length signal considerations to average out aliasing, or whatever you
want to call such DA errors. Of course it depends on what your synth output
is connected to: in some cases you just want to make sure aliasing isn't
bad, in other cases you want an exact prepared waveform through the DA
converter, and in yet other cases you want to make sure your output can be
recognized by pro equipment (which for instance can recognize the intent of
your filtering and waveform mangling from certain general criteria). Hope
this helps.

Theo V.


[music-dsp] R: Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-05-18 Thread Marco Lo Monaco
Thank you Nigel for your advice.
The problem with overlapping minBLEPs at higher freqs is not only the
increased CPU cost, but the fact that subsequent overlaps do not add up to a
constant, so at the beginning of the tone generation you will see a
staircase function (for periods in samples < 32) added to the requested
waveform.
With periods > 32 samples the steps only add when the average signal is
zero, whilst with periods of, let's say, 16 samples, starting from silence,
you will have a minBLEP reaching ~1.0 at around 16 samples' distance, and
then another one starting and adding up to ~2.0 over the remaining 16
samples. Hope you get what I mean.
That's why I think the solution is to use a smaller table size.

Marco
PS: yes if you have to upload some videos I would be interested. :)

-Original Message-
From: music-dsp-boun...@music.columbia.edu
[mailto:music-dsp-boun...@music.columbia.edu] On behalf of Nigel Redmon
Sent: Saturday, 18 May 2013 03:25
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Sweeping tones via alias-free BLIT synthesis and
TRI gain adjust formula

Thanks for the detailed explanation, Marco.

For minBLEPs, I did generate my own tables, though that was lost and I'd
have to recreate that…but at the moment I'm not in search of a "classic
synth waveforms only" solution. (And if I were, I might well go the BLIT
route instead.)

For periods less than the minBLEP size, I'm pretty sure you need to overlap
multiple minBLEPS, making the solution less attractive at higher
frequencies…

Nigel


On May 17, 2013, at 2:29 AM, Marco Lo Monaco 
wrote:

> Thank you Nigel.
> Well my synth has strict specs, and of course wavetable is a solution.
> My mem usage computation was related to having perfectly NO alias for 
> all midi notes. I well understand that by accepting small (inaudible) 
> alias the tables can be shrunk (and I will consider your suggestion, 
> since a dyadic approach is of course needed as you go up in frequency).
> 
> The 300 Mb requirement was done considering that the specs are to have 
> 1 sample rise time at lowest midinote. That means that oversampling 
> there should use around 5000 pts for full harmonic content to Nyquist. 
> Then, keeping that resolution as minimum, scaling to Nyquist for all 
> other frequencies requiring absolutely no foldover, means that we need 
> 2756 harmonics for midi note 0 and thus  2756 wavetables as harmonics 
> disappear with increasing midi notes. 2756 (max harms) * (5001 * 4 
> floats) * 3
> (TRI/SAW/SQR) = 165Mb (maybe I said 300Mb considering 64 bit floats,
duh!).
> As a starting point I tried that approach on Scilab and I found 2
problems: 
> 1) the PWL interpolation spectrum is not comparable with BLIT which 
> has the lowest roundoff noise
> 2) how to deal with added/cut harmonics (wavetable switch) without 
> glitching audio (xfade solutions are likely, meaning that we need 
> twice the read access when pitch-bending).
> Yes, PWM is usually done via SAW subtraction, and of course I could use 
> the same tables to make a SQR, at the cost of double memory read 
> access (a good suggestion though ;) )
> 
> If you didn't try BLIT, I think it is the best pure-tone solution for 
> steady signals. No memory usage, a simple quadrature oscillator, and a 
> few adds/muls to generate an alias-free tone. I think that with 
> 64-bit precision even leakage is not a very big issue. My synth specs 
> call for a lot of voices, crystal-quality tones and as little memory as 
> possible (ha!).
> 
> I also tried minBLEP (because I need hard sync). Very strangely, on 
> Scilab I implemented rceps() with the same functions that the Matlab 
> help gives, but I get a differently shaped minBLEP. Then I used Eli's 
> original 64x OS
> Nz=32 .mat file (imported to Scilab) that I found on the music-dsp 
> archive by lo...@rpgfan.demon.co.uk, and using it I still get 
> aliasing on the saw, which doesn't even have the time-domain shape of 
> a textbook alias-free saw (the sinc-like oscillations are only on the 
> falling front and not at the end of the ramp). So all the expectations 
> that I had for this method have been deflated quite a bit! :(
> 
> Moreover, how to use that minBLEP for periods of less than 32 samples 
> is still an open question. Maybe just use different minBLEP tables, 
> but their resolution would always be less in terms of Nz, and I don't 
> understand clearly how it would affect the aliasing (Eli is not so 
> clear about it, or at least I don't get it :) ).
> 
> My idea is that I will have to use different techniques depending on 
> the
> context: BLIT is working very well, the sincM formula is so pretty. I 
> am also trying to use the SWS method (truncated sinc in an OLA 
> fashion) to see if frequency modulation can be reliable in the

[music-dsp] R: Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-05-17 Thread Marco Lo Monaco
hs more critically sampled (second table 1024, etc.,
although I'd add a little back by making the top tables oversampled). So,
something like a factor of four—22k for the set instead of 88k. Also, you
can cut either method (variable or fixed table sizes) in half if you decide
40 Hz is low enough and you can give up a little in the bottom octave
(again, you won't hear it on most things, because you'll either use a wave
with little energy up there, or a lowpass filter).

Still, if it's running on a host computer, memory is cheap. Even the 88k per
table is no big deal, but you can drop that to a fourth or eighth if you
want.

For a square/rectangle wave, just use two phases of saw—no new wavetables.

For sine, you already have it in the top octave of the saw—or any other
waveform—no new table.

For triangle, the harmonics drop off so quickly (inverse square) that you
don't need as many tables if you want to optimize; it would be a bigger
savings with variable tables (about half, just by losing the lowest
wavetable).

For arbitrary waveforms, you build only as many tables in a set as the
harmonic content requires. It's easy enough to have your table-generation
algorithm be smart about the strength of upper harmonics too, to optimize.
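Nigel's point about harmonic-aware table generation can be sketched in a few lines. This is a hypothetical illustration, not code from his tutorial: the function name, the one-table-per-octave layout, and the 512-point table size are my assumptions.

```python
import math

def make_saw_table(table_len, top_freq, sample_rate=44100.0):
    # Keep only the harmonics that stay below Nyquist at the highest
    # pitch (top_freq) this table will ever be played at.
    n_harmonics = int((sample_rate / 2.0) // top_freq)
    table = [0.0] * table_len
    for k in range(1, n_harmonics + 1):
        amp = 1.0 / k  # sawtooth rolloff: harmonic k at amplitude 1/k
        for i in range(table_len):
            table[i] += amp * math.sin(2.0 * math.pi * k * i / table_len)
    peak = max(abs(s) for s in table) or 1.0
    return [s / peak for s in table]  # normalize to +/-1

# one 512-point table per octave, from 40 Hz upward
tables = [make_saw_table(512, 40.0 * 2.0 ** octv) for octv in range(8)]
```

The top tables come out nearly sinusoidal because only a handful of harmonics fit below Nyquist, which is exactly why they could be stored smaller.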

And of course, even if you support 1000 wave types, it's unlikely you'll
have more than a few loaded into memory at one time.

Check out my tutorial, with code (start from the bottom up):

http://www.earlevel.com/main/category/digital-audio/oscillators/wavetable-oscillators/

I'm trying to finish a short video with examples, to upload soon.

Nigel



On May 16, 2013, at 9:05 AM, Marco Lo Monaco wrote:

> Hi guys, here is a repost of a conversation between me and RBJ under
> his permission, since he couldn't send to the NG via plain text from
> his browser.
> Please, if some of you guys have some suggestions, it would be very
> much appreciated.
> Marco
> 
>  Original Message 
> --------
> Subject: [music-dsp] Sweeping tones via alias-free BLIT synthesis and 
> TRI gain adjust formula
> From: "Marco Lo Monaco" 
> Date: Tue, May 14, 2013 6:34 am
> To: music-dsp@music.columbia.edu
> --
> 
>>> I am here asking what is the best practice to deal with frequency
> modulation via BLIT generation.
> 
> hi Marco,
> 
> i've fiddled with BLIT a while ago. as i recall i generated a string of
> sinc() functions instead of a string of impulses and i integrated them
> along with a little DC bias to get a saw. for square it was two little
> BLITs, alternating in sign, per cycle, and integrated. and triangle
> was an integrated square. i found generating the BLITs to be
> difficult, i eventually just used wavetables for the BLITs at
> different ranges. and then i thought why not just use wavetables to
> generate the waveforms directly.
> i know with the sawtooth, the bias going into the integrator had to 
> change as the pitch changes. i dunno what to do about leftover 
> "charge" left in the integrator other than maybe make the integrator a 
> little bit leaky. so that is my only suggestion, if you have the rest 
> of the BLIT already licked.
> 
> the net advice i can give you is to consider some other method than
> BLIT (like wavetable) and if you're using BLIT with a digital
> integrator, you might have to make the integrator a little "leaky" so
> that a DC component inside can "leak" out.
> 
> bestest,
> 
> r b-j
> 
>  Original Message 
> 
> Subject: R: [music-dsp] Sweeping tones via alias-free BLIT synthesis 
> and TRI gain adjust formula
> From: "Marco Lo Monaco" 
> Date: Wed, May 15, 2013 4:46 am
> To: r...@audioimagination.com
> --
> 
> 
> Hi Robert,
> I tried with leaky: the thing is that it seems you need to compensate
> for amplitudes if you are using a fixed-cutoff leaky integrator (if
> you filter a 12 kHz BLIT, its amplitude with a 5 Hz LPF will be much
> lower than a 100 Hz BLIT's, and compensating the amplitude could
> generate roundoff noise at high frequencies). As a hack one could use
> a varying cutoff depending on the f0 of the tone to be synthesized,
> but that could again cause problems with sweeping tones (an LPF
> changing cutoff at audio rate is generally not artifact-free). In both
> cases the result is transient rather than steady textbook waveforms,
> which could be a problem if a distortion stage follows (like a Moog
> ladder with its nonlinearities).
> 
> I tried also with wavetables, and the clean solution is memory
> hungry, starting

[music-dsp] R: Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-05-16 Thread Marco Lo Monaco
Hi guys, here is a repost of a conversation between me and RBJ under his
permission, since he couldn't send to the NG via plain text from his
browser. Please, if some of you guys have some suggestions, it would be
very much appreciated.
Marco

 Original Message 
Subject: [music-dsp] Sweeping tones via alias-free BLIT synthesis and TRI
gain adjust formula
From: "Marco Lo Monaco" 
Date: Tue, May 14, 2013 6:34 am
To: music-dsp@music.columbia.edu
--
>> I am here asking what is the best practice to deal with frequency
modulation via BLIT generation.

hi Marco,

i've fiddled with BLIT a while ago. as i recall i generated a string of
sinc() functions instead of a string of impulses and i integrated them
along with a little DC bias to get a saw. for square it was two little
BLITs, alternating in sign, per cycle, and integrated. and triangle was an
integrated square. i found generating the BLITs to be difficult, i
eventually just used wavetables for the BLITs at different ranges. and then
i thought why not just use wavetables to generate the waveforms directly.
i know with the sawtooth, the bias going into the integrator had to change
as the pitch changes. i dunno what to do about leftover "charge" left in the
integrator other than maybe make the integrator a little bit leaky. so that
is my only suggestion, if you have the rest of the BLIT already
licked.

the net advice i can give you is to consider some other method than BLIT
(like wavetable) and if you're using BLIT with a digital integrator, you
might have to make the integrator a little "leaky" so that a DC component
inside can "leak" out.
 
bestest,
 
r b-j
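For anyone wanting to try the approach r b-j describes, here is a minimal sketch of my own (not his code): a closed-form sincM bandlimited impulse train, DC-removed and fed through a slightly leaky integrator to get a saw. The leak coefficient and the output scaling are assumptions for illustration.

```python
import math

def blit(n, period, m):
    # Closed-form bandlimited impulse train ("sincM", Stilson & Smith):
    # periodic sinc-like pulses, flat spectrum up to (m-1)/2 harmonics.
    x = math.pi * n / period
    s = math.sin(x)
    if abs(s) < 1e-12:
        return 1.0                 # limit at the pulse centers
    return math.sin(m * x) / (m * s)

def blit_saw(num, f0, fs=44100.0, leak=0.999):
    period = fs / f0
    m = 2 * int(period / 2) - 1    # odd; keeps harmonics below Nyquist
    out, acc = [], 0.0
    for n in range(num):
        pulse = (m / period) * blit(n, period, m)
        # subtract the train's DC (one unit of area per period), then
        # integrate with a small leak so residual DC bleeds away
        acc = leak * acc + (pulse - 1.0 / period)
        out.append(2.0 * acc)      # rough scaling toward +/-1
    return out

saw = blit_saw(4410, 441.0)        # ~44 cycles of a 441 Hz saw
```

Once the leak's transient dies out, the driven response settles into a periodic saw; without the leak, any startup offset would sit in the integrator forever.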

 Original Message 
Subject: R: [music-dsp] Sweeping tones via alias-free BLIT synthesis and TRI
gain adjust formula
From: "Marco Lo Monaco" 
Date: Wed, May 15, 2013 4:46 am
To: r...@audioimagination.com
--

Hi Robert,
I tried with leaky: the thing is that it seems you need to compensate for
amplitudes if you are using a fixed-cutoff leaky integrator (if you filter
a 12 kHz BLIT, its amplitude with a 5 Hz LPF will be much lower than a
100 Hz BLIT's, and compensating the amplitude could generate roundoff noise
at high frequencies). As a hack one could use a varying cutoff depending on
the f0 of the tone to be synthesized, but that could again cause problems
with sweeping tones (an LPF changing cutoff at audio rate is generally not
artifact-free). In both cases the result is transient rather than steady
textbook waveforms, which could be a problem if a distortion stage follows
(like a Moog ladder with its nonlinearities).
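Marco's amplitude disparity can be checked numerically. A quick sketch (the 5 Hz cutoff and the pole mapping are my assumptions for illustration) of a one-pole leaky integrator's gain at 100 Hz versus 12 kHz:

```python
import math, cmath

def leaky_gain(f, fs=44100.0, cutoff=5.0):
    # One-pole leaky integrator y[n] = lam*y[n-1] + x[n], with the pole
    # radius set from the cutoff via the impulse-invariant mapping.
    lam = math.exp(-2.0 * math.pi * cutoff / fs)
    w = 2.0 * math.pi * f / fs
    return abs(1.0 / (1.0 - lam * cmath.exp(-1j * w)))

g_low, g_high = leaky_gain(100.0), leaky_gain(12000.0)
ratio = g_low / g_high   # how much louder the 100 Hz partial comes out
```

With these numbers the ratio comes out on the order of 100x, so gain-compensating a 12 kHz tone back up to the 100 Hz level multiplies quantization noise by roughly the same factor, which is the roundoff concern above.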

I tried also with wavetables, and the clean solution is memory hungry:
starting from MIDI note 0 (8 Hz) you need thousands of harmonics up to
Nyquist, and with PWL interpolation with at least 2001 points you can
easily reach 300 MB for SQR/TRI/SAW!!! Probably a tradeoff, accepting a
bit of aliasing, is the only solution. The open problem there is the click
(also happening with sincM) that you get when you simply add/cut a
harmonic in a sweeping context. Maybe the SWS BLIT method is the only way
to avoid this, and I must investigate.

I also tried HardSync à la Eli Brandt and his method of generating
alias-free waveforms, but I get too much aliasing with his minBLEP
implementation, and with a 32-zero-crossing impulse it seems you can't
treat waveforms whose period is shorter than 32 samples (because the OLA
method would add up, creating successive DC steps).
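The 32-sample limit can be made concrete with a little arithmetic. This is a back-of-envelope sketch of mine, assuming one discontinuity (hence one new correction tail) per period, as in a plain saw:

```python
import math

def overlapping_bleps(period_samples, blep_len=32):
    # Number of minBLEP correction tails simultaneously active when a
    # new discontinuity (and a new tail) starts every period.
    return math.ceil(blep_len / period_samples)

counts = {p: overlapping_bleps(p) for p in (64, 32, 16, 8)}
# periods >= 32 samples: one tail at a time; below that, the tails stack
```

At a 32-sample period (~1378 Hz at 44.1k) exactly one tail is live; at an 8-sample period four tails overlap in the OLA sum, which is where the stacked DC steps come from.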

I thought it was much simpler to build an alias-free synth, honestly!!!

Thank you for your time

Marco

 Original Message 
From: r...@audioimagination.com [mailto:r...@audioimagination.com] 
Date: Wednesday, 15 May 2013 17:08
To: Marco Lo Monaco
Subject: Re: [music-dsp] Sweeping tones via alias-free BLIT synthesis and
TRI gain adjust formula
--
 
i don't quite see the memory issues of wavetable as bad as you do, with a
back-of-envelope calculation.  how many ranges do you need?  maybe 2 per
octave?  there are 10 octaves of MIDI notes.  maybe 4K per wavetable
(unless you do some tricky stuff so that the high-pitch wavetables may
have fewer points), so that's 80K for a single waveform going up and down
the whole MIDI range.  HardSync will have a large collection of waveforms
given the different values of oscillator ratios (the main mod control for
hardsync).  is it hardsync saw or hardsync square?
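r b-j's estimate, written out. The 4096-sample table size and 32-bit float storage are my assumptions for what "4K" and the byte count mean here:

```python
# 2 table ranges per octave, 10 octaves of MIDI notes, 4K samples each
ranges_per_octave = 2
octaves = 10
samples_per_table = 4096
samples_total = ranges_per_octave * octaves * samples_per_table  # "80K"
bytes_total = samples_total * 4   # as 32-bit floats: 320 KiB per waveform
```

So even the full-resolution set is a few hundred kilobytes per waveform, far from Marco's 300 MB figure, because each pitch range only needs enough points for its own harmonic content.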
 
i gotta reacquaint myself with Eli Brandt's hardsync.  i don't know it.
 
L8r,
 
r b-j



[music-dsp] Sweeping tones via alias-free BLIT synthesis and TRI gain adjust formula

2013-05-14 Thread Marco Lo Monaco
Hello guys,
I have been working on some scripts for generating BLIT analog waveforms,
and I get quite good results, even with ideal integration, for
steady-frequency signals.
I only notice a problem in compensating the DC offset of the second
integration in the TRI generation, especially at high frequencies (above
8000 Hz at 44.1k): probably the drifts do not show up for relatively short
tones (10 s is the longest I actually tried). Moreover, the Stilson formula
for compensating the slopes, g(f,d) = 2f/(d(1-d)), is not so clear to me,
because for d != 0.5 two different gain slopes should be used to center the
waveform (according to the positive- or negative-going segments).
Nonetheless I found that for d = 0.5, integrating a perfect SQR in the
continuous-time domain should give a different result, like g(f) = 2f,
because the peak of the TRI is simply the area of the positive swing of the
SQR (where does the ¼ factor come from?!? That formula is indeed working
well).
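One possible reading of the ¼ (a sketch of mine for d = 0.5 only, not taken from Stilson's paper): integrating a zero-mean ±1 square at frequency f over its high half-period gives the triangle's peak-to-peak swing, and the peak is half of that,

```latex
\int_0^{1/(2f)} 1\,dt \;=\; \frac{1}{2f}
\qquad\Rightarrow\qquad
A_{\mathrm{tri,\,pp}} = \frac{1}{2f},
\quad
A_{\mathrm{tri,\,peak}} = \frac{1}{4f}
```

so normalizing the triangle's peak to 1 needs a gain of 4f rather than 2f; the naive 2f normalizes the peak-to-peak swing instead of the peak, which may be where the extra factor hides.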

Back to the sweeping thing: when I try to do a sweep generation by
modulating the phasor in the sincM BL generation, I have noticed that the
integrators generate unstable signals and unpredictable behavior (depending
on the range or the speed of the sweep).

I am asking here what the best practice is to deal with frequency
modulation via BLIT generation.

Thinking of a SQR wave modulated in frequency, I find it obvious that the
drift is not canceled within the period, because generally speaking,
between two zero crossings with positive slope, the waveform itself looks
like a PULSE and has a nonzero DC mean. Subsequent periods, changed in
frequency, will only worsen the integration steps. Of course going to TRI
is even worse, and SAW too (even if I use the theoretical DC compensation
value f0/Fs, changed samplewise with the sweeping f0) doesn't work.
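The nonzero per-period mean under a sweep can be checked numerically. A quick experiment of mine (all numbers illustrative) that naively integrates a linearly swept ±1 square:

```python
def swept_square_drift(f_start=100.0, f_end=2000.0, fs=44100.0, seconds=1.0):
    # Each period spends longer in its earlier (slower) +1 half than in
    # its later (faster) -1 half, so the per-period mean is positive and
    # a plain, non-leaky integrator drifts steadily.
    n = int(fs * seconds)
    phase, acc = 0.0, 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n   # linear sweep
        phase = (phase + f / fs) % 1.0
        acc += 1.0 if phase < 0.5 else -1.0
    return acc / fs   # net DC picked up over the sweep, in seconds

drift = swept_square_drift()   # positive, though every sample is +/-1
```

The drift grows with the sweep range, which matches the observation that an ideal integrator behind a frequency-modulated BLIT square accumulates offset; a leak (or per-period re-centering) is needed to bleed it off.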

Have any of you guys ever tried to get the best of it in a sweeping
context? Any bag of tricks?

Thank you

Marco

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp