On 6/21/14 7:21 AM, Urs Heckmann wrote:
On 20.06.2014, at 17:37, robert bristow-johnson<r...@audioimagination.com>  
wrote:

On 6/20/14 10:57 AM, Andrew Simper wrote:
On 20 June 2014 17:11, Tim Goetze<t...@quitte.de>   wrote:

[Andrew Simper]
On 18 June 2014 21:01, Tim Goetze<t...@quitte.de>   wrote:
I absolutely agree that this looks to be the most promising approach
in terms of realism.  However, the last time I looked into this, the
computational cost seemed a good deal too high for a realtime
implementation sharing a CPU with other tasks.  But perhaps I'll need
to evaluate it again?
The computational cost of processing the filters isn't high at all; just
like with a DF1 you can compute some simplified coefficients and then call
process using those. Since everything is linear you end up with a bunch of
additions and multiplies just like you do in a DF1, but the energy in your
capacitors is preserved when you change coefficients just like it is when
you change the knobs on a circuit.
Yeh's work on the Fender tonestack is just that: symbolic nodal
analysis leading to an equivalent linear digital filter.   I
mistakenly thought you were proposing nodal analysis that also includes
the nonlinear aspects of the circuit, such as the valves and the output
transformer (which, without being too familiar with the method, I
believe leads to a system of equations that's a lot more complicated
to solve).


Nodal analysis can refer to linear or non-linear, so sorry for the
confusion.
well, Kirchhoff's laws apply to either linear or non-linear circuits.  but the methods we know as 
"node-voltage" (what i prefer) or "loop-current" do *not* work with non-linear elements. 
 these circuits (that we apply the node-voltage method to) have dependent or independent voltage or 
current sources and impedances between the nodes.
I don't quite understand. Are you saying that one cannot write down an 
equation for each node once there's a diode or a transistor in the circuit?

I was of the opinion that transfer functions of, say, diodes are well known, 
and even though the resulting equations need more than just adds and multiplies 
and delays, they are still correct equations albeit ones that require more 
complex measures than undergraduate math to solve. I could be wrong though, and in 
this case I'd be happy to learn!

it's possible that this is only a semantic issue. "Nodal analysis" seems a little less specific, but when i google it, all of the primary hits go to the "Node-voltage method", which together with the "Loop-current method" are the two primary ways electrical engineers are taught to analyze circuits containing independent or proportionally-dependent voltage or current sources and LTI impedances. only that.

the underlying physics (Kirchhoff's current law, Kirchhoff's voltage law, and the volt-amp characteristic of each element) *is* applicable to any "lumped element" circuit, one with any combination of linear or nonlinear, memoryless or non-memoryless elements. but the Loop-current (sometimes called "mesh-current" analysis) and Node-voltage methods allow one to write the matrix of linear equations governing the circuit directly from inspection of the circuit. it's an analysis discipline we learn in EE class: write the matrix equation down on a big piece of paper, then go about solving the system of linear equations. but Loop-current and Node-voltage do not work with nonlinear volt-amp characteristics.
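as a tiny illustration of "write the matrix equation down, then solve the system of linear equations" (a sketch with made-up element values, not from anything in this thread), here is node-voltage analysis of a two-node resistive circuit in a few lines of Python:

    import numpy as np

    # made-up example: R1 from node 1 to ground, R2 between nodes 1 and 2,
    # R3 from node 2 to ground, and an independent 1 mA source into node 1.
    R1, R2, R3 = 1e3, 2.2e3, 4.7e3      # ohms
    I_src = 1e-3                        # amps into node 1

    # conductance matrix written down by inspection: diagonal = sum of
    # conductances touching the node, off-diagonal = minus the conductance
    # between the two nodes.
    G = np.array([[1/R1 + 1/R2, -1/R2       ],
                  [-1/R2,        1/R2 + 1/R3]])
    i = np.array([I_src, 0.0])

    v = np.linalg.solve(G, i)           # the node voltages v1, v2
    print(v)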

another semantic to be careful about is "transfer function". we mean something different when it's applied to LTI systems (the "H(z)" or "H(s)") than when applied to a diode. the latter semantic i don't use. i would say "volt-amp characteristic" of the diode or vacuum tube. or if it was a nonlinear "system" with an input and output, i would say "input-output mapping function" and leave "transfer function" to the linear and time-invariant.
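for example, the volt-amp characteristic of a diode is a memoryless map from voltage to current (the Shockley equation; the Is and n*Vt below are just typical illustrative numbers, not from any particular part), not an "H(s)" or "H(z)":

    import numpy as np

    def diode_iv(v, Is=1e-12, n=1.8, Vt=0.02585):
        # Shockley equation: i = Is * (exp(v/(n*Vt)) - 1); no state, no memory
        return Is * (np.exp(v / (n * Vt)) - 1.0)

    print(diode_iv(np.array([0.0, 0.3, 0.6])))   # current depends only on the instantaneous voltage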


  I was trying to point out that the linear analysis done by Yeh
starts with the circuit but then throws it away and instead uses a DF1, and a
DF1 does not store the state of each capacitor individually, so when you
turn the knob you don't get the right time-varying behaviour.
in the steady-state (say, a second after the knob is turned) is there the right 
behavior with the DF1?
But of course - in a linear filter, such as one composed out of RC networks. 
But what about voltage controlled filters?


of course a VCF driven by a constantly changing LFO waveform (or its digital model) is a different thing. i was responding to the case where there is an otherwise-stable filter connected to a knob. sometimes the knob gets adjusted before the song or the set or gig starts and never gets moved again for the evening. for filter applications like that, i am not as worried myself about the "right" time-varying behavior (whatever "right" is).

for the case of modulated filters, we gotta worry about it, and a few different papers have made that case (i remember a good one from Laroche). in that case my worry usually ends when i implement a lattice or normalized ladder filter topology (with cos(w0) = 1 - 2*(sin(w0/2))^2 substituted to deal with the "cosine problem").

the filter work i have seen of Andrew's is, as best as i remember, substituting finite difference equations (perhaps Euler's forward difference? can't remember, Andrew) in for the derivatives we get at the capacitors and inductors of existing analog circuits. for circuits with nonlinear elements *and* elements with memory, i don't see any decent alternative, except i wouldn't model every goddamn part. i would model a few *specific* nodes in the stages of the amp and the parts connected between them, and i might lump some nonlinear things together. i think even if you don't lump anything and model every part, some (but not all) of this lumping comes out in the wash.
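back up to that cos(w0) = 1 - 2*(sin(w0/2))^2 substitution for a second: here is a quick numerical illustration (made-up numbers, nothing from anybody's actual filter code) of why storing 2*(sin(w0/2))^2 instead of cos(w0) helps when w0 is small:

    import numpy as np

    Fs = 48000.0
    w0 = 2*np.pi*20.0/Fs                       # a very low resonance, 20 Hz at 48 kHz

    # the same coefficient stored two ways, rounded to single precision
    c_cos  = np.float32(np.cos(w0))            # cos(w0), crowded right up against 1.0
    c_sin2 = np.float32(2*np.sin(w0/2)**2)     # 2*(sin(w0/2))^2 = 1 - cos(w0), a small number

    # the frequency each rounded coefficient actually represents
    w_from_cos  = np.arccos(np.float64(c_cos))
    w_from_sin2 = 2*np.arcsin(np.sqrt(np.float64(c_sin2)/2))

    print(abs(w_from_cos  - w0)/w0)            # relative detuning from quantizing cos(w0)
    print(abs(w_from_sin2 - w0)/w0)            # orders of magnitude smaller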

perhaps the only decent way to model a CryBaby is to model each circuit part and apply Euler's forward differences to the capacitors (and have a high sample rate, so that "dt" is sufficiently approximated by "delta t").
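to make that concrete, here is a minimal sketch of a per-capacitor forward-Euler update on a made-up one-capacitor circuit (not the CryBaby; the component values and the tanh-shaped nonlinear element are invented for illustration, and a stiff diode-like characteristic would want an even higher rate or an implicit solver):

    import numpy as np

    Fs = 96000.0                 # oversampled so that "delta t" is small
    T  = 1.0 / Fs
    R, C = 10e3, 10e-9           # series resistor into the capacitor node (made-up values)
    Vk, Rn = 0.5, 10e3           # mild tanh-shaped nonlinear conductance to ground

    def i_nl(v):
        # memoryless volt-amp characteristic of the nonlinear element
        return (Vk / Rn) * np.tanh(v / Vk)

    def process(vin):
        v = 0.0                  # the capacitor voltage is the one state of this circuit
        out = np.zeros(len(vin))
        for n, x in enumerate(vin):
            # KCL at the capacitor node:  (x - v)/R  =  C*dv/dt  +  i_nl(v)
            dv = ((x - v) / R - i_nl(v)) / C
            v += T * dv          # Euler's forward difference for the capacitor
            out[n] = v
        return out

    y = process(0.5 * np.sin(2*np.pi*220.0*np.arange(4800)/Fs))

note that the state is literally the capacitor voltage, so if a (modeled) pot value changes mid-stream, the charge stored in the capacitor stays put, which is the property Andrew was pointing at earlier.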


  I am saying
you don't have to throw the circuit away, you can still get an efficient
implementation since in the linear case everything reduces to a bunch of
adds and multiplies.
and delay states.
But of course there has to be a state history. But Andy talks about 
computational efficiency here, so he was referring to actual mathematical 
operations,

i know. this goes back a few years when i first saw Andy's hand-written work emailed to me by a third party (i think it was legit with authorization, and since i've seen this in the public domain, maybe here, i think we can talk about Andy's work without fear of exposing any secrets), but i will not identify the third party, if that's okay.

this is the only "caveat" or issue i thought of at the time, and it's not a big nasty criticism, it's just this: when modeling an op-amp circuit with resistors and capacitors that's not clipping (so it's an "H(s)"), with difference equations replacing the derivatives at each capacitor, *if* the knobs ain't getting twisted, it's LTI and the analysis boils down to an H(z). now Euler's forward difference (or backward differences or predictor-corrector or whatever) does quite well at low frequencies because the "dt" and "delta t" are very close to each other. but up by Nyquist, maybe less so. but that's fine, other methods like the bilinear transform screw up at high frequencies close to Nyquist. but there are different kinds of screwups.

emulating with finite differences may come up with a frequency response that deviates from the analog target in ways that are, how shall i put it, less predictable. on the other hand, the bilinear transform guarantees that every single bump and ripple in the analog frequency response has a corresponding bump or ripple in the digital frequency response, but perhaps at a slightly different frequency. and with pre-warping, for every degree of freedom in the filter specs, you can make sure that one frequency is *exactly* mapped from analog to digital. with other methods, you might have a 2 dB bump and it comes out a little different than 2 dB. or, because you don't have frequency warping, you might not be able to map the resonant frequency spot on like you do with bilinear (or you have to derive a *specific* frequency warping for that circuit, sorta like Hal Chamberlin did with the SVF, which mitigates my concern, but i couldn't tell that Andrew did that in his analysis).
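for example (a sketch with made-up numbers, using a first-order lowpass H(s) = wc/(s + wc) as the analog prototype), pre-warping the cutoff before the bilinear substitution pins the -3 dB point exactly at the intended frequency:

    import numpy as np

    Fs = 48000.0
    fc = 5000.0                                   # the frequency we want mapped exactly
    wc = 2*np.pi*fc
    K  = 2*Fs                                     # the 2/T in  s <-- 2/T * (z-1)/(z+1)

    wc_warped = 2*Fs*np.tan(wc/(2*Fs))            # pre-warp the cutoff

    # bilinear transform of H(s) = wc_warped/(s + wc_warped)
    b0 = wc_warped/(K + wc_warped); b1 = b0
    a1 = (wc_warped - K)/(K + wc_warped)

    w = 2*np.pi*fc/Fs                             # evaluate the digital response at fc
    z = np.exp(1j*w)
    H = (b0 + b1/z)/(1 + a1/z)
    print(abs(H))                                 # 0.7071..., exactly -3 dB at fc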

however, when things are modulated, the sound of the modulated filters really depends on the topology. but i am not sure that the best analog topology is a Sallen-Key circuit (i can't remember what Andy modeled anymore) and i dunno if i would bother modeling some analog circuits just to get a static filter somewhere. but maybe i would model the CryBaby circuit to make sure the emulation sounds the same.

  not loads and stores which may be optimised away if the results can be kept 
in a CPU register within a tight loop.

i'm not worried so much about loads and stores. that's a programming issue and we know that, say, the DF1 has more states than the DF2, and other topologies (like ladders or lattices or Gold-Rader or Hal's SVF or Harris-Brooking or whatever) have computational issues that might make them more costly and their loops less tight.
For non-linear modelling you need additional steps, and depending on the
circuit there are many different methods that can be tried to find the best
fit for the particular requirements
if the sample rate is high enough (and it *should* be pretty fast because of the 
aliasing issue) the "deltaT" used in forward differences or backward 
differences (or predictor-corrector) or whatever should be pretty small.  in my opinion, 
if you have a bunch of memoryless non-linear elements connected in a circuit with linear 
elements (with or without memory), it seems to me that the simple Euler's forward method 
(like we learned in undergraduate school) suffices to model it.
That's where it's a bit tricky. In my naive way of thinking, I see non-linear 
elements, say a diode, or something tanh-ish, as voltage-dependent resistors. 
Therefore the signal in a non-linear filter, naively speaking, influences the 
overall filter frequency. This explains why, for instance, in an analogue 
synthesizer the frequency of the self-oscillating filter sounds like it's 
modulated by the filter input (in actual fact, run an LFO through a 
self-oscillating filter and you get a filter sweep. Alternatively, drive a 
square wave really hard into a lowpass filter and hear the upper harmonics 
attenuate).

Now, if we could agree that, naively speaking, the signal performs audio-rate 
modulation of the cutoff frequency in a non-linear filter, then we would have 
to determine whether or not using simple Euler has any disadvantage over, say, 
trapezoidal integration with the feedback loops solved without delay. That is, 
is there an audible difference in a practical case, even at really high 
sample rates?

i remember we were discussing this a while ago. i hadn't really understood the difference between "trapezoidal integration" and the bilinear transform without any prewarping. consider emulating a capacitor.

   Fs = 1/T

                        t
   v(t)  =  1/C  integral{ i(u) du }
                     -inf

in the s-domain it's

   V(s)  =  1/C *  1/s  * I(s)


so trapezoidal integration at discrete times is:

                              n
   v(nT) =approx   1/C * T * SUM {  1/2 * ( i((k-1)T) + i(kT) ) }
                            k=-inf


         =  v((n-1)T)  +   1/C * T * 1/2 * ( i((n-1)T) + i(nT) )


or as discrete-time sample values

   v[n]  =  v[n-1]  +   T/(2C) * (i[n] + i[n-1])

applying the Z transform

   V(z)  =  z^(-1)*V(z)  +  T/(2C) * (I(z) + z^(-1)*I(z))

solving for V


   V(z)  =  T/(2C) * (1 + z^(-1))/(1 - z^(-1)) * I(z)

looks like we're substituting

   1/s <---  T/2 * (1 + z^(-1))/(1 - z^(-1))

or

   s <---  2/T * (z-1)/(z+1)


how is that any different from the bilinear transform without prewarping? if it's an LTI circuit with an H(s), then what's the difference?

BTW, Euler's backward-difference approximation of the capacitor (the implicit one, with the current taken at the new sample) would be simply

   v(nT)  =  v((n-1)T)  +   1/C * T * i(nT)

and the s to z substitution would be simply

   s <---  1/T * (z-1)/z

(the forward-difference version uses i((n-1)T) on the right instead, and the substitution becomes  s <---  (z-1)/T )
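and here is a quick numerical check of the trapezoidal map against the Euler map just above, each compared to the capacitor's analog 1/(sC) (made-up sample rate and capacitance):

    import numpy as np

    Fs, C = 48000.0, 1e-6
    T = 1.0/Fs
    f = np.array([100.0, 1000.0, 10000.0, 20000.0])   # test frequencies in Hz
    w = 2*np.pi*f
    z = np.exp(1j*w*T)

    H_analog = 1.0/(1j*w*C)                       # the capacitor's 1/(sC)
    H_trap   = (T/(2*C))*(1 + 1/z)/(1 - 1/z)      # 1/s <--  T/2 * (1+z^-1)/(1-z^-1)
    H_euler  = (T/C)*z/(z - 1)                    # 1/s <--  T * z/(z-1), the substitution above

    print(np.abs(H_trap /H_analog))               # both near 1 at low frequencies,
    print(np.abs(H_euler/H_analog))               # drifting apart as we approach Nyquist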

We have done this, and as we express opinions here, my opinion is that using Euler sounds 
like plastic while using "the other method" sounds dope.

"the other method" is BLT. if that's what you're doing, why neglect prewarping each independently-specified frequency parameter?

  We need a tad more than undergraduate math, but we think it's worth it.

what *is* that? trapezoidal integration? bilinear transform? Euler's forward method? dunno about you, but i think all of these things appear, by one name or another, in an undergrad EE education.

  That is, for our line of work, which is building software implemented 
synthesizers and effects. Not sure how this translates to valve amps.

Andrew, i realize that you had been using something like that to emulate linear 
circuits with capacitors and resistors and op-amps.  it does make a difference 
in time-variant situations, but for the steady state (a second or two after the 
knob is twisted), i'm a little dubious of what difference it makes.
I think that's what I tried to say: in a non-linear filter there is no 
steady state.

only if the thing goes chaotic (which is wildly non-linear with a helluva lotta feedback). otherwise, there is no reason a non-chaotic non-linear circuit can't get to a steady state if the input is stationary. "steady state" means the transients have died down to negligible levels. like a second or two after twisting the knob.

  Add a DC component to the input and the filter frequency changes. Therefore 
the filter frequency is not just dependent on coefficients alone.

well, coefficients in a non-linear system do not find their way into the Fourier transform as they would in an LTI.

  It's like someone sits inside that thing twiddling the cutoff knob like 
there's no tomorrow and faster than the eye can see ¯\(°_o)/¯

no, modeling non-linear is *not* the same as modeling time-variant. that's not the way to do it (or conceive the modeling problem). with a time-variant system, the knobs are being twiddled *independently* of the signal (with a linear but time-variant system, there is still an impulse response or a family of impulse responses and there is still a convolution integral, but it's a little different). if the coefficients (of an ostensibly linear system) vary only as a function of the signal, it's time-invariant and (likely) non-linear.
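to make that distinction concrete (a little sketch, nobody's actual code): delaying the input simply delays the output of the signal-dependent system, but not of the one whose coefficient is wiggled independently of the signal:

    import numpy as np

    N = 64
    x = np.random.default_rng(0).standard_normal(N)
    lfo = np.cos(2*np.pi*np.arange(N)/16)          # coefficient driven independently of the signal

    def time_variant(x):                           # linear but time-variant: y[n] = lfo[n]*x[n]
        return lfo[:len(x)]*x

    def signal_dependent(x):                       # "coefficient" depends only on the signal:
        return np.tanh(x)                          # nonlinear but time-invariant

    d = 5                                          # delay the input by d samples
    xd = np.concatenate([np.zeros(d), x])[:N]

    # time-invariance test: a delayed input should just give a delayed output
    print(np.allclose(signal_dependent(xd)[d:], signal_dependent(x)[:N-d]))   # True
    print(np.allclose(time_variant(xd)[d:],     time_variant(x)[:N-d]))       # False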

--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
