On 6/22/14 7:11 AM, Urs Heckmann wrote:
Dear Robert,
On 22.06.2014, at 04:19, robert bristow-johnson <r...@audioimagination.com> wrote:
it's possible that this is only a semantic issue.
Thanks for clearing this up. It's indeed a semantic issue (use of the term "nodal
analysis"), which then leads to further misunderstandings.
What we do is, for each node we write down equations that may indeed contain
non-linear terms. We end up with equations that look like this:
Vout1 = g * ( tanh( Vin - feedback * Vout2 ) - tanh( Vout1 ) ) + iceq1
Vout2 = g * ( tanh( Vout1 ) - tanh( Vout2 ) ) + iceq2
Even in this simplified case it's clearly impossible to solve these equations linearly,
whether in a matrix or by brooding over one big sausage of an equation for each
unknown value.
What guys like us have done for a decade or two is what Hal Chamberlin has
written in his famous book and what Smith/Stilson described in their equally
famous paper: We have simply added artificial delay elements. We did so in
order to do two things:
1. Drag the computation of the non-linear term out of the equation, so that what
remains is linear in the unknowns
2. Make the computation of each equation independent of each other, so they
need not be solved simultaneously
Sometimes only one of those two methods is necessary, sometimes they are the
same, and often adding a delay element is identical to Euler's method (i.e.
Chamberlin's SVF implementation). In the case of the above equations, one would end up
with the following:
Vout1 = g * ( tanh( Vin - feedback * Vout2z1 ) - tanh( Vout1z1 ) ) + iceq1
Vout2 = g * ( tanh( Vout1 ) - tanh( Vout2z1 ) ) + iceq2
I've marked delayed elements with a z1 suffix. By doing so the two equations
can be solved in sequence and no complex math or iterative computation is
required. However, it doesn't really sound right. Those delay elements create a
plethora of problems.
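To make the difference concrete, here is a minimal sketch of the delayed version in Python. All names are made up for illustration; g, feedback and the iceq values are assumed to be precomputed for the current sample. Because Vout1z1 and Vout2z1 are held over from the previous sample, each equation depends only on already-known values and can be evaluated in sequence:

```python
import math

def step_with_delays(Vin, state, g, feedback, iceq1, iceq2):
    # state holds the outputs delayed by one sample (the artificially added z^-1)
    Vout1z1, Vout2z1 = state
    # each line uses only known quantities, so no simultaneous solve is needed
    Vout1 = g * (math.tanh(Vin - feedback * Vout2z1) - math.tanh(Vout1z1)) + iceq1
    Vout2 = g * (math.tanh(Vout1) - math.tanh(Vout2z1)) + iceq2
    return Vout1, Vout2
```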
When I talked about "the other method", I was talking about solving the above
set of equations simultaneously and without any added delay element. Before we run into
another misunderstanding: as Andy pointed out, the iceq values that represent the
current-source equivalents of the capacitors *are* delay elements. They are necessary for
integration of the time step, ....
i don't think i agree with the following claim, Urs,
... but no matter what method of integration we use, we always end up with the
same set of equations to solve for the actual step.
different methods of performing numerical integration result in
different equations that define y[n] from y[n-1] and earlier values. am
i missing something? the statement appears, at face value, to be mistaken.
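A toy example of this point: for a one-pole RC lowpass, y' = (x - y)/(R*C), forward Euler and trapezoidal integration produce visibly different update equations for y[n]. A sketch, with a = T/(R*C) and illustrative names:

```python
def euler_step(y1, x1, a):
    # forward Euler: y[n] = y[n-1] + a*(x[n-1] - y[n-1])
    return y1 + a * (x1 - y1)

def trapezoidal_step(y1, x, x1, a):
    # trapezoidal rule, solved for y[n]:
    # y[n] = (y[n-1]*(1 - a/2) + (a/2)*(x[n] + x[n-1])) / (1 + a/2)
    return (y1 * (1 - a / 2) + (a / 2) * (x + x1)) / (1 + a / 2)
```

Both recurrences integrate the same differential equation, yet the coefficients relating y[n] to y[n-1] and the inputs differ, so the resulting y[n] values differ.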
Two methods appear to be common for solving these equations simultaneously in
realtime applications, without artificially added delay elements:
1. Treat the non-linear elements as piecewise linear and correct the result in
iterative steps.
but, because of the breakpoints, you can't use LTI analysis. only if
you're sure it never crosses a breakpoint can you "linearize" the analysis.
2. Get the computer to crunch numbers by iteratively predicting, evaluating and
refining values using the actual non-linear equations until a solution is found.
perhaps in analysis. i would hate to see such iterative processing in
sample processing code. (it's also one reason i stay away from
terminology like "zero-delay feedback" in a discrete-time system.)
Both methods work well and make use of root-finding algorithms such as Newton's
method. Both methods buy numerical accuracy at the expense of computational
effort compared with the methods using extra delay elements. Nevertheless, computers
have recently become fast enough to make these methods viable for musical
applications. Hence our enthusiasm!
The gist is, being able to eliminate those delays was such a nice step forward
in making good sounding synthesizers, someone had to coin a term for it. That
term, as insufficient as it is, became zero delay feedback filters. It means
what I said before, no unit delays were artificially added on top of the delays
required for integration of the time step. Nowadays we're certainly using more
differentiated terms as well, but that's the one that's most widely used (and
abused) among our crowd. Please accept my apologies for popularising this term
all too light-heartedly - I guess it was never clear enough that we meant to
apply it *only* to filters based on nodal analysis as we define it and as I
described above.
--
r b-j r...@audioimagination.com
"Imagination is more important than knowledge."
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp