On 6/22/14 6:01 PM, Urs Heckmann wrote:
On 22.06.2014, at 20:24, robert bristow-johnson <r...@audioimagination.com> wrote:

On 6/22/14 1:20 PM, Urs Heckmann wrote:
On 22.06.2014, at 19:04, robert bristow-johnson <r...@audioimagination.com> wrote:

2. Get the computer to crunch numbers by iteratively predicting, evaluating and 
refining values using the actual non-linear equations until a solution is found.
perhaps in analysis.  i would hate to see such iterative processing in sample processing 
code.  (it's also one reason i stay away from terminology like "zero-delay 
feedback" in a discrete-time system.)
We're doing this a lot. It shifts the problem from implementation to
optimisation. With proper analysis of the system behaviour, for any kind of
circuit we've done (ladder, SVF, Sallen-Key) one can choose initial estimates
that converge to a result after an average of just over 2 iterations.
Therefore it's not even all that expensive, and oversampling makes the initial
guesses easier.
so, if you can guarantee that you're limited to 2 iterations (or 3 or 4, but fix it), doesn't this 
mean you can "roll out" the looped iterations into a "linear" process?  then, 
at the bottom line, you still have y[n] as a function of y[n-1], y[n-2], ... and x[n], x[n-1], 
x[n-2]...  isn't that the case?  you *don't* have y[n] as a function of y[n] (because you didn't 
define y[n] yet).  no?
No, if we take a typical equation like

y = g * ( x - tanh( y ) ) + s

We form it into

Y_result = g * ( x - tanh( Y_estimate ) ) + s

Since x, g and s are known, we simply use an iterative root-finding algorithm
(Newton's method, bisection method, etc.) over Y_estimate until Y_estimate and
Y_result become nearly equal. During that process g, x and s do not change.

i understand this. so let's use either indices or different labels for the same quantities. what are you gonna start with for Y_estimate? it's not explicit, but maybe y[n-1] is a good initial guess (if i am wrong, then you have to explicitly define the first Y_estimate). let's say, for shits and grins, that 3 iterations is enough.


    y0 = y[n-1]
    y1 = g * ( x[n] - tanh( y0 ) ) + s
    y2 = g * ( x[n] - tanh( y1 ) ) + s
    y3 = g * ( x[n] - tanh( y2 ) ) + s
    y[n] = y3

since we have already fixed the maximum number of iterations, y2 and y3 are sufficiently equal. (at least, if we were to iterate again and get a y4, it wouldn't be different enough from y3 to bother with.)

now, by the mathimagical use of substitution, we have

   y[n] = g*(x[n]-tanh( g*(x[n]-tanh( g*(x[n]-tanh( y[n-1] ))+s ))+s ))+s

which is an explicit function of two variables x[n] and y[n-1] with some parameters derived from g and s.
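for concreteness, here's that rolled-out evaluation as plain C (an illustrative sketch with my own names, not code from either post; the fixed 3 iterations only make sense where the plain resubstitution actually converges, roughly where |g*(1 - tanh^2(y))| < 1 near the solution):

    #include <math.h>

    /* rolled-out 3-iteration evaluation of y = g*(x - tanh(y)) + s,
       using the previous output y[n-1] as the initial guess */
    double process_sample(double x, double y_prev, double g, double s)
    {
        double y = y_prev;              /* y0 = y[n-1] */
        y = g * (x - tanh(y)) + s;      /* y1 */
        y = g * (x - tanh(y)) + s;      /* y2 */
        y = g * (x - tanh(y)) + s;      /* y3 = y[n] */
        return y;
    }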

but, by the nature of your expectation of convergence, this is really just a function of x[n]. y[n-1] was just an initial guess for y[n]. so you have a net function of a single variable, x[n]. if this is not the case and you have to stay with a function of two variables, then you cannot assert convergence (i.e. y[n-1] is a crappy guess, or you haven't iterated enough, or, perhaps, no amount of iteration will converge this thing to an unambiguous y[n], independent of how good (within reason) the initial guess is).

so whether it's a function of a single variable or a function of two variables with your previous output in recursion, why not just explicitly define that function and evaluate it? if it's about tube curves being the nonlinearity inside, fine, use your initial tube curve data in a MATLAB (or C or whatever) analysis program and crank out a dense curve for y[n] = func(x[n]). then fit a finite-order polynomial to that to get some idea what your upsampling ratio has gotta be, if you wanna stay clean. if you're not worried about staying perfectly clean, then continue to use the same func(..) derived from the tube curves (so it's not the finite-order polynomial that would approximate it), but you still use the upsampling ratio derived as if it *were* that finite-order polynomial.
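as a sketch of that precompute-then-evaluate idea (all names here are hypothetical; note the table is only valid for one fixed pair of g and s, which is the practical catch if they vary per sample):

    #include <math.h>

    #define TBL_SIZE 4096
    #define X_MIN  (-4.0)
    #define X_MAX    4.0

    static double func_tbl[TBL_SIZE + 1];

    /* hypothetical solver: offline we can afford to let the
       resubstitution run until it fully settles */
    static double solve_implicit(double x, double g, double s)
    {
        double y = 0.0;
        for (int k = 0; k < 1000; k++) {
            double y_next = g * (x - tanh(y)) + s;
            if (fabs(y_next - y) < 1e-12) { y = y_next; break; }
            y = y_next;
        }
        return y;
    }

    /* offline: crank out the dense curve y = func(x) */
    void build_table(double g, double s)
    {
        for (int i = 0; i <= TBL_SIZE; i++) {
            double x = X_MIN + (X_MAX - X_MIN) * i / TBL_SIZE;
            func_tbl[i] = solve_implicit(x, g, s);
        }
    }

    /* real time: evaluate func(x) by linear interpolation, no iteration */
    double func_eval(double x)
    {
        double u = (x - X_MIN) / (X_MAX - X_MIN) * TBL_SIZE;
        if (u < 0.0) u = 0.0;
        if (u > TBL_SIZE - 1) u = TBL_SIZE - 1;    /* clamp into the table */
        int    i = (int)u;
        double f = u - i;
        return func_tbl[i] + f * (func_tbl[i + 1] - func_tbl[i]);
    }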

  Hence, as a result we have found y as a function of itself.

this iterative recursion does not create a new paradigm of mathematics. you can *still* evaluate the function the old-fashioned way, and you have an idea of how curvy or nasty it is.


An alternative notation would be

error = g * ( x - tanh( y ) ) + s - y

where the root-finding algorithm optimises y to minimize the error, like this

while( abs( error ) > 0.0001 )
{
     y = findABetterGuessUsingMethodOfChoice( y, error );
     error = g * ( x - tanh( y ) ) + s - y;
}

This way we don't have to find an actual mathematical solution; we let the
computer crunch it.
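for what it's worth, one concrete "method of choice" behind findABetterGuessUsingMethodOfChoice would be a Newton step (an illustrative sketch, not necessarily what any shipping code does; the Newton step needs g, x and s as well as y, so the signature differs slightly from the pseudocode above, and the f computed below is exactly the error from that loop):

    #include <math.h>

    /* Newton step for f(y) = g*(x - tanh(y)) + s - y,
       with f'(y) = -g*(1 - tanh(y)^2) - 1 */
    double findABetterGuess(double y, double x, double g, double s)
    {
        double t  = tanh(y);
        double f  = g * (x - t) + s - y;       /* the "error" */
        double fp = -g * (1.0 - t * t) - 1.0;  /* always < 0 for g >= 0,
                                                  so the step is well defined */
        return y - f / fp;
    }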

and all's i'm saying is to crunch it in advance. then represent your data in such a way that minimizes the crunching during the real-time sample processing.

sorta the same philosophy as wavetable synthesis compared to adding up the harmonics in real time as you would with additive synthesis. just do the nastiest number crunching in advance, if for no other reason than for you to be able to get a grip on and see the net resulting function (and see how nasty it may be).

  Therefore the implementation is very simple (even trivial...), and all it
boils down to is finding a good method to come up with a Y_estimate that's as
close as possible to Y_result.

Urs, i know and understand this. if the number of iterations is known, you can roll it out into a non-iterative "linear" (by that i mean linear code: one step after another, no looping) procedure and define the net function that maps x to y. then look for ways to evaluate that function directly (power series, table-lookup, whatever).
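e.g., once the net function has been fit offline with a finite-order polynomial, the per-sample work reduces to a Horner evaluation (illustrative sketch; the coefficients c[] would come from your offline fit, not from anything here):

    /* evaluate y = c[0] + c[1]*x + ... + c[order]*x^order by Horner's rule */
    double func_poly(double x, const double c[], int order)
    {
        double y = c[order];
        for (int k = order - 1; k >= 0; k--)
            y = y * x + c[k];
        return y;
    }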


--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
