On 22.06.2014, at 20:24, robert bristow-johnson <r...@audioimagination.com> 
wrote:

> On 6/22/14 1:20 PM, Urs Heckmann wrote:
>> On 22.06.2014, at 19:04, robert bristow-johnson<r...@audioimagination.com>  
>> wrote:
>> 
>>>> 2. Get the computer to crunch numbers by iteratively predicting, 
>>>> evaluating and refining values using the actual non-linear equations until 
>>>> a solution is found.
>>> perhaps in analysis.  i would hate to see such iterative processing in 
>>> sample processing code.  (it's also one reason i stay away from terminology 
>>> like "zero-delay feedback" in a discrete-time system.)
>> We're doing this a lot. It shifts the problem from implementation to 
>> optimisation. With proper analysis of the system behaviour, for any kind of 
>> circuit we've done (ladder, SVF, Sallen-Key) one can choose initial 
>> estimates that converge after an average of just over 2 iterations. 
>> Therefore it's not even all that expensive, and oversampling makes the 
>> initial guesses easier.
> 
> so, if you can guarantee that you're limited to 2 iterations (or 3 or 4, but 
> fix it), doesn't this mean you can "roll out" the looped iterations into a 
> "linear" process?  then, at the bottom line, you still have y[n] as a 
> function of y[n-1], y[n-2], ... and x[n], x[n-1], x[n-2]...  isn't that the 
> case?  you *don't* have y[n] as a function of y[n] (because you didn't define 
> y[n] yet).  no?

No, if we take a typical equation like

y = g * ( x - tanh( y ) ) + s

We form it into

Y_result = g * ( x - tanh( Y_estimate ) ) + s

Since x, g and s are known, we simply run an iterative root-finding algorithm 
(Newton's method, the bisection method, etc.) over Y_estimate until Y_estimate 
and Y_result become nearly equal. During that process g, x and s do not change. 
Hence, as a result, we have found y as a function of itself.

An alternative notation would be

error = g * ( x - tanh( y ) ) + s - y

where the root finding algorithm optimises y to minimize the error, like this

error = g * ( x - tanh( y ) ) + s - y;   // from the initial guess for y

while( abs( error ) > 0.0001 )
{
    y = findABetterGuessUsingMethodOfChoice( y, error );
    error = g * ( x - tanh( y ) ) + s - y;
}

This way we don't have to find an explicit mathematical solution; we let the 
computer crunch it. Therefore the implementation is very simple (even 
trivial...), and it all boils down to finding a good method to come up with 
a Y_estimate that's as close as possible to Y_result.

Cheers,

- Urs
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
