>rbj
>>Urs
>Regarding the iterative method, unrolling like you did
>>
>>   y0 = y[n-1]
>>   y1 = g * ( x[n] - tanh( y0 ) ) + s
>>   y2 = g * ( x[n] - tanh( y1 ) ) + s
>>   y3 = g * ( x[n] - tanh( y2 ) ) + s
>>   y[n] = y3
>> is *not* what I described in general.
>
>it *is* precisely equivalent to the example you were describing
>with one more iteration than you were saying was necessary.

No, the iterations you have written out there are fixed-point iterations
(simply applying the function repeatedly and hoping it converges). That's a
very simple special case of the general approach that Urs suggested. It
will work under certain conditions, but even then it isn't very efficient. Urs
was pretty clear about using Newton or bisection for the root finding, not
fixed-point iteration. For Newton, each iteration would look like:

y1 = y0 - f(y0)/f'(y0)

where f(y) = g*(x[n] - tanh(y)) + s - y, and f'(y) = -g*(1 - tanh(y)^2) - 1 is
its derivative with respect to y.
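
To make the contrast concrete, here's a minimal sketch (mine, not anybody's
shipping code) of what a per-sample Newton solve for this particular equation
could look like; the function name, iteration cap and tolerance are invented
purely for illustration:

    #include <math.h>

    /* sketch only: solve y = g*(x - tanh(y)) + s for y via Newton, i.e.
       find the root of f(y) = g*(x - tanh(y)) + s - y.
       MAX_ITERS and TOL are made-up numbers. */
    #define MAX_ITERS 8
    #define TOL 1e-9

    static double solve_newton(double x, double g, double s, double y_guess)
    {
        double y = y_guess;                        /* e.g. y[n-1], or just 0 */
        for (int i = 0; i < MAX_ITERS; i++) {      /* hard cap, real-time safe */
            double t  = tanh(y);
            double f  = g * (x - t) + s - y;       /* residual */
            double fp = -g * (1.0 - t * t) - 1.0;  /* f'(y), since tanh' = 1 - tanh^2 */
            double dy = f / fp;
            y -= dy;                               /* Newton step: y <- y - f/f' */
            if (fabs(dy) < TOL)
                break;                             /* close enough, stop early */
        }
        return y;
    }

Note that fp stays strictly negative for g >= 0, so the division is safe, and
near the solution each Newton step roughly squares the error, which is why it
tends to need far fewer iterations than the fixed-point version.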

>if there is a solid, fixed, and finite maximum number of iterations needed,
>that the iterated process can be rolled out into some "linear" code (code
>with a beginning and an end).

I think the term you want here is "non-iterative" or some such, rather than
"linear." I was getting confused by the usage of "linear," thinking you
were arguing that the end result is a linear function or something like
that.

>computationally more efficient, like table-lookup with a very big table.

Well, that's only more computationally efficient if the computational
resources you care about are processor cycles and not memory. In some
contexts, the priorities are the other way around.
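
For what it's worth, rbj's 4a would be something roughly like the sketch below
(table size, input range and names are all invented here): fill the table
offline using the iterative solver, and then each sample is just an
interpolated read.

    /* sketch only: the net x -> y mapping, solved offline, stored in a table.
       TABLE_SIZE, X_MIN and X_MAX are made-up values. */
    #define TABLE_SIZE 4096
    #define X_MIN (-4.0)
    #define X_MAX ( 4.0)

    static double table[TABLE_SIZE];    /* filled offline with the solved y(x) */

    static double lookup(double x)
    {
        double pos = (x - X_MIN) / (X_MAX - X_MIN) * (TABLE_SIZE - 1);
        if (pos < 0.0)            pos = 0.0;               /* clamp to range */
        if (pos > TABLE_SIZE - 1) pos = TABLE_SIZE - 1;
        int i = (int)pos;
        if (i > TABLE_SIZE - 2)   i = TABLE_SIZE - 2;       /* keep i+1 in bounds */
        double frac = pos - i;
        return table[i] + frac * (table[i + 1] - table[i]); /* linear interp */
    }

That's only a handful of operations per sample, but it costs thousands of
doubles of memory (more if you need it finer), which is exactly the
cycles-versus-memory trade-off.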

>or in a manner that allows some theoretical understanding of the nature
>  of the function to allow one to determine what oversampling ratio is
>  needed to keep aliasing at bay.

That's a good point. The trouble with these implicit functions is that it's
hard to see what you're dealing with directly. Probably we've all at some
point had the experience of trying to model some circuit in PSPICE and
needing to kind of arbitrarily shrink the time step (crank up the resolution) until we
get results that are stable and make sense. And maybe that's fine for doing
off-line circuit simulations as an undergrad student, but for applying this
approach to audio in a systematic way it does seem to leave a big question
mark over key issues such as aliasing/oversampling.

Moreover, this is kind of a neat way of addressing the issue. I.e., many
times the underlying implicit functions may not even admit any analytical
parameterization, which makes them a pain. But if we can identify iterative
approximations that we are convinced are good enough, we can then leverage
those approximations to parameterize the functions and get ahold of them,
via "loop unrolling" as rbj describes. The results are likely to be nasty,
but presumably one could always turn to Mathematica or the like to assist
there.
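
For instance, pin the initial guess at zero (as rbj suggests below) and fix
the iteration count, and the unrolled fixed-point version collapses into an
explicit, memoryless function of x[n] with g and s as parameters, which is
exactly the kind of expression one could hand to a CAS or fit with a
polynomial. A toy sketch (the name and the choice of three iterations are
mine):

    #include <math.h>

    /* toy sketch: three fixed-point iterations unrolled with y0 = 0, so the
       result is an explicit function of x alone (g and s are parameters).
       written out symbolically it is
       y = g*(x - tanh(g*(x - tanh(g*x + s)) + s)) + s    (since tanh(0) = 0) */
    static double unrolled3(double x, double g, double s)
    {
        double y = 0.0;                    /* fixed initial guess */
        y = g * (x - tanh(y)) + s;         /* iteration 1 */
        y = g * (x - tanh(y)) + s;         /* iteration 2 */
        y = g * (x - tanh(y)) + s;         /* iteration 3 */
        return y;                          /* approximation to the implicit y */
    }

How good a fixed three-iteration approximation really is would of course have
to be checked over the whole range of x, g and s that matters.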

But I think the biggest practical issue there is what Andy (I think it
was?) described earlier, that once you start coupling together various
non-linearities in non-trivial circuits, the dimensionality of these
quantities blows up and you end up with a huge mess.

E


On Mon, Jun 23, 2014 at 10:18 AM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
> hi Urs,
>
> On 6/23/14 11:36 AM, Urs Heckmann wrote:
>
>> On 23.06.2014, at 16:37, robert bristow-johnson<r...@audioimagination.com>
>>  wrote:
>>
>>  because it was claimed that a finite (and small) number of iterations
>>> was sufficient.
>>>
>> Well, to be precise, all I claimed was an *average* of 2 iterations for a
>> given purpose, and with given means to optimise (e.g. vector registers). I
>> did so to underline that an implementation for real time use is possible. I
>> had no intention of saying that any finite (and small) number of iterations
>> was sufficient in any arbitrary case and condition - I can only speak about
>> the models that we have implemented and observed.
>>
>
> okay, so the above consequentially brings us back to the original issue:
>
>  On 6/22/14 1:20 PM, Urs Heckmann wrote:
>>
>>> On 22.06.2014, at 19:04, robert bristow-johnson<r...@audioimagination.com>
>>>  wrote:
>>>
>>>> On 6/22/14 7:11 AM, Urs Heckmann wrote:
>>>>
>>>>  2. Get the computer to crunch numbers by iteratively predicting,
>>>>> evaluating and refining values using the actual non-linear equations until
>>>>> a solution is found.
>>>>>
>>>> perhaps in analysis.  i would hate to see such iterative processing in
>>>> sample processing code.  (it's also one reason i stay away from terminology
>>>> like "zero-delay feedback" in a discrete-time system.)
>>>>
>>> We're doing this a lot. It shifts the problem from implementation to
>>> optimisation.
>>>
>>
> now, i know that real-time algorithms that run native have to deal with
> the I/O latency issues of the Mac or PC, and i am not sure (nor am
> concerned) about how you guys deal with that and the operating system.  i
> know Apple deals with it with audio units, and i seem to remember that this
> was a fright with the PC and the Windoze OS.  but there is no physical
> reason it can't be dealt with in either platform, i just dunno the details.
>
> but this i *do* know about *any* hardware realization of a real-time
> processing algorithm:  you must place an outer maximum limit on the
> processing (the worst-case), *even* if the *average* processing time is
> what is salient.  the average processing time becomes the salient measure
> when buffering is used, but buffering introduces delay (if you buffer both
> input and output, the delay is two block lengths).
>
> in a non-real-time application, we might not worry about how many
> iterations of the processing loop are needed to converge acceptably to a
> consistent output value.  you just run the code, wait a few seconds if
> necessary (in the olden days, we might get a cup of coffee or something),
> and get your results.  but in a real-time process, whether it's buffered or
> not, you *must* put a lid on the number of iterations.  so "about the
> models that [you] have implemented and observed" if it's "an
> implementation for real time use", you *must* put a maximum number of
> iterations on the loop, otherwise suffer the risk of a hiccup in your live
> real-time processing that might not sound very friendly.  this is a normal
> and basic issue about doing live, real-time DSP for *any* application, not
> just for audio.
>
> then, i will go back to my original point about hating "to see such
> iterative processing in [live, real-time] processing code."  to make it
> safe, you must impose a finite and known limit on the number of iterations.
>  then, in the worst case, you may as well just always do it for that number of
> iterations.  so your normal case is also the worst case, and your
> processing cycle budget for the algorithm is known and you've made
> allocation for it and no hiccups will occur for that reason.
>
> then, if you are running this iteration a known number of times, you can
> unroll it into linear code *exactly* as i have said.  i cannot fathom the
> problem you have said you're having about this:
>
> On 6/23/14 4:45 AM, Urs Heckmann wrote:
>
>> On 23.06.2014, at 06:37, robert bristow-johnson<r...@audioimagination.com>
>>  wrote:
>>
>>  ...
>
>> Regarding the iterative method, unrolling like you did
>>
>>>    y0 = y[n-1]
>>>    y1 = g * ( x[n] - tanh( y0 ) ) + s
>>>    y2 = g * ( x[n] - tanh( y1 ) ) + s
>>>    y3 = g * ( x[n] - tanh( y2 ) ) + s
>>>    y[n] = y3
>>>
>> is *not* what I described in general.
>>
>
> it *is* precisely equivalent to the example you were describing with one
> more iteration than you were saying was necessary.
>
>> It's a subset that won't ever converge in 3 iterations :^)
>>
>> Thing is, while starting with y[n-1] is a possibility, it's not the only
>> one and in my experience it's hardly ever a good one.
>>
>
> okay, so 3 iterations is not enough for the worst case.  then increase it
> to 4 or 5 or 10.  fine.
>
> so y[n-1] is not a good initial guess.  fine.  then set y0 to zero.  (then
> it is explicit that this is a function *only* of x[n] with g and s as
> parameters of the function, not the argument.)
>
> my points are,
>
> 1. that you *must* for live and real-time operation, get a grip on the
> number of iterations that will work for all possible inputs.
>
> 2. that if there is a solid, fixed, and finite maximum number of
> iterations needed, that the iterated process can be rolled out into some
> "linear" code (code with a beginning and an end).
>
> 3. and this code does *not* use the output y[n] (which is what it is
> computing) as an input.  it has *only* past outputs and the current and
> past inputs as possible arguments.  no zero-delay assumptions on anything
> other than the current input sample, x[n].
>
> 4. finally, since ultimately the process maps an x[n] to a y[n], and in
> this example, it's *only* that (i.e. a memoryless mapping), then why not,
> offline, do your iterative thing to define the net function and implement
> that net function in a manner that is:
>
>   4a) computationally more efficient, like table-lookup with a very big table.
>
>   4b) or in a manner that allows some theoretical understanding of the nature
>       of the function to allow one to determine what oversampling ratio is
>       needed to keep aliasing at bay.  for me, normally that's a finite-order
>       polynomial, but maybe you'll figger something else out.  maybe you'll
>       implement it as that finite-order polynomial or maybe not (like you'll
>       use table lookup or even the iterative loop with a maximum on the number
>       of iterations).  but at least you'll have an idea what you need to do
>       to sufficiently upsample.
>
> this is getting us back to the original central issue i've been having in
> this thread.  the "zero-delay feedback" is an ancillary issue and Andy's
> "trapezoid rule integration" to implement filters is another ancillary
> issue.
>
>
> --
>
> r b-j                  r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
