On 6/22/14 11:24 PM, Andrew Simper wrote:
so whether it's a function of a single variable or a function of two
variables with your previous output in recursion, why not just explicitly
define that function and evaluate it?  if it's about tube curves being the
nonlinearity inside, fine, use your initial tube curve data in a MATLAB (or
C or whatever) analysis program and crank out a dense curve for y[n] =
func(x[n]).  then fit a finite order polynomial to that to get some idea
what your upsampling ratio has gotta be, if you wanna stay clean.  if you're
not worried about staying perfectly clean, then continue to use the same
func(..) derived from the tube curves (so it's not the finite order
polynomial that would approximate it), but you still use the upsampling
ratio derived as if it *were* that finite-order polynomial.
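
to be concrete, the offline step is something like this little python sketch.  the array names and the tanh() curve are just placeholders standing in for your real tabulated tube data, and the error tolerance is pulled out of thin air:

    import numpy as np

    # placeholder for measured tube-curve data: x_data -> y_data
    x_data = np.linspace(-1.0, 1.0, 4096)
    y_data = np.tanh(2.0 * x_data)              # stand-in for the real curve

    # fit finite-order polynomials of increasing order until the fit is tight
    for order in range(3, 25, 2):
        coeffs = np.polyfit(x_data, y_data, order)
        max_err = np.max(np.abs(np.polyval(coeffs, x_data) - y_data))
        if max_err < 1e-4:
            break

    # an order-N polynomial pushes content at fs/2 up to N*fs/2, so
    # oversampling by about N keeps everything below the new Nyquist,
    # and about (N+1)/2 keeps the aliases out of the original baseband
    print("poly order:", order, "  max error:", max_err)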


   Hence, as a result we have found y as a function of itself.

this iterative recursion does not create a new paradigm of mathematics.  you
can *still* evaluate the function the old-fashioned way, and you have an
idea on how curvy or nasty it is.
Yes, you are right, in the simple one pole example that Urs posted

Andy, please note what Ethan Duni just said about "bit of confusion here".

the iterative mapping thing Urs posted is not a "one pole" anything. sometimes semantics are important. we already have a use for the terms "poles" and "zeros" and this thing from Urs is not about poles. ultimately, it's a non-linear and (if 2 or 3 iterations are enough for convergence) a memoryless function. no poles. no frequency response. the recursion on y[n-1] is just supposed to supply an initial guess for y[n]. it should converge to nearly the same y[n] with any other starting value of the same scale.
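
just to illustrate, here's a toy in python.  the implicit equation and the g and s numbers are made up (picked so the plain iteration actually contracts), it is *not* Urs' actual filter:

    import numpy as np

    # toy implicit equation y = tanh(g*(x - s*y)); with g*s < 1 the plain
    # fixed-point iteration is a contraction, so it must converge
    def net_func(x, g=1.0, s=0.5, y_guess=0.0, iters=50):
        y = y_guess                  # y[n-1] in the recursion, but anything works
        for _ in range(iters):
            y = np.tanh(g * (x - s * y))
        return y

    # same x, very different starting guesses -> (nearly) the same y[n]
    print(net_func(0.3, y_guess=0.0))
    print(net_func(0.3, y_guess=-0.9))
    print(net_func(0.3, y_guess=0.7))

no memory survives the iteration, which is the sense in which the converged thing is just a static waveshaper.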

you
have a function of two variables that you can explicitly evaluate
using your favourite root finding mechanism, and then use an
approximation to avoid evaluating this at run time. This 2D
approximation is pretty efficient and will be enough to solve this
very basic case. But each non-linearity that is added increases the
space by at least one dimension, so your function gets big very
quickly and you have to start using a non-linear mapping into the
space to keep things under control.

i haven't been able to decode what you just wrote.

  It can be more efficient to only
use these approximations locally and then use a root finding method to
find the global solution.

<sigh> it's a function. given his parameters, g and s, then x[n] goes in, iterate the thing 50 times, and an unambiguous y[n] comes out. doesn't matter what the initial guess is (start with 0, what the hell). i am saying that *this* net function is just as deserving a candidate for modeling as is the original tanh() or whatever. just run an offline program using MATLAB or python or C or the language of your delight. get the points of that function defined with a dense look-up-table. then consider ways of modeling *that* directly. maybe leave it as a table lookup. whatever. but at least you can see what you're dealing with and use that net function to help you decide how much you need to upsample.
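
a sketch of that offline tabulation in python (again with a made-up implicit equation standing in for the real thing, and one fixed g, s setting):

    import numpy as np

    # the same toy implicit equation as above, iterated to convergence
    def net_func(x, g=1.0, s=0.5, iters=50):
        y = 0.0                              # initial guess doesn't matter
        for _ in range(iters):
            y = np.tanh(g * (x - s * y))
        return y

    # offline: tabulate the converged "net" function on a dense grid
    x_grid = np.linspace(-4.0, 4.0, 8192)
    y_grid = np.array([net_func(x) for x in x_grid])

    # run time: the whole iterated mess collapses to one interpolated lookup
    def lut(x):
        return np.interp(x, x_grid, y_grid)

    # y_grid is now just a static curve you can plot, fit a polynomial to,
    # and use to decide how much oversampling the thing really needs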


--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



