You will get the best possible results using the regression fit routines if
you adjust your input data arrays so that they contain numbers that are
"small".  These routines suffer from numeric errors internally when
processing large numbers, like timestamps.

Before passing data to the regression function, I subtract each input array's
minimum value from all of that array's elements.  You should do this with both
the X and Y input arrays (unless you know the array values will always be
close to zero), and you must do it if one of the arrays contains timestamps.
Then, after the fit is calculated, you add the offsets back into the output
arrays appropriately.  I have recommended to NI tech support several times
that NI should enhance the regression functions to do this automatically.
They have refused to take my advice.
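
Since the VIs themselves are graphical, here is the same arithmetic sketched
in Python/NumPy instead (the array values and variable names are made up
purely for illustration):

    import numpy as np

    # Example: x holds Unix timestamps (large numbers), y holds readings.
    x = np.array([1.7000000e9, 1.7000006e9, 1.7000012e9, 1.7000018e9])
    y = np.array([10.0, 12.1, 13.9, 16.2])

    # Shift both arrays so their smallest value is zero before fitting.
    x0, y0 = x.min(), y.min()
    slope, intercept_shifted = np.polyfit(x - x0, y - y0, 1)

    # Undo the shift afterwards: the slope is unchanged, only the
    # intercept has to be corrected for the offsets.
    intercept = y0 + intercept_shifted - slope * x0
    print(slope, intercept)

The same bookkeeping applies to the LabVIEW regression VIs: fit the shifted
data, then fold the offsets back into the coefficients or the fitted curve.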

=====================================================
When making a polynomial fit on a data set with n points, it should always be
possible to find an (n-1)-degree polynomial that gives a mean squared error
equal to 0 (well, rounding errors make this an 'almost' 0).
When using the SVD algorithm in the General Polynomial Fit.vi this isn't
always the case.
Is this due to the properties of the SVD algorithm?
E.g. the Givens algorithm seems more stable. My math is a little rusty, so
any explanation would be appreciated.
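
For reference, here is the behaviour I mean, sketched in NumPy rather than
LabVIEW (toy data of my own, not output from the VI). It also shows how the
scaling issue from the post above interacts with it:

    import numpy as np

    # With n points, a degree n-1 polynomial should interpolate the data,
    # so the mean squared error of the fit is zero up to rounding.
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 1.0, 6))   # 6 well-scaled points
    y = rng.uniform(-1.0, 1.0, 6)

    coeffs = np.polyfit(x, y, 5)             # degree n-1 = 5
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(mse)                                # essentially zero

    # Shift x to large values (e.g. timestamps): the Vandermonde matrix
    # becomes badly conditioned (polyfit may warn about rank deficiency)
    # and the residual can be far from zero.
    coeffs_big = np.polyfit(x + 1e9, y, 5)
    mse_big = np.mean((np.polyval(coeffs_big, x + 1e9) - y) ** 2)
    print(mse_big)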


