rmserror is the estimator of the variance of the (theoretically white, if
the AR fit is good) noise.
It is computed recursively, as you state, with:
NoiseVariance[i] = NoiseVariance[i-1] * (1 - K[i]^2)
where i is the number of the current iteration and K[i] is the reflection
coefficient.
For i = 0 (before the iteration from i = 1 to P begins, P being the final
AR order desired):
NoiseVariance[0] = Autocorrelation_data[0];

This result comes from the Levinson-Durbin algorithm, which is used for
both the Burg and Yule-Walker methods.
The Levinson-Durbin recursion yields both the reflection coefficients and
the noise variance.
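As a sketch (function and variable names are mine, not from any particular reference), the Levinson-Durbin recursion on an autocorrelation sequence, including the noise-variance update above, looks like this in Python:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion on an autocorrelation sequence r[0..order].

    Returns the AR coefficients, the reflection coefficients K[i], and the
    final noise-variance estimate, updated at each stage as
    NoiseVariance[i] = NoiseVariance[i-1] * (1 - K[i]**2).
    """
    a = [0.0] * (order + 1)          # a[1..i] are the AR coefficients at stage i
    ks = []                          # reflection coefficients
    var = r[0]                       # NoiseVariance[0] = Autocorrelation_data[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / var                # reflection coefficient for this stage
        ks.append(k)
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):        # order-update of the AR coefficients
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        var *= (1.0 - k * k)         # the recursive noise-variance update
    return a[1:], ks, var
```

For example, on the AR(1)-like autocorrelation r = [1.0, 0.5, 0.25] with order 2, the recursion gives K = [0.5, 0.0] and a final variance of 1.0 * (1 - 0.5**2) = 0.75.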

From this noise variance you can compute an AR order-selection criterion
(FPE, etc.) for each order during the recursion.
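For instance, Akaike's FPE can be evaluated at each stage directly from the recursively updated variance. A sketch, using one common form of the formula (variants differ by +/-1 in the correction term depending on whether the mean was estimated):

```python
def fpe(noise_var, n, p):
    # Akaike's Final Prediction Error for an AR(p) model fitted on n points.
    # One common form of the criterion; conventions vary slightly.
    return noise_var * (n + p + 1) / (n - p - 1)
```

You would evaluate this at every order i of the recursion and keep the order that minimizes it.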

Your formula does not look right, because the reflection coefficients K
are not multiplied by anything!?



Taking Numerical Recipes as an example (
http://www.nrbook.com/a/bookfpdf/f13-6.pdf ):

! Compute Autocorrelation[0] from the data and store it in xms
p=0.
do 11 j=1,n
p=p+data(j)**2
enddo 11
xms=p/n

! During the recursion, the update is done with
xms=xms*(1.-d(k)**2)
! where d(k) is the last reflection coefficient at the k-th iteration


Hope it helps.

Cheers,
Mich.


----- Original Message -----
From: Paul Ho
To: [email protected]
Sent: Wednesday, November 15, 2006 11:55 PM
Subject: RE: [amibroker] Re: Polynomial Trendlines


Yes Mich, I noticed that as well. In addition,
memcof currently seems to calculate the rmserror as sum(data^2) - sum(1 -
reflection coeff^2).
Is this valid? If not, what do you use to calculate it recursively?
Cheers
Paul.




From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf 
Of Tom Tom
Sent: Thursday, 16 November 2006 7:56 AM
To: [email protected]
Subject: Re: [amibroker] Re: Polynomial Trendlines


Hi !

Thanks Paul !
It is about the same for MEM, yes. I found a way to compute it during the
recursive process (as you say).
I have compared MEM from Numerical Recipes with the formula I derived from
Burg's original recursive formulation.
In NR, the recurrent loop that computes Num and Den (used to calculate the
reflection coefficient k) runs from 1 to M-i (M is the number of data
points, i incrementing from 1 to ORDER_AR). So for high AR orders, the
most recent data are not taken into consideration !? The same holds for
updating the forward and backward errors of the lattice filter: they only
consider indices 1 to M-i.
Burg's original formula loops from i to M-1, so the last data points are
always included, even at high orders.
-> memcof in Numerical Recipes does not respect the original algorithm.

I don't know why NR does this in its MEM algorithm !? I cannot find any
source stating that [1:M-i] (memcof NR) is better than [i:M-1] (original
Burg).

Mich.
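To make the loop-range question above concrete, here is a minimal Burg recursion in Python using the original [i:M-1] window described above. This is a sketch of the textbook algorithm under my own naming, not a transcription of NR's memcof:

```python
def burg(x, order):
    """Minimal Burg recursion. At stage m, the sums run over i = m..M-1
    (the original window), so the most recent samples always contribute."""
    n = len(x)
    f = list(x)                       # forward prediction errors
    b = list(x)                       # backward prediction errors
    var = sum(v * v for v in x) / n   # order-0 noise variance
    ks = []                           # reflection coefficients
    for m in range(1, order + 1):
        num = 2.0 * sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = num / den                 # reflection coefficient for stage m
        ks.append(k)
        for i in range(n - 1, m - 1, -1):   # lattice update, in place,
            fi = f[i]                       # descending so old b[i-1] is read
            f[i] = fi - k * b[i - 1]
            b[i] = b[i - 1] - k * fi
        var *= (1.0 - k * k)          # same noise-variance update as before
    return ks, var
```

In NR's memcof the analogous sums run over only the first M-i terms of the shifted error arrays, which is exactly the difference being discussed here.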
