Hello

What I would like to do is analyse how accurate the predictions were by
calculating the sigmas of the new data, and comparing those realised sigmas
against the predicted sigmas.

I have read over the vignette, and numerous blog posts, but couldn't find a
solution (I'm sure it's out there somewhere, but I couldn't find it).

It's easy to pull the sigma values from rugarch's ugarchforecast function:
spec <- ugarchspec()  # default spec
fit <- ugarchfit(spec, data_original, out.sample = 10)
fore <- ugarchforecast(fit, n.ahead = 10, n.roll = 9)
sigma(fore)

Now I have new data (let's call it data_new), which contains 10 new
realised values.

So, what would be the appropriate way of calculating the sigmas at the 10
new time intervals? Obviously, the standard formula for standard deviation
is:
sigma = sqrt(sum((x-xbar)^2) / (n-1)).

But how many time intervals (n) should be used?
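To make the question concrete, here is one way the rolling calculation could be sketched, where the window length n = 20 and the object names (data_original, data_new) are my own illustrative assumptions, not anything rugarch prescribes:

```
## Hypothetical realised sigma: a rolling sample standard deviation over the
## last n observations ending at each of the 10 new time points.
## n = 20 is an arbitrary choice for illustration only.
n <- 20
rolling_sigma <- sapply(seq_along(data_new), function(i) {
  window <- tail(c(data_original, data_new[seq_len(i)]), n)
  sqrt(sum((window - mean(window))^2) / (length(window) - 1))  # same as sd(window)
})
```

But the choice of n here is exactly what I am unsure about.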

I would like a way to calculate the sigmas exactly the same as the rugarch
methods do.
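For what it's worth, my (possibly wrong) understanding is that the sigmas rugarch reports are conditional volatilities from the fitted GARCH recursion, not sample standard deviations. If that is right, then for the default sGARCH(1,1) variance equation one could in principle filter the new data by hand; a sketch, assuming a constant-mean spec and with last_eps and last_sig2 standing in for the final residual and conditional variance taken from the fit:

```
## Sketch only -- my reading of the sGARCH(1,1) variance equation:
##   sigma2[t] = omega + alpha1 * eps[t-1]^2 + beta1 * sigma2[t-1]
cf   <- coef(fit)                 # omega, alpha1, beta1, mu, ...
eps  <- data_new - cf["mu"]       # residuals, assuming a constant-mean model
sig2 <- numeric(length(eps))
## last_eps / last_sig2: final residual and conditional variance from the
## fitted sample (placeholders -- extract them from the fit object)
sig2[1] <- cf["omega"] + cf["alpha1"] * last_eps^2 + cf["beta1"] * last_sig2
for (t in 2:length(eps)) {
  sig2[t] <- cf["omega"] + cf["alpha1"] * eps[t - 1]^2 + cf["beta1"] * sig2[t - 1]
}
sqrt(sig2)                        # conditional sigmas at the 10 new points
```

Is this, or something like it, what the rugarch methods actually compute?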

MN Hatton


_______________________________________________
R-SIG-Finance@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
-- Subscriber-posting only. If you want to post, subscribe first.
-- Also note that this is not the r-help list where general R questions should go.
