Hi R users!

I've come across KPSS tests in time series analysis, and I have a
question that troubles me, since I don't have much experience with time series
or the mathematics underlying them.

x <- c(253, 252, 275, 275, 272, 254, 272, 252, 249, 300, 244,
       258, 255, 285, 301, 278, 279, 304, 275, 276, 313, 292, 302,
       322, 281, 298, 305, 295, 286, 327, 286, 270, 289, 293, 287,
       267, 267, 288, 304, 273, 264, 254, 263, 265, 278)
x <- ts(x, frequency = 12)
library(urca)
library(uroot)
library(tseries)
 
Now, running ur.kpss(x, type = "mu", use.lag = 3), I cannot reject the null
hypothesis of level stationarity.
Running kpss.test(x) (truncation lag 1 by default for a series of this length),
the p-value falls below 0.05, so the null of stationarity is rejected. The same
happens with uroot's KPSS.test at lag 1.

So I have noticed that, in each of these tests, the more lags I use, the
harder it becomes to reject the null. The books I have seen always cite the
use of a Bartlett window, but the choice of the truncation lag is left to the
analyst. My question: what is the "proper" number of lags, so that I don't
make any false statements about the data?
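In case it helps, here is how I understand the default truncation lags the two
packages pick for this series (a sketch, assuming the rule-of-thumb formulas
given in the tseries and urca documentation; the exact rules may differ between
package versions):

```r
# Rule-of-thumb bandwidths for the Bartlett window in the KPSS test.
# tseries::kpss.test (lshort = TRUE/FALSE) and urca::ur.kpss
# (lags = "short"/"long") use different rules, which would explain why
# their default lags differ on the same series.
n <- 45  # length of the series above

# tseries::kpss.test (per its documentation):
lag_tseries_short <- trunc(3 * sqrt(n) / 13)   # lshort = TRUE  -> 1
lag_tseries_long  <- trunc(10 * sqrt(n) / 14)  # lshort = FALSE -> 4

# urca::ur.kpss (Schwert-style rules, per its documentation):
lag_urca_short <- trunc(4 * (n / 100)^0.25)    # lags = "short" -> 3
lag_urca_long  <- trunc(12 * (n / 100)^0.25)   # lags = "long"  -> 9

c(tseries_short = lag_tseries_short, tseries_long = lag_tseries_long,
  urca_short = lag_urca_short, urca_long = lag_urca_long)
```

If those formulas are right, the "lag = 1" I see in kpss.test and the
"lag = 3" I used in ur.kpss are just the two packages' own short-bandwidth
defaults for n = 45.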

Thank you and have a great day!
 

       

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
