Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 2:59 PM,   wrote:
> On Fri, Oct 14, 2011 at 2:29 PM,   wrote:
>> On Fri, Oct 14, 2011 at 2:18 PM, Alan G Isaac  wrote:
>>> On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
>>>> If I remember correctly, signal.lfilter doesn't require stationarity,
>>>> but handling of the starting values is a bit difficult.
>>>
>>>
>>> Hmm.  Yes.
>>> AR(1) is trivial, but how do you handle higher orders?
>>
>> I would have to look for it.
>> You can invert the stationary part of the AR polynomial with the numpy
>> polynomial classes using division. The main thing is to pad with
>> enough zeros corresponding to the truncation that you want. And in the
>> old classes to watch out because the order is reversed high to low
>> instead of low to high or the other way around.
>
> I found it in the examples folder (pure numpy)
> https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/examples/tsa/lagpolynomial.py
>
> >>> ar = LagPolynomial([1, -0.8])
> >>> ma = LagPolynomial([1])
> >>> ma.div(ar)
> Polynomial([ 1. ,  0.8], [-1.,  1.])
> >>> ma.div(ar, maxlag=50)
> Polynomial([  1.00000000e+00,   8.00000000e-01,   6.40000000e-01,
>         5.12000000e-01,   4.09600000e-01,   3.27680000e-01,
>         2.62144000e-01,   2.09715200e-01,   1.67772160e-01,
>         1.34217728e-01,   1.07374182e-01,   8.58993459e-02,
>         6.87194767e-02,   5.49755814e-02,   4.39804651e-02,
>         3.51843721e-02,   2.81474977e-02,   2.25179981e-02,
>         1.80143985e-02,   1.44115188e-02,   1.15292150e-02,
>         9.22337204e-03,   7.37869763e-03,   5.90295810e-03,
>         4.72236648e-03,   3.77789319e-03,   3.02231455e-03,
>         2.41785164e-03,   1.93428131e-03,   1.54742505e-03,
>         1.23794004e-03,   9.90352031e-04,   7.92281625e-04,
>         6.33825300e-04,   5.07060240e-04,   4.05648192e-04,
>         3.24518554e-04,   2.59614843e-04,   2.07691874e-04,
>         1.66153499e-04,   1.32922800e-04,   1.06338240e-04,
>         8.50705917e-05,   6.80564734e-05,   5.44451787e-05,
>         4.35561430e-05,   3.48449144e-05,   2.78759315e-05,
>         2.23007452e-05], [-1.,  1.])
>
> >>> ar = LagPolynomial([1, -0.8, 0.2, 0.1, 0.1])
> >>> ma.div(ar, maxlag=50)
> Polynomial([  1.00000000e+00,   8.00000000e-01,   4.40000000e-01,
>         9.20000000e-02,  -1.94400000e-01,  -2.97920000e-01,
>        -2.52656000e-01,  -1.32300800e-01,  -6.07744000e-03,
>         7.66558080e-02,   1.01035814e-01,   7.93353139e-02,
>         3.62032515e-02,  -4.67362386e-03,  -2.90166622e-02,
>        -3.38324615e-02,  -2.44155995e-02,  -9.39695872e-03,
>         3.65046531e-03,   1.06245701e-02,   1.11508188e-02,
>         7.37039040e-03,   2.23864501e-03,  -1.86070097e-03,
>        -3.78841070e-03,  -3.61949191e-03,  -2.17570579e-03,
>        -4.51755084e-04,   8.14527351e-04,   1.32149267e-03,
>         1.15703475e-03,   6.25052041e-04,   5.50326804e-05,
>        -3.28837006e-04,  -4.52284820e-04,  -3.64068927e-04,
>        -1.73417745e-04,   1.21917720e-05,   1.26072341e-04,
>         1.52168186e-04,   1.12642678e-04,   4.58540937e-05,
>        -1.36693133e-05,  -4.65873557e-05,  -5.03856990e-05,
>        -3.42095661e-05], [-1.,  1.])

I just realized that my LagPolynomial has a filter method

>>> marep = ma.div(ar, maxlag=50)
>>> u = np.random.randn(5000)
>>> x = marep.filter(u)[1000:]
>>> import scikits.statsmodels.api as sm
>>> sm.tsa.AR(x).fit(4, trend='nc').params
array([ 0.80183437, -0.22098967, -0.08484519, -0.12590277])
>>> #true (different convention)
>>> -np.array([1, -0.8, 0.2, 0.1, 0.1])[1:]
array([ 0.8, -0.2, -0.1, -0.1])

not bad, if the sample is large enough. I don't remember what the numpy
polynomial classes use under the hood (maybe convolve).

>
> no guarantees, I don't remember how much I tested these things, but I
> spent a lot of time doing it 3 or 4 different ways.

Josef


>
> Josef
>
>>
>> I switched to using mostly lfilter, but I guess the statsmodels
>> sandbox (and the mailing list) still has my "playing with ARMA
>> polynomials" code.
>> (I think it might be pretty useful for teaching. I wish I had had the
>> functions to calculate some examples when I learned this.)
>>
>> Josef
>>>
>>> Thanks,
>>> Alan
>>>
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 2:29 PM,   wrote:
> On Fri, Oct 14, 2011 at 2:18 PM, Alan G Isaac  wrote:
>> On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
>>> If I remember correctly, signal.lfilter doesn't require stationarity,
>>> but handling of the starting values is a bit difficult.
>>
>>
>> Hmm.  Yes.
>> AR(1) is trivial, but how do you handle higher orders?
>
> I would have to look for it.
> You can invert the stationary part of the AR polynomial with the numpy
> polynomial classes using division. The main thing is to pad with
> enough zeros corresponding to the truncation that you want. And in the
> old classes to watch out because the order is reversed high to low
> instead of low to high or the other way around.

I found it in the examples folder (pure numpy)
https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/examples/tsa/lagpolynomial.py

>>> ar = LagPolynomial([1, -0.8])
>>> ma = LagPolynomial([1])
>>> ma.div(ar)
Polynomial([ 1. ,  0.8], [-1.,  1.])
>>> ma.div(ar, maxlag=50)
Polynomial([  1.00000000e+00,   8.00000000e-01,   6.40000000e-01,
 5.12000000e-01,   4.09600000e-01,   3.27680000e-01,
 2.62144000e-01,   2.09715200e-01,   1.67772160e-01,
 1.34217728e-01,   1.07374182e-01,   8.58993459e-02,
 6.87194767e-02,   5.49755814e-02,   4.39804651e-02,
 3.51843721e-02,   2.81474977e-02,   2.25179981e-02,
 1.80143985e-02,   1.44115188e-02,   1.15292150e-02,
 9.22337204e-03,   7.37869763e-03,   5.90295810e-03,
 4.72236648e-03,   3.77789319e-03,   3.02231455e-03,
 2.41785164e-03,   1.93428131e-03,   1.54742505e-03,
 1.23794004e-03,   9.90352031e-04,   7.92281625e-04,
 6.33825300e-04,   5.07060240e-04,   4.05648192e-04,
 3.24518554e-04,   2.59614843e-04,   2.07691874e-04,
 1.66153499e-04,   1.32922800e-04,   1.06338240e-04,
 8.50705917e-05,   6.80564734e-05,   5.44451787e-05,
 4.35561430e-05,   3.48449144e-05,   2.78759315e-05,
 2.23007452e-05], [-1.,  1.])

>>> ar = LagPolynomial([1, -0.8, 0.2, 0.1, 0.1])
>>> ma.div(ar, maxlag=50)
Polynomial([  1.00000000e+00,   8.00000000e-01,   4.40000000e-01,
 9.20000000e-02,  -1.94400000e-01,  -2.97920000e-01,
-2.52656000e-01,  -1.32300800e-01,  -6.07744000e-03,
 7.66558080e-02,   1.01035814e-01,   7.93353139e-02,
 3.62032515e-02,  -4.67362386e-03,  -2.90166622e-02,
-3.38324615e-02,  -2.44155995e-02,  -9.39695872e-03,
 3.65046531e-03,   1.06245701e-02,   1.11508188e-02,
 7.37039040e-03,   2.23864501e-03,  -1.86070097e-03,
-3.78841070e-03,  -3.61949191e-03,  -2.17570579e-03,
-4.51755084e-04,   8.14527351e-04,   1.32149267e-03,
 1.15703475e-03,   6.25052041e-04,   5.50326804e-05,
-3.28837006e-04,  -4.52284820e-04,  -3.64068927e-04,
-1.73417745e-04,   1.21917720e-05,   1.26072341e-04,
 1.52168186e-04,   1.12642678e-04,   4.58540937e-05,
-1.36693133e-05,  -4.65873557e-05,  -5.03856990e-05,
-3.42095661e-05], [-1.,  1.])

no guarantees, I don't remember how much I tested these things, but I
spent a lot of time doing it 3 or 4 different ways.

Josef

>
> I switched to using mostly lfilter, but I guess the statsmodels
> sandbox (and the mailing list) still has my "playing with ARMA
> polynomials" code.
> (I think it might be pretty useful for teaching. I wish I had had the
> functions to calculate some examples when I learned this.)
>
> Josef
>>
>> Thanks,
>> Alan
>>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 2:39 PM, Skipper Seabold  wrote:
> On Fri, Oct 14, 2011 at 2:18 PM, Alan G Isaac  wrote:
>>
>> On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
>> > If I remember correctly, signal.lfilter doesn't require stationarity,
>> > but handling of the starting values is a bit difficult.
>>
>>
>> Hmm.  Yes.
>> AR(1) is trivial, but how do you handle higher orders?
>>
>
> Not sure if this is what you're after, but here I go the other way,
> signal -> noise, with known initial values of an ARMA(p,q) process.
> Here I want to set it up such that the first p error terms are zero;
> I had to solve for the zi that make this so:
>
> https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/tsa/arima_model.py#L295
>
> This is me talking to myself about this.
>
> http://thread.gmane.org/gmane.comp.python.scientific.user/27162/focus=27162

with two more simultaneous threads on the statsmodels mailing list. :)

Josef

>
> Skipper


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Skipper Seabold
On Fri, Oct 14, 2011 at 2:18 PM, Alan G Isaac  wrote:
>
> On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
> > If I remember correctly, signal.lfilter doesn't require stationarity,
> > but handling of the starting values is a bit difficult.
>
>
> Hmm.  Yes.
> AR(1) is trivial, but how do you handle higher orders?
>

Not sure if this is what you're after, but here I go the other way,
signal -> noise, with known initial values of an ARMA(p,q) process.
Here I want to set it up such that the first p error terms are zero;
I had to solve for the zi that make this so:

https://github.com/statsmodels/statsmodels/blob/master/scikits/statsmodels/tsa/arima_model.py#L295

This is me talking to myself about this.

http://thread.gmane.org/gmane.comp.python.scientific.user/27162/focus=27162
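Going the usual direction (noise -> signal), a higher-order AR with chosen
starting values can be sketched with lfilter and lfiltic (an illustration of
the idea, not the arima_model code; the coefficients are arbitrary):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
ar = [1.0, -0.8, 0.2, 0.1, 0.1]   # a(L) y_t = e_t, coefficients low to high lag

# zero initial conditions: y_{-1} = ... = y_{-4} = 0
y = signal.lfilter([1.0], ar, e)

# known initial conditions: lfiltic takes past outputs y[-1], y[-2], ...
y_past = [0.5, 0.2, -0.1, 0.3]
zi = signal.lfiltic([1.0], ar, y_past)
y2, _ = signal.lfilter([1.0], ar, e, zi=zi)
```

With zi from lfiltic, the recursion starts as if the process had been at
y_past before t=0, which is the noise -> signal counterpart of solving for
the zi that zero out the first p errors.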

Skipper


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 2:18 PM, Alan G Isaac  wrote:
> On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
>> If I remember correctly, signal.lfilter doesn't require stationarity,
>> but handling of the starting values is a bit difficult.
>
>
> Hmm.  Yes.
> AR(1) is trivial, but how do you handle higher orders?

I would have to look for it.
You can invert the stationary part of the AR polynomial with the numpy
polynomial classes using division. The main thing is to pad with
enough zeros corresponding to the truncation that you want. And in the
old classes to watch out because the order is reversed high to low
instead of low to high or the other way around.
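For reference, the inversion can also be done without the polynomial classes,
using the standard power-series recursion for the MA(inf) weights (a sketch,
not the statsmodels code; `ar_to_ma` is a made-up name):

```python
import numpy as np

def ar_to_ma(ar, maxlag):
    """Truncated MA(inf) representation psi of 1/ar(L).

    ar holds the lag-polynomial coefficients from low to high lag,
    e.g. [1, -0.8] for (1 - 0.8 L).
    """
    ar = np.asarray(ar, dtype=float)
    psi = np.zeros(maxlag)
    psi[0] = 1.0 / ar[0]
    for j in range(1, maxlag):
        # recursion from ar(L) * psi(L) = 1: sum_k ar[k] psi[j-k] = 0, j >= 1
        k = np.arange(1, min(j, len(ar) - 1) + 1)
        psi[j] = -np.dot(ar[k], psi[j - k]) / ar[0]
    return psi

ar_to_ma([1, -0.8], 5)               # 1, 0.8, 0.64, 0.512, 0.4096
ar_to_ma([1, -0.8, 0.2, 0.1, 0.1], 4)  # 1, 0.8, 0.44, 0.092
```

The results match the first terms of the ma.div(ar, maxlag=50) output below,
and the truncation length plays the role of the zero padding.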

I switched to using mostly lfilter, but I guess the statsmodels
sandbox (and the mailing list) still has my "playing with ARMA
polynomials" code.
(I think it might be pretty useful for teaching. I wish I had had the
functions to calculate some examples when I learned this.)

Josef
>
> Thanks,
> Alan
>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Alan G Isaac
On 10/14/2011 1:42 PM, josef.p...@gmail.com wrote:
> If I remember correctly, signal.lfilter doesn't require stationarity,
> but handling of the starting values is a bit difficult.


Hmm.  Yes.
AR(1) is trivial, but how do you handle higher orders?

Thanks,
Alan



Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 1:26 PM, Alan G Isaac  wrote:
>>> Assuming stationarity ...
>
> On 10/14/2011 1:22 PM, josef.p...@gmail.com wrote:
>> maybe ?
>
> I just meant that the MA approximation is
> not reliable for a non-stationary AR.
> E.g., http://www.jstor.org/stable/2348631

Section 5 there, on simulating an ARIMA: simulate the stationary ARMA, then cumsum it.

I guess this only applies to simple integrated processes, where we
can split it up into ar(L)(1-L) y_t with ar(L) a stationary
polynomial.
(Besides seasonal integration, I haven't seen or used any other
non-stationary AR processes.)
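The simulate-then-cumsum recipe for d=1, as a sketch (arbitrary ARMA(1,1)
coefficients for illustration):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
e = rng.standard_normal(1000)

# stationary ARMA(1,1) part: (1 - 0.8 L) w_t = (1 + 0.4 L) e_t
w = signal.lfilter([1.0, 0.4], [1.0, -0.8], e)

# integrate once to get the ARIMA(1,1,1) level series (starting point 0)
y = np.cumsum(w)
```

Differencing y recovers w, so the split ar(L)(1-L) y_t holds by construction.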

If I remember correctly, signal.lfilter doesn't require stationarity,
but handling of the starting values is a bit difficult.

Josef

>
> Cheers,
> Alan
>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Alan G Isaac
>> Assuming stationarity ...

On 10/14/2011 1:22 PM, josef.p...@gmail.com wrote:
> maybe ?

I just meant that the MA approximation is
not reliable for a non-stationary AR.
E.g., http://www.jstor.org/stable/2348631

Cheers,
Alan



Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 12:49 PM, Alan G Isaac  wrote:
> On 10/14/2011 12:21 PM, josef.p...@gmail.com wrote:
>> One other way to simulate the AR is to get the (truncated)
>> MA-representation, and then convolve can be used
>
>
> Assuming stationarity ...

maybe ?
If it's integrated, then you need a starting point, and cumsum might
still work (like in a random walk).
No idea about seasonal integration; it would require too much thinking.

(not tested)

Josef

>
> Alan
>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Alan G Isaac
On 10/14/2011 12:21 PM, josef.p...@gmail.com wrote:
> One other way to simulate the AR is to get the (truncated)
> MA-representation, and then convolve can be used


Assuming stationarity ...

Alan



Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 11:56 AM, Fabrice Silva  wrote:
> Le vendredi 14 octobre 2011 à 10:49 -0400, josef.p...@gmail.com a
> écrit :
>> On Fri, Oct 14, 2011 at 10:24 AM, Alan G Isaac  wrote:
>> > As a simple example, if I have y0 and a white noise series e,
>> > what is the best way to produce a series y such that
>> > y[t] = 0.9*y[t-1] + e[t] for t=1,2,...?
>> >
>> > 1. How can I best simulate an autoregressive process using NumPy?
>> >
>> > 2. With SciPy, it looks like I could do this as
>> > e[0] = y0
>> > signal.lfilter((1,),(1,-0.9),e)
>> > Am I overlooking similar (or substitute) functionality in NumPy?
>>
>> I don't think so. At least I didn't find anything in numpy for this.
>> An MA process would be a convolution, but for simulating AR I only
>> found signal.lfilter. (unless numpy has gained extra features that I
>> don't have in 1.5)
>>
>> Except, I think it's possible to do it with fft, if you want to
>> fft-inverse-convolve (?)
>> But simulating an ARMA with fft was much slower than lfilter in my
>> short experimentation with it.
>
> About speed comparison between lfilter, convolve, etc...
> http://www.scipy.org/Cookbook/ApplyFIRFilter

One other way to simulate the AR is to get the (truncated)
MA-representation, and then convolve can be used, as in
ApplyFIRFilter.

numpy polynomials can be used to invert the AR-polynomial (with a bit
of juggling).
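A sketch of the convolve route for a stationary AR(1), checked against the
exact lfilter recursion (the truncation length 100 is an arbitrary choice;
for 0.9 the dropped tail is on the order of 0.9**100):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
e = rng.standard_normal(5000)

# truncated MA representation of 1/(1 - 0.9 L): psi_k = 0.9**k
marep = 0.9 ** np.arange(100)

y_conv = np.convolve(e, marep)[: len(e)]      # 'full' mode, truncated back
y_lf = signal.lfilter([1.0], [1.0, -0.9], e)  # exact recursion

# the two agree up to the truncation error of the MA representation
assert np.allclose(y_conv, y_lf, atol=1e-3)
```

For long series the convolution can also be done with fftconvolve, which is
the speed trade-off the ApplyFIRFilter cookbook page compares.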

Josef

>
> --
> Fabrice Silva
>


Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Fabrice Silva
Le vendredi 14 octobre 2011 à 10:49 -0400, josef.p...@gmail.com a
écrit :
> On Fri, Oct 14, 2011 at 10:24 AM, Alan G Isaac  wrote:
> > As a simple example, if I have y0 and a white noise series e,
> > what is the best way to produce a series y such that
> > y[t] = 0.9*y[t-1] + e[t] for t=1,2,...?
> >
> > 1. How can I best simulate an autoregressive process using NumPy?
> >
> > 2. With SciPy, it looks like I could do this as
> > e[0] = y0
> > signal.lfilter((1,),(1,-0.9),e)
> > Am I overlooking similar (or substitute) functionality in NumPy?
> 
> I don't think so. At least I didn't find anything in numpy for this.
> An MA process would be a convolution, but for simulating AR I only
> found signal.lfilter. (unless numpy has gained extra features that I
> don't have in 1.5)
> 
> Except, I think it's possible to do it with fft, if you want to
> fft-inverse-convolve (?)
> But simulating an ARMA with fft was much slower than lfilter in my
> short experimentation with it.

About speed comparison between lfilter, convolve, etc...
http://www.scipy.org/Cookbook/ApplyFIRFilter

-- 
Fabrice Silva



Re: [Numpy-discussion] simulate AR

2011-10-14 Thread josef . pktd
On Fri, Oct 14, 2011 at 10:24 AM, Alan G Isaac  wrote:
> As a simple example, if I have y0 and a white noise series e,
> what is the best way to produce a series y such that y[t] = 0.9*y[t-1] + e[t]
> for t=1,2,...?
>
> 1. How can I best simulate an autoregressive process using NumPy?
>
> 2. With SciPy, it looks like I could do this as
> e[0] = y0
> signal.lfilter((1,),(1,-0.9),e)
> Am I overlooking similar (or substitute) functionality in NumPy?

I don't think so. At least I didn't find anything in numpy for this.
An MA process would be a convolution, but for simulating AR I only
found signal.lfilter. (unless numpy has gained extra features that I
don't have in 1.5)

Except, I think it's possible to do it with fft, if you want to
fft-inverse-convolve (?)
But simulating an ARMA with fft was much slower than lfilter in my
short experimentation with it.
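Spelled out, the lfilter route from point 2 of the question (with the same
trick of folding y0 into e[0]):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
y0 = 1.5
e = rng.standard_normal(100)
e[0] = y0                      # fold the start value into the first innovation
y = signal.lfilter([1.0], [1.0, -0.9], e)
# y[0] == y0, and y[t] = 0.9*y[t-1] + e[t] for t = 1, 2, ...
```

The denominator (1, -0.9) is the AR lag polynomial, so lfilter runs exactly
the recursion in the question.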

Josef

>
> Thanks,
> Alan Isaac
>