on.
Cheers,
Karol
--
written by Karol Langner
Wed Nov 21 10:25:22 CET 2007
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
> the timeseries package in the scipy SVN? We (Matt Knox
> and I) tried to address some of these issues for environmental and
> financial time series.
> http://www.scipy.org/SciPyPackages/TimeSeries
You might also want to look at ta-lib: http://ta-lib.org/
--
written by Karol Langner
I opened a ticket for this (#602). Hopefully someone will confirm that adding
that Py_DECREF call fixes the leak and someone with write access patches it
in svn.
- Karol
--
written by Karol Langner
Sun Oct 28 23:29:18 EDT 2007
... which
calls ufunc_update_use_defaults as of r3040. A call to Py_DECREF(errobj) is
missing there after calling PyUFunc_GetPyValues.
So using the following patch on the current svn revision of ufuncobject.c
should fix this leak:
3206a3207
> Py_DECREF(errobj);
Cheers,
Karol
--
written by Karol Langner
b = a.__str__()
...
which causes the resident size of the process to grow by about 1MB/s as
mentioned earlier. Interestingly, using non-float dtypes does not cause the
loop to leak.
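For reference, a loop along these lines can be used to watch the resident size
on Unix; the array shape and iteration count are arbitrary placeholders, and
with the leak fixed the reported size should hold steady:

```python
import resource

import numpy

a = numpy.ones((100, 100), dtype=float)  # a float dtype is what leaked
for i in range(1000):
    b = a.__str__()
    if i % 250 == 0:
        # Peak resident size so far: kilobytes on Linux, bytes on macOS.
        print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

assert isinstance(b, str)
```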
Karol
--
written by Karol Langner
Sun Oct 28 00:47:31 EDT 2007
depends on the previous iteration, like a cumulative sum,
> etc.
>
> And as for the speed, I'm basically trying to determine what is the most
> efficient way to do it, outside of writing it in C. And I don't really want
> to create intermediate lists, ideally things would
_sub).accumulate(x)
Why can't you simply use list comprehensions? Too slow?
For example:
def expmave(x, k):
    return [x[0]] + [x[i-1] + k*(x[i] - x[i-1]) for i in range(1, len(x))]
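That comprehension can only look back at previous *inputs*; if the recurrence
really depends on the previous *output* y[i-1] (the usual recursive form of an
exponential moving average), a comprehension cannot express it and a plain
loop is the simplest fallback. A sketch of that variant:

```python
def expmave_loop(x, k):
    # Recursive form: y[i] = y[i-1] + k * (x[i] - y[i-1]),
    # seeded with y[0] = x[0], so each step uses the previous output.
    result = [x[0]]
    for value in x[1:]:
        result.append(result[-1] + k * (value - result[-1]))
    return result
```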
Karol
--
written by Karol Langner
Fri Jan 5 17:58:27 CET 2007