On Wed, Mar 21, 2012 at 12:28 AM, Sandro Tosi wrote:
> Hello,
> I've reported http://projects.scipy.org/numpy/ticket/2085 and Ralf
> asked me to bring it up here: is anyone able to reproduce the
> problem described in that ticket?
>
> The debian bug tracking the problem is:
> http://bugs.debia
On Mon, Mar 26, 2012 at 1:27 AM, Charles R Harris wrote:
>
>
> On Sun, Mar 25, 2012 at 3:14 PM, Ralf Gommers wrote:
>
>>
>>
>> On Sat, Mar 24, 2012 at 10:13 PM, Charles R Harris <charlesr.har...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> There are several problems with numpy master that need to
Hi,
On Tue, Mar 27, 2012 at 12:12 PM, Nicole Stoffels <nicole.stoff...@forwind.de> wrote:
> Dear all,
>
> I get the following memory error while running my program:
>
> Traceback (most recent call last):
> File "/home/nistl/Software/Netzbetreiber/FLOW/src/MemoryError_Debug.py",
> line 9,
On Sun, Mar 25, 2012 at 6:30 PM, Pierre Haessig wrote:
> Hi,
>
> A quick question I've had in mind for some time but haven't found an answer to:
> Is there a significant difference between "numpy.percentile" and
> "scipy.stats.scoreatpercentile" ?
>
> Of course the signatures are somewhat different,
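For what it's worth, on a plain 1-D array the two give the same result with their default interpolation (a quick sketch; the example values here are mine, not from the thread):

```python
import numpy as np
from scipy import stats

a = np.array([15.0, 20.0, 35.0, 40.0, 50.0])

# Both interpolate linearly between order statistics by default:
print(np.percentile(a, 40))            # 29.0
print(stats.scoreatpercentile(a, 40))  # 29.0
```

The signatures do differ: `np.percentile` takes `axis` and (in later releases) array-valued `q`, while `scoreatpercentile` adds a `limit` and `interpolation_method` argument.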
Sure, that would be easy enough to implement. I don't really have
a preference; is there a reason you would prefer that API?
No, just exploring possibilities. Another option would be a different name,
searchargsorted or some such. I actually think that is a better
alternative than the pair,
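For context, the idea under discussion (searching through an argsort rather than materializing a sorted copy) is roughly what the `sorter` keyword of `numpy.searchsorted` provides in later NumPy releases; a minimal sketch:

```python
import numpy as np

a = np.array([30, 10, 20])   # unsorted data
order = np.argsort(a)        # indices that would sort a: [1, 2, 0]

# Search against the argsorted order without building a sorted copy:
idx = np.searchsorted(a, 20, sorter=order)
print(idx)            # 1 -> insertion point within the sorted order
print(a[order][idx])  # 20
```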
On Mar 27, 2012 at 06:04, Nicole Stoffels wrote:
> Hi Pierre,
>
> thanks for the fast answer!
>
> I actually have timeseries of 24 hours for 459375 gridpoints in Europe.
> The timeseries of every grid point is stored in a column. That's why in my
> real program I already transposed the data,
> Both work on my computer, while your example indeed leads to a MemoryError
> (because shape 459375*459375 would be a decently big matrix...)
Nicely understated :)
For 32-bit values "decently big" => 786GB
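A quick back-of-the-envelope check of that figure:

```python
# A 459375 x 459375 correlation matrix of 4-byte (32-bit) floats:
n = 459375
print(n * n * 4 / 2**30)  # ~786 GiB
```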
NumPy-Discussion mailing list
Hi Pierre,
thanks for the fast answer!
I actually have timeseries of 24 hours for 459375 gridpoints in Europe.
The timeseries of every grid point is stored in a column. That's why in
my real program I already transposed the data, so that the correlation
is made column by column. What I finall
Hi Nicole,
On 27/03/2012 11:12, Nicole Stoffels wrote:
> if __name__ == '__main__':
>     data_records = random.random((459375, 24))
>     correlation = corrcoef(data_records)
May I assume that your data_records array is made of 24 different variables of
which you have 459375 observations?
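If so, transposing (as described above) is the fix, since `corrcoef` treats each row as a variable by default; a minimal sketch, with a smaller row count so it runs in little memory:

```python
import numpy as np

# Stand-in for the real grid data: observations in rows, 24 hourly
# variables in columns (1000 rows here instead of 459375).
data_records = np.random.random((1000, 24))

# With the default rowvar, corrcoef would build a row-by-row matrix
# (459375 x 459375 on the full data). rowvar=False (equivalently,
# passing data_records.T) yields the intended 24 x 24 result:
correlation = np.corrcoef(data_records, rowvar=False)
print(correlation.shape)  # (24, 24)
```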
Dear all,
I get the following memory error while running my program:
Traceback (most recent call last):
File "/home/nistl/Software/Netzbetreiber/FLOW/src/MemoryError_Debug.py", line 9, in
correlation = corrcoef(data_records)
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_
Thanks for your response, David.
> What do you mean by own extensions to NumPy ? If you mean building
> extensions against the C API of NumPy, then you don't need to build your own
> NumPy. Building NumPy with Intel Compilers and MKL is a non-trivial process,
> so I would rather avoid it.
I want to