OK, I take this as a go ahead with the proviso that it's my problem. The big
question is naming. Scipy has

lu
lu_factor
lu_solve

cholesky
cho_factor
cho_solve

The code for lu and lu_factor isn't the same, although they both look to
call the same underlying function; the same is true of the cholesky pair.
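For reference, the SciPy pair being discussed splits the work as follows: `lu` returns explicit factors, while `lu_factor`/`lu_solve` keep LAPACK's packed form for reuse (a minimal sketch; the matrix and right-hand side are illustrative):

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# lu returns the explicit factors, with A == P @ L @ U.
P, L, U = lu(A)
assert np.allclose(A, P @ L @ U)

# lu_factor returns LAPACK's packed (lu, piv) form, meant to be
# reused across many right-hand sides via lu_solve.
lu_piv = lu_factor(A)
b = np.array([1.0, 2.0])
x = lu_solve(lu_piv, b)
assert np.allclose(A @ x, b)
```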
> If LU is already part of lapack_lite and somebody is willing to put in
> the work to expose the functionality to the end user in a reasonable
> way, then I think it should be added.
+1
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
Charles R Harris wrote:
>
>
> I would just add the bits that are already there and don't add any
> extra dependencies, i.e., they are there when numpy is built without
> ATLAS or other external packages. The determinant function in linalg
> uses the LU decomposition, so I don't see why that shou
2008/10/15 Robert Kern <[EMAIL PROTECTED]>:
> Which bits? The current set has worked fine for more than 10 years.
I'm surprised no one has requested the LU decomposition in NumPy
before -- it is a fundamental building block in linear algebra. I
think it is going too far to state that NumPy's line
On Wed, Oct 15, 2008 at 15:33, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Wed, Oct 15, 2008 at 2:26 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>> On Wed, Oct 15, 2008 at 15:21, Charles R Harris
>> <[EMAIL PROTECTED]> wrote:
>> >
>> > On Wed, Oct 15, 2008 at 2:04 PM, Robert Kern <[EMAIL P
On 10/15/2008 4:26 PM Robert Kern apparently wrote:
> Which bits?
Those in lapack_lite?
Alan Isaac
On Wed, Oct 15, 2008 at 2:26 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 15:21, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> > On Wed, Oct 15, 2008 at 2:04 PM, Robert Kern <[EMAIL PROTECTED]>
> wrote:
> >>
> >> On Wed, Oct 15, 2008 at 14:49, Charles R Harris
> >> <[
On Wed, Oct 15, 2008 at 15:21, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Wed, Oct 15, 2008 at 2:04 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>> On Wed, Oct 15, 2008 at 14:49, Charles R Harris
>> <[EMAIL PROTECTED]> wrote:
>> >
>> > On Wed, Oct 15, 2008 at 1:06 PM, Robert Kern <[EMAIL P
On Wed, Oct 15, 2008 at 2:04 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 14:49, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> > On Wed, Oct 15, 2008 at 1:06 PM, Robert Kern <[EMAIL PROTECTED]>
> wrote:
> >>
> >> On Wed, Oct 15, 2008 at 00:23, Charles R Harris
> >> <[
On Wed, Oct 15, 2008 at 10:52 AM, Ken Basye <[EMAIL PROTECTED]> wrote:
> Hi Folks,
> In porting some code to a 64-bit machine, I ran across the following
> issue.
> On the 64-bit machine, an array with dtype=int32 prints the dtype
> explicitly, whereas on
> a 32-bit machine it doesn't. The same
On Wed, Oct 15, 2008 at 14:49, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Wed, Oct 15, 2008 at 1:06 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>> On Wed, Oct 15, 2008 at 00:23, Charles R Harris
>> <[EMAIL PROTECTED]> wrote:
>> > Hi All,
>> >
>> > numpy.linalg has qr and cholesky factoriz
On Wed, Oct 15, 2008 at 14:43, Stéfan van der Walt <[EMAIL PROTECTED]> wrote:
> 2008/10/15 Robert Kern <[EMAIL PROTECTED]>:
>>> numpy.linalg has qr and cholesky factorizations, but LU factorization is
>>> only available in scipy. That doesn't seem quite right. I think it would
>>> make sense to inc
On Wed, Oct 15, 2008 at 02:20, Geoffrey Irving <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Currently in numpy comparing dtypes for equality with == does an
> internal PyArray_EquivTypes check, which means that the dtypes NPY_INT
> and NPY_LONG compare as equal in python. However, the hash function
> f
On Wed, Oct 15, 2008 at 1:06 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 00:23, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> > Hi All,
> >
> > numpy.linalg has qr and cholesky factorizations, but LU factorization is
> > only available in scipy. That doesn't seem quite r
2008/10/15 Robert Kern <[EMAIL PROTECTED]>:
>> numpy.linalg has qr and cholesky factorizations, but LU factorization is
>> only available in scipy. That doesn't seem quite right. I think it would
>> make sense to include the LU factorization in numpy among the basic linalg
>> operations, and probab
When you slice an array, you keep the original array in memory until
the slice is deleted. The slice uses the original array memory and is
not a copy. The second example explicitly makes a copy.
Perry
On Oct 15, 2008, at 2:31 PM, emil wrote:
>
>> Huang-Wen Chen wrote:
>>> Robert Kern wrote:
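The view-versus-copy behaviour Perry describes can be sketched directly (a minimal illustration, not code from the original thread):

```python
import numpy as np

a = np.arange(10)

v = a[2:5]          # a slice is a view: it shares a's memory
v[0] = 99           # writing through the view changes a
assert a[2] == 99
assert v.base is a  # the view keeps the original array alive

c = a[2:5].copy()   # an explicit copy owns its own memory
c[0] = -1
assert a[2] == 99   # the original is unaffected
assert c.base is None
```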
On Wed, Oct 15, 2008 at 00:23, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> Hi All,
>
> numpy.linalg has qr and cholesky factorizations, but LU factorization is
> only available in scipy. That doesn't seem quite right. I think it would
> make sense to include the LU factorization in numpy among th
> Huang-Wen Chen wrote:
>> Robert Kern wrote:
from numpy import *
for i in range(1000):
    a = random.randn(512**2)
    b = a.argsort(kind='quick')
>>> Can you try upgrading to numpy 1.2.0? On my machine with numpy 1.2.0
>>> on OS X, the memory usage is stable.
>>>
>> I tried t
Hi Folks,
In porting some code to a 64-bit machine, I ran across the following
issue. On the 64-bit machine, an array with dtype=int32 prints the dtype
explicitly, whereas on a 32-bit machine it doesn't. The same is true for
dtype=intc (since 'intc is int32' --> True), and the converse is tr
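The behaviour is easy to see interactively (a sketch; the repr only shows the dtype when it differs from the platform's default integer type, so the exact output is platform-dependent):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([1, 2, 3])  # platform default integer type

# repr shows the dtype only when it is not the platform default:
# on a 64-bit build repr(a) ends with "dtype=int32", on a build
# whose default integer is int32 it does not. The default-dtype
# array b never shows its dtype.
print(repr(a))
print(repr(b))
```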
On Wed, Oct 15, 2008 at 9:19 AM, David Cournapeau <[EMAIL PROTECTED]>wrote:
> On Wed, Oct 15, 2008 at 11:45 PM, Travis E. Oliphant
> <[EMAIL PROTECTED]> wrote:
> > Gabriel Gellner wrote:
> >> Some colleagues noticed that var uses biased formulas by default in
> numpy,
> >> searching for the reaso
Hi,
While I disagree, I really do not care because this is documented. But
perhaps a clear warning is needed at the start so it is clear what the
default ddof means instead of it being buried in the Notes section.
Also I am surprised that you did not directly reference the Stein
estimator (your mi
On Wed, Oct 15, 2008 at 09:45:39AM -0500, Travis E. Oliphant wrote:
> Gabriel Gellner wrote:
> > Some colleagues noticed that var uses biased formulas by default in numpy,
> > searching for the reason only brought up:
> >
> > http://article.gmane.org/gmane.comp.python.numeric.general/12438/match=v
Me too.
S
On Wednesday 15 October 2008 11:31:44 am Paul Barrett wrote:
> I'm behind Travis on this one.
>
> -- Paul
>
> On Wed, Oct 15, 2008 at 11:19 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
> > On Wed, Oct 15, 2008 at 11:45 PM, Travis E. Oliphant
> >
> > <[EMAIL PROTECTED]> wrote:
> >>
I'm behind Travis on this one.
-- Paul
On Wed, Oct 15, 2008 at 11:19 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 11:45 PM, Travis E. Oliphant
> <[EMAIL PROTECTED]> wrote:
>> Gabriel Gellner wrote:
>>> Some colleagues noticed that var uses biased formulas by default
On Wed, Oct 15, 2008 at 11:45 PM, Travis E. Oliphant
<[EMAIL PROTECTED]> wrote:
> Gabriel Gellner wrote:
>> Some colleagues noticed that var uses biased formulas by default in numpy,
>> searching for the reason only brought up:
>>
>> http://article.gmane.org/gmane.comp.python.numeric.general/12438
Gabriel Gellner wrote:
> Some colleagues noticed that var uses biased formulas by default in numpy,
> searching for the reason only brought up:
>
> http://article.gmane.org/gmane.comp.python.numeric.general/12438/match=var+bias
>
> which I totally agree with, but there was no response? Any reason
Some colleagues noticed that var uses biased formulas by default in numpy;
searching for the reason only brought up:
http://article.gmane.org/gmane.comp.python.numeric.general/12438/match=var+bias
which I totally agree with, but there was no response. Any reason for this? Is
there any way I can
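The behaviour in question, for concreteness (a minimal illustration; ddof is the parameter that selects the estimator):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
# mean is 2.5; sum of squared deviations is 5.0

# Default: ddof=0, the biased maximum-likelihood estimator (divide by n).
biased = np.var(x)            # 5.0 / 4 -> 1.25

# ddof=1 gives the unbiased sample variance (divide by n - 1).
unbiased = np.var(x, ddof=1)  # 5.0 / 3 -> 1.666...

assert np.isclose(biased, 1.25)
assert np.isclose(unbiased, 5.0 / 3.0)
```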
On 10/14/2008 9:23 PM frank wang apparently wrote:
> I have a large ndarray that I want to dump to a file. I know that I can
> use a for loop to write one value at a time. Since Python is a very
> powerful language, I want to find a way that will dump the data fast
> and clean. The data can be
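The usual fast routes for dumping an array are NumPy's own I/O functions rather than a Python loop (a sketch; the file names and data are illustrative):

```python
import os
import tempfile
import numpy as np

a = np.arange(12.0).reshape(3, 4)

tmpdir = tempfile.mkdtemp()
npy_path = os.path.join(tmpdir, 'data.npy')
txt_path = os.path.join(tmpdir, 'data.txt')

# Binary dump: fast, compact, and round-trips exactly.
np.save(npy_path, a)
b = np.load(npy_path)
assert (a == b).all()

# Human-readable text dump, one row per line.
np.savetxt(txt_path, a, fmt='%.6g')
c = np.loadtxt(txt_path)
assert np.allclose(a, c)
```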
Hi Uwe
2008/10/15 Uwe Schmitt <[EMAIL PROTECTED]>:
> I got a matrix of 2100 lines, and I want to calculate blockwise mean
> vectors.
> Each block consists of 10 consecutive rows.
>
> My code looks like this:
>
> rv = []
> for i in range(0, 2100, 10):
>     rv.append( mean(matrix[i:i+10], ax
On Oct 14 15:29 -1000, Eric Firing wrote:
> frank wang wrote:
> > Hi,
> >
> > I have a large ndarray that I want to dump to a file. I know that I can
> > use a for loop to write one value at a time. Since Python is a very
> > powerful language, I want to find a way that will dump the data fas
That's cool. Thanks for your fast answer.
Greetings, Uwe
On 15 Okt., 12:56, "Charles R Harris" <[EMAIL PROTECTED]>
wrote:
> On Wed, Oct 15, 2008 at 4:47 AM, Uwe Schmitt <[EMAIL PROTECTED]
> > wrote:
> > Hi,
>
> > I got a matrix of 2100 lines, and I want to calculate blockwise mean
> > vect
On Wed, Oct 15, 2008 at 4:47 AM, Uwe Schmitt <[EMAIL PROTECTED]
> wrote:
> Hi,
>
> I got a matrix of 2100 lines, and I want to calculate blockwise mean
> vectors.
> Each block consists of 10 consecutive rows.
>
> My code looks like this:
>
> rv = []
> for i in range(0, 2100, 10):
> rv.a
Hi,
I got a matrix of 2100 lines, and I want to calculate blockwise mean
vectors.
Each block consists of 10 consecutive rows.
My code looks like this:
rv = []
for i in range(0, 2100, 10):
    rv.append(mean(matrix[i:i+10], axis=0))
return array(rv)
Is there a more elegant and may
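The loop can be replaced by a reshape (a sketch assuming a 2-D matrix whose row count is a multiple of the block size, as in the post):

```python
import numpy as np

matrix = np.random.rand(2100, 5)

# Loop version from the post: one mean per block of 10 consecutive rows.
rv = np.array([matrix[i:i + 10].mean(axis=0) for i in range(0, 2100, 10)])

# Vectorized: view the rows as 210 blocks of 10 and average within blocks.
blockwise = matrix.reshape(210, 10, -1).mean(axis=1)

assert np.allclose(rv, blockwise)
```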
Huang-Wen Chen wrote:
> Robert Kern wrote:
>>> from numpy import *
>>> for i in range(1000):
>>>     a = random.randn(512**2)
>>>     b = a.argsort(kind='quick')
>> Can you try upgrading to numpy 1.2.0? On my machine with numpy 1.2.0
>> on OS X, the memory usage is stable.
>>
> I tried the code frag
Hello,
Currently in numpy comparing dtypes for equality with == does an
internal PyArray_EquivTypes check, which means that the dtypes NPY_INT
and NPY_LONG compare as equal in python. However, the hash function
for dtypes reduces to id(), which is therefore inconsistent with ==.
Unfortunately I can'
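The invariant at stake can be shown with two separately constructed but equivalent dtypes (a sketch; the id()-based hash described above was later replaced by a value-based hash in NumPy):

```python
import numpy as np

# Two equivalent structured dtypes built independently.
a = np.dtype([('x', np.int32), ('y', np.float64)])
b = np.dtype([('x', np.int32), ('y', np.float64)])

assert a is not b   # distinct Python objects
assert a == b       # but PyArray_EquivTypes says they are equal

# For dtypes to work as dict keys, equal objects must hash equal;
# hashing by id() would break this invariant.
assert hash(a) == hash(b)
```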