Re: [Numpy-discussion] Minimum and maximum values of numpy datatypes?

2007-12-11 Thread Hans Meine
On Monday, 10 December 2007 at 17:23:07, Matthieu Brucher wrote:
> I had the same problem earlier today; someone told me the answer: use the
> numpy.info object ;)

I saw this shortly after posting (what a coincidence), and I planned to reply
to myself, but my mail did not make it to the list very quickly (it took
nearly two hours!).

Thanks, that's indeed what I was looking for.
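
For reference, the objects in question are numpy.iinfo (integer types) and
numpy.finfo (floating point types); a minimal usage sketch:

    import numpy

    # Integer dtypes: numpy.iinfo
    print(numpy.iinfo(numpy.int32).min)    # -2147483648
    print(numpy.iinfo(numpy.int32).max)    #  2147483647

    # Floating point dtypes: numpy.finfo
    print(numpy.finfo(numpy.float64).max)  # about 1.798e+308
    print(numpy.finfo(numpy.float64).eps)  # about 2.22e-16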

-- 
Ciao, /  /
 /--/
/  / ANS


Re: [Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-11 Thread Hans Meine
On Monday, 10 December 2007 at 23:46:17, Timothy Hochberg wrote:
> > TypeError: function takes at least 2 arguments (1 given)
> >
> > (I could simulate that by passing max = maximum_value_of(a.dtype), if
> > that existed, see my other mail.)
>
> Why not just use minimum or maximum as needed instead of overloading clip?

You mean one of the following?
  a.clip(min = 10, max = numpy.finfo(a.dtype).max)
  a.clip(min = 10, max = numpy.iinfo(a.dtype).max)

Three reasons:
- this is not very concise
- it is less efficient than specialized clip variants
  (not by much, and not too important though)
- I would have to discriminate integral and floating point types

How is the latter done in numpy?  Is there a good reason not to have 
numpy.rangetraits(sometype) with min/max as in iinfo/finfo?
Should I use isinstance(mytype, int)?
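
For illustration, one way to discriminate without isinstance() checks is the
dtype's kind attribute; a sketch of the hypothetical rangetraits-style helper
asked about above (not an existing numpy function):

    import numpy

    def range_of(dtype):
        # Hypothetical helper: pick iinfo or finfo based on the dtype kind.
        dtype = numpy.dtype(dtype)
        if dtype.kind in 'iu':        # signed / unsigned integers
            info = numpy.iinfo(dtype)
        elif dtype.kind == 'f':       # floating point
            info = numpy.finfo(dtype)
        else:
            raise TypeError('no range information for %s' % dtype)
        return info.min, info.max

    print(range_of(numpy.uint8))      # (0, 255)
    print(range_of(numpy.float32))    # roughly (-3.4e+38, 3.4e+38)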

-- 
Ciao, /  /
 /--/
/  / ANS


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread David Cournapeau
On Dec 11, 2007 3:16 PM, Fernando Perez <[EMAIL PROTECTED]> wrote:
>
> On Dec 10, 2007 11:04 PM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> > On Dec 11, 2007 12:46 PM, Andrew Straw <[EMAIL PROTECTED]> wrote:
> > > According to the QEMU website, QEMU does not (yet) emulate SSE on x86
> > > target, so a Windows installation on a QEMU virtual machine may be a
> > > good way to build binaries free of these issues.
> > > http://fabrice.bellard.free.fr/qemu/qemu-tech.html
> > I tried this; it does not work (it actually emulates SSE). I went
> > further, and managed to disable SSE support in qemu...
> >
> > But again, what's the point:  it takes ages to compile (qemu without
> > the hardware accelerator is slow, like ten times slower), and you will
> > end up with a really bad atlas, since atlas optimization is entirely
> > based on runtime timers, which do not make sense anymore.
> >
> > I mean, really, what's the point of doing all this compared to using
> > blas/lapack from netlib ? In practice, is it really slower ? For what
> > ? I know I don't care so much, and I am a heavy user of numpy.
>
> For certain cases the difference can be pretty dramatic,

Is that still the case if you use it on a CPU which is really different from
the one used for compilation? For example, the default ATLAS on Debian (built
for a Pentium 2, no SSE) is not really much faster than netlib, IMHO.

> ship TWO
> binaries of Numpy/Scipy each time:
>

I think that in the short term, that's the only solution: the netlib one
being the default, the atlas one being optional (with something like
CoreDuoSSE2 in the name). The default should then work everywhere.

David


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Albert Strasheim
Hello

I think this idea is the way to go (maybe along with an ACML build, but my 
limited testing seemed to indicate that MKL works on AMD CPUs).

In fact, I apparently proposed it about a year ago:

https://svn.enthought.com/enthought/ticket/899

No takers so far...

Cheers,

Albert

P.S. NumPy on Windows and Linux built with MKL works like a charm for me.

- Original Message - 
From: "Christopher Barker" <[EMAIL PROTECTED]>
To: "Discussion of Numerical Python" 
Sent: Tuesday, December 11, 2007 7:28 AM
Subject: Re: [Numpy-discussion] Changing the distributed binary for numpy 
1.0.4 for windows ?


> Andrew Straw wrote:
>> A function
>> could be called at numpy import time that specifically checks for the
>> instruction set on the CPU running
>
> Even better would be a run-time selection of the "best" version. I've
> often fantasized about an ATLAS that could do this.
>
> I think the Intel MKL has this feature (though maybe only for Intel
> processors). The MKL runtime is re-distributable, but somehow I doubt
> that we could have one person buy one copy and distribute binaries to
> the entire numpy-using world --- but does anyone know?
>
> http://www.intel.com/cd/software/products/asmo-na/eng/346084.htm
>
> and
>
> http://www.intel.com/cd/software/products/asmo-na/eng/266854.htm#copies
>
> -Chris



Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Matthieu Brucher
2007/12/11, Albert Strasheim <[EMAIL PROTECTED]>:
>
> Hello
>
> I think this idea is the way to go (maybe along with an ACML build, but my
> limited testing seemed to indicate that MKL works on AMD CPUs).


I'm trying to build numpy with ACML, but ACML lacks the CBLAS interface, so
the compilation fails. Once SCons is used for the build, it should be doable.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread David Cournapeau
On Dec 11, 2007 8:47 PM, Albert Strasheim <[EMAIL PROTECTED]> wrote:
> Hello
>
> I think this idea is the way to go (maybe along with an ACML build, but my
> limited testing seemed to indicate that MKL works on AMD CPUs).
>
I am personally totally against it. It is one thing to support proprietary
software; it's quite another to build our official binaries against it. I
consider myself far from any kind of open source zealot, but that would be
crossing a line I would much prefer not to cross.

David


Re: [Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-11 Thread Timothy Hochberg
On Dec 11, 2007 2:32 AM, Hans Meine <[EMAIL PROTECTED]> wrote:

> On Monday, 10 December 2007 at 23:46:17, Timothy Hochberg wrote:
> > > TypeError: function takes at least 2 arguments (1 given)
> > >
> > > (I could simulate that by passing max = maximum_value_of(a.dtype), if
> > > that existed, see my other mail.)
> >
> > Why not just use minimum or maximum as needed instead of overloading
> clip?
>
> You mean one of the following?
>  a.clip(min = 10, max = numpy.finfo(a.dtype).max)
>  a.clip(min = 10, max = numpy.iinfo(a.dtype).max)


No. I mean:

  numpy.maximum(a, 10)

To correspond to the above example.



>
> Three reasons:
> - this is not very concise
> - it is less efficient than specialized clip variants
>  (not much / too important though)
> - I would have to discriminate integral and floating point types
>
> How is the latter done in numpy?  Is there a good reason not to have
> numpy.rangetraits(sometype) with min/max as in iinfo/finfo?
> Should I use isinstance(mytype, int)?
>
> --
> Ciao, /  /
> /--/
>/  / ANS



-- 
.  __
.   |-\
.
.  [EMAIL PROTECTED]


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Ray Schumacher
At 02:32 AM 12/11/2007, you wrote:
>If so I'd be happy to contribute part of the purchase price,
>and I assume others would too.
>
>What's more, I *have* an old PIII at home.

The main company I consult for is set to buy the Intel compiler and 
FFT lib for Windows, for the express purpose of compiling Python, 
numpy, and the fastest FFT for each CPU, for new products.
I develop on both an old dual PIII and a new Core Duo, and there is 
also every other flavor in the shop. As I read it, the binaries 
should be distributable.
The catch is, I have never compiled Python or numpy - I had seen a 
poster who offered help. When the company actually purchases the 
product I'd be glad to do it on 2-3 targets if someone can assist 
with the parameters. We have one consultant here who has done it on Linux.


Ray Schumacher
Cognitive Vision
8580 Production Ave., Suite B
San Diego, CA 92121
858.578.2778
http://cognitivevision.com/  






Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Christopher Barker
David Cournapeau wrote:
>> I think this idea is the way to go (maybe along with an ACML build, but my
>> limited testing seemed to indicate that MKL works on AMD CPUs).
>>
> I am personally totally against it. It is one thing to support
> proprietary software, that's quite another to build our official
> binaries against it. I consider myself far from any kind of open
> source zealot, but that would be crossing a line I would much prefer
> avoiding to cross.

Interesting -- I DO consider myself a kind of Open Source Zealot -- and 
this doesn't bother me a bit.

It would bother me a LOT if numpy could only be built against this lib, 
and not an Open Source one -- but I don't see this as any different than 
providing a binary built with the Microsoft compiler.

The way it "SHOULD" be done is the OS vendor should provide these libs 
-- Apple does, and most of the Linux Distros do (with varying success). 
If MS could just cut a deal with Intel and AMD, we could count on all 
versions of Windows having an optimized lib-- oh well, one can dream!

I'm not sure the licensing really makes it possible though. Numpy isn't 
exactly an application, but rather a development tool, so I'm not sure 
how Intel would feel about it being distributed. Also, it looks like 
they require each "developer" to have a license, rather than only the 
person building the final binary -- so having the one person building 
the final distro may not be kosher. IANAL.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Christopher Barker
Fernando Perez wrote:
> a simple, reasonable solution that is likely to work: ship TWO
> binaries of Numpy/Scipy each time:
> 
> 1. {numpy,scipy}-reference: built with the reference blas from netlib,
> no atlas, period.
> 
> 2. {}-atlas: built with whatever the developers have at the time,
> which will likely mean these days a core 2 duo with SSE2 support.
> What hardware it was built on should be indicated, so people can at
> least know this fact.

I disagree -- having an atlas version that only works on recent hardware 
is just asking for complaints -- I think the ONLY way to go is for the 
"standard" binary to be universal. Instructions should be provided for 
building other versions, and if third parties want to distribute 
processor-dependent versions, then great, but that's an extra.

By the way, I've always been confused by static linking of lapack/atlas 
-- it seems to me that this kind of thing is one of the best uses of 
dynamic linking -- the main binary is processor dependent, and it is 
linked, at runtime, with the host's processor specific lib. -- could we 
do it that way:

The standard distro includes a universal dynamic lib.

Folks build processor-specific libs that can be dropped in to replace 
the universal one if someone so desires.

Then it's a "small" step to include a runtime tool that selects the best 
lib it can find on the system.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Fernando Perez
On Dec 11, 2007 11:04 AM, Christopher Barker <[EMAIL PROTECTED]> wrote:
> Fernando Perez wrote:
> > a simple, reasonable solution that is likely to work: ship TWO
> > binaries of Numpy/Scipy each time:
> >
> > 1. {numpy,scipy}-reference: built with the reference blas from netlib,
> > no atlas, period.
> >
> > 2. {}-atlas: built with whatever the developers have at the time,
> > which will likely mean these days a core 2 duo with SSE2 support.
> > What hardware it was built on should be indicated, so people can at
> > least know this fact.
>
> I disagree -- having an atlas version that only works on recent hardware
> is just asking for complaints -- I think the ONLY way to go is for the
> "standard" binary to be universal. Instructions should be provided for
> building other versions, and if third parties want to distribute
> processor-dependent versions, then great, but that's an extra.

Well, what I had in mind was not something that would run *only* on
recent hardware.  A typical Atlas build on a recent box will run with
anything with SSE2, which means boxes as far back as the Pentium4, I
think.  Or is Atlas using anything that's core2-duo-specific in its
build?  At the workshop, the only problem John and I noticed was for
the Pentium III user, and there were plenty of old-looking boxes
around that were most certainly not Core2.

Cheers,

f


Re: [Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-11 Thread Hans Meine
On Tuesday, 11 December 2007, Timothy Hochberg wrote:
> > You mean one of the following?
> >  a.clip(min = 10, max = numpy.finfo(a.dtype).max)
> >  a.clip(min = 10, max = numpy.iinfo(a.dtype).max)
>
> No. I mean:
>
>   numpy.maximum(a, 10)
>
> To correspond to the above example.

Great, thanks for the hints.  That's good enough; it would be pythonic,
though, to let clip() forward calls to minimum/maximum if only one bound is
given. I had a look at the code, but I am not familiar enough with the
Python/C API, let alone numpy's internals, to *quickly* hack this.
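
For what it's worth, the forwarding idea can be sketched in pure Python (a
hypothetical wrapper, not part of numpy) without touching the C code:

    import numpy

    def clip_one_sided(a, min=None, max=None):
        # Hypothetical wrapper: forward to maximum/minimum when only one
        # bound is given, so no iinfo/finfo sentinel value is needed.
        if min is not None and max is not None:
            return a.clip(min, max)
        if min is not None:
            return numpy.maximum(a, min)
        if max is not None:
            return numpy.minimum(a, max)
        raise TypeError('clip_one_sided() requires at least one bound')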

Ciao, /  /.o.
 /--/ ..o
/  / ANS  ooo




Re: [Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-11 Thread Travis E. Oliphant
Hans Meine wrote:
> On Tuesday, 11 December 2007, Timothy Hochberg wrote:
>   
>>> You mean one of the following?
>>>  a.clip(min = 10, max = numpy.finfo(a.dtype).max)
>>>  a.clip(min = 10, max = numpy.iinfo(a.dtype).max)
>>>   
>> No. I mean:
>>
>>   numpy.maximum(a, 10)
>>
>> To correspond to the above example.
>> 
>
> Great, thanks for the hints.  That's good enough; could be pythonic to let 
> clip() forward calls to minimum/maximum if only one bound is given though.
> I had a look at the code, but I am not used enough to the Python/C API, let 
> alone to numpy's internals, to *quickly* hack this.
>   

It is done, now in SVN.
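
With that change, a one-sided call works directly; a usage sketch (assuming
the post-change behavior):

    import numpy

    a = numpy.arange(20)
    a.clip(min=10)    # lower bound only
    a.clip(max=15)    # upper bound only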

-Travis




Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread David Cournapeau
On Dec 12, 2007 2:58 AM, Christopher Barker <[EMAIL PROTECTED]> wrote:
> David Cournapeau wrote:
> >> I think this idea is the way to go (maybe along with an ACML build, but my
> >> limited testing seemed to indicate that MKL works on AMD CPUs).
> >>
> > I am personally totally against it. It is one thing to support
> > proprietary software, that's quite another to build our official
> > binaries against it. I consider myself far from any kind of open
> > source zealot, but that would be crossing a line I would much prefer
> > avoiding to cross.
>
> Interesting -- I DO consider myself a kind of Open Source Zealot -- and
> this doesn't bother me a bit.
>
> It would bother me a LOT if numpy could only be built against this lib,
> and not an Open Source one -- but I don't see this as any different than
> providing a binary built with the Microsoft compiler.
>
For me it is different: when using a MS compiler, you are not forcing people
to use a non open source product (except maybe the C runtime). What will
happen if we offer binaries using MKL? ATLAS will not be tested anymore on
Windows, and it forces every developer to use the MKL to support it. At least
now, with ATLAS problems, I can reproduce the problems. With the MKL, not so
much.

cheers,

David


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread David Cournapeau
On Dec 12, 2007 3:04 AM, Christopher Barker <[EMAIL PROTECTED]> wrote:
> Fernando Perez wrote:
> > a simple, reasonable solution that is likely to work: ship TWO
> > binaries of Numpy/Scipy each time:
> >
> > 1. {numpy,scipy}-reference: built with the reference blas from netlib,
> > no atlas, period.
> >
> > 2. {}-atlas: built with whatever the developers have at the time,
> > which will likely mean these days a core 2 duo with SSE2 support.
> > What hardware it was built on should be indicated, so people can at
> > least know this fact.
>
> I disagree -- having an atlas version that only works on recent hardware
> is just asking for complaints -- I think the ONLY way to go is for the
> "standard" binary to be universal. Instructions should be provided for
> building other versions, and if third parties want to distribute
> processor-dependent versions, then great, but that's an extra.
>

But that's the problem: it is next to impossible to build an ATLAS that works
on any processor! At least, it is not supported by the ATLAS developers, and
the way it was suggested did not work for me.

> By the way, I've always been confused by static linking of lapack/atlas
> -- it seems to me that this kind of thing is on of the best uses of
> dynamic linking -- the main binary is processor dependent, and it is
> linked, at runtime, with the host's processor specific lib. -- could we
> do it that way:
>

I believe that lapack/blas are dynamically linked by default. But for a
portable solution, I don't see any option other than dynamic loading. The
BLAS/LAPACK library would be like a plug-in, loaded at runtime. But this
requires some work, and in particular, doing it in a cross-platform way is
not trivial.

> The standard distro includes a universal dynamic lib.
>
> Folks build processor-specific libs that can be dropped in to replace
> the universal one if someone so desires.

Asking the users to do this is just asking for a new set of problems, IMHO.
I used this approach for the rpms on the ashigabou repository (they are
compiled against netlib blas/lapack, but you can change the libraries used
via the runtime library path), but that's not portable. What is needed is a
true runtime system in numpy which can detect the processor (not too
difficult, although doing it for all compilers is not trivial either), load
the right library (difficult to do in a cross-platform way), and use it
accordingly (not too difficult, but requires some work).
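
To make that concrete, a rough and deliberately non-portable sketch
(Linux-only CPU check, purely illustrative library names) of the
detect-then-load step:

    import ctypes

    def has_sse2():
        # Crude Linux-only detection: look for the sse2 flag in /proc/cpuinfo.
        # Doing this for every platform and compiler is the hard part.
        try:
            flags = open('/proc/cpuinfo').read()
        except IOError:
            return False
        return 'sse2' in flags.split()

    # Plug-in style loading: pick a BLAS shared library at runtime.
    # The library names below are illustrative, not real packages.
    libname = 'libblas-sse2.so' if has_sse2() else 'libblas-reference.so'
    blas = ctypes.CDLL(libname)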

David


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Robert Kern
David Cournapeau wrote:
> On Dec 12, 2007 2:58 AM, Christopher Barker <[EMAIL PROTECTED]> wrote:
>> David Cournapeau wrote:
>>>> I think this idea is the way to go (maybe along with an ACML build, but my
>>>> limited testing seemed to indicate that MKL works on AMD CPUs).
>>>>
>>> I am personally totally against it. It is one thing to support
>>> proprietary software, that's quite another to build our official
>>> binaries against it. I consider myself far from any kind of open
>>> source zealot, but that would be crossing a line I would much prefer
>>> avoiding to cross.
>> Interesting -- I DO consider myself a kind of Open Source Zealot -- and
>> this doesn't bother me a bit.
>>
>> It would bother me a LOT if numpy could only be built against this lib,
>> and not an Open Source one -- but I don't see this as any different than
>> providing a binary built with the Microsoft compiler.
>>
> For me it is: when using a MS compiler, you are not forcing people to
> use a non open source product (except maybe the C runtime). What will
> happen if we offer binaries using MKL ? The ATLAS will not be tested
> anymore on windows, it forces every developer to use the MKL to
> support it At least, now, with atlas problems, I can reproduce the
> problems. With the MKL, not so much.

I agree. The official-official Win32 binaries (numpy--py.msi
and numpy--py-win32.egg on the SourceForge download page)
should be unencumbered. Other versions can be on the download page, too, but
they should be named differently, like numpy-mkl--... .

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Ryan Krauss
Near as I can tell, this is still unresolved for people with non-sse2
machines.  Is that right?

I have a student trying to get started with such a machine.  Numpy is
causing Python to crash.  What is the easiest solution?  Does he need
to build numpy from source on that machine (I actually still have
access to one and could do it)?

Is it just Numpy or also Scipy?

Here are his responses to me:

Laptop - Ok
Windows XP Professional, Service Pack 2
AMD Athlon 64 3400+  (ClawHammer)
1.67 GHz, 768 MB of RAM
Chipset:  SiS 755/755FX
Southbridge:  SiS LPC Bridge
Instructions:  MMX (+), 3DNow! (+), SSE, SSE2, x86-64

Machine 1 - Crashes
Windows XP Professional, Service Pack 2
AMD Athlon XP 2000+  (Thoroughbred)
1.67 GHz, 768 MB of RAM
ASUS A7V8X-X motherboard
Chipset:  VIA KT400 (VT8377)
Southbridge:  VIA VT8235
Instructions:  MMX (+), 3DNow! (+), SSE

Machine 2 - Crashes
Windows XP Professional, Service Pack 2
AMD Athlon XP 2600+  (Barton)
1.92 GHz, 2.0 GB of RAM
ASUS A7V880 motherboard
Chipset:  VIA KT880
Southbridge:  VIA VT8237
Instructions:  MMX (+), 3DNow! (+), SSE

I ran the following statements on both machines which caused it to crash:

import numpy
numpy.test()

Here is the output:

Numpy is installed in C:\Python25\lib\site-packages\numpy
Numpy version 1.0.4
Python version 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)]
  Found 10/10 tests for numpy.core.defmatrix
  Found 36/36 tests for numpy.core.ma
  Found 223/223 tests for numpy.core.multiarray
  Found 65/65 tests for numpy.core.numeric
  Found 31/31 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 6/6 tests for numpy.core.scalarmath
  Found 14/14 tests for numpy.core.umath
  Found 4/4 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 1/1 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 9/9 tests for numpy.lib.arraysetops
  Found 46/46 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 4/4 tests for numpy.lib.index_tricks
  Found 3/3 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 40/40 tests for numpy.linalg
  Found 2/2 tests for numpy.random
  Found 0/0 tests for __main__
.

Sounds like the problem is the fact that my desktop computers do not support
the SSE2 instructions used in the latest numpy binaries. This also explains
why it works fine on the laptop, which does support SSE2.

On Dec 11, 2007 6:48 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> David Cournapeau wrote:
> > On Dec 12, 2007 2:58 AM, Christopher Barker <[EMAIL PROTECTED]> wrote:
> >> David Cournapeau wrote:
> >>>> I think this idea is the way to go (maybe along with an ACML build, but my
> >>>> limited testing seemed to indicate that MKL works on AMD CPUs).
> >>>>
> >>> I am personally totally against it. It is one thing to support
> >>> proprietary software, that's quite another to build our official
> >>> binaries against it. I consider myself far from any kind of open
> >>> source zealot, but that would be crossing a line I would much prefer
> >>> avoiding to cross.
> >> Interesting -- I DO consider myself a kind of Open Source Zealot -- and
> >> this doesn't bother me a bit.
> >>
> >> It would bother me a LOT if numpy could only be built against this lib,
> >> and not an Open Source one -- but I don't see this as any different than
> >> providing a binary built with the Microsoft compiler.
> >>
> > For me it is: when using a MS compiler, you are not forcing people to
> > use a non open source product (except maybe the C runtime). What will
> > happen if we offer binaries using MKL ? The ATLAS will not be tested
> > anymore on windows, it forces every developer to use the MKL to
> > support it At least, now, with atlas problems, I can reproduce the
> > problems. With the MKL, not so much.
>
> I agree. The official-official Win32 binaries 
> (numpy--py.msi
> and numpy--py-win32.egg on the SourceForge donwload page)
> should be unencumbered. Other versions can be on the download page, too, but
> they should be named differently, like numpy-mkl--... .
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
>  that is made terrible by our own mad attempt to interpret it as though it had
>  an underlying truth."
>   -- Umberto 

Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Fernando Perez
On Dec 11, 2007 6:45 PM, Ryan Krauss <[EMAIL PROTECTED]> wrote:
> Near as I can tell, this is still unresolved for people with non-sse2
> machines.  Is that right?

Yup.  Your more detailed testing seems to confirm the hunch I had at
the weekend workshop that SSE2 is the culprit.  Thanks for the info.

It would be really great if we could somehow resolve this quickly.  I
have a WinXP install under Linux, but it seems to see my CPU as an
Athlon X2, so that won't work.  But I also have an old laptop with a
dual-boot XP that's a PIII (no SSE2, and probably the oldest
reasonable hardware we can expect to support).

If someone can provide me with simple instructions on what needs to be
done to build numpy on that thing, without buying any software, I can
volunteer to run a build script on every release that's PIII-safe.

I know nothing about developing on XP, but I'm willing to install the
necessary tools (if free) to at least get this built, because this is
really an untenable situation.  Auto-crashing isn't really a selling
point for software, even under Windows :)

Cheers,

f


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Alan Isaac
On Tue, 11 Dec 2007, Ryan Krauss wrote:
> I have a student trying to get started with such 
> a machine.  Numpy is causing Python to crash.  What is the 
> easiest solution?

Use 1.0.3.1

> Is it just Numpy or also Scipy? 

It is also SciPy.  Stuff that relies
only on NumPy will work, plus anything
that is pure Python.

Cheers,
Alan Isaac





Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-11 Thread Albert Strasheim
Hello all,

> I'm not sure the licensing really makes it possible though. Numpy isn't
> exactly an application, but rather a development tool, so I'm not sure
> how Intel would feel about it being distributed. Also, it looks like
> they require each "developer" to have license, rather than only the
> person building the final binary -- so having the one person building
> the final distro may not be kosher. IANAL.

It comes down to who is allowed to have the link libraries and who isn't. I 
doubt whether Intel's license agreement distinguishes between "normal" 
programs and "development tools".

If you're a developer, you need the link libraries (.lib files) to link your 
program against Intel MKL. According to Intel's redist.txt, you are not 
allowed to redistribute these files. Without these files, you can't link a 
new program against the Intel MKL DLLs (generally speaking).

You are allowed to redistribute the DLLs (as listed in redist.txt), without 
having to pay any further royalties. This means that you can give any user the 
files they need to run a program you have linked against Intel MKL.

So as I see it, one NumPy developer would need to pay for Intel MKL.

Cheers,

Albert
