Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Fernando Perez
On Dec 10, 2007 11:04 PM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> On Dec 11, 2007 12:46 PM, Andrew Straw <[EMAIL PROTECTED]> wrote:
> > According to the QEMU website, QEMU does not (yet) emulate SSE on x86
> > target, so a Windows installation on a QEMU virtual machine may be a
> > good way to build binaries free of these issues.
> > http://fabrice.bellard.free.fr/qemu/qemu-tech.html
> I tried this, and it does not work (it actually emulates SSE). I went
> further and managed to disable SSE support in qemu...
>
> But again, what's the point: it takes ages to compile (qemu without
> the hardware accelerator is slow, something like ten times slower), and
> you end up with a really bad ATLAS, since ATLAS optimization is entirely
> based on runtime timers, which no longer make sense.
>
> I mean, really, what's the point of doing all this compared to using
> blas/lapack from netlib? In practice, is it really slower? For what?
> I know I don't care so much, and I am a heavy user of numpy.

For certain cases the difference can be pretty dramatic, but I think
there's a simple, reasonable solution that is likely to work: ship TWO
binaries of Numpy/Scipy each time:

1. {numpy,scipy}-reference: built with the reference blas from netlib,
no atlas, period.

2. {numpy,scipy}-atlas: built with whatever hardware the developers have
at the time, which these days will likely mean a Core 2 Duo with SSE2
support.  What hardware it was built on should be indicated, so people can
at least know this fact.

Just indicate that:

- The atlas version is likely faster, but fully unsupported and likely
to crash on older platforms; no refunds.

- If you *really* care about performance, you should build Atlas
yourself or be 100% sure that you're using an Atlas built on the same
chip you're using, so the build-time timing and blocking choices are
actually meaningful.

That sounds like a reasonable bit of extra work (a lot easier than
building a run-time dynamic atlas) with a true payoff in terms of
stability.  No?

Cheers,

f
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David Cournapeau
On Dec 11, 2007 12:46 PM, Andrew Straw <[EMAIL PROTECTED]> wrote:
> According to the QEMU website, QEMU does not (yet) emulate SSE on x86
> target, so a Windows installation on a QEMU virtual machine may be a
> good way to build binaries free of these issues.
> http://fabrice.bellard.free.fr/qemu/qemu-tech.html
I tried this, and it does not work (it actually emulates SSE). I went
further and managed to disable SSE support in qemu...

But again, what's the point: it takes ages to compile (qemu without
the hardware accelerator is slow, something like ten times slower), and
you end up with a really bad ATLAS, since ATLAS optimization is entirely
based on runtime timers, which no longer make sense.

I mean, really, what's the point of doing all this compared to using
blas/lapack from netlib? In practice, is it really slower? For what?
I know I don't care so much, and I am a heavy user of numpy.

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David Cournapeau
On Dec 11, 2007 11:59 AM, Andrew Straw <[EMAIL PROTECTED]> wrote:
> Here's an idea that occurred to me after reading Fernando's email: a
> function could be called at numpy import time that checks the instruction
> set of the CPU it is running on and makes sure that it completely covers
> the instruction set used by all the various calls, including those into
> BLAS. If this kind of thing were added, numpy could fail with a loud
> warning rather than dying with mysterious errors later on. The trouble
> would seem to be that you can switch your BLAS shared library without
> re-compiling numpy, so numpy would have to do a run-time query of ATLAS,
> etc. for how it was compiled. That is likely library-dependent, and
> furthermore, not having looked into BLAS implementations, I'm not sure
> that (m)any of them provide such information. Do they? Is this idea
> technically possible?

It is possible, and has been done: that's how Matlab did it when it
used ATLAS. Now, it is not easy, and would require some changes.
Basically, the solution I would see would be to have a wrapper
library through which every call is made, and the wrapper library
could "reroute" the calls to the right, dynamically loaded library.
This requires several things that are not theoretically difficult,
but are a pain to do right (a cross-platform library loader, etc.).
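
To make the idea concrete, here is a minimal Python/ctypes sketch of such
rerouting; the library filenames are made up for illustration, and a real
implementation would have to be done in C inside numpy itself:

import ctypes

def _load_best_blas(cpu_has_sse2):
    # Hypothetical shipped libraries: prefer the tuned build only when the
    # CPU supports what it was compiled for, else fall back to netlib.
    candidates = ["blas_atlas_sse2.dll"] if cpu_has_sse2 else []
    candidates.append("blas_netlib.dll")
    for name in candidates:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue
    raise ImportError("no usable BLAS library found")

_blas = _load_best_blas(cpu_has_sse2=False)

def ddot(x, y):
    # Reroute one BLAS call (Fortran ddot_) through whichever library loaded.
    n = len(x)
    vec = ctypes.c_double * n
    _blas.ddot_.restype = ctypes.c_double
    one = ctypes.c_int(1)
    return _blas.ddot_(ctypes.byref(ctypes.c_int(n)),
                       vec(*x), ctypes.byref(one),
                       vec(*y), ctypes.byref(one))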

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David Cournapeau
On Dec 11, 2007 11:03 AM, Fernando Perez <[EMAIL PROTECTED]> wrote:
> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>
> > The current situation is untenable. I will gladly accept a slow BLAS for an
> > official binary that won't segfault anywhere. We can look for a faster BLAS 
> > later.
>
> Just to add a note to this: John Hunter and I just finished teaching a
> python workshop here in Boulder, and one attendee had a recurring
> all-out crash on WinXP.  Eventually John was able to track it to a bad
> BLAS call, but the death was an 'illegal instruction'. We then noticed
> that this was on an older Pentium III laptop, and I'd be willing to
> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
> chip only has plain SSE, not SSE2, and that's the kind of crash I've
> seen when  accidentally running code compiled in my office machine (a
> P4) on my laptop (a similarly old PIII).
>
> It may very well be that it's OK to ship binaries with ATLAS, but just
> to build them without any fancy instruction support (no SSE, SSE2 or
> anything else of that kind, just plain x87 code).
>
The problem is that this is non-trivial, and unsupported. I tried to
do it and asked the main author of ATLAS, but building an ATLAS which
does not use any instruction set present on the CPU used for the build
is too difficult, and not a priority for him. The ATLAS build system
being extremely complex (code is generated by C programs themselves
compiled on the fly by other C programs), I gave up on the idea of doing
it myself.

But anyway, honestly, this is kind of stupid: ATLAS needs to be
compiled on the same CPU it will run on to get good performance. For
example, if the L1 cache is a different size, ATLAS performance already
suffers. So if you are not even using SSE/SSE2...

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Christopher Barker
Andrew Straw wrote:
> A function
> could be called at numpy import time that specifically checks for the
> instruction set on the CPU running 

Even better would be a run-time selection of the "best" version. I've 
often fantasized about an ATLAS that could do this.

I think the Intel MKL has this feature (though maybe only for Intel 
processors). The MKL runtime is re-distributable, but somehow I doubt 
that we could have one person buy one copy and distribute binaries to 
the entire numpy-using world --- but does anyone know?

http://www.intel.com/cd/software/products/asmo-na/eng/346084.htm

and

http://www.intel.com/cd/software/products/asmo-na/eng/266854.htm#copies

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Alan G Isaac
This may be a naive question, but just to be sure...

If the trouble is building without SSE2 support while on an SSE2
processor, wouldn't the problem be addressed by purchasing an old PIII
like
http://cgi.ebay.com/Dell-OptiPlex-GX110-Pentium-III-1GHz-40GB-256MB-DVD-XP_W0QQitemZ130180707038QQihZ003QQcategoryZ140070QQcmdZViewItem
or
http://cgi.ebay.com/Dell-Precision-210-Pentium-III-Dual-500MHz-512MB-30GB_W0QQitemZ130181576949QQihZ003QQcategoryZ51225QQcmdZViewItem ?

If so I'd be happy to contribute part of the purchase price,
and I assume others would too.

What's more, I *have* an old PIII at home.  (Doesn't 
everybody?)  Unfortunately, I have almost no experience with 
compiled languages.  However if it would be useful, I'd be 
happy to try to build on my home machine (after this 
Friday).  I would have to ask a lot of questions...

Cheers,
Alan Isaac



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Andrew Straw
According to the QEMU website, QEMU does not (yet) emulate SSE on x86
target, so a Windows installation on a QEMU virtual machine may be a
good way to build binaries free of these issues.
http://fabrice.bellard.free.fr/qemu/qemu-tech.html

-Andrew

Travis E. Oliphant wrote:
> Fernando Perez wrote:
>> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>>   
>>> The current situation is untenable. I will gladly accept a slow BLAS for an
>>> official binary that won't segfault anywhere. We can look for a faster BLAS 
>>> later.
>>> 
>> Just to add a note to this: John Hunter and I just finished teaching a
>> python workshop here in Boulder, and one attendee had a recurring
>> all-out crash on WinXP.  Eventually John was able to track it to a bad
>> BLAS call, but the death was an 'illegal instruction'. We then noticed
>> that this was on an older Pentium III laptop, and I'd be willing to
>> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
>> chip only has plain SSE, not SSE2, and that's the kind of crash I've
>> seen when  accidentally running code compiled in my office machine (a
>> P4) on my laptop (a similarly old PIII).
>>
>> It may very well be that it's OK to ship binaries with ATLAS, but just
>> to build them without any fancy instruction support (no SSE, SSE2 or
>> anything else of that kind, just plain x87 code).
>>   
> 
> I think this is what the default should be (but plain SSE allowed).  
> However, since I have moved, the machine I was using to build "official" 
> binaries has switched and that is probably at the core of the problem.
> 
> Also,  I've tried to build ATLAS 3.8.0 without SSE without success (when 
> I'm on a machine that has it).
> 
> It would be useful to track which binaries are giving people problems as 
> I built the most recent ones on a VM against an old version of ATLAS 
> (3.6.0) that has been compiled on windows for a long time.
> 
> I'm happy to upload a better binary of NumPy (if I can figure out which 
> one is giving people grief and how to create a decent one).
> 
> -Travis O.
> 

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Travis E. Oliphant
Fernando Perez wrote:
> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>
>   
>> The current situation is untenable. I will gladly accept a slow BLAS for an
>> official binary that won't segfault anywhere. We can look for a faster BLAS 
>> later.
>> 
>
> Just to add a note to this: John Hunter and I just finished teaching a
> python workshop here in Boulder, and one attendee had a recurring
> all-out crash on WinXP.  Eventually John was able to track it to a bad
> BLAS call, but the death was an 'illegal instruction'. We then noticed
> that this was on an older Pentium III laptop, and I'd be willing to
> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
> chip only has plain SSE, not SSE2, and that's the kind of crash I've
> seen when  accidentally running code compiled in my office machine (a
> P4) on my laptop (a similarly old PIII).
>
> It may very well be that it's OK to ship binaries with ATLAS, but just
> to build them without any fancy instruction support (no SSE, SSE2 or
> anything else of that kind, just plain x87 code).
>   

I think this is what the default should be (but with plain SSE allowed).
However, since I have moved, the machine I was using to build "official"
binaries has changed, and that is probably at the core of the problem.

Also, I've tried to build ATLAS 3.8.0 without SSE, without success (when
I'm on a machine that has it).

It would be useful to track which binaries are giving people problems, as
I built the most recent ones on a VM against an old version of ATLAS
(3.6.0) that has been compiled on Windows for a long time.

I'm happy to upload a better binary of NumPy (if I can figure out which
one is giving people grief and how to create a decent one).

-Travis O.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Andrew Straw
Here's an idea that occurred to me after reading Fernando's email: a
function could be called at numpy import time that checks the instruction
set of the CPU it is running on and makes sure that it completely covers
the instruction set used by all the various calls, including those into
BLAS. If this kind of thing were added, numpy could fail with a loud
warning rather than dying with mysterious errors later on. The trouble
would seem to be that you can switch your BLAS shared library without
re-compiling numpy, so numpy would have to do a run-time query of ATLAS,
etc. for how it was compiled. That is likely library-dependent, and
furthermore, not having looked into BLAS implementations, I'm not sure
that (m)any of them provide such information. Do they? Is this idea
technically possible?
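
A rough sketch of what such an import-time guard could look like
(Linux-only here; a Windows check would need a CPUID call via ctypes or a
small C extension, and the "this binary needs SSE2" flag would have to be
baked in at build time; both are assumptions, not existing numpy code):

BINARY_NEEDS_SSE2 = True  # hypothetical flag set by the build process

def _cpu_flags():
    # Read CPU feature flags; only implemented for /proc/cpuinfo here.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except (IOError, OSError):
        pass
    return None  # unknown platform, cannot decide

def check_instruction_set():
    flags = _cpu_flags()
    if flags is None:
        return  # give the user the benefit of the doubt
    if BINARY_NEEDS_SSE2 and "sse2" not in flags:
        raise ImportError("this numpy binary was built with SSE2 support, "
                          "but the CPU does not report SSE2; expect crashes "
                          "unless you install a plain (no-SSE) build")

check_instruction_set()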

-Andrew

Fernando Perez wrote:
> On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> 
>> The current situation is untenable. I will gladly accept a slow BLAS for an
>> official binary that won't segfault anywhere. We can look for a faster BLAS 
>> later.
> 
> Just to add a note to this: John Hunter and I just finished teaching a
> python workshop here in Boulder, and one attendee had a recurring
> all-out crash on WinXP.  Eventually John was able to track it to a bad
> BLAS call, but the death was an 'illegal instruction'. We then noticed
> that this was on an older Pentium III laptop, and I'd be willing to
> bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
> chip only has plain SSE, not SSE2, and that's the kind of crash I've
> seen when  accidentally running code compiled in my office machine (a
> P4) on my laptop (a similarly old PIII).
> 
> It may very well be that it's OK to ship binaries with ATLAS, but just
> to build them without any fancy instruction support (no SSE, SSE2 or
> anything else of that kind, just plain x87 code).
> 
> 
> Cheers,
> 
> f

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Fernando Perez
On Dec 10, 2007 4:41 PM, Robert Kern <[EMAIL PROTECTED]> wrote:

> The current situation is untenable. I will gladly accept a slow BLAS for an
> official binary that won't segfault anywhere. We can look for a faster BLAS 
> later.

Just to add a note to this: John Hunter and I just finished teaching a
python workshop here in Boulder, and one attendee had a recurring
all-out crash on WinXP.  Eventually John was able to track it to a bad
BLAS call, but the death was an 'illegal instruction'. We then noticed
that this was on an older Pentium III laptop, and I'd be willing to
bet that the problem is an ATLAS compiled with SSE2 support.  The PIII
chip only has plain SSE, not SSE2, and that's the kind of crash I've
seen when accidentally running code compiled on my office machine (a
P4) on my laptop (a similarly old PIII).

It may very well be that it's OK to ship binaries with ATLAS, but just
to build them without any fancy instruction support (no SSE, SSE2 or
anything else of that kind, just plain x87 code).


Cheers,

f
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Robert Kern
David M. Cooke wrote:
> On Dec 10, 2007, at 10:30 , Matthieu Brucher wrote:
>> 2007/12/10, Alexander Michael <[EMAIL PROTECTED]>: On Dec 10, 2007  
>> 6:48 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
>>> Hi,
>>>
>>> Several people reported problems with numpy 1.0.4 (See #627 and
>>> #628, but also other problems mentionned on the ML, which I cannot
>>> find). They were all solved, as far as I know, by a binary I  
>> produced
>>> (simply using mingw + netlib BLAS/LAPACK,  no ATLAS). Maybe it  
>> would be
>>> good to use those instead ? (I can recompile them if there is a  
>> special
>>> thing to do to build them)
>> Do I understand correctly that you are suggesting removing ATLAS from
>> the Windows distribution? Wouldn't this make numpy very slow? I know
>> on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
>> and ATLAS. Perhaps we should make an alternative Windows binary
>> available without ATLAS just for those having problems with ATLAS?
>> That's why David proposed the netlib version of BLAS/LAPACK and not  
>> the default implementation in numpy.
>>
>> I would agree with David ;)
> 
> Our versions of BLAS/LAPACK are f2c'd versions of the netlib 3.0 BLAS/ 
> LAPACK (actually, of Debian's version of these -- they include several  
> fixes that weren't upstream).
> 
> So netlib's versions aren't going to be any faster, really. And  
> netlib's BLAS is slow. Now, if there is a BLAS that's easier to  
> compile than ATLAS on windows, that'd be improvement.

The current situation is untenable. I will gladly accept a slow BLAS for an
official binary that won't segfault anywhere. We can look for a faster BLAS 
later.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David Cournapeau
On Dec 10, 2007 10:59 PM, Alexander Michael <[EMAIL PROTECTED]> wrote:
> On Dec 10, 2007 6:48 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Several people reported problems with numpy 1.0.4 (See #627 and
> > #628, but also other problems mentionned on the ML, which I cannot
> > find). They were all solved, as far as I know, by a binary I produced
> > (simply using mingw + netlib BLAS/LAPACK,  no ATLAS). Maybe it would be
> > good to use those instead ? (I can recompile them if there is a special
> > thing to do to build them)
>
> Do I understand correctly that you are suggesting removing ATLAS from
> the Windows distribution? Wouldn't this make numpy very slow? I know
> on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
> and ATLAS. Perhaps we should make an alternative Windows binary
> available without ATLAS just for those having problems with ATLAS?
If you care about speed, you should compile your own ATLAS anyway. I
don't quite understand the discussion about speed: netlib BLAS/LAPACK is
really only slower for relatively big problem sizes, which many people do
not use. And more importantly, a non-working, crashing, fast BLAS is much
slower than a working one :)

We could offer two versions, but that just makes things more
complicated: on average, people using Windows are less inclined to try
to understand why things do not work, and when your numpy crashes, it
is not obvious that ATLAS is involved (we still do not know the exact
problem).

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-10 Thread Timothy Hochberg
On Dec 10, 2007 7:21 AM, Hans Meine <[EMAIL PROTECTED]> wrote:

> Hi again,
>
> I noticed that clip() needs two parameters, but wouldn't it be nice and
> straightforward to just pass min= or max= as keyword arg?
>
> In [2]: a = arange(10)
>
> In [3]: a.clip(min = 2, max = 5)
> Out[3]: array([2, 2, 2, 3, 4, 5, 5, 5, 5, 5])
>
> In [4]: a.clip(min = 2)
>
> ---
> exceptions.TypeError Traceback (most
> recent
> call last)
>
> /home/meine/
>
> TypeError: function takes at least 2 arguments (1 given)
>
> (I could simulate that by passing max = maximum_value_of(a.dtype), if that
> existed, see my other mail.)


Why not just use minimum or maximum as needed instead of overloading clip?
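
For example (expected outputs shown as comments):

import numpy as np

a = np.arange(10)
np.maximum(a, 2)  # clip from below: array([2, 2, 2, 3, 4, 5, 6, 7, 8, 9])
np.minimum(a, 5)  # clip from above: array([0, 1, 2, 3, 4, 5, 5, 5, 5, 5])
np.clip(a, 2, 5)  # both bounds:     array([2, 2, 2, 3, 4, 5, 5, 5, 5, 5])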


-- 
Timothy Hochberg
[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David M. Cooke
On Dec 10, 2007, at 10:30 , Matthieu Brucher wrote:
> 2007/12/10, Alexander Michael <[EMAIL PROTECTED]>: On Dec 10, 2007  
> 6:48 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > Several people reported problems with numpy 1.0.4 (See #627 and
> > #628, but also other problems mentionned on the ML, which I cannot
> > find). They were all solved, as far as I know, by a binary I  
> produced
> > (simply using mingw + netlib BLAS/LAPACK,  no ATLAS). Maybe it  
> would be
> > good to use those instead ? (I can recompile them if there is a  
> special
> > thing to do to build them)
>
> Do I understand correctly that you are suggesting removing ATLAS from
> the Windows distribution? Wouldn't this make numpy very slow? I know
> on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
> and ATLAS. Perhaps we should make an alternative Windows binary
> available without ATLAS just for those having problems with ATLAS?
> That's why David proposed the netlib version of BLAS/LAPACK and not  
> the default implementation in numpy.
>
> I would agree with David ;)


Our versions of BLAS/LAPACK are f2c'd versions of the netlib 3.0
BLAS/LAPACK (actually, of Debian's version of these -- they include several
fixes that weren't upstream).

So netlib's versions aren't going to be any faster, really. And netlib's
BLAS is slow. Now, if there is a BLAS that's easier to compile than ATLAS
on Windows, that'd be an improvement.

-- 
|>|\/|<
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Minimum and maximum values of numpy datatypes?

2007-12-10 Thread Travis E. Oliphant
Hans Meine wrote:
> Hi!
>
> Is there a way to query the minimum and maximum values of the numpy datatypes?
>   

numpy.iinfo  (notice the two i's) (integer information)
numpy.finfo (floating point information)

Example:

numpy.iinfo(numpy.uint8).max
numpy.iinfo(numpy.int16).min

You pass the datatype to the numpy.iinfo() (or numpy.finfo()) constructor
and then read the attributes on the result.
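
A slightly fuller, runnable version of the example above (expected values
as comments):

import numpy

numpy.iinfo(numpy.uint8).max    # 255
numpy.iinfo(numpy.uint8).min    # 0
numpy.iinfo(numpy.int16).min    # -32768
numpy.finfo(numpy.float32).max  # approx. 3.4028235e+38
numpy.finfo(numpy.float64).eps  # approx. 2.220446049250313e-16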

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Minimum and maximum values of numpy datatypes?

2007-12-10 Thread Matthieu Brucher
I had the same problem earlier today; someone told me the answer: use the
numpy.finfo object ;)

Matthieu

2007/12/10, Hans Meine <[EMAIL PROTECTED]>:
>
> Hi!
>
> Is there a way to query the minimum and maximum values of the numpy
> datatypes?
>
> E.g. numpy.uint8.max == 255, numpy.uint8.min == 0 (these attributes exist,
> but
> they are functions, obviously for technical reasons).
>
> Ciao, /  /
>  /--/
> /  / ANS



-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Matthieu Brucher
2007/12/10, Alexander Michael <[EMAIL PROTECTED]>:
>
> On Dec 10, 2007 6:48 AM, David Cournapeau <[EMAIL PROTECTED]>
> wrote:
> > Hi,
> >
> > Several people reported problems with numpy 1.0.4 (See #627 and
> > #628, but also other problems mentionned on the ML, which I cannot
> > find). They were all solved, as far as I know, by a binary I produced
> > (simply using mingw + netlib BLAS/LAPACK,  no ATLAS). Maybe it would be
> > good to use those instead ? (I can recompile them if there is a special
> > thing to do to build them)
>
> Do I understand correctly that you are suggesting removing ATLAS from
> the Windows distribution? Wouldn't this make numpy very slow? I know
> on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
> and ATLAS. Perhaps we should make an alternative Windows binary
> available without ATLAS just for those having problems with ATLAS?


That's why David proposed the netlib version of BLAS/LAPACK and not the
default implementation in numpy.

I would agree with David ;)

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] data transit

2007-12-10 Thread Renato Serodio
Hello there,

indeed, the tasks you described correspond to what I'm seeking to
implement. The thing is, for the sake of encapsulation (and laziness
in the programming sense), I'm keeping responsibilities well defined
in several objects. I guess this type of coding is pretty much
ordinary for an OO person - it's just me having trouble with the
philosophy.

So, upon much thought, it breaks down like this:
- Crunchers: lowest level objects that encapsulate stuff I do with
Numpy/Scipy functions, on Numpy objects. Say, get data from arguments,
unbias the data, zero-stuff, fft the set, etc. They are meant to be
written as needed.

- DataContainers: abstraction layer to data sources (DB, files, etc)
and to other data objects still in memory. Data returned by Crunchers
is stored inside - in practice, piped here by an Analysis object. So
far, I see no need for nesting DCs inside other DCs.

- Analysis: these are the glue between Crunchers, DataContainers and
the user (batch, GUI, CLI). An Analysis is instantiated by the user,
and directs data flow both into DCs and out of them. While each
Analysis has one and only one 'results' attribute, which points to some
place within a DataContainer, I imagine larger Analyses being made by
concatenating several Analysis objects - just call Analysis.result() to
access data at a certain stage of processing.

Well, so it is. Hopefully this setup will lend a good degree of
flexibility to my application - the crunchers are hard to develop,
since I haven't seen the data yet.
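
As a purely illustrative sketch, the layering described above could start
out as small as this (the class bodies and the FFT example are invented
placeholders, not working application code):

import numpy as np

class Cruncher(object):
    """Lowest-level wrapper around a numpy/scipy operation."""
    def run(self, data):
        # e.g. remove the mean ("unbias") and take an FFT
        return np.fft.fft(data - data.mean())

class DataContainer(object):
    """Abstraction over data sources; also stores cruncher output."""
    def __init__(self):
        self._store = {}
    def put(self, key, array):
        self._store[key] = array
    def get(self, key):
        return self._store[key]

class Analysis(object):
    """Glue: moves data from a container through a cruncher and back."""
    def __init__(self, container, cruncher):
        self.container = container
        self.cruncher = cruncher
    def run(self, src_key, dst_key):
        result = self.cruncher.run(self.container.get(src_key))
        self.container.put(dst_key, result)
        return result

dc = DataContainer()
dc.put("raw", np.random.randn(256))
spectrum = Analysis(dc, Cruncher()).run("raw", "spectrum")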

Nadav: I had looked into pytables while investigating low-level
interfaces that would have to be supported. It sits at a much lower level
than what I was looking for - my DataContainers get their nature from
other classes which are responsible for talking to DBs, files and the
like - but it is the design of these containers that is hard to conceive!

Cheers,

Renato



On Dec 7, 2007 6:48 PM, Alan Isaac <[EMAIL PROTECTED]> wrote:
> It sounds like you want a new class X that does three things:
> knows where the data is and how to access  it,
> knows how the data are to be processed and can do this when asked,
> is able to provide a "results" object when requested.
> The results object can store who made it and with what
> processing module, thus indirectly linking to the data and techniques
> (which the X instance may or may not store, as is convenient).
>
> fwiw,
> Alan Isaac
>
>
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] RE : Re: "NaN"

2007-12-10 Thread Jean-Luc Régnier
Many thanks for answering, Matthieu.
Actually my problem concerned the old command "matrixmultiply" vs "dot";
I solved it now.
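
For anyone hitting the same migration issue, the replacement is simply
numpy.dot; a minimal example:

import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5.], [6.]])
np.dot(a, b)  # replaces the old Numeric/numarray matrixmultiply(a, b)
# array([[ 17.],
#        [ 39.]])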
However, I have a question regarding Tkinter. I am doing a small 3D
engine using Tkinter, Pmw, and numpy. I am basically plotting the result
of the matrix calculation on the Tkinter canvas, and it appears to be
relatively slow depending on the processor; I did a test on two different
PCs. Has anyone here experienced such a problem? It looks like the
Tkinter canvas needs time to react to my bindings. VPython, for instance,
is really fast whatever the PC. Should I use wxPython instead of Tkinter?

Best regards,
Jean-Luc
  It's "Not A Number". It can occur when you have a division by zero, a 
difference between two infinite numbers, ...

Matthieu

 
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Minimum and maximum values of numpy datatypes?

2007-12-10 Thread Hans Meine
Hi!

Is there a way to query the minimum and maximum values of the numpy datatypes?

E.g. numpy.uint8.max == 255, numpy.uint8.min == 0 (these attributes exist, but 
they are functions, obviously for technical reasons).

Ciao, /  /
 /--/
/  / ANS
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ndarray.clip only with lower or upper values?

2007-12-10 Thread Hans Meine
Hi again,

I noticed that clip() needs two parameters, but wouldn't it be nice and 
straightforward to just pass min= or max= as keyword arg?

In [2]: a = arange(10)

In [3]: a.clip(min = 2, max = 5)
Out[3]: array([2, 2, 2, 3, 4, 5, 5, 5, 5, 5])

In [4]: a.clip(min = 2)
---
exceptions.TypeError Traceback (most recent 
call last)

/home/meine/

TypeError: function takes at least 2 arguments (1 given)

(I could simulate that by passing max = maximum_value_of(a.dtype), if that 
existed, see my other mail.)

Ciao, /  /
 /--/
/  / ANS
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread Alexander Michael
On Dec 10, 2007 6:48 AM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Several people reported problems with numpy 1.0.4 (See #627 and
> #628, but also other problems mentionned on the ML, which I cannot
> find). They were all solved, as far as I know, by a binary I produced
> (simply using mingw + netlib BLAS/LAPACK,  no ATLAS). Maybe it would be
> good to use those instead ? (I can recompile them if there is a special
> thing to do to build them)

Do I understand correctly that you are suggesting removing ATLAS from
the Windows distribution? Wouldn't this make numpy very slow? I know
on RHEL5 I see a very large improvement between the basic BLAS/LAPACK
and ATLAS. Perhaps we should make an alternative Windows binary
available without ATLAS just for those having problems with ATLAS?
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Changing the distributed binary for numpy 1.0.4 for windows ?

2007-12-10 Thread David Cournapeau
Hi,

Several people reported problems with numpy 1.0.4 (see #627 and
#628, but also other problems mentioned on the ML, which I cannot
find). They were all solved, as far as I know, by a binary I produced
(simply using mingw + netlib BLAS/LAPACK, no ATLAS). Maybe it would be
good to use those instead? (I can recompile them if there is a special
thing to do to build them.)

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Epsilon for each dtype

2007-12-10 Thread Matthieu Brucher
Excellent, that was what I was looking for, thank you.

Matthieu

2007/12/10, Fabrice Silva <[EMAIL PROTECTED]>:
>
> On Monday 10 December 2007 at 11:38 +0100, Matthieu Brucher wrote:
> > Hi,
> > Is there somewhere a equivalent to std::numerical_limits<>::epsilon,
> > that is, the greatest value such that 1. + epsilon is numerically
> > equal to 1. ?
> > I saw something that could be related in oldnumeric, but nothing in
> > numpy itself.
>
> if X is a numpy object:
> numpy.finfo(type(X)).eps gives the epsilon of the type of X.
> You may look at other attributes of finfo too..
>
> --
> Fabrice Silva <[EMAIL PROTECTED]>
> LMA UPR CNRS 7051
>



-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Epsilon for each dtype

2007-12-10 Thread Fabrice Silva
On Monday 10 December 2007 at 11:38 +0100, Matthieu Brucher wrote:
> Hi,
> Is there somewhere a equivalent to std::numerical_limits<>::epsilon,
> that is, the greatest value such that 1. + epsilon is numerically
> equal to 1. ?
> I saw something that could be related in oldnumeric, but nothing in
> numpy itself. 

If X is a numpy scalar, numpy.finfo(type(X)).eps gives the epsilon of the
type of X (for an array, pass X.dtype to numpy.finfo instead).
You may look at the other attributes of finfo too.
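
For example (values shown as comments):

import numpy

numpy.finfo(numpy.float64).eps   # approx. 2.220446049250313e-16
numpy.finfo(numpy.float32).eps   # approx. 1.1920929e-07
numpy.finfo(numpy.float64).tiny  # smallest positive normal, approx. 2.225e-308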

-- 
Fabrice Silva <[EMAIL PROTECTED]>
LMA UPR CNRS 7051

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Epsilon for each dtype

2007-12-10 Thread Matthieu Brucher
Hi,

Is there somewhere an equivalent to std::numeric_limits<>::epsilon, that
is, the smallest value epsilon such that 1. + epsilon is numerically
distinguishable from 1.? I saw something that could be related in
oldnumeric, but nothing in numpy itself.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] "NaN"

2007-12-10 Thread Matthieu Brucher
It's "Not A Number". It can occur when you have a division by zero, a
difference between two infinite numbers, ...
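
A couple of quick ways to see it appear (results as comments; the array
operations may print an "invalid value" warning depending on your error
settings):

import numpy as np

np.inf - np.inf            # nan
np.array([0.0]) / 0.0      # array([ nan])
np.log(np.array([-1.0]))   # array([ nan])
np.isnan(np.inf - np.inf)  # True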

Matthieu

2007/12/10, Jean-Luc Régnier <[EMAIL PROTECTED]>:
>
> Hello,
> I switched from numarray to numpy and I have now some "NaN"
> in my matrix. What that means ?
> None a numeric ?
> regards
>
>
>
>
> Jean-Luc REGNIER
> ACR Mimarlik Ltd. Sti
> Savas Cad. 26/B Sirinyali
> ANTALYA, TURKEY
> Tel. & Fax: 0090-(0).242.316.08.09
> GSM: 0090-0.532.303.36.21
> http://www.acrmim.com
>
>


-- 
French PhD student
Website : http://miles.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] "NaN"

2007-12-10 Thread Jean-Luc Régnier
Hello,
I switched from numarray to numpy and I now have some "NaN" values
in my matrix. What does that mean? "None a numeric"?
Regards


Jean-Luc REGNIER 
ACR Mimarlik Ltd. Sti 
Savas Cad. 26/B Sirinyali 
ANTALYA, TURKEY 
Tel. & Fax: 0090-(0).242.316.08.09 
GSM: 0090-0.532.303.36.21 
http://www.acrmim.com
   
   

 
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion