Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Pearu Peterson
On Wed, March 11, 2009 7:50 am, Christopher Barker wrote:

> Python does not distinguish between True and
> False -- Python makes the distinction between something and nothing.
>
> In that context, NaN is nothing, thus False.

Mathematically speaking, NaN is a quantity with undefined value. Closer
analysis of a particular case may reveal that it is some finite number,
or an infinity with some direction, or intrinsically undefined.
NaN is something that cannot be defined because its value is not unique.
Nothing would be the content of the empty set.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero

2009-03-11 Thread David Cournapeau
Hi,

For the record, I have just added the following functionalities to
numpy, which may simplify some C code:
- NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
positive and negative zeros. Rationale: some code uses NAN, _get_nan,
etc. NAN is a GNU C extension, and INFINITY is not available on many C
compilers. The NPY_ macros are defined from the IEEE 754 format, and as
such should be very fast (the values should be inlined).
- we can now use inline safely in numpy C code: it is defined to
something recognized by the compiler or nothing if inline is not
supported. It is NOT defined publicly to avoid namespace pollution.
- NPY_INLINE is a macro which can be used publicly, and has the same
usage as inline.
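For illustration, here is a rough Python sketch (not the actual numpy C source) of what "defined from the IEEE 754 format" means: each special value is just a fixed 64-bit pattern reinterpreted as a double, using only the standard library.

```python
import math
import struct

def double_from_bits(bits):
    """Reinterpret a 64-bit integer as an IEEE 754 big-endian double."""
    return struct.unpack(">d", struct.pack(">Q", bits))[0]

# Quiet NaN: exponent all ones, non-zero mantissa.
NAN = double_from_bits(0x7FF8000000000000)
# Infinity: exponent all ones, zero mantissa.
INFINITY = double_from_bits(0x7FF0000000000000)
# Positive and negative zero differ only in the sign bit.
PZERO = double_from_bits(0x0000000000000000)
NZERO = double_from_bits(0x8000000000000000)

print(math.isnan(NAN), INFINITY, math.copysign(1.0, NZERO))
```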

cheers,

David


Re: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0

2009-03-11 Thread David Cournapeau
On Wed, Mar 11, 2009 at 3:22 AM, Pauli Virtanen p...@iki.fi wrote:
 Tue, 10 Mar 2009 15:27:32 +0900, David Cournapeau wrote:
 For the upcoming 1.3.0 release, I would like to distribute the (built)
 documentation in some way. But first, I need to be able to build it :)

 Yep, buildability would be a nice feature :)

Yes, indeed. Ideally, I would like the doc to build on as many platforms as possible.


 What are the exact requirements to build the documentation ? Is sphinx
 0.5 enough ? I can't manage to build it on either mac os x or linux:

 Sphinx 0.5.1 worksforme, and on two different Linux machines (and Python
 versions), so I doubt it's somehow specific to my setup.

Yes, it is strange - I can make it work on my workstation, which has
the same distribution as  my laptop (where it was failing). I am still
unsure about the possible differences (sphinx version was of course
the same).


 Sphinx 0.6.dev doesn't work at the moment with autosummary. It's a bit of
 a moving target, so I haven't made keeping it working a priority.

Sure - I was actually afraid I needed sphinx 0.6.


 This is a Sphinx error I run into from time to time. Usually

        make clean

 helps, but I'm not sure what causes this. The error looks a bit like

        http://bitbucket.org/birkenfeld/sphinx/issue/81/

 but I think Ctrl+C is not a requirement for triggering it. Did you get
 this error from a clean build?

Ah, that may be part of the problem. I can't make a clean build on mac
os x, because of the "too many opened files" thing. Maybe mac os x has
a ulimit kind of thing I should set up to avoid this.


 There are also some errors on mac os x about "too many opened files"
 (which can be alleviated by running "make html" again, but obviously,
 that's not great). I don't know if there are easy solutions to that
 problem,

 At which step did this error occur?

It occurs at the html output phase ("writing output") - maybe there is
a bug in sphinx with some files which are not closed properly.

Concerning the doc, I would like to add a few notes about the work we
did for the C math lib: is it ok to add a chapter to the C reference
guide, or is there a more appropriate place ?

cheers,

David


Re: [Numpy-discussion] Automatic differentiation (was Re: second-order gradient)

2009-03-11 Thread Sebastian Walter
There are several possibilities, some of them are listed on
http://en.wikipedia.org/wiki/Automatic_differentiation

== pycppad
http://www.seanet.com/~bradbell/pycppad/index.xml
pycppad is a wrapper of the C++ library CppAD  ( http://www.coin-or.org/CppAD/ )

the wrapper can do up to second order derivatives very efficiently in
the so-called reverse mode of AD
requires boost::python

== pyadolc
http://github.com/b45ch1/pyadolc
which is a wrapper for the C++ library ADOL-C (
http://www.math.tu-dresden.de/~adol-c/ )

this can do arbitrary degrees of derivatives and works quite well with
numpy, i.e. you can work with numpy arrays
also quite efficient in the so-called reverse mode of AD
requires boost::python

== ScientificPython
 http://dirac.cnrs-orleans.fr/ScientificPython/ScientificPythonManual/
can provide first order derivatives. But as far as I understand, only
first order derivatives of functions
f: R -> R
and only in the usually not-so-efficient forward mode of AD

pure python

== Algopy
http://github.com/b45ch1/algopy/tree/master
pure python, arbitrary derivatives in forward and reverse mode
still quite experimental.
Offers also the possibility to differentiate functions that make heavy
use of matrix operations.

== sympy
This is not automatic differentiation but symbolic differentiation, which
is sometimes useful.
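To make the "forward mode of AD" mentioned above concrete, here is a toy dual-number implementation in pure Python (illustrative only; none of the packages above use this exact code). A dual number carries a value together with a derivative, and overloaded arithmetic propagates both:

```python
class Dual:
    """Toy forward-mode AD value: tracks f(x) and f'(x) together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).der

# d/dx (x*x + 3*x) = 2x + 3, so at x = 2.0 this gives 7.0
print(derivative(lambda x: x * x + 3 * x, 2.0))
```

The reverse mode the efficient packages use is a different algorithm (it records a tape and sweeps backwards), but the forward mode really is this simple.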

hope that helps,
Sebastian



On Wed, Mar 11, 2009 at 4:13 AM, Osman os...@fuse.net wrote:
 Hi,

 I just saw this python package : PyDX  which may answer your needs.
 The original URL is not working, but the svn location exists.

 http://gr.anu.edu.au/svn/people/sdburton/pydx/doc/user-guide.html

 svn co http://gr.anu.edu.au/svn/people/sdburton/pydx

 br
 -osman




Re: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0

2009-03-11 Thread Pauli Virtanen
Wed, 11 Mar 2009 16:20:47 +0900, David Cournapeau wrote:
 On Wed, Mar 11, 2009 at 3:22 AM, Pauli Virtanen p...@iki.fi wrote:
[clip]
 Sphinx 0.5.1 worksforme, and on two different Linux machines (and
 Python versions), so I doubt it's somehow specific to my setup.
 
 Yes, it is strange - I can make it work on my workstation, which has the
 same distribution as  my laptop (where it was failing). I am still
 unsure about the possible differences (sphinx version was of course the
 same).

Did you check Pythonpath and egg-overriding-pythonpath issues? There's 
also some magic in the autosummary extension, but it's not *too* black, 
so I'd be surprised if it was behind these troubles.

[clip: Sphinx issue #81]
 Ah, that may be part of the problem. I can't make a clean build on mac
 os x, because of the too many opened files thing. Maybe mac os x has a
 ulimit kind of thing I should set up to avoid this.

Perhaps it even has ulimit, being a sort of POSIX system?

 Concerning the doc, I would like to add a few notes about the work we
 did for the C math lib: is it ok to add a chapter to the C reference
 guide, or is there a more appropriate place?

The C reference guide is probably the correct place. Since the topic is a bit 
orthogonal to anything else there currently, I'd suggest creating a new 
file c-api.npymath.rst and linking it to the toctree in c-api.rst.

-- 
Pauli Virtanen



Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Sturla Molden
Charles R Harris wrote:
> It isn't 0 so it should be True. Any disagreement?... Chuck
NaN is not a number equal to 0, so it should be True?

NaN is not a number different from 0, so it should be False?

Also see Pearu's comment.

Why not raise an exception when NaN is evaluated in a boolean context? 
bool(NaN) has no obvious interpretation, so it should be considered an 
error.
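For reference, here is what CPython itself currently does (a quick check, not a statement about what numpy should do): float truth-testing only asks whether the value is nonzero, so NaN, being a nonzero bit pattern, is truthy, while both signed zeros are falsy.

```python
nan = float("nan")
print(bool(nan))               # NaN is nonzero, so it is truthy
print(bool(0.0), bool(-0.0))   # both signed zeros are falsy
print(nan == nan)              # NaN compares unequal even to itself
```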

Sturla Molden


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Sturla Molden
Charles R Harris wrote:

 #include <math.h>
 #include <stdio.h>

 int main() {
    double nan = sqrt(-1);
    printf("%f\n", nan);
    printf("%i\n", bool(nan));
    return 0;
 }

 $ ./nan
 nan
 1


 So resolved, it is True.
Unless specified in the ISO C standard, I'd say this is system and 
compiler dependent.

Should NumPy rely on a specific binary representation of NaN?

A related issue is the boolean value of Inf and -Inf.

Sturla Molden


Re: [Numpy-discussion] Numpy documentation: status and distribution for 1.3.0

2009-03-11 Thread David Cournapeau
Pauli Virtanen wrote:

 Did you check Pythonpath and egg-overriding-pythonpath issues? There's 
 also some magic in the autosummary extension, but it's not *too* black, 
 so I'd be surprised if it was behind these troubles.
   

I think the problem boils down to building from scratch at once.

 Perhaps it even has ulimit, being a sort of POSIX system?
   

Yes, and it works. I am not convinced it is not a bug in sphinx, but
increasing the max number of open files from 256 to 1000 works.
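For anyone hitting the same limit, here is a sketch of inspecting and raising it from Python with the standard resource module (POSIX only; the 1000 is just the number mentioned above, not a recommendation):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-file limit: soft=%d hard=%d" % (soft, hard))

# Raise the soft limit toward the hard limit if it looks too low;
# this is the programmatic equivalent of `ulimit -n` in the shell.
if soft < 1000 and (hard == resource.RLIM_INFINITY or hard >= 1000):
    resource.setrlimit(resource.RLIMIT_NOFILE, (1000, hard))
```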

 C reference guide is probably the correct place. Since the topic is a bit 
 orthogonal to anything else there currently, I'd suggest creating a new 
 file c-api.npymath.rst and linking it to the toctree in c-api.rst
   

That's what I ended up doing.

Thanks, now I can build the doc on windows, mac os x and linux,

David



Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Bruce Southey
Sturla Molden wrote:
 Charles R Harris wrote:
   
 #include <math.h>
 #include <stdio.h>

 int main() {
    double nan = sqrt(-1);
    printf("%f\n", nan);
    printf("%i\n", bool(nan));
    return 0;
 }

 $ ./nan
 nan
 1


 So resolved, it is True.
 
 Unless specified in the ISO C standard, I'd say this is system and 
 compiler dependent.

 Should NumPy rely on a specific binary representation of NaN?

 A related issue is the boolean value of Inf and -Inf.

 Sturla Molden
This is one link that shows the different representation of these 
numbers in IEEE 754:
http://www.psc.edu/general/software/packages/ieee/ieee.php
It is a little clearer than Wikipedia:
http://en.wikipedia.org/wiki/IEEE_754-1985

Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not nothing, so they are
not zero. Also, I think that conversion to an integer should be an error
for all of these, because there is no equivalent representation of these
floating point numbers as integers, and I think that using zero for NaN is wrong.

Now for the other two special representations, I would presume that 
Numpy's PZERO (positive zero) and NZERO (negative zero) are treated as 
nothing. Conversion to integer for these should be zero.

However, I noticed that the standard has just been revised in a way that
may eventually influence Numpy:
http://en.wikipedia.org/wiki/IEEE_754r
http://en.wikipedia.org/wiki/IEEE_754-2008

Note this defines the min/max behavior:

* min(x, NaN) = min(NaN, x) = x
* max(x, NaN) = max(NaN, x) = x

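As a data point for the conversion question, CPython already refuses these conversions at the scalar level (numpy's C-level casts are a separate matter):

```python
print(int(0.0), int(-0.0))    # both signed zeros convert to 0

try:
    int(float("nan"))
except ValueError as err:     # "cannot convert float NaN to integer"
    print(err)

try:
    int(float("inf"))
except OverflowError as err:  # "cannot convert float infinity to integer"
    print(err)
```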

Bruce



Re: [Numpy-discussion] array 2 string

2009-03-11 Thread Michael McNeil Forbes

On 10 Mar 2009, at 10:33 AM, Michael S. Gilbert wrote:

 On Tue, 10 Mar 2009 17:21:23 +0100, Mark Bakker wrote:
 Hello,

 I want to convert an array to a string.

 I like array2string, but it puts these annoying square brackets  
 around
 the array, like

 [[1 2 3],
 [3 4 5]]

 Any way we can suppress the square brackets and get (this is what is
 written with savetxt, but I cannot get it to store in a variable)
 1 2 3
 4 5 6

How about using StringIO:

>>> import numpy as np
>>> from StringIO import StringIO
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> f = StringIO()
>>> np.savetxt(f, a, fmt="%i")
>>> s = f.getvalue()
>>> f.close()
>>> print s
1 2 3
4 5 6

Michael.




Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Lou Pecora

--- On Wed, 3/11/09, Bruce Southey bsout...@gmail.com wrote:

 From: Bruce Southey bsout...@gmail.com
 Subject: Re: [Numpy-discussion] What is the logical value of nan?
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Date: Wednesday, March 11, 2009, 10:24 AM

 This is one link that shows the different representation of
 these 
 numbers in IEEE 754:
 http://www.psc.edu/general/software/packages/ieee/ieee.php
 It is a little clearer than Wikipedia:
 http://en.wikipedia.org/wiki/IEEE_754-1985

Thanks.  Useful sites.

 Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not
 nothing so not zero. 

Agreed.  +1

 Also, I think that conversion to an integer should be an
 error for all of these because there is no equivalent 
 representation of these floating 
 point numbers as integers and I think that using zero for
 NaN is wrong.

Another  +1

 Now for the other two special representations, I would
 presume that 
 Numpy's PZERO (positive zero) and NZERO (negative zero)
 are treated as 
 nothing. Conversion to integer for these should be zero.

Yet another  +1.

-- Lou Pecora,   my views are my own.



Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Charles R Harris
On Wed, Mar 11, 2009 at 8:24 AM, Bruce Southey bsout...@gmail.com wrote:

 Sturla Molden wrote:
  Charles R Harris wrote:
 
  #include <math.h>
  #include <stdio.h>

  int main() {
     double nan = sqrt(-1);
     printf("%f\n", nan);
     printf("%i\n", bool(nan));
     return 0;
  }

  $ ./nan
  nan
  1
 
 
  So resolved, it is True.
 
  Unless specified in the ISO C standard, I'd say this is system and
  compiler dependent.
 
  Should NumPy rely on a specific binary representation of NaN?
 
  A related issue is the boolean value of Inf and -Inf.
 
  Sturla Molden
 
 This is one link that shows the different representation of these
 numbers in IEEE 754:
 http://www.psc.edu/general/software/packages/ieee/ieee.php
 It is a little clearer than Wikipedia:
 http://en.wikipedia.org/wiki/IEEE_754-1985

 Numpy's nan/NaN/NAN, inf/Inf/PINF, and NINF are not nothing so not zero.
 Also, I think that conversion to an integer should be an error for all
 of these because there is no equivalent representation of these floating
 point numbers as integers and I think that using zero for NaN is wrong.

 Now for the other two special representations, I would presume that
 Numpy's PZERO (positive zero) and NZERO (negative zero) are treated as
 nothing. Conversion to integer for these should be zero.

 However, I noticed that the standard has just been revised that may
 eventually influence Numpy:
 http://en.wikipedia.org/wiki/IEEE_754r
 http://en.wikipedia.org/wiki/IEEE_754-2008

 Note this defines the min/max behavior:

* min(x, NaN) = min(NaN, x) = x
* max(x, NaN) = max(NaN, x) = x


We have this behavior in numpy with the fmax/fmin functions.
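With a numpy recent enough to have these functions, the difference can be checked directly: fmax/fmin follow the IEEE 754-2008 rule and ignore NaN where possible, while maximum/minimum propagate it.

```python
import numpy as np

print(np.fmax(1.0, np.nan))     # 1.0 -- NaN is ignored, per IEEE 754-2008
print(np.fmin(np.nan, 2.0))     # 2.0
print(np.maximum(1.0, np.nan))  # nan -- maximum propagates NaN instead
```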

Chuck


Re: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero

2009-03-11 Thread Charles R Harris
On Wed, Mar 11, 2009 at 12:43 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Hi,

For the record, I have just added the following functionalities to
 numpy, which may simplify some C code:
- NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
 positive and negative zeros. Rationale: some code use NAN, _get_nan,
 etc... NAN is a GNU C extension, INFINITY is not available on many C
 compilers. The NPY_ macros are defined from the IEEE754 format, and as
 such should be very fast (the values should be inlined).
- we can now use inline safely in numpy C code: it is defined to
 something recognized by the compiler or nothing if inline is not
 supported. It is NOT defined publicly to avoid namespace pollution.
- NPY_INLINE is a macro which can be used publicly, and has the same
 usage as inline.


Great. This should be helpful.

Chuck


[Numpy-discussion] Don't call e Euler's constant.

2009-03-11 Thread Charles R Harris
Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082
40243 10421 59335 93992... (see Wikipedia:
http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant ).
The constant e is sometimes called Euler's number -- shouldn't that be
Napier or Bernoulli in a pc world -- but I think e is more universally
understood, and the distinction between "constant" and "number" is rather
obscure.

Chuck


Re: [Numpy-discussion] Don't call e Euler's constant.

2009-03-11 Thread Chris Colbert
As long as we all agree that e has a value of 2.71828 18284 59045 23536...,
it's just a matter of semantics.

The constant you reference is denoted by the lowercase Greek letter gamma.

Chris

On Wed, Mar 11, 2009 at 11:39 AM, Charles R Harris 
charlesr.har...@gmail.com wrote:

 Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082
 40243 10421 59335 93992... (see Wikipedia:
 http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant ).
 The constant e is sometimes called Euler's number -- shouldn't that be
 Napier or Bernoulli in a pc world -- but I think e is more universally
 understood and the distinction between constant and number is rather
 obscure.

 Chuck





Re: [Numpy-discussion] Don't call e Euler's constant.

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 12:39 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
 Traditionally, Euler's constant is 0.57721 56649 01532 86060 65120 90082
 40243 10421 59335 93992...

You're right, Euler's constant is generally gamma. "Euler's number" is not
that great either (there are Euler numbers in geometry, for example), so I
just renamed it to "base of natural logarithm",

David


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Christopher Barker
Sturla Molden wrote:
 Why not raise an exception when NaN is evaluated in a boolean
 context? bool(NaN) has no obvious interpretation, so it should be
 considered an error.

+1

Though there is clearly a lot of legacy around this, so maybe it's best
to follow C convention (sigh).

Bruce Southey wrote:
 Also, I think that conversion to an integer should be an error for
 all of these because there is no equivalent representation of these
 floating point numbers as integers and I think that using zero for
 NaN is wrong.

+1

A silent wrong conversion is MUCH worse than an exception!

As for MATLAB, it was entirely doubles for a long time -- I don't think
it's a good example of well thought-out float-integer interactions.


 Now for the other two special representations, I would presume that 
 Numpy's PZERO (positive zero) and NZERO (negative zero) are treated
 as nothing. Conversion to integer for these should be zero.

+1

 Note this defines the min/max behavior:
 
 * min(x, NaN) = min(NaN, x) = x   * max(x, NaN) = max(NaN, x) = x

nice -- it's nice to have these defined -- of course, who knows how long 
it will be (never?) before compilers/libraries support this.

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Charles R Harris
On Wed, Mar 11, 2009 at 11:06 AM, Christopher Barker
chris.bar...@noaa.govwrote:

 Sturla Molden wrote:
  Why not raise an exception when NaN is evaluated in a boolean
  context? bool(NaN) has no obvious interpretation, so it should be
  considered an error.

 +1

 Though there is clearly a lot of legacy around this, so maybe it's best
 to follow C convention (sigh).

 Bruce Southey wrote:
  Also, I think that conversion to an integer should be an error for
  all of these because there is no equivalent representation of these
  floating point numbers as integers and I think that using zero for
  NaN is wrong.

 +1

 A silent wrong conversion is MUCH worse than an exception!

 As for MATLAB, it was entirely doubles for a long time -- I don't think
 it's a good example of well thought-out float-integer interactions.


  Now for the other two special representations, I would presume that
  Numpy's PZERO (positive zero) and NZERO (negative zero) are treated
  as nothing. Conversion to integer for these should be zero.

 +1

  Note this defines the min/max behavior:
 
  * min(x, NaN) = min(NaN, x) = x   * max(x, NaN) = max(NaN, x) = x

 nice -- it's nice to have these defined -- of course, who knows how long
 it will be (never?) before compilers/libraries support this.


Raising exceptions in ufuncs is going to take some work as the inner loops
are void functions without any means of indicating an error.  Exceptions
also need to be thread safe. So I am not opposed but it is something for the
future.

Casting seems to be implemented in arraytypes.inc.src as void functions also
without provision for errors. I would also like to see casting implemented
as ufuncs but that is a separate discussion.

Chuck


[Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
Hi,

I noticed the following in numpy/distutils/system_info.py while trying to
get numpy to build against MKL:

if cpu.is_Itanium():
    plt = '64'
    #l = 'mkl_ipf'
elif cpu.is_Xeon():
    plt = 'em64t'
    #l = 'mkl_em64t'
else:
    plt = '32'
    #l = 'mkl_ia32'

So in the autodetection for MKL, the only way to get plt (platform) set to
'em64t' is to test true for a Xeon.  This function returns false on my Core2
Duo system, even though the platform is very much 'em64t'.  I think that
check should instead read:

elif cpu.is_Xeon() or cpu.is_Core2():

Thoughts?
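For comparison, a less fragile check could key off the architecture rather than the CPU brand. A hypothetical sketch (not the actual system_info.py code; the function name is made up) using the standard platform module:

```python
import platform

def mkl_platform():
    """Guess the MKL platform directory from the machine architecture.

    Illustrative only: this keys off the instruction set (ia64 / x86_64 /
    ia32) instead of trying to recognize individual CPU models like Xeon.
    """
    machine = platform.machine().lower()
    if machine == "ia64":
        return "64"        # Itanium
    if machine in ("x86_64", "amd64"):
        return "em64t"     # any 64-bit x86: Xeon, Core2, Opteron, ...
    return "32"

print(mkl_platform())
```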

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Sturla Molden
Charles R Harris wrote:

 Raising exceptions in ufuncs is going to take some work as the inner 
 loops are void functions without any means of indicating an error.  
 Exceptions also need to be thread safe. So I am not opposed but it is 
 something for the future.

I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses 
the IEEE754 binary format, at least NPY_NAN should be True in a boolean 
context. So bool(nan) is True then.

And that's what happens now on my computer as well:

>>> bool(nan)
True

I don't like Python exceptions raised inside ufuncs. In the future, NumPy 
might add OpenMP support to ufuncs (multicore CPUs are getting 
common), and Python exceptions would prevent that, or at least make it 
difficult (cf. the GIL).


S.M.



Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Francesc Alted
On Wednesday 11 March 2009, Ryan May wrote:
 Hi,

 I noticed the following in numpy/distutils/system_info.py while
 trying to get numpy to build against MKL:

 if cpu.is_Itanium():
     plt = '64'
     #l = 'mkl_ipf'
 elif cpu.is_Xeon():
     plt = 'em64t'
     #l = 'mkl_em64t'
 else:
     plt = '32'
     #l = 'mkl_ia32'

 So in the autodetection for MKL, the only way to get plt (platform)
 set to 'em64t' is to test true for a Xeon.  This function returns
 false on my Core2 Duo system, even though the platform is very much
 'em64t'.  I think that check should instead read:

 elif cpu.is_Xeon() or cpu.is_Core2():

 Thoughts?

This may help you to see the developer's view on this subject:

http://projects.scipy.org/numpy/ticket/994

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Charles R Harris
On Wed, Mar 11, 2009 at 12:19 PM, Sturla Molden stu...@molden.no wrote:

 Charles R Harris wrote:
 
  Raising exceptions in ufuncs is going to take some work as the inner
  loops are void functions without any means of indicating an error.
  Exceptions also need to be thread safe. So I am not opposed but it is
  something for the future.
 
 I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses
 the IEEE754 binary format, at least NPY_NAN should be True in a boolean
 context. So bool(nan) is True then.

 And that's what happens now on my computer as well:

  >>> bool(nan)
  True

 I don't like Python exception's raised inside ufuncs. In the future we
 NumPy might add OpenMP support to ufuncs (multicore CPUs are getting
 common), and Python exceptions would prevent that, or at least make it
 difficult (cf. the GIL).


I think numpy needs some way to raise these errors, but error handling is
always tricky. Do you have any suggestions as to how you would like to do
it? I was thinking that adding an int return to the loops would provide some
way of indicating errors without specifying how they were to be handled at
this point.
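A sketch of that int-return idea in Python terms (the real change would be in the C inner loops; all names here are made up): each loop reports a status code instead of raising, and the caller decides afterwards how to handle it.

```python
ERR_OK, ERR_NAN_BOOL = 0, 1

def bool_loop(values, out):
    """Hypothetical inner loop: returns a status flag instead of raising."""
    status = ERR_OK
    for i, v in enumerate(values):
        if v != v:                # NaN compares unequal to itself
            status = ERR_NAN_BOOL # flag the condition, but keep going
            out[i] = True         # current behavior: NaN is truthy
        else:
            out[i] = bool(v)
    return status

values = [1.0, 0.0, float("nan")]
out = [None] * len(values)
if bool_loop(values, out) == ERR_NAN_BOOL:
    # The caller could raise, warn, or ignore here, per an errstate-like
    # policy, without any exception machinery inside the loop itself.
    print("NaN encountered in boolean context:", out)
```

Deferring the decision this way keeps the inner loop free of Python API calls, which is also what an OpenMP-parallel loop would need.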

Chuck


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 3:15 AM, Ryan May rma...@gmail.com wrote:
 Hi,

 I noticed the following in numpy/distutils/system_info.py while trying to
 get numpy to build against MKL:

     if cpu.is_Itanium():
         plt = '64'
         #l = 'mkl_ipf'
     elif cpu.is_Xeon():
         plt = 'em64t'
         #l = 'mkl_em64t'
     else:
         plt = '32'
         #l = 'mkl_ia32'

 So in the autodetection for MKL, the only way to get plt (platform) set to
 'em64t' is to test true for a Xeon.  This function returns false on my Core2
 Duo system, even though the platform is very much 'em64t'.  I think that
 check should instead read:

 elif cpu.is_Xeon() or cpu.is_Core2():

 Thoughts?

I think this whole code is inherently fragile. A much better solution
is to make the build process customization easier and more
straightforward. Auto-detection will never work well.

David


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 1:41 PM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Mar 12, 2009 at 3:15 AM, Ryan May rma...@gmail.com wrote:
  Hi,
 
  I noticed the following in numpy/distutils/system_info.py while trying to
  get numpy to build against MKL:
 
  if cpu.is_Itanium():
      plt = '64'
      #l = 'mkl_ipf'
  elif cpu.is_Xeon():
      plt = 'em64t'
      #l = 'mkl_em64t'
  else:
      plt = '32'
      #l = 'mkl_ia32'
 
  So in the autodetection for MKL, the only way to get plt (platform) set
 to
  'em64t' is to test true for a Xeon.  This function returns false on my
 Core2
  Duo system, even though the platform is very much 'em64t'.  I think that
  check should instead read:
 
  elif cpu.is_Xeon() or cpu.is_Core2():
 
  Thoughts?

 I think this whole code is inherently fragile. A much better solution
 is to make the build process customization easier and more
 straightforward. Auto-detection will never work well.

 David


Fair enough.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 1:34 PM, Francesc Alted fal...@pytables.org wrote:

 On Wednesday 11 March 2009, Ryan May wrote:
  Hi,
 
  I noticed the following in numpy/distutils/system_info.py while
  trying to get numpy to build against MKL:
 
  if cpu.is_Itanium():
      plt = '64'
      #l = 'mkl_ipf'
  elif cpu.is_Xeon():
      plt = 'em64t'
      #l = 'mkl_em64t'
  else:
      plt = '32'
      #l = 'mkl_ia32'
 
  So in the autodetection for MKL, the only way to get plt (platform)
  set to 'em64t' is to test true for a Xeon.  This function returns
  false on my Core2 Duo system, even though the platform is very much
  'em64t'.  I think that check should instead read:
 
  elif cpu.is_Xeon() or cpu.is_Core2():
 
  Thoughts?

 This may help you to see the developer's view on this subject:

 http://projects.scipy.org/numpy/ticket/994

 Cheers,

 --
 Francesc Alted



You know, I knew this sounded familiar. If you regularly build against MKL,
can you send me your site.cfg? I've had a lot more success getting the
build to work using the autodetection than the blas_opt and lapack_opt
sections. Since the autodetection doesn't seem like the accepted way, I'd
love to see how to get the accepted way to actually work. :)

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Francesc Alted
On Wednesday 11 March 2009, Ryan May wrote:
 You know, I knew this sounded familiar.  If you regularly build
 against MKL, can you send me your site.cfg.  I've had a lot more
 success getting the build to work using the autodetection than the
 blas_opt and lapack_opt sections.   Since the autodetection doesn't
 seem like the accepted way, I'd love to see how to get the accepted
 way to actually work. :)

Not that I'm an expert in that sort of black magic, but the following worked 
fine for me and numexpr:

[mkl]

# Example for using MKL 10.0
#library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
#include_dirs =  /opt/intel/mkl/10.0.2.018/include

# Example for the MKL included in Intel C 11.0 compiler
library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/

##the following set of libraries is suited for compilation
##with the GNU C compiler (gcc). Refer to the MKL documentation
##if you use other compilers (e.g., Intel C compiler)
mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core
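Once the build finishes, a quick sanity check (not part of the build itself; the exact output format varies between numpy versions) is to ask numpy what it linked against:

```python
import numpy as np

# Print the BLAS/LAPACK build configuration. After a successful MKL
# build, the MKL library names and paths should appear in the output.
np.show_config()
```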

HTH,

-- 
Francesc Alted


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-11 Thread Ryan May
On Wed, Mar 11, 2009 at 2:20 PM, Francesc Alted fal...@pytables.org wrote:

 On Wednesday 11 March 2009, Ryan May wrote:
  You know, I knew this sounded familiar.  If you regularly build
  against MKL, can you send me your site.cfg.  I've had a lot more
  success getting the build to work using the autodetection than the
  blas_opt and lapack_opt sections.   Since the autodetection doesn't
  seem like the accepted way, I'd love to see how to get the accepted
  way to actually work. :)

 Not that I'm an expert in that sort of black magic, but the following worked
 fine for me and numexpr:

 [mkl]

 # Example for using MKL 10.0
 #library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t
 #include_dirs =  /opt/intel/mkl/10.0.2.018/include

 # Example for the MKL included in Intel C 11.0 compiler
 library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
 include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/

 ##the following set of libraries is suited for compilation
 ##with the GNU C compiler (gcc). Refer to the MKL documentation
 ##if you use other compilers (e.g., Intel C compiler)
 mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core


Thanks.  That's actually pretty close to what I had.  I was actually
thinking that you were using only blas_opt and lapack_opt, since supposedly
the [mkl] style section is deprecated.  Thus far, I cannot get these to work
with MKL.

Ryan

-- 
Ryan May


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 3:36 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Wed, Mar 11, 2009 at 12:19 PM, Sturla Molden stu...@molden.no wrote:

 Charles R Harris wrote:
 
  Raising exceptions in ufuncs is going to take some work as the inner
  loops are void functions without any means of indicating an error.
  Exceptions also need to be thread safe. So I am not opposed but it is
  something for the future.
 
 I just saw David Cournapeau's post regarding a NPY_NAN macro. As it uses
 the IEEE754 binary format, at least NPY_NAN should be True in a boolean
 context. So bool(nan) is True then.

 And that's what happens now on my computer as well:

   bool(nan)
 True

 I don't like Python exceptions raised inside ufuncs. In the future
 NumPy might add OpenMP support to ufuncs (multicore CPUs are getting
 common), and Python exceptions would prevent that, or at least make it
 difficult (cf. the GIL).

 I think numpy needs someway to raise these errors, but error handling is
 always tricky. Do you have any suggestions as to how you would like to do
 it? I was thinking that adding an int return to the loops would provide some
 way of indicating errors without specifying how they were to be handled at
 this point.

I think that we should think carefully about how to set up a good
error system within numpy. If we keep adding ad-hoc error handling, I
am afraid it will be hard to read and maintain. We could have
something like:

typedef struct
{
    int error;
    const char *str;
} ErrorStruct;

static ErrorStruct UfuncErrors[] =
    { {CODE1, "error 1 string"}, ... };

and the related functions to get strings from code. Currently, we
can't really pass errors through several callees because we don't have
a commonly agreed set of errors. If we don't use an errno, I don't
think there are any other options,
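For context, this is the error handling ufuncs already expose at the Python level today: IEEE-style floating-point flags whose disposition (ignore/warn/raise) is controlled with np.seterr. The discussion above is about reporting richer errors than these from the C inner loops.

```python
import numpy as np

# Today's ufunc error reporting: floating-point flags set via np.seterr.
old = np.seterr(divide='raise')
try:
    np.array([1.0]) / np.array([0.0])
except FloatingPointError as err:
    print('caught:', err)
finally:
    np.seterr(**old)  # restore the previous settings
```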

David


[Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-11 Thread Ryan May
Hi,

This is what I'm getting when I try to build scipy HEAD:

building library superlu_src sources
building library arpack sources
building library sc_c_misc sources
building library sc_cephes sources
building library sc_mach sources
building library sc_toms sources
building library sc_amos sources
building library sc_cdf sources
building library sc_specfun sources
building library statlib sources
building extension scipy.cluster._vq sources
error:
/home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c:
No such file or directory

This didn't happen until I updated to *numpy* SVN HEAD.  Numpy itself is
building without errors and no tests fail on my system.  Any ideas?

Ryan

-- 
Ryan May


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 4:52 AM, Ryan May rma...@gmail.com wrote:
 Hi,

 This is what I'm getting when I try to build scipy HEAD:

 building library superlu_src sources
 building library arpack sources
 building library sc_c_misc sources
 building library sc_cephes sources
 building library sc_mach sources
 building library sc_toms sources
 building library sc_amos sources
 building library sc_cdf sources
 building library sc_specfun sources
 building library statlib sources
 building extension scipy.cluster._vq sources
 error:
 /home/rmay/.local/lib64/python2.5/site-packages/numpy/distutils/command/../mingw/gfortran_vs2003_hack.c:
 No such file or directory

 This didn't happen until I updated to *numpy* SVN HEAD.  Numpy itself is
 building without errors and no tests fail on my system.  Any ideas?

Yes, as the name implies, it is an ugly hack to support gfortran on
windows - and the hack itself is implemented in an ugly way. I will
fix it tomorrow - in the meantime, copying the file from svn into the
directory where the file is looked for should do it - the file is not
used on linux anyway.

David


[Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-11 Thread David Cournapeau
Hi,

I was looking at #936, to implement correctly the hashing protocol for
dtypes. Am I right to believe that tp_hash should recursively descend
fields for compound dtypes, and the hash value should depend on the
size/ndim/typenum/byteorder for each atomic dtype + fields name (and
titles) ? Contrary to comparison, we can't reuse the python C api,
since PyObject_Hash cannot be applied to the fields dict, right ?

cheers,

David


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-11 Thread Robert Kern
On Wed, Mar 11, 2009 at 15:06, David Cournapeau courn...@gmail.com wrote:
 Hi,

 I was looking at #936, to implement correctly the hashing protocol for
 dtypes. Am I right to believe that tp_hash should recursively descend
 fields for compound dtypes, and the hash value should depend on the
 size/ndim/typenum/byteorder for each atomic dtype + fields name (and
 titles) ? Contrary to comparison, we can't reuse the python C api,
 since PyObject_Hash cannot be applied to the fields dict, right ?

Usually, one constructs a hashable analogue; e.g. taking the .descr
and converting all of the lists to tuples. Then use PyObject_Hash on
that.
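A rough Python-level sketch of that idea (the helper below is hypothetical, not numpy API): recursively freeze the nested lists in .descr into tuples so the result is hashable, then hash that.

```python
import numpy as np

# Hypothetical helper (not part of numpy): build a hashable analogue
# of a dtype by converting the nested lists in .descr into tuples.
def descr_key(dtype):
    def freeze(obj):
        if isinstance(obj, (list, tuple)):
            return tuple(freeze(item) for item in obj)
        return obj
    return freeze(dtype.descr)

a = np.dtype([('x', np.int32), ('y', np.float64)])
b = np.dtype([('x', np.int32), ('y', np.float64)])
print(hash(descr_key(a)) == hash(descr_key(b)))  # True: equal dtypes hash alike
```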

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin

2009-03-11 Thread Chris Colbert
i don't know the correct answer... but i imagine it would be fairly easy to
compile a couple of representative scripts on each compiler and compare their
performance.

On Wed, Mar 11, 2009 at 4:29 PM, Sebastian Haase ha...@msg.ucsf.edu wrote:

 Hi,
 I was wondering if people could comment on which compiler produces faster
 code,
 MS-VS2003 or cygwin g++ ?
 I use Python 2.5 and SWIG.  I have C/C++ routines for large (maybe
 10MB, 100MB or even 1GB (on XP 64bit)) data processing.
 I'm not talking about BLAS or anything like that - just for-loops
 mostly on contiguous memory.
 Or should the speed / memory performance of the resulting code be the same?


 Thanks,
 Sebastian Haase


Re: [Numpy-discussion] A module for homogeneous transformation matrices, Euler angles and quaternions

2009-03-11 Thread Chris Colbert
there has already been a port of the robotics toolbox for matlab into python
which is built on numpy:

http://code.google.com/p/robotics-toolbox-python/

which contains all the functions you are describing.


Chris
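
For readers who haven't used these: a minimal sketch (illustrative only, not code from either module) of the kind of 4x4 homogeneous transformation matrices such libraries build and compose:

```python
import numpy as np

# Compose a homogeneous transform: rotate a point 90 degrees about the
# z axis, then translate it by (1, 0, 0).
def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    return T

def translation(dx, dy, dz):
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

T = np.dot(translation(1, 0, 0), rotation_z(np.pi / 2))
p = np.dot(T, [1.0, 0.0, 0.0, 1.0])  # point (1, 0, 0) in homogeneous coords
print(np.allclose(p[:3], [1.0, 1.0, 0.0]))  # True
```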

On Wed, Mar 4, 2009 at 6:10 PM, Gareth Elston 
gareth.elston.fl...@googlemail.com wrote:

 I found a nice module for these transforms at
 http://www.lfd.uci.edu/~gohlke/code/transformations.py.html. I've
 been using an older version for some time and thought it might make a
 good addition to numpy/scipy. I made some simple mods to the older
 version to add a couple of functions I needed and to allow it to be
 used with Python 2.4.

 The module is pure Python (2.5, with numpy 1.2 imported), includes
 doctests, and is BSD licensed. Here's the first part of the module
 docstring:

 Homogeneous Transformation Matrices and Quaternions.

 A library for calculating 4x4 matrices for translating, rotating,
 mirroring,
 scaling, shearing, projecting, orthogonalizing, and superimposing arrays of
 homogenous coordinates as well as for converting between rotation matrices,
 Euler angles, and quaternions.
 

 I'd like to see this added to numpy/scipy so I know I've got some
 reading to do (scipy.org/Developer_Zone and the huge scipy-dev
 discussions on Scipy development infrastructure / workflow) to make
 sure it follows the guidelines, but where would people like to see
 this? In numpy? scipy? scikits? elsewhere?

 I seem to remember that there was a first draft of a guide for
 developers being written. Are there any links available?

 Thanks,
 Gareth.



[Numpy-discussion] is it a bug?

2009-03-11 Thread shuwj5...@163.com
Hi,

import numpy as np
x = np.arange(30)
x.shape = (2,3,5)

idx = np.array([0,1])
e = x[0,idx,:]
print e.shape   
# return (2,5). ok.

idx = np.array([0,1])
e = x[0,:,idx]
print e.shape  

#- return (2,3). I think the right answer should be (3,2). Is
#   it a bug here? my numpy version is 1.2.1.


Regards

David
-- 
 




Re: [Numpy-discussion] is it a bug?

2009-03-11 Thread Jonathan Taylor
You lost me on
 x = np.arange(30)
 x.shape = (2,3,5)

For me I get:
In [2]: x = np.arange(30)

In [3]: x.shape
Out[3]: (30,)

which is what I would expect.   Perhaps I missed something?

Jon.

On Wed, Mar 11, 2009 at 8:55 PM, shuwj5...@163.com shuwj5...@163.com wrote:
 Hi,

 import numpy as np
 x = np.arange(30)
 x.shape = (2,3,5)

 idx = np.array([0,1])
 e = x[0,idx,:]
 print e.shape
 # return (2,5). ok.

 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 #- return (2,3). I think the right answer should be (3,2). Is
 #       it a bug here? my numpy version is 1.2.1.


 Regards

 David
 --
  




Re: [Numpy-discussion] is it a bug?

2009-03-11 Thread Robert Kern
On Wed, Mar 11, 2009 at 21:51, Jonathan Taylor
jonathan.tay...@utoronto.ca wrote:
 You lost me on
 x = np.arange(30)
 x.shape = (2,3,5)

 For me I get:
 In [2]: x = np.arange(30)

 In [3]: x.shape
 Out[3]: (30,)

 which is what I would expect.   Perhaps I missed something?

He is reshaping x by assigning (2,3,5) to its shape tuple, not
asserting that it is equal to (2,3,5) without modification.
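
In other words, the two spellings are equivalent:

```python
import numpy as np

# Assigning to .shape reshapes the existing array in place (no copy),
# while reshape() returns a reshaped view.
x = np.arange(30)
x.shape = (2, 3, 5)                 # in-place reshape via attribute assignment
y = np.arange(30).reshape(2, 3, 5)  # same result via the method
print(x.shape == y.shape == (2, 3, 5))  # True
```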

-- 
Robert Kern



Re: [Numpy-discussion] is it a bug?

2009-03-11 Thread josef . pktd
On Wed, Mar 11, 2009 at 9:51 PM, Jonathan Taylor
jonathan.tay...@utoronto.ca wrote:
 You lost me on
 x = np.arange(30)
 x.shape = (2,3,5)

 For me I get:
 In [2]: x = np.arange(30)

 In [3]: x.shape
 Out[3]: (30,)

 which is what I would expect.   Perhaps I missed something?

 Jon.
 On Wed, Mar 11, 2009 at 8:55 PM, shuwj5...@163.com shuwj5...@163.com wrote:
 Hi,

 import numpy as np
 x = np.arange(30)
 x.shape = (2,3,5)

 idx = np.array([0,1])
 e = x[0,idx,:]
 print e.shape
 # return (2,5). ok.

 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 #- return (2,3). I think the right answer should be (3,2). Is
 #       it a bug here? my numpy version is 1.2.1.


 Regards

 David

same problem with reshape instead of assigning to shape:

 x = np.arange(30).reshape(2,3,5)
 idx = np.array([0,1]); e = x[0,:,idx]; e.shape
(2, 3)
 idx = np.array([0,1]); e = x[0,:,:2]; e.shape
(3, 2)
 e = x[0,:,[0,1]];e.shape
(2, 3)
 e = x[0,np.arange(3)[:,np.newaxis],[0,1]]; e.shape
(3, 2)
 e = x[0,0:3,[0,1]];e.shape
(2, 3)

I was trying to figure out what the broadcasting rules are doing, but
the combination of slice : and an index looks weird, and I'm using
this pattern all the time.

Josef


Re: [Numpy-discussion] is it a bug?

2009-03-11 Thread Robert Kern
On Wed, Mar 11, 2009 at 19:55, shuwj5...@163.com shuwj5...@163.com wrote:
 Hi,

 import numpy as np
 x = np.arange(30)
 x.shape = (2,3,5)

 idx = np.array([0,1])
 e = x[0,idx,:]
 print e.shape
 # return (2,5). ok.

 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 #- return (2,3). I think the right answer should be (3,2). Is
 #       it a bug here? my numpy version is 1.2.1.

It's certainly weird, but it's working as designed. Fancy indexing via
arrays is a separate subsystem from indexing via slices. Basically,
fancy indexing decides the outermost shape of the result (e.g. the
leftmost items in the shape tuple). If there are any sliced axes, they
are *appended* to the end of that shape tuple.
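
The rule can be seen directly with the array from the original post:

```python
import numpy as np

# When fancy (array) indices are mixed with slices, the broadcast shape
# of the fancy indices forms the leading dimensions of the result, and
# any sliced axes are appended after it.
x = np.arange(30).reshape(2, 3, 5)
idx = np.array([0, 1])

print(x[0, idx, :].shape)  # (2, 5): fancy indices adjacent, position kept
print(x[0, :, idx].shape)  # (2, 3): fancy shape (2,) first, sliced axis (3,) appended
```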

-- 
Robert Kern



Re: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 5:29 AM, Sebastian Haase ha...@msg.ucsf.edu wrote:
 Hi,
 I was wondering if people could comment on which compiler produces faster 
 code,
 MS-VS2003 or cygwin g++ ?
 I use Python 2.5 and SWIG.  I have C/C++ routines for large (maybe
 10MB, 100MB or even 1GB (on XP 64bit)) data processing.
 I'm not talking about BLAS or anything like that - just for-loops
 mostly on contiguous memory.

On windows xp 64 bits, the choice is easy: there is no working native
g++ compiler yet, there are quite a few bugs ( in particular the
driver is broken, which means you have to call the compiler, assembler
and linker manually). AFAIK, cygwin cannot run 64 bits binaries
(cygwin itself is only available on 32 bits for sure), and you can't
cross compile easily.

David


Re: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 12:38 PM, David Cournapeau courn...@gmail.com wrote:
 and you can't
 cross compile easily.

Of course, this applies to numpy/scipy - you can cross compile your
own extensions relatively easily (at least I don't see why it would
not be possible).

David


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-11 Thread David Cournapeau
On Thu, Mar 12, 2009 at 5:36 AM, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Mar 11, 2009 at 15:06, David Cournapeau courn...@gmail.com wrote:
 Hi,

 I was looking at #936, to implement correctly the hashing protocol for
 dtypes. Am I right to believe that tp_hash should recursively descend
 fields for compound dtypes, and the hash value should depend on the
 size/ndim/typenum/byteorder for each atomic dtype + fields name (and
 titles) ? Contrary to comparison, we can't reuse the python C api,
 since PyObject_Hash cannot be applied to the fields dict, right ?

 Usually, one constructs a hashable analogue; e.g. taking the .descr
 and converting all of the lists to tuples. Then use PyObject_Hash on
 that.

Is the .descr of two dtypes guaranteed to be equal whenever the dtypes
are equal ? It is not obvious to me that PyArray_EquivTypes is
equivalent to comparing the descr ?

David


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-11 Thread Robert Kern
On Wed, Mar 11, 2009 at 22:49, David Cournapeau courn...@gmail.com wrote:
 On Thu, Mar 12, 2009 at 5:36 AM, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Mar 11, 2009 at 15:06, David Cournapeau courn...@gmail.com wrote:
 Hi,

 I was looking at #936, to implement correctly the hashing protocol for
 dtypes. Am I right to believe that tp_hash should recursively descend
 fields for compound dtypes, and the hash value should depend on the
 size/ndim/typenum/byteorder for each atomic dtype + fields name (and
 titles) ? Contrary to comparison, we can't reuse the python C api,
 since PyObject_Hash cannot be applied to the fields dict, right ?

 Usually, one constructs a hashable analogue; e.g. taking the .descr
 and converting all of the lists to tuples. Then use PyObject_Hash on
 that.

 Is the .descr of two dtypes guaranteed to be equal whenever the dtypes
 are equal ? It is not obvious to me that PyArray_EquivTypes is
 equivalent to comparing the descr ?

It was an example.

-- 
Robert Kern
