[Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Stéfan van der Walt
Hey all,

Gael just asked me why `add`, `subtract` etc. didn't have out
arguments like `cos` and `sin`.  I couldn't give him a good answer.

Could we come to an agreement to add a uniform interface to ufuncs for
1.3?  Either all or none of them should support the out argument.

I see a couple of `deg` (for degrees) flags floating around as well,
so we could consider adding those to all ufuncs that accept angles.

Thanks for your input,
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Robert Kern
On Thu, Sep 4, 2008 at 02:03, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Hey all,

 Gael just asked me why `add`, `subtract` etc. didn't have out
 arguments like `cos` and `sin`.  I couldn't give him a good answer.

 Could we come to an agreement to add a uniform interface to ufuncs for
 1.3?  Either all or none of them should support the out argument.

I am confused. add() and subtract() *do* take an out argument.


In [15]: from numpy import *

In [16]: x = arange(10, 20)

In [17]: y = arange(10)

In [18]: z = zeros(10, int)

In [19]: add(x,y,z)
Out[19]: array([10, 12, 14, 16, 18, 20, 22, 24, 26, 28])

In [20]: z
Out[20]: array([10, 12, 14, 16, 18, 20, 22, 24, 26, 28])

In [21]: subtract(x,y,z)
Out[21]: array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10])

In [22]: z
Out[22]: array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
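For reference, modern NumPy also accepts the out argument by keyword, which is the spelling the thread participants tried; a minimal self-contained sketch of both forms (only the positional one worked at the time of this thread):

```python
import numpy as np

x = np.arange(10, 20)
y = np.arange(10)
z = np.zeros(10, dtype=int)

# Positional out argument, as in the session above:
np.add(x, y, z)

# Keyword spelling -- what Stéfan and Gael tried; accepted in modern NumPy:
np.subtract(x, y, out=z)
```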

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


[Numpy-discussion] numpy 1.3: cleaning the math configuration, toward a warning free build

2008-09-04 Thread David Cournapeau
Hi,

For numpy 1.3, there are several things I would like to work on, which
I have already hinted previously. The two big things are:
- cleaning the math configuration
- making numpy buildable warning-free with a good set of warning flags.

For the warning-free build, I have started a NEP, since some decisions
are not obvious. Is this appropriate?

For the first part, I want to clean up the current umath module, mainly
the configuration: we have quite a big mess ATM with several intertwined
configurations, and every time I try to touch it, I break something
(adding trunc, modifying the mingw configuration, etc.). I think it would
be nice to clean up and have a per-function check (instead of the
HAVE_LONGDOUBLE heuristic); I noticed, for example, that depending on the
configuration, we have different implementations for the same function,
etc. Robert mentioned the configuration-time cost increase, but I
think I have a solution to make it as fast as before for most
configurations (the ones which work today); it will be slower only
for platforms which need a better configuration (mingw, etc.).

Should I write a NEP for that too, or is a patch enough?

cheers,

David



Re: [Numpy-discussion] Updated Numpy reference guide

2008-09-04 Thread Ondrej Certik
Hi Pauli!

On Sun, Aug 31, 2008 at 8:21 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:

 Hi all,

 I finished the first iteration of incorporating material from Travis
 Oliphant's Guide to Numpy to the Sphinxy reference guide we were
 constructing in the Doc marathon.

 Result is here: (the PDF is a bit ugly, though, some content is almost
 randomly scattered there)

http://www.iki.fi/pav/tmp/numpy-refguide/index.xhtml
http://www.iki.fi/pav/tmp/numpy-refguide/NumPy.pdf

 Source is here: (Stéfan, if it looks ok to you, could you pull and check
 if it builds for you when you have time?)

https://code.launchpad.net/~pauli-virtanen/scipy/numpy-refguide

 What I did with the Guide to Numpy material was:

 - Collapsed each of the reference Chapters 3, 6, 8, 9 (ndarrays, scalars,
  dtypes, ufuncs) with the more introductory material in Chapter 2.

 - As this was supposed to be a reference guide, I tried to compress the
  text from Chapter 2 as much as possible, by sticking to definitions and
  dropping some more tutorial-oriented parts. This may have reduced
  readability at some points...

 - I added some small bits or rewrote parts in the above sections in
  places where I thought it would improve the result.

 - I did not include material that I thought was better to be put into
  appropriate docstrings in Numpy.

  What to do with class docstrings and obscure __xxx__ attributes was not
  so clear a decision, so what I did for these varies.

 - The sections about Ufuncs and array indexing are taken almost verbatim
  from the Guide to Numpy. The ndarray, scalar and dtype sections
  somewhat follow the structure of the Guide, but the text is more heavily
  edited from the original.

 Some things to do:

 - Descriptions about constructing items with __new__ methods should
  probably still be clarified; I just replaced references to __new__ with
  references to the corresponding classes.

 - What to do with the material from numpy.doc.* should be decided, as the
  text there doesn't look like it should go into a reference manual.

 Some questions:

 - Is this good enough to go into Numpy SVN at some point?

  Or should we redo it and base the work closer to the original
  Guide to Numpy?

 - Does it build for you?

  (I'd recommend using the development 0.5 version of Sphinx, so that you
  get the nifty Inter-Sphinx links to the Python documentation.)

  We are unfortunately beating the Sphinx with a big stick to make it
  place the documentation of each function or class into a separate file,
  and to convert the Numpy docstring format to something the Sphinx can
  fathom.

:)


  There's also some magic in place to make toctrees:: of function listings
  more pleasant to the eye.

 Any comments of what should be improved are welcome. (Even better: clone
 the bzr branch, make the changes yourself, and put the result somewhere
 available! E.g. as a bzr bundle or a branch on the launchpad.)

I think it looks excellent. It'd be cool if all the docs could finally
be at one place, instead of scattered all over the wiki. So for me,
any form in sphinx is ok.

Ondrej


Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Stéfan van der Walt
2008/9/4 Robert Kern [EMAIL PROTECTED]:
 I am confused. add() and subtract() *do* take an out argument.

So it does.  We both tried a keyword-style argument, which I think is
a reasonable expectation?

Cheers
Stéfan


Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Robert Kern
On Thu, Sep 4, 2008 at 05:01, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 2008/9/4 Robert Kern [EMAIL PROTECTED]:
 I am confused. add() and subtract() *do* take an out argument.

 So it does.  We both tried a keyword-style argument, which I think is
 a reasonable expectation?

It would certainly be *nice*, but many C-implemented functions don't
do this ('cause it's a pain in C). It's even harder to do for ufuncs;
follow the chain of calls down from PyUFunc_GenericFunction().

Hmm. Now that I look at it, it might be feasible to extract the out=
keyword in construct_loop() and append it to the args tuple before
passing that down to construct_arrays(). push onto todo stack

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Floating Point Difference between numpy and numarray

2008-09-04 Thread Hanni Ali
Hi,

Is there a way I can set numpy to use dtype='float64' throughout all my code,
or force it to use the biggest datatype, without adding dtype='float64'
to every call to mean, stdev, etc.?



2008/9/3 Sebastian Stephan Berg [EMAIL PROTECTED]

 Hi,

 just guessing here. But numarray seems to calculate the result in a
 bigger datatype, while numpy uses float32, which is the input array's
 type (at least I thought so; trying it confused me just now ...). In any
 case, maybe the difference will be gone if you
 use .mean(dtype='float64') (or whatever dtype numarray actually uses,
 which seems to be numarray.MaximumType(a.type()), where a is the array
 to take the mean of).

 Sebastian



Re: [Numpy-discussion] Floating Point Difference between numpy and numarray

2008-09-04 Thread David Cournapeau
Hanni Ali wrote:
 Hi,

 Is there a way I can set numpy to use dtype='float64' throughout all
 my code or force it to use the biggest datatype without adding the
 dtype='float64' to every call to mean, stdev etc..

Since it is the default type for the functions you mention, you can just
remove any call to dtype; but removing all the calls to dtype='float64'
is not much less work than replacing dtype='float32' :) More
seriously, depending on your program, it may not be doable 100%
automatically.
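To make the accumulator effect behind this thread concrete, here is a small sketch: summing many float32 values in float32 accumulates rounding error that a float64 accumulator avoids. (Exact magnitudes vary with NumPy's summation strategy; modern NumPy uses pairwise summation, so the float32 drift is smaller than it was in 2008.)

```python
import numpy as np

a = np.full(10**7, 0.1, dtype=np.float32)

m32 = a.mean()                   # accumulates at the input precision
m64 = a.mean(dtype=np.float64)   # forces a float64 accumulator

# m64 recovers the float32 value of 0.1 essentially exactly;
# m32 may drift slightly -- the numpy/numarray discrepancy discussed here.
```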

cheers,

David


Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Gael Varoquaux
On Thu, Sep 04, 2008 at 05:24:53AM -0500, Robert Kern wrote:
 On Thu, Sep 4, 2008 at 05:01, Stéfan van der Walt [EMAIL PROTECTED] wrote:
  2008/9/4 Robert Kern [EMAIL PROTECTED]:
  I am confused. add() and subtract() *do* take an out argument.

Hum, good!

  So it does.  We both tried a keyword-style argument, which I think is
  a reasonable expectation?

 It would certainly be *nice*, but many C-implemented functions don't
 do this ('cause it's a pain in C). It's even harder to do for ufuncs;
 follow the chain of calls down from PyUFunc_GenericFunction().

 Hmm. Now that I look at it, it might be feasible to extract the out=
 keyword in construct_loop() and append it to the args tuple before
 passing that down to construct_arrays(). push onto todo stack

Cool. Either that, or fixing the docs. I had a look at the docstring and
the out argument wasn't hinted at there. The docstring editor is here for
fixing these details, but I think I'll simply wait for you to add the
keyword argument :).

Cheers,

Gaël


Re: [Numpy-discussion] numpy 1.3: cleaning the math configuration, toward a warning free build

2008-09-04 Thread Travis E. Oliphant

 Should I write a nep for that too ? Or a patch is enough ?

   
I think a patch with a useful explanation of the patch in the ticket is 
sufficient.

-Travis




[Numpy-discussion] numpy build on windows

2008-09-04 Thread Miroslav Sabljic
Hy people!

I'm quite a newbie regarding numpy, so please excuse me if I'm asking
stupid questions.
I need to build Python and a couple of its site-packages on Windows
using Visual Studio 2008. I've built Python 2.5.1 and now it's time to
build numpy, and I'm quite stuck because the numpy build process is a
little confusing for me. So far I've read the instructions at
http://www.scipy.org/Installing_SciPy/Windows and created a site.cfg
file with the following content:

[mkl]
include_dirs = C:\Program Files (x86)\Intel\MKL\10.0.012\include
library_dirs = C:\Program Files (x86)\Intel\MKL\10.0.012\ia32\lib
mkl_libs = mkl_ia32, mkl_c_dll, libguide40
lapack_libs = mkl_lapack

It keeps telling me that it can't find mkl_ia32, mkl_c_dll and
libguide40 at those locations, but the libs are there.

Could you please point me to a detailed build how-to for Windows
and the MSVC compiler? I would appreciate any help just to get this
built.

I've attached a log file, output of 'python setup.py config'.

Thanks in advance!


-- 
Best regards,
  Miroslav
F2PY Version 2_3649
blas_opt_info:
blas_mkl_info:
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in C:\Program Files 
(x86)\Intel\MKL\10.0.012\ia32\lib
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in 
D:\users\sabljicm\python\Python25\lib
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in C:\
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in 
D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
D:\users\sabljicm\python\Python25\lib
  libraries ptf77blas,ptcblas,atlas not found in C:\
  libraries ptf77blas,ptcblas,atlas not found in 
D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in 
D:\users\sabljicm\python\Python25\lib
  libraries f77blas,cblas,atlas not found in C:\
  libraries f77blas,cblas,atlas not found in 
D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

blas_info:
  libraries blas not found in D:\users\sabljicm\python\Python25\lib
  libraries blas not found in C:\
  libraries blas not found in D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

blas_src_info:
  NOT AVAILABLE

  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in C:\Program Files 
(x86)\Intel\MKL\10.0.012\ia32\lib
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in 
D:\users\sabljicm\python\Python25\lib
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in C:\
  libraries mkl_ia32,mkl_c_dll,libguide40 not found in 
D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
D:\users\sabljicm\python\Python25\lib
  libraries lapack_atlas not found in D:\users\sabljicm\python\Python25\lib
  libraries ptf77blas,ptcblas,atlas not found in C:\
  libraries lapack_atlas not found in C:\
  libraries ptf77blas,ptcblas,atlas not found in 
D:\users\sabljicm\python\Python25\libs
  libraries lapack_atlas not found in D:\users\sabljicm\python\Python25\libs
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in 
D:\users\sabljicm\python\Python25\lib
  libraries lapack_atlas not found in D:\users\sabljicm\python\Python25\lib
  libraries f77blas,cblas,atlas not found in C:\
  libraries lapack_atlas not found in C:\
  libraries f77blas,cblas,atlas not found in 
D:\users\sabljicm\python\Python25\libs
  libraries lapack_atlas not found in D:\users\sabljicm\python\Python25\libs
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

lapack_info:
  libraries lapack not found in D:\users\sabljicm\python\Python25\lib
  libraries lapack not found in C:\
  libraries lapack not found in D:\users\sabljicm\python\Python25\libs
  NOT AVAILABLE

lapack_src_info:
  NOT AVAILABLE

  NOT AVAILABLE

running config


Re: [Numpy-discussion] numpy build on windows

2008-09-04 Thread David Cournapeau
Miroslav Sabljic wrote:
 Hy people!

 I'm quite a newbie regarding numpy so please excuse if I'm asking
 stupid questions.
   

No stupid question. Stupid answer may follow, though...

 I need to build Python and couple of his site-packages on Windows
 using Visual studio 2008. I've built Python 2.5.1 and now it's turn to
 build numpy and I'm quite stuck because numpy build process is a
 little confusing for me. So far I've read the instructions on
 http://www.scipy.org/Installing_SciPy/Windows and created site.cfg
 file with the following content:
   

Ok, you are building on Windows, with a custom Python, on 64-bit
Windows, with MKL... That's a lot at the same time. I would first try
building numpy without the MKL with VS 2008, because this is already a
challenge by itself. Did you try it? Did it work? Did the tests
pass? Are you trying to build a 32-bit numpy (as suggested by the MKL
in the x86 dir), or a 64-bit numpy?

cheers,

David


Re: [Numpy-discussion] numpy build on windows

2008-09-04 Thread Miroslav Sabljić
David Cournapeau wrote:

 Ok, you are building on windows, with a custom python, on windows 64
 bits, with MKL... That's a lot at the same time. 

I'm building it with a custom Python (built with VS 2008) on Win32. It is
a lot for me, but I just have to do it and learn as much as possible
during the process. I'm building it with MKL because I followed the
instructions from the website.


 I would first try 
 building numpy without the MKL with VS 2008, because this is already a
 challenge by itself. Did you you try it ? Did it work ? Did the tests
 pass ? 

I haven't tried building it without MKL because I don't know how. :( As
I followed the instructions, I tried to build it directly with MKL. I can
try it tomorrow at work and post a log. Would you be so kind as to tell
me the proper way to build it without MKL?


 Are you trying to build a 32 bits numpy (as suggested by the MKL
 in x86 dir), or a 64 bits numpy ?

32-bit numpy. Everything I'm building for now is 32-bit (Python, PIL,
numpy, ...) on 32-bit Windows. If you need any more information please
let me know and I will post them.

Thanks for your answer!


-- 
Best regards,
  Miroslav


Re: [Numpy-discussion] numpy build on windows

2008-09-04 Thread David Cournapeau
On Fri, Sep 5, 2008 at 2:30 AM, Miroslav Sabljić [EMAIL PROTECTED] wrote:
 David Cournapeau wrote:

 Ok, you are building on windows, with a custom python, on windows 64
 bits, with MKL... That's a lot at the same time.

 I'm building it with custom Python (built with VS 2008) on Win32. It is
 a lot for me but I just have to do it and learn as much as posible
 during the process. I'm building it with MKL becuase I followed the
 instructions from website.

From scipy.org? If you understood that you have to use the MKL, then we
have to improve the documentation (the installation pages are a mess
ATM). It is certainly not mandatory.

Do you have to use VS 2008? Building Python on Windows is not easy
because of the dependencies; this alone requires quite a bit of
effort. Do you already have your Python built?


 I haven't tried building it without MKL because I don't know how. :( As
 I followed the instructions I tried to build it directly with MKL. I can
 try it tomorrow at work and post a log. Would you be so kind and tell
 what is the proper way to build it without MKL?

On windows, if you only build numpy, you only need a C compiler, so in
theory, python setup.py build, with an empty site.cfg, should work.
But VS 2008 is significantly different from VS 2003, which is the most
often used MS compiler to build numpy (on windows, many people use
mingw; the official binaries are built with mingw; but there are
problems between mingw and VS 2008).

I would try that first (an empty site.cfg build).


 Are you trying to build a 32 bits numpy (as suggested by the MKL
 in x86 dir), or a 64 bits numpy ?

 32-bit numpy. Everything I'm building for now is 32-bit (Python, PIL,
 numpy, ...) on 32-bit Windows. If you need any more information please
 let me know and I will post them.

I am surprised: why do you have Program Files (x86) if you run on
32 bits? I thought that directory only appeared on 64-bit Windows, to
hold the 32-bit programs.

cheers,

David


[Numpy-discussion] BUG in numpy.loadtxt?

2008-09-04 Thread Ryan May
Stefan (or anyone else who can comment),

It appears that the usecols argument to loadtxt no longer accepts numpy 
arrays:

>>> from StringIO import StringIO
>>> text = StringIO('1 2 3\n4 5 6\n')
>>> data = np.loadtxt(text, usecols=np.arange(1,3))

ValueError                                Traceback (most recent call last)

/usr/lib64/python2.5/site-packages/numpy/lib/io.py in loadtxt(fname, 
dtype, comments, delimiter, converters, skiprows, usecols, unpack)
 323 first_line = fh.readline()
 324 first_vals = split_line(first_line)
--> 325 N = len(usecols or first_vals)
 326
 327 dtype_types = flatten_dtype(dtype)

ValueError: The truth value of an array with more than one element is 
ambiguous. Use a.any() or a.all()

>>> data = np.loadtxt(text, usecols=np.arange(1,3).tolist())
>>> data
array([[ 2.,  3.],
       [ 5.,  6.]])

Was it a conscious design decision that the usecols no longer accept 
arrays? The new behavior (in 1.1.1) breaks existing code that one of my 
colleagues has.  Can we get a patch in before 1.2 to get this working 
with arrays again?

Thanks,

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Pauli Virtanen
Thu, 04 Sep 2008 15:32:18 +0200, Gael Varoquaux wrote:

[clip]
 Cool. Either that, or fixing the docs. I had a look at the docstring and
 the out argument wasn't hinted at there. The docstring editor is here for
 fixing these details, but I think I'll simply wait for you to add the
 keyword argument :).

The ufunc signatures in the docstrings are automatically generated in 
ufuncobject.c.

I changed the code in r5768 so that it's more obvious the various 
functions take output arguments. When the interface gets adjusted so that 
'out' are actually keyword arguments, the signature formatting needs to 
be changed again so that 'outN' look like keyword arguments.

Currently the 'sig' and 'extobj' arguments are not shown in the 
signature. I'm not sure whether it makes sense to add them, since they 
appear to be for quite exotic use only...

todo.pop(); todo.append(B)

-- 
Pauli Virtanen



Re: [Numpy-discussion] numpy build on windows

2008-09-04 Thread Miroslav Sabljić
David Cournapeau wrote:

 From scipy.org? If you understood that you have to use the MKL, then we
 have to improve the documentation (the installation pages are a mess
 ATM). It is certainly not mandatory.

Yeah, from scipy.org.


 Do you have to use VS 2008 ? Building python on windows is not easy
 because of the dependencies, this alone requires quite a bit of
 effort. Do you already have your built python ?

Yes, I have to use VS 2008. I've successfully built Python 2.5.1 with VS
2008 and I'm currently using it to build other neccessary site-packages
for it.


 On windows, if you only build numpy, you only need a C compiler, so in
 theory, python setup.py build, with an empty site.cfg, should work.
 But VS 2008 is significantly different from VS 2003, which is the most
 often used MS compiler to build numpy (on windows, many people use
 mingw; the official binaries are built with mingw; but there are
 problems between mingw and VS 2008).
 
 I would try that first (an empty site.cfg build).

I will try it tomorrow at work. I would use mingw or even cygwin but
unfortunately I have to use exactly VS 2008 and only it.


 I am surprised: why do you have Program Files (x86) if you run on
 32 bits? I thought that directory only appeared on 64-bit Windows, to
 hold the 32-bit programs.

Sorry for this misapprehension! I meant to say that I'm building 32-bit
numpy for 32-bit Python. The machine I'm using to build it is 64-bit
Windows which has both 32-bit and 64-bit compilers and libraries
installed. For now I'm doing only 32-bit builds.

Tomorrow I will try to build numpy with empty site.cfg and post results
here. Tnx.


-- 
Best regards,
  Miroslav


[Numpy-discussion] test results of 1.2.0rc1

2008-09-04 Thread Andrew Straw
Hi, with numpy 1.2.0rc1, running 'python -c "import numpy; numpy.test()"'
on my Ubuntu Hardy amd64 machine results in 1721 tests being run and 1
skipped. So far, so good.

However, if I run numpy.test(10,10,all=True), I get 1846 tests with
the message FAILED (SKIP=1, errors=8, failures=68). Furthermore, there
are several matplotlib windows that pop up, many of which are
non-reassuringly blank: Bartlett window frequency response (twice -- I
guess the 2nd is actually for the Blackman window), Hamming window
frequency response, Kaiser window, Kaiser window frequency response,
sinc function. Additionally, the linspace and logspace tests each
generate a plot with green and blue dots at Y values of 0.0 and 0.5,
but it would be nice to have an axes title.

Should I be concerned that there are so many errors and failures with
the numpy test suite? Or am I just running it with unintended settings?
If these tests should pass, I will attempt to find time to generate bug
reports for them, although I don't think there's anything particularly
weird about my setup.

-Andrew


Re: [Numpy-discussion] test results of 1.2.0rc1

2008-09-04 Thread Pauli Virtanen
Thu, 04 Sep 2008 11:51:14 -0700, Andrew Straw wrote:
 Hi, with numpy 1.2.0rc1 running 'python -c import numpy; numpy.test()'
 on my Ubuntu Hardy amd64 machine results in 1721 tests being run and 1
 skipped. So far, so good.
 
 However, if I run numpy.test(10,10,all=True), I get 1846 tests with: the
 message FAILED (SKIP=1, errors=8, failures=68) Furthermore, there are
 several matplotlib windows that pop up, many of which are
 non-reassuringly blank: Bartlett window frequency response (twice -- I
 guess the 2nd is actually for the Blackman window), Hamming window
 frequency response, Kaiser window, Kaiser window frequency response,
 sinc function. Additionally, the linspace and logspace tests each
 generate a plots with green and blue dots at Y values of 0.0 and 0.5,
 but it would be nice to have an axes title.
 
 Should I be concerned that there are so many errors and failures with
 the numpy test suite? Or am I just running it with unintended settings?
 If these tests should pass, I will attempt to find time to generate bug
 reports for them, although I don't think there's anything particularly
 weird about my setup.

I'd say that the settings are unintended in the sense that they run all 
examples in all docstrings. There are quite a few of these, and some 
indeed plot some graphs.

Ideally, all examples should run, but this is not assured at the moment. 
Some of the examples were added during the summer's documentation 
marathon, but others date way back.

There was some discussion about what to do with the plots, but as far as
I know, no conclusions were reached about this, so we stuck with writing
them like all other examples.

Some alternatives offered were

1) Mark them with >>> and live with not being able to doctest docstrings.

   -0

2) Mark them with >>> and fake matplotlib so that the plots don't appear.

   +0

3) Mark them with :: and live with no syntax highlighting in rendered
   documentation and not being able to test or render plots easily.

   -1

4) Steal or adapt the plot:: ReST directive from matplotlib, and use that,
   as Stéfan suggested at some point. Haven't yet gotten around to
   implementing this.

   Tentatively +1; depends a bit on how the implementation goes.
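Option 2 (faking matplotlib) can be sketched in a few lines: register stub modules under matplotlib's names before the doctests import it, so plotting calls become no-ops. The stubbed functions below are just the minimum a typical example touches; a real harness would stub whatever the docstrings call.

```python
import sys
import types

# Stub out matplotlib so doctest examples that plot run headlessly.
mpl = types.ModuleType('matplotlib')
pyplot = types.ModuleType('matplotlib.pyplot')
pyplot.plot = lambda *args, **kwargs: []
pyplot.show = lambda: None
mpl.pyplot = pyplot
sys.modules['matplotlib'] = mpl
sys.modules['matplotlib.pyplot'] = pyplot

# Any later import now picks up the stub instead of opening windows:
import matplotlib.pyplot as plt
plt.plot([1, 2, 3])
plt.show()
```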

-- 
Pauli Virtanen



Re: [Numpy-discussion] BUG in numpy.loadtxt?

2008-09-04 Thread Travis E. Oliphant
Ryan May wrote:
 Stefan (or anyone else who can comment),

 It appears that the usecols argument to loadtxt no longer accepts numpy 
 arrays:
   

Could you enter a ticket so we don't lose track of this.  I don't 
remember anything being intentional.

-Travis



Re: [Numpy-discussion] test results of 1.2.0rc1

2008-09-04 Thread Pauli Virtanen
Thu, 04 Sep 2008 19:07:24 +, Pauli Virtanen wrote:
[clip]
 I'd say that the settings are unintended in the sense that they run
 all examples in all docstrings. There are quite a few of these, and some
 indeed plot some graphs.
 
 Ideally, all examples should run, but this is not assured at the moment.
 Some of the examples were added during the summer's documentation
 marathon, but others date way back.

Ok, maybe to clarify: yes, they are bugs, but only bugs in the
documentation and not in functionality. Also, some (the plots) are
related to unresolved technical issues.

-- 
Pauli Virtanen



Re: [Numpy-discussion] `out` argument to ufuncs

2008-09-04 Thread Robert Kern
On Thu, Sep 4, 2008 at 08:32, Gael Varoquaux
[EMAIL PROTECTED] wrote:
 On Thu, Sep 04, 2008 at 05:24:53AM -0500, Robert Kern wrote:
 On Thu, Sep 4, 2008 at 05:01, Stéfan van der Walt [EMAIL PROTECTED] wrote:
  2008/9/4 Robert Kern [EMAIL PROTECTED]:
  I am confused. add() and subtract() *do* take an out argument.

 Hum, good!

  So it does.  We both tried a keyword-style argument, which I think is
  a reasonable expectation?

 It would certainly be *nice*, but many C-implemented functions don't
 do this ('cause it's a pain in C). It's even harder to do for ufuncs;
 follow the chain of calls down from PyUFunc_GenericFunction().

 Hmm. Now that I look at it, it might be feasible to extract the out=
 keyword in construct_loop() and append it to the args tuple before
 passing that down to construct_arrays(). push onto todo stack

 Cool. Either that, or fixing the docs. I had a look at the docstring and
 the out argument wasn't hinted at there. The docstring editor is here for
 fixing these details, but I think I'll simply wait for you to add the
 keyword argument :).

It won't be happening soon. len(todo) == big.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


[Numpy-discussion] Getting a numpy array from a ctype pointer

2008-09-04 Thread Paulo J. S. Silva
Hello,

I am writing some code interfacing C and Python using ctypes. In a
callback function (in Python) I get a parameter x, which is a c_double
pointer, and a parameter n, which is a c_int giving the length of the
array.

How can I transform this information into a numpy array?

Paulo




Re: [Numpy-discussion] Getting a numpy array from a ctype pointer

2008-09-04 Thread Travis E. Oliphant
Paulo J. S. Silva wrote:
 Hello,

 I am writing some code interfacing C and Python using ctypes. In a
 callback function (in Python) I get a parameter x, which is a c_double
 pointer, and a parameter n, which is a c_int representing the length of
 the array.

 How can I transform this information into a numpy array?

   
Something like this may work:

from numpy import ctypeslib

r = ctypeslib.as_array(x._type_ * n)

If that doesn't work, then you can create an array from an arbitrary 
buffer or any object that exposes the __array_interface__ attribute.   
So, if you can get the address of c_double then you can use it as the 
data for the ndarray.

Ask if you need more help.

-Travis



[Numpy-discussion] How to import data where the decimal separator is a comma ?

2008-09-04 Thread oc-spam66
Hello,

I have data files where the decimal separator is a comma. Can I import this 
data with numpy.loadtxt ? 

Notes : 
- I tried to set the locale LC_NUMERIC=fr_FR.UTF-8 but it didn't change 
anything.
- Python 2.5.2, Numpy 1.1.0

Have a nice day,

O.C.
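(For the archive, a sketch of one common workaround: loadtxt's converters argument can replace the comma before parsing. The bytes check is there because older NumPy hands converters bytes while newer versions hand them str; the sample data is made up.)

```python
import io
import numpy as np

def comma_float(field):
    # older NumPy passes converter inputs as bytes; newer versions pass str
    if isinstance(field, bytes):
        field = field.decode()
    return float(field.replace(',', '.'))

data = io.StringIO("1,5 2,25\n3,75 4,0\n")
a = np.loadtxt(data, converters={0: comma_float, 1: comma_float})
```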




Re: [Numpy-discussion] BUG in numpy.loadtxt?

2008-09-04 Thread Ryan May
Travis E. Oliphant wrote:
 Ryan May wrote:
 Stefan (or anyone else who can comment),

 It appears that the usecols argument to loadtxt no longer accepts numpy 
 arrays:
   
 
 Could you enter a ticket so we don't lose track of this.  I don't 
 remember anything being intentional.
 

Done: #905
http://scipy.org/scipy/numpy/ticket/905

I've attached a patch that does the obvious and coerces usecols to a 
list when it's not None, so it will work for any iterable.
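The idea of the patch, sketched at the call site (the column array here is a made-up example of an iterable that used to break):

```python
import io
import numpy as np

data = io.StringIO("1 2 3\n4 5 6\n")
cols = np.array([0, 2])                    # a numpy array of column indices
a = np.loadtxt(data, usecols=list(cols))   # list() makes any iterable safe
```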

I don't think it was a conscious decision, just a consequence of the 
rewrite using different methods.  There are two problems:

1) It's an API break, technically speaking
2) It currently doesn't even accept tuples, which are used in the docstring.

Can we hurry and get this into 1.2?

Thanks,
Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Getting a numpy array from a ctype pointer

2008-09-04 Thread Paulo J. S. Silva
On Thu, 2008-09-04 at 15:01 -0500, Travis E. Oliphant wrote:
 Paulo J. S. Silva wrote:

 Something like this may work:
 
 from numpy import ctypeslib
 
 r = ctypeslib.as_array(x._type_ * n)
 

Unfortunately, it didn't work.

 If that doesn't work, then you can create an array from an arbitrary 
 buffer or any object that exposes the __array_interface__ attribute.   
 So, if you can get the address of c_double then you can use it as the 
 data for the ndarray.
 
 Ask if you need more help.

I think I need help. I discovered how to create a Python string from the
pointer using ctypes' string_at function:

r = C.string_at(x, n*C.sizeof(x._type_))

Now I can use numpy's fromstring to get an array version of it. However,
this is not enough for my application, as fromstring actually copies the
data, so changes to the array are not reflected in the original array
that goes back to the Fortran code.

Any hints?

Paulo 



Re: [Numpy-discussion] Getting a numpy array from a ctype pointer

2008-09-04 Thread Paulo J. S. Silva
After some trial and error, I found a solution, described below. Is this
the best one? It looks a little convoluted to me (C represents the ctypes
module and np numpy):

Array = n*C.c_double
x = Array.from_address(C.addressof(x.contents))
x = np.ctypeslib.as_array(x)

Thanks,

Paulo
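A self-contained version of that recipe (the storage buffer below is a stand-in for the Fortran-owned memory, and n is made up for the example). Since no copy is made, writes through the NumPy view reach the original storage:

```python
import ctypes as C
import numpy as np

n = 4
storage = (C.c_double * n)(1.0, 2.0, 3.0, 4.0)   # stand-in for Fortran memory
x = C.cast(storage, C.POINTER(C.c_double))       # what the callback receives

# Paulo's recipe: rebuild a ctypes Array type at the pointer's address,
# then wrap it without copying
Array = n * C.c_double
view = np.ctypeslib.as_array(Array.from_address(C.addressof(x.contents)))
view[0] = 42.0                                   # writes through to storage
```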

 On Thu, 2008-09-04 at 17:31 -0400, Paulo J. S. Silva wrote:
  On Thu, 2008-09-04 at 15:01 -0500, Travis E. Oliphant wrote:
  Paulo J. S. Silva wrote:
 
  Something like this may work:
  
  from numpy import ctypeslib
  
  r = ctypeslib.as_array(x._type_ * n)
  
 
 Unfortunately, it didn't work.
 
  If that doesn't work, then you can create an array from an arbitrary 
  buffer or any object that exposes the __array_interface__ attribute.   
  So, if you can get the address of c_double then you can use it as the 
  data for the ndarray.
  
  Ask if you need more help.
 
 I think I need help. I discovered how to create a Python string from the
 pointer using ctypes' string_at function:
 
 r = C.string_at(x, n*C.sizeof(x._type_))
 
 Now I can use numpy's fromstring to get an array version of it. However,
 this is not enough for my application, as fromstring actually copies the
 data, so changes to the array are not reflected in the original array
 that goes back to the Fortran code.
 
 Any hints?
 
 Paulo 
 



Re: [Numpy-discussion] Getting a numpy array from a ctype pointer

2008-09-04 Thread Travis E. Oliphant
Paulo J. S. Silva wrote:
 After some trial and erros, I found a solution, described below. Is this
 the best one? Looks a little convoluted to me (C represents ctypes
 module and np numpy):

 Array = n*C.c_double
 x = Array.from_address(C.addressof(x.contents))
 x = np.ctypeslib.as_array(x)
   

That's a pretty simple approach.  There is a faster approach which uses
the undocumented int_asbuffer function from numpy.core.multiarray

(i.e. from numpy.core.multiarray import int_asbuffer)

Then:

x = np.frombuffer(int_asbuffer(C.addressof(x.contents), n*8))

If you don't like the hard-coded '8', then you can get that number from

np.dtype(float).itemsize

There may also be a way to get it from ctypes.
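There is indeed a ctypes spelling of that size; a quick check that the two agree (both are 8 on the mainstream platforms this thread targets):

```python
import ctypes as C
import numpy as np

itemsize = C.sizeof(C.c_double)   # the ctypes way to get the '8'
assert itemsize == np.dtype(float).itemsize
```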


-Travis






[Numpy-discussion] isnan and co: cleaning up

2008-09-04 Thread David Cournapeau
Hi,

While working on my branch to clean up the math configuration, I
noticed that the code for isnan and co has become quite convoluted. The
autoconf info file mentions this, and suggests the following for
portability (section 5.5.1 of the autoconf manual):

 The C99 standard says that `isinf' and `isnan' are macros.  On
 some systems just macros are available (e.g., HP-UX and Solaris
 10), on some systems both macros and functions (e.g., glibc
 2.3.2), and on some systems only functions (e.g., IRIX 6 and
 Solaris 9).  In some cases these functions are declared in
 nonstandard headers like `sunmath.h' and defined in non-default
 libraries like `-lm' or `-lsunmath'.

 The C99 `isinf' and `isnan' macros work correctly with `long
 double' arguments, but pre-C99 systems that use functions
 typically assume `double' arguments.  On such a system, `isinf'
 incorrectly returns true for a finite `long double' argument that
 is outside the range of `double'.

 To work around this porting mess, you can use code like the
 following.

 #include <math.h>

  #ifndef isnan
  # define isnan(x) \
      (sizeof (x) == sizeof (long double) ? isnan_ld (x) \
       : sizeof (x) == sizeof (double) ? isnan_d (x) \
       : isnan_f (x))
  static inline int isnan_f  (float       x) { return x != x; }
  static inline int isnan_d  (double      x) { return x != x; }
  static inline int isnan_ld (long double x) { return x != x; }
  #endif

  #ifndef isinf
  # define isinf(x) \
      (sizeof (x) == sizeof (long double) ? isinf_ld (x) \
       : sizeof (x) == sizeof (double) ? isinf_d (x) \
       : isinf_f (x))
  static inline int isinf_f  (float       x) { return ! isnan (x) && isnan (x - x); }
  static inline int isinf_d  (double      x) { return ! isnan (x) && isnan (x - x); }
  static inline int isinf_ld (long double x) { return ! isnan (x) && isnan (x - x); }
  #endif

 Use `AC_C_INLINE' (*note C Compiler::) so that this code works on
 compilers that lack the `inline' keyword.  Some optimizing
 compilers mishandle these definitions, but systems with that bug
 typically have missing or broken `isnan' functions anyway, so it's
 probably not worth worrying about.

This is simpler than the current code (I actually understand what the
above code does; the current code in numpy has many cases which I do not
understand), it does not need any extra C code (_isnan and co), and it
is probably more portable, since autoconf has a lot of experience there.
Is the code in _isnan really better than just relying on the NaN
property x != x? (Do we support platforms without an IEEE 754 FPU?)

Does anyone have any thoughts on replacing the current code with the
above (modulo the inline part, which we can drop for now)?
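The NaN property those macros rely on, checked from Python (a quick sanity demo of the IEEE 754 behaviour, not the C implementation itself):

```python
import numpy as np

nan = float('nan')
inf = float('inf')
assert nan != nan            # IEEE 754: NaN compares unequal to itself
assert not (1.0 != 1.0)      # finite values compare equal to themselves
assert np.isnan(inf - inf)   # inf - inf yields NaN, the isinf trick above
```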

cheers,

David