[Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Matthew Brett
Hi,

Running the test suite for one of our libraries, there seems to have
been a recent breakage of the behavior of dtype hashing.

This script:

import numpy as np

data0 = np.arange(10)
data1 = data0 - 10

dt0 = data0.dtype
dt1 = data1.dtype

assert dt0 == dt1 # always passes
assert hash(dt0) == hash(dt1) # fails on latest

fails on the current latest-ish (aada93306) and passes on a stock 1.5.0.

Is this expected?

See you,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Multiple Linear Regression

2011-03-16 Thread dileep kunjaai
Dear sir,
 Can we do multiple linear regression (MLR) in Python? Is there any
 inbuilt function for MLR?

-- 
DILEEPKUMAR. R
J R F, IIT DELHI


Re: [Numpy-discussion] 1.6: branching and release notes

2011-03-16 Thread Mark Wiebe
On Tue, Mar 15, 2011 at 8:29 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:



 On Wed, Mar 16, 2011 at 2:31 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, Mar 15, 2011 at 12:26 PM, Mark Wiebe mwwi...@gmail.com wrote:

 On Sun, Mar 13, 2011 at 11:59 PM, Ralf Gommers 
 ralf.gomm...@googlemail.com wrote:


 Hi Mark, I see you just did this, but is there anything else you
 want/need to do? If it's necessary I can postpone the first beta by a
 couple of days. Better that than rush things too much and end up with
 an API you have reservations about.


 I pushed one more small API change to PyArray_NewLikeArray, adding a
 'subok' parameter which lets you disable sub-types. The main thing still
 missing is documentation (maybe others can help?). The Python nditer exposure is
 undocumented, as well as the new parameters to ufuncs (full list: 'casting',
 'out', 'order', 'subok', 'dtype').


 Okay, I'll write some docs for the nditer object.


 We should probably postpone the beta by a few days, there are some other
 loose ends floating about.

 Besides documentation, the only thing I can think of is the segfault on
 access to a non-existent structured-array field. And perhaps #1619 would be
 good to have fixed, but it's not essential for a beta IMHO. Anything else?


I took a look at #1619, and added a comment suggesting an approach to fix
it. For the goal of having structured types work reasonably, this one
definitely needs a fix.

-Mark


[Numpy-discussion] OpenOpt Suite release 0.33

2011-03-16 Thread Dmitrey
 Hi all,
   I'm glad to inform you about the new release, 0.33, of our completely free
   (license: BSD) cross-platform software:

   OpenOpt:



  * CPLEX has been connected

  * New global solver interalg with guaranteed precision, a competitor
   to LGO, BARON, MATLAB's intsolver and DIRECT (it can also work in
   inexact mode)

  * New solver amsg2p for unconstrained medium-scale NLP and NSP problems

 

   FuncDesigner:



  * Substantial speedup of automatic differentiation when
   vector variables are involved, for both dense and sparse cases

  * Solving MINLPs is now available

  * Added uncertainty analysis

  * Added interval analysis

  * Now you can solve systems of equations with automatic detection
   of whether the system is linear or nonlinear (subject to the given
   set of free or fixed variables)

  * The FD functions min and max can work on lists of oofuns

  * Bugfix for sparse SLEs (systems of linear equations) that slowed
   computation and used extra memory

  * New oofuns: angle, cross

  * Using OpenOpt result(oovars) is now available; also, start points
   with oovars() can now be assigned more easily

 

   SpaceFuncs (a 2D, 3D, N-dimensional geometric package with support for
   parametrized calculations, solving systems of geometric equations, and
   numerical optimization with automatic differentiation):



 * Some bugfixes

 

   DerApproximator:



  * Adjusted to match some changes in FuncDesigner

 

   For more details visit our site http://openopt.org.

   

   Regards, Dmitrey.

   


Re: [Numpy-discussion] 1.6: branching and release notes

2011-03-16 Thread Mark Wiebe
On Tue, Mar 15, 2011 at 10:42 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:



 On Wed, Mar 16, 2011 at 11:29 AM, Ralf Gommers 
 ralf.gomm...@googlemail.com wrote:



 On Wed, Mar 16, 2011 at 2:31 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, Mar 15, 2011 at 12:26 PM, Mark Wiebe mwwi...@gmail.com wrote:

 I pushed one more small API change to PyArray_NewLikeArray, adding a
 'subok' parameter which lets you disable sub-types. The things missing 
 still
 are documentation (maybe others can help?). The Python nditer exposure is
 undocumented, as well as the new parameters to ufuncs (full list: 
 'casting',
 'out', 'order', 'subok', 'dtype').


 Okay, I'll write some docs for the nditer object.


 Hi Mark, could you review and fill in a few blanks:
 https://github.com/rgommers/numpy/tree/nditer-docs


I've changed some things and filled in more details. Sphinx appears to get
this totally wrong, though: it completely ignores the Attributes section
(maybe those need to be separated out?) and links to numpy.copy instead of
nditer.copy in the Methods section.

-Mark


Re: [Numpy-discussion] 1.6: branching and release notes

2011-03-16 Thread Ralf Gommers
On Wed, Mar 16, 2011 at 4:16 PM, Mark Wiebe mwwi...@gmail.com wrote:



 On Tue, Mar 15, 2011 at 10:42 PM, Ralf Gommers 
 ralf.gomm...@googlemail.com wrote:



 On Wed, Mar 16, 2011 at 11:29 AM, Ralf Gommers 
 ralf.gomm...@googlemail.com wrote:



 On Wed, Mar 16, 2011 at 2:31 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, Mar 15, 2011 at 12:26 PM, Mark Wiebe mwwi...@gmail.com wrote:

 I pushed one more small API change to PyArray_NewLikeArray, adding a
 'subok' parameter which lets you disable sub-types. The things missing 
 still
 are documentation (maybe others can help?). The Python nditer exposure is
 undocumented, as well as the new parameters to ufuncs (full list: 
 'casting',
 'out', 'order', 'subok', 'dtype').


 Okay, I'll write some docs for the nditer object.


 Hi Mark, could you review and fill in a few blanks:
 https://github.com/rgommers/numpy/tree/nditer-docs


 I've changed some and filled in more details. Sphinx appears to get this
 totally wrong though, it completely ignores the Attributes section (maybe
 those need to be separated out?), and links to numpy.copy instead of
 nditer.copy in the Methods section.

 Yes, that's a bug in the Sphinx autosummary extension. And also in the
class template in the numpy reference guide source. I already filed #1772
for that.

Attributes section should be fine as it is.

Cheers,
Ralf


[Numpy-discussion] convert dictionary of arrays into single array

2011-03-16 Thread John
Hello,

I have a dictionary with structured arrays, keyed by integers 0...n.
There are no other keys in the dictionary.

What is the most efficient way to convert the dictionary of arrays to
a single array?

All the arrays have the same 'headings' and width, but different lengths.

Is there something I can do that would be more efficient than looping
through and using np.concatenate (vstack)?
--john


Re: [Numpy-discussion] Multiple Linear Regression

2011-03-16 Thread Angus McMorland
On 16 March 2011 02:53, dileep kunjaai dileepkunj...@gmail.com wrote:
 Dear sir,
  Can we do multiple linear regression(MLR)  in python is there any
 inbuilt function for MLR

Yes, you can use np.linalg.lstsq [1] for this.
Here's a quick example:

import numpy as np
# model is y = b0.x0 + b1.x1 + b2.x2
b = np.array([3.,2.,1.])
noise = np.random.standard_normal(size=(10,3)) * 0.1
bn = b[None] + noise
x = np.random.standard_normal(size=(10,3))
y = np.sum(bn * x, axis=1)
be = np.linalg.lstsq(x,y)

and be[0] should be close to the original b (3,2,1.).

[1] 
http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.linalg.lstsq.html

 --
 DILEEPKUMAR. R
 J R F, IIT DELHI
-- 
AJC McMorland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh


Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Sturla Molden
Den 16.03.2011 00:01, skrev Neal Becker:

Here is how Fortran compares:

 * 1-d, 2-d only or N-d??

Any of those.

 * support for slice views?  What exactly kind of support?

Fortran 90 pointers create a view.

real*8, target :: array(n,m)
real*8, pointer :: view

view => array(::2, ::2)

Slicing alone creates a view or new array depending on context.
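
For comparison, NumPy basic slicing behaves like the Fortran pointer case above: it returns a strided view into the same memory, never a copy. A minimal sketch using only standard NumPy:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # 3x4 base array
v = a[::2, ::2]                   # basic slicing returns a strided view, not a copy

v[0, 0] = 99                      # writing through the view...
assert a[0, 0] == 99              # ...modifies the base array
assert np.shares_memory(a, v)     # both reference the same underlying buffer
```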


 * semantics like numpy, that make many operations avoid copy?

Whenever the compiler can infer that it's safe, i.e. when the same data 
cannot be referenced on both the lhs and rhs. Otherwise it will make a 
temporary copy.

 * what support for arithmetic operations?

Any.

 Do views support arithmetic
 operations?

Yes.


 * expression templates?

No.

 * How's the performance with indexing?  Multi-D indexing?  How about 
 iteration?

Same as C for release builds.

Bounds checking is a compiler-dependent option, e.g. for debugging.

 * What are the semantics of iterators?  I don't think I've seen 2 libs that
 treat multi-d iterators the same way (and I've tried a lot of them).

do k = 1,size(x)
 x(k) = blabblabla.
end do



Sturla





[Numpy-discussion] When was the ddof kwarg added to std()?

2011-03-16 Thread Darren Dale
Does anyone know when the ddof kwarg was added to std()? Has it always
been there?

Thanks,
Darren


Re: [Numpy-discussion] Multiple Linear Regression

2011-03-16 Thread dileep kunjaai
Thank you for your time and consideration.

On Wed, Mar 16, 2011 at 5:17 PM, Angus McMorland amcm...@gmail.com wrote:

 On 16 March 2011 02:53, dileep kunjaai dileepkunj...@gmail.com wrote:
  Dear sir,
   Can we do multiple linear regression(MLR)  in python is there any
  inbuilt function for MLR

 Yes, you can use np.linalg.lstsq [1] for this.
 Here's a quick example:

 import numpy as np
 # model is y = b0.x0 + b1.x1 + b2.x2
 b = np.array([3.,2.,1.])
 noise = np.random.standard_normal(size=(10,3)) * 0.1
 bn = b[None] + noise
 x = np.random.standard_normal(size=(10,3))
 y = np.sum(bn * x, axis=1)
 be = np.linalg.lstsq(x,y)

 and be[0] should be close to the original b (3,2,1.).

 [1]
 http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.linalg.lstsq.html

  --
  DILEEPKUMAR. R
  J R F, IIT DELHI
 --
 AJC McMorland
 Post-doctoral research fellow
 Neurobiology, University of Pittsburgh




-- 
DILEEPKUMAR. R
J R F, IIT DELHI


Re: [Numpy-discussion] When was the ddof kwarg added to std()?

2011-03-16 Thread Scott Sinclair
On 16 March 2011 14:52, Darren Dale dsdal...@gmail.com wrote:
 Does anyone know when the ddof kwarg was added to std()? Has it always
 been there?

Does 'git log --grep=ddof' help?

Cheers,
Scott


[Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Paul Anton Letnes
Hi!

This little snippet of code tricked me (in a more convoluted form). The *= 
operator does not change the datatype of the left hand side array. Is this 
intentional? It did fool me and throw my results quite a bit off. I always 
assumed that 'a *= b' means exactly the same as 'a = a * b' but this is clearly 
not the case!

Paul.

++
 from numpy import *
 a = arange(10)
 b = linspace(0,1,10)
 a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
 b
array([ 0.    ,  0.1111,  0.2222,  0.3333,  0.4444,
    0.5556,  0.6667,  0.7778,  0.8889,  1.    ])
 a * b
array([ 0.    ,  0.1111,  0.4444,  1.    ,  1.7778,
    2.7778,  4.    ,  5.4444,  7.1111,  9.    ])
 a *= b
 a
array([0, 0, 0, 1, 1, 2, 4, 5, 7, 9])




Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Angus McMorland
On 16 March 2011 09:24, Paul Anton Letnes paul.anton.let...@gmail.com wrote:
 Hi!

 This little snippet of code tricked me (in a more convoluted form). The *= 
 operator does not change the datatype of the left hand side array. Is this 
 intentional? It did fool me and throw my results quite a bit off. I always 
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is 
 clearly not the case!

This is intentional: a *= b works in-place, i.e. it's the equivalent
not of a = a * b, but of a[:] = a * b.

Angus.

 Paul.

 ++
 from numpy import *
 a = arange(10)
 b = linspace(0,1,10)
 a
 array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
 b
 array([ 0.        ,  0.1111,  0.2222,  0.3333,  0.4444,
       0.5556,  0.6667,  0.7778,  0.8889,  1.        ])
 a * b
 array([ 0.        ,  0.1111,  0.4444,  1.        ,  1.7778,
       2.7778,  4.        ,  5.4444,  7.1111,  9.        ])
 a *= b
 a
 array([0, 0, 0, 1, 1, 2, 4, 5, 7, 9])






-- 
AJC McMorland
Post-doctoral research fellow
Neurobiology, University of Pittsburgh


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Dag Sverre Seljebotn
On 03/16/2011 02:24 PM, Paul Anton Letnes wrote:
 Hi!

 This little snippet of code tricked me (in a more convoluted form). The *= 
 operator does not change the datatype of the left hand side array. Is this 
 intentional? It did fool me and throw my results quite a bit off. I always 
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is 
 clearly not the case!


In [1]: a = np.ones(5)

In [2]: b = a

In [3]: c = a * 2

In [4]: b
Out[4]: array([ 1.,  1.,  1.,  1.,  1.])

In [5]: a *= 2

In [6]: b
Out[6]: array([ 2.,  2.,  2.,  2.,  2.])


-- Dag


Re: [Numpy-discussion] Multiple Linear Regression

2011-03-16 Thread Skipper Seabold
On Wed, Mar 16, 2011 at 2:53 AM, dileep kunjaai dileepkunj...@gmail.com wrote:
 Dear sir,
  Can we do multiple linear regression(MLR)  in python is there any
 inbuilt function for MLR


You might be interested in statsmodels

http://statsmodels.sourceforge.net/
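
For reference, a pure-NumPy sketch of MLR with an intercept term via np.linalg.lstsq (statsmodels wraps this kind of fit in a much richer API); the coefficients and data below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.standard_normal((n, 3))                      # three predictors
beta = np.array([3.0, 2.0, 1.0])                     # illustrative true slopes
y = 0.5 + X @ beta + 0.01 * rng.standard_normal(n)   # 0.5 is the intercept

A = np.column_stack([np.ones(n), X])                 # prepend a ones column for the intercept
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
# coef[0] recovers the intercept (~0.5), coef[1:] the slopes (~[3, 2, 1])
```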

Skipper


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Paul Anton Letnes
Heisann!

On 16. mars 2011, at 14.30, Dag Sverre Seljebotn wrote:

 On 03/16/2011 02:24 PM, Paul Anton Letnes wrote:
 Hi!
 
 This little snippet of code tricked me (in a more convoluted form). The *= 
 operator does not change the datatype of the left hand side array. Is this 
 intentional? It did fool me and throw my results quite a bit off. I always 
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is 
 clearly not the case!
 
 
 In [1]: a = np.ones(5)
Here, a is numpy.float64:
 numpy.ones(5).dtype
dtype('float64')

 In [2]: b = a
 
 In [3]: c = a * 2
 
 In [4]: b
 Out[4]: array([ 1.,  1.,  1.,  1.,  1.])
 
 In [5]: a *= 2
So since a is already float, and b is the same object as a, the resulting a and 
b are of course floats.
 
 In [6]: b
 Out[6]: array([ 2.,  2.,  2.,  2.,  2.])
 
This is not the case I am describing, as in my case, a was of dtype integer.
Or did I miss something?

Paul.


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Charles R Harris
On Wed, Mar 16, 2011 at 7:24 AM, Paul Anton Letnes 
paul.anton.let...@gmail.com wrote:

 Hi!

 This little snippet of code tricked me (in a more convoluted form). The *=
 operator does not change the datatype of the left hand side array. Is this
 intentional? It did fool me and throw my results quite a bit off. I always
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is
 clearly not the case!


Yes, it is intentional. NumPy is more C than Python in this case: it
actually does the multiplication in place, so the result must have the
same type as the left-hand side array. For immutable types, Python
instead just creates a new object.
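
A minimal demonstration of the difference (standard NumPy only). Note that NumPy later tightened this behavior: in current releases an integer-array `a *= b` with a float `b` raises a casting error rather than silently truncating, so the 2011 behavior is reproduced below with an explicit unsafe in-place cast:

```python
import numpy as np

a = np.arange(4)                      # integer dtype
b = np.full(4, 0.5)

c = a * b                             # rebinding form: a brand-new float64 array
assert c.dtype == np.float64          # [0. , 0.5, 1. , 1.5]

# The old silent truncation of 'a *= b' corresponds to an explicit
# unsafe cast back into a's integer buffer:
np.multiply(a, b, out=a, casting='unsafe')
assert a.tolist() == [0, 0, 1, 1]     # fractional parts discarded
```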

snip

Chuck


Re: [Numpy-discussion] When was the ddof kwarg added to std()?

2011-03-16 Thread Charles R Harris
On Wed, Mar 16, 2011 at 6:52 AM, Darren Dale dsdal...@gmail.com wrote:

 Does anyone know when the ddof kwarg was added to std()? Has it always
 been there?


IIRC, a few years back there was a long thread on the list about unbiased
vs biased estimates of std, and ddof may have been added then... or it may
have been there from the beginning. I don't know.
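
For anyone landing here from a search: ddof shifts the divisor from N to N - ddof, so ddof=1 gives the square root of the unbiased sample variance. A quick sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = x.size
ss = ((x - x.mean()) ** 2).sum()

# Default ddof=0: population (biased) estimate, divide by N
assert np.isclose(np.std(x), np.sqrt(ss / n))

# ddof=1: unbiased sample variance, divide by N - 1
assert np.isclose(np.std(x, ddof=1), np.sqrt(ss / (n - 1)))
```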

Chuck


Re: [Numpy-discussion] When was the ddof kwarg added to std()?

2011-03-16 Thread Darren Dale
On Wed, Mar 16, 2011 at 9:10 AM, Scott Sinclair
scott.sinclair...@gmail.com wrote:
 On 16 March 2011 14:52, Darren Dale dsdal...@gmail.com wrote:
 Does anyone know when the ddof kwarg was added to std()? Has it always
 been there?

 Does 'git log --grep=ddof' help?

Yes: March 7, 2008

Thanks


Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Neal Becker
Sturla Molden wrote:

 Den 16.03.2011 13:25, skrev Sturla Molden:

 Fortran 90 pointers create a view.

 real*8, target :: array(n,m)
 real*8, pointer :: view

 view =  array(::2, ::2)
 
 Pardon, the second line should be
 
 real*8, pointer :: view(:,:)
 
 
 Sturla

Also:
* can it adopt external memory?
* can it interwork with numpy? (kinda required for this audience)



Re: [Numpy-discussion] convert dictionary of arrays into single array

2011-03-16 Thread John Salvatier
Loop through to build a list of arrays, then use vstack on the list.
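
A sketch of that approach for 1-D structured arrays keyed 0...n-1 (the field names here are hypothetical); np.concatenate keeps the structured layout, and vstack would work similarly for 2-D arrays:

```python
import numpy as np

dt = np.dtype([('x', np.float64), ('y', np.float64)])   # hypothetical 'headings'
d = {i: np.zeros(i + 1, dtype=dt) for i in range(3)}    # same dtype, different lengths

# Build a list in key order, then concatenate once
out = np.concatenate([d[k] for k in range(len(d))])
assert out.dtype == dt and out.shape == (6,)            # 1 + 2 + 3 rows total
```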

On Wed, Mar 16, 2011 at 1:36 AM, John washa...@gmail.com wrote:

 Hello,

 I have a dictionary with structured arrays, keyed by integers 0...n.
 There are no other keys in the dictionary.

 What is the most efficient way to convert the dictionary of arrays to
 a single array?

 All the arrays have the same 'headings' and width, but different lengths.

 Is there something I can do that would be more efficient than looping
 through and using np.concatenate (vstack)?
 --john



Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Yung-Yu Chen
On Wed, Mar 16, 2011 at 09:46, Neal Becker ndbeck...@gmail.com wrote:

 Sturla Molden wrote:

  Den 16.03.2011 13:25, skrev Sturla Molden:
 
  Fortran 90 pointers create a view.
 
  real*8, target :: array(n,m)
  real*8, pointer :: view
 
  view =  array(::2, ::2)
 
  Pardon, the second line should be
 
  real*8, pointer :: view(:,:)
 
 
  Sturla

 Also:
 * can it adopt external memory?


What is external memory?


 * can it interwork with numpy? (kinda required for this audience)


You can use f2py, SWIG, ctypes, or a hand-written Python extension to invoke
Fortran code, pass NumPy arrays to it, and get them back.  I am not sure
whether Cython has Fortran support, but Cython can be used as a simplified
way to write a Python extension.

Usually NumPy arrays are allocated on the heap.  This way we can minimize
allocate()/deallocate() calls inside Fortran subroutines and let Python do
the memory management.

Fortran pointers are more like a view, rather than C pointers, which point
to the beginning of a memory block.
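
One practical detail when handing NumPy arrays to Fortran (e.g. through f2py): NumPy defaults to C (row-major) order while Fortran expects column-major, and wrappers will typically copy when the orders don't match. Requesting Fortran order up front avoids the hidden copy; a plain NumPy sketch:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)        # C-ordered (row-major) by default
assert a.flags['C_CONTIGUOUS']

f = np.asfortranarray(a)                # column-major layout, as Fortran expects
assert f.flags['F_CONTIGUOUS']
assert np.array_equal(a, f)             # same values, different memory layout
```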

yyc





-- 
Yung-Yu Chen
PhD candidate of Mechanical Engineering
The Ohio State University, Columbus, Ohio
+1 (614) 859 2436
http://solvcon.net/yyc/


Re: [Numpy-discussion] convert dictionary of arrays into single array

2011-03-16 Thread Bruce Southey

On 03/16/2011 08:56 AM, John Salvatier wrote:

Loop through to build a list of arrays, then use vstack on the list.

On Wed, Mar 16, 2011 at 1:36 AM, John washa...@gmail.com 
mailto:washa...@gmail.com wrote:


Hello,

I have a dictionary with structured arrays, keyed by integers 0...n.
There are no other keys in the dictionary.

What is the most efficient way to convert the dictionary of arrays to
a single array?

All the arrays have the same 'headings' and width, but different
lengths.

Is there something I can do that would be more efficient than looping
through and using np.concatenate (vstack)?
--john



NumPy does not permit a 'single' array of different shapes - i.e. a 
'ragged array'.
Sure, you could convert this into a structured array (you know n, so you 
can create an appropriately sized empty structured array and fill it by 
looping across the dict), but that is still not a 'single' array. You can 
use a masked array in which you mask the missing elements across your arrays.


Francesc Alted pointed out in the 'ragged array implimentation' thread 
(http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055219.html) that 
pytables does support this.


Bruce


Re: [Numpy-discussion] convert dictionary of arrays into single array

2011-03-16 Thread John Salvatier
I think he wants to stack them (they have the same widths), so stacking
should be fine.

On Wed, Mar 16, 2011 at 7:30 AM, Bruce Southey bsout...@gmail.com wrote:

  On 03/16/2011 08:56 AM, John Salvatier wrote:

 Loop through to build a list of arrays, then use vstack on the list.

 On Wed, Mar 16, 2011 at 1:36 AM, John washa...@gmail.com wrote:

 Hello,

 I have a dictionary with structured arrays, keyed by integers 0...n.
 There are no other keys in the dictionary.

 What is the most efficient way to convert the dictionary of arrays to
 a single array?

 All the arrays have the same 'headings' and width, but different lengths.

 Is there something I can do that would be more efficient than looping
 through and using np.concatenate (vstack)?
 --john


 Numpy does not permit a 'single' array of different shapes - ie a 'ragged
 array'.
 Sure you could convert this into a structured array (you know n so you can
 create an appropriate empty structured array and assign the array by looping
 across the dict) but that is still not a 'single' array. You can use a
 masked array where you masked the missing elements across your arrays.

 Francesc Alted pointed out in the 'ragged array implimentation' thread (
 http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055219.html)
 that pytables does support this.

 Bruce





Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 01:18, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 Running the test suite for one of our libraries, there seems to have
 been a recent breakage of the behavior of dtype hashing.

 This script:

 import numpy as np

 data0 = np.arange(10)
 data1 = data0 - 10

 dt0 = data0.dtype
 dt1 = data1.dtype

 assert dt0 == dt1 # always passes
 assert hash(dt0) == hash(dt1) # fails on latest

 fails on the current latest-ish - aada93306  and passes on a stock 1.5.0.

 Is this expected?

According to git log hashdescr.c, nothing has changed in the
implementation of the hash function since Oct 31, before numpy 1.5.1
which also passes the second test. I'm not sure what would be causing
the difference in HEAD.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Chris Barker
On 3/16/11 6:34 AM, Charles R Harris wrote:
 On Wed, Mar 16, 2011 at 7:24 AM, Paul Anton Letnes

 Yes, it is intentional. Numpy is more C than Python in this case,

I don't know that C has anything to do with it -- the *= operators were 
added specifically to be in-place operators -- otherwise they would be 
nothing but syntactic sugar. And IIRC, numpy was one of the motivators.

IMHO, the mistake was even allowing += and friends for immutables, as 
that inherently means something different.

Of course, using += with integers is probably the most common case.

-Chris





Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Charles R Harris
On Wed, Mar 16, 2011 at 8:46 AM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 01:18, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  Running the test suite for one of our libraries, there seems to have
  been a recent breakage of the behavior of dtype hashing.
 
  This script:
 
  import numpy as np
 
  data0 = np.arange(10)
  data1 = data0 - 10
 
  dt0 = data0.dtype
  dt1 = data1.dtype
 
  assert dt0 == dt1 # always passes
  assert hash(dt0) == hash(dt1) # fails on latest
 
  fails on the current latest-ish - aada93306  and passes on a stock 1.5.0.
 
  Is this expected?

 According to git log hashdescr.c, nothing has changed in the
 implementation of the hash function since Oct 31, before numpy 1.5.1
 which also passes the second test. I'm not sure what would be causing
 the difference in HEAD.


The 1.5.1 branch was based on 1.5.x, not master.

Chuck


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Dag Sverre Seljebotn
On 03/16/2011 02:35 PM, Paul Anton Letnes wrote:
 Heisann!

Hei der,

 On 16. mars 2011, at 14.30, Dag Sverre Seljebotn wrote:

 On 03/16/2011 02:24 PM, Paul Anton Letnes wrote:
 Hi!

 This little snippet of code tricked me (in a more convoluted form). The *= 
 operator does not change the datatype of the left hand side array. Is this 
 intentional? It did fool me and throw my results quite a bit off. I always 
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is 
 clearly not the case!

 In [1]: a = np.ones(5)
 Here, a is numpy.float64:
 numpy.ones(5).dtype
 dtype('float64')

 In [2]: b = a

 In [3]: c = a * 2

 In [4]: b
 Out[4]: array([ 1.,  1.,  1.,  1.,  1.])

 In [5]: a *= 2
 So since a is already float, and b is the same object as a, the resulting a 
 and b are of course floats.
 In [6]: b
 Out[6]: array([ 2.,  2.,  2.,  2.,  2.])

 This is not the case I am describing, as in my case, a was of dtype integer.
 Or did I miss something?

I was just trying to demonstrate that it is NOT the case that a = a * 2 
is exactly the same as a *= 2. If you assume that the two statements 
are the same, then it does not make sense that b = [1, 1, ...] the first 
time around, but b = [2, 2, 2...] the second time around. And in trying 
to figure out why that happened, perhaps you'd see how it all fits 
together...

OK, it perhaps wasn't a very good explanation...

Dag Sverre


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Paul Anton Letnes

On 16. mars 2011, at 15.49, Chris Barker wrote:

 On 3/16/11 6:34 AM, Charles R Harris wrote:
 On Wed, Mar 16, 2011 at 7:24 AM, Paul Anton Letnes
 
 Yes, it is intentional. Numpy is more C than Python in this case,
 
 I don't know that C has anything to do with it -- the *= operators were 
 added specifically to be in-place operators -- otherwise they would be 
 nothing but syntactic sugar. And IIRC, numpy was one of the motivators.
 
 IMHO, the mistake was even allowing += and friends for immutables, as 
 that inherently means something different.
 
 Of course, using += with integers is probably the most common case.
 
 -Chris

I see. In that case, I have the following either/or Christmas wish:

Either: implement a warning along the following lines:
 from numpy import *
 a = zeros(10, dtype=complex)
 a.astype(float)
/Users/paulanto/Library/Python/2.7/bin/bpython:2: ComplexWarning: Casting 
complex values to real discards the imaginary part
  # EASY-INSTALL-ENTRY-SCRIPT: 'bpython==0.9.7.1','console_scripts','bpython'
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])

Or: give me a hint how and where to change the numpy code, and I could try to 
write a patch.
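
In the meantime, the complex-to-real case shown above can at least be detected (or escalated to a hard error) with the stdlib warnings machinery; a sketch, matching on the warning message seen in the output above:

```python
import warnings
import numpy as np

a = np.zeros(3, dtype=complex)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    b = a.astype(float)                  # discards the imaginary parts

assert b.dtype == np.float64
# The ComplexWarning fired and was captured:
assert any("imaginary" in str(w.message) for w in caught)
```

With warnings.simplefilter("error") instead, the cast would raise, which is handy in test suites.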

Paul.



Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Paul Anton Letnes

On 16. mars 2011, at 15.57, Dag Sverre Seljebotn wrote:

 On 03/16/2011 02:35 PM, Paul Anton Letnes wrote:
 Heisann!
 
 Hei der,
 
 On 16. mars 2011, at 14.30, Dag Sverre Seljebotn wrote:
 
 On 03/16/2011 02:24 PM, Paul Anton Letnes wrote:
 Hi!
 
 This little snippet of code tricked me (in a more convoluted form). The *= 
 operator does not change the datatype of the left hand side array. Is this 
 intentional? It did fool me and throw my results quite a bit off. I always 
 assumed that 'a *= b' means exactly the same as 'a = a * b' but this is 
 clearly not the case!
 
 In [1]: a = np.ones(5)
 Here, a is numpy.float64:
 numpy.ones(5).dtype
 dtype('float64')
 
 In [2]: b = a
 
 In [3]: c = a * 2
 
 In [4]: b
 Out[4]: array([ 1.,  1.,  1.,  1.,  1.])
 
 In [5]: a *= 2
 So since a is already float, and b is the same object as a, the resulting a 
 and b are of course floats.
 In [6]: b
 Out[6]: array([ 2.,  2.,  2.,  2.,  2.])
 
 This is not the case I am describing, as in my case, a was of dtype integer.
 Or did I miss something?
 
 I was just trying to demonstrate that it is NOT the case that a = a * 2 
 is exactly the same as a *= 2. If you assume that the two statements 
 are the same, then it does not make sense that b = [1, 1, ...] the first 
 time around, but b = [2, 2, 2...] the second time around. And in trying 
 to figure out why that happened, perhaps you'd see how it all fits 
 together...
 
 OK, it perhaps wasn't a very good explanation...
 
 Dag Sverre

I see, I misunderstood your point. That's another interesting aspect of this, 
though.

Paul.

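Dag's point can be condensed into a runnable sketch (the names `a`, `b`, `c`, `i` are illustrative only): `a *= 2` writes into the existing buffer, so every alias sees the change and the left-hand dtype is preserved, while `a * 2` allocates a fresh array.

```python
import numpy as np

a = np.ones(5)             # float64
b = a                      # b is another name for the SAME array object
c = a * 2                  # allocates a NEW array; a and b are untouched

a *= 2                     # in-place: writes into the buffer shared with b

assert b is a              # still the same object
assert float(b[0]) == 2.0  # b sees the in-place change

# The in-place form also keeps the dtype of the left-hand side:
i = np.ones(5, dtype=np.int64)
i *= 2                     # stays int64 ('i = i * 2.5' would yield float64)
assert i.dtype == np.int64
```

This is exactly why `a = a * b` and `a *= b` are not interchangeable once other references to `a` exist or the operands' dtypes differ.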


Re: [Numpy-discussion] [OT] any image io module that works with python3?

2011-03-16 Thread René Dudfield
pygame

On Sat, Mar 12, 2011 at 9:08 AM, Nadav Horesh nad...@visionsense.comwrote:

   Having numpy, scipy, and matplotlib working reasonably with python3, a
 major piece of code I miss for a major python3 migration is image IO. I
 found that pylab's imread works fine for png images, but I need to read all
 the other image formats, as well as write png and jpeg output.

  Any hints (including advice on how to easily construct my own module) are
 appreciated.

Nadav.



Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Charles R Harris
On Wed, Mar 16, 2011 at 8:56 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Mar 16, 2011 at 8:46 AM, Robert Kern robert.k...@gmail.comwrote:

 On Wed, Mar 16, 2011 at 01:18, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  Running the test suite for one of our libraries, there seems to have
  been a recent breakage of the behavior of dtype hashing.
 
  This script:
 
  import numpy as np
 
  data0 = np.arange(10)
  data1 = data0 - 10
 
  dt0 = data0.dtype
  dt1 = data1.dtype
 
  assert dt0 == dt1 # always passes
  assert hash(dt0) == hash(dt1) # fails on latest
 
  fails on the current latest-ish - aada93306  and passes on a stock
 1.5.0.
 
  Is this expected?

 According to git log hashdescr.c, nothing has changed in the
 implementation of the hash function since Oct 31, before numpy 1.5.1
 which also passes the second test. I'm not sure what would be causing
 the difference in HEAD.


 The 1.5.1 branch was based on 1.5.x, not master.


David's change isn't in 1.5.x, so apparently it wasn't backported.

Chuck


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Charles R Harris
On Wed, Mar 16, 2011 at 8:58 AM, Paul Anton Letnes 
paul.anton.let...@gmail.com wrote:


 On 16. mars 2011, at 15.49, Chris Barker wrote:

  On 3/16/11 6:34 AM, Charles R Harris wrote:
  On Wed, Mar 16, 2011 at 7:24 AM, Paul Anton Letnes
 
  Yes, it is intentional. Numpy is more C than Python in this case,
 
  I don't know that C has anything to do with it -- the *= operators were
  added specifically to be in-place operators -- otherwise they would be
  nothing but syntactic sugar. And IIRC, numpy was one of the motivators.
 
  IMHO, the mistake was even allowing += and friends for immutables, as
  that inherently means something different.
 
  Of course, using += with integers is probably the most common case.
 
  -Chris

 I see. In that case, I have the following either/or christmas wish:

 Either: implement a warning along the following lines:
  from numpy import *
  a = zeros(10, dtype=complex)
  a.astype(float)
 /Users/paulanto/Library/Python/2.7/bin/bpython:2: ComplexWarning: Casting
 complex values to real discards the imaginary part
  # EASY-INSTALL-ENTRY-SCRIPT:
 'bpython==0.9.7.1','console_scripts','bpython'
 array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.])


This comes up for discussion on a fairly regular basis. I tend towards the
more warnings side myself, but you aren't going to get the current behavior
changed unless you can convince a large bunch of people that it is the right
thing to do, which won't be easy. For one thing, a lot of current code in
the wild would begin to raise warnings that weren't there before.

Or: give me a hint how and where to change the numpy code, and I could try
 to write a patch.


You have to go down to the C level to deal with this.

Chuck
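The warning Paul wishes for does exist for explicit `astype` calls. A version-agnostic check (avoiding a direct import of `ComplexWarning`, whose namespace has moved between NumPy releases):

```python
import warnings
import numpy as np

a = np.zeros(3, dtype=complex)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    b = a.astype(float)  # complex -> real cast emits ComplexWarning

assert b.dtype == np.float64
assert any(w.category.__name__ == "ComplexWarning" for w in caught)
```

With `warnings.simplefilter("error")` the same cast can be promoted to an exception, which helps locate the silent truncations discussed in this thread.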


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-16 Thread Mark Sienkiewicz

 In that case, would you agree that it is a bug for
 assert_array_almost_equal to use repr() to display the arrays, since it
 is printing identical values and saying they are different?  Or is there
 also a reason to do that?
 

 It should probably use np.array_repr(x, precision=16)
   


Ok, thanks - I see the issue.  I'll enter a ticket for an enhancement 
request for assert_array_almost_equal.

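The suggested fix can be seen directly: at the default 8 digits, two nearby arrays can repr identically (which is what makes the failure message look contradictory), while `precision=16` distinguishes them. A minimal sketch:

```python
import numpy as np

x = np.array([1 / 3])
y = np.array([1 / 3 + 1e-10])  # differs from x beyond the 8th digit

# Default repr rounds to 8 fractional digits, so both print the same:
assert np.array_repr(x) == np.array_repr(y)

# precision=16 exposes the difference:
assert np.array_repr(x, precision=16) != np.array_repr(y, precision=16)
```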


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 10:27, Charles R Harris
charlesr.har...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 8:56 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


 On Wed, Mar 16, 2011 at 8:46 AM, Robert Kern robert.k...@gmail.com
 wrote:

 On Wed, Mar 16, 2011 at 01:18, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  Running the test suite for one of our libraries, there seems to have
  been a recent breakage of the behavior of dtype hashing.
 
  This script:
 
  import numpy as np
 
  data0 = np.arange(10)
  data1 = data0 - 10
 
  dt0 = data0.dtype
  dt1 = data1.dtype
 
  assert dt0 == dt1 # always passes
  assert hash(dt0) == hash(dt1) # fails on latest
 
  fails on the current latest-ish - aada93306  and passes on a stock
  1.5.0.
 
  Is this expected?

 According to git log hashdescr.c, nothing has changed in the
 implementation of the hash function since Oct 31, before numpy 1.5.1
 which also passes the second test. I'm not sure what would be causing
 the difference in HEAD.


 The 1.5.1 branch was based on 1.5.x, not master.


 David's change isn't in 1.5.x, so apparently it wasn't backported.

Hmm. It works just before and just after that change, so the problem
is somewhere else.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Paul Anton Letnes
 
 This comes up for discussion on a fairly regular basis. I tend towards the 
 more warnings side myself, but you aren't going to get the current behavior 
 changed unless you can convince a large bunch of people that it is the right 
 thing to do, which won't be easy. For one thing, a lot of current code in the 
 wild would begin to raise warnings that weren't there before.
That could also be a good thing for locating bugs, right?

 Or: give me a hint how and where to change the numpy code, and I could try to 
 write a patch.
 
 
 You have to go down to the C level to deal with this.

I guess code containing such warnings must exist in other parts of the numpy 
library?

Paul.


[Numpy-discussion] Numpy 1.5.1 test failures

2011-03-16 Thread Mark Dixon
Hi,

Sorry if this is a noob question, but I've been trying to install Numpy 
for a while now and I keep having problems getting it to pass the test 
suite.

I'm on a RHEL5 system, building against Python 2.6.5 (self-built 
with GCC 4.1.2), gfortran 4.1.2 and MKL 10.3 Update 2 (shipped with Intel 
compiler 2011.2.137).

I'm failing the following tests (see below for full output):

   FAIL: test_complex (test_numeric.TestCorrelate)
   FAIL: test_complex (test_numeric.TestCorrelateNew)

The imaginary components for the results of test_numeric.TestCorrelateNew 
have the correct magnitude but the wrong sign, and 
test_numeric.TestCorrelate is just wrong wrong wrong.

Is this a known issue? searching for test_complex in the mail archive 
didn't find anything.

Thanks,

Mark
-- 
-
Mark Dixon   Email: m.c.di...@leeds.ac.uk
HPC/Grid Systems Support Tel (int): 35429
Information Systems Services Tel (ext): +44(0)113 343 5429
University of Leeds, LS2 9JT, UK
-

==
FAIL: test_complex (test_numeric.TestCorrelate)
--
Traceback (most recent call last):
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/testing/decorators.py,
 line 257, in _deprecated_imp
 f(*args, **kwargs)
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/core/tests/test_numeric.py
 , line 942, in test_complex
 assert_array_almost_equal(z, r_z)
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 774 , in assert_array_almost_equal
 header='Arrays are not almost equal')
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 618 , in assert_array_compare
 raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
  x: array([  1.1401e-312 +2.83449967e-316j,
  2.80975212e-316 +4.94065646e-324j,
  2.81612043e-316 +2.81612082e-316j,...
  y: array([ 3.+1.j,  6.+0.j,  8.-1.j,  9.+1.j, -1.-8.j, -4.-1.j])

==
FAIL: test_complex (test_numeric.TestCorrelateNew)
--
Traceback (most recent call last):
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/core/tests/test_numeric.py
 , line 963, in test_complex
 assert_array_almost_equal(z, r_z)
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 774 , in assert_array_almost_equal
 header='Arrays are not almost equal')
   File 
/nobackup/issmcd/pybuild2.6.5/lib/python2.6/site-packages/numpy/testing/utils.py,
 line 618 , in assert_array_compare
 raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 83.33%)
  x: array([ -4.e+000 -1.e+000j,
 -5.e+000 +8.e+000j,
  1.1000e+001 +5.e+000j,...
  y: array([ -4.+1.j,  -5.-8.j,  11.-5.j,   8.-1.j,   6.-0.j,   3.+1.j])

--
Ran 3006 tests in 63.290s

FAILED (KNOWNFAIL=4, failures=2)

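Independent of the MKL build question, the convention the failing tests exercise is worth stating: the current `np.correlate` conjugates its second argument (the signal-processing definition; the older deprecated mode did not), so a zero-lag autocorrelation is real. A small sanity check:

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])

# valid-mode correlation of a with itself is sum(a_k * conj(a_k)),
# i.e. the sum of squared magnitudes -- real and positive.
z = np.correlate(a, a, mode="valid")

assert np.allclose(z, np.sum(np.abs(a) ** 2))  # 5 + 10 = 15
assert np.allclose(z.imag, 0.0)
```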


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Matthew Brett
Hi,

On Wed, Mar 16, 2011 at 9:21 AM, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Mar 16, 2011 at 10:27, Charles R Harris
 charlesr.har...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 8:56 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


 On Wed, Mar 16, 2011 at 8:46 AM, Robert Kern robert.k...@gmail.com
 wrote:

 On Wed, Mar 16, 2011 at 01:18, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  Running the test suite for one of our libraries, there seems to have
  been a recent breakage of the behavior of dtype hashing.
 
  This script:
 
  import numpy as np
 
  data0 = np.arange(10)
  data1 = data0 - 10
 
  dt0 = data0.dtype
  dt1 = data1.dtype
 
  assert dt0 == dt1 # always passes
  assert hash(dt0) == hash(dt1) # fails on latest
 
  fails on the current latest-ish - aada93306  and passes on a stock
  1.5.0.
 
  Is this expected?

 According to git log hashdescr.c, nothing has changed in the
 implementation of the hash function since Oct 31, before numpy 1.5.1
 which also passes the second test. I'm not sure what would be causing
 the difference in HEAD.


 The 1.5.1 branch was based on 1.5.x, not master.


 David's change isn't in 1.5.x, so apparently it wasn't backported.

 Hmm. It works just before and just after that change, so the problem
 is somewhere else.

I can git-bisect it later in the day, will do so unless it's become
clear in the meantime.

Thanks,

Matthew


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 11:43, Matthew Brett matthew.br...@gmail.com wrote:

 I can git-bisect it later in the day, will do so unless it's become
 clear in the meantime.

I'm almost done bisecting.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 11:55, Robert Kern robert.k...@gmail.com wrote:
 On Wed, Mar 16, 2011 at 11:43, Matthew Brett matthew.br...@gmail.com wrote:

 I can git-bisect it later in the day, will do so unless it's become
 clear in the meantime.

 I'm almost done bisecting.

6c6dc487ca15818d1f4cc764debb15d73a61c03b is the first bad commit
commit 6c6dc487ca15818d1f4cc764debb15d73a61c03b
Author: Mark Wiebe mwwi...@gmail.com
Date:   Thu Jan 20 20:41:03 2011 -0800

ENH: ufunc: Made the iterator ufunc default

:04 04 15033eb0c0e295161cd29a31677e7b88ac431143
ae077a44ccce0014e017537b31f53261495f870e M  numpy

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Mark Wiebe
On Wed, Mar 16, 2011 at 10:00 AM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 11:55, Robert Kern robert.k...@gmail.com wrote:
  On Wed, Mar 16, 2011 at 11:43, Matthew Brett matthew.br...@gmail.com
 wrote:
 
  I can git-bisect it later in the day, will do so unless it's become
  clear in the meantime.
 
  I'm almost done bisecting.

 6c6dc487ca15818d1f4cc764debb15d73a61c03b is the first bad commit
 commit 6c6dc487ca15818d1f4cc764debb15d73a61c03b
 Author: Mark Wiebe mwwi...@gmail.com
 Date:   Thu Jan 20 20:41:03 2011 -0800

ENH: ufunc: Made the iterator ufunc default

 :04 04 15033eb0c0e295161cd29a31677e7b88ac431143
 ae077a44ccce0014e017537b31f53261495f870e M  numpy


I'm guessing this is another case where the type numbers being ambiguous is
the problem. On my 64-bit system:

  np.dtype(np.int) == np.dtype(np.long)
True
 hash(np.dtype(np.int)) == hash(np.dtype(np.long))
False
 np.dtype(np.int).num
7
 np.dtype(np.long).num
9

On a 32-bit system, types 5 and 7 are similarly aliased. By modifying the
example slightly, possibly just switching the data0 - 10 to 10 + data0,
1.5 probably will fail this test as well.

-Mark
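For reference, the contract being violated is Python's requirement that objects which compare equal must hash equal. On a build with the fix, the reproducer (including the `10 + data0` variant Mark mentions) passes:

```python
import numpy as np

data0 = np.arange(10)
data1 = 10 + data0

dt0, dt1 = data0.dtype, data1.dtype

assert dt0 == dt1              # equal dtypes ...
assert hash(dt0) == hash(dt1)  # ... must hash equal
```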


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 12:15, Mark Wiebe mwwi...@gmail.com wrote:
 On Wed, Mar 16, 2011 at 10:00 AM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 11:55, Robert Kern robert.k...@gmail.com wrote:
  On Wed, Mar 16, 2011 at 11:43, Matthew Brett matthew.br...@gmail.com
  wrote:
 
  I can git-bisect it later in the day, will do so unless it's become
  clear in the meantime.
 
  I'm almost done bisecting.

 6c6dc487ca15818d1f4cc764debb15d73a61c03b is the first bad commit
 commit 6c6dc487ca15818d1f4cc764debb15d73a61c03b
 Author: Mark Wiebe mwwi...@gmail.com
 Date:   Thu Jan 20 20:41:03 2011 -0800

    ENH: ufunc: Made the iterator ufunc default

 :04 04 15033eb0c0e295161cd29a31677e7b88ac431143
 ae077a44ccce0014e017537b31f53261495f870e M      numpy

 I'm guessing this is another case where the type numbers being ambiguous is
 the problem. On my 64-bit system:
   np.dtype(np.int) == np.dtype(np.long)
 True
 hash(np.dtype(np.int)) == hash(np.dtype(np.long))
 False
 np.dtype(np.int).num
 7
 np.dtype(np.long).num
 9
 On a 32-bit system, types 5 and 7 are similarly aliased. By modifying the
 example slightly, possibly just switching the data0 - 10 to 10 + data0,
 1.5 probably will fail this test as well.

Correct.

Should we even include the type_num in the key to be hashed for the
builtin dtypes? They are not used in the tp_richcompare comparison,
only the kind and el_size.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Ravi
On Monday 14 March 2011 15:02:32 Sebastian Haase wrote:
 Sturla has been writing so much about Fortran recently, and Ondrej now
 says he has done the move from C/C++ to Fortran -- I thought Fortran
 was dead ... !?   ;-)
 What am I missing here ?

Comparing Fortran with C++ is like comparing Matlab with Python. Fortran is 
very good at what it does but it does not serve the same audience as C++. The 
typical comparisons work like this:

1. C++ does not have a standard array type but Fortran does. Python does not 
have a standard numerical array type either; numpy is nice, but it is an 
external package. The point is that you pick the external array type that 
works well for you in Python/C++ but you use the built-in array type in 
Matlab/Fortran. Whether the flexibility is a problem or a godsend is entirely 
up to you. For modeling high-performance algorithms in silicon, usually in 
fixed-point, Fortran/Matlab are worthless, but C++ (and to some extent, 
python) works very well.

2. C++ (resp. python+numpy) does not perform as well as Fortran (resp. 
Matlab). The issue with C++ is that aliasing is allowed, but virtually all 
compilers will allow you to use restrict to get almost the same benefits as 
Fortran. The analogous python+numpy issue is that it is very hard to create a 
good JIT (unladen-swallow was, at best, a partial success) for python while 
Matlab has a very nice JIT (and an even better integration with the java 
virtual machine).

3. Template metaprogramming makes my head hurt. (Equivalently, python 
generators and decorators make my head hurt.) Neither template 
metaprogramming nor python generators/decorators are *required* for most 
scientific programming tasks, especially when you use libraries designed to 
shield you from such details. However, knowledge and use of those techniques 
makes one much more productive in those corner cases where they are useful.

4. I do not need all the extra stuff that C++ (resp. python) provides 
compared to Fortran (resp. Matlab). C++/python are industrial strength 
programming languages which serve a large audience, where each interested 
niche group can create efficient libraries unhindered by the orientation of 
the language itself. Fortran/Matlab are numerical languages with extra general 
purpose stuff tacked on. Building large-scale Fortran/Matlab programs is 
possible (see the Joint Strike Fighter models in Matlab or any large-scale 
Fortran application), but the lack of a real programming language is a pain 
whose intensity increases exponentially with the size of the codebase.

Another way it is usually phrased: I will use Fortran's (resp. Matlab's) 
integration with Python (resp. Java) when I need a real programming language.

5. OOP/generic programming are useless for most scientific programming. This 
is usually just elitism. (While I personally do not believe that OO is worth 
one percent of the hype, I do believe that generic programming, as practiced 
in Python and C++, are pretty much our primary hope for reusable software.) When 
not elitism, it is a measure of the immaturity of computational science (some 
would say scientists), where large repositories of reusable code are few and 
far between. OO, generic programming, and functional programming are the only 
techniques of which I am aware for building large scale programs with 
manageable complexity.

I would take any Fortran hype with large grains of salt.

Regards,
Ravi



Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Dag Sverre Seljebotn
On 03/16/2011 08:10 PM, Ravi wrote:
 On Monday 14 March 2011 15:02:32 Sebastian Haase wrote:
 Sturla has been writing so much about Fortran recently, and Ondrej now
 says he has done the move from C/C++ to Fortran -- I thought Fortran
 was dead ... !?   ;-)
 What am I missing here ?
 Comparing Fortran with C++ is like comparing Matlab with Python. Fortran is
 very good at what it does but it does not serve the same audience as C++. The
 typical comparisons work like this:

snip

I think the main point being made by most here though is that *in 
combination with Python*, Fortran can be quite helpful. If one is using 
Python anyway for the high-level stuff, the relative strengths of C++ 
w.r.t. Fortran that you list become much less important. Same for 
code-reuse: When only used from a Python wrapper, the Fortran code can 
become so simplistic that it also becomes reusable.

Dag Sverre


Re: [Numpy-discussion] Fortran was dead ... [was Re:rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread baker . alexander

My two pence worth, my experience is across python, C++ and fortran (and a few 
other languages) and the posts here are interesting and relevant. I think that 
the true value of any of these languages is knowing any of them well, if you 
happen to work with other folks who share the same skills more the better. No 
more than that.

As a user of a very old large very fortran codebase as well as engineer of more 
structured approaches, I would take the OO toolset everytime, for reasons 
already covered.

The real challenge I see every day in the scientific community is the lack of 
software craftsmanship skills: code archiving, unit testing. End of two pence.

Alex
Sent from my BlackBerry® wireless device

-Original Message-
From: Dag Sverre Seljebotn d.s.seljeb...@astro.uio.no
Sender: numpy-discussion-boun...@scipy.org
Date: Wed, 16 Mar 2011 20:12:21 
To: Discussion of Numerical Pythonnumpy-discussion@scipy.org
Reply-To: Discussion of Numerical Python numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] Fortran was dead ... [was Re:
 rewriting  NumPy code in C or C++ or similar]

On 03/16/2011 08:10 PM, Ravi wrote:
 On Monday 14 March 2011 15:02:32 Sebastian Haase wrote:
 Sturla has been writing so much about Fortran recently, and Ondrej now
 says he has done the move from C/C++ to Fortran -- I thought Fortran
 was dead ... !?   ;-)
 What am I missing here ?
 Comparing Fortran with C++ is like comparing Matlab with Python. Fortran is
 very good at what it does but it does not serve the same audience as C++. The
 typical comparisons work like this:

snip

I think the main point being made by most here though is that *in 
combination with Python*, Fortran can be quite helpful. If one is using 
Python anyway for the high-level stuff, the relative strengths of C++ 
w.r.t. Fortran that you list become much less important. Same for 
code-reuse: When only used from a Python wrapper, the Fortran code can 
become so simplistic that it also becomes reusable.

Dag Sverre


[Numpy-discussion] Accessing elements of an object array

2011-03-16 Thread lists_ravi
Hi,
  How do I access elements of an object array? The object array was
created by scipy.io.loadmat from a MAT file. Here's an example:

In [10]: x
Out[10]:
array(array((7.399500875785845e-10, 7.721153414752673e-10, -0.984375),
  dtype=[('cl', '|O8'), ('tl', '|O8'), ('dagc', '|O8')]), dtype=object)

In [11]: x.shape, x.size
Out[11]: ((), 1)

In [12]: x.flat[0]['cl']
Out[12]: array(array(7.399500875785845e-10), dtype=object)

In [13]: x[0]
---
IndexErrorTraceback (most recent call last)

/src/ipython console in module()

IndexError: 0-d arrays can't be indexed



I am using numpy 1.4.1 on Linux x86_64, if that matters.

Regards,
Ravi





Re: [Numpy-discussion] Accessing elements of an object array

2011-03-16 Thread Robert Kern
On Wed, Mar 16, 2011 at 15:18,  lists_r...@lavabit.com wrote:
 Hi,
  How do I access elements of an object array? The object array was
 created by scipy.io.loadmat from a MAT file. Here's an example:

 In [10]: x
 Out[10]:
 array(array((7.399500875785845e-10, 7.721153414752673e-10, -0.984375),
      dtype=[('cl', '|O8'), ('tl', '|O8'), ('dagc', '|O8')]), dtype=object)

 In [11]: x.shape, x.size
 Out[11]: ((), 1)

 In [12]: x.flat[0]['cl']
 Out[12]: array(array(7.399500875785845e-10), dtype=object)

 In [13]: x[0]
 ---
 IndexError                                Traceback (most recent call last)

 /src/ipython console in module()

 IndexError: 0-d arrays can't be indexed

It's not that it's an object array. It's that it is a ()-shape array.
You index it with an empty tuple:

  x[()]

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
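Robert's empty-tuple indexing in a self-contained form (the payload dict is a stand-in for the record loadmat produced):

```python
import numpy as np

x = np.empty((), dtype=object)  # ()-shape, i.e. 0-d, object array
x[()] = {"cl": 7.4e-10}         # the empty tuple addresses the lone element

assert x.shape == () and x.size == 1
assert x[()]["cl"] == 7.4e-10
assert x.flat[0]["cl"] == 7.4e-10  # x.flat[0] also works, as in the session

try:
    x[0]                        # integer indexing a 0-d array fails
except IndexError:
    pass
```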


Re: [Numpy-discussion] hashing dtypes, new variation, old theme

2011-03-16 Thread Mark Wiebe
On Wed, Mar 16, 2011 at 10:53 AM, Robert Kern robert.k...@gmail.com wrote:

 On Wed, Mar 16, 2011 at 12:15, Mark Wiebe mwwi...@gmail.com wrote:
  On Wed, Mar 16, 2011 at 10:00 AM, Robert Kern robert.k...@gmail.com
 wrote:
 
  On Wed, Mar 16, 2011 at 11:55, Robert Kern robert.k...@gmail.com
 wrote:
   On Wed, Mar 16, 2011 at 11:43, Matthew Brett matthew.br...@gmail.com
 
   wrote:
  
   I can git-bisect it later in the day, will do so unless it's become
   clear in the meantime.
  
   I'm almost done bisecting.
 
  6c6dc487ca15818d1f4cc764debb15d73a61c03b is the first bad commit
  commit 6c6dc487ca15818d1f4cc764debb15d73a61c03b
  Author: Mark Wiebe mwwi...@gmail.com
  Date:   Thu Jan 20 20:41:03 2011 -0800
 
 ENH: ufunc: Made the iterator ufunc default
 
  :04 04 15033eb0c0e295161cd29a31677e7b88ac431143
  ae077a44ccce0014e017537b31f53261495f870e M  numpy
 
  I'm guessing this is another case where the type numbers being ambiguous
 is
  the problem. On my 64-bit system:
np.dtype(np.int) == np.dtype(np.long)
  True
  hash(np.dtype(np.int)) == hash(np.dtype(np.long))
  False
  np.dtype(np.int).num
  7
  np.dtype(np.long).num
  9
  On a 32-bit system, types 5 and 7 are similarly aliased. By modifying the
  example slightly, possibly just switching the data0 - 10 to 10 +
 data0,
  1.5 probably will fail this test as well.

 Correct.

 Should we even include the type_num in the key to be hashed for the
 builtin dtypes? They are not used in the tp_richcompare comparison,
 only the kind and el_size.


That sounds like a good fix to me. Whenever objects compare equal, they
should hash to the same value.

-Mark


Re: [Numpy-discussion] loadmat output (was Re: Accessing elements of an object array)

2011-03-16 Thread lists_ravi
 On Wed, Mar 16, 2011 at 15:18,  lists_r...@lavabit.com wrote:
 In [10]: x
 Out[10]:
 array(array((7.399500875785845e-10, 7.721153414752673e-10, -0.984375),
       dtype=[('cl', '|O8'), ('tl', '|O8'), ('dagc', '|O8')]),
 dtype=object)

 In [11]: x.shape, x.size
 Out[11]: ((), 1)

 It's not that it's an object array. It's that it is a ()-shape array.
 You index it with an empty tuple:

   x[()]

Why does loadmat return such arrays? Is there a way to make it produce
arrays that are not object arrays?

Regards,
Ravi



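A possible answer, sketched as a round-trip (the file name and field names are made up): `loadmat` mirrors MATLAB's everything-is-an-array semantics, but `squeeze_me=True` drops the singleton/0-d wrappers, and `struct_as_record=False` returns structs as objects with attribute access instead of nested object arrays.

```python
import os
import tempfile
from scipy.io import savemat, loadmat

path = os.path.join(tempfile.mkdtemp(), "demo.mat")
savemat(path, {"x": {"cl": 7.4e-10, "tl": 7.7e-10}})  # a MATLAB struct

m = loadmat(path, struct_as_record=False, squeeze_me=True)

# m["x"] is now a mat_struct with plain attributes,
# not a ()-shape object array:
assert float(m["x"].cl) == 7.4e-10
assert float(m["x"].tl) == 7.7e-10
```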


Re: [Numpy-discussion] Fortran was dead ... [was Re:rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread william ratcliff
Related to this, what is the status of fwrap?  Can it be used with fortran
95/2003 language features?  There is a rather large code crystallographic
codebase (fullprof) that is written in fortran 77 that the author has been
porting to fortran 95/2003 and actually using modules for.  I'd like to
write python bindings for it to make it more scriptable...


William

On Wed, Mar 16, 2011 at 3:33 PM, baker.alexan...@gmail.com wrote:


 My two pence worth, my experience is across python, C++ and fortran (and a
 few other languages) and the posts here are interesting and relevant. I
 think that the true value of any of these languages is knowing any of them
 well, if you happen to work with other folks who share the same skills more
 the better. No more than that.

 As a user of a very old large very fortran codebase as well as engineer of
 more structured approaches, I would take the OO toolset everytime, for
 reasons already covered.

 The real challenge I see every day in the scientific community is the lack of
 software craftsmanship skills: code archiving, unit testing. End of two
 pence.

 Alex
 Sent from my BlackBerry® wireless device

 -Original Message-
 From: Dag Sverre Seljebotn d.s.seljeb...@astro.uio.no
 Sender: numpy-discussion-boun...@scipy.org
 Date: Wed, 16 Mar 2011 20:12:21
 To: Discussion of Numerical Pythonnumpy-discussion@scipy.org
 Reply-To: Discussion of Numerical Python numpy-discussion@scipy.org
 Subject: Re: [Numpy-discussion] Fortran was dead ... [was Re:
  rewriting  NumPy code in C or C++ or similar]

 On 03/16/2011 08:10 PM, Ravi wrote:
  On Monday 14 March 2011 15:02:32 Sebastian Haase wrote:
  Sturla has been writing so much about Fortran recently, and Ondrej now
  says he has done the move from C/C++ to Fortran -- I thought Fortran
  was dead ... !?   ;-)
  What am I missing here ?
  Comparing Fortran with C++ is like comparing Matlab with Python. Fortran
 is
  very good at what it does but it does not serve the same audience as C++.
 The
  typical comparisons work like this:

 snip

 I think the main point being made by most here though is that *in
 combination with Python*, Fortran can be quite helpful. If one is using
 Python anyway for the high-level stuff, the relative strengths of C++
 w.r.t. Fortran that you list become much less important. Same for
 code-reuse: When only used from a Python wrapper, the Fortran code can
 become so simplistic that it also becomes reusable.

 Dag Sverre


Re: [Numpy-discussion] convert dictionary of arrays into single array

2011-03-16 Thread John
Yes, stacking is fine, and looping per John's suggestion is what I've
done, I was just wondering if there was possibly a more 'pythonic' or
more importantly efficient way than the loop.

Thanks,
john
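For reference, the loop-plus-concatenate approach under discussion can be sketched as follows (the dtype and data here are made up for illustration):

```python
import numpy as np

# Hypothetical structured dtype; the real arrays share the same 'headings'.
dt = np.dtype([('a', np.float64), ('b', np.int32)])

# Dict of structured arrays keyed 0..n-1, with differing lengths.
d = {i: np.zeros(i + 1, dtype=dt) for i in range(3)}

# Build the list in key order; np.concatenate takes the whole list at
# once, so the only explicit Python loop is the list comprehension.
combined = np.concatenate([d[i] for i in range(len(d))])

print(combined.shape)  # (6,)
```

Since np.concatenate accepts the full list in one call, there is little to gain from avoiding the list-building step itself.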

On Wed, Mar 16, 2011 at 3:38 PM, John Salvatier
jsalv...@u.washington.edu wrote:
 I think he wants to stack them (same widths) so stacking them should be
 fine.

 On Wed, Mar 16, 2011 at 7:30 AM, Bruce Southey bsout...@gmail.com wrote:

 On 03/16/2011 08:56 AM, John Salvatier wrote:

 Loop through to build a list of arrays, then use vstack on the list.

 On Wed, Mar 16, 2011 at 1:36 AM, John washa...@gmail.com wrote:

 Hello,

 I have a dictionary with structured arrays, keyed by integers 0...n.
 There are no other keys in the dictionary.

 What is the most efficient way to convert the dictionary of arrays to
 a single array?

 All the arrays have the same 'headings' and width, but different lengths.

 Is there something I can do that would be more efficient than looping
 through and using np.concatenate (vstack)?
 --john

 Numpy does not permit a 'single' array of different shapes - i.e. a 'ragged
 array'.
 Sure you could convert this into a structured array (you know n so you can
 create an appropriate empty structured array and assign the arrays by looping
 across the dict) but that is still not a 'single' array. You can use a
 masked array where you mask the missing elements across your arrays.

 Francesc Alted pointed out in the 'ragged array implimentation' thread
 (http://mail.scipy.org/pipermail/numpy-discussion/2011-March/055219.html)
 that pytables does support this.

 Bruce
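The masked-array route Bruce mentions can be sketched like this (plain float rows here, purely for illustration):

```python
import numpy as np
import numpy.ma as ma

# Ragged rows of different lengths.
rows = [np.arange(2.0), np.arange(4.0), np.arange(3.0)]

# Pad every row to the maximum width; masked_all starts fully masked,
# so whatever we do not assign stays masked.
width = max(len(r) for r in rows)
out = ma.masked_all((len(rows), width))
for i, r in enumerate(rows):
    out[i, :len(r)] = r

print(out.shape)             # (3, 4)
print(bool(out.mask[0, 3]))  # True: padding beyond row 0's data is masked
```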











Re: [Numpy-discussion] loadmat output (was Re: Accessing elements of an object array)

2011-03-16 Thread Matthew Brett
Hi,

On Wed, Mar 16, 2011 at 1:56 PM,  lists_r...@lavabit.com wrote:
 On Wed, Mar 16, 2011 at 15:18,  lists_r...@lavabit.com wrote:
 In [10]: x
 Out[10]:
 array(array((7.399500875785845e-10, 7.721153414752673e-10, -0.984375),
       dtype=[('cl', '|O8'), ('tl', '|O8'), ('dagc', '|O8')]),
 dtype=object)

 In [11]: x.shape, x.size
 Out[11]: ((), 1)

 It's not that it's an object array. It's that it is a ()-shape array.
 You index it with an empty tuple:

   x[()]

 Why does loadmat return such arrays? Is there a way to make it produce
 arrays that are not object arrays?

Did you find the struct_as_record option to loadmat?

http://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html

Best,

Matthew
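For the archives, the empty-tuple indexing Robert describes works on any 0-d ("()-shaped") object array; the dict payload below is just a stand-in for the loadmat record:

```python
import numpy as np

# A 0-d object array: shape () but size 1, like the loadmat result above.
x = np.empty((), dtype=object)
x[()] = {'cl': 7.4e-10, 'tl': 7.7e-10, 'dagc': -0.984375}

print(x.shape, x.size)  # () 1

# Ordinary integer indexing fails on a 0-d array; the empty tuple
# retrieves the single wrapped element.
inner = x[()]
print(inner['dagc'])  # -0.984375
```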


Re: [Numpy-discussion] *= operator not intuitive

2011-03-16 Thread Christopher Barker
On 3/16/11 9:22 AM, Paul Anton Letnes wrote:

 This comes up for discussion on a fairly regular basis. I tend towards the 
 more warnings side myself, but you aren't going to get the current behavior 
 changed unless you can convince a large bunch of people that it is the right 
 thing to do, which won't be easy.

Indeed, I don't think I'd want a warning for this -- though honestly, 
I'm not sure what you think should invoke a warning:


a = np.ones((3,), dtype=np.float32)

a += 2.0

Should that be a warning? You are adding a 64-bit float to a 32-bit float 
array.

a = np.ones((3,), dtype=np.float32)

a += 2

now you are adding an integer -- should that be a warning?

a = np.ones((3,), dtype=np.int32)

a += 2.1

now a float to an integer array -- should that be a warning?

As you can see -- there are a lot of options. As these are defined as 
in-place operators, I don't expect any casting to occur.
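A minimal demonstration of the in-place behaviour (no warning is raised; note that later NumPy releases did start rejecting some unsafe in-place casts, such as adding a float to an integer array):

```python
import numpy as np

a = np.ones(3, dtype=np.float32)
a += 2.0  # the float64 scalar is cast down to float32 in place

print(a.dtype)  # float32 -- the array was not upcast to float64
print(a[0])     # 3.0
```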

I think this is the kind of thing that would be great to have a warning 
the first time you do it, but once you understand, the warnings would be 
really, really annoying!

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Fortran was dead ... [was Re:rewriting NumPy code in C or C++ or similar]

2011-03-16 Thread Dag Sverre Seljebotn
On 03/16/2011 10:14 PM, william ratcliff wrote:
 Related to this, what is the status of fwrap?  Can it be used with 
 fortran 95/2003 language features?  There is a rather large code 
 crystallographic codebase (fullprof) that is written in fortran 77 
 that the author has been porting to fortran 95/2003 and actually using 
 modules for.  I'd like to write python bindings for it to make it more 
 scriptable...

Fwrap 0.1.1 is out; it supports a subset of Fortran 95/2003, the biggest 
limitation being that modules are not yet supported.

Since then there's been quite a few unreleased improvements (like a much 
better and more flexible build based on waf instead of distutils).

I'm currently working on module support. Or, was ... I've put in a week 
this month, but then some simulation results grabbed my attention and I 
got derailed. Finishing up module support and making an Fwrap 0.2 
release is #2 on my stack of things to work on, and once I get around 
to it I expect it to take about a week. So I think it will happen :-)

If you (or anyone else) want to get involved and put in a day or two to 
help with polishing (command line interface, writing tests, 
documentation...) just email me.

Dag Sverre


[Numpy-discussion] Build ERROR ConfigParser.MissingSectionHeaderError: File contains no section headers

2011-03-16 Thread Jose Borreguero
Dear Numpy/SciPy users,

I have a build error with Numpy:

$  /usr/local/bin/python2.7 setup.py build

  File /usr/local/lib/python2.7/ConfigParser.py, line 504, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
ConfigParser.MissingSectionHeaderError: File contains no section headers.
file: /projects/tmp/numpy-1.5.1/site.cfg, line: 60
'library_dirs = /usr/local/lib\n'

The relevant lines in my site.cfg file:

library_dirs = /usr/local/lib
include_dirs = /usr/local/include
#[blas_opt]
libraries = f77blas, cblas, atlas
#[lapack_opt]
libraries = lapack, f77blas, cblas, atlas


I have installed BLAS+LAPACK+ATLAS libraries under /usr/local/lib/atlas

I also installed UMFPACK+AMD+UFConfig+CHOLMOD

I would appreciate any comments. I'm stuck here :(

Best regards,
Jose M. Borreguero



Below is the full error traceback:

$  /usr/local/bin/python2.7 setup.py build

Running from numpy source directory.F2PY Version 1
Traceback (most recent call last):
  File setup.py, line 211, in module
setup_package()
  File setup.py, line 204, in setup_package
configuration=configuration )
  File /projects/tmp/numpy-1.5.1/numpy/distutils/core.py, line 152, in
setup
config = configuration()
  File setup.py, line 151, in configuration
config.add_subpackage('numpy')
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 972,
in add_subpackage
caller_level = 2)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 941,
in get_subpackage
caller_level = caller_level + 1)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 878,
in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
  File numpy/setup.py, line 9, in configuration
config.add_subpackage('core')
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 972,
in add_subpackage
caller_level = 2)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 941,
in get_subpackage
caller_level = caller_level + 1)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 878,
in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
  File numpy/core/setup.py, line 807, in configuration
blas_info = get_info('blas_opt',0)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 310,
in get_info
return cl().get_info(notfound_action)
  File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 409,
in __init__
self.parse_config_files()
  File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 416,
in parse_config_files
self.cp.read(self.files)
  File /usr/local/lib/python2.7/ConfigParser.py, line 297, in read
self._read(fp, filename)
  File /usr/local/lib/python2.7/ConfigParser.py, line 504, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
ConfigParser.MissingSectionHeaderError: File contains no section headers.
file: /projects/tmp/numpy-1.5.1/site.cfg, line: 60
'library_dirs = /usr/local/lib\n'


Re: [Numpy-discussion] Build ERROR ConfigParser.MissingSectionHeaderError: File contains no section headers

2011-03-16 Thread josef . pktd
On Thu, Mar 17, 2011 at 12:23 AM, Jose Borreguero borregu...@gmail.com wrote:
 Dear Numpy/SciPy users,

 I have a build error with Numpy:

 $  /usr/local/bin/python2.7 setup.py build
 
   File /usr/local/lib/python2.7/ConfigParser.py, line 504, in _read
     raise MissingSectionHeaderError(fpname, lineno, line)
 ConfigParser.MissingSectionHeaderError: File contains no section headers.
 file: /projects/tmp/numpy-1.5.1/site.cfg, line: 60
 'library_dirs = /usr/local/lib\n'

 The relevant lines in my site.cfg file:


I think you just need to uncomment all the section headers that you
use; that's what the exception says.


 library_dirs = /usr/local/lib
 include_dirs = /usr/local/include
[blas_opt]
 libraries = f77blas, cblas, atlas
[lapack_opt]
 libraries = lapack, f77blas, cblas, atlas

Josef
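Putting Josef's suggestion together, a corrected site.cfg would look roughly like this (paths and library names are taken from the original message; the [DEFAULT] section name is an assumption here -- the site.cfg.example shipped with NumPy documents the exact section names to use):

```ini
[DEFAULT]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

[blas_opt]
libraries = f77blas, cblas, atlas

[lapack_opt]
libraries = lapack, f77blas, cblas, atlas
```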



 I have installed BLAS+LAPACK+ATLAS libraries under /usr/local/lib/atlas

 I also installed UMFPACK+AMD+UFConfig+CHOLMOD

 I would appreciate any comments. I'm stuck here :(

 Best regards,
 Jose M. Borreguero



 Below is the full error traceback:

 $  /usr/local/bin/python2.7 setup.py build

 Running from numpy source directory.F2PY Version 1
 Traceback (most recent call last):
   File setup.py, line 211, in module
     setup_package()
   File setup.py, line 204, in setup_package
     configuration=configuration )
   File /projects/tmp/numpy-1.5.1/numpy/distutils/core.py, line 152, in
 setup
     config = configuration()
   File setup.py, line 151, in configuration
     config.add_subpackage('numpy')
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 972,
 in add_subpackage
     caller_level = 2)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 941,
 in get_subpackage
     caller_level = caller_level + 1)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 878,
 in _get_configuration_from_setup_py
     config = setup_module.configuration(*args)
   File numpy/setup.py, line 9, in configuration
     config.add_subpackage('core')
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 972,
 in add_subpackage
     caller_level = 2)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 941,
 in get_subpackage
     caller_level = caller_level + 1)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/misc_util.py, line 878,
 in _get_configuration_from_setup_py
     config = setup_module.configuration(*args)
   File numpy/core/setup.py, line 807, in configuration
     blas_info = get_info('blas_opt',0)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 310,
 in get_info
     return cl().get_info(notfound_action)
   File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 409,
 in __init__
     self.parse_config_files()
   File /projects/tmp/numpy-1.5.1/numpy/distutils/system_info.py, line 416,
 in parse_config_files
     self.cp.read(self.files)
   File /usr/local/lib/python2.7/ConfigParser.py, line 297, in read
     self._read(fp, filename)
   File /usr/local/lib/python2.7/ConfigParser.py, line 504, in _read
     raise MissingSectionHeaderError(fpname, lineno, line)
 ConfigParser.MissingSectionHeaderError: File contains no section headers.
 file: /projects/tmp/numpy-1.5.1/site.cfg, line: 60
 'library_dirs = /usr/local/lib\n'



