Re: [Numpy-discussion] Arguments silently ignored (kwargs) in meshgrid etc.

2014-05-29 Thread Michael McNeil Forbes
On May 29, 2014, at 3:16 PM, Michael McNeil Forbes 
 wrote:
> On May 29, 2014, at 1:41 AM, Ralf Gommers  wrote:
>> On Thu, May 29, 2014 at 5:35 AM, Michael McNeil Forbes 
>>  wrote:
>>> I just noticed that meshgrid() silently ignores extra keyword arguments.  
>>> It just burned me (I forgot that it is meshgrid(indexing='ij') and tried 
>>> meshgrid(indices='ij'), which subtly broke my code.)
>> 
>> That's not very user-friendly, a check should be added. Do you want to send 
>> a PR for that?
> 
> Okay.  Working on it.

See PR 4758: 

https://github.com/numpy/numpy/pull/4758

> Question: Would it be okay to implement this in terms of a private (or 
> public) function, something like:
> 
> def _sparsegrid(xi, sparse=True, indexing='ij', copy=True):  ...

or maybe "opengrid()"?  (This is not included in the PR for technical reasons; 
I will open a separate PR if it is considered desirable.)

Michael. 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Arguments silently ignored (kwargs) in meshgrid etc.

2014-05-29 Thread Michael McNeil Forbes
On May 29, 2014, at 1:41 AM, Ralf Gommers  wrote:
> On Thu, May 29, 2014 at 5:35 AM, Michael McNeil Forbes 
>  wrote:
>> I just noticed that meshgrid() silently ignores extra keyword arguments.  
>> It just burned me (I forgot that it is meshgrid(indexing='ij') and tried 
>> meshgrid(indices='ij'), which subtly broke my code.)
> 
> That's not very user-friendly, a check should be added. Do you want to send a 
> PR for that?

Okay.  Working on it.

Question: Would it be okay to implement this in terms of a private (or public) 
function, something like:

def _sparsegrid(xi, sparse=True, indexing='ij', copy=True):
   ...

def meshgrid(*xi, **kwargs):
    defaults = dict(sparse=False, indexing='xy', copy=False)
    return _sparsegrid(xi, **dict(defaults, **kwargs))
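A kwargs-validating variant of this wrapper might look like the following sketch (`_sparsegrid` is stubbed out here; the real body is elided, and the rejection message is only illustrative):

```python
def _sparsegrid(xi, sparse=True, indexing='ij', copy=True):
    # stand-in for the real implementation
    return xi

def meshgrid(*xi, **kwargs):
    defaults = dict(sparse=False, indexing='xy', copy=False)
    # reject typoed keywords instead of silently dropping them
    unknown = set(kwargs) - set(defaults)
    if unknown:
        raise TypeError("meshgrid() got unexpected keyword arguments %r"
                        % sorted(unknown))
    return _sparsegrid(xi, **dict(defaults, **kwargs))
```

With this, `meshgrid(x, indices='ij')` raises a `TypeError` instead of silently falling back to the defaults.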

This also addresses issue #2164 and a personal pet peeve that there is no 
analogue of ogrid for meshgrid.  (Not sure what the best name would be, maybe 
meshogrid?)  I see that you removed the original ndgrid implementation that did 
something similar.  Is there a reason for not providing an analogue?

Michael.







[Numpy-discussion] Arguments silently ignored (kwargs) in meshgrid etc.

2014-05-28 Thread Michael McNeil Forbes
I just noticed that meshgrid() silently ignores extra keyword arguments.  It 
just burned me (I forgot that it is meshgrid(indexing='ij') and tried 
meshgrid(indices='ij'), which subtly broke my code.)

Is this intentional?  I don't see why `meshgrid` does not have explicit 
arguments. If this is not a design decision, I will open an issue and PR.
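The failure mode is generic to any `**kwargs` signature that never validates its keys; a minimal illustration (not the actual meshgrid source):

```python
def f(x, **kwargs):
    # typo-prone: unknown keys are silently dropped
    indexing = kwargs.get('indexing', 'xy')
    return indexing

f(0, indexing='ij')  # 'ij'
f(0, indices='ij')   # typo: silently falls back to 'xy'
```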

Michael.




Re: [Numpy-discussion] Keyword argument support for vectorize.

2012-04-13 Thread Michael McNeil Forbes
On 9 Apr 2012, at 3:02 AM, Nathaniel Smith wrote:
> functools was added in Python 2.5, and so far numpy is still trying to
> maintain 2.4 compatibility.

Thanks: I forgot about that.  I have attached a 2.4 compatible patch,  
updated docs, and tests for review to ticket #2100.  This also  
includes a patch and regression test for ticket #1156 so it can be  
closed out too after review.

http://projects.scipy.org/numpy/ticket/1156
http://projects.scipy.org/numpy/ticket/2100

Michael.



Re: [Numpy-discussion] Keyword argument support for vectorize.

2012-04-09 Thread Michael McNeil Forbes
On 8 Apr 2012, at 12:09 PM, Ralf Gommers wrote:
> That looks like a useful enhancement. Integrating in the existing  
> vectorize class should be the way to go.

Okay.  I will push forward.  I would also like to add support for  
"freezing" (or "excluding") certain arguments from the vectorization.   
Any ideas for a good argument name?  (I am just using "exclude=['p']"  
for now).

The use case I have is vectorizing polynomial evaluation `polyval(p,  
x)`.  The coefficient array `p` should not be vectorized over, only  
the variable `x`, so something like:

@vectorize(exclude=set(['p']))
def mypolyval(p, x):
return np.polyval(p, x)

would work like np.polyval currently behaves:

 >>> mypolyval([1.0,2.0],[0.0,3.0])
array([ 2., 5.])

(Of course, numpy already has polyval: I am actually trying to wrap  
similar functions that use Theano for automatic differentiation, but  
the idea is the same).

It seems like functools.partial is the appropriate tool to use here.  This  
will require overcoming the issues with how vectorize deduces the number  
of parameters, but if I integrate this with the vectorize class, then this  
should be easy to patch as well.

http://mail.scipy.org/pipermail/numpy-discussion/2010-September/052642.html

Michael.

> On Sat, Apr 7, 2012 at 12:18 AM, Michael McNeil Forbes 
>  > wrote:
> Hi,
>
>> I added a simple enhancement patch to provide vectorize with simple
>> keyword argument support.  (I added a new kwvectorize decorator, but
>> suspect this could/should easily be rolled into the existing  
>> vectorize.)
>>
>> http://projects.scipy.org/numpy/ticket/2100
...



[Numpy-discussion] Keyword argument support for vectorize.

2012-04-06 Thread Michael McNeil Forbes
Hi,

I added a simple enhancement patch to provide vectorize with simple  
keyword argument support.  (I added a new kwvectorize decorator, but  
suspect this could/should easily be rolled into the existing vectorize.)

http://projects.scipy.org/numpy/ticket/2100

This just reorders any kwargs into the correct position (filling in  
defaults as needed) and then calls the "vectorize"d function.

If people think this is reasonable, I can improve the patch with more  
comprehensive testing and error messages.

Michael.


Re: [Numpy-discussion] IndexExpression bug?

2009-08-17 Thread Michael McNeil Forbes
Submitted as ticket 1196

http://projects.scipy.org/numpy/ticket/1196

On 5 Jun 2009, at 4:12 PM, Robert Kern wrote:

> On Fri, Jun 5, 2009 at 16:14, Michael McNeil Forbes
>  wrote:
>>  >>> np.array([0,1,2,3])[1:-1]
>> array([1, 2])
>>
>> but
>>
>>  >>> np.array([0,1,2,3])[np.s_[1:-1]]
>> array([1, 2, 3])
>>  >>> np.array([0,1,2,3])[np.index_exp[1:-1]]
>> array([1, 2, 3])
...
> I think that getting rid of __getslice__ and __len__ should work
> better. I don't really understand what the logic was behind including
> them in the first place, though. I might be missing something.
...


Re: [Numpy-discussion] Specifying Index Programmatically

2009-08-17 Thread Michael McNeil Forbes
There is also numpy.s_:

inds = np.s_[...,2,:]
z[inds]

(Though there are some problems with negative indices: see for example 
http://www.mail-archive.com/numpy-discussion@scipy.org/msg18245.html)

On 8 Aug 2009, at 10:02 PM, T J wrote:

> On Sat, Aug 8, 2009 at 8:54 PM, Neil Martinsen-Burrell wrote:
>>
>> The ellipsis is a built-in python constant called Ellipsis.  The  
>> colon
>> is a slice object, again a python built-in, called with None as an
>> argument.  So, z[...,2,:] == z[Ellipsis,2,slice(None)].
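The equivalence is easy to check:

```python
import numpy as np

z = np.arange(24).reshape(2, 3, 4)
inds = np.s_[..., 2, :]                    # == (Ellipsis, 2, slice(None))
assert inds == (Ellipsis, 2, slice(None))
assert np.array_equal(z[inds], z[..., 2, :])
```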



Re: [Numpy-discussion] Efficient ?axpy operation without copy (B += a*A)

2009-06-23 Thread Michael McNeil Forbes
Thanks Pauli,

On 23 Jun 2009, at 12:46 PM, Pauli Virtanen wrote:
> from scipy.lib.blas import get_blas_funcs
> axpy, = get_blas_funcs(['axpy'], [A, B])
> res = axpy(A.ravel(), B.ravel(), A.size, a)
> res.base is B
...
> Works provided A and B are initially in C-order so that ravel()
> doesn't create copies. If unsure, it's best to make use of the
> return value of axpy and not assume B is modified in-place.
>
> [clip]
>> There are exposed blas daxpy operations in scipy, but in the  
>> version I
>> have (EPD), these also seem to make copies (though recent version  
>> seem
>> to be fixed by looking at the source.)
>
> I don't see many relevant changes in scipy.lib recently, so I'm
> not sure what change you mean by the above.


I was not ravelling or providing the size, and as a result the return  
value was a copy and B was not modified, leading me to an incorrect  
conclusion.  It does work with EPD.
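In current SciPy the same recipe lives under `scipy.linalg.blas` (the old `scipy.lib.blas` path is gone); a sketch, assuming SciPy is installed:

```python
import numpy as np
from scipy.linalg.blas import get_blas_funcs

A = np.ones(5)
B = np.ones(5)
a = 1.5

# pick the axpy variant matching the dtypes of A and B
axpy, = get_blas_funcs(['axpy'], (A, B))
res = axpy(A, B, A.size, a)   # B += a*A, in place for contiguous arrays
```

As noted above, it is safest to use the return value `res` rather than assuming `B` was modified in place.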

Michael.


[Numpy-discussion] Efficient ?axpy operation without copy (B += a*A)

2009-06-23 Thread Michael McNeil Forbes
Hi,

Is there a way of performing vectorized ?axpy (daxpy) operations  
without making copies or dropping into C?

i.e: I want to do

big = (1,5000)
A = np.ones(big,dtype=float)
B = np.ones(big,dtype=float)
a = 1.5
B += a*A

without making any copies?

(I know I could go

A *= a
B += A
A /= a

but that is not so efficient either).

There are exposed blas daxpy operations in scipy, but in the version I  
have (EPD), these also seem to make copies (though recent version seem  
to be fixed by looking at the source.)

Thanks,
Michael.


[Numpy-discussion] IndexExpression bug?

2009-06-05 Thread Michael McNeil Forbes
 >>> np.array([0,1,2,3])[1:-1]
array([1, 2])

but

 >>> np.array([0,1,2,3])[np.s_[1:-1]]
array([1, 2, 3])
 >>> np.array([0,1,2,3])[np.index_exp[1:-1]]
array([1, 2, 3])

Possible fix:
class IndexExpression(object):
 ...
 def __len__(self):
 return 0

(Presently this returns sys.maxint)

Does this break anything (I can't find any coverage tests)?  If not, I  
will submit a ticket.
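In current NumPy the two spellings agree, so the fix can be verified directly:

```python
import numpy as np

a = np.array([0, 1, 2, 3])
assert np.array_equal(a[np.s_[1:-1]], a[1:-1])        # both give [1, 2]
assert np.array_equal(a[np.index_exp[1:-1]], a[1:-1])
```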

Michael.


Re: [Numpy-discussion] array 2 string

2009-03-11 Thread Michael McNeil Forbes

On 10 Mar 2009, at 10:33 AM, Michael S. Gilbert wrote:

> On Tue, 10 Mar 2009 17:21:23 +0100, Mark Bakker wrote:
>> Hello,
>>
>> I want to convert an array to a string.
>>
>> I like array2string, but it puts these annoying square brackets  
>> around
>> the array, like
>>
>> [[1 2 3],
>> [3 4 5]]
>>
>> Anyway we can suppress the square brackets and get (this is what is
>> written with savetxt, but I cannot get it to store in a variable)
>> 1 2 3
>> 4 5 6

How about using StringIO:

 >>> a = np.array([[1,2,3],[4,5,6]])
 >>> from StringIO import StringIO
 >>> f = StringIO()
 >>> savetxt(f, a, fmt="%i")
 >>> s = f.getvalue()
 >>> f.close()
 >>> print s
1 2 3
4 5 6
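(On Python 3 the same idea uses `io.StringIO` and the `numpy` namespace:)

```python
import io

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
f = io.StringIO()
np.savetxt(f, a, fmt="%i")  # write the array as plain text into the buffer
s = f.getvalue()
print(s)
# 1 2 3
# 4 5 6
```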

Michael.




Re: [Numpy-discussion] profiling line by line

2008-09-15 Thread Michael McNeil Forbes
The hotshot profiler used to do this, but I don't think it is really  
supported anymore...  I have not used it in a while, but agree that a  
line-by-line profiler can be very nice.

Michael.
On Sep 15, 2008, at 6:27 AM, Robin wrote:

> Hi,
>
> I am using the prun feature of Ipython which is very helpful.
>
> I was wondering though if theres anything for Python that would allow
> line-by-line profiling (ie a time for each line of code) like the
> MATLAB profiler?
>
> Cheers
>
> Robin



Re: [Numpy-discussion] sum of positive values in an array

2008-09-07 Thread Michael McNeil Forbes
On Sep 5, 2008, at 8:52 AM, Keith Goodman wrote:

> Here's another difference:
>
>>> a = np.random.randn(10)
>>> timeit np.sum(a[np.where(a>0)])
> 100 loops, best of 3: 3.44 ms per loop
>>> timeit a[a > 0].sum()
> 100 loops, best of 3: 2.21 ms per loop

Here is an even faster method (but much more ugly!):
0.25ms:  (a.sum() + abs(a).sum())/2
0.34ms:  a[a > 0].sum()
0.55ms:  np.sum(a[np.where(a>0)])
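The trick relies on the identity sum(a) + sum(|a|) = 2*sum(a[a>0]) (the negative entries cancel, the positive ones double), which is easy to verify:

```python
import numpy as np

a = np.random.randn(1000)
fast = (a.sum() + np.abs(a).sum()) / 2
slow = a[a > 0].sum()
assert np.allclose(fast, slow)
```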

Michael.



Re: [Numpy-discussion] Infinity definitions

2008-07-15 Thread Michael McNeil Forbes
On 15 Jul 2008, at 6:33 AM, Bruce Southey wrote:

> Hi,
> Following Travis's suggestion below, I would like to suggest that the
> following definitions be depreciated or removed in this forthcoming  
> release:
>
> numpy.Inf
> numpy.Infinity
> numpy.infty
> numpy.PINF
> numpy.NAN
> numpy.NaN
...

While this is being discussed, what about the "representation" of  
nan's an infinities produced by repr in various forms?  It is  
particularly annoying when the repr of simple numeric types cannot be  
evaluated.  These include:

'infj'    == repr(complex(0,inf))
'nanj'    == repr(complex(0,nan))
'infnanj' == repr(complex(inf,nan))
'nannanj' == repr(complex(nan,nan))

It seems that perhaps infj and nanj should be defined symbols.  I am  
not sure why a + does not get inserted before nan.

In addition, the default infstr and nanstr printing options should  
also be changed to 'inf' and 'nan'.
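The complaint is easy to reproduce with the builtin `complex` (exact reprs vary across Python versions):

```python
inf = float('inf')

r = repr(complex(0, inf))   # 'infj'
# round-tripping fails because no symbol 'infj' is defined:
try:
    eval(r)
    round_trips = True
except NameError:
    round_trips = False
```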

Michael.







Re: [Numpy-discussion] slow import of numpy modules

2008-07-02 Thread Michael McNeil Forbes
On 2 Jul 2008, at 3:59 PM, Robert Kern wrote:
> On Wed, Jul 2, 2008 at 17:43, Nathan Jensen  
> <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> I was wondering if there was any way to speed up the global import of
>> numpy modules.  For a simple import numpy, it takes ~250 ms.  In
>> comparison, importing Numeric is only taking 40 ms.  It appears that
>> even if you only import a numpy submodule, it loads all the  
>> libraries,
>> resulting in the painful performance hit.  Are there plans to  
>> speed up
>> the importing of numpy,
>
> I am not sure how much is possible.
>
>> or at least have it not load libraries that
>> aren't requested?
>
> At this point in time, it is too late to make such sweeping changes  
> to the API.

One could use an environment variable such as  
NUMPY_SUPPRESS_TOP_LEVEL_IMPORTS that, if defined, suppresses the  
importing of unneeded packages.  This would only affect systems that  
define this variable, thus not breaking the API while providing the  
flexibility for those that need it.  (This or a similar variable  
could also contain a list of the numpy components to import  
automatically.)

If you want to try this, just modify numpy/__init__.py with something  
like the following


 import os
 fast_import = 'NUMPY_SUPPRESS_TOP_LEVEL_IMPORTS' in os.environ
 del os

 if fast_import:
     ...  # import only a minimal set of subpackages
 else:
     ...  # import everything, as is done now

 del fast_import

Michael.


Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael McNeil Forbes
> On Mon, Jun 23, 2008 at 4:51 PM, Robert Kern  
> <[EMAIL PROTECTED]> wrote:
>> >>> random_array = np.random.rand(3,4)
>> >>> random_array.shape
>> (3, 4)
>> >>> random_array.max() < 1
>> True
>> >>> random_array.min() > 0
>> True
>>
>> Yes, this makes it doctestable, but you've destroyed the exampleness.
>> It should be policy *not* to do this.

Well perhaps... but do you think that

rand(d0, d1, ..., dn) -> random values

is more exampley than

 >>> r = np.random.rand(3,2,4)
 >>> r.shape
(3,2,4)

?

On 23 Jun 2008, at 2:31 PM, Alan McIntyre wrote:
> So it seems we have:
>  1. Example code that is doctestable
>  2. Example code that probably can't ever be doctestable (random
> number stuff, etc.), but is still executable
>  3. Schematic examples that aren't executable
>
> Personally, I'm in favor of filling out examples of type #3 to make
> them at least #2, but maybe that's not always practical.  I don't
> think #3 should ever have ">>>" prompts, so it shouldn't ever be
> picked up by doctest.
>
> I suppose I could go for a decorator option to flag #2. If we execute
> them, but not look at the results, then at least we find out about
> examples that are broken enough to raise exceptions.

One can usually do #3 -> #1 or #2 by just leaving bare assignments  
without printing a result (the user can always execute them and look  
at the result if they want).

 >>> r = np.random.rand(3,2,4)

which is cleaner than adding any flags...

Michael.




Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael McNeil Forbes
On 23 Jun 2008, at 1:28 PM, Anne Archibald wrote:

> 2008/6/23 Michael McNeil Forbes <[EMAIL PROTECTED]>:
>> Thus, one can argue that all examples should also be doctests.  This
>> generally makes things a little more ugly, but much less ambiguous.
>
> This is a bit awkward. How do you give an example for a random-number
> generator? Even if you are willing to include a seed in each
> statement, misleading users into thinking it's necessary, the value
> returned for a given seed is not necessarily part of the interface a
> random-number generator agrees to support.

I agree that this can be awkward sometimes, and should certainly not  
be policy, but one can usually get around this.  Instead of printing  
the result, you can use it, or demonstrate properties:

 >>> random_array = np.random.rand(3,4)
 >>> random_array.shape
(3, 4)
 >>> random_array.max() < 1
True
 >>> random_array.min() > 0
True

etc.

Michael.



Re: [Numpy-discussion] Code samples in docstrings mistaken as doctests

2008-06-23 Thread Michael McNeil Forbes
> On 23 Jun 2008, at 12:37 PM, Alan McIntyre wrote:
>> Ugh.  That just seems like a lot of unreadable ugliness to me.  If
>> this comment magic is the only way to make that stuff execute  
>> properly
>> under doctest, I think I'd rather just skip it in favor of clean,
>> uncluttered, non-doctestable code samples in the docstrings.

Another perspective: doctests ensure that documentation stays up to  
date (if the behaviour or interface changes, then tests will fail  
indicating that the documentation also needs to be updated.)

Thus, one can argue that all examples should also be doctests.  This  
generally makes things a little more ugly, but much less ambiguous.

...
Examples:
-
If A, B, C, and D are appropriately shaped 2-d arrays, then one can  
produce

 [ A  B ]
 [ C  D ]

using any of these methods:
 >>> A, B, C, D = [[1,1]], [[2,2]], [[3,3]], [[4,4]]
 >>> np.bmat('A, B; C, D')  # From a string
matrix([[ 1,  1,  2,  2],
        [ 3,  3,  4,  4]])
 >>> np.bmat([[A,B],[C,D]]) # From a nested sequence
matrix([[ 1,  1,  2,  2],
        [ 3,  3,  4,  4]])
 >>> np.bmat(np.r_[np.c_[A,B],np.c_[C,D]])  # From an array
matrix([[ 1,  1,  2,  2],
        [ 3,  3,  4,  4]])

Michael.


Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2008-04-04 Thread Michael McNeil Forbes
On 16 Nov 2007, at 1:46 AM, Michael McNeil Forbes wrote:
> On 15 Nov 2007, at 8:23 PM, David Cournapeau wrote:
>
>> Could you try without atlas ? Also, how did you configure atlas when
>> building it ? It seems that atlas is definitely part of the problem
>> (everybody having the problem does use atlas), and that it involves
>> Core
>> 2 duo.
>>
>> David
>
> It seems to work fine without ATLAS, but then again, it is a somewhat
> "random" error.  I will let some code run tonight and see if I detect
> anything.

Just an update.  I am still having this problem, along with some  
additional problems where occasionally even dot returns nan's.  I  
have confirmed that without ATLAS everything seems to be fine, and  
that the problem still remains with newer versions of ATLAS, Python,  
gcc etc.

ATLAS was configured with

../configure --prefix=${BASE}/apps/${ATLAS}_${SUFFIX} \
  --with-netlib-lapack=${BASE}/src/${LAPACK}_${SUFFIX}/lapack_LINUX.a \
  -A Core2Duo64SSE3 \
  --cflags=-fPIC \
  -Fa alg -fPIC

and it passed all the tests.

The problem still exists with ATLAS version 3.8.1, gcc 4.3.0, and  
recent versions of numpy.

  >>> sys.version
  '2.5.2 (r252:60911, Mar 29 2008, 02:55:47) \n[GCC 4.3.0]'
  >>> numpy.version.version
  '1.0.5.dev4915'

I have managed to extract a matrix that causes this failure  
repeatedly once every two or four times eigh is called, so hopefully  
I should be able to run gdb and track down the problem...


[Numpy-discussion] Should array iterate over a set?

2007-11-17 Thread Michael McNeil Forbes
My expectation was that array would iterate over a set.  This is  
incorrect:

 >>> array(set([1,2,3]))
array(set([1, 2, 3]), dtype=object)

Is this the intended behaviour?  A trivial work-around that does what  
I need is

 >>> array(list(set([1,2,3])))
array([1, 2, 3])

but I was wondering if this was by design or just a forgotten  
corner.  (Maybe a vestige of the tuple special case for record arrays?)
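Besides `list(...)`, `np.fromiter` also consumes arbitrary iterables, which sidesteps the object-array surprise (a sketch; note it needs an explicit dtype):

```python
import numpy as np

s = {1, 2, 3}
a = np.fromiter(s, dtype=int)   # iterates the set element by element
```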

Michael.

P.S. I just found that this was brought up by Ed Schofield on  
2006-05-03, but there were no replies in that thread.



Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-16 Thread Michael McNeil Forbes
On 15 Nov 2007, at 8:23 PM, David Cournapeau wrote:

> Could you try without atlas ? Also, how did you configure atlas when
> building it ? It seems that atlas is definitely part of the problem
> (everybody having the problem does use atlas), and that it involves  
> Core
> 2 duo.
>
> David

It seems to work fine without ATLAS, but then again, it is a somewhat  
"random" error.  I will let some code run tonight and see if I detect  
anything.

Michael.



Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-15 Thread Michael McNeil Forbes
>> I have also been having random problems with the latest numpy from
>> svn built on an Intel core 2 Duo Linux box running in 64 bit mode
>> under Red Hat 3.4.6-8 with the gcc 3.4.6 20060404 and ATLAS 3.8.0.
>>
> Could you try without atlas ? Also, how did you configure atlas when
> building it ? It seems that atlas is definitely part of the problem
> (everybody having the problem does use atlas), and that it involves  
> Core
> 2 duo.

I will try that.  I made a mistake: it is a Core 2, not a core 2 duo...
model name  : Intel(R) Core(TM)2 CPU  6700  @ 2.66GHz

Michael.


Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-15 Thread Michael McNeil Forbes
On 13 Nov 2007, at 9:43 AM, Geoffrey Zhu wrote:
> On Nov 13, 2007 2:37 AM, David Cournapeau <[EMAIL PROTECTED] 
> u.ac.jp> wrote:
>> Geoffrey Zhu wrote:
>>> Pointer problems are usually random...
...
> The original MSI version hangs on numpy.test() if I open IDLE and type
>
> import numpy
> numpy.test()
>
> If I try the OP's test first, once it hang on "from numpy.linalg
> import eig" and the other time it ran successfully. After it ran
> successfully, it ran numpy.test() successfully, too.
>
> As you said, it is random.


I have also been having random problems with the latest numpy from  
svn built on an Intel core 2 Duo Linux box running in 64 bit mode  
under Red Hat 3.4.6-8 with the gcc 3.4.6 20060404 and ATLAS 3.8.0.

I am having a problem with numpy.linalg.eigh and complex Hermitian  
matrices.  Randomly, I get seemingly correct answers, and then  
eigenvectors full of Nan's (though not completely.  The first row the  
the eigenvectors seem to be numbers, but incorrect.)

Sometimes, I can stop just after the error with pdb and "play".   
Calling eigh from the debugger sometimes gives a correct answer, and  
then other times gives eigenvalues and eigenvectors full of Nan's  
(not completely full mind you).  For example:

(Pdb) p numpy.linalg.eigh(HH)
(array([-50.50589438, -45.86305013, -40.56713543, -35.57216233,
         38.1497506 ,  40.17291371,  43.35773763,  46.59527636,
         49.42413434,          NaN,          NaN,  ...,          NaN]),
 array([[-0.00072424 +0.j, -0.00136655 +0.j,  0.00200233 +0.j, ...],
        [ NaN NaNj,         NaN NaNj,         NaN NaNj,        ...],
        ...,
        [ NaN NaNj,         NaN NaNj,         NaN NaNj,        ...]]))
(Pdb) p numpy.linalg.eigh(HH)
(array([-51.06208813, -48.50332834, -48.49643331, ...,  46.88219666,
         46.88909168,  49.44785148]),
 array([[ -5.28060016e-04 +0.e+00j,  -3.92271866e-05 +0.e+00j, ...],
        [  1.48818848e-03 +2.78190640e-04j, ...],
        ...]))

[Output condensed: the first call returns eigenvalues and eigenvectors  
that are mostly NaN (only the first row of the eigenvectors is finite,  
and wrong), while the second call on the same matrix returns entirely  
finite values.]

Here is the version info (Everything build from scratch, numpy from  
svn):
 >>> sys.version
'2.5.1 (r251:54863, Nov 10 2007, 00:44:16) \n[GCC 3.4.6 20060404 (Red  
Hat 3.4.6-8)]'
 >>> numpy.version.version
'1.0.5.dev4427'
 >>> scipy.version.version
'0.7.0.dev3511'

Using ATLAS-3.8.0.

This is extremely annoying, and difficult to reproduce.  I will try  
recompiling with some different versions and see if I can reproduce  

Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2007-11-15 Thread Michael McNeil Forbes

On 15 Nov 2007, at 2:45 AM, David Cournapeau wrote:

> Which fortran compiler are you using ?

GNU Fortran (GCC) 3.4.6 20060404



Re: [Numpy-discussion] Warnings as exceptions?

2007-11-13 Thread Michael McNeil Forbes
On 13 Nov 2007, at 8:46 AM, Travis E. Oliphant wrote:

> Michael McNeil Forbes wrote:
>> Why are numpy warnings printed rather than issued using the standard
>> warnings library? ... in util.py ...
> The "warn" option explicitly allows you to use the warnings library.
> There is already the "print" mode which is in fact the default.
I see now that it works exactly like I expected.  I got confused by  
the default and then looked in util.py in the old *numarray* code by  
mistake...

Thanks.  Michael.



[Numpy-discussion] Warnings as exceptions?

2007-11-10 Thread Michael McNeil Forbes
Why are numpy warnings printed rather than issued using the standard  
warnings library?  I know that the behaviour can be controlled by  
seterr(), but it seem rather unpythonic not to use the warnings library.

Is there an explicit reason for this choice?  (It seems like a pretty  
trivial modification of util.py will catch most of the cases.)  If  
performance is an issue, one could add a 'print' mode which behaves  
as the current 'warn'.

Michael.

(P.S. I am sure this has been discussed somewhere before, but I can't  
seem to find a reference.)


[Numpy-discussion] Warning or error on conversion from complex to float.

2007-11-08 Thread Michael McNeil Forbes
Hi,

Is it possible or easy to add a warning and/or error when array  
assignments are made that lose information?  I just got caught with  
the following type of code:

def f():
 return numpy.array([1j,2.0])

x = numpy.empty((2,),dtype=float)
x[:] = f()

I am pre-allocating arrays for speed, but made a change to f()  
resulting in complex results.  My code now silently ignores the  
imaginary part.  It would have been very helpful if the assignment  
check to make sure that the conversion was not going to lose  
information (presumably with an assert so that performance is not  
affected outside of debugging).

Any simple suggestions on how to avoid this problem in the future?   
How hard would it be to add such a check in numpy?
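Current NumPy does warn here with a `ComplexWarning`, which can be promoted to an error during debugging (the class moved to `numpy.exceptions` in newer releases, hence the guarded import):

```python
import warnings

import numpy as np

try:
    from numpy.exceptions import ComplexWarning  # NumPy >= 1.25
except ImportError:
    from numpy import ComplexWarning

x = np.empty(2, dtype=float)
with warnings.catch_warnings():
    # turn the lossy-cast warning into a hard error
    warnings.simplefilter("error", ComplexWarning)
    try:
        x[:] = np.array([1j, 2.0])   # discards the imaginary part
        raised = False
    except ComplexWarning:
        raised = True
```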

Thanks,
Michael.


Re: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.

2007-06-21 Thread Michael McNeil Forbes
>> key_array = empty(len(keys),dtype=tuple)
>> key_array[:] = keys[:]
>
> the later two statements can also be written as:
>
> key_array = array(keys, dtype=tuple)

These are not equivalent:

 >>> keys = [('a',1),('b',2)]
 >>> key_array = array(keys, dtype=tuple)
 >>> key_array
array([[a, 1],
[b, 2]], dtype=object)

 >>> key_array = empty(len(keys),dtype=tuple)
 >>> key_array[:] = keys[:]
 >>> key_array
array([('a', 1), ('b', 2)], dtype=object)
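With current NumPy the same trick still applies, though filling the object array element-wise is the most robust way to keep each tuple intact (a sketch):

```python
import numpy as np

keys = [('a', 1), ('b', 2)]
key_array = np.empty(len(keys), dtype=object)
for i, k in enumerate(keys):   # element-wise keeps each tuple whole
    key_array[i] = k
```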

Thanks,
Michael.


Re: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.

2007-06-20 Thread Michael McNeil Forbes
Using take(), array(), or similar operations on the initial list
descends into the tuples, converting the list to a multi-dimensional
array:


>>> take(keys,[1,0],axis=0)
array([['b', '2'],
       ['a', '1']],
      dtype='|S4')

It is sorted as I want, but I can no longer use the rows as keys.  The
problem can be solved by creating an empty array first, then copying.


Thanks,
Michael.



On 6/20/07, Michael McNeil Forbes <[EMAIL PROTECTED]> wrote:
Hi,

I have a list of tuples that I am using as keys and I would like to
sort this along with some other arrays using argsort.  How can I do
this?  I would like to do something like:

You might want the keys in an object array, otherwise you end up  
with strings for all the values. Why are they keys? Do you want to  
sort on them also? Anyway, if you use take(keys, inds, axis=0) you  
will get an array with the rows containing the keys rearranged as I  
think you want.


Chuck




Re: [Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.

2007-06-20 Thread Michael McNeil Forbes
Hi,

That is a little more complicated than I want, but it shows me the  
solution: Construct the array of the desired shape first, then fill it.

data = [1.0, 3.0]
keys = [('a',1),('b',2)]

# Convert to arrays for indexing
data_array = array(data)
key_array = empty(len(keys),dtype=tuple)
key_array[:] = keys[:]

inds = argsort(data)
data_array[:] = data_array[inds]
key_array[:] = key_array[inds]

Thanks!
Michael.

On 20 Jun 2007, at 4:57 AM, Francesc Altet wrote:

> El dc 20 de 06 del 2007 a les 01:38 -0700, en/na Michael McNeil Forbes
> va escriure:
>> Hi,
>>
>> I have a list of tuples that I am using as keys and I would like to
>> sort this along with some other arrays using argsort.  How can I do
>> this?  I would like to do something like:
>>
>> # These are constructed using lists because they accumulate using
>> append()
>> data = [1.0, 3.0]
>> keys = [('a',1),('b',2)]
>>
>> # Convert to arrays for indexing
>> data = array(data)
>> keys = array(keys) # <--Converts to a 2d array rather than 1d array
>> of tuples.
>>
>> inds = argsort(data)
>> data[:] = data[inds]
>> keys[:] = keys[inds]
>>
>> It seems there should be some way of specifying to the array
>> constructor not to 'descend' (perhaps by specifying the desired
>> dimensions of the final array or something) but I cannot find a nice
>> way around this.
>
> Here is a possible approach using recarrays:
>
> In [54]:data = [3.0, 1.0]
> In [55]:keys = [('a',1),('b',2)]
> In [56]:tmp=numpy.array(keys, dtype="S1,i4")
> In [57]:dest=numpy.empty(shape=len(keys), dtype="S1,i4,f8")
> In [58]:dest['f0'] = tmp['f0']
> In [59]:dest['f1'] = tmp['f1']
> In [60]:dest['f2'] = data
> In [61]:dest
> Out[61]:
> array([('a', 1, 3.0), ('b', 2, 1.0)],
>   dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])
> In [62]:dest.sort(order='f2')
> In [63]:dest
> Out[63]:
> array([('b', 2, 1.0), ('a', 1, 3.0)],
>   dtype=[('f0', '|S1'), ('f1', '<i4'), ('f2', '<f8')])
>
> If you want to retrieve the keys and data from the dest recarray
> afterwards, that's easy:
>
> In [111]:data2=dest['f2'].tolist()
> In [112]:keys2=dest.getfield('S1,i4').tolist()
> In [113]:data2
> Out[113]:[1.0, 3.0]
> In [114]:keys2
> Out[114]:[('b', 2), ('a', 1)]
>
> Cheers,
>
> -- 
> Francesc Altet|  Be careful about using the following code --
> Carabos Coop. V.  |  I've only proven that it works,
> www.carabos.com   |  I haven't tested it. -- Donald Knuth
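(A restatement of Francesc's approach as a plain structured array, with
field names chosen here for readability rather than the automatic
'f0'/'f1'/'f2'; sorting on the data field carries the keys along.)

```python
import numpy as np

data = [3.0, 1.0]
keys = [('a', 1), ('b', 2)]

# One structured array carrying keys and data side by side
dest = np.empty(len(keys), dtype=[('key', 'S1'), ('num', 'i4'), ('val', 'f8')])
dest['key'] = [k[0] for k in keys]
dest['num'] = [k[1] for k in keys]
dest['val'] = data

dest.sort(order='val')   # sorts whole records by the data column
print(dest['val'])       # the data, now ascending
print(dest['key'])       # the keys, rearranged to match
```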



[Numpy-discussion] Suppressing "nesting" (recursion, descent) in array construction.

2007-06-20 Thread Michael McNeil Forbes
Hi,

I have a list of tuples that I am using as keys and I would like to  
sort this along with some other arrays using argsort.  How can I do  
this?  I would like to do something like:

# These are constructed using lists because they accumulate using  
append()
data = [1.0, 3.0]
keys = [('a',1),('b',2)]

# Convert to arrays for indexing
data = array(data)
keys = array(keys) # <-- Converts to a 2d array rather than a 1d array
of tuples.

inds = argsort(data)
data[:] = data[inds]
keys[:] = keys[inds]

It seems there should be some way of specifying to the array  
constructor not to 'descend' (perhaps by specifying the desired  
dimensions of the final array or something) but I cannot find a nice  
way around this.

Any suggestions?

Thanks,
Michael.



Re: [Numpy-discussion] take semantics (bug?)

2006-12-05 Thread Michael McNeil Forbes
Robert Kern <[EMAIL PROTECTED]> wrote:
> Michael McNeil Forbes wrote:
> > What are the semantics of the "take" function?
> > 
> > I would have expected that the following have the same shape and size:
> > 
> >>>> a = array([1,2,3])
> >>>> inds = a.nonzero()
> >>>> a[inds]
> > array([1, 2, 3])
> >>>> a.take(inds)
> > array([[1, 2, 3]])
> > 
> > Is there a bug somewhere here or is this intentional?
> 
> It's a result of a.nonzero() returning a tuple.
...
> __getitem__ interprets tuples specially: a[1,2,3] == a[(1,2,3)], also a[0,] 
> == a[0].
> 
> .take() doesn't; it simply tries to convert its argument into an array. It
> can convert (array([0, 1, 2]),) into array([[0, 1, 2]]), so it does.
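(A small sketch of the behaviour Robert describes: __getitem__ unpacks
the tuple returned by nonzero(), while take() converts the tuple itself
into a 2-D index array.)

```python
import numpy as np

a = np.array([1, 2, 3])
inds = a.nonzero()            # a 1-tuple: (array([0, 1, 2]),)

print(a[inds].shape)          # __getitem__ unpacks the tuple -> (3,)
print(a.take(inds).shape)     # take() sees a (1, 3) index array -> (1, 3)

# Passing the contained array instead gives matching results
print(a.take(inds[0]).shape)  # (3,)
```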

Okay.  I understand why this happens from the code.

1) Is there a design reason for this inconsistent treatment of "indices"?
2) If so, is there some place (perhaps on the Wiki or in some source 
code I cannot find) that design decisions like this are discussed?  (I 
have several other inconsistencies I would like to address, but would 
like to find out if they are "intentional" before wasting people's time.)

Thanks,
Michael.



[Numpy-discussion] take semantics (bug?)

2006-12-03 Thread Michael McNeil Forbes
What are the semantics of the "take" function?

I would have expected that the following have the same shape and size:

>>> a = array([1,2,3])
>>> inds = a.nonzero()
>>> a[inds]
array([1, 2, 3])
>>> a.take(inds)
array([[1, 2, 3]])

Is there a bug somewhere here or is this intentional?

Michael.
