Re: [Numpy-discussion] IDE's for numpy development?

2015-04-06 Thread Suzen, Mehmet
Hi Chuck,

Spyder is good if you are coming from the Matlab world.

http://spyder-ide.blogspot.co.uk/

I don't think it supports C, but maybe you are after Eclipse.

Best,
-m
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.10 release again.

2015-04-06 Thread Ralf Gommers
On Mon, Apr 6, 2015 at 9:22 PM, Charles R Harris charlesr.har...@gmail.com
wrote:

 Hi All,

 I'd like to mark current PRs for inclusion in 1.10.


Good idea. If you're going to do this, it may be helpful to create a new
1.10 milestone and keep but clean up the 1.10 blockers milestone so there
are only real blockers in there.


 If there is something that you want to have in the release, please mention
 it here by PR #. I think new enhancement PRs should be considered for 1.11
 rather than 1.10, but bug fixes will go in.


Assuming you mean no guarantees for anything that comes in from now on,
rather than that no one is allowed to merge new enhancement PRs before the
release split - makes sense.

 There is some flexibility, of course, as there are always last-minute items
 that come up when release contents are being decided.


I had a look through the complete set again. Of the ones that are not yet
marked for 1.10, those that look important to get in are:
- new contract function (#5488)
- the whole set of numpy.ma PRs
- the two numpy.distutils PRs (#4378, #5597)
- rewrite of docs on indexing (#4331)
- deciding on a bool indexing deprecation (#4353)
- weighted covariance for corrcoef (#4960)

There are too many PRs marked as 1.10 blockers; I think the only real
blockers are:
- __numpy_ufunc__ PRs (#4815, #4855)
- sgemv segfault workaround (#5237)
- fix for alignment issue (#5656)
- resolving the debate on diagonal (#5407)

Ralf


[Numpy-discussion] 1.10 release again.

2015-04-06 Thread Charles R Harris
Hi All,

I'd like to mark current PRs for inclusion in 1.10. If there is something
that you want to have in the release, please mention it here by PR #. I
think new enhancement PRs should be considered for 1.11 rather than 1.10,
but bug fixes will go in. There is some flexibility, of course, as there
are always last-minute items that come up when release contents are being
decided.

Chuck


Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 01:49, Nathaniel Smith wrote:

 Any opinions, objections?

Accelerate does not break multiprocessing, quite the opposite. The bug 
is in multiprocessing and has been fixed in Python 3.4.

My vote would nevertheless be for OpenBLAS if we can use it without 
producing test failures in NumPy and SciPy.

Most of the test failures with OpenBLAS and Carl Kleffner's toolchain on 
Windows are due to differences between Microsoft and MinGW runtime 
libraries and not due to OpenBLAS itself. These test failures are not 
relevant on Mac.

ATLAS can easily reduce the speed of a matrix product or a linear
algebra call by a factor of 20 compared to Accelerate, MKL or
OpenBLAS. It would give us bad karma.


Sturla






[Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Nathaniel Smith
Hi all,

Starting with 1.9.1, the official numpy OS X wheels (the ones you get by
doing pip install numpy) have been built to use Apple's Accelerate
library for linear algebra. This is fast, but it breaks multiprocessing in
obscure ways (e.g. see this user report:
https://github.com/numpy/numpy/issues/5752).
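(As an aside, not from the original message: you can check which BLAS/LAPACK a given numpy install was built against, which is useful for telling an Accelerate wheel from an OpenBLAS or ATLAS build. A minimal sketch using numpy's own build introspection:)

```python
import numpy as np

# Print the BLAS/LAPACK build information for this numpy install.
# On an Accelerate-based OS X wheel the output mentions the Accelerate
# framework; an OpenBLAS or ATLAS build lists those libraries instead.
np.show_config()
```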

Unfortunately, there is no obvious best solution to what linear algebra
package to use, so we have to make a decision as to which set of
compromises we prefer.

Options:

Accelerate: fast, but breaks multiprocessing as above.

OpenBLAS: fast, but Julian raised concerns about its trustworthiness last
year (
http://mail.scipy.org/pipermail/numpy-discussion/2014-March/069659.html).
Possibly things have improved since then (I get the impression that they've
gotten some additional developer attention from the Julia community), but I
don't know.

Atlas: slower (faster than reference blas but definitely slower than fancy
options like the above), but solid.

My feeling is that for wheels in particular it's more important that
everything just work than that we get the absolute fastest speeds. And
this is especially true for the multiprocessing issue, given that it's a
widely used part of the stdlib, the failures are really obscure/confusing,
and there is no workaround for Python 2, which is still where a majority of
our users are. So I'd vote for using either ATLAS or OpenBLAS. (And
would defer to Julian and Matthew about which to choose between these.)

Any opinions, objections?

-n


Re: [Numpy-discussion] IDE's for numpy development?

2015-04-06 Thread Sturla Molden
On 06/04/15 20:33, Suzen, Mehmet wrote:
 Hi Chuck,

 Spyder is good if you are coming from the Matlab world.

 http://spyder-ide.blogspot.co.uk/

 I don't think it supports C, but maybe you are after Eclipse.

Spyder supports C.


Sturla




Re: [Numpy-discussion] 1.10 release again.

2015-04-06 Thread Charles R Harris
On Mon, Apr 6, 2015 at 3:01 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:



 On Mon, Apr 6, 2015 at 9:22 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Hi All,

 I'd like to mark current PRs for inclusion in 1.10.


 Good idea. If you're going to do this, it may be helpful to create a new
 1.10 milestone and keep but clean up the 1.10 blockers milestone so there
 are only real blockers in there.


Good idea.




 If there is something that you want to have in the release, please
 mention it here by PR #. I think new enhancement PRs should be considered
 for 1.11 rather than 1.10, but bug fixes will go in.


 Assuming you mean no guarantees for anything that comes in from now on,
 rather than that no one is allowed to merge new enhancement PRs before the
 release split - makes sense.


 There is some flexibility, of course, as there are always last-minute
 items that come up when release contents are being decided.


 I had a look through the complete set again. Of the ones that are not yet
 marked for 1.10, those that look important to get in are:


Thanks for taking a look.


 - new contract function (#5488)
 - the whole set of numpy.ma PRs
 - the two numpy.distutils PRs (#4378, #5597)
 - rewrite of docs on indexing (#4331)
 - deciding on a bool indexing deprecation (#4353)
 - weighted covariance for corrcoef (#4960)

 There are too many PRs marked as 1.10 blockers; I think the only real
 blockers are:
 - __numpy_ufunc__ PRs (#4815, #4855)
 - sgemv segfault workaround (#5237)
 - fix for alignment issue (#5656)
 - resolving the debate on diagonal (#5407)


Chuck


Re: [Numpy-discussion] 1.10 release again.

2015-04-06 Thread Nathaniel Smith
On Apr 6, 2015 2:01 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 There are too many PRs marked as 1.10 blockers; I think the only real
blockers are:
 - __numpy_ufunc__ PRs (#4815, #4855)

The main blocker here is figuring out how to coordinate __numpy_ufunc__ and
__binop__ dispatch, e.g. PR #5748. We need to either resolve this or
disable __numpy_ufunc__ for another release (which would suck).

This needs some careful attention, so it'd be great if people could take a
look.

 - sgemv segfault workaround (#5237)
 - fix for alignment issue (#5656)

Agreed on these.

 - resolving the debate on diagonal (#5407)

Not really a blocker IMHO -- if we release 1.10 with the same settings as
1.9, then no harm will be done. (I guess some docs might be slightly off.)
And IMO that's the proper resolution for the moment anyway :-).

-n


Re: [Numpy-discussion] 1.10 release again.

2015-04-06 Thread Charles R Harris
On Mon, Apr 6, 2015 at 5:59 PM, Nathaniel Smith n...@pobox.com wrote:

 On Apr 6, 2015 2:01 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
 
  There are too many PRs marked as 1.10 blockers; I think the only real
 blockers are:
  - __numpy_ufunc__ PRs (#4815, #4855)

 The main blocker here is figuring out how to coordinate __numpy_ufunc__
 and __binop__ dispatch, e.g. PR #5748. We need to either resolve this or
 disable __numpy_ufunc__ for another release (which would suck).

 This needs some careful attention, so it'd be great if people could take a
 look.

  - sgemv segfault workaround (#5237)
  - fix for alignment issue (#5656)


I think #5316 is the alignment fix.


 Agreed on these.

  - resolving the debate on diagonal (#5407)



 Not really a blocker IMHO -- if we release 1.10 with the same settings as
 1.9, then no harm will be done. (I guess some docs might be slightly off.)
 And IMO that's the proper resolution for the moment anyway :-).


Asked for this to be reopened anyway, as it was closed by accident.

Chuck


Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:41, Nathaniel Smith wrote:

 Sure, but in some cases accelerate reduces speed by a factor of infinity
 by hanging, and OpenBLAS may or may not give wrong answers (but
 quickly!) since apparently they don't do regression tests, so we have to
 pick our poison.

OpenBLAS is safer on Mac than Windows (no MinGW related errors on Mac) 
so we should try it and see what happens.

GotoBLAS2 used to be great, so it can't be that bad :-)






Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Matthew Brett
Hi,

On Mon, Apr 6, 2015 at 5:13 PM, Sturla Molden sturla.mol...@gmail.com wrote:
 On 07/04/15 01:49, Nathaniel Smith wrote:

 Any opinions, objections?

 Accelerate does not break multiprocessing, quite the opposite. The bug
 is in multiprocessing and has been fixed in Python 3.4.

 My vote would nevertheless be for OpenBLAS if we can use it without
 producing test failures in NumPy and SciPy.

 Most of the test failures with OpenBLAS and Carl Kleffner's toolchain on
 Windows are due to differences between Microsoft and MinGW runtime
 libraries and not due to OpenBLAS itself. These test failures are not
 relevant on Mac.

 ATLAS can easily reduce the speed of a matrix product or a linear
 algebra call by a factor of 20 compared to Accelerate, MKL or
 OpenBLAS. It would give us bad karma.

ATLAS compiled with gcc also gives us some more license complication:

http://numpy-discussion.10968.n7.nabble.com/Copyright-status-of-NumPy-binaries-on-Windows-OS-X-tp38793p38824.html

I agree that big slowdowns would be dangerous for numpy's reputation.

Sturla - do you have a citable source for your factor of 20 figure?

Cheers,

Matthew


Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:13, Sturla Molden wrote:

 Most of the test failures with OpenBLAS and Carl Kleffner's toolchain on
 Windows are due to differences between Microsoft and MinGW runtime
 libraries

... and also differences in FPU precision.


Sturla



[Numpy-discussion] Multidimensional Indexing

2015-04-06 Thread Nicholas Devenish
With the indexing example from the documentation:

y = np.arange(35).reshape(5,7)

Why does selecting an item from explicitly every row work as I’d expect:
>>> y[np.array([0,1,2,3,4]), np.array([0,0,0,0,0])]
array([ 0,  7, 14, 21, 28])

But doing so from a full slice (which I would naively expect to mean “every
row”) has some… other… behaviour:

>>> y[:, np.array([0,0,0,0,0])]
array([[ 0,  0,  0,  0,  0],
       [ 7,  7,  7,  7,  7],
       [14, 14, 14, 14, 14],
       [21, 21, 21, 21, 21],
       [28, 28, 28, 28, 28]])

What is going on in this example, and how do I get what I expect? By
explicitly passing in an extra array with value == index? What is the
rationale for this difference in behaviour?
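(For reference, a sketch of both behaviours under NumPy's standard fancy-indexing rules, not from the original message: a slice mixed with an index array applies the array independently to every row the slice selects, whereas two index arrays are broadcast together and paired elementwise, so selecting one element per row needs an explicit row-index array.)

```python
import numpy as np

y = np.arange(35).reshape(5, 7)

# A slice combined with an index array: the column indices are applied
# to each of the 5 rows the slice selects, giving a 5x5 result.
mixed = y[:, np.array([0, 0, 0, 0, 0])]

# Two index arrays broadcast elementwise: row rows[i] is paired with
# column cols[i], giving one element per row.
rows = np.arange(5)
cols = np.zeros(5, dtype=int)
paired = y[rows, cols]

print(mixed.shape)  # (5, 5)
print(paired)       # [ 0  7 14 21 28]
```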

Thanks,

Nick


Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:19, Matthew Brett wrote:

 ATLAS compiled with gcc also gives us some more license complication:

 http://numpy-discussion.10968.n7.nabble.com/Copyright-status-of-NumPy-binaries-on-Windows-OS-X-tp38793p38824.html

Ok, then I have a question regarding OpenBLAS:

Do we use the f2c'd lapack_lite, or do we build LAPACK with gfortran and
link it into OpenBLAS? In the latter case we might get libquadmath
linked into the OpenBLAS binary as well.



 I agree that big slowdowns would be dangerous for numpy's reputation.

 Sturla - do you have a citable source for your factor of 20 figure?

I will look it up. The best thing would be to do a new benchmark though.

Another thing is that it depends on the hardware. ATLAS is not very scalable
on multiple processors, so it will be worse on a Mac Pro than a MacBook.
It will also be worse with AVX than without.



Sturla



Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Nathaniel Smith
On Apr 6, 2015 5:13 PM, Sturla Molden sturla.mol...@gmail.com wrote:

 On 07/04/15 01:49, Nathaniel Smith wrote:

  Any opinions, objections?

 Accelerate does not break multiprocessing, quite the opposite. The bug
 is in multiprocessing and has been fixed in Python 3.4.

I disagree, but it hardly matters: you can call it a bug in accelerate, or
call it a bug in python, but either way it's an issue that affects our
users and we need to either work around it or not.

 ATLAS can easily reduce the speed of a matrix product or a linear
 algebra call by a factor of 20 compared to Accelerate, MKL or
 OpenBLAS. It would give us bad karma.

Sure, but in some cases accelerate reduces speed by a factor of infinity by
hanging, and OpenBLAS may or may not give wrong answers (but quickly!)
since apparently they don't do regression tests, so we have to pick our
poison.

-n