Re: [Numpy-discussion] Removal of some numpy files

2017-02-25 Thread David Cournapeau
tools/win32build is used to build the so-called superpack installers, which
we don't build anymore, AFAIK.

tools/numpy-macosx-installer is used to build the .dmg for numpy (also not
used anymore AFAIK).

On Sat, Feb 25, 2017 at 3:21 PM, Charles R Harris  wrote:

> Hi All,
>
> While looking through the numpy tools directory I noticed some scripts
> that look outdated that might be candidates for removal:
>
>1. tools/numpy-macosx-installer/
>2. tools/win32build/
>
> Does anyone know if either of those is still relevant?
>
> Cheers,
>
> Chuck
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] Numpy 1.11.3, scipy 0.18.1, MSVC 2015 and crashes in complex functions

2017-01-23 Thread David Cournapeau
Indeed. I wrongly assumed that since Gohlke's wheels did not crash, they
did not run into that issue.

That sounds like an ABI issue, since I suspect the Intel math library supports
C99 complex numbers. I will add info to that issue, then.

David

On Mon, Jan 23, 2017 at 11:46 AM, Evgeni Burovski <
evgeny.burovs...@gmail.com> wrote:

> Related to https://github.com/scipy/scipy/issues/6336?
> 23.01.2017 14:40 пользователь "David Cournapeau" <courn...@gmail.com>
> написал:
>
>> Hi there,
>>
>> While building the latest scipy on top of numpy 1.11.3, I have noticed
>> crashes while running the scipy test suite, in scipy.special (e.g. in
>> the scipy.special hyp0f1 test). This only happens on Windows for Python 3.5
>> (where we use the MSVC 2015 compiler).
>>
>> Applying some violence to distutils, I re-built numpy/scipy with debug
>> symbols, and the debugger claims that crashes happen inside scipy.special
>> ufunc cython code, when calling clog or csqrt. I first suspected a compiler
>> bug, but disabling those functions in numpy, to force using our own
>> versions in npymath, made the problem go away.
>>
>> I am a bit suspicious about the whole thing, as neither conda's nor
>> Gohlke's wheels crashed. Has anybody else encountered this?
>>
>> David
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


[Numpy-discussion] Numpy 1.11.3, scipy 0.18.1, MSVC 2015 and crashes in complex functions

2017-01-23 Thread David Cournapeau
Hi there,

While building the latest scipy on top of numpy 1.11.3, I have noticed
crashes while running the scipy test suite, in scipy.special (e.g. in
the scipy.special hyp0f1 test). This only happens on Windows for Python 3.5
(where we use the MSVC 2015 compiler).

Applying some violence to distutils, I re-built numpy/scipy with debug
symbols, and the debugger claims that the crashes happen inside scipy.special
ufunc Cython code when calling clog or csqrt. I first suspected a compiler
bug, but disabling those functions in numpy, forcing the use of our own
versions in npymath, made the problem go away.

I am a bit suspicious about the whole thing, as neither conda's nor Gohlke's
wheels crashed. Has anybody else encountered this?

David


Re: [Numpy-discussion] NumPy 1.12.0 release

2017-01-18 Thread David Cournapeau
On Wed, Jan 18, 2017 at 11:43 AM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:

> The version of gcc used will make a large difference in some places.
> E.g. the AVX2 integer ufuncs require something around 4.5 to work and in
> general the optimization level of gcc has improved greatly since the
> clang competition showed up around that time. CentOS 5 has 4.1, which is
> really ancient.
> I thought the wheels used newer gccs also on CentOS 5?
>

I don't know if it is mandatory for many wheels, but it is possible to
build with gcc 4.8 at least and still keep binary compatibility with CentOS 5.X
and above, though I am not sure about the impact on speed.

Building numpy/scipy with gcc 4.1 has been causing trouble (errors and even
crashes) for quite some time already, so you definitely want to use a more
recent compiler in any case.

David


> On 18.01.2017 08:27, Nathan Goldbaum wrote:
> > I've seen reports on the anaconda mailing list of people seeing similar
> > speed ups when they compile e.g. Numpy with a recent gcc. Anaconda has
> > the same issue as manylinux in that they need to use versions of GCC
> > available on CentOS 5.
> >
> > Given the upcoming official EOL for CentOS5, it might make sense to
> > think about making a pep for a CentOS 6-based manylinux2 docker image,
> > which will allow compiling with a newer GCC.
> >
> > On Tue, Jan 17, 2017 at 9:15 PM Jerome Kieffer <jerome.kief...@esrf.fr> wrote:
> >
> > On Tue, 17 Jan 2017 08:56:42 -0500
> > Neal Becker <ndbeck...@gmail.com> wrote:
> >
> > > I've installed via pip3 on linux x86_64, which gives me a wheel. My
> > > question is, am I losing significant performance choosing this
> > > pre-built binary vs. compiling myself?  For example, my processor
> > > might have some more features than the base version used to build
> > > wheels.
> >
> > Hi,
> >
> > I have done some benchmarking (%timeit) for my code running in a
> > jupyter-notebook within a venv installed with pip+manylinux wheels
> > versus ipython and debian packages (on the same computer).
> > I noticed the debian installation was ~20% faster.
> >
> > I did not investigate further if those 20% came from the manylinux (I
> > suspect) or from the notebook infrastructure.
> >
> > HTH,
> > --
> > Jérôme Kieffer


Re: [Numpy-discussion] Dropping sourceforge for releases.

2016-10-02 Thread David Cournapeau
+1 from me.

If we really need some distribution on top of github/pypi, note that
bintray (https://bintray.com/) is free for OSS projects, and is a much
better experience than sourceforge.

David

On Sun, Oct 2, 2016 at 12:02 AM, Charles R Harris <charlesr.har...@gmail.com
> wrote:

> Hi All,
>
> Ralf has suggested dropping sourceforge as a NumPy release site. There was
> discussion of doing that some time back but we have not yet done it. Now
> that we put wheels up on PyPI for all supported architectures, sourceforge
> is not needed. I note that there are still some 15,000 downloads a week
> from the site, so it is still used.
>
> Thoughts?
>
> Chuck
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-08-31 Thread David Morris
On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri <mibi...@gmail.com> wrote:

> Hi all
>
> There are several ways on how to use C/C++ code from Python with NumPy, as
> given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore,
> there's at least pybind11.
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How
> would you do it if you had to make a C/C++ library available in Python
> right now?
>
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library also can use MPI to parallelize.
>

I have been delighted with Cython for this purpose.  It has great integration
with NumPy (you can access numpy arrays directly as C arrays), a very
Python-like syntax, and amazing performance.

Good luck,

David


Re: [Numpy-discussion] Added atleast_nd, request for clarification/cleanup of atleast_3d

2016-07-06 Thread David
Joseph Fox-Rabinovitz <...@gmail.com> writes:

> On Wed, Jul 6, 2016 at 2:57 PM, Eric Firing <...@hawaii.edu> wrote:
> > On 2016/07/06 8:25 AM, Benjamin Root wrote:
> >>
> >> I wouldn't have the keyword be "where", as that collides with the notion
> >> of "where" elsewhere in numpy.
> >
> > Agreed.  Maybe "side"?
>
> I have tentatively changed it to "pos". The reason that I don't like
> "side" is that it implies only a subset of the possible ways that the
> position of the new dimensions can be specified. The current
> implementation only puts things on one side or the other, but I have
> considered also allowing an array of indices at which to place new
> dimensions, and/or a dictionary keyed by the starting ndims. I do not
> think "side" would be appropriate for these extended cases, even if
> they are very unlikely to ever materialize.
>
> -Joe
>
> > (I find atleast_1d and atleast_2d to be very helpful for handling inputs,
> > as Ben noted; I'm skeptical as to the value of atleast_3d and atleast_nd.)
> >
> > Eric

What about `order='C'` or `order='F'` for the argument name?





[Numpy-discussion] How best to turn JSON into a CSV or Pandas data frame table?

2016-06-25 Thread David Shi




Which are the best ways to turn a JSON object into a CSV or Pandas data
frame table?
Looking forward to hearing from you.
Regards.
David
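
One minimal sketch, assuming a flat list-of-records JSON (nested structures
would need something like pandas' json_normalize instead):

import json
import pandas as pd

# illustrative payload; the real JSON will differ
payload = '[{"name": "a", "value": 1}, {"name": "b", "value": 2}]'

records = json.loads(payload)             # JSON text -> list of dicts
df = pd.DataFrame.from_records(records)   # list of dicts -> DataFrame
df.to_csv("out.csv", index=False)         # DataFrame -> CSV file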



Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-06-01 Thread David Cournapeau
On Tue, May 31, 2016 at 10:36 PM, Sturla Molden <sturla.mol...@gmail.com>
wrote:

> Joseph Martinot-Lagarde <contreba...@gmail.com> wrote:
>
> > The problem with FFTW is that its license is more restrictive (GPL), and
> > because of this may not be suitable everywhere numpy.fft is.
>
> A lot of us use NumPy linked with MKL or Accelerate, both of which have
> some really nifty FFTs. And the license issue is hardly any worse than
> linking with them for BLAS and LAPACK, which we do anyway. We could extend
> numpy.fft to use MKL or Accelerate when they are available.
>

That's what we used to do in scipy, but it was a PITA to maintain. Contrary
to blas/lapack, fft does not have a standard API, so exposing a consistent
API in python, including the data layout, involved quite a bit of work.

It is better to expose those through 3rd party APIs.

David


> Sturla
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>


[Numpy-discussion] Unable to pickle numpy array on iOS

2016-05-02 Thread David Morris
I have an application running on iOS where I pickle a numpy array in order
to save it for later use.  However, I receive the following error:

pickle.dumps(arr)
...
_pickle.PicklingError: Can't pickle :
import of module 'multiarray' failed

On a desktop system (OSX), there is no problem dumping the array.

I am using NumPy v1.9.3

Any ideas on why this might be happening?

Thank you,
David


Re: [Numpy-discussion] Windows wheels, built, but should we deploy?

2016-03-04 Thread David Cournapeau
On Fri, Mar 4, 2016 at 4:42 AM, Matthew Brett <matthew.br...@gmail.com>
wrote:

> Hi,
>
> Summary:
>
> I propose that we upload Windows wheels to pypi.  The wheels are
> likely to be stable and relatively easy to maintain, but will have
> slower performance than other versions of numpy linked against faster
> BLAS / LAPACK libraries.
>
> Background:
>
> There's a long discussion going on at issue github #5479 [1], where
> the old problem of Windows wheels for numpy came up.
>
> For those of you not following this issue, the current situation for
> community-built numpy Windows binaries is dire:
>
> * We have not so far provided windows wheels on pypi, so `pip install
> numpy` on Windows will bring you a world of pain;
> * Until recently we did provide .exe "superpack" installers on
> sourceforge, but these became increasingly difficult to build and we
> gave up building them as of the latest (1.10.4) release.
>
> Despite this, popularity of Windows wheels on pypi is high.   A few
> weeks ago, Donald Stufft ran a query for the binary wheels most often
> downloaded from pypi, for any platform [2] . The top five most
> downloaded were (n_downloads, name):
>
> 6646,
> numpy-1.10.4-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
> 5445, cryptography-1.2.1-cp27-none-win_amd64.whl
> 5243, matplotlib-1.4.0-cp34-none-win32.whl
> 5241, scikit_learn-0.15.1-cp34-none-win32.whl
> 4573, pandas-0.17.1-cp27-none-win_amd64.whl
>
> So a) the OSX numpy wheel is very popular and b) despite the fact that
> we don't provide a numpy wheel for Windows, matplotlib, sckit_learn
> and pandas, that depend on numpy, are the 3rd, 4th and 5th most
> downloaded wheels as of a few weeks ago.
>
> So, there seems to be a large appetite for numpy wheels.
>
> Current proposal:
>
> I have now built numpy wheels, using the ATLAS blas / lapack library -
> the build is automatic and reproducible [3].
>
> I chose ATLAS to build against, rather than, say OpenBLAS, because
> we've had some significant worries in the past about the reliability
> of OpenBLAS, and I thought it better to err on the side of
> correctness.
>
> However, these builds are relatively slow for matrix multiply and
> other linear algebra routines compared to numpy built against OpenBLAS or
> MKL (which we cannot use because of its license) [4].   In my very
> crude array test of a dot product and matrix inversion, the ATLAS
> wheels were 2-3 times slower than MKL.  Other benchmarks on Julia
> found about the same result for ATLAS vs OpenBLAS on 32-bit, but a
> much bigger difference on 64-bit (for an earlier version of ATLAS than
> we are currently using) [5].
>
> So, our numpy wheels likely to be stable and give correct results, but
> will be somewhat slow for linear algebra.
>

I would not worry too much about this: at worst, it gives us back the
situation we were in with the so-called superpack installers, which were
successful in the past at spreading numpy use on Windows.

My main worry is whether this locks us into ATLAS for a long time, because
of packages depending on numpy's blas/lapack (scipy, scikit-learn). I am not
sure how much this is the case.

David


>
> I propose that we upload these ATLAS wheels to pypi.  The upside is
> that this gives our Windows users a much better experience with pip,
> and allows other developers to build Windows wheels that depend on
> numpy.  The downside is that these will not be optimized for
> performance on modern processors.  In order to signal that, I propose
> adding the following text to the numpy pypi front page:
>
> ```
> All numpy wheels distributed from pypi are BSD licensed.
>
> Windows wheels are linked against the ATLAS BLAS / LAPACK library,
> restricted to SSE2 instructions, so may not give optimal linear
> algebra performance for your machine. See
> http://docs.scipy.org/doc/numpy/user/install.html for alternatives.
> ```
>
> In a way this is very similar to our previous situation, in that the
> superpack installers also used ATLAS - in fact an older version of
> ATLAS.
>
> Once we are up and running with numpy wheels, we can consider whether
> we should switch to other BLAS libraries, such as OpenBLAS or BLIS
> (see [6]).
>
> I'm posting here hoping for your feedback...
>
> Cheers,
>
> Matthew
>
>
> [1] https://github.com/numpy/numpy/issues/5479
> [2] https://gist.github.com/dstufft/1dda9a9f87ee7121e0ee
> [3] https://ci.appveyor.com/project/matthew-brett/np-wheel-builder
> [4] http://mingwpy.github.io/blas_lapack.html#intel-math-kernel-library
> [5] https://github.com/numpy/numpy/issues/5479#issuecomment-185033668
> [6] https://github.com/numpy/num

Re: [Numpy-discussion] PyData Madrid

2016-02-20 Thread David Cournapeau
On Sat, Feb 20, 2016 at 5:26 PM, Kiko <kikocorre...@gmail.com> wrote:

>
>
> 2016-02-20 17:58 GMT+01:00 Ralf Gommers <ralf.gomm...@gmail.com>:
>
>>
>>
>> On Wed, Feb 17, 2016 at 9:46 PM, Sebastian Berg <
>> sebast...@sipsolutions.net> wrote:
>>
>>> On Mi, 2016-02-17 at 20:59 +0100, Jaime Fernández del Río wrote:
>>> > Hi all,
>>> >
>>> > I just found out there is a PyData Madrid happening in early April,
>>> > and it would feel wrong not to go, it being my hometown and all.
>>> >
>>> > Aside from the usual "Who else is going? We should meet!" I was also
>>> > thinking of submitting a proposal for a talk.  My idea was to put
>>> > something together on "The future of NumPy indexing" and use it as an
>>> > opportunity to raise awareness and hopefully gather feedback from
>>> > users on the proposed changes, in sort of a "if the mountain won't
>>> > come to Muhammad" type of thing.
>>> >
>>>
>>> I guess you do know my last name means mountain in german? But if
>>> Muhammed might come, I should really improve my arabic ;).
>>>
>>> In any case sounds good to me if you like to do it, I don't think I
>>> will go, though it sounds nice.
>>>
>>
>> Sounds like a good idea to me too. I like both the concrete topic, as
>> well as just having a talk on Numpy at a PyData conference. In general
>> there are too few (if any) talks on Numpy and other core libraries at
>> PyData and Scipy confs I think.
>>
>
> +1.
>
> It would be great to have a numpy talk from a core developer. BTW, the C4P
> closes tomorrow!!!
>
> Jaime, if you come to Madrid you know you have some beers waiting for you.
>
> Disclaimer, I'm one of co-organizers of the PyData Madrid.
>

Since when does one need a disclaimer when offering beers? That would make
for a dangerous precedent :)

David

>
> Best.
>
>
>> Ralf
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] building NumPy with gcc if Python was built with icc?!?

2016-02-17 Thread David Cournapeau
On Tue, Feb 16, 2016 at 7:39 PM, BERGER Christian <christian.ber...@3ds.com>
wrote:

> Hi All,
>
>
>
> Here's a potentially dumb question: is it possible to build NumPy with
> gcc, if python was built with icc?
>
> Right now, the build is failing in the toolchain check phase, because gcc
> doesn't know how to handle icc-specific c flags (like -fp-model, prec-sqrt,
> ...)
>
> In our environment we're providing an embedded python that our customers
> should be able to use and extend with 3rd party modules (like numpy).
> Problem is that our sw is built using icc, but we don't want to force our
> customers to do the same and we also don't want to build every possible 3rd
> party module for our customers.
>

If you are the one providing python, your best bet is to post-process your
python installation to strip out the Intel-specific options. The process is
convoluted, but basically:

- at configure time, python distinguishes between CFLAGS and OPTS, and
will store both in your Makefile
- when python is installed, it will parse the Makefile and generate a
dict written into the python stdlib as _sysconfigdata.py

_sysconfigdata.py is what distutils uses to build C extensions.

This is only valid on Unix/Cygwin; if you are on Windows, the process is
completely different.
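
For illustration, a rough sketch of the flag-stripping idea (the flag list,
the keys, and the handling here are assumptions; the real post-processing
would rewrite the generated _sysconfigdata.py of the embedded python):

import sysconfig

# icc-only flags that gcc rejects; illustrative list, extend as needed
ICC_ONLY_PREFIXES = ("-fp-model", "-prec-sqrt", "-prec-div")

def strip_icc_flags(flags):
    """Return the flags string without icc-specific tokens."""
    kept, skip_next = [], False
    for tok in flags.split():
        if skip_next:                 # e.g. the argument of "-fp-model strict"
            skip_next = False
            continue
        if tok.startswith(ICC_ONLY_PREFIXES):
            skip_next = (tok == "-fp-model")
            continue
        kept.append(tok)
    return " ".join(kept)

# show what would change; the actual post-processing would write these
# cleaned values back into _sysconfigdata.py
for key in ("CFLAGS", "OPT", "LDSHARED"):
    value = sysconfig.get_config_var(key)
    if value:
        print(key, "->", strip_icc_flags(value))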

David


>
> Thanks for your help,
>
> Christian
>
>
>
> This email and any attachments are intended solely for the use of the
> individual or entity to whom it is addressed and may be confidential and/or
> privileged.
>
> If you are not one of the named recipients or have received this email in
> error,
>
> (i) you should not read, disclose, or copy it,
>
> (ii) please notify sender of your receipt by reply email and delete this
> email and all attachments,
>
> (iii) Dassault Systemes does not accept or assume any liability or
> responsibility for any use of or reliance on this email.
>
> For other languages, go to http://www.3ds.com/terms/email-disclaimer
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] Should I use pip install numpy in linux?

2016-01-15 Thread David Cournapeau
On Fri, Jan 15, 2016 at 9:56 PM, Steve Waterbury <water...@pangalactic.us>
wrote:

> On 01/15/2016 04:08 PM, Benjamin Root wrote:
>
>> So, again, I love conda for what it can do when it works well. I only
>> take exception to the notion that it can address *all* problems, because
>> there are some problems that it just simply isn't properly situated for.
>>
>
> Actually, I would say you didn't mention any ... ;)  The issue is
> not that it "isn't properly situated for" (whatever that means)
> the problems you describe, but that -- in the case you mention,
> for example -- no one has conda-packaged those solutions yet.
>
> FWIW, our sysadmins and I use conda for django / apache / mod_wsgi
> sites and we are very happy with it.  IMO, compiling mod_wsgi in
> the conda environment and keeping it up is trivial compared to the
> awkwardnesses introduced by using pip/virtualenv in those cases.
>
> We also use conda for sites with nginx and the conda-packaged
> uwsgi, which works great and even permits the use of a separate
> env (with, if necessary, different versions of django, etc.)
> for each application.  No need to set up an entire VM for each app!
> *My* sysadmins love conda -- as soon as they saw how much better
> than pip/virtualenv it was, they have never looked back.
>
> IMO, conda is by *far* the best packaging solution the python
> community has ever seen (and I have been using python for more
> than 20 years).  I too have been stunned by some of the resistance
> to conda that one sometimes sees in the python packaging world.
> I've had a systems package maintainer tell me "it solves a
> different problem [than pip]" ... hmmm ... I would say it
> solves the same problem *and more*, *better*.  I attribute
> some of the conda-ignoring to "NIH" and, to some extent,
> possibly defensiveness (I would be defensive too if I had been
> working on pip as long as they had when conda came along ;).
>

Conda and pip solve some of the same problems, but pip also does quite a
bit more than conda (and vice versa, as conda also acts as something akin to
rvm for python). Conda works so well because it supports a subset of what
pip does: installing things from binaries. This is the logical thing to do
when you want to distribute binaries, because in the python world, for
historical reasons, metadata are dynamic by the very nature of setup.py.

For a long time, pip worked from sources instead of binaries (this is
actually the reason why it was started following easy_install), and thus
had to cope with those dynamic metadata. It also has to deal with building
packages and the whole distutils/setuptools interoperability mess. Conda,
being solely a packaging solution, can sidestep all this complexity (which
is again the logical and smart thing to do if what you care about is
deployment).

Having pip understand conda packages is a non-trivial endeavour: since
conda packages are relocatable, they are not compatible with the usual
python interpreters, and setuptools metadata is neither a subset nor a
superset of conda metadata. Regarding conda/pip interoperability, there are
things that conda could (and IMO should) do, such as writing the expected
metadata in site-packages (PEP 376). Currently, conda does not recognize
packages installed by pip (because it does not implement PEP 376 and co),
so if you do a "pip install ." of a package, it will likely break the
existing package if present.

David


Re: [Numpy-discussion] Defining a base linux-64 environment [was: Should I use pip install numpy in linux?]

2016-01-09 Thread David Cournapeau
On Sat, Jan 9, 2016 at 12:20 PM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:

> On 09.01.2016 12:52, Robert McGibbon wrote:
> > Hi all,
> >
> > I went ahead and tried to collect a list of all of the libraries that
> > could be considered to constitute the "base" system for linux-64. The
> > strategy I used was to leverage off the work done by the folks at
> > Continuum by searching through their pre-compiled binaries
> > from https://repo.continuum.io/pkgs/free/linux-64/ to find shared
> > libraries that were dependened on (according to ldd)  that were not
> > accounted for by the declared dependencies that each package made known
> > to the conda package manager.
> >
>
> do those packages use ld --as-needed for linking?
> there are a lot libraries in that list that I highly doubt are directly
> used by the packages.
>

It is also a common problem when building packages without using a "clean"
build environment, as it is too easy to pick up dependencies accidentally,
especially for autotools-based packages (unless one uses pbuilder or
similar tools).

David


Re: [Numpy-discussion] Should I use pip install numpy in linux?

2016-01-09 Thread David Cournapeau
On Sat, Jan 9, 2016 at 12:12 PM, Julian Taylor <
jtaylor.deb...@googlemail.com> wrote:

> On 09.01.2016 04:38, Nathaniel Smith wrote:
> > On Fri, Jan 8, 2016 at 7:17 PM, Nathan Goldbaum <nathan12...@gmail.com>
> wrote:
> >> Doesn't building on CentOS 5 also mean using a quite old version of gcc?
> >
> > Yes. IIRC CentOS 5 ships with gcc 4.4, and you can bump that up to gcc
> > 4.8 by using the Redhat Developer Toolset release (which is gcc +
> > special backport libraries to let it generate RHEL5/CentOS5-compatible
> > binaries). (I might have one or both of those version numbers slightly
> > wrong.)
> >
> >> I've never tested this, but I've seen claims on the anaconda mailing
> list of
> >> ~25% slowdowns compared to building from source or using system
> packages,
> >> which was attributed to building using an older gcc that doesn't
> optimize as
> >> well as newer versions.
> >
> > I'd be very surprised if that were a 25% slowdown in general, as
> > opposed to a 25% slowdown on some particular inner loop that happened
> > to neatly match some new feature in a new gcc (e.g. something where
> > the new autovectorizer kicked in). But yeah, in general this is just
> > an inevitable trade-off when it comes to distributing binaries: you're
> > always going to pay some penalty for achieving broad compatibility as
> > compared to artisanally hand-tuned binaries specialized for your
> > machine's exact OS version, processor, etc. Not much to be done,
> > really. At some point the baseline for compatibility will switch to
> > "compile everything on CentOS 6", and that will be better but it will
> > still be worse than binaries that target CentOS 7, and so on and so
> > forth.
> >
>
> I have over the years put in one gcc specific optimization after the
> other so yes using an ancient version will make many parts significantly
> slower. Though that is not really a problem, updating a compiler is easy
> even without redhats devtoolset.
>
> At least as far as numpy is concerned linux binaries should not be a
> very big problem. The only dependency where the version matters is glibc
> which has updated its interfaces we use (in a backward compatible way)
> many times.
> But here if we use a old enough baseline glibc (e.g. centos5 or ubuntu
> 10.04) we are fine at reasonable performance costs, basically only
> slower memcpy.
>
> Scipy on the other hand is a larger problem as it contains C++ code.
> Linux systems are now transitioning to C++11 which is binary
> incompatible in parts to the old standard. There a lot of testing is
> necessary to check if we are affected.
> How does Anaconda deal with C++11?
>

For canopy packages, we use the RH devtoolset w/ gcc 4.8.X, and statically
link the C++ stdlib.

It has worked so far for the few packages requiring C++11 and gcc > 4.4
(llvm/llvmlite/dynd), but that's not a solution I am a fan of myself, as
the implications are not always very clear.

David


Re: [Numpy-discussion] Proposal: stop providing official win32 downloads (for now)

2015-12-22 Thread David Cournapeau
On Tue, Dec 22, 2015 at 7:11 PM, Chris Barker <chris.bar...@noaa.gov> wrote:

> On Mon, Dec 21, 2015 at 10:05 PM, Ralf Gommers <ralf.gomm...@gmail.com>
> wrote:
>
>>
>>>> There's a good chance that many downloads are from unsuspecting users
>> with a 64-bit Python, and they then just get an unhelpful "cannot find
>> Python" error from the installer.
>>
>
> could be -- hard to know.
>
>
>> At least until we have binary wheels on PyPi.
>>>
>>> What's up with that, by the way?
>>>
>>
>> I expect those to appear in 2016, built with MinGW-w64 and OpenBLAS.
>>
>
> nice. Anyway, I do think it's important to have an "official" easy way to
> get numpy for python.org pythons.
>
> numpy does/can/should see a lot of use outside the "scientific computing"
> community. And people are wary of dependencies. people should be able to
> use numpy in their projects, without requiring that their users start all
> over with Anaconda or ???
>
> The ideal is for "pip install" to "just work" -- sounds like we're getting
> there.
>
> BTW, we've been wary of putting a 32 bit wheel up 'cause of the whole
> "what processor features to require" issue, but if we think it's OK to drop
> the binary installer altogether, I can't see the harm in putting a 32 bit
> SSE2 wheel up.
>
> Any way to know how many people are running 32 bit Python on Windows these
> days??
>

I don't claim we are representative of the whole community, but as far as
Canopy is concerned, it is still a significant platform. That's the only
32-bit platform we still support (both Linux and OS X 32-bit were < 1% of our
downloads).

David


>
> -CHB
>
>
>
>
>
>
>
>
>
>
>
>
>
>>
>> Ralf
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-11 Thread David Cournapeau
On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald <archib...@astron.nl> wrote:

> Actually, GCC implements 128-bit floats in software and provides them as
> __float128; there are also quad-precision versions of the usual functions.
> The Intel compiler provides this as well, I think, but I don't think
> Microsoft compilers do. A portable quad-precision library might be less
> painful.
>
> The cleanest way to add extended precision to numpy is by adding a
> C-implemented dtype. This can be done in an extension module; see the
> quaternion and half-precision modules online.
>

We actually used a __float128 dtype as an example of how to create a custom
dtype in a numpy C tutorial we did with Stefan van der Walt a few years ago
at SciPy.

IIRC, one of the issues in making it more than a PoC was that numpy hardcoded
things like long double being the highest precision, etc. But that may have
been fixed since then.

David

> Anne
>
> On Fri, Dec 11, 2015, 16:46 Charles R Harris <charlesr.har...@gmail.com>
> wrote:
>
>> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel <baruc...@gmx.com>
>> wrote:
>>
>>> From time to time it is asked on forums how to extend precision of
>>> computation on Numpy array. The most common answer
>>> given to this question is: use the dtype=object with some arbitrary
>>> precision module like mpmath or gmpy.
>>> See
>>> http://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-algebra
>>> or
>>> http://stackoverflow.com/questions/21165745/precision-loss-numpy-mpmath
>>> or
>>> http://stackoverflow.com/questions/15307589/numpy-array-with-mpz-mpfr-values
>>>
>>> While this is obviously the most relevant answer for many users because
>>> it will allow them to use Numpy arrays exactly
>>> as they would have used them with native types, the wrong thing is that
>>> from some point of view "true" vectorization
>>> will be lost.
>>>
>>> With years I got very familiar with the extended double-double type
>>> which has (for usual architectures) about 32 accurate
>>> digits with faster arithmetic than "arbitrary precision types". I even
>>> used it for research purpose in number theory and
>>> I got convinced that it is a very wonderful type as long as such
>>> precision is suitable.
>>>
>>> I often implemented it partially under Numpy, most of the time by trying
>>> to vectorize at a low-level the libqd library.
>>>
>>> But I recently thought that a very nice and portable way of implementing
>>> it under Numpy would be to use the existing layer
>>> of vectorization on floats for computing the arithmetic operations by
>>> "columns containing half of the numbers" rather than
>>> by "full numbers". As a proof of concept I wrote the following file:
>>> https://gist.github.com/baruchel/c86ed748939534d8910d
>>>
>>> I converted and vectorized the Algol 60 codes from
>>> http://szmoore.net/ipdf/documents/references/dekker1971afloating.pdf
>>> (Dekker, 1971).
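
As an aside, the core of that Dekker-style scheme can be sketched with plain
vectorized NumPy, each number stored as a (hi, lo) pair of float64 arrays; a
minimal illustration only, not the code from the gist:

import numpy as np

def two_sum(a, b):
    # error-free transformation: a + b == s + e exactly (Knuth/Dekker)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(xhi, xlo, yhi, ylo):
    # add two double-double numbers stored as (hi, lo) float64 arrays
    s, e = two_sum(xhi, yhi)
    e = e + xlo + ylo
    return two_sum(s, e)          # renormalize into a (hi, lo) pair

xhi, xlo = np.full(5, 1.0), np.full(5, 1e-20)   # 1 + 1e-20, five times
hi, lo = dd_add(xhi, xlo, xhi, xlo)             # hi == 2.0, lo == 2e-20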
>>>
>>> A test is provided at the end; for inverting 100,000 numbers, my type is
>>> about 3 or 4 times faster than GMPY and almost
>>> 50 times faster than MPmath. It should be even faster for some other
>>> operations since I had to create another np.ones
>>> array for testing this type because inversion isn't implemented here
>>> (which could of course be done). You can run this file by yourself
>>> (maybe you will have to discard mpmath or gmpy if you don't have it).
>>>
>>> I would like to discuss about the way to make available something
>>> related to that.
>>>
>>> a) Would it be relevant to include that in Numpy ? (I would think to
>>> some "contribution"-tool rather than including it in
>>> the core of Numpy because it would be painful to code all ufuncs; on the
>>> other hand I am pretty sure that many would be happy
>>> to perform several arithmetic operations by knowing that they can't use
>>> cos/sin/etc. on this type; in other words, I am not
>>> sure it would be a good idea to embed it as an every-day type but I
>>> think it would be nice to have it quickly available
>>> in some way). If you agree with that, in which way should I code it (the
>>> current link only is a "proof of concept"; I would
>>> be very happy to code it in some cleaner way)?
>>>
>>> b) Do you think such attempt should remain something external to Numpy
&

Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-06 Thread DAVID SAROFF (RIT Student)
Allan,

I see with a google search on your name that you are in the physics
department at Rutgers. I got my BA in Physics there. 1975. Biological
physics. A thought: Is there an entropy that can be assigned to the dna in
an organism? I don't mean the usual thing, coupled to the heat bath.
Evolution blindly explores metabolic and signalling pathways, and tends
towards disorder, as long as it functions. Someone working out signaling
pathways some years ago wrote that they were senselessly complex, branched
and interlocked. I think that is to be expected. Evolution doesn't find
minimalist, clear, rational solutions. Look at the amazon rain forest. What
are all those beetles and butterflies and frogs for? It is the wrong
question. I think some measure of the complexity could be related to the
amount of time that ecosystem has existed. Similarly for genomes.

On Sun, Dec 6, 2015 at 6:55 PM, Allan Haldane <allanhald...@gmail.com>
wrote:

>
> I've also often wanted to generate large datasets of random uint8 and
> uint16. As a workaround, this is something I have used:
>
> np.ndarray(100, 'u1', np.random.bytes(100))
>
> It has also crossed my mind that np.random.randint and np.random.rand
> could use an extra 'dtype' keyword. It didn't look easy to implement though.
>
> Allan
>
> On 12/06/2015 04:55 PM, DAVID SAROFF (RIT Student) wrote:
>
>> Matthew,
>>
>> That looks right. I'm concluding that the .astype(np.uint8) is applied
>> after the array is constructed, instead of during the process. This
>> random array is a test case. In the production analysis of radio
>> telescope data this is how the data comes in, and there is no  problem
>> with 10GBy files.
>> linearInputData = np.fromfile(dataFile, dtype = np.uint8, count = -1)
>> spectrumArray = linearInputData.reshape(nSpectra,sizeSpectrum)
>>
>>
>> On Sun, Dec 6, 2015 at 4:07 PM, Matthew Brett <matthew.br...@gmail.com
>> <mailto:matthew.br...@gmail.com>> wrote:
>>
>> Hi,
>>
>> On Sun, Dec 6, 2015 at 12:39 PM, DAVID SAROFF (RIT Student)
>> <dps7...@rit.edu <mailto:dps7...@rit.edu>> wrote:
>> > This works. A big array of eight bit random numbers is constructed:
>> >
>> > import numpy as np
>> >
>> > spectrumArray = np.random.randint(0,255,
>> (2**20,2**12)).astype(np.uint8)
>> >
>> >
>> >
>> > This fails. It eats up all 64GBy of RAM:
>> >
>> > spectrumArray = np.random.randint(0,255,
>> (2**21,2**12)).astype(np.uint8)
>> >
>> >
>> > The difference is a factor of two, 2**21 rather than 2**20, for the
>> extent
>> > of the first axis.
>>
>> I think what's happening is that this:
>>
>> np.random.randint(0,255, (2**21,2**12))
>>
>> creates 2**33 random integers, which (on 64-bit) will be of dtype
>> int64 = 8 bytes, giving total size 2 ** (21 + 12 + 6) = 2 ** 39 bytes
>> = 512 GiB.
>>
>> Cheers,
>>
>> Matthew
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org <mailto:NumPy-Discussion@scipy.org>
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>>
>>
>> --
>> David P. Saroff
>> Rochester Institute of Technology
>> 54 Lomb Memorial Dr, Rochester, NY 14623
>> david.sar...@mail.rit.edu <mailto:david.sar...@mail.rit.edu> | (434)
>> 227-6242
>>
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
David P. Saroff
Rochester Institute of Technology
54 Lomb Memorial Dr, Rochester, NY 14623
david.sar...@mail.rit.edu | (434) 227-6242


Re: [Numpy-discussion] array of random numbers fails to construct

2015-12-06 Thread DAVID SAROFF (RIT Student)
Matthew,

That looks right. I'm concluding that the .astype(np.uint8) is applied
after the array is constructed, instead of during the process. This random
array is a test case. In the production analysis of radio telescope data
this is how the data comes in, and there is no problem with 10GBy files.
linearInputData = np.fromfile(dataFile, dtype = np.uint8, count = -1)
spectrumArray = linearInputData.reshape(nSpectra,sizeSpectrum)


On Sun, Dec 6, 2015 at 4:07 PM, Matthew Brett <matthew.br...@gmail.com>
wrote:

> Hi,
>
> On Sun, Dec 6, 2015 at 12:39 PM, DAVID SAROFF (RIT Student)
> <dps7...@rit.edu> wrote:
> > This works. A big array of eight bit random numbers is constructed:
> >
> > import numpy as np
> >
> > spectrumArray = np.random.randint(0,255, (2**20,2**12)).astype(np.uint8)
> >
> >
> >
> > This fails. It eats up all 64GBy of RAM:
> >
> > spectrumArray = np.random.randint(0,255, (2**21,2**12)).astype(np.uint8)
> >
> >
> > The difference is a factor of two, 2**21 rather than 2**20, for the
> extent
> > of the first axis.
>
> I think what's happening is that this:
>
> np.random.randint(0,255, (2**21,2**12))
>
> creates 2**33 random integers, which (on 64-bit) will be of dtype
> int64 = 8 bytes, giving total size 2 ** (21 + 12 + 6) = 2 ** 39 bytes
> = 512 GiB.
>
> Cheers,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
David P. Saroff
Rochester Institute of Technology
54 Lomb Memorial Dr, Rochester, NY 14623
david.sar...@mail.rit.edu | (434) 227-6242


[Numpy-discussion] array of random numbers fails to construct

2015-12-06 Thread DAVID SAROFF (RIT Student)
This works. A big array of eight bit random numbers is constructed:

import numpy as np

spectrumArray = np.random.randint(0,255, (2**20,2**12)).astype(np.uint8)



This fails. It eats up all 64GBy of RAM:

spectrumArray = np.random.randint(0,255, (2**21,2**12)).astype(np.uint8)


The difference is a factor of two, 2**21 rather than 2**20, for the extent
of the first axis.
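
One possible workaround (a sketch, not part of the original post) is to fill
the uint8 array in row chunks, so that randint's temporary int64 array stays
small:

import numpy as np

n_rows, n_cols = 2**21, 2**12
spectrumArray = np.empty((n_rows, n_cols), dtype=np.uint8)   # ~8 GiB result

chunk = 2**13   # rows per chunk; the int64 temporary is chunk * n_cols * 8 bytes
for start in range(0, n_rows, chunk):
    stop = min(start + chunk, n_rows)
    spectrumArray[start:stop] = np.random.randint(
        0, 255, (stop - start, n_cols)).astype(np.uint8)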

-- 
David P. Saroff
Rochester Institute of Technology
54 Lomb Memorial Dr, Rochester, NY 14623
david.sar...@mail.rit.edu | (434) 227-6242


Re: [Numpy-discussion] f2py, numpy.distutils and multiple Fortran source files

2015-12-05 Thread David Verelst
Thanks a lot for providing the example Sturla, that is exactly what we are
looking for!

On 4 December 2015 at 11:34, Sturla Molden <sturla.mol...@gmail.com> wrote:

> On 03/12/15 22:07, David Verelst wrote:
>
> Can this workflow be incorporated into |setuptools|/|numpy.distutils|?
>> Something along the lines as:
>>
>
> Take a look at what SciPy does.
>
>
> https://github.com/scipy/scipy/blob/81c096001974f0b5efe29ec83b54f725cc681540/scipy/fftpack/setup.py
>
> Multiple Fortran files are compiled into a static library using
> "add_library", which is subsequently linked to the extension module.
>
>
> Sturla
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>


Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-04 Thread David Cournapeau
On Fri, Dec 4, 2015 at 11:06 AM, Nathaniel Smith <n...@pobox.com> wrote:

> On Fri, Dec 4, 2015 at 1:27 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> > I would be in favour of dropping 3.3, but not 2.6 until it becomes too
> > cumbersome to support.
> >
> > As a data point, as of april, 2.6 was more downloaded than all python 3.X
> > versions together when looking at pypi numbers:
> > https://caremad.io/2015/04/a-year-of-pypi-downloads/
>
> I'm not sure what's up with those numbers though -- they're *really*
> unrepresentative of what we see for numpy otherwise. E.g. they show
> 3.X usage as ~5%, but for numpy, 3.x usage has risen past 25%.
> (Source: 'vanity numpy', looking at OS X wheels b/c they're
> per-version and unpolluted by CI download spam. Unfortunately this
> doesn't provide numbers for 2.6 b/c we don't ship 2.6 binaries.) For
> all we know all those 2.6 downloads are travis builds testing projects
> on 2.6 to make sure they keep working because there are so many 2.6
> downloads on pypi :-). Which isn't an argument for dropping 2.6
> either, I just wouldn't put much weight on that blog post either
> way...
>

I agree pypi is only one data point. The proportion is also
package-dependent (e.g. django had a higher proportion of python 3.X users).
It is just that having multiple data points is often more useful than
guesses.

David


Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-04 Thread David Cournapeau
I would be in favour of dropping 3.3, but not 2.6 until it becomes too
cumbersome to support.

As a data point, as of April, 2.6 was downloaded more than all python 3.X
versions together when looking at pypi numbers:
https://caremad.io/2015/04/a-year-of-pypi-downloads/

David

On Thu, Dec 3, 2015 at 11:03 PM, Jeff Reback <jeffreb...@gmail.com> wrote:

> pandas is going to drop
> 2.6 and 3.3 next release at end of Jan
>
> (3.2 dropped in 0.17, in October)
>
>
>
> I can be reached on my cell 917-971-6387
> > On Dec 3, 2015, at 6:00 PM, Bryan Van de Ven <bry...@continuum.io>
> wrote:
> >
> >
> >> On Dec 3, 2015, at 4:59 PM, Eric Firing <efir...@hawaii.edu> wrote:
> >>
> >> Chuck,
> >>
> >> I would support dropping the old versions now.  As a related data
> point, matplotlib is testing master on 2.7, 3.4, and 3.5--no more 2.6 and
> 3.3.
> >
> > Ditto for Bokeh.
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>


Re: [Numpy-discussion] future of f2py and Fortran90+

2015-12-03 Thread David Verelst
f90wrap [1] extends the functionality of f2py, and can automatically
generate sensible wrappers for certain cases.
[1] https://github.com/jameskermode/f90wrap

On 15 July 2015 at 03:45, Sturla Molden  wrote:

> Eric Firing  wrote:
>
> > I'm curious: has anyone been looking into what it would take to enable
> > f2py to handle modern Fortran in general?  And into prospects for
> > getting such an effort funded?
>
> No need. Use Cython and Fortran 2003 ISO C bindings. That is the only
> portable way to interop between Fortran and C (including CPython) anyway.
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>


[Numpy-discussion] f2py, numpy.distutils and multiple Fortran source files

2015-12-03 Thread David Verelst
Hi,

For the wafo [1] package we are trying to include the extension compilation
process in setup.py [2] by using setuptools and numpy.distutils [3]. Some
of the extensions have one Fortran interface source file, but it depends on
several other Fortran sources (modules). The manual compilation process
would go as follows:

gfortran -fPIC -c source_01.f
gfortran -fPIC -c source_02.f
f2py -m module_name -c source_01.o source_02.o source_interface.f

Can this workflow be incorporated into setuptools/numpy.distutils?
Something along the lines as:

from numpy.distutils.core import setup, Extension
ext = Extension('module.name',
                depends=['source_01.f', 'source_02.f'],
                sources=['source_interface.f'])

(note that the above does not work)

[1] https://github.com/wafo-project/pywafo
[2] https://github.com/wafo-project/pywafo/blob/pipinstall/setup.py
[3] http://docs.scipy.org/doc/numpy/reference/distutils.html
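
A hedged sketch of the static-library route the follow-up reply (earlier in
this digest) points to -- SciPy's fftpack setup.py compiles the plain Fortran
sources into a static library with add_library and links it into the
extension. Module and file names below just mirror the example above and are
illustrative:

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('module', parent_package, top_path)
    # compile the plain Fortran sources into a static helper library ...
    config.add_library('module_src', sources=['source_01.f', 'source_02.f'])
    # ... and link it into the f2py-wrapped extension; 'name.pyf' is an f2py
    # signature file (e.g. generated once with
    #   f2py -m name -h name.pyf source_interface.f)
    config.add_extension('name',
                         sources=['name.pyf', 'source_interface.f'],
                         libraries=['module_src'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)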

Regards,
David


Re: [Numpy-discussion] Question about structure arrays

2015-11-09 Thread David Morris
On Nov 7, 2015 2:58 PM, "aerojockey" <pythond...@aerojockey.com> wrote:
>
> Hello,
>
> Recently I made some changes to a program I'm working on, and found that
> the changes made it four times slower than before.  After some digging, I
> found out that one of the new costs was that I added structure arrays.
> Inside a low-level loop, I create a structure array, populate it in Python,
> then turn it over to some handwritten C code for processing.  It turned out
> that, when passed a structure array as a dtype, numpy has to parse the
> dtype, which included calls to re.match and eval.
>
> Now, this is not a big deal for me to work around by using ordinary slicing
> and such, and also I can improve things by reusing arrays.  Since this is
> inner-loop stuff, sacrificing readability for speed is an appropriate
> tradeoff.
>
> Nevertheless, I was curious if there was a way (or any plans for there to
> be a way) to compile a structure array dtype.  I realize it's not the
> bread-and-butter of numpy, but it turned out to be a very convenient
> feature for my use case (populating an array of structures to pass off to
> C).
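
For reference, a hedged sketch of the "reuse arrays" idea mentioned above, so
the structured dtype is parsed once rather than inside the inner loop (field
names, the buffer size, and the process_in_c callable are all illustrative):

import numpy as np

# parse the structured dtype once, outside the inner loop
point_dtype = np.dtype([('x', np.float64), ('y', np.float64), ('id', np.int32)])
scratch = np.empty(4096, dtype=point_dtype)     # reused across iterations

def pack_and_process(xs, ys, ids, process_in_c):
    n = len(xs)                 # assumes n <= len(scratch)
    rec = scratch[:n]           # a view: no new dtype parsing, no allocation
    rec['x'] = xs
    rec['y'] = ys
    rec['id'] = ids
    process_in_c(rec)           # hand the packed structs to the C routine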

I was just looking into structured arrays. In case it is relevant: are you
using 1.10? Structured arrays there are apparently a LOT slower than in
1.9.3, an issue which will be fixed in a future version.

David


Re: [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead

2015-10-27 Thread David Cournapeau
On Tue, Oct 27, 2015 at 2:31 PM, Edison Gustavo Muenz <
edisongust...@gmail.com> wrote:

> I'm sorry if this is out-of-topic, but I'm curious on why nobody mentioned
> Conda yet.
>

Conda is a binary distribution system, whereas we are talking about
installing from sources. You will need a way to install things when
building a conda package in any case.

David


> Is there any particular reason for not using it?
>
> On Tue, Oct 27, 2015 at 11:48 AM, James E.H. Turner <jehtur...@gmail.com>
> wrote:
>
>>> Apparently it is not well known that if you have a Python project
>>> source tree (e.g., a numpy checkout), then the correct way to install
>>> it is NOT to type
>>>
>>>    python setup.py install   # bad and broken!
>>>
>>> but rather to type
>>>
>>>    pip install .
>>>
>>
>> Though I haven't studied it exhaustively, it always seems to me that
>> pip is bad & broken, whereas python setup.py install does what I
>> expect (even if it's a mess internally). In particular, when
>> maintaining a distribution of Python packages, you try to have some
>> well-defined, reproducible build from source tarballs and then you
>> find that pip is going off and downloading stuff under the radar
>> without being asked (etc.). Stopping that can be a pain & I always
>> groan whenever some package insists on using pip. Maybe I don't
>> understand it well enough but in this role its dependency handling
>> is an unnecessary complication with no purpose. Just a comment that
>> not every installation is someone trying to get numpy on their
>> laptop...
>>
>> Cheers,
>>
>> James.
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] Numpy 1.10.1 released.

2015-10-14 Thread David Cournapeau
On Wed, Oct 14, 2015 at 5:38 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Oct 14, 2015 9:15 AM, "Chris Barker" <chris.bar...@noaa.gov> wrote:
> >
> > On Mon, Oct 12, 2015 at 9:27 AM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
> >>
> >> * Compiling with msvc9 or msvc10 for 32 bit Windows now requires SSE2.
> >>   This was the easiest fix for what looked to be some miscompiled code
> when
> >>   SSE2 was not used.
> >
> >
> > Note that there is discussion right now on python-dev about requiring
> SSE2 for the python.org build of python 3.5 -- it does now, so it's fine
> for third party packages to also require it. But there is some talk of
> removing that requirement -- still a lot of old machines around, I guess --
> particularly at schools and the like.
>
> Note that the 1.10.1 release announcement is somewhat misleading --
> apparently the affected builds have actually required SSE2 since numpy 1.8,
> and the change here just makes it even more required. I'm not sure if this
> is all 32 bit builds or only ones using msvc that have been needing SSE2
> all along. The change in 1.10.1 only affects msvc, which is not what most
> people are using (IIUC Enthought Canopy uses msvc, but the pypi, Gohlke,
> and Anaconda builds don't).
>
> I'm actually not sure if anyone even uses the 32 bit builds at all :-)
>

I cannot divulge exact download figures, but for us at Enthought,
Windows 32-bit is in the same ballpark as OS X and Linux (64-bit) in
terms of proportion, with Windows 64-bit being significantly more popular.

Linux 32-bit and OS X 32-bit have each been in the 1% range of our
downloads for a while (we recently stopped supporting both).

David

> > Ideally, any binary wheels on PyPi should be compatible with the
> python.org builds -- so not require SSE2, if the python.org builds don't.
> >
> > Though we had this discussion a while back -- and numpy could, and maybe
> should require more -- did we ever figure out a way to get a meaningful
> message to the user if they try to run an SSE2 build on a machine without
> SSE2?
>
> It's not that difficult in principle, just someone has to do it :-).
>
> -n
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>


Re: [Numpy-discussion] reorganizing numpy internal extensions (was: Re: Should we drop support for "one file" compilation mode?)

2015-10-08 Thread David Cournapeau
On Tue, Oct 6, 2015 at 8:04 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Tue, Oct 6, 2015 at 11:52 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> >
> >
> > On Tue, Oct 6, 2015 at 7:30 PM, Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> [splitting this off into a new thread]
> >>
> >> On Tue, Oct 6, 2015 at 3:00 AM, David Cournapeau <courn...@gmail.com>
> >> wrote:
> >> [...]
> >> > I also agree the current situation is not sustainable -- as we
> discussed
> >> > privately before, cythonizing numpy.core is made quite more
> complicated
> >> > by
> >> > this. I have myself quite a few issues w/ cythonizing the other parts
> of
> >> > umath. I would also like to support the static link better than we do
> >> > now
> >> > (do we know some static link users we can contact to validate our
> >> > approach
> >> > ?)
> >> >
> >> > Currently, what we have in numpy core is the following:
> >> >
> >> > numpy.core.multiarray -> compilation units in
> numpy/core/src/multiarray/
> >> > +
> >> > statically link npymath
> >> > numpy.core.umath -> compilation units in numpy/core/src/umath +
> >> > statically
> >> > link npymath/npysort + some shenanigans to use things in
> >> > numpy.core.multiarray
> >>
> >> There are also shenanigans in the other direction - supposedly umath
> >> is layered "above" multiarray, but in practice there are circular
> >> dependencies (see e.g. np.set_numeric_ops).
> >
> > Indeed, I am not arguing about merging umath and multiarray.
>
> Oh, okay :-).
>
> >> > I would suggest to have a more layered approach, to enable both
> 'normal'
> >> > build and static build, without polluting the public namespace too
> much.
> >> > This is an approach followed by most large libraries (e.g. MKL), and
> is
> >> > fairly flexible.
> >> >
> >> > Concretely, we could start by putting more common functionalities (aka
> >> > the
> >> > 'core' library) into its own static library. The API would be
> considered
> >> > private to numpy (no stability guaranteed outside numpy), and every
> >> > exported
> >> > symbol from that library would be decorated appropriately to avoid
> >> > potential
> >> > clashes (e.g. '_npy_internal_').
> >>
> >> I don't see why we need this multi-layered complexity, though.
> >
> >
> > For several reasons:
> >
> >  - when you want to cythonize either extension, it is much easier to
> > separate it as cython for CPython API, C for the rest.
>
> I don't think this will help much, because I think we'll want to have
> multiple cython files, and that we'll probably move individual
> functions between being implemented in C and Cython (including utility
> functions). So that means we need to solve the problem of mixing C and
> Cython files inside a single library.
>

Separating the pure C code into a static lib is the simple way of achieving
the same goal. Essentially, you write:

# implemented in npyinternal.a
_npy_internal_foo()

# implemented in merged_multiarray_umath.pyx
cdef PyArray_Foo(...):
    # use _npy_internal_foo()

then our merged_multiarray_umath.so is built by linking the .pyx and the
npyinternal.a together. IOW, the static link is internal.

Going through npyinternal.a instead of just linking .o from pure C and
Cython together gives us the following:

 1. the .a can just use normal linking strategies instead of the awkward
capsule thing. Those are easy to get wrong when using cython, as you may end
up with multiple internal copies of the wrapped object inside the capsule,
causing hard-to-track bugs (this is what we wasted most of the time on w/
Stefan and Kurt during ds4ds)
 2. the only public symbols in .a are the ones needed by the cython
wrapping, and since those are decorated with npy_internal, clashes are
unlikely to happen
 3. since most of the code is already in .a internally, supporting the
static linking should be simpler since the only difference is how you
statically link the cython-generated code. Because of 1, you are also less
likely to cause nasty surprises when putting everything together.

When you cythonize umath/multiarray, you need to do most of the underlying
work anyway.

I don't really care if the files are in the same directory or not, we can
keep things as they are now.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] reorganizing numpy internal extensions (was: Re: Should we drop support for "one file" compilation mode?)

2015-10-08 Thread David Cournapeau
On Thu, Oct 8, 2015 at 8:47 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Oct 8, 2015 06:30, "David Cournapeau" <courn...@gmail.com> wrote:
> >
> [...]
> >
> > Separating the pure C code into static lib is the simple way of
> achieving the same goal. Essentially, you write:
> >
> > # implemented in npyinternal.a
> > _npy_internal_foo()
> >
> > # implemented in merged_multiarray_umath.pyx
> > cdef PyArray_Foo(...):
> > # use _npy_internal_foo()
> >
> > then our merged_multiarray_umath.so is built by linking the .pyx and the
> npyinternal.a together. IOW, the static link is internal.
> >
> > Going through npyinternal.a instead of just linking .o from pure C and
> Cython together gives us the following:
> >
> >  1. the .a can just use normal linking strategies instead of the awkward
> capsule thing. Those are easy to get wrong when using cython as you may end
> up with multiple internal copies of the wrapped object inside capsule,
> causing hard to track bugs (this is what we wasted most of the time on w/
> Stefan and Kurt during ds4ds)
>
> Check out Stéfan's branch -- it just uses regular linking to mix cython
> and C.
>
I know, we worked on this together after all ;)

My suggested organisation is certainly not mandatory; I was not trying to
claim otherwise. Sorry if that was unclear.

At that point, I guess the consensus is that I have to prove my suggestion
is useful. I will take a few more hours to submit a PR with the umath
conversion (maybe merging w/ the work from Stéfan). I discovered on my
flight back that you can call PyModule_Init multiple times for a given
module, which is useful while we do the transition C->Cython for the module
initialization (it is not documented as possible, so I would rather not rely
on it for long either).

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] reorganizing numpy internal extensions (was: Re: Should we drop support for "one file" compilation mode?)

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 7:30 PM, Nathaniel Smith <n...@pobox.com> wrote:

> [splitting this off into a new thread]
>
> On Tue, Oct 6, 2015 at 3:00 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> [...]
> > I also agree the current situation is not sustainable -- as we discussed
> > privately before, cythonizing numpy.core is made quite more complicated
> by
> > this. I have myself quite a few issues w/ cythonizing the other parts of
> > umath. I would also like to support the static link better than we do now
> > (do we know some static link users we can contact to validate our
> approach
> > ?)
> >
> > Currently, what we have in numpy core is the following:
> >
> > numpy.core.multiarray -> compilation units in numpy/core/src/multiarray/
> +
> > statically link npymath
> > numpy.core.umath -> compilation units in numpy/core/src/umath +
> statically
> > link npymath/npysort + some shenanigans to use things in
> > numpy.core.multiarray
>
> There are also shenanigans in the other direction - supposedly umath
> is layered "above" multiarray, but in practice there are circular
> dependencies (see e.g. np.set_numeric_ops).
>

Indeed, I am not arguing about merging umath and multiarray.


> > I would suggest to have a more layered approach, to enable both 'normal'
> > build and static build, without polluting the public namespace too much.
> > This is an approach followed by most large libraries (e.g. MKL), and is
> > fairly flexible.
> >
> > Concretely, we could start by putting more common functionalities (aka
> the
> > 'core' library) into its own static library. The API would be considered
> > private to numpy (no stability guaranteed outside numpy), and every
> exported
> > symbol from that library would be decorated appropriately to avoid
> potential
> > clashes (e.g. '_npy_internal_').
>
> I don't see why we need this multi-layered complexity, though.
>

For several reasons:

 - when you want to cythonize either extension, it is much easier to
separate it as cython for the CPython API, C for the rest.
 - if numpy.core.multiarray.so is built as cython-based .o + a 'large' C
static library, it should become much simpler to support static linking.
 - maybe that's just personal, but I find the whole multiarray + umath
quite beyond manageable in terms of intertwined complexity. You may argue
it is not that big, and we all have different preferences in terms of
organization, but if I look at the binary size of multiarray + umath, it is
quite a bit larger than the median size of the .so files I have in my /usr/lib.

I am also hoping that splitting up numpy.core into separate elements that
communicate through internal APIs would make contributing to numpy
easier.

We could also turn the argument around: assuming it does not make the build
more complex, and that it does help static linking, why not do it ?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Mon, Oct 5, 2015 at 11:26 PM, Nathaniel Smith <n...@pobox.com> wrote:

> Hi all,
>
> For a long time, NumPy has supported two different ways of being compiled:
>
> "Separate compilation" mode: like most C projects, each .c file gets
> compiled to a .o file, and then the .o files get linked together to
> make a shared library. (This has been the default since 1.8.0.)
>
> "One file" mode: first concatenate all the .c files together to make
> one monster .c file, and then compile that .c file to make a shared
> library. (This was the default before 1.8.0.)
>
> Supporting these two different build modes creates a drag on
> development progress; in particular Stefan recently ran into this in
> this experiments with porting parts of the NumPy internals to Cython:
>   https://github.com/numpy/numpy/pull/6408
> (I suspect the particular problem he's running into can be fixed b/c
> so far he only has one .pyx file, but I also suspect that it will be
> impossible to support "one file" mode once we have multiple .pyx
> files.)
>
> There are some rumors that "one file" mode might be needed on some
> obscure platform somewhere, or that it might be necessary for
> statically linking numpy into the CPython executable, but we can't
> continue supporting things forever based only on rumors. If all we can
> get are rumors, then eventually we have to risk breaking things just
> to force anyone who cares to actually show up and explain what they
> need so we can support it properly :-).
>

Assuming one of the rumours is related to some comments I made some time
(years ?) earlier, the context was the ability to hide exported symbols. As
you know, the issue is not to build extensions w/ multiple compilation
units, but sharing functionalities between them without sharing them
outside the extension. I am just reiterating that point so that we all
discuss under the right context :)

I also agree the current situation is not sustainable -- as we discussed
privately before, cythonizing numpy.core is made quite a bit more
complicated by this. I have myself quite a few issues w/ cythonizing the
other parts of umath. I would also like to support static linking better
than we do now (do we know of some static-link users we can contact to
validate our approach ?)

Currently, what we have in numpy core is the following:

numpy.core.multiarray -> compilation units in numpy/core/src/multiarray/ +
statically link npymath
numpy.core.umath -> compilation units in numpy/core/src/umath + statically
link npymath/npysort + some shenanigans to use things in
numpy.core.multiarray

I would suggest to have a more layered approach, to enable both 'normal'
build and static build, without polluting the public namespace too much.
This is an approach followed by most large libraries (e.g. MKL), and is
fairly flexible.

Concretely, we could start by putting more common functionalities (aka the
'core' library) into its own static library. The API would be considered
private to numpy (no stability guaranteed outside numpy), and every
exported symbol from that library would be decorated appropriately to avoid
potential clashes (e.g. '_npy_internal_').

FWIW, it has always been my intention to go toward this when I split up
multiarray/umath into multiple .c files and extracted out npymath.

cheers,
David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 12:07 PM, Antoine Pitrou <solip...@pitrou.net> wrote:

> On Tue, 6 Oct 2015 11:00:30 +0100
> David Cournapeau <courn...@gmail.com> wrote:
> >
> > Assuming one of the rumour is related to some comments I made some time
> > (years ?) earlier, the context was the ability to hide exported symbols.
> As
> > you know, the issue is not to build extensions w/ multiple compilation
> > units, but sharing functionalities between them without sharing them
> > outside the extension.
>
> Can't you use the visibility attribute with gcc for this?
>

We do that already for gcc; I think the question was whether every platform
supported this or not (and whether we should care).


> Other Unix compilers probably provide something similar. The issue
> doesn't exist on Windows by construction.
>
>
> https://gcc.gnu.org/onlinedocs/gcc-5.2.0/gcc/Function-Attributes.html#Function-Attributes
>
> By the way, external packages may reuse the npy_* functions, so I would
> like them not the be hidden :-)
>

The npy_ functions in npymath were designed to be exported. Those would
stay that way.

David

>
> Regards
>
> Antoine.
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 6:07 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Tue, Oct 6, 2015 at 10:00 AM, Antoine Pitrou <solip...@pitrou.net>
> wrote:
> > On Tue, 6 Oct 2015 09:40:43 -0700
> > Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> If you need some npy_* function it'd be much better to let us know
> >> what it is and let us export it in an intentional way, instead of just
> >> relying on whatever stuff we accidentally exposed?
> >
> > Ok, we seem to be using only the complex math functions (npy_cpow and
> > friends, I could make a complete list if required).
>
> And how are you getting at them? Are you just relying the way that on
> ELF systems, if two libraries are loaded into the same address space
> then they automatically get access to each other's symbols, even if
> they aren't linked to each other? What do you do on Windows?
>

It is possible (and documented) to use any of the npy_ symbols from npymath
from outside numpy:
http://docs.scipy.org/doc/numpy-dev/reference/c-api.coremath.html#linking-against-the-core-math-library-in-an-extension

The design is not perfect (I was young and foolish :) ), but it has worked
fairly well and has been used in at least scipy since the 1.4/1.5 days IIRC
(including windows).

David


>
> > And, of course, we would also benefit from the CBLAS functions (or any
> > kind of C wrappers around them) :-)
> > https://github.com/numpy/numpy/issues/6324
>
> This is difficult to do from NumPy itself -- we don't necessarily have
> access to a full BLAS or LAPACK API -- in some configurations we fall
> back on our minimal internal implementations that just have what we
> need.
>
> There was an interesting idea that came up in some discussions here a
> few weeks ago -- we already know that we want to package up BLAS
> inside a Python package that (numpy / scipy / scikit-learn / ...) can
> depend on and assume is there to link against.
>
> Maybe this new package would also be a good place for exposing these
> wrappers?
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 6:14 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Tue, Oct 6, 2015 at 10:10 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> >
> >
> > On Tue, Oct 6, 2015 at 6:07 PM, Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> On Tue, Oct 6, 2015 at 10:00 AM, Antoine Pitrou <solip...@pitrou.net>
> >> wrote:
> >> > On Tue, 6 Oct 2015 09:40:43 -0700
> >> > Nathaniel Smith <n...@pobox.com> wrote:
> >> >>
> >> >> If you need some npy_* function it'd be much better to let us know
> >> >> what it is and let us export it in an intentional way, instead of
> just
> >> >> relying on whatever stuff we accidentally exposed?
> >> >
> >> > Ok, we seem to be using only the complex math functions (npy_cpow and
> >> > friends, I could make a complete list if required).
> >>
> >> And how are you getting at them? Are you just relying the way that on
> >> ELF systems, if two libraries are loaded into the same address space
> >> then they automatically get access to each other's symbols, even if
> >> they aren't linked to each other? What do you do on Windows?
> >
> >
> > It is possible (and documented) to use any of the npy_ symbols from
> npymath
> > from outside numpy:
> >
> http://docs.scipy.org/doc/numpy-dev/reference/c-api.coremath.html#linking-against-the-core-math-library-in-an-extension
> >
> > The design is not perfect (I was young and foolish :) ), but it has
> worked
> > fairly well and has been used in at least scipy since the 1.4/1.5 days
> IIRC
> > (including windows).
>
> Okay, so just to confirm, it looks like this does indeed implement the
> static linking thing I just suggested (so perhaps I am also young and
> foolish ;-)) -- from looking at the output of get_info("npymath"), it
> seems to add -I.../numpy/core/include to the compiler flags, add
> -lnpymath -L.../numpy/core/lib to the linker flags, and then
> .../numpy/core/lib contains only libnpymath.a, so it's static linking.
>

Yes, I was not trying to argue otherwise. If you thought I was, blame it on
my poor English (which sadly does not get better as I get less young...).

My proposal is to extend this technique for *internal* API, but with the
following differences:
 * the declarations are not put in any public header
 * we don't offer any way to link to this library, and name it something
scary enough that people would have to be foolish (young or not) to use it.

David


> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 5:44 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Tue, Oct 6, 2015 at 4:46 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> > The npy_ functions in npymath were designed to be exported. Those would
> stay
> > that way.
>
> If we want to export these then I vote that we either:
> - use the usual API export mechanism, or else
> - provide a static library for people to link to, instead of trying to
> do runtime binding. (I.e. drop it in some known place, and then
> provide some functions for extension modules to find it at build time
> -- similar to how np.get_include() works.)
>

Unless something changed, that's more or less how it works already (npymath
is used in scipy, for example, which was one of the rationales for writing
it in the first place !).

You get the compilation/linking settings through the numpy.distutils
get_info function.
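
For reference, a minimal sketch of what that looks like from a downstream
package's setup.py (the package, extension and file names here are made up
purely for illustration):

from numpy.distutils.misc_util import Configuration, get_info

def configuration(parent_package='', top_path=None):
    # get_info('npymath') returns the include_dirs/library_dirs/libraries
    # entries pointing at numpy/core/include and the static libnpymath.a
    config = Configuration('mypkg', parent_package, top_path)
    config.add_extension('myext',
                         sources=['myext.c'],
                         extra_info=get_info('npymath'))
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)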

David


> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 5:51 PM, David Cournapeau <courn...@gmail.com> wrote:

>
>
> On Tue, Oct 6, 2015 at 5:44 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
>> On Tue, Oct 6, 2015 at 4:46 AM, David Cournapeau <courn...@gmail.com>
>> wrote:
>> > The npy_ functions in npymath were designed to be exported. Those would
>> stay
>> > that way.
>>
>> If we want to export these then I vote that we either:
>> - use the usual API export mechanism, or else
>> - provide a static library for people to link to, instead of trying to
>> do runtime binding. (I.e. drop it in some known place, and then
>> provide some functions for extension modules to find it at build time
>> -- similar to how np.get_include() works.)
>>
>
> Unless something changed, that's more or less how it works already
> (npymath is used in scipy, for example, which was one of the rationale for
> writing it in the first place !).
>
> You access the compilation/linking issues through the numpy distutils
> get_info function.
>

And my suggestion is to use a similar mechanism for multiarray and umath,
so that in the end the exported Python C API is just a thin layer on top of
the underlying static library. That would make cython, and I suspect static
linking, quite a bit easier. The API between the low-level layer and the
Python C API of multiarray/umath would be considered private and outside any
API/ABI stability guarantee.

IOW, it would be an internal change, and should not cause visible changes
to the users, except that some _npy_private_ symbols would be exported (but
you would be crazy to use them, and the prototype declarations would not be
available when you install numpy anyway). Think of those as the internal
driver API/ABI of Linux or similar.

David

>
> David
>
>
>> -n
>>
>> --
>> Nathaniel J. Smith -- http://vorpus.org
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 6:18 PM, David Cournapeau <courn...@gmail.com> wrote:

>
>
> On Tue, Oct 6, 2015 at 6:14 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
>> On Tue, Oct 6, 2015 at 10:10 AM, David Cournapeau <courn...@gmail.com>
>> wrote:
>> >
>> >
>> > On Tue, Oct 6, 2015 at 6:07 PM, Nathaniel Smith <n...@pobox.com> wrote:
>> >>
>> >> On Tue, Oct 6, 2015 at 10:00 AM, Antoine Pitrou <solip...@pitrou.net>
>> >> wrote:
>> >> > On Tue, 6 Oct 2015 09:40:43 -0700
>> >> > Nathaniel Smith <n...@pobox.com> wrote:
>> >> >>
>> >> >> If you need some npy_* function it'd be much better to let us know
>> >> >> what it is and let us export it in an intentional way, instead of
>> just
>> >> >> relying on whatever stuff we accidentally exposed?
>> >> >
>> >> > Ok, we seem to be using only the complex math functions (npy_cpow and
>> >> > friends, I could make a complete list if required).
>> >>
>> >> And how are you getting at them? Are you just relying the way that on
>> >> ELF systems, if two libraries are loaded into the same address space
>> >> then they automatically get access to each other's symbols, even if
>> >> they aren't linked to each other? What do you do on Windows?
>> >
>> >
>> > It is possible (and documented) to use any of the npy_ symbols from
>> npymath
>> > from outside numpy:
>> >
>> http://docs.scipy.org/doc/numpy-dev/reference/c-api.coremath.html#linking-against-the-core-math-library-in-an-extension
>> >
>> > The design is not perfect (I was young and foolish :) ), but it has
>> worked
>> > fairly well and has been used in at least scipy since the 1.4/1.5 days
>> IIRC
>> > (including windows).
>>
>> Okay, so just to confirm, it looks like this does indeed implement the
>> static linking thing I just suggested (so perhaps I am also young and
>> foolish ;-)) -- from looking at the output of get_info("npymath"), it
>> seems to add -I.../numpy/core/include to the compiler flags, add
>> -lnpymath -L.../numpy/core/lib to the linker flags, and then
>> .../numpy/core/lib contains only libnpymath.a, so it's static linking.
>>
>
> Yes, I was not trying to argue otherwise. If you thought I was, blame it
> on my poor English (which sadly does not get better as I get less young...).
>
> My proposal is to extend this technique for *internal* API, but with the
> following differences:
>  * the declarations are not put in any public header
>  * we don't offer any way to link to this library, and name it something
> scary enough that people would have to be foolish (young or not) to use it.
>

I am stupid: we of course do not even ship that internal library; it would
just be linked into multiarray/umath and never installed or made part of
binary packages.

David

>
> David
>
>
>> -n
>>
>> --
>> Nathaniel J. Smith -- http://vorpus.org
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Should we drop support for "one file" compilation mode?

2015-10-06 Thread David Cournapeau
On Tue, Oct 6, 2015 at 5:58 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Tue, Oct 6, 2015 at 9:51 AM, David Cournapeau <courn...@gmail.com>
> wrote:
> >
> > On Tue, Oct 6, 2015 at 5:44 PM, Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> On Tue, Oct 6, 2015 at 4:46 AM, David Cournapeau <courn...@gmail.com>
> >> wrote:
> >> > The npy_ functions in npymath were designed to be exported. Those
> would
> >> > stay
> >> > that way.
> >>
> >> If we want to export these then I vote that we either:
> >> - use the usual API export mechanism, or else
> >> - provide a static library for people to link to, instead of trying to
> >> do runtime binding. (I.e. drop it in some known place, and then
> >> provide some functions for extension modules to find it at build time
> >> -- similar to how np.get_include() works.)
> >
> > Unless something changed, that's more or less how it works already
> (npymath
> > is used in scipy, for example, which was one of the rationale for
> writing it
> > in the first place !).
>
> Okay... in fact multiarray.so right now *does* export tons and tons of
> random junk into the global symbol namespace (on systems like Linux
> that do have a global symbol namespace), so it isn't obvious whether
> people are asking for that to continue :-). I'm just specifically
> saying that we should try to get this back down to the 1 exported
> symbol.
>
> (Try:
>objdump -T $(python -c 'import numpy;
> print(numpy.core.multiarray.__file__)')
> This *should* print 1 line... I currently get ~700. numpy.core.umath
> is similar.)
>
>
I think this overestimates the amount by quite a bit, since you see GLIBC
symbols, etc... I am using nm -Dg --defined-only $(python -c 'import numpy;
print(numpy.core.multiarray.__file__)') instead.

I see around 290 symbols: the npy_ ones from npymath don't bother me, but
the ones from npysort do. We should at least prefix those with npy_ (I don't
think the npysort API has ever been published in our headers like npymath was ?)

David

> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] composition of the steering council (was Re: Governance model request)

2015-09-24 Thread David Cournapeau
od). [1]
>
> More to the point of the actual members:
>
> So to say, I feel the council members have to try to be *directly*
> active and see being active as a necessary *commitment* (i.e. also try
> to travel to meetings). This will always be a difficult judgment of
> course, but there is no help to it. The current definitions imply this.
> And two years seems fine. It is not that short, at least unless someone
> stops contributing very abruptly which I do not think is that usual. I
> will weight in to keep the current times but do not feel very strongly.
>
> About using the commit log to seed, I think there are some old term
> contributers (David Cournapeau maybe?), who never stopped doing quite a
> bit but may not have merge commits. However, I think we can start of
> with what we had, then I would hope Chuck and maybe Ralf can fill in the
> blanks.
>

AFAIK, I still have merge commits. I am actually doing a bit of numpy
development ATM, so I would prefer keeping them, but I won't fight it
either.

David


>
> About the size, I think if we get too many -- if that is possible -- we
> should just change the governance at that time to be not veto based
> anymore. This is something to keep in mind, but probably does not need
> to be formalized.
>
> - Sebastian
>
>
> [1] Sorry to "footnote" this, but I think I am probably rudely repeating
> myself and frankly do **not want this to be discussed**. It is just to
> try to be fully clear where I come from:
> Until SciPy 2015, I could list many people on this list who have shown
> more direct involvement in numpy then Travis since I joined and have no
> affiliation to numpy. If Travis had been new to the community at the
> time, I would be surprised if I would even recognize his name.
> I know this is only half the picture and Travis already mentioned
> another side, but this is what I mostly saw even if it may be a harsh
> and rude assessment.
>
>
> >
> > Chuck
> >
> >
> >
> >
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Compiling NumPy for iOS PyQt app?

2015-09-20 Thread David Morris
I have a PyQt app running on iOS and would like to add NumPy to improve
calculation speed.  I see a few Python interpreters in the app store which
use NumPy, so it must be possible; however, I have not been able to find any
information on the build process for the iOS cross-compile.

We are building Python with all libraries statically linked.  Here is the
environment:

iOS 8.0+
Python 3.4
PyQt 5.5
Qt 5.5
pyqtdeploy

Any help getting NumPy compiled into the iOS app?

Thank you,
David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OK to upload patched 1.9.2 for Python 3.5?

2015-09-14 Thread David Cournapeau
On Mon, Sep 14, 2015 at 9:18 AM, Matthew Brett <matthew.br...@gmail.com>
wrote:

> Hi,
>
> I'm just building numpy 1.9.2 for Python 3.5 (just released).
>
> In order to get the tests to pass on Python 3.5, I need to cherry pick
> commit 7d6aa8c onto the 1.9.2 tag position.
>
> Does anyone object to me uploading a wheel built from this patched
> version to pypi as 1.9.2 for Python 3.5 on OSX?   It would help to get
> the ball rolling for Python 3.5 binary wheels.
>

Why not release this as 1.9.3 ? It does not need to be a full release
(with binaries and all), but having multiple sources for a given tag is
confusing.

David


>
> Cheers,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Cythonizing some of NumPy

2015-09-01 Thread David Cournapeau
On Tue, Sep 1, 2015 at 8:16 AM, Nathaniel Smith <n...@pobox.com> wrote:

> On Sun, Aug 30, 2015 at 2:44 PM, David Cournapeau <courn...@gmail.com>
> wrote:
> > Hi there,
> >
> > Reading Nathaniel summary from the numpy dev meeting, it looks like
> there is
> > a consensus on using cython in numpy for the Python-C interfaces.
> >
> > This has been on my radar for a long time: that was one of my rationale
> for
> > splitting multiarray into multiple "independent" .c files half a decade
> ago.
> > I took the opportunity of EuroScipy sprints to look back into this, but
> > before looking more into it, I'd like to make sure I am not going astray:
> >
> > 1. The transition has to be gradual
>
> Yes, definitely.
>
> > 2. The obvious way I can think of allowing cython in multiarray is
> modifying
> > multiarray such as cython "owns" the PyMODINIT_FUNC and the module
> > PyModuleDef table.
>
> The seems like a plausible place to start.
>
> In the longer run, I think we'll need to figure out a strategy to have
> source code divided over multiple .pyx files (for the same reason we
> want multiple .c files -- it'll just be impossible to work with
> otherwise). And this will be difficult for annoying technical reasons,
> since we definitely do *not* want to increase the API surface exposed
> by multiarray.so, so we will need to compile these multiple .pyx and
> .c files into a single module, and have them talk to each other via
> internal interfaces. But Cython is currently very insistent that every
> .pyx file should be its own extension module, and the interface
> between different files should be via public APIs.
>
> I spent some time poking at this, and I think it's possible but will
> take a few kluges at least initially. IIRC the tricky points I noticed
> are:
>
> - For everything except the top-level .pyx file, we'd need to call the
> generated module initialization functions "by hand", and have a bit of
> utility code to let us access the symbol tables for the resulting
> modules
>
> - We'd need some preprocessor hack (or something?) to prevent the
> non-main module initialization functions from being exposed at the .so
> level (like 'cdef extern from "foo.h"', 'foo.h' re#defines
> PyMODINIT_FUNC to remove the visibility declaration)
>
> - By default 'cdef' functions are name-mangled, which is annoying if
> you want to be able to do direct C calls between different .pyx and .c
> files. You can fix this by adding a 'public' declaration to your cdef
> function. But 'public' also adds dllexport stuff which would need to
> be hacked out as per above.
>
> I think the best strategy for this is to do whatever horrible things
> are necessary to get an initial version working (on a branch, of
> course), and then once that's done assess what changes we want to ask
> the cython folks for to let us eliminate the gross parts.
>

Agreed.

Regarding multiple cython .pyx files and symbol pollution, I think it would
be fine to have an internal API with the required prefix (say `_npy_cpy_`)
in a core library, and control the exported symbols at the .so level. This
is how many large libraries work in practice (e.g. MKL), and is a model well
understood by library users.

I will start the cythonize process without caring about any of that, though:
one large .pyx file, and everything built together by putting everything in
one .so. That will avoid having to fight both cython and distutils at the
same time :)

David

>
> (Insisting on compiling everything into the same .so will probably
> also help at some point in avoiding Cython-Related Binary Size Blowup
> Syndrome (CRBSBS), because the masses of boilerplate could in
> principle be shared between the different files. I think some modern
> linkers are even clever enough to eliminate this kind of duplicate
> code automatically, since C++ suffers from a similar problem.)
>
> > 3. We start using cython for the parts that are mostly menial refcount
> work.
> > Things like functions in calculation.c are obvious candidates.
> >
> > Step 2 should not be disruptive, and does not look like a lot of work:
> there
> > are < 60 methods in the table, and most of them should be fairly
> > straightforward to cythonize. At worse, we could just keep them as is
> > outside cython and just "export" them in cython.
> >
> > Does that sound like an acceptable plan ?
> >
> > If so, I will start working on a PR to work on 2.
>
> Makes sense to me!
>
> -n
>
> --
> Nathaniel J. Smith -- http://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Cythonizing some of NumPy

2015-08-30 Thread David Cournapeau
Hi there,

Reading Nathaniel summary from the numpy dev meeting, it looks like there
is a consensus on using cython in numpy for the Python-C interfaces.

This has been on my radar for a long time: that was one of my rationales for
splitting multiarray into multiple "independent" .c files half a decade
ago. I took the opportunity of the EuroScipy sprints to look back into this,
but before looking more into it, I'd like to make sure I am not going
astray:

1. The transition has to be gradual
2. The obvious way I can think of allowing cython in multiarray is
modifying multiarray such that cython "owns" the PyMODINIT_FUNC and the
module PyModuleDef table.
3. We start using cython for the parts that are mostly menial refcount
work. Things like functions in calculation.c are obvious candidates.

Step 2 should not be disruptive, and does not look like a lot of work:
there are < 60 methods in the table, and most of them should be fairly
straightforward to cythonize. At worst, we could just keep them as is
outside cython and just "export" them in cython.

Does that sound like an acceptable plan ?

If so, I will start working on a PR to work on 2.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Notes from the numpy dev meeting at scipy 2015

2015-08-25 Thread David Cournapeau
Thanks for the good summary Nathaniel.

Regarding dtype machinery, I agree casting is the hardest part. Unless the
code has changed dramatically, this was the main reason why you could not
make most of the dtypes separate from numpy codebase (I tried to move the
datetime dtype out of multiarray into a separate C extension some years
ago). Being able to separate the dtypes from the multiarray module would be
an obvious way to drive the internal API change.

Regarding the use of cython in numpy, was there any discussion about the
compilation/size cost of using cython, and about talking to the cython team
to improve this ? Or was that considered acceptable with current cython for
numpy ? I am convinced that cleanly separating the low-level parts from the
python C API plumbing would be the single most important thing one could do
to make the codebase more amenable to change.

David




On Tue, Aug 25, 2015 at 9:58 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Tue, Aug 25, 2015 at 1:00 PM, Travis Oliphant tra...@continuum.io
 wrote:

 Thanks for the write-up Nathaniel.   There is a lot of great detail and
 interesting ideas here.

 I've am very eager to understand how to help NumPy and the wider
 community move forward however I can (my passions on this have not changed
 since 1999, though what I myself spend time on has changed).

 There are a lot of ways to think about approaching this, though.   It's
 hard to get all the ideas on the table, and it was unfortunate we couldn't
 get everybody who are core NumPy devs together in person to have this
 discussion as there are still a lot of questions unanswered and a lot of
 thought that has gone into other approaches that was not brought up or
 represented in the meeting (how does Numba fit into this, what about
 data-shape, dynd, memory-views and Python type system, etc.).   If NumPy
 becomes just an interface-specification, then why don't we just do that
 *outside* NumPy itself in a way that doesn't jeopardize the stability of
 NumPy today.These are some of the real questions I have.   I will try
 to write up my thoughts in more depth soon, but  I won't be able to respond
 in-depth right now.   I just wanted to comment because Nathaniel said I
 disagree which is only partly true.

 The three most important things for me are 1) let's make sure we have
 representation from as wide of the community as possible (this is really
 hard), 2) let's look around at the broader community and the prior art that
 is happening in this space right now and 3) let's not pretend we are going
 to be able to make all this happen without breaking ABI compatibility.
 Let's just break ABI compatibility with NumPy 2.0 *and* have as much
 fidelity with the API and semantics of current NumPy as possible (though
 there will be some changes necessary long-term).

 I don't think we should intentionally break ABI if we can avoid it, but I
 also don't think we should spend in-ordinate amounts of time trying to
 pretend that we won't break ABI (for at least some people), and most
 importantly we should not pretend *not* to break the ABI when we actually
 do.We did this once before with the roll-out of date-time, and it was
 really un-necessary. When I released NumPy 1.0, there were several
 things that I knew should be fixed very soon (NumPy was never designed to
 not break ABI).Those problems are still there.Now, that we have
 quite a bit better understanding of what NumPy *should* be (there have been
 tremendous strides in understanding and community size over the past 10
 years), let's actually make the infrastructure we think will last for the
 next 20 years (instead of trying to shoe-horn new ideas into a 20-year old
 code-base that wasn't designed for it).

 NumPy is a hard code-base.  It has been since Numeric days in 1995. I
 could be wrong, but my guess is that we will be passed by as a community if
 we don't seize the opportunity to build something better than we can build
 if we are forced to use a 20 year old code-base.

 It is more important to not break people's code and to be clear when a
 re-compile is necessary for dependencies.   Those to me are the most
 important constraints. There are a lot of great ideas that we all have
 about what we want NumPy to be able to do. Some of this are pretty
 transformational (and the more exciting they are, the harder I think they
 are going to be to implement without breaking at least the ABI). There
 is probably some CAP-like theorem around
 Stability-Features-Speed-of-Development (pick 2) when it comes to Open
 Source Software development and making feature-progress with NumPy *is
 going* to create in-stability which concerns me.

 I would like to see a little-bit-of-pain one time with a NumPy 2.0,
 rather than a constant pain because of constant churn over many years
 approach that Nathaniel seems to advocate.   To me NumPy 2.0 is an
 ABI-breaking release that is as API-compatible as possible and whose

Re: [Numpy-discussion] Proposal to remove the Bento build.

2015-08-19 Thread David Cournapeau
On Wed, Aug 19, 2015 at 1:22 AM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Aug 18, 2015 at 4:15 PM, David Cournapeau courn...@gmail.com
 wrote:
  If everybody wants to remove bento, we should remove it.

 FWIW, I don't really have an opinion either way on bento versus
 distutils, I just feel that we shouldn't maintain two build systems
 unless we're actively planning to get rid of one of them, and for
 several years now we haven't really been learning anything by keeping
 the bento build working, nor has there been any movement towards
 switching to bento as the one-and-only build system, or even a clear
 consensus that this would be a good thing. (Obviously distutils and
 numpy.distutils are junk, so that's a point in bento's favor, but it
 isn't *totally* cut and dried -- we know numpy.distutils works and we
 have to maintain it regardless for backcompat, while bento doesn't
 seem to have any activity upstream or any other users...).

 So I'd be totally in favor of adding bento back later if/when such a
 plan materializes; I just don't think it makes sense to keep
 continuously investing effort into it just in case such a plan
 materializes later.

  Regarding single file builds, why would it help for static builds ? I
  understand it would make things slightly easier to have one .o per
  extension, but it does not change the fundamental process as the exported
  symbols are the same in the end ?

 IIUC they aren't: with the multi-file build we control exported
 symbols using __attribute__((visibility(hidden)) or equivalent,
 which hides symbols from the shared object export table, but not from
 other translation units that are statically linked. So if you want to
 statically link cpython and numpy, you need some other way to let
 numpy .o files see each others's symbols without exposing them to
 cpython's .o files,


It is less of a problem than with shared linking, because you can detect the
conflicts at link time (instead of load time).


and the single-file build provides one mechanism
 to do that: make the numpy symbols 'static' and then combine them all
 into a single translation unit.

 I would love to be wrong about this though. The single file build is
 pretty klugey :-).


I know -- it took me a while to split the files to move away from the
single-file build in the first place :)

David



 -n

 --
 Nathaniel J. Smith -- http://vorpus.org
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposal to remove the Bento build.

2015-08-18 Thread David Cournapeau
If everybody wants to remove bento, we should remove it.

Regarding single file builds, why would it help for static builds ? I
understand it would make things slightly easier to have one .o per
extension, but it does not change the fundamental process as the exported
symbols are the same in the end ?

David

On Tue, Aug 18, 2015 at 9:07 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Hi All,
  .
 I'm bringing up this topic again on account of the discussion at
 https://github.com/numpy/numpy/pull/6199. The proposal is to stop
 (trying) to support the Bento build system for Numpy and remove it. Votes
 and discussion welcome.

 Along the same lines, Pauli has suggested removing the single file builds,
 but Nathaniel has pointed out that it may be the only way to produce static
 python + numpy builds. If anyone does that or has more information about
 it, please comment.

 Chuck

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread David Cournapeau
Which command exactly did you run to get that error ? Normally, the code
in msvc9compiler should not be called if you call setup.py with the
mingw compiler, as distutils expects.

On Fri, Aug 7, 2015 at 12:19 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Thu, Aug 6, 2015 at 5:11 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Thu, Aug 6, 2015 at 4:22 PM, David Cournapeau courn...@gmail.com
 wrote:

 Sorry if that's obvious, but do you have Visual Studio 2010 installed ?

 On Thu, Aug 6, 2015 at 11:17 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Anyone know how to fix this? I've run into it before and never got it
 figured out.

 [192.168.121.189:22] out:   File
 C:\Python34\lib\distutils\msvc9compiler.py, line 259, in query_vcvarsall
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: raise DistutilsPlatformError(Unable to
 find vcvarsall.bat)
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: distutils.errors.DistutilsPlatformError:
 Unable to find vcvarsall.bat

 Chuck



 I'm running numpy-vendor, which is running wine. I think it is all mingw
 with a few installed dll's. The error is coming from the Python distutils
 as part of `has_cblas`.


 It's not impossible that we have changed the build somewhere along the
 line.

 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-06 Thread David Cournapeau
Sorry if that's obvious, but do you have Visual Studio 2010 installed ?

On Thu, Aug 6, 2015 at 11:17 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Anyone know how to fix this? I've run into it before and never got it
 figured out.

 [192.168.121.189:22] out:   File
 C:\Python34\lib\distutils\msvc9compiler.py, line 259, in query_vcvarsall
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: raise DistutilsPlatformError(Unable to
 find vcvarsall.bat)
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: distutils.errors.DistutilsPlatformError: Unable
 to find vcvarsall.bat

 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread David Cournapeau
IMO, this really raises the question of whether we still want to use
sourceforge at all. At this point I just don't trust the service at all
anymore.

Could we use some resources (e.g. rackspace ?) to host those files ? Do we
know how much traffic they get, so we can estimate the cost ?
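
For reference, a stdlib-only way to compute the checksums Julian mentions
below, which works the same on every platform (a sketch: the default
algorithm and the file name are placeholders -- use whatever the signed
README.txt actually lists):

import hashlib
import sys

def file_digest(path, algorithm='md5', chunk_size=1 << 20):
    # stream the file so large installers do not need to fit in memory
    h = hashlib.new(algorithm)
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

if __name__ == '__main__':
    # e.g. python checkhash.py some-numpy-installer.exe sha256
    print(file_digest(sys.argv[1], *sys.argv[2:3]))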

David

On Thu, May 28, 2015 at 9:46 PM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

 hi,
 It has been reported that sourceforge has taken over the gimp
 unofficial windows downloader page and temporarily bundled the
 installer with unauthorized adware:
 https://plus.google.com/+gimp/posts/cxhB1PScFpe

 As NumPy is also distributing windows installers via sourceforge I
 recommend that when you download the files you verify the downloads
 via the checksums in the README.txt before using them. The README.txt
 is clearsigned with my gpg key so it should be safe from tampering.
 Unfortunately, as I don't use windows, I cannot give any advice on how
 to do the verification on these platforms. Maybe someone familiar with
 available tools can chime in.

 I have checked the numpy downloads and they still match what I
 uploaded, but as sourceforge does redirect based on OS and geolocation
 this may not mean much.

 Cheers,
 Julian Taylor
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread David Cournapeau
On Fri, May 29, 2015 at 2:00 AM, Andrew Collette andrew.colle...@gmail.com
wrote:

  Here is their lame excuse:
 
 
 https://sourceforge.net/blog/gimp-win-project-wasnt-hijacked-just-abandoned/
 
  It probably means this:
 
  If NumPy installers are moved away from Sourceforge, they will set up a
  mirror and load the mirrored installers with all sorts of crapware. It is
  some sort of racket the mob couldn't do better.

 I noticed that like most BSD-licensed software, NumPy's license
 includes this clause:

 Neither the name of the NumPy Developers nor the names of any
 contributors may be used to endorse or promote products derived from
 this software without specific prior written permission.

 There's an argument to be made that SF isn't legally permitted to
 distribute poisoned installers under the name NumPy without
 permission.  I recall a similar dust-up a while ago about Standard
 Markdown using the name Markdown; the original author (John Gruber)
 took action and got them to change the name.

 In any case I've always been surprised that NumPy is distributed
 through SourceForge, which has been sketchy for years now. Could it
 simply be hosted on PyPI?


They don't accept arbitrary binaries like SF does, and some of our
installer formats can't be uploaded there.

David



 Andrew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.diag(np.dot(A, B))

2015-05-22 Thread David Cournapeau
On Fri, May 22, 2015 at 5:39 PM, Mathieu Blondel math...@mblondel.org
wrote:

 Hi,

 I often need to compute the equivalent of

 np.diag(np.dot(A, B)).

 Computing np.dot(A, B) is highly inefficient if you only need the diagonal
 entries. Two more efficient ways of computing the same thing are

 np.sum(A * B.T, axis=1)

 and

 np.einsum("ij,ji->i", A, B).

 The first can allocate quite a lot of temporary memory.
 The second can be quite cryptic for someone not familiar with einsum.
 I assume that einsum does not compute np.dot(A, B), but I haven't verified.

 Since this is quite a recurrent pattern, I was wondering if it would be
 worth adding a dedicated function to NumPy and SciPy's sparse module. A
 possible name would be diagdot. The best performance would be obtained
 when A is C-style and B fortran-style.


Does your implementation use BLAS, or is it just a wrapper around einsum ?

David
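
For reference, a small script checking that the three formulations quoted
above agree (the shapes are purely illustrative):

import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(200, 300)                      # C-contiguous
B = np.asfortranarray(rng.rand(300, 200))   # Fortran-contiguous

d1 = np.diag(np.dot(A, B))         # reference: computes the full product
d2 = np.sum(A * B.T, axis=1)       # avoids the product, allocates an A-sized temporary
d3 = np.einsum('ij,ji->i', A, B)   # avoids both the product and the temporary

assert np.allclose(d1, d2)
assert np.allclose(d1, d3)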
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NPY_SEPARATE_COMPILATION and RELAXED_STRIDES_CHECKING

2015-04-05 Thread David Cournapeau
On Sat, Apr 4, 2015 at 4:25 AM, Nathaniel Smith n...@pobox.com wrote:

 IIRC there allegedly exist platforms where separate compilation doesn't
 work right? I'm happy to get rid of it if no one speaks up to defend such
 platforms, though, we can always add it back later. One case was for
 statically linking numpy into the interpreter, but I'm skeptical about how
 much we should care about that case, since that's already a hacky kind of
 process and there are simple alternative hacks that could be used to strip
 the offending symbols.

 Depends on how much it lets us simplify things, I guess. Would we get to
 remove all the no-export attributes on everything?


No, the whole point of the no-export is to support the separate compilation
use case.

David


 On Apr 3, 2015 8:01 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Fri, Apr 3, 2015 at 9:00 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Hi All,

 Just to raise the question if these two options should be removed at
 some point? The current default value for both is 0, so we have separate
 compilation and relaxed strides checking by default.


 Oops, default value is 1, not 0.

 Chuck

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] IDE's for numpy development?

2015-04-02 Thread David Cournapeau
On Wed, Apr 1, 2015 at 7:43 PM, Charles R Harris charlesr.har...@gmail.com
wrote:



 On Wed, Apr 1, 2015 at 11:55 AM, Sturla Molden sturla.mol...@gmail.com
 wrote:

 Charles R Harris charlesr.har...@gmail.com wrote:

  I'd be
  interested in information from anyone with experience in using such an
 IDE
  and ideas of how Numpy might make using some of the common IDEs easier.
 
  Thoughts?

 I guess we could include project files for Visual Studio (and perhaps
 Eclipse?), like Python does. But then we would need to make sure the
 different build systems are kept in sync, and it will be a PITA for those
 who do not use Windows and Visual Studio. It is already bad enough with
 Distutils and Bento. I, for one, would really prefer if there only was one
 build process to care about. One should also note that a Visual Studio
 project is the only supported build process for Python on Windows. So they
 are not using this in addition to something else.

 Eclipse is better than Visual Studio for mixed Python and C development.
 It
 is also cross-platform.

 cmake needs to be mentioned too. It is not fully integrated with Visual
 Studio, but better than having multiple build processes.


 Mark chose cmake for DyND because it supported Visual Studio projects.
 OTOH, he said it was a PITA to program.


I concur on that: for the 350+ packages we support at Enthought, cmake has
been a higher pain point than any other build tool (including custom ones).
And we only support mainstream platforms.

But the real question for me is: what does Visual Studio support actually
mean? Does it really mean solution files?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] PyCon?

2015-02-17 Thread David Cournapeau
I'll be there as well, though I am still figuring out when exactly.

On Wed, Feb 18, 2015 at 1:07 AM, Nathaniel Smith n...@pobox.com wrote:

 Hi all,

 It looks like I'll be at PyCon this year. Anyone else? Any interest in
 organizing a numpy sprint?

 -n

 --
 Nathaniel J. Smith -- http://vorpus.org
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] advanced indexing question

2015-02-04 Thread David Kershaw
Sebastian Berg sebastian at sipsolutions.net writes:
 
 Python has a mechanism both for getting an item and for setting an item.
 The latter will end up doing this (python already does this for us):
 x[:,d,:,d] = x[:,d,:,d] + 1
 so there is an item assignment going on (__setitem__ not __getitem__)
 
 - Sebastian
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion at scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

Thanks for the prompt help, Sebastian,

So can I use any legitimate ndarray indexing selection object, obj, in
 x.__setitem__(obj,y)
and as long as y's shape can be broadcast to x[obj]'s shape it will always 
set the appropriate elements of x to the corresponding elements of y?
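
(A quick check of exactly that, reusing the array from the original post --
just an illustration:)

import numpy as np

x = np.arange(36).reshape(3, 2, 3, 2)
d = range(2)

# Advanced indexing on the *left-hand side* goes through __setitem__, so it
# writes into x itself -- no copy is involved:
x[:, d, :, d] = 0
print(x[:, 0, :, 0])              # all zeros now

# The right-hand side only has to broadcast to x[obj].shape:
print(x[:, d, :, d].shape)        # (2, 3, 3) with these index arrays
x[:, d, :, d] = np.arange(3)      # (3,) broadcasts to (2, 3, 3)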



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] advanced indexing question

2015-02-03 Thread David Kershaw
The numpy reference manual, array objects/indexing/advanced indexing, 
says: 
Advanced indexing always returns a copy of the data (contrast with 
basic slicing that returns a view).

If I run the following code:
 import numpy as np
 d=range(2)
 x=np.arange(36).reshape(3,2,3,2)
 y=x[:,d,:,d]
 y+=1
 print x
 x[:,d,:,d]+=1
 print x
then the first print x shows that x is unchanged, as it should be since y 
was a copy, not a view, but the second print x shows that all the elements 
of x whose index along axis 1 equals the index along axis 3 are now 1 bigger. Why did the left side of
 x[:,d,:,d]+=1
act like a view and not a copy?

Thanks,
David

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing FloatingPointError for numpy on cygwin64

2015-01-31 Thread David Cournapeau
Hi Sebastian,

I think you may be one of the first people to report using cygwin 64. I
think it makes sense to support that platform as it is becoming more common.

Could you report the value of `sys.platform` on cygwin64 ? The first place
I would look for cygwin-related FPU issues is there:
https://github.com/numpy/numpy/blob/master/numpy/core/setup.py#L638
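
Something along these lines (just an illustration) would give both pieces of
information at once:

import sys
import numpy as np

print(sys.platform)          # the value needed to adjust the setup.py checks
with np.errstate(divide='raise'):
    try:
        np.float64(1.0) / np.float64(0.0)
        print("no exception -- divide-by-zero is not being trapped")
    except FloatingPointError:
        print("FloatingPointError raised as expected")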

David

On Sat, Jan 31, 2015 at 9:53 PM, Sebastien Gouezel 
sebastien.goue...@univ-rennes1.fr wrote:

 Dear all,

 I tried to use numpy (version 1.9.1, installed by `pip install numpy`)
 on cygwin64. I encountered the following weird bug:

  >>> import numpy
  >>> with numpy.errstate(all='raise'):
  ...     print 1/numpy.float64(0.0)
  inf

 I was expecting a FloatingPointError, but it didn't show up. Curiously,
 with different numerical types (all intxx, or float128), I indeed get
 the FloatingPointError.

 Same thing with the most recent git version, or with 1.7.1 provided as a
 precompiled package by cygwin. This behavior does not happen on cygwin32
 (I always get the FloatingPointError there).

 I wonder if there is something weird with my config, or if this is a
 genuine reproducible bug. If so, where should I start looking if I want
 to fix it? (I don't know anything about numpy's code)

 Sebastien

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] npymath on Windows

2014-12-28 Thread David Cournapeau
On Sun, Dec 28, 2014 at 1:59 AM, Matthew Brett matthew.br...@gmail.com
wrote:

 Hi,

 Sorry for this ignorant email, but we got confused trying to use
 'libnpymath.a' from the mingw builds of numpy:

 We were trying to link against the mingw numpy 'libnpymath.a' using
 Visual Studio C, but this give undefined symbols from 'libnpymath.a'
 like this:


This is not really supported. You should avoid mixing compilers when
building C extensions that use the numpy C API. Either all mingw, or all MSVC.
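
(For reference, when staying within one toolchain, the usual way to pick up
npymath is to let numpy.distutils supply the flags -- a minimal sketch, file
names are made up:)

# setup.py fragment: get_info('npymath') returns the include_dirs,
# library_dirs and libraries needed to link against the npymath static
# library shipped with numpy.
from numpy.distutils.core import setup, Extension
from numpy.distutils.misc_util import get_info

info = get_info('npymath')
ext = Extension('uses_npymath', sources=['uses_npymath.c'], **info)
setup(name='uses_npymath', ext_modules=[ext])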

David



 npymath.lib(npy_math.o) : error LNK2019: unresolved external symbol
 _atanf referenced in function _npy_atanf
 npymath.lib(npy_math.o) : error LNK2019: unresolved external symbol
 _acosf referenced in function _npy_acosf
 npymath.lib(npy_math.o) : error LNK2019: unresolved external symbol
 _asinf referenced in function _npy_asinf

 (see :
 http://nipy.bic.berkeley.edu/builders/dipy-bdist32-33/builds/73/steps/shell_6/logs/stdio
 )

 npymath.lib from Christophe Gohlke's (MSVC compiled) numpies does not
 give such an error.  Sure enough, 'npymath.lib' shows these lines from
 `dumpbin /all npymath.lib`:

   0281  REL32  4F  asinf
   0291  REL32  51  acosf
   02A1  REL32  53  atanf

 whereas `dumpbin /all libnpymath.a` shows these kinds of lines:

  08E5  REL32  86  _asinf
  08F5  REL32  85  _acosf
  0905  REL32  84  _atanf

 As far as I can see, 'acosf' is defined in the msvc runtime library.
 I guess that '_acosf' is defined in some mingw runtime library?   Is
 there any way of making a npymath library that will pick up the msvc
 math and so may work with both msvc and mingw?

 Sorry again if that's a dumb question,

 Matthew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-Dev] ANN: Scipy 0.14.1 release candidate 1

2014-12-19 Thread David Cournapeau
I built that rc on top of numpy 1.8.1 and MKL, and it worked on every
platform we support @ Enthought.

I saw a few test failures on linux and windows 64 bits, but those were
there before or are precision issues.

I also tested when run on top of numpy 1.9.1 (but still built against
1.8.1), w/ similar results.

Thanks for all the hard work,
David

On Sun, Dec 14, 2014 at 10:29 PM, Pauli Virtanen p...@iki.fi wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Dear all,

 We have finished preparing the Scipy 0.14.1 release candidate 1.
 If no regressions turn up, the final release is planned within the
 following weeks.

 The 0.14.1 release will be a bugfix-only release, addressing the
 following issues:

 - - gh-3630 NetCDF reading results in a segfault
 - - gh-3631 SuperLU object not working as expected for complex matrices
 - - gh-3733 Segfault from map_coordinates
 - - gh-3780 Segfault when using CSR/CSC matrix and uint32/uint64
 - - gh-3781 Fix omitted types in sparsetools typemaps
 - - gh-3802 0.14.0 API breakage: _gen generators are missing from
 scipy.stats.distributions API
 - - gh-3805 ndimage test failures with numpy 1.10
 - - gh-3812 == sometimes wrong on csr_matrix
 - - gh-3853 Many scipy.sparse test errors/failures with numpy 1.9.0b2
 - - gh-4084 Fix exception declarations for Cython 0.21.1 compatibility
 - - gh-4093 Avoid a memory error in splev(x, tck, der=k)
 - - gh-4104 Workaround SGEMV segfault in Accelerate (maintenance 0.14.x)
 - - gh-4143 Fix ndimage functions for large data
 - - gh-4149 Bug in expm for integer arrays
 - - gh-4154 Ensure that the 'size' argument of PIL's 'resize' method is
 a tuple
 - - gh-4163 ZeroDivisionError in scipy.sparse.linalg.lsqr
 - - gh-4164 Remove use of deprecated numpy API in lib/lapack/ f2py wrapper
 - - gh-4180 pil resize support tuple fix
 - - gh-4168 Address arpack test failures on windows 32 bits with numpy
 1.9.1
 - - gh-4218 make ndimage interpolation compatible with numpy relaxed
 strides
 - - gh-4225 off-by-one error in PPoly shape checks
 - - gh-4248 fix issue with incorrect use of closure for slsqp

 Source tarballs and binaries are available at
 https://sourceforge.net/projects/scipy/files/scipy/0.14.1rc1/

 Best regards,
 Pauli Virtanen
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iEYEARECAAYFAlSOD0YACgkQ6BQxb7O0pWDO5ACfccLqMvZWfkHqSzDCkMSoRKAU
 n7cAni6XhWJRy7oJ757rlGeIi0e34HTn
 =9bB/
 -END PGP SIGNATURE-

 ___
 SciPy-Dev mailing list
 scipy-...@scipy.org
 http://mail.scipy.org/mailman/listinfo/scipy-dev

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] GDB macros for numpy

2014-11-30 Thread David Cournapeau
Hi there,

I remember having seen some numpy-aware gdb macros at some point, but
cannot find any reference. Does anyone know of any ?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] GDB macros for numpy

2014-11-30 Thread David Cournapeau
On Sun, Nov 30, 2014 at 5:45 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Sun, Nov 30, 2014 at 4:54 AM, David Cournapeau courn...@gmail.com
 wrote:

 Hi there,

 I remember having seen some numpy-aware gdb macros at some point, but
 cannot find any reference. Does anyone know of any ?


 What would numpy aware gdb macros be? Never heard of them.


Python itself has some gdb macros (
https://wiki.python.org/moin/DebuggingWithGdb).

You can do things like:

# p is a PyObject * pointer to the list [1, 2]
 pyo p
[1, 2]

The idea would be to have a nice representation for arrays, dtypes, etc...
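
A rough sketch of what one of those could look like using gdb's Python
scripting (the command name is hypothetical; it assumes numpy built with debug
symbols and the PyArrayObject_fields layout from ndarraytypes.h):

# numpy_gdb.py -- load inside gdb with: source numpy_gdb.py
import gdb

class PrintNdarray(gdb.Command):
    """pnda EXPR: print ndim and shape of a PyArrayObject* expression."""

    def __init__(self):
        super(PrintNdarray, self).__init__("pnda", gdb.COMMAND_DATA)

    def invoke(self, arg, from_tty):
        val = gdb.parse_and_eval(arg)
        fields_t = gdb.lookup_type("PyArrayObject_fields").pointer()
        arr = val.cast(fields_t).dereference()
        nd = int(arr["nd"])
        shape = tuple(int(arr["dimensions"][i]) for i in range(nd))
        gdb.write("ndim=%d shape=%r\n" % (nd, shape))

PrintNdarray()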

David


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Setting up a newcomers label on the issue tracker ?

2014-11-26 Thread David Cournapeau
Oops, I missed it. Will use that one then.

On Wed, Nov 26, 2014 at 9:01 AM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

 On 11/26/2014 09:44 AM, David Cournapeau wrote:
  Hi,
 
  Would anybody mind if I create a label newcomers on GH, and start
  labelling simple issues ?
 
  This is in anticipation to the bloomberg lab event in London this WE. I
  will try to give a hand to people interested in numpy/scipy,
 
  David
 

 we have the easy-fix tag which we use for this purpose, though we have
 not been applying it very systematically, the bug list could probably do
 with a new sweep for these issues.

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Scipy 0.15.0 beta 1 release

2014-11-25 Thread David Cournapeau
Shall we consider https://github.com/scipy/scipy/issues/4168 to be a
blocker (the issue arises on scipy master as well as 0.14.1) ?

On Sun, Nov 23, 2014 at 11:13 PM, Pauli Virtanen p...@iki.fi wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Dear all,

 We have finally finished preparing the Scipy 0.15.0 beta 1 release.
 Please try it and report any issues on the scipy-dev mailing list,
 and/or on Github.

 If no surprises turn up, the final release is planned on Dec 20 in
 three weeks.

 Source tarballs and full release notes are available at
 https://sourceforge.net/projects/scipy/files/SciPy/0.15.0b1/
 Binary installers should also be up soon.

 Best regards,
 Pauli Virtanen


 - 

 SciPy 0.15.0 is the culmination of 6 months of hard work. It contains
 several new features, numerous bug-fixes, improved test coverage and
 better documentation.  There have been a number of deprecations and
 API changes in this release, which are documented below.  All users
 are encouraged to upgrade to this release, as there are a large number
 of bug-fixes and optimizations.  Moreover, our development attention
 will now shift to bug-fix releases on the 0.16.x branch, and on adding
 new features on the master branch.

 This release requires Python 2.6, 2.7 or 3.2-3.3 and NumPy 1.5.1 or
 greater.


 New features
 

 Linear Programming Interface
 - - 

 The new function ``scipy.optimize.linprog`` provides a generic
 linear programming interface similar to the way ``scipy.optimize.minimize``
 provides a generic interface to nonlinear programming optimizers.
 Currently the only method supported is *simplex* which provides
 a two-phase, dense-matrix-based simplex algorithm. Callback
 functions are supported, allowing the user to monitor the progress
 of the algorithm.

 Differential_evolution, a global optimizer
 - - --

 A new ``differential_evolution`` function is available in the
 ``scipy.optimize``
 module.  Differential Evolution is an algorithm used for finding the
 global
 minimum of multivariate functions. It is stochastic in nature (does
 not use
 gradient methods), and can search large areas of candidate space, but
 often
 requires larger numbers of function evaluations than conventional gradient
 based techniques.

 ``scipy.signal`` improvements
 - - -

 The function ``max_len_seq`` was added, which computes a Maximum
 Length Sequence (MLS) signal.

 ``scipy.integrate`` improvements
 - - 

 It is now possible to use ``scipy.integrate`` routines to integrate
 multivariate ctypes functions, thus avoiding callbacks to Python and
 providing better performance.

 ``scipy.linalg`` improvements
 - - -

 Add function ``orthogonal_procrustes`` for solving the procrustes
 linear algebra problem.

 ``scipy.sparse`` improvements
 - - -

 ``scipy.sparse.linalg.svds`` can now take a ``LinearOperator`` as its
 main input.

 ``scipy.special`` improvements
 - - --

 Values of ellipsoidal harmonic (i.e. Lame) functions and associated
 normalization constants can be now computed using ``ellip_harm``,
 ``ellip_harm_2``, and ``ellip_normal``.

 New convenience functions ``entr``, ``rel_entr`` ``kl_div``,
 ``huber``, and ``pseudo_huber`` were added.

 ``scipy.sparse.csgraph`` improvements
 - - -

 Routines ``reverse_cuthill_mckee`` and ``maximum_bipartite_matching``
 for computing reorderings of sparse graphs were added.

 ``scipy.stats`` improvements
 - - 

 Added a Dirichlet distribution as multivariate distribution.

 The new function ``scipy.stats.median_test`` computes Mood's median test.

 The new function ``scipy.stats.combine_pvalues`` implements Fisher's
 and Stouffer's methods for combining p-values.

 ``scipy.stats.describe`` returns a namedtuple rather than a tuple,
 allowing
 users to access results by index or by name.

 Deprecated features
 ===

 The ``scipy.weave`` module is deprecated.  It was the only module
 never ported
 to Python 3.x, and is not recommended to be used for new code - use Cython
 instead.  In order to support existing code, ``scipy.weave`` has been
 packaged
 separately: `https://github.com/scipy/weave`_.  It is a pure Python
 package, and
 can easily be installed with ``pip install weave``.

 ``scipy.special.bessel_diff_formula`` is deprecated.  It is a private
 function,
 and therefore will be removed from the public API in a following release.


 Backwards incompatible changes
 ==

 scipy.ndimage
 - - -

 The functions ``scipy.ndimage.minimum_positions``,
 ``scipy.ndimage.maximum_positions`` and ``scipy.ndimage.extrema`` return
 positions as ints instead of floats.

 scipy.integrate
 - - ---

 The format 

Re: [Numpy-discussion] Scipy 0.15.0 beta 1 release

2014-11-25 Thread David Cournapeau
On Tue, Nov 25, 2014 at 6:10 PM, Sturla Molden sturla.mol...@gmail.com
wrote:

 David Cournapeau courn...@gmail.com wrote:
  Shall we consider https://github.com/scipy/scipy/issues/4168 to be a
  blocker (the issue arises on scipy master as well as 0.14.1)?
 

 It is really bad, but does anyone know what is really going on?


Yes, it is in the bug report.

David



 Which changes to NumPy set this off?

 Sturla

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy 1.9.1, zeros and alignement

2014-11-18 Thread David Cournapeau
Hi,

I have not followed closely the changes that happen in 1.9.1, but was
surprised by the following:

x = np.zeros(12, "d")
assert x.flags.aligned # fails

This is running numpy 1.9.1 built on windows with VS 2008. Is it expected
that zeros may return a non-aligned array ?
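
(For reference, a quick way to see what address zeros actually handed back,
next to what the flag claims:)

import numpy as np

x = np.zeros(12, "d")
addr = x.__array_interface__["data"][0]
print(x.flags.aligned)           # what numpy reports
print(addr % 8, addr % 16)       # actual alignment of the buffer start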

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.9.1, zeros and alignement

2014-11-18 Thread David Cournapeau
It is on windows 32 bits, but I would need to make this work for complex
(pair of double) as well.

Is this a bug (I assumed array creation methods would always create
aligned arrays for their type)? Quite a bit of code out there seems to
assume this (scipy itself does, for example).

(the context is > 100 test failures on scipy 0.14.x on top of numpy 1.9.1,
because f2py intent(inout) fails on work arrays created by zeros; this is a
windows-32 only failure).

David

On Tue, Nov 18, 2014 at 6:26 PM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

 On 18.11.2014 19:20, David Cournapeau wrote:
  Hi,
 
  I have not followed closely the changes that happen in 1.9.1, but was
  surprised by the following:
 
  x = np.zeros(12, "d")
  assert x.flags.aligned # fails
 
  This is running numpy 1.9.1 built on windows with VS 2008. Is it
  expected that zeros may return a non-aligned array ?
 

 what is the real alignment of the array? Are you on 32 bit or 64 bit?
 What is the alignment of doubles in windows (linux its 4 byte on 32 bit
 8 byte on 64 bit (% special compiler flags)?
 print x.__array_interface__["data"]

 there are problems with complex types but doubles should be aligned even
 on 32 bit.
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.9.1, zeros and alignement

2014-11-18 Thread David Cournapeau
Additional point: it seems to always return aligned data on 1.8.1 (same
platform/compiler/everything).

On Tue, Nov 18, 2014 at 6:35 PM, David Cournapeau courn...@gmail.com
wrote:

 It is on windows 32 bits, but I would need to make this work for complex
 (pair of double) as well.

 Is this a bug (I  assumed array creation methods would always create
 aligned arrays for their type) ? Seems like quite a bit of code out there
 would assume this (scipy itself does for example).

 (the context is  100 test failures on scipy 0.14.x on top of numpy 1.9.,
 because f2py intent(inout) fails on work arrays created by zeros, this is a
 windows-32 only failure).

 David

 On Tue, Nov 18, 2014 at 6:26 PM, Julian Taylor 
 jtaylor.deb...@googlemail.com wrote:

 On 18.11.2014 19:20, David Cournapeau wrote:
  Hi,
 
  I have not followed closely the changes that happen in 1.9.1, but was
  surprised by the following:
 
   x = np.zeros(12, "d")
  assert x.flags.aligned # fails
 
  This is running numpy 1.9.1 built on windows with VS 2008. Is it
  expected that zeros may return a non-aligned array ?
 

 what is the real alignment of the array? Are you on 32 bit or 64 bit?
 What is the alignment of doubles in windows (linux its 4 byte on 32 bit
 8 byte on 64 bit (% special compiler flags)?
  print x.__array_interface__["data"]

 there are problems with complex types but doubles should be aligned even
 on 32 bit.
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.9.1, zeros and alignement

2014-11-18 Thread David Cournapeau
On Tue, Nov 18, 2014 at 6:40 PM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

  1.9 lies about alignment; it doesn't actually check alignment for new arrays.


When I do the following on 1.8.1 with win 32 bits:

x = np.zeros(12, "D")
print x.flags.aligned == (x.__array_interface__["data"][0] % 16 == 0)  # always true
print x.flags.aligned  # always true

but on 1.9.1:

x = np.zeros(12, "D")
print x.flags.aligned == (x.__array_interface__["data"][0] % 16 == 0)  # always true
print x.flags.aligned  # not always true

I wonder why numpy 1.8.1 always returned 16-byte aligned arrays in this
case (unlikely to be a coincidence, as I tried quite a few times with
different sizes).



 is the array aligned?

 On 18.11.2014 19:37, David Cournapeau wrote:
  Additional point: it seems to always return aligned data on 1.8.1 (same
  platform/compiler/everything).
 
  On Tue, Nov 18, 2014 at 6:35 PM, David Cournapeau courn...@gmail.com wrote:
 
  It is on windows 32 bits, but I would need to make this work for
  complex (pair of double) as well.
 
  Is this a bug (I  assumed array creation methods would always create
  aligned arrays for their type) ? Seems like quite a bit of code out
  there would assume this (scipy itself does for example).
 
  (the context is  100 test failures on scipy 0.14.x on top of numpy
  1.9., because f2py intent(inout) fails on work arrays created by
  zeros, this is a windows-32 only failure).
 
  David
 
  On Tue, Nov 18, 2014 at 6:26 PM, Julian Taylor
  jtaylor.deb...@googlemail.com wrote:
 
  On 18.11.2014 19:20, David Cournapeau wrote:
   Hi,
  
   I have not followed closely the changes that happen in 1.9.1,
  but was
   surprised by the following:
  
    x = np.zeros(12, "d")
   assert x.flags.aligned # fails
  
   This is running numpy 1.9.1 built on windows with VS 2008. Is
 it
   expected that zeros may return a non-aligned array ?
  
 
  what is the real alignment of the array? Are you on 32 bit or 64
  bit?
  What is the alignment of doubles in windows (linux its 4 byte on
  32 bit
  8 byte on 64 bit (% special compiler flags)?
   print x.__array_interface__["data"]
 
  there are problems with complex types but doubles should be
  aligned even
  on 32 bit.
  ___
  NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-29 Thread David Cournapeau
On Wed, Oct 29, 2014 at 9:48 AM, Eelco Hoogendoorn 
hoogendoorn.ee...@gmail.com wrote:

 My point isn't about speed; its about the scope of numpy. typing
 np.fft.fft isn't more or less convenient than using some other symbol from
 the scientific python stack.

 Numerical algorithms should be part of the stack, for sure; but should
 they be part of numpy? I think its cleaner to have them in a separate
 package. Id rather have us discuss how to facilitate the integration of as
 many possible fft libraries with numpy behind a maximally uniform
 interface, rather than having us debate which fft library is 'best'.


I would agree if it were not already there, but removing it (like
Blas/Lapack) is out of the question for backward compatibility reasons. Too
much code depends on it.

David



 On Tue, Oct 28, 2014 at 6:21 PM, Sturla Molden sturla.mol...@gmail.com
 wrote:

 Eelco Hoogendoorn hoogendoorn.ee...@gmail.com wrote:

  Perhaps the 'batteries included' philosophy made sense in the early
 days of
  numpy; but given that there are several fft libraries with their own
 pros
  and cons, and that most numpy projects will use none of them at all, why
  should numpy bundle any of them?

 Because sometimes we just need to compute a DFT, just like we sometimes
 need to compute a sine or an exponential. It does that job perfectly well.
 It is not always about speed. Just typing np.fft.fft(x) is convenient.

 Sturla

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-28 Thread David Cournapeau
On Tue, Oct 28, 2014 at 5:24 AM, Sturla Molden sturla.mol...@gmail.com
wrote:

 Matthew Brett matthew.br...@gmail.com wrote:

  Is this an option for us?  Aren't we a little behind the performance
  curve on FFT after we lost FFTW?

 It does not run on Windows because it uses POSIX to allocate executable
 memory for tasklets, as I understand it.

 By the way, why did we lose FFTW, apart from GPL? One thing to mention
 here is that MKL supports the FFTW APIs. If we can use MKL for linalg and
 numpy.dot I don't see why we cannot use it for FFT.


The problem is APIs: MKL, Accelerate, etc... all use a standard API
(BLAS/LAPACK), but for FFT, you need to reimplement pretty much the whole
thing. Unsurprisingly, this meant the code was not well maintained.

Wrapping non standard, non-BSD libraries makes much more sense in separate
libraries in general.

David



 On Mac there is also vDSP in Accelerate framework which has an insanely
 fast FFT (also claimed to be faster than FFTW). Since it is a system
 library there should be no license problems.

 There are clearly options if someone wants to work on it and maintain it.

 Sturla

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread David Cournapeau
On Tue, Oct 28, 2014 at 9:19 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Tue, Oct 28, 2014 at 1:32 AM, Jerome Kieffer jerome.kief...@esrf.fr
 wrote:

 On Tue, 28 Oct 2014 04:28:37 +
 Nathaniel Smith n...@pobox.com wrote:

  It's definitely attractive. Some potential issues that might need
 dealing
  with, based on a quick skim:

 In my tests, numpy's FFTPACK isn't that bad considering
 * (virtually) no extra overhead for installation
 * (virtually) no plan creation time
 * not that slower for each transformation

 Because the plan creation was taking ages with FFTw, numpy's FFTPACK was
 often faster (overall)

 Cheers,


 Ondrej says that f90 fftpack (his mod) runs faster than fftw.


I would be interested to see the benchmarks for this.

The real issue with fftw (besides the license) is the need for plan
computation, which is expensive (but not needed for each transform).
Handling this in a way that is user friendly while tweakable for advanced
users is not easy, and IMO more appropriate for a separate package.

The main thing missing from fftpack is the handling of transform sizes that
 are not products of 2,3,4,5.


Strictly speaking, it is handled, just not through an FFT (it falls back to
the brute-force O(N**2) algorithm).

I made some experiments with the Bluestein transform to handle prime
transforms on fftpack, but the precision seemed to be an issue. Maybe I
should revive this work (if I still have it somewhere).
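
(For reference, the Bluestein/chirp-z idea reduces an arbitrary-length DFT to
a power-of-two-size convolution; a rough pure-numpy sketch, not the fftpack
experiment mentioned above:)

import numpy as np

def bluestein_dft(x):
    # DFT of arbitrary length n via one circular convolution of power-of-two
    # size m >= 2*n - 1, using the identity n*k = (n**2 + k**2 - (k - n)**2) / 2.
    x = np.asarray(x, dtype=complex)
    n = x.shape[0]
    k = np.arange(n)
    w = np.exp(-1j * np.pi * k * k / n)          # chirp
    m = 1 << int(np.ceil(np.log2(2 * n - 1)))
    a = np.zeros(m, dtype=complex)
    a[:n] = x * w
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(w)
    if n > 1:
        b[m - n + 1:] = np.conj(w[1:])[::-1]     # wrap-around part of the chirp
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return w * conv[:n]

# sanity check on a prime length against numpy's own FFT
z = np.random.rand(17) + 1j * np.random.rand(17)
assert np.allclose(bluestein_dft(z), np.fft.fft(z))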

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread David Cournapeau
I

On Tue, Oct 28, 2014 at 2:31 PM, Nathaniel Smith n...@pobox.com wrote:

 On 28 Oct 2014 07:32, Jerome Kieffer jerome.kief...@esrf.fr wrote:
 
  On Tue, 28 Oct 2014 04:28:37 +
  Nathaniel Smith n...@pobox.com wrote:
 
   It's definitely attractive. Some potential issues that might need
 dealing
   with, based on a quick skim:
 
  In my tests, numpy's FFTPACK isn't that bad considering
  * (virtually) no extra overhead for installation
  * (virtually) no plan creation time
  * not that slower for each transformation

 Well, this is what makes FFTS intriguing :-). It's BSD licensed, so we
 could distribute it by default like we do fftpack, it uses cache-oblivious
 algorithms so it has no planning step, and even without planning it
 benchmarks as faster than FFTW's most expensive planning mode (in the cases
 that FFTS supports, i.e. power-of-two transforms).

 The paper has lots of benchmark graphs, including measurements of setup
 time:
   http://anthonix.com/ffts/preprints/tsp2013.pdf


Nice. In this case, the solution may be to implement the Bluestein
transform to deal with prime/near-prime numbers on top of FFTS.

I did not look much, but it does not obviously support building on Windows
either, does it?

David


 -n

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread David Cournapeau
On Tue, Oct 28, 2014 at 3:06 PM, David Cournapeau courn...@gmail.com
wrote:

 I

 On Tue, Oct 28, 2014 at 2:31 PM, Nathaniel Smith n...@pobox.com wrote:

 On 28 Oct 2014 07:32, Jerome Kieffer jerome.kief...@esrf.fr wrote:
 
  On Tue, 28 Oct 2014 04:28:37 +
  Nathaniel Smith n...@pobox.com wrote:
 
   It's definitely attractive. Some potential issues that might need
 dealing
   with, based on a quick skim:
 
  In my tests, numpy's FFTPACK isn't that bad considering
  * (virtually) no extra overhead for installation
  * (virtually) no plan creation time
  * not that slower for each transformation

 Well, this is what makes FFTS intriguing :-). It's BSD licensed, so we
 could distribute it by default like we do fftpack, it uses cache-oblivious
 algorithms so it has no planning step, and even without planning it
 benchmarks as faster than FFTW's most expensive planning mode (in the cases
 that FFTS supports, i.e. power-of-two transforms).

 The paper has lots of benchmark graphs, including measurements of setup
 time:
   http://anthonix.com/ffts/preprints/tsp2013.pdf


 Nice. In this case, the solution may be to implement the Bluestein
 transform to deal with prime/near-prime numbers on top of FFTS.

 I did not look much, but it did not obviously support building on windows
 as well ?


Ok, I took a quick look at it, and it will be a significant effort to be
able to make FFTS work at all with MSVC on windows:

- the code is not C89 compatible
- it uses code generation using POSIX library. One would need to port that
part to using Win32 API as well.
- the test suite looks really limited (roundtripping only).

The codebase does not seem particularly well written either (but neither is
FFTPACK to be fair).

Nothing impossible (looks like Sony at least uses this code on windows:
https://github.com/anthonix/ffts/issues/27#issuecomment-40204403), but not
a 2 hours thing either.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] npy_log2 undefined on Linux

2014-10-26 Thread David Cournapeau
Not exactly: if you build numpy with mingw (as is the official binary), you
need to build everything that uses the numpy C API with it.

On Sun, Oct 26, 2014 at 1:22 AM, Matthew Brett matthew.br...@gmail.com
wrote:

 On Sat, Oct 25, 2014 at 2:15 PM, Matthew Brett matthew.br...@gmail.com
 wrote:
  On Fri, Oct 24, 2014 at 6:04 PM, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  We (dipy developers) have a hit a new problem trying to use the
  ``npy_log`` C function in our code.
 
  Specifically, on Linux, but not on Mac or Windows, we are getting
  errors of form:
 
  ImportError: /path/to/extension/distances.cpython-34m.so: undefined
  symbol: npy_log2
 
  when compiling something like:
 
  eg_log.pyx
  import numpy as np
  cimport numpy as cnp
 
  cdef extern from "numpy/npy_math.h" nogil:
  double npy_log(double x)
 
 
  def use_log(double val):
  return npy_log(val)
  /eg_log.pyx
 
  See : https://github.com/matthew-brett/mincy/tree/npy_log_example for
  a self-contained example that replicates the failure with ``make``.
 
  I guess this means that the code referred to by ``npy_log`` is not on
  the ordinary runtime path on Linux?
 
  To answer my own question - npy_log is defined in ``libnpymath.a``, in
  numpy/core/lib.
 
  The hint I needed was in
 
 https://github.com/numpy/numpy/blob/master/doc/source/reference/c-api.coremath.rst
 
  The correct setup.py is:
 
  setup.py
  from distutils.core import setup
  from distutils.extension import Extension
  from Cython.Distutils import build_ext
 
  from numpy.distutils.misc_util import get_info
  npm_info = get_info('npymath')
 
  ext_modules = [Extension("eg_log", ["eg_log.pyx"],
   **npm_info)]
 
  setup(
name = 'eg_log',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
  )
  /setup.py

 Ah, except this doesn't work on Windows, compiling with Visual Studio:

 LINK : fatal error LNK1181: cannot open input file 'npymath.lib'

 Investigating, c:\Python27\Lib\site-packages\numpy\core\lib has only
 `libnpymath.a``; I guess this isn't going to work for Visual Studio.
 Is this a bug?

 Cheers,

 Matthew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Custom dtypes without C -- or, a standard ndarray-like type

2014-09-23 Thread David Cournapeau
On Mon, Sep 22, 2014 at 4:31 AM, Nathaniel Smith n...@pobox.com wrote:

 On Sun, Sep 21, 2014 at 7:50 PM, Stephan Hoyer sho...@gmail.com wrote:
  pandas has some hacks to support custom types of data for which numpy
 can't
  handle well enough or at all. Examples include datetime and Categorical
 [1],
  and others like GeoArray [2] that haven't make it into pandas yet.
 
  Most of these look like numpy arrays but with custom dtypes and type
  specific methods/properties. But clearly nobody is particularly excited
  about writing the the C necessary to implement custom dtypes [3]. Nor is
 do
  we need the ndarray ABI.
 
  In many cases, writing C may not actually even be necessary for
 performance
  reasons, e.g., categorical can be fast enough just by wrapping an integer
  ndarray for the internal storage and using vectorized operations. And
 even
  if it is necessary, I think we'd all rather write Cython than C.
 
  It's great for pandas to write its own ndarray-like wrappers (*not*
  subclasses) that work with pandas, but it's a shame that there isn't a
  standard interface like the ndarray to make these arrays useable for the
  rest of the scientific Python ecosystem. For example, pandas has loads of
  fixes for np.datetime64, but nobody seems to be up for porting them to
 numpy
  (I doubt it would be easy).

 Writing them in the first place probably wasn't easy either :-). I
 don't really know why pandas spends so much effort on reimplementing
 stuff and papering over numpy limitations instead of fixing things
 upstream so that everyone can benefit. I assume they have reasons, and
 I could make some general guesses at what some of them might be, but
 if you want to know what they are -- which is presumably the first
 step in changing the situation -- you'll have to ask them, not us :-).

  I know these sort of concerns are not new, but I wish I had a sense of
 what
  the solution looks like. Is anyone actively working on these issues? Does
  the fix belong in numpy, pandas, blaze or a new project? I'd love to get
 a
  sense of where things stand and how I could help -- without writing any C
  :).

 I think there are there are three parts:

 For stuff that's literally just fixing bugs in stuff that numpy
 already has, then we'd certainly be happy to accept those bug fixes.
 Probably there are things we can do to make this easier, I dunno. I'd
 love to see some of numpy's internals moving into Cython to make them
 easier to hack on, but this won't be simple because right now using
 Cython to implement a module is really an all-or-nothing affair;
 making it possible to mix Cython with numpy's existing C code will
 require upstream changes in Cython.


 For cases where people genuinely want to implement a new array-like
 types (e.g. DataFrame or scipy.sparse) then numpy provides a fair
 amount of support for this already (e.g., the various hooks that allow
 things like np.asarray(mydf) or np.sin(mydf) to work), and we're
 working on adding more over time (e.g., __numpy_ufunc__).

 My feeling though is that in most of the cases you mention,
 implementing a new array-like type is huge overkill. ndarray's
 interface is vast and reimplementing even 90% of it is a huge effort.
 For most of the cases that people seem to run into in practice, the
 solution is to enhance numpy's dtype interface so that it's possible
 for mere mortals to implement new dtypes, e.g. by just subclassing
 np.dtype. This is totally doable and would enable a ton of
 awesomeness, but it requires someone with the time to sit down and
 work on it, and no-one has volunteered yet. Unfortunately it does
 require hacking on C code though.


While preparing my tutorial on NumPy C internals a year ago, I tried to get
a basic dtype implemented in Cython, and there were various issues even
when doing all of it in Cython (I can't remember the details now).
Solving this would be a good first step.

There were (are ?) also some issues regarding precedence in ufuncs
depending on the new dtype: numpy hardcodes that long double is the highest
precision floating point type, for example, and there were similar issues
regarding datetime handling. This does not matter for completely new types that
don't require interactions with others (categorical?).

Would it help to prepare a set of "implement your own dtype" notebooks? I
have a starting point from last year's tutorial (the corresponding slides
were never shown for lack of time).

David




 --
 Nathaniel J. Smith
 Postdoctoral researcher - Informatics - University of Edinburgh
 http://vorpus.org
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to install numpy on a box without hardware FPU

2014-09-02 Thread David Cournapeau
On Tue, Sep 2, 2014 at 8:29 AM, Emel Hasdal emel_has...@hotmail.com wrote:

 I am trying to run a python application which performs statistical
 calculations using Pandas which seem to depend on Numpy. Hence I have to
 install Numpy to get the app working.

 Do you mean I can change

numpy/core/src/npymath/ieee754.c.src

 such that the functions referencing exceptions (npy_get_floatstatus,
 npy_clear_floatstatus, npy_set_floatstatus_divbyzero,
 npy_set_floatstatus_overflow, npy_set_floatstatus_underflow,
 npy_set_floatstatus_invalid) do nothing?

 Could there be any implications of this on the numpy functionality?


AFAIK, few people have ever tried to run numpy on a CPU without an FPU. The
generic answer is that we do not know, as this is not a supported platform,
so you are on your own.

I would suggest you just try adding stubs to see how far you can go, and
come back to this group with the result of your investigation.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Help - numpy / scipy binary compatibility

2014-08-08 Thread David Cournapeau
On Sat, Aug 9, 2014 at 10:41 AM, Matthew Brett matthew.br...@gmail.com
wrote:

 Hi,

 I would be very happy of some help trying to work out a numpy package
 binary incompatibility.

 I'm trying to work out what's happening for this ticket:

 https://github.com/scipy/scipy/issues/3863

 which I summarized at the end:

 https://github.com/scipy/scipy/issues/3863#issuecomment-51669861

 but basically, we're getting these errors:

 RuntimeWarning: numpy.dtype size changed, may indicate binary
 incompatibility

 I now realize I am lost in the world of numpy / scipy etc binary
  compatibility, and I'd really like some advice. In this case:

 numpy == 1.8.1
 scipy == 0.14.0 - compiled against numpy 1.5.1
 scikit-learn == 0.15.1 compiled against numpy 1.6.0

 Can y'all see any potential problem with those dependencies in binary
 builds?

 The relevant scipy Cython c files seem to guard against raising this
 error by doing not-strict checks of the e.g. numpy dtype, so I am
 confused how these errors come about.  Can anyone give any pointers?


Assuming the message is not bogus, I would try importing von_mises in a venv
containing numpy 1.5.1, then 1.6.0, etc., to detect when the change
happened.

David


 Cheers,

 Matthew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove numpy/compat/_inspect.py ?

2014-08-01 Thread David Cournapeau
On Fri, Aug 1, 2014 at 11:23 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:




 On Fri, Aug 1, 2014 at 7:59 AM, Robert Kern robert.k...@gmail.com wrote:

 On Fri, Aug 1, 2014 at 2:54 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:

  Importing inspect looks to take about  500 ns on my machine. Although
 It is
  hard to be exact, as I suspect the file is sitting in the file cache.
 Would
  probably be slower with hard disks.

 Or where site-packages is on NFS.

  But as the inspect module is already
  imported elsewhere, the python interpreter should also have it cached.

 Not on a normal import it's not.

  import numpy
  import sys
  sys.modules['inspect']
 Traceback (most recent call last):
   File stdin, line 1, in module
 KeyError: 'inspect'


 There are two lazy imports of inspect.



 You should feel free to remove whatever parts of `_inspect` are not
 being used and to move the parts that are closer to where they are
 used if you feel compelled to. Please do not replace the current uses
 of `_inspect` with `inspect`.


 It is used in just one place. Is importing inspect so much slower than all
 the other imports we do?


Yes, please look at the thread I referred to. The custom inspect cut
import time by 30%; I doubt the ratio is much different today.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove numpy/compat/_inspect.py ?

2014-08-01 Thread David Cournapeau
On my machine, if I use inspect instead of _inspect in
numpy.compat.__init__, the import time increases by ~25% (from 82 ms to 99
ms).
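
(For anyone who wants to reproduce the numbers, a crude way to time it -- the
OS file cache dominates the first run, so take the best of several:)

import subprocess
import sys
import time

timings = []
for _ in range(10):
    t0 = time.time()
    subprocess.check_call([sys.executable, "-c", "import numpy"])
    timings.append(time.time() - t0)
print("best of 10: %.1f ms" % (min(timings) * 1e3))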

So the hack certainly still makes sense; one just needs to fix whatever needs
fixing (I am still not sure what's broken for the very specific use case
that code was bundled for).

David


On Sat, Aug 2, 2014 at 5:11 AM, David Cournapeau courn...@gmail.com wrote:




 On Fri, Aug 1, 2014 at 11:23 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:




 On Fri, Aug 1, 2014 at 7:59 AM, Robert Kern robert.k...@gmail.com
 wrote:

 On Fri, Aug 1, 2014 at 2:54 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:

  Importing inspect looks to take about  500 ns on my machine. Although
 It is
  hard to be exact, as I suspect the file is sitting in the file cache.
 Would
  probably be slower with hard disks.

 Or where site-packages is on NFS.

  But as the inspect module is already
  imported elsewhere, the python interpreter should also have it cached.

 Not on a normal import it's not.

  import numpy
  import sys
  sys.modules['inspect']
 Traceback (most recent call last):
   File stdin, line 1, in module
 KeyError: 'inspect'


 There are two lazy imports of inspect.



 You should feel free to remove whatever parts of `_inspect` are not
 being used and to move the parts that are closer to where they are
 used if you feel compelled to. Please do not replace the current uses
 of `_inspect` with `inspect`.


 It is used in just one place. Is importing inspect so much slower than
 all the other imports we do?


 Yes, please look at the thread I referred to. The custom inspect cut
 imports by 30 %, I doubt the ratio is much different today.

 David

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove numpy/compat/_inspect.py ?

2014-08-01 Thread David Cournapeau
On Sat, Aug 2, 2014 at 11:17 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:




 On Fri, Aug 1, 2014 at 8:01 PM, David Cournapeau courn...@gmail.com
 wrote:

 On my machine, if I use inspect instead of _inspect in
 numpy.compat.__init__, the import time increases ~ 25 % (from 82 ms to 99
 ms).

 So the hack certainly still make sense, one just need to fix whatever
 needs fixing (I am still not sure what's broken for the very specific
 usecase that code was bundled for).


 I'm not sure a one time hit of 17 ms is worth fighting for ;) The problems
 were that both the `string` and `dis` modules were used without importing
 them.


Don't fix what ain't broken ;)

The 17 ms is not what matters, the % is. People regularly complain about
import times, and a 25% increase in import time is significant (the above
timings are on my new macbook with an SSD and 16 GB RAM -- figures will easily
be an order of magnitude worse in common situations with slower computers,
slower HDDs, NFS, etc.).

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove numpy/compat/_inspect.py ?

2014-08-01 Thread David Cournapeau
On Sat, Aug 2, 2014 at 11:36 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:




 On Fri, Aug 1, 2014 at 8:22 PM, David Cournapeau courn...@gmail.com
 wrote:




 On Sat, Aug 2, 2014 at 11:17 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:




 On Fri, Aug 1, 2014 at 8:01 PM, David Cournapeau courn...@gmail.com
 wrote:

 On my machine, if I use inspect instead of _inspect in
 numpy.compat.__init__, the import time increases ~ 25 % (from 82 ms to 99
 ms).

 So the hack certainly still make sense, one just need to fix whatever
 needs fixing (I am still not sure what's broken for the very specific
 usecase that code was bundled for).


 I'm not sure a one time hit of 17 ms is worth fighting for ;) The
 problems were that both the `string` and `dis` modules were used without
 importing them.


 Don't fix what ain't broken ;)

 The 17 ms is not what matters, the % is. People regularly complain about
 import times, and 25 % increase in import time is significant (the above
 timing are on my new macbook with SSD and 16 Gb RAM -- figures will easily
 be 1 order of magnitude worse in common situations with slower computers,
 slower HDD, NFS, etc...)


 Be interesting to compare times. Could you send along the code you used?
 My machine is similar except it is a desktop with 2 SSDs in raid 0.


I just hacked numpy.compat.__init__ to use inspect instead of _inspect:

diff --git a/numpy/compat/__init__.py b/numpy/compat/__init__.py
index 5b371f5..57f6d7f 100644
--- a/numpy/compat/__init__.py
+++ b/numpy/compat/__init__.py
@@ -10,11 +10,11 @@ extensions, which may be included for the following
reasons:
 
 from __future__ import division, absolute_import, print_function

-from . import _inspect
+import inspect as _inspect
 from . import py3k
-from ._inspect import getargspec, formatargspec
+from inspect import getargspec, formatargspec
 from .py3k import *

 __all__ = []
-__all__.extend(_inspect.__all__)
+__all__.extend(["getargspec", "formatargspec"])
 __all__.extend(py3k.__all__)

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove numpy/compat/_inspect.py ?

2014-07-31 Thread David Cournapeau
The docstring at the beginning of the module is still relevant AFAIK: it
was about decreasing import times. See
http://mail.scipy.org/pipermail/numpy-discussion/2009-October/045981.html


On Fri, Aug 1, 2014 at 10:27 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Hi All,

 The _inspect.py function looks like a numpy version of the python inspect
 function. ISTR that is was a work around for problems with the early python
 versions, but that would have been back in 2009.

 Thoughts?

 Chuck

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-06 Thread David Cournapeau
On Sun, Jul 6, 2014 at 2:24 AM, Julian Taylor jtaylor.deb...@googlemail.com
 wrote:

 On 05.07.2014 19:11, David Cournapeau wrote:
  On Sun, Jul 6, 2014 at 1:55 AM, Julian Taylor
  jtaylor.deb...@googlemail.com
  wrote:
 
  On 05.07.2014 18:40, David Cournapeau wrote:
   The efforts are on average less demanding than this discussion. We
 are
   talking about adding entries to a list in most cases...
  
   Also, while adding the optimization support for bento, I've
  noticed that
   a lot of the related distutils code is broken, and does not work as
   expected on at least OS X + clang.
 
  It just spits out a lot of warnings but they are harmless.
 
 
   Adding lots of warnings is not harmless, as it renders the compiler
  warning system near useless (too many false alarms).
 

 true but until now we haven't received a single complaint nor fixes so
 probably not many developers are actually using macs/clang to work on
 numpy C code.
 But I do agree its bad and I have fixing that on my todo list, I didn't
 get around to it yet.


Here is an attempt: https://github.com/numpy/numpy/pull/4842

It uses a vile hack, but I did not see any other simple way. It fixes the
warnings on OS X; once travis-ci confirms the tests pass on linux, I will
test it on MSVC.

David



  I will fix the checks for both distutils and bento (using the autoconf
  macros setup, which should be more reliable than what we use for builtin
  and __attribute__-related checks)
 
  David
 
 
  We could remove them by using with -Werror=attribute for the
 conftests
  if it really bothers someone.
  Or do you mean something else?
 
  
   David
  
  
    On Sun, Jul 6, 2014 at 1:38 AM, Nathaniel Smith n...@pobox.com wrote:
  
    On Sat, Jul 5, 2014 at 3:21 PM, David Cournapeau courn...@gmail.com wrote:
   
 On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith n...@pobox.com wrote:
   
Maybe bento will revive and take over the new python
  packaging world!
Maybe not. Maybe something else will. I don't see how our
  support for
it will really affect these outcomes in any way. And I
  especially
don't see why it's important to spend time *now* on keeping
  bento
working, just in case it becomes useful *later*.
   
But it is working right now, so that argument is moot.
  
   My suggestion was that we should drop the rule that a patch
 has to
   keep bento working to be merged. We're talking about future
  breakages
   and future effort. The fact that it's working now doesn't say
  anything
   about whether it's worth continuing to invest time in it.
  
   --
   Nathaniel J. Smith
   Postdoctoral researcher - Informatics - University of Edinburgh
   http://vorpus.org
   ___
   NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
  mailto:NumPy-Discussion@scipy.org mailto:
 NumPy-Discussion@scipy.org
   http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
  
  
  
   ___
   NumPy-Discussion mailing list
    NumPy-Discussion@scipy.org
   http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
 
  ___
  NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sat, Jul 5, 2014 at 11:25 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Ralf likes the speed of bento, but it is not currently maintained


What exactly is not maintained ?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sat, Jul 5, 2014 at 5:23 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:




 On Sat, Jul 5, 2014 at 10:13 AM, David Cournapeau courn...@gmail.com
 wrote:




 On Sat, Jul 5, 2014 at 11:25 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Ralf likes the speed of bento, but it is not currently maintained


 What exactly is not maintained ?


 The issue is that Julian made some slightly nontrivial changes to
 core/setup.py and didn't want to update core/bscript. No one else has taken
 the time either to make those changes. That didn't bother me enough yet to
 go fix it, because they're all optional features and using Bento builds
 works just fine at the moment (and is part of the Travis CI test runs, so
 it'll keep working).


What are those changes ?

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith n...@pobox.com wrote:

 On Sat, Jul 5, 2014 at 2:32 PM, Ralf Gommers ralf.gomm...@gmail.com
 wrote:
 
  On Sat, Jul 5, 2014 at 1:54 PM, Nathaniel Smith n...@pobox.com wrote:
 
  On 5 Jul 2014 09:23, Ralf Gommers ralf.gomm...@gmail.com wrote:
  
   On Sat, Jul 5, 2014 at 10:13 AM, David Cournapeau courn...@gmail.com
 
   wrote:
  
   On Sat, Jul 5, 2014 at 11:25 AM, Charles R Harris
   charlesr.har...@gmail.com wrote:
  
   Ralf likes the speed of bento, but it is not currently maintained
  
  
   What exactly is not maintained ?
  
  
   The issue is that Julian made some slightly nontrivial changes to
   core/setup.py and didn't want to update core/bscript. No one else has
 taken
   the time either to make those changes. That didn't bother me enough
 yet to
   go fix it, because they're all optional features and using Bento
 builds
   works just fine at the moment (and is part of the Travis CI test
 runs, so
   it'll keep working).
 
  Perhaps a compromise would be to declare it officially unsupported and
  remove it from Travis CI, while leaving the files in place to be used
 on an
  at-your-own-risk basis? As long as it's in Travis, the default is that
  anyone who breaks it has to fix it. If it's not in Travis, then the
 default
  is that the people (person?) who use bento are responsible for keeping
 it
  working for their needs.
 
  -1 that just means that simple changes like adding a new extension will
 not
  get made before PRs get merged, and bento support will be in a broken
 state
  much more often.

 Yes, and then the handful of people who care about this would fix it
 or not. Your -1 is attempting to veto other people's *not* paying
 attention to this build system. I... don't think -1's work that way
 :-(

   I don't think the above is a good reason to remove Bento support. The
   much faster builds alone are a good reason to keep it. And the
 assertion
   that all numpy devs understand numpy.distutils is more than a little
   questionable:)
 
  They surely don't. But thousands of people use setup.py, and one or two
  use bento.
 
  I'm getting a little tired of these assertions. It's clear that David
 and I
  use it. A cursory search on Github reveals that Stefan, Fabian, Jonas and
  @aksarkar do (or did) as well:
 https://github.com/scipy/scipy/commit/74d823b3
 https://github.com/numpy/numpy/issues/2993
 https://github.com/numpy/numpy/pull/3606
 https://github.com/numpy/numpy/issues/3889
  For every user you can measure there's usually a number of users that you
  don't hear about.

 I apologize for forgetting before that you do use Bento, but these
 patches you're finding don't really change the overall picture. Let's
 assume that there are 100 people using Bento, who would be slightly
 inconvenienced if they had to use setup.py instead, or got stuck
 patching the bento build themselves to keep it working. 100 is
 probably an order of magnitude too high, but whatever. OTOH numpy has
 almost 7 million downloads on PyPI+sf.net, of which approximately
 every one used setup.py one way or another, plus all the people get it
 from alternative channels like distros, which also AFAIK universally
 use setup.py. Software development is all about trade-offs. Time that
 numpy developers spend messing about with bento to benefit those
 hundred users is time that could instead be spent on improvements that
 benefit many orders of magnitudes more users. Why do you want us to
 spend our time producing x units of value when we could instead be
 producing 100*x units of value for the same effort?

  Yet supporting both requires twice as much energy and attention as
  supporting just one.
 
  That's of course not true. For most changes the differences in where and
 how
  to update the build systems are small. Only for unusual changes like
 Julian
  patches to make use of optional GCC features, Bento and distutils may
  require very different changes.
 
  We've probably spent more person-hours talking about this, documenting
 the
  missing bscript bits, etc. than you've saved on those fast builds.
 
  Then maybe stop talking about it:)
 
  Besides the fast builds, which is only one example of why I like Bento
  better, there's also the fundamental question of what we do with build
 tools
  in the long term. It's clear that distutils is a dead end. All the PEPs
  related to packaging move in the direction of supporting tools like Bento
  better. If in the future we need significant new features in our build
 tool,
  Bento is a much better base to build on than numpy.distutils. It's
  unfortunate that at the moment there's no one that works on improving our
  build situation, but that is what it is. Removing Bento support is a
 step in
  the wrong direction imho.

 We must do something! This is something!

 Bento is pre-alpha software whose last upstream commit was in July
 2013. Its own CI tests have been failing since Feb. 2013, almost a
 year

Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sat, Jul 5, 2014 at 11:51 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:




 On Sat, Jul 5, 2014 at 8:28 AM, Matthew Brett matthew.br...@gmail.com
 wrote:

 On Sat, Jul 5, 2014 at 3:21 PM, David Cournapeau courn...@gmail.com
 wrote:
 
 
 
  On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith n...@pobox.com wrote:
 
  On Sat, Jul 5, 2014 at 2:32 PM, Ralf Gommers ralf.gomm...@gmail.com
  wrote:
  
   On Sat, Jul 5, 2014 at 1:54 PM, Nathaniel Smith n...@pobox.com
 wrote:
  
   On 5 Jul 2014 09:23, Ralf Gommers ralf.gomm...@gmail.com wrote:
   
On Sat, Jul 5, 2014 at 10:13 AM, David Cournapeau
courn...@gmail.com
wrote:
   
On Sat, Jul 5, 2014 at 11:25 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
   
Ralf likes the speed of bento, but it is not currently
 maintained
   
   
What exactly is not maintained ?
   
   
The issue is that Julian made some slightly nontrivial changes to
core/setup.py and didn't want to update core/bscript. No one else
 has
taken
the time either to make those changes. That didn't bother me
 enough
yet to
go fix it, because they're all optional features and using Bento
builds
works just fine at the moment (and is part of the Travis CI test
runs, so
it'll keep working).
  
   Perhaps a compromise would be to declare it officially unsupported
 and
   remove it from Travis CI, while leaving the files in place to be
 used
   on an
   at-your-own-risk basis? As long as it's in Travis, the default is
 that
   anyone who breaks it has to fix it. If it's not in Travis, then the
   default
   is that the people (person?) who use bento are responsible for
 keeping
   it
   working for their needs.
  
   -1 that just means that simple changes like adding a new extension
 will
   not
   get made before PRs get merged, and bento support will be in a broken
   state
   much more often.
 
  Yes, and then the handful of people who care about this would fix it
  or not. Your -1 is attempting to veto other people's *not* paying
  attention to this build system. I... don't think -1's work that way
  :-(
 
I don't think the above is a good reason to remove Bento support.
 The
much faster builds alone are a good reason to keep it. And the
assertion
that all numpy devs understand numpy.distutils is more than a
 little
questionable:)
  
   They surely don't. But thousands of people use setup.py, and one or
 two
   use bento.
  
   I'm getting a little tired of these assertions. It's clear that David
   and I
   use it. A cursory search on Github reveals that Stefan, Fabian, Jonas
   and
   @aksarkar do (or did) as well:
  https://github.com/scipy/scipy/commit/74d823b3
  https://github.com/numpy/numpy/issues/2993
  https://github.com/numpy/numpy/pull/3606
  https://github.com/numpy/numpy/issues/3889
   For every user you can measure there's usually a number of users that
   you
   don't hear about.
 
  I apologize for forgetting before that you do use Bento, but these
  patches you're finding don't really change the overall picture. Let's
  assume that there are 100 people using Bento, who would be slightly
  inconvenienced if they had to use setup.py instead, or got stuck
  patching the bento build themselves to keep it working. 100 is
  probably an order of magnitude too high, but whatever. OTOH numpy has
  almost 7 million downloads on PyPI+sf.net, of which approximately
  every one used setup.py one way or another, plus all the people get it
  from alternative channels like distros, which also AFAIK universally
  use setup.py. Software development is all about trade-offs. Time that
  numpy developers spend messing about with bento to benefit those
  hundred users is time that could instead be spent on improvements that
  benefit many orders of magnitudes more users. Why do you want us to
  spend our time producing x units of value when we could instead be
  producing 100*x units of value for the same effort?
 
   Yet supporting both requires twice as much energy and attention as
   supporting just one.
  
   That's of course not true. For most changes the differences in where
 and
   how
   to update the build systems are small. Only for unusual changes like
   Julian
   patches to make use of optional GCC features, Bento and distutils may
   require very different changes.
  
   We've probably spent more person-hours talking about this,
 documenting
   the
   missing bscript bits, etc. than you've saved on those fast builds.
  
   Then maybe stop talking about it:)
  
   Besides the fast builds, which is only one example of why I like
 Bento
   better, there's also the fundamental question of what we do with
 build
   tools
   in the long term. It's clear that distutils is a dead end. All the
 PEPs
   related to packaging move in the direction of supporting tools like
   Bento
   better. If in the future we need significant new features in our
 build
   tool,
   Bento is a much better

Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
The efforts are on average less demanding than this discussion. We are
talking about adding entries to a list in most cases...

Also, while adding the optimization support for bento, I've noticed that a
lot of the related distutils code is broken, and does not work as expected
on at least OS X + clang.

David


On Sun, Jul 6, 2014 at 1:38 AM, Nathaniel Smith n...@pobox.com wrote:

 On Sat, Jul 5, 2014 at 3:21 PM, David Cournapeau courn...@gmail.com
 wrote:
 
  On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith n...@pobox.com wrote:
 
  Maybe bento will revive and take over the new python packaging world!
  Maybe not. Maybe something else will. I don't see how our support for
  it will really affect these outcomes in any way. And I especially
  don't see why it's important to spend time *now* on keeping bento
  working, just in case it becomes useful *later*.
 
  But it is working right now, so that argument is moot.

 My suggestion was that we should drop the rule that a patch has to
 keep bento working to be merged. We're talking about future breakages
 and future effort. The fact that it's working now doesn't say anything
 about whether it's worth continuing to invest time in it.

 --
 Nathaniel J. Smith
 Postdoctoral researcher - Informatics - University of Edinburgh
 http://vorpus.org
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sun, Jul 6, 2014 at 1:55 AM, Julian Taylor jtaylor.deb...@googlemail.com
 wrote:

 On 05.07.2014 18:40, David Cournapeau wrote:
  The efforts are on average less demanding than this discussion. We are
  talking about adding entries to a list in most cases...
 
  Also, while adding the optimization support for bento, I've noticed that
  a lot of the related distutils code is broken, and does not work as
  expected on at least OS X + clang.

 It just spits out a lot of warnings but they are harmless.


Adding lots of warnings is not harmless, as it renders the compiler
warning system near useless (too many false alarms).

I will fix the checks for both distutils and bento (using the autoconf
macros setup, which should be more reliable than what we use for builtin
and __attribute__-related checks)
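
The idea is essentially what autoconf does: compile a tiny conftest with
-Werror, so that an unsupported attribute turns the usual "unknown attribute"
warning into a hard compile error instead of silently passing. A minimal
sketch of that kind of check (untested, for illustration only; the compiler
name, flags and attribute below are placeholders, not what numpy.distutils
actually uses):

import os
import shutil
import subprocess
import tempfile

def compiler_has_attribute(attribute, cc="cc"):
    """Return True if `cc` accepts __attribute__((<attribute>)) under -Werror."""
    src = (
        'int __attribute__((%s)) foo(void) { return 0; }\n'
        'int main(void) { return foo(); }\n' % attribute
    )
    tmpdir = tempfile.mkdtemp()
    try:
        c_file = os.path.join(tmpdir, "conftest.c")
        with open(c_file, "w") as fh:
            fh.write(src)
        # -Werror makes an unknown attribute fail the compile instead of
        # only emitting a warning, which is what makes the check reliable.
        with open(os.devnull, "w") as devnull:
            ret = subprocess.call(
                [cc, "-Werror", "-c", c_file,
                 "-o", os.path.join(tmpdir, "conftest.o")],
                stdout=devnull, stderr=devnull)
        return ret == 0
    finally:
        shutil.rmtree(tmpdir)

print(compiler_has_attribute('optimize("O3")'))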

David


 We could remove them by using with -Werror=attribute for the conftests
 if it really bothers someone.
 Or do you mean something else?

 
  David
 
 
  On Sun, Jul 6, 2014 at 1:38 AM, Nathaniel Smith n...@pobox.com
  mailto:n...@pobox.com wrote:
 
  On Sat, Jul 5, 2014 at 3:21 PM, David Cournapeau courn...@gmail.com
  mailto:courn...@gmail.com wrote:
  
   On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith n...@pobox.com
  mailto:n...@pobox.com wrote:
  
   Maybe bento will revive and take over the new python packaging
 world!
   Maybe not. Maybe something else will. I don't see how our support
 for
   it will really affect these outcomes in any way. And I especially
   don't see why it's important to spend time *now* on keeping bento
   working, just in case it becomes useful *later*.
  
   But it is working right now, so that argument is moot.
 
  My suggestion was that we should drop the rule that a patch has to
  keep bento working to be merged. We're talking about future breakages
  and future effort. The fact that it's working now doesn't say
 anything
  about whether it's worth continuing to invest time in it.
 
  --
  Nathaniel J. Smith
  Postdoctoral researcher - Informatics - University of Edinburgh
  http://vorpus.org
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Remove bento from numpy

2014-07-05 Thread David Cournapeau
On Sun, Jul 6, 2014 at 2:24 AM, Julian Taylor jtaylor.deb...@googlemail.com
 wrote:

 On 05.07.2014 19:11, David Cournapeau wrote:
  On Sun, Jul 6, 2014 at 1:55 AM, Julian Taylor
  jtaylor.deb...@googlemail.com mailto:jtaylor.deb...@googlemail.com
  wrote:
 
  On 05.07.2014 18:40, David Cournapeau wrote:
   The efforts are on average less demanding than this discussion. We
 are
   talking about adding entries to a list in most cases...
  
   Also, while adding the optimization support for bento, I've
  noticed that
   a lot of the related distutils code is broken, and does not work as
   expected on at least OS X + clang.
 
  It just spits out a lot of warnings but they are harmless.
 
 
  Adding lots of warnings are not harmless as they render the compiler
  warning system near useless (too many false alarms).
 

 true but until now we haven't received a single complaint nor fixes so
 probably not many developers are actually using macs/clang to work on
 numpy C code.


Not many people are working on numpy C code period :)

FWIW, clang is now the standard OS X compiler since Mavericks (Apple in all
its wisdom made gcc an alias to clang...).

David


 But I do agree it's bad and I have fixing that on my todo list, I didn't
 get around to it yet.

  I will fix the checks for both distutils and bento (using the autoconf
  macros setup, which should be more reliable than what we use for builtin
  and __attribute__-related checks)
 
  David
 
 
  We could remove them by using with -Werror=attribute for the
 conftests
  if it really bothers someone.
  Or do you mean something else?
 
  
   David
  
  
   On Sun, Jul 6, 2014 at 1:38 AM, Nathaniel Smith n...@pobox.com
  mailto:n...@pobox.com
   mailto:n...@pobox.com mailto:n...@pobox.com wrote:
  
   On Sat, Jul 5, 2014 at 3:21 PM, David Cournapeau
  courn...@gmail.com mailto:courn...@gmail.com
   mailto:courn...@gmail.com mailto:courn...@gmail.com
 wrote:
   
On Sat, Jul 5, 2014 at 11:17 PM, Nathaniel Smith
  n...@pobox.com mailto:n...@pobox.com
   mailto:n...@pobox.com mailto:n...@pobox.com wrote:
   
Maybe bento will revive and take over the new python
  packaging world!
Maybe not. Maybe something else will. I don't see how our
  support for
it will really affect these outcomes in any way. And I
  especially
don't see why it's important to spend time *now* on keeping
  bento
working, just in case it becomes useful *later*.
   
But it is working right now, so that argument is moot.
  
   My suggestion was that we should drop the rule that a patch
 has to
   keep bento working to be merged. We're talking about future
  breakages
   and future effort. The fact that it's working now doesn't say
  anything
   about whether it's worth continuing to invest time in it.
  
   --
   Nathaniel J. Smith
   Postdoctoral researcher - Informatics - University of Edinburgh
   http://vorpus.org
   ___
   NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
  mailto:NumPy-Discussion@scipy.org mailto:
 NumPy-Discussion@scipy.org
   http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
  
  
  
   ___
   NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
   http://mail.scipy.org/mailman/listinfo/numpy-discussion
  
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

2014-06-06 Thread David Cournapeau
On Thu, Jun 5, 2014 at 11:41 PM, Nathaniel Smith n...@pobox.com wrote:

 On Thu, Jun 5, 2014 at 3:36 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:
  @nathaniel IIRC, one of the objections to the missing values work was
 that
  it changed the underlying array object by adding a couple of variables to
  the structure. I'm willing to do that sort of thing, but it would be
 good to
  have general agreement that that is acceptable.

 I can't think of a reason why adding new variables to the structure *per
 se* would be objectionable to anyone? IIRC the objection you're
 thinking of wasn't to the existence of new variables, but to their
 effect on compatibility: their semantics meant that every piece of
 legacy C code that worked with ndarrays had to be updated to check for
 the new variables before it could correctly interpret the ->data
 field, and if it wasn't updated it would just return incorrect
 results. And there wasn't really a clear story for how we were going
 to detect and fix all this legacy code. This specific kind of
 compatibility break does seem pretty objectionable, but that's because
 of the details of its behaviour, not because variables in general are
 problematic, I think.

  As to blaze/dynd, I'd like to steal bits here and there, and maybe in the
  long term base numpy on top of it with a compatibility layer. There is a
 lot
  of thought and effort that has gone into those projects and we should use
  what we can. As is, I think numpy is good for another five to ten years
 and
  will probably hang on for fifteen, but it will be outdated by the end of
  that period. Like great whites, we need to keep swimming just to have
  oxygen. Software projects tend to be obligate ram ventilators.

 I worry a bit that this could become a self-fulfilling prophecy.
 Plenty of software survives longer than that; the Linux kernel hasn't
 had a real major number increase [1] since 2.6.0, more than 10 years
 ago, and it's still an extraordinarily vital project. Partly this is
 because they have resources we don't etc., but partly it's just
 because they've decided that incremental change is how they're going
 to do things, and approached each new feature with that in mind. And
 in ten years they haven't yet found any features that required a
 serious compatibility break.

 This is a pretty minor worry though -- we don't have to agree about
 what will happen in 10 years to agree about what to do now :-).

 [1] http://www.pcmag.com/article2/0,2817,2388926,00.asp

  The Python 3 experience is definitely something we want to avoid. And
 while
  blaze does big data and offers some nice features, I don't know that it
  offers compelling reasons to upgrade to the more ordinary user at this
 time,
  so I'd like to sort of slip it into numpy if possible.
 
  If we do start moving numpy forward in more radical steps, we should try
 to
  have some agreement beforehand as to what sort of changes are acceptable.
  For instance, to maintain backward compatibility, is it sufficient that a
  recompile will do the job, or do we require forward compatibility for
  extensions compiled against earlier releases?

 I find it hard to discuss these things in general, since specific
 compatibility issues usually involve complicated trade-offs -- will
 every package have to recompile or just some of them, if they don't
 will it be a nice error message or a segfault, is there some way we
 can issue warnings ahead of time for the offending behaviour, etc.
 etc.

 That said, my vote is that if there's a change that (a) can't be done
 some other way, (b) requires a recompile, (c) doesn't cause segfaults
 but rather produces some sensible error message like ABI mismatch
 please recompile, (d) is a change that's worth the bother (this
 determination to include at least canvassing the list to check that
 users in general agree that it's worth it), then yeah we should do it.
 I don't anticipate that this will happen very often given how far
 we've gotten without it, but yeah.


Changing the ABI 'safely' (i.e. raising a Python exception if it changed) is
already handled in numpy. We can always increase the ABI version if we
think it is worth it.
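
For context, the mechanism is roughly: the extension records the ABI version
it was compiled against, and the import-time check compares it against the
version exposed by the numpy that is actually running, refusing to load on a
mismatch. A rough Python rendering of that logic (the real check lives in C;
the constant, message and exception type below are illustrative only):

# Baked into the extension when it is built against a given numpy.
COMPILED_AGAINST_ABI = 0x01000009

def check_numpy_abi(runtime_abi):
    # Refuse to load if the running numpy exposes a different ABI version.
    if runtime_abi != COMPILED_AGAINST_ABI:
        raise ImportError(
            "module compiled against ABI version 0x%x but this version of "
            "numpy has ABI version 0x%x" % (COMPILED_AGAINST_ABI, runtime_abi))

check_numpy_abi(0x01000009)    # matching version: no error
# check_numpy_abi(0x0100000a)  # a bumped ABI version would raise ImportError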

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

2014-06-06 Thread David Cournapeau
On Thu, Jun 5, 2014 at 11:48 PM, Nathaniel Smith n...@pobox.com wrote:

 On Thu, Jun 5, 2014 at 3:24 PM, David Cournapeau courn...@gmail.com
 wrote:
  On Thu, Jun 5, 2014 at 2:51 PM, Charles R Harris 
 charlesr.har...@gmail.com
  wrote:
  On Thu, Jun 5, 2014 at 6:40 AM, David Cournapeau courn...@gmail.com
  wrote:
  IMO, what is needed the most is refactoring the internal to extract the
  Python C API low level from the rest of the code, as I think that's
 the main
  bottleneck to get more contributors (or get new core features more
 quickly).
 
 
  What do you mean by extract the Python C API?
 
  Poor choice of words: I meant extracting the lower level part of
  array/ufunc/etc... from its wrapping into the python C API (with the idea
  that the latter could be done in Cython, modulo improvements in cython to
  manage the binary/code size explosion).
 
  IOW, split numpy into core and core-py (I think dynd benefits a lots from
  that, on top of its feature set).

 Can you give some examples of these benefits?


numpy.core is difficult to approach as a codebase: it is big, and quite
entangled. While the concepts are sound, there is not much internal
architecture. I would love for numpy to have a proper internal C API. I'd
like to think my effort of splitting multiarray's giant .c files into
multiple files somehow made everybody's life easier.

A lot of the current code is python C API, which nobody cares about, and
could be handled by e.g. cython (although again the feasibility of that
needs to be discussed with the cython team, as cython cannot
realistically be used pervasively for numpy ATM, see e.g.
http://mail.scipy.org/pipermail/scipy-dev/2012-July/017717.html).

Such a separation would also be helpful for the pie-in-the-sky projects you
mentioned. I think Wes McKinney and the pandas team's experience of using
cython for the core vs just for the C-Python integration would be useful
there as well.

I'm kinda wary of
 refactoring-for-the-sake-of-it -- IME usually it's easier, more
 valuable, and more fun to refactor in the process of making some
 concrete improvement.


Sure, I am just suggesting there should be a conscious effort to not just
add features but also think about the internal consistency.

One concrete example is dtype and its pluggability: I would love to see
things like datetime in a separate extension. It would keep us honest to
allow people to create custom dtypes (last time I checked, there was some
hardcoding that made it hard to do).

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

2014-06-05 Thread David Cournapeau
On Thu, Jun 5, 2014 at 9:44 AM, Todd toddr...@gmail.com wrote:


 On 5 Jun 2014 02:57, Nathaniel Smith n...@pobox.com wrote:
 
  On Wed, Jun 4, 2014 at 7:18 AM, Travis Oliphant tra...@continuum.io
 wrote:
  And numpy will be much harder to replace than numeric --
  numeric wasn't the most-imported package in the pythonverse ;-).

 If numpy is really such a core part of  python ecosystem, does it really
 make sense to keep it as a stand-alone package?  Rather than thinking about
 a numpy 2, might it be better to be focusing on getting ndarray and dtype
 to a level of quality where acceptance upstream might be plausible?


There have been discussions about integrating numpy into the stdlib a long
time ago (can't find a reference right now), and the consensus was that this
was neither possible in its current shape nor advisable. The situation has
not changed.

Putting something in the stdlib means it basically cannot change anymore:
API compatibility requirements would be stronger than what we provide even
now. NumPy is also a large codebase which would need some major clean up to
be accepted, etc...

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

2014-06-05 Thread David Cournapeau
 stay
 with C or should we support C++ code with its advantages of smart pointers,
 exception handling, and templates? We will need a certain amount of
 flexibility going forward and we should decide, or at least discuss, such
 issues up front.


Last time the C++ discussion was brought up, no consensus could be reached.
I think quite a few radical changes can be made without that consensus
already, though others may disagree there.

IMO, what is needed the most is refactoring the internal to extract the
Python C API low level from the rest of the code, as I think that's the
main bottleneck to get more contributors (or get new core features more
quickly).

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] big-bangs versus incremental improvements (was: Re: SciPy 2014 BoF NumPy Participation)

2014-06-05 Thread David Cournapeau
On Thu, Jun 5, 2014 at 2:51 PM, Charles R Harris charlesr.har...@gmail.com
wrote:




 On Thu, Jun 5, 2014 at 6:40 AM, David Cournapeau courn...@gmail.com
 wrote:




 On Thu, Jun 5, 2014 at 3:36 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:




 On Wed, Jun 4, 2014 at 7:29 PM, Travis Oliphant tra...@continuum.io
 wrote:

 Believe me, I'm all for incremental changes if it is actually possible
 and doesn't actually cost more.  It's also why I've been silent until now
 about anything we are doing being a candidate for a NumPy 2.0.  I
 understand the challenges of getting people to change.  But, features and
 solid improvements *will* get people to change --- especially if their new
 library can be used along with the old library and the transition can be
 done gradually. Python 3's struggle is the lack of features.

 At some point there *will* be a NumPy 2.0.   What features go into
 NumPy 2.0, how much backward compatibility is provided, and how much
 porting is needed to move your code from NumPy 1.X to NumPy 2.X is the real
 user question --- not whether it is characterized as incremental change
 or re-write. What I call a re-write and what you call an
 incremental-change are two points on a spectrum and likely overlap
 signficantly if we really compared what we are thinking about.

 One huge benefit that came out of the numeric / numarray / numpy
 transition that we mustn't forget about was actually the extended buffer
 protocol and memory view objects.  This really does allow multiple array
 objects to co-exist and libraries to use the object that they prefer in a
  way that did not exist when Numarray / numeric / numpy came out. So, we
 shouldn't be afraid of that world.   The existence of easy package managers
 to update environments to try out new features and have applications on a
 single system that use multiple versions of the same library is also
 something that didn't exist before and that will make any transition easier
 for users.
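
As a concrete illustration of that point: with the PEP 3118 buffer protocol,
two array objects can share the same memory without any copying. A minimal
sketch using just numpy and a plain memoryview:

import numpy as np

a = np.arange(5, dtype=np.int64)
m = memoryview(a)     # the ndarray exposes its data via the buffer protocol
b = np.asarray(m)     # a second array object wrapping the same memory, no copy

b[0] = 42             # modifying through one view...
print(a[0])           # ...is visible through the other: prints 42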

 One thing I regret about my working on NumPy originally is that I
 didn't have the foresight, skill, and understanding to work more on a more
 extended and better designed multiple-dispatch system so that multiple
 array objects could participate together in an expression flow.   The
 __numpy_ufunc__ mechanism gives enough capability in that direction that it
 may be better now.

 Ultimately, I don't disagree that NumPy can continue to exist in
 incremental change mode ( though if you are swapping out whole swaths of
 C-code for Cython code --- it sounds a lot like a re-write) as long as
 there are people willing to put the effort into changing it.   I think this
 is actually benefited by the existence of other array objects that are
 pushing the feature envelope without the constraints --- in much the same
 way that the Python standard library is benefitted by many versions of
 different capabilities being tried out before moving into the standard
 library.

 I remain optimistic that things will continue to improve in multiple
 ways --- if a little messier than any of us would conceive individually.
   It *is* great to see all the PR's coming from multiple people on NumPy
 and all the new energy around improving things whether great or small.


 @nathaniel IIRC, one of the objections to the missing values work was
 that it changed the underlying array object by adding a couple of variables
 to the structure. I'm willing to do that sort of thing, but it would be
 good to have general agreement that that is acceptable.



 I think changing the ABI for some versions of numpy (2.0, whatever) is
 acceptable. There is little doubt that the ABI will need to change to
 accommodate a better and more flexible architecture.

 Changing the C API is more tricky: I am not up to date on the changes
 from the last 2-3 years, but at that time, most things could have been
 changed internally without breaking much, though I did not go far enough to
 estimate what the performance impact could be (if any).



 As to blaze/dynd, I'd like to steal bits here and there, and maybe in
 the long term base numpy on top of it with a compatibility layer. There is
 a lot of thought and effort that has gone into those projects and we should
 use what we can. As is, I think numpy is good for another five to ten years
 and will probably hang on for fifteen, but it will be outdated by the end
 of that period. Like great whites, we need to keep swimming just to have
 oxygen. Software projects tend to be obligate ram ventilators.

 The Python 3 experience is definitely something we want to avoid. And
 while blaze does big data and offers some nice features, I don't know that
 it offers compelling reasons to upgrade to the more ordinary user at this
 time, so I'd like to sort of slip it into numpy if possible.

 If we do start moving numpy forward in more radical steps, we should try
 to have some agreement beforehand as to what sort of changes are
 acceptable. For instance

Re: [Numpy-discussion] SciPy 2014 BoF NumPy Participation

2014-06-04 Thread David Cournapeau
I won't be able to make it to scipy this year, sadly.

I concur with Nathaniel that we can do a lot of things without a full
rewrite -- it is all too easy to see what is gained with a rewrite and lose
sight of what is lost. I have yet to see a really strong argument for a
full rewrite. It may be easier to do a rewrite for a core when you have a
few full-time people, but that's a different story for a community effort
like numpy.

The main issue preventing new features in numpy is the lack of internal
architecture at the C level, but nothing that could not be done by
refactoring. Using cython to move away from the python C api would be
great, though we need to talk with the cython people so that we can share
common code between multiple extensions using cython, to avoid binary size
explosion.

There are things that may require some backward incompatible changes in the
C API, but that's much more acceptable than a significant break at the
python level.

David


On Wed, Jun 4, 2014 at 9:58 AM, Sebastian Berg sebast...@sipsolutions.net
wrote:

 On Mi, 2014-06-04 at 02:26 +0100, Nathaniel Smith wrote:
  On Wed, Jun 4, 2014 at 12:33 AM, Charles R Harris
  charlesr.har...@gmail.com wrote:
   On Tue, Jun 3, 2014 at 5:08 PM, Kyle Mandli kyle.man...@gmail.com
 wrote:
  
   Hello everyone,
  
   As one of the co-chairs in charge of organizing the birds-of-a-feather
   sessions at the SciPy conference this year, I wanted to solicit
 through the
   NumPy list to see if we could get enough interest to hold a NumPy
 centered
   BoF this year.  The BoF format would be up to those who would lead the
   discussion, a couple of ideas used in the past include picking out a
 few of
   the lead devs to be on a panel and have a QA type of session or an
 open QA
   with perhaps audience guided list of topics.  I can help facilitate
   organization of something but we would really like to get something
   organized this year (last year NumPy was the only major project that
 was not
   really represented in the BoF sessions).
  
   I'll be at the conference, but I don't know who else will be there. I
 feel
   that NumPy has matured to the point where most of the current work is
   cleaning stuff up, making it run faster, and fixing bugs. A topic that
 I'd
   like to see discussed is where do we go from here. One option to look
 at is
   Blaze, which looks to have matured a lot in the last year. The problem
 with
   making it a NumPy replacement is that NumPy has become quite
 widespread,
   with downloads from PyPi running at about 3 million per year. With
 that much
   penetration it may be difficult for a new core like Blaze to gain
 traction.
   So I'd like to also discuss ways to bring the two branches of
 development
   together at some point and explore what NumPy can do to pave the way.
 Mind,
   there are definitely things that would be nice to add to NumPy, a
 better
   type system, missing values, etc., but doing that is difficult given
 the
   current design.
 
  I won't be at the conference unfortunately (I'm on the wrong continent
  and have family commitments then anyway), but I think there's lots of
  exciting stuff that can be done in numpy-land.
 

 I would like to come, but to be honest have not planned to yet and it
 doesn't fit too well with the stuff I work on mostly right now. So will
 have to see.

 - Sebastian

  We absolutely could rewrite the dtype system, and this would
  straightforwardly give us excellent support for missing values, units,
  categorical data, automatic differentiation, better datetimes, etc.
  etc. -- and make numpy much more friendly in general to third-party
  extensions.
 
  I'd like to see the ufunc system revisited in the light of all the
  things we know now, to make gufuncs more first-class, provide better
  support for user-defined types, more flexible loop selection (e.g.
  make it possible to implement np.add.reduce(a, type=kahan)), etc.;
  one goal would be to convert a lot of ufunc-like functions (np.mean
  etc.) into being real ufuncs, and then they'd automatically benefit
  from __numpy_ufunc__, which would also massively improve
  interoperability with alternative array types like blaze.
 
  I'd like to see support for extensible label-based indexing, like pandas.
 
  Internally, I'd like to see internal migrating out of C and into
  Cython -- we have hundreds of lines of code that could be replaced
  with a few lines of Cython and no-one would notice. (Combining this
  with a cffi cython backend and pypy would be pretty interesting
  too...)
 
  I'd like to see sparse ndarrays, with integration into the ufunc
  looping machinery so all ufuncs just work. Or even better, I'd like to
  see the right hooks added so that anyone can write a sparse ndarray
  package using only public APIs, and have all ufuncs just work. (I was
  going to put down deferred/loop-fused/out-of-core computation as a
  wishlist item too, but if we do it right then this too could be
  implemented

Re: [Numpy-discussion] fftw supported?

2014-06-02 Thread David Cournapeau
FFTW is not used anymore in either numpy or scipy (and has not been for
several years). If you want to use fftw with numpy, there are 3rd-party
extensions to do it, like pyfftw.
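
For example, pyfftw ships a numpy.fft-compatible interface, so switching an
existing numpy-based script over to FFTW can look roughly like this (a
sketch; check the pyfftw documentation for the exact module layout of the
version you install):

import numpy as np
# Third-party package (pip install pyfftw) wrapping FFTW with a
# numpy.fft-compatible interface.
import pyfftw.interfaces.numpy_fft as fftw_fft

x = np.random.rand(1024)
X_ref = np.fft.fft(x)      # stock numpy FFT
X_fftw = fftw_fft.fft(x)   # same call, computed by FFTW

print(np.allclose(X_ref, X_fftw))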


On Mon, Jun 2, 2014 at 12:27 PM, Neal Becker ndbeck...@gmail.com wrote:

 I just d/l numpy-1.8.1 and try to build.  I uncomment:

 [fftw]
 libraries = fftw3

 This is fedora 20.  fftw3 (and devel) is installed as fftw.

 I see nothing written to stderr during the build that has any reference to
 fftw.

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fortran 90 Library and .mod files numpy.distutils

2014-05-30 Thread David Huard
Hi Onur,

Have you taken a look at https://github.com/numpy/numpy/issues/1350 ? Maybe
both issues are related.

Cheers,

David H.


On Fri, May 30, 2014 at 6:20 AM, Onur Solmaz onursol...@gmail.com wrote:

 Was this mail seen? I cannot be sure because it is the first time I posted.



 On Mon, May 26, 2014 at 2:48 PM, Onur Solmaz onursol...@gmail.com wrote:

 I am building a Fortran 90 library and its extension. .mod files get
 generated inside the build/temp.linux-x86_64-2.7/ directory, and stay
 there; so when building the extension, the compiler complains that it
 cannot find the modules.
 This is because the include paths do not contain the temp directory. I can
 work around this by adding that directory to the include paths for the
 extension, but this is not a clean solution.
 What is the best solution to this?

 I also want to be able to use the modules later, because I will
 distribute the library. It is a separate issue whether the modules should
 be distributed with the library under /usr/lib or /usr/include; on that,
 refer to this bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=49138

 Also, one can refer to this thread:
 https://gcc.gnu.org/ml/fortran/2011-06/msg00117.html
 This is what convinced me to distribute the modules, rather than putting
 module definitions into header files, which the user can include in their
 code to recreate the modules. Yet another way is to use submodules, but
 that feature is not available in Fortran 90.
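
For reference, the include-path workaround described above amounts to
something like the following in a setup.py; all names and the hard-coded
temp path are illustrative, and hard-coding that path is exactly why it is
not a clean solution:

# Hypothetical numpy.distutils setup.py sketch: point the extension's include
# path at the build temp directory where gfortran writes the .mod files.
from numpy.distutils.core import setup, Extension

ext = Extension(
    name="mypkg._solver",
    sources=["mypkg/solver_wrap.f90"],
    libraries=["mysolver"],
    library_dirs=["build/temp.linux-x86_64-2.7"],
    include_dirs=["build/temp.linux-x86_64-2.7"],  # where the .mod files end up
)

setup(name="mypkg", version="0.1", ext_modules=[ext])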



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
David Huard, PhD
Scientific advisor, Ouranos
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


  1   2   3   4   5   6   7   8   9   10   >