Re: [Numpy-discussion] Hook in __init__.py to let distributors patch numpy

2016-02-11 Thread Michael Sarahan
+1.  This seems nicer than patching __init__.py itself, in that it is much
more transparent.

Good idea.
Michael

On Thu, Feb 11, 2016 at 7:19 PM Matthew Brett 
wrote:

> Hi,
>
> Over at https://github.com/numpy/numpy/issues/5479 we're discussing
> Windows wheels.
>
> One thing that we would like, to be able to ship Windows wheels, is to
> be able to put some custom checks into numpy when we build the
> wheels.
>
> Specifically, for Windows, we're building on top of ATLAS BLAS /
> LAPACK, and we need to check that the system on which the wheel is
> running has SSE2 instructions; otherwise we know ATLAS will crash
> (almost everybody does have SSE2 these days).
>
> The way I propose we do that is with this patch:
>
> https://github.com/numpy/numpy/pull/7231
>
> diff --git a/numpy/__init__.py b/numpy/__init__.py
> index 0fcd509..ba3ba16 100644
> --- a/numpy/__init__.py
> +++ b/numpy/__init__.py
> @@ -190,6 +190,12 @@ def pkgload(*packages, **options):
>  test = testing.nosetester._numpy_tester().test
>  bench = testing.nosetester._numpy_tester().bench
>
> +# Allow platform-specific build to intervene in numpy init
> +try:
> +    from . import _distributor_init
> +except ImportError:
> +    pass
> +
>  from . import core
>  from .core import *
>  from . import compat
>
> So, numpy's __init__.py looks for a module `_distributor_init`, in which
> the distributor might have put custom code to do any checks and
> initialization needed for the particular platform.  We don't ship a
> `_distributor_init.py` by default, but leave it up to packagers to
> generate one when building binaries.
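>
> For illustration only (this is not part of the PR, and the exact check is
> up to the packager), a generated `_distributor_init.py` for these
> SSE2-checked Windows wheels might be a minimal sketch like:
>
> """Hypothetical distributor check, generated when building the wheel."""
> import ctypes
>
> PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10  # Windows feature flag for SSE2
>
> if not ctypes.windll.kernel32.IsProcessorFeaturePresent(
>         PF_XMMI64_INSTRUCTIONS_AVAILABLE):
>     raise RuntimeError("This numpy build requires SSE2; without it the "
>                        "bundled ATLAS BLAS / LAPACK will crash.")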
>
> Does that sound like a sensible approach to y'all?
>
> Cheers,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.11.0b2 released

2016-02-06 Thread Michael Sarahan
Robert,

Thanks for pointing out auditwheel.  We're experimenting with a GCC 5.2
toolchain, and this tool will be invaluable.

Chris,

Both conda-build-all and obvious-ci are excellent projects, and we'll
leverage them where we can (particularly conda-build-all).  Obvious CI and
conda-smithy are in a slightly different space, as we want to use our own
anaconda.org build service, rather than write scripts to run on other CI
services.  With more control, we can do cool things like splitting up build
jobs and further parallelizing them on more workers, which I see as very
important if we're going to be building downstream stuff.  As I see it, the
single, massive recipe repo that is conda-recipes has been a disadvantage
for a while in terms of complexity, but now may be an advantage in terms of
building downstream packages (how else would dependencies get resolved?).  It
remains to be seen whether git submodules might replace individual folders
in conda-recipes - I think this might give project maintainers more direct
control over their packages.

The goal, much like ObviousCI, is to enable project maintainers to get
their latest releases available in conda sooner, and to simplify the whole
CI setup process.  We hope we can help each other rather than compete.

Best,
Michael

On Sat, Feb 6, 2016 at 5:53 PM Chris Barker  wrote:

> On Sat, Feb 6, 2016 at 3:42 PM, Michael Sarahan 
> wrote:
>
>> FWIW, we (Continuum) are working on a CI system that builds conda
>> recipes.
>>
>
> great, could be handy. I hope you've looked at the open-source systems
> that do this: obvious-ci and conda-build-all. And conda-smithy to help set
> it all up.
>
> Chris, it may still be useful to use docker here (perhaps on the build
>> worker, or elsewhere), also, as the distinction between build machines and
>> user machines is important to make.  Docker would be great for making sure
>> that all dependency requirements are met on end-user systems
>>
>
> yes -- very handy. I have certainly accidentally brought in other system
> libs in a build.
>
> Too bad it's Linux only. Though very useful for manylinux.
>
>
> -Chris
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.11.0b2 released

2016-02-06 Thread Michael Sarahan
FWIW, we (Continuum) are working on a CI system that builds conda recipes.
Part of this is testing not only individual packages that change, but also
any downstream packages that are also in the repository of recipes.  The
configuration for this is in
https://github.com/conda/conda-recipes/blob/master/.binstar.yml and the
project doing the dependency detection is in
https://github.com/ContinuumIO/ProtoCI/

This is still being established (particularly provisioning build workers),
but please talk with us if you're interested.

Chris, it may still be useful to use docker here (perhaps on the build
worker, or elsewhere), also, as the distinction between build machines and
user machines is important to make.  Docker would be great for making sure
that all dependency requirements are met on end-user systems (we've had a
few recent issues with libgfortran accidentally missing as a requirement of
scipy).
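
As a rough sketch of what that could look like (the image name, local
channel path, and package are assumptions, not our actual setup):

docker run --rm -v "$PWD/artifacts:/artifacts" continuumio/miniconda \
    bash -c "conda install -y -c file:///artifacts scipy && \
             python -c 'import scipy.linalg'"

Installing the freshly built package into a bare image like this makes any
missing runtime dependency (e.g. libgfortran) surface immediately.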

Best,
Michael

On Sat, Feb 6, 2016 at 5:22 PM Chris Barker  wrote:

> On Fri, Feb 5, 2016 at 3:24 PM, Nathaniel Smith  wrote:
>
>> On Fri, Feb 5, 2016 at 1:16 PM, Chris Barker 
>> wrote:
>>
>
>
> >> >> > If we set up a numpy-testing conda channel, it could be used to cache
> >> >> > binary builds for all the versions of everything we want to test
> >> >> > against.
> >>
> >> > Anaconda doesn't always have the
> >> > latest builds of everything.
>
>
> OK, this may be more or less helpful, depending on what we want to build
> against. But a conda environment (maybe tied to a custom channel) really
> does make a nice contained space for testing that can be set up fast on a
> CI server.
>
> If whoever is setting up a test system/matrix thinks this would be useful,
> I'd be glad to help set it up.
>
> -Chris
>
>
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> chris.bar...@noaa.gov
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread Michael Sarahan
When you say find/use, can you please clarify whether you have completed
the compilation/linking successfully?  I'm not clear on exactly when you're
having problems.  What is the error output?

One very helpful tool in diagnosing dll problems is dependency walker:
http://www.dependencywalker.com/

It may be that your openblas has a dependency that it can't load for some
reason.  Dependency walker works on .pyd files as well as .dll files.

Hth,
Michael

On Wed, Jan 27, 2016, 07:40 G Young  wrote:

> I do have my site.cfg file pointing to my library which contains a .lib
> file along with the appropriate include_dirs parameter.  However, NumPy
> can't seem to find / use the DLL file no matter where I put it (numpy/core,
> same directory as openblas.lib).  By the way, I should mention that I am
> using a slightly dated version of OpenBLAS (0.2.9), but that shouldn't have
> any effect I would imagine.
>
> Greg
>
> On Wed, Jan 27, 2016 at 1:14 PM, Michael Sarahan 
> wrote:
>
>> I'm not sure about the mingw toolchain, but usually on Windows at link
>> time you need a .lib file, called the import library.  The .dll is used at
>> runtime, not at link time.  This is different from *nix, where the .so
>> serves both purposes.  The link you posted mentions import files, so I hope
>> this is helpful information.
>>
>> Best,
>> Michael
>>
>> On Wed, Jan 27, 2016, 03:39 G Young  wrote:
>>
>>> Hello all,
>>>
>>> I'm trying to update the documentation for building Numpy from source,
>>> and I've hit a brick wall in trying to build the library using OpenBLAS
>>> because I can't seem to link the libopenblas.dll file.  I tried following
>>> the suggestion of placing the DLL in numpy/core as suggested here
>>> <https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but
>>> it still doesn't pick it up.  What am I doing wrong?
>>>
>>> Thanks,
>>>
>>> Greg
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread Michael Sarahan
I'm not sure about the mingw toolchain, but usually on Windows at link
time you need a .lib file, called the import library.  The .dll is used at
runtime, not at link time.  This is different from *nix, where the .so
serves both purposes.  The link you posted mentions import files, so I hope
this is helpful information.
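
For reference, a minimal site.cfg along those lines might look like this
(the paths are placeholders for wherever your OpenBLAS import library and
headers actually live):

[openblas]
libraries = openblas
library_dirs = C:\opt\openblas\lib
include_dirs = C:\opt\openblas\include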

Best,
Michael

On Wed, Jan 27, 2016, 03:39 G Young  wrote:

> Hello all,
>
> I'm trying to update the documentation for building Numpy from source, and
> I've hit a brick wall in trying to build the library using OpenBLAS because
> I can't seem to link the libopenblas.dll file.  I tried following the
> suggestion of placing the DLL in numpy/core as suggested here
> <https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but it
> still doesn't pick it up.  What am I doing wrong?
>
> Thanks,
>
> Greg
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Appveyor Testing Changes

2016-01-25 Thread Michael Sarahan
Conda can generally install older versions of python in environments:

conda create -n myenv python=3.4

You really don't need any particular initial version of python/conda in
order to do this.  You do, however, need to activate the new environment to
use it:

activate myenv

(For windows, you do not need "source")
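
Putting it together, a throwaway numpy test environment might look like
this (the environment name and package list are just examples):

conda create -n py34-test python=3.4 numpy nose
activate py34-test
python -c "import numpy; numpy.test()"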

Hth,
Michael

On Mon, Jan 25, 2016, 17:21 G Young  wrote:

> With regards to testing numpy, both Conda and Pip + Virtualenv work quite
> well.  I have used both to install master and run unit tests, and both pass
> with flying colors.  This chart here
> 
>  illustrates
> my point nicely as well.
>
> However, I can't seem to find / access Conda installations for slightly
> older versions of Python (e.g. Python 3.4).  Perhaps this is not much of an
> issue now with the next release (1.12) being written only for Python 2.7
> and Python 3.4 - 5.  However, if we were to wind the clock back slightly to
> when we were testing 2.6 - 7 and 3.2 - 5, I feel Conda would fall short in
> being able to test on a variety of Python distributions, given the nature
> of Conda releases.  Maybe that situation is no longer the case now, but in
> the long term it could easily happen again.
>
> Greg
>
>
> On Mon, Jan 25, 2016 at 10:50 PM, Nathaniel Smith  wrote:
>
>> On Mon, Jan 25, 2016 at 2:37 PM, G Young  wrote:
>> > Hello all,
>> >
> >> > I currently have a branch on my fork (not a PR) where I am experimenting
> >> > with running Appveyor CI via Virtualenv instead of Conda.  I have a build
> >> > running here.  What do people think of using Virtualenv (as we do on
> >> > Travis) instead of Conda for testing?
>>
>> Can you summarize the advantages and disadvantages that you're aware of?
>>
>> -n
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performance solving system of equations in numpy and MATLAB

2015-12-16 Thread Michael Sarahan
Continuum provides MKL free now - you just need to have a free anaconda.org
account to get the license: http://docs.continuum.io/mkl-optimizations/index
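
Once numpy is installed, you can check which BLAS/LAPACK it was built
against with np.__config__.show() (the exact output varies by build, but an
MKL-linked numpy mentions mkl in the blas_opt_info / lapack_opt_info
sections):

In [1]: import numpy as np

In [2]: np.__config__.show()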

HTH,
Michael

On Wed, Dec 16, 2015 at 12:35 PM Edison Gustavo Muenz <
edisongust...@gmail.com> wrote:

> Sometime ago I saw this: https://software.intel.com/sites/campaigns/nest/
>
> I don't know if the "community" license applies in your case though. It is
> worth taking a look at.
>
> On Wed, Dec 16, 2015 at 4:30 PM, Francesc Alted  wrote:
>
>> Sorry, I have to correct myself, as per:
>> http://docs.continuum.io/mkl-optimizations/index it seems that Anaconda
>> is not linking with MKL by default (I thought that was the case before?).
>> After installing MKL (conda install mkl), I am getting:
>>
>> In [1]: import numpy as np
>> Vendor:  Continuum Analytics, Inc.
>> Package: mkl
>> Message: trial mode expires in 30 days
>>
>> In [2]: testA = np.random.randn(15000, 15000)
>>
>> In [3]: testb = np.random.randn(15000)
>>
>> In [4]: %time testx = np.linalg.solve(testA, testb)
>> CPU times: user 1min, sys: 468 ms, total: 1min 1s
>> Wall time: 15.3 s
>>
>>
> >> so, it looks like you will need to buy an MKL license separately (which
> >> makes sense for a commercial product).
>>
>> Sorry for the confusion.
>> Francesc
>>
>>
>> 2015-12-16 18:59 GMT+01:00 Francesc Alted :
>>
>>> Hi,
>>>
>>> Probably MATLAB is shipping with Intel MKL enabled, which probably is
>>> the fastest LAPACK implementation out there.  NumPy supports linking with
>>> MKL, and actually Anaconda does that by default, so switching to Anaconda
>>> would be a good option for you.
>>>
>>> Here you have what I am getting with Anaconda's NumPy and a machine with
>>> 8 cores:
>>>
>>> In [1]: import numpy as np
>>>
>>> In [2]: testA = np.random.randn(15000, 15000)
>>>
>>> In [3]: testb = np.random.randn(15000)
>>>
>>> In [4]: %time testx = np.linalg.solve(testA, testb)
>>> CPU times: user 5min 36s, sys: 4.94 s, total: 5min 41s
>>> Wall time: 46.1 s
>>>
>>> This is not 20 sec, but it is not 3 min either (but of course that
>>> depends on your machine).
>>>
>>> Francesc
>>>
>>> 2015-12-16 18:34 GMT+01:00 Edward Richards :
>>>
 I recently did a conceptual experiment to estimate the computational
 time required to solve an exact expression in contrast to an approximate
 solution (Helmholtz vs. Helmholtz-Kirchhoff integrals). The exact solution
 requires a matrix inversion, and in my case the matrix would contain ~15000
 rows.

 On my machine MATLAB seems to perform this matrix inversion with random
 matrices about 9x faster (20 sec vs 3 mins). I thought the performance
 would be roughly the same because I presume both rely on the same
 LAPACK solvers.

 I will not actually need to solve this problem (even at 20 sec it is
 prohibitive for broadband simulation), but if I needed to I would
 reluctantly choose MATLAB.  I am simply wondering why there is this
 performance gap, and if there is a better way to solve this problem in
 numpy?

 Thank you,

 Ned

 #Python version

 import numpy as np

 testA = np.random.randn(15000, 15000)

 testb = np.random.randn(15000)

 %time testx = np.linalg.solve(testA, testb)

 %MATLAB version

 testA = randn(15000);

 testb = randn(15000, 1);
 tic(); testx = testA \ testb; toc();

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 https://mail.scipy.org/mailman/listinfo/numpy-discussion


>>>
>>>
>>> --
>>> Francesc Alted
>>>
>>
>>
>>
>> --
>> Francesc Alted
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-Dev] Setting up a dev environment with conda

2015-10-18 Thread Michael Sarahan
Running tests in the folder might be causing your problem.  If it's trying
to import numpy, and numpy is a folder in your current folder, sometimes
you see errors like this.  The confusion is that Python treats folders
(packages) similarly to modules, and the resolution order sometimes bites
you.  Try cd'ing to a different folder (importantly, one NOT containing a
numpy folder!) and run the test command from there.
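
For example (any scratch directory works; the path is just an example):

cd /tmp
python -c "import numpy; print(numpy.__file__); numpy.test()"

If the printed path points back into your source checkout rather than into
site-packages, the checkout is shadowing the installed package.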

HTH,
Michael

On Sun, Oct 18, 2015 at 6:46 PM Luke Zoltan Kelley 
wrote:

> Thanks Yu,
>
> There was nothing in my PYTHONPATH at first, and adding my numpy directory
> ('/Users/lzkelley/Programs/public/numpy') didn't help (same error).  In
> both cases, adding 'print(np)' yields:
>
> <module 'numpy' from '/Users/lzkelley/Programs/public/numpy/numpy/__init__.pyc'>
>
>
> > On Oct 18, 2015, at 7:22 PM, Feng Yu  wrote:
> >
> > Hi Luke,
> >
> > Could you check if you have "/Users/lzkelley/Programs/public/numpy/ in
> > your PYTHONPATH?
> >
> > I would also suggest you add a print(np) line before the crash in
> > nosetester.py. I got something like this (which didn't crash):
> >
> > <module 'numpy' from
> > '/home/yfeng1/source/numpy/build/testenv/lib64/python2.7/site-packages/numpy/__init__.pyc'>
> >
> > If you see something not starting with 'numpy/build', then it is again
> > pointing at  PYTHONPATH.
> >
> > I hope these helps.
> >
> > Best,
> >
> > - Yu
> >
> > On Sun, Oct 18, 2015 at 1:25 PM, Luke Zoltan Kelley 
> wrote:
> >> Thanks for the help Nathaniel --- but building via `./runtests.py` is
> >> failing in the same way.  Hopefully Numpy-discussion can help me out.
> >>
> >> I'm able to build using `python setup.py build_ext --inplace` but both
> >> trying to run `python setup.py install` or `./runtests.py` leads to the
> >> following error:
> >>
> >> (numpy-py27)daedalus-2:numpy lzkelley$ ./runtests.py
> >> Building, see build.log...
> >> Running from numpy source directory.
> >> Traceback (most recent call last):
> >>   File "setup.py", line 264, in <module>
> >>     setup_package()
> >>   File "setup.py", line 248, in setup_package
> >>     from numpy.distutils.core import setup
> >>   File "/Users/lzkelley/Programs/public/numpy/numpy/distutils/__init__.py", line 21, in <module>
> >>     from numpy.testing import Tester
> >>   File "/Users/lzkelley/Programs/public/numpy/numpy/testing/__init__.py", line 14, in <module>
> >>     from .utils import *
> >>   File "/Users/lzkelley/Programs/public/numpy/numpy/testing/utils.py", line 17, in <module>
> >>     from numpy.core import float32, empty, arange, array_repr, ndarray
> >>   File "/Users/lzkelley/Programs/public/numpy/numpy/core/__init__.py", line 59, in <module>
> >>     test = Tester().test
> >>   File "/Users/lzkelley/Programs/public/numpy/numpy/testing/nosetester.py", line 180, in __init__
> >>     if raise_warnings is None and '.dev0' in np.__version__:
> >> AttributeError: 'module' object has no attribute '__version__'
> >>
> >> Build failed!
> >>
> >>
> >> Has anyone seen something like this before?
> >>
> >> Thanks!
> >> Luke
> >>
> >> ___
> >> NumPy-Discussion mailing list
> >> NumPy-Discussion@scipy.org
> >> https://mail.scipy.org/mailman/listinfo/numpy-discussion
> >>
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] At.: use less RAM memory and increase execution speed

2013-09-26 Thread Michael Sarahan
xrange should be more memory efficient than range:
http://stackoverflow.com/questions/135041/should-you-always-favor-xrange-over-range
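
For example, on Python 2 (where this advice applies):

total = 0
for i in xrange(10**7):  # lazy: yields one int at a time, constant memory
    total += i
# range(10**7) would first materialize a ~10-million-element list in RAM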

Replacing arrays with lists is probably a bad idea for a lot of reasons.
 You'll lose nice vectorization of simple operations, and all of numpy's
other benefits.  To be more parsimonious with memory, you probably want to
pay close attention to array data types and make sure that things aren't
being automatically upconverted to higher precision data types.
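
A small sketch of the kind of silent upconversion to watch for:

import numpy as np

a = np.zeros(1000000, dtype=np.float32)  # ~4 MB
b = np.random.randn(1000000)             # float64 by default, ~8 MB
c = a + b                     # silently upconverted: c is float64
d = a + b.astype(np.float32)  # stays float32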

I've never considered using reset.  Perhaps you need to take a look at your
program's structure and make sure that useless arrays can be garbage
collected properly.

Preallocation of arrays can give you tons of benefits with regard to memory
use and program speed.  If you aren't using preallocation, now's a great
time to start.
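
A minimal sketch (compute_step is a stand-in for whatever one step of your
simulation actually computes):

import numpy as np

n_steps = 100000
results = np.empty(n_steps)       # allocate once up front
for i in xrange(n_steps):
    results[i] = compute_step(i)  # appending to an array here would copy
                                  # the whole thing on every iteration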

You can pass numpy arrays into Cython functions, and you can also call
numpy/scipy functions within Cython functions.  Identify your bottlenecks
using some kind of profiling, then work on optimizing those with Cython.
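
For example, to find the bottlenecks first (run_simulation is a placeholder
for your program's entry point):

import cProfile

# The top cumulative-time entries are the candidates to move to Cython.
cProfile.run("run_simulation()", sort="cumulative")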

HTH,
Mike


On Thu, Sep 26, 2013 at 11:19 AM, José Luis Mietta <
joseluismie...@yahoo.com.ar> wrote:

> Hi experts!
>
>  I want to use less RAM in my Monte Carlo simulations. In my
> algorithm I use numpy arrays and the xrange() function.
> I hear that I can reduce the RAM my algorithm uses if I do the following:
>
> 1) replace xrange() with range().
> 2) replace numpy arrays with python lists.
> 3) use the reset() function to delete useless arrays.
> Is that true?
>
> In addition, I want to increase the execution speed of my code (I use numpy
> and SciPy functions). How can I apply Cython? Will it help?
>
> Please help.
>
> Thanks a lot!!
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion