Re: [Numpy-discussion] [JOB ANNOUNCEMENT] Software Developer permanent position available at ESRF, France

2014-02-20 Thread V. Armando Solé

  
  
Sorry, the link I sent you is in French. This is the English version.

EUROPEAN SYNCHROTRON RADIATION FACILITY
INSTALLATION EUROPEENNE DE RAYONNEMENT SYNCHROTRON

The ESRF is a multinational research institute, situated in Grenoble, France
and financed by 20 countries, mostly European. It operates a powerful
synchrotron X-ray source with some 30 beamlines (instruments) covering a wide
range of scientific research in fields such as biology and medicine,
chemistry, earth and environmental sciences, materials and surface science,
and physics. The ESRF employs about 600 staff and is organized as a French
société civile.

Within the Instrumentation Services and Development Division, the Software
Group is now seeking to recruit a:

Software Developer (m/f)
permanent contract

THE FUNCTION

The ESRF is in the process of a major upgrade of the accelerator source and
of several beamlines. In particular, the Upgrade Programme has created a
heavy demand for data visualisation and analysis due to the massive data flow
coming from the new detectors. The next generation of experiments will rely
on both advanced parallelised algorithms for data analysis and
high-performance tools for data visualization.

You will join the Data Analysis Unit in the Software Group of the ISDD and
will develop software for data analysis and visualization. You will be
expected to:

- develop and maintain software and graphical user interfaces for
  visualizing scientific data
- help develop a long-term strategy for the visualization and analysis of
  data (online and offline)
- contribute to the general effort of adapting existing software and
  developing new solutions for data analysis

You will need to be able to understand data analysis requirements and propose
working solutions.

QUALIFICATIONS AND EXPERIENCE

The candidate should have a higher university degree (Master, MSc, DESS,
Diplom, Diploma, Ingeniera Superior, Licenciatura, Laurea or equivalent) in
Computer Science, Mathematics, Physics, Chemistry, Bioinformatics,
Engineering or related areas. Applicants must have at least 3 years of
experience in scientific programming in the fields of data analysis and
visualisation.

The candidate must have good knowledge of OpenGL and the OpenGL Shading
Language or similar visualisation libraries. Experience in data analysis,
especially of large datasets, is highly desirable, particularly using e.g.
OpenCL or CUDA. Knowledge of one high-level programming language (Python,
Matlab, ...), a high-level graphics library (VTK, ...) and one low-level
language (C, C++, ...) will be considered assets, in addition to competence
in using development tools for compilation, distribution and code management.
Proven contributions to open source projects will also be appreciated.

The successful candidate should be able to work independently as well as in
multidisciplinary teams. Good English communication and presentation skills
are required.

Further information on the post can be obtained from Andy Götz
(andy.g...@esrf.fr) and/or Claudio Ferrero (ferr...@esrf.fr).

Ref. 8173 - Deadline for returning application forms: 01/04/2014

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Header files in windows installers

2014-02-20 Thread Ralf Gommers
On Thu, Feb 20, 2014 at 6:33 PM, Matt Newell  wrote:

>
> I have a small c++ extension used to feed a 1d numpy array into a
> QPainterPath.  Very simple just using PyArray_[Check|FLAGS|SIZE|DATA].
> I developed it on debian which was of course very straightforward, but now
> I
> need to deploy on windows, which is of course where the fun always begins.
>
> I was unpleasantly surprised to discover that the numpy installers for
> Windows
> do not have an option to install the include files, which appear to be all
> that
> is needed to compile extensions that use numpy's C-api.
>

That would be a bug. They can't all be missing though, because I'm able to
compile scipy against numpy installed with those installers without
problems.

Could you open an issue on Github and give details on which headers are
missing where?

Ralf
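[Editor's note: when the headers do ship with an install, their location can be found programmatically, which is handy for build scripts; `numpy.get_include` is the documented API for this.]

```python
import numpy as np

# Returns the directory to pass to the compiler (-I) so that
# #include <numpy/arrayobject.h> resolves -- provided the headers
# were installed alongside the package.
print(np.get_include())
```

A setup.py would typically pass this as `include_dirs=[np.get_include()]` when building an extension against the C-API.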


> I have actually managed to copy my set of include files from linux and
> with a
> few modifications to _numpyconfig.h got my extension compiled and working.
>
> Is there any chance that the includes could be included in future releases?
>
> Thanks,
> Matt Newell


[Numpy-discussion] Automatic issue triage.

2014-02-20 Thread Charles R Harris
After 6 days of trudging through the numpy issues and finally passing the
half way point, I'm wondering if we can set up so that new defects get a
small test that can be parsed out and run periodically to mark issues that
might be fixed. I expect it can be done, but might be more trouble than it
is worth to keep working.

Thoughts?

Chuck


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Carl Kleffner
Hi,

2014-02-20 23:17 GMT+01:00 Olivier Grisel :

> I had a quick look (without running the procedure) but I don't
> understand some elements:
>
> - apparently you never tell in the numpy's site.cfg nor the scipy.cfg
> to use the openblas lib nor set the
> library_dirs: how does numpy.distutils know that it should dynlink
> against numpy/core/libopenblas.dll
>

numpy's site.cfg is something like: (64 bit)

[openblas]
libraries = openblas
library_dirs = D:/devel/mingw64static/x86_64-w64-mingw32/lib
include_dirs = D:/devel/mingw64static/x86_64-w64-mingw32/include

or (32 bit)

[openblas]
libraries = openblas
library_dirs = D:/devel32/mingw32static/i686-w64-mingw32/lib
include_dirs = D:/devel32/mingw32static/i686-w64-mingw32/include

Please adapt the paths of course and apply the patches to numpy.



>
> - how do you deal with the link to the following libraries:
> libgfortran.3.dll
> libgcc_s.1.dll
> libquadmath.0.dll
>
You won't need them. I build the toolchain statically, so you don't have
to mess with the GCC runtime libs. You can check the dependencies with MS
depends or with ntldd (included in the toolchain).


> If MinGW is installed on the system I assume that the linker will find
> them. But would it work when the wheel packages are installed on a
> system that does not have MinGW installed?
>

The wheels should be sufficient regardless of whether you have MinGW
installed or not.

With best regards,

Carl


Re: [Numpy-discussion] [cython-users] Re: avoiding numpy temporaries via refcount

2014-02-20 Thread jtaylor . debian


On Wednesday, February 19, 2014 9:10:03 AM UTC+1, Stefan Behnel wrote:
>
> >> Nathaniel Smith wrote: 
> >>> Does anyone see any issue we might be overlooking in this refcount == 
> 1 
> >>> optimization for the python api? I'll post a PR with the change 
> shortly. 
> >>> 
> >>> It occurs belatedly that Cython code like
> >>>   a = np.arange(10)
> >>>   b = np.arange(10)
> >>>   c = a + b
> >>> might end up calling tp_add with refcnt 1 arrays. Ditto for
> >>> the same with cdef np.ndarray or cdef object added. We should check...
>
> That can happen, yes. Cython only guarantees that the object it passes is 
> safely owned so that the reference cannot go away while it's being 
> processed by a function. If it's in a local (non-closure) variable (or 
> Cython temporary variable), that guarantee holds, so it's safe to pass 
> objects with only a single reference into a C function, and Cython will do 
> that. 
>
> Stefan 
>


That's unfortunate; it would be a quite significant improvement to numpy
(~30% improvement to all operations involving temporaries).
Is increasing the reference count before going into the PyNumber functions
really so expensive that it's worth avoiding?
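[Editor's note: for illustration, the refcount condition under discussion can be observed from Python; note that `sys.getrefcount` itself holds one extra temporary reference to its argument, so the numbers it reports are one higher than the "true" count.]

```python
import sys
import numpy as np

a = np.arange(10)
b = np.arange(10)

# A plain bound name typically reports 2: the binding itself plus the
# temporary reference held by getrefcount's own argument.
print(sys.getrefcount(a))

# In the expression below, the temporary (a + b) reaches the second
# addition with a single owner; that is the case a refcount == 1
# optimization could elide by reusing the temporary's buffer in place.
c = (a + b) + b
```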


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
I had a quick look (without running the procedure) but I don't
understand some elements:

- apparently you never tell in the numpy's site.cfg nor the scipy.cfg
to use the openblas lib nor set the
library_dirs: how does numpy.distutils know that it should dynlink
against numpy/core/libopenblas.dll

- how do you deal with the link to the following libraries:
libgfortran.3.dll
libgcc_s.1.dll
libquadmath.0.dll

If MinGW is installed on the system I assume that the linker will find
them. But would it work when the wheel packages are installed on a
system that does not have MinGW installed?

Best,

-- 
Olivier


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Nathaniel Smith
On Thu, Feb 20, 2014 at 1:35 PM, Stefan Otte  wrote:
> Hey guys,
>
> I quickly hacked together a prototype of the optimization step:
> https://github.com/sotte/numpy_mdot
>
> I think there is still room for improvements so feedback is welcome :)
> I'll probably have some time to code on the weekend.
>
> @Nathaniel, I'm still not sure about integrating it in dot. Don't a
> lot of people use the optional out parameter of dot?

The email you're replying to below about deprecating stuff in 'dot'
was in reply to Eric's email about using dot on arrays with shape (k,
n, n), so those comments are unrelated to the mdot stuff.

I wouldn't mind seeing out= arguments become kw-only in general, but
even if we decided to do that it would take a long deprecation period,
so yeah, let's give up on 'dot(A, B, C, D)' as syntax for mdot.

However, the suggestion of supporting np.dot([A, B, C, D]) still seems
like it might be a good idea...? I have mixed feelings about it -- one
less item cluttering up the namespace, but it is weird and magical to
have two totally different calling conventions for the same function.

-n
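[Editor's note: a minimal sketch of what such a list-based calling convention could look like; this is a hypothetical wrapper for illustration, not an actual or proposed NumPy implementation.]

```python
from functools import reduce

import numpy as np

def dot(*args):
    # Hypothetical convention: a single list/tuple argument means
    # "chain-multiply all of these"; the two-argument form behaves
    # exactly like np.dot today.
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        return reduce(np.dot, args[0])
    return np.dot(*args)

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.random.rand(5, 2)
```

The "weird and magical" part Nathaniel mentions is visible here: the same name dispatches on argument *shape-of-call* rather than on types.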

> On Thu, Feb 20, 2014 at 4:02 PM, Nathaniel Smith  wrote:
>> If you send a patch that deprecates dot's current behaviour for ndim>2,
>> we'll probably merge it. (We'd like it to function like you suggest, for
>> consistency with other gufuncs. But to get there we have to deprecate the
>> current behaviour first.)
>>
>> While I'm wishing for things I'll also mention that it would be really neat
>> if binary gufuncs would have a .outer method like regular ufuncs do, so
>> anyone currently using ndim>2 dot could just switch to that. But that's a
>> lot more work than just deprecating something :-).
>>
>> -n
>>
>> On 20 Feb 2014 09:27, "Eric Moore"  wrote:
>>>
>>>
>>>
>>> On Thursday, February 20, 2014, Eelco Hoogendoorn
>>>  wrote:

 If the standard semantics are not affected, and the most common
 two-argument scenario does not take more than a single if-statement
 overhead, I don't see why it couldn't be a replacement for the existing
 np.dot; but others mileage may vary.


 On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte 
 wrote:
>
> Hey,
>
> so I propose the following.  I'll implement a new function `mdot`.
> Incorporating the changes in `dot` are unlikely. Later, one can still
> include
> the features in `dot` if desired.
>
> `mdot` will have a default parameter `optimize`.  If `optimize==True`
> the
> reordering of the multiplication is done.  Otherwise it simply chains
> the
> multiplications.
>
> I'll test and benchmark my implementation and create a pull request.
>
> Cheers,
>  Stefan


>>> Another consideration here is that we need a better way to work with
>>> stacked matrices such as np.linalg handles now.  Ie I want to compute the
>>> matrix product of two (k, n, n) arrays producing a (k,n,n) result.  Near as
>>> I can tell there isn't a way to do this right now that doesn't involve an
>>> explicit loop. Since dot will return a (k, n, k, n) result. Yes this output
>>> contains what I want but it also computes a lot of things that I don't want
>>> too.
>>>
>>> It would also be nice to be able to do a matrix product reduction, (k, n,
>>> n) -> (n, n) in a single line too.
>>>
>>> Eric
>>>
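[Editor's note: the stacked product Eric describes is expressible today without an explicit Python loop via np.einsum, and the product reduction via functools.reduce; a sketch for illustration.]

```python
from functools import reduce

import numpy as np

k, n = 4, 3
a = np.random.rand(k, n, n)
b = np.random.rand(k, n, n)

# Stacked matrix product: pairwise (n, n) products along the first
# axis, giving (k, n, n) -- avoiding dot's (k, n, k, n) blow-up.
stacked = np.einsum('kij,kjl->kil', a, b)

# Matrix product reduction: (k, n, n) -> (n, n), multiplying the k
# matrices together in order.
reduced = reduce(np.dot, a)
```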



-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] [JOB ANNOUNCEMENT] Software Developer permanent position available at ESRF, France

2014-02-20 Thread V. Armando Sole
Sorry, the link was in French ...

The English version:

http://esrf.profilsearch.com/recrute/fo_form_cand.php?_lang=en&id=300

Best regards,

Armando


On 20.02.2014 18:21, V. Armando Sole wrote:
> Dear colleagues,
>
> The ESRF is looking for a Software Developer:
>
> http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=300
>
> Our ideal candidate would be experienced on OpenGL, OpenCL and 
> Python.
>
> Best regards,
>
> Armando



Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Stefan Otte
Hey guys,

I quickly hacked together a prototype of the optimization step:
https://github.com/sotte/numpy_mdot

I think there is still room for improvements so feedback is welcome :)
I'll probably have some time to code on the weekend.

@Nathaniel, I'm still not sure about integrating it in dot. Don't a
lot of people use the optional out parameter of dot?


Best,
 Stefan
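[Editor's note: the reordering step in the prototype is the classic matrix-chain-ordering problem; a minimal dynamic-programming sketch for illustration, with hypothetical helper names rather than the prototype's actual code.]

```python
import numpy as np

def _chain_order(dims):
    # dims has length n+1; matrix i has shape (dims[i], dims[i+1]).
    # Classic O(n^3) DP over subchains, recording the best split point.
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):
                q = cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if q < cost[i][j]:
                    cost[i][j] = q
                    split[i][j] = k
    return split

def mdot(*arrays):
    # Multiply a chain of 2-D arrays in the cheapest order.
    dims = [a.shape[0] for a in arrays] + [arrays[-1].shape[1]]
    split = _chain_order(dims)

    def mul(i, j):
        if i == j:
            return arrays[i]
        k = split[i][j]
        return np.dot(mul(i, k), mul(k + 1, j))

    return mul(0, len(arrays) - 1)
```

With shapes like (10, 100) x (100, 5) x (5, 50), evaluating (A B) C costs 7,500 scalar multiplications versus 75,000 for A (B C), which is the payoff the `optimize` flag is after.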



On Thu, Feb 20, 2014 at 4:02 PM, Nathaniel Smith  wrote:
> If you send a patch that deprecates dot's current behaviour for ndim>2,
> we'll probably merge it. (We'd like it to function like you suggest, for
> consistency with other gufuncs. But to get there we have to deprecate the
> current behaviour first.)
>
> While I'm wishing for things I'll also mention that it would be really neat
> if binary gufuncs would have a .outer method like regular ufuncs do, so
> anyone currently using ndim>2 dot could just switch to that. But that's a
> lot more work than just deprecating something :-).
>
> -n
>
> On 20 Feb 2014 09:27, "Eric Moore"  wrote:
>>
>>
>>
>> On Thursday, February 20, 2014, Eelco Hoogendoorn
>>  wrote:
>>>
>>> If the standard semantics are not affected, and the most common
>>> two-argument scenario does not take more than a single if-statement
>>> overhead, I don't see why it couldn't be a replacement for the existing
>>> np.dot; but others mileage may vary.
>>>
>>>
>>> On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte 
>>> wrote:

 Hey,

 so I propose the following.  I'll implement a new function `mdot`.
 Incorporating the changes in `dot` are unlikely. Later, one can still
 include
 the features in `dot` if desired.

 `mdot` will have a default parameter `optimize`.  If `optimize==True`
 the
 reordering of the multiplication is done.  Otherwise it simply chains
 the
 multiplications.

 I'll test and benchmark my implementation and create a pull request.

 Cheers,
  Stefan
>>>
>>>
>> Another consideration here is that we need a better way to work with
>> stacked matrices such as np.linalg handles now.  Ie I want to compute the
>> matrix product of two (k, n, n) arrays producing a (k,n,n) result.  Near as
>> I can tell there isn't a way to do this right now that doesn't involve an
>> explicit loop. Since dot will return a (k, n, k, n) result. Yes this output
>> contains what I want but it also computes a lot of things that I don't want
>> too.
>>
>> It would also be nice to be able to do a matrix product reduction, (k, n,
>> n) -> (n, n) in a single line too.
>>
>> Eric
>>


Re: [Numpy-discussion] svd error checking vs. speed

2014-02-20 Thread alex
On Mon, Feb 17, 2014 at 1:24 PM, Sturla Molden wrote:
> Sturla Molden wrote:
>> Dave Hirschfeld wrote:
>>
>>> Even if lapack_lite always performed the isfinite check and threw a python
>>> error if False, it would be much better than either hanging or segfaulting 
>>> and
>>> people who care about the isfinite cost probably would be linking to a fast
>>> lapack anyway.
>>
>> +1 (if I have a vote)
>>
>> Correctness is always more important than speed. Segfaulting or hanging
>> while burning the CPU is not something we should allow "by design". And
>> those who need speed should in any case use a different lapack library
>> instead. The easiest place to put a finiteness test is the check_object
>> function here:
>>
>> https://github.com/numpy/numpy/blob/master/numpy/linalg/lapack_litemodule.c
>>
>> But in that case we should probably use a macro guard to leave it out if
>> any other LAPACK than the builtin f2c version is used.
>
>
> It seems even the more recent (3.4.x) versions of LAPACK have places where
> NANs can cause infinite loops. As long as this is an issue it might perhaps
> be worth checking everywhere.
>
> http://www.netlib.org/lapack/bug_list.html
>
> The semi-official C interface LAPACKE implements NAN checking as well:
>
> http://www.netlib.org/lapack/lapacke.html#_nan_checking
>
> If Intel's engineers put NAN checking inside LAPACKE, it was probably for a
> good reason.

As more evidence that checking isfinite could be important for
stability even for non-lapack-lite LAPACKs, MKL docs currently include
the following warning:

WARNING
LAPACK routines assume that input matrices do not contain IEEE 754
special values such as INF
or NaN values. Using these special values may cause LAPACK to return
unexpected results or
become unstable.
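[Editor's note: a minimal sketch of the kind of up-front guard being discussed, written in Python for illustration; the actual proposal targets the check_object function in lapack_litemodule.c at the C level.]

```python
import numpy as np

def checked_svd(a):
    # Reject NaN/Inf up front instead of risking a hang or segfault
    # inside a LAPACK routine that assumes finite input.
    a = np.asarray(a)
    if not np.isfinite(a).all():
        raise np.linalg.LinAlgError("array must not contain infs or NaNs")
    return np.linalg.svd(a)
```

The `np.isfinite(a).all()` pass is O(n) over the data, which is the speed cost the thread weighs against correctness.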


Re: [Numpy-discussion] OpenBLAS on Mac

2014-02-20 Thread Olivier Grisel
I have exactly the same setup as yours and it links to OpenBLAS
correctly (in a venv as well, installed with python setup.py install).
The only difference is that I installed OpenBLAS in the default
folder: /opt/OpenBLAS (and I reflected that in site.cfg).

When you run otool -L, is it in your source tree or do you point to
the numpy/core/_dotblas.so of the site-packages folder of your venv?

If you activate your venv, go to a different folder (e.g. /tmp) and type:

python -c "import numpy as np; np.show_config()"

what do you get? I get:

$ python -c "import numpy as np; np.show_config()"
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = f77
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = f77
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = f77
blas_mkl_info:
  NOT AVAILABLE

-- 
Olivier


[Numpy-discussion] Header files in windows installers

2014-02-20 Thread Matt Newell

I have a small c++ extension used to feed a 1d numpy array into a 
QPainterPath.  Very simple just using PyArray_[Check|FLAGS|SIZE|DATA]. 
I developed it on debian which was of course very straightforward, but now I 
need to deploy on windows, which is of course where the fun always begins.

I was unpleasantly surprised to discover that the numpy installers for Windows 
do not have an option to install the include files, which appear to be all that 
is needed to compile extensions that use numpy's C-api.

I have actually managed to copy my set of include files from linux and with a 
few modifications to _numpyconfig.h got my extension compiled and working.

Is there any chance that the includes could be included in future releases?

Thanks,
Matt Newell


[Numpy-discussion] [JOB ANNOUNCEMENT] Software Developer permanent position available at ESRF, France

2014-02-20 Thread V. Armando Sole
Dear colleagues,

The ESRF is looking for a Software Developer:

http://esrf.profilsearch.com/recrute/fo_annonce_voir.php?id=300

Our ideal candidate would be experienced on OpenGL, OpenCL and Python.

Best regards,

Armando


[Numpy-discussion] OpenBLAS on Mac

2014-02-20 Thread Jurgen Van Gael
Hi All,

I run Mac OS X 10.9.1 and was trying to get OpenBLAS working for numpy.
I've downloaded the OpenBLAS source and compiled it (thanks to Olivier
Grisel). I installed everything to /usr/local/lib (I believe): e.g. "ll
/usr/local/lib/ | grep openblas"

lrwxr-xr-x   1 37B 10 Feb 14:51 libopenblas.a@ ->
libopenblas_sandybridgep-r0.2.9.rc1.a
lrwxr-xr-x   1 56B 10 Feb 14:51 libopenblas.dylib@ ->
/usr/local/lib/libopenblas_sandybridgep-r0.2.9.rc1.dylib
-rw-r--r--   1 18M  7 Feb 16:02 libopenblas_sandybridgep-r0.2.9.rc1.a
-rwxr-xr-x   1 12M 10 Feb 14:51 libopenblas_sandybridgep-r0.2.9.rc1.dylib*

Then I download the numpy sources and add a site.cfg with the only three
lines uncommented being:

[openblas]
libraries = openblas
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

When I run python setup.py config I get message that say that openblas_info
has been found. I then run setup build and setup install (into a
virtualenv). In the virtualenv, when I then check what _dotblas.so is
linked to, I keep getting that it is linked to Accelerate. E.g.

otool -L .../numpy/core/_dotblas.so =>
/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate

Any suggestions on getting my numpy working with OpenBLAS?

Thanks, Jurgen


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
2014-02-20 16:01 GMT+01:00 Julian Taylor :
>
> this is probably caused by the memory warmup
> it can be disabled with NO_WARMUP=1 in some configuration file.

This was it, I now get:

>>> import os, psutil
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
20.324352
>>> %time import numpy
CPU times: user 84 ms, sys: 464 ms, total: 548 ms
Wall time: 59.3 ms
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
27.906048

Thanks for the tip.

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Carl Kleffner
Good point, I didn't use this option.

Carl


2014-02-20 16:01 GMT+01:00 Julian Taylor :

> On Thu, Feb 20, 2014 at 3:50 PM, Olivier Grisel
>  wrote:
> > Thanks for sharing, this is all very interesting.
> >
> > Have you tried to have a look at the memory usage and import time of
> > numpy when linked against libopenblas.dll?
> >
> > --
>
> this is probably caused by the memory warmup
> it can be disabled with NO_WARMUP=1 in some configuration file.


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Carl Kleffner
Looking at the Task Manager, there is not much difference to numpy-MKL. I
didn't make any proper measurements, however.

Carl


2014-02-20 15:50 GMT+01:00 Olivier Grisel :

> Thanks for sharing, this is all very interesting.
>
> Have you tried to have a look at the memory usage and import time of
> numpy when linked against libopenblas.dll?
>
> --
> Olivier


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Nathaniel Smith
If you send a patch that deprecates dot's current behaviour for ndim>2,
we'll probably merge it. (We'd like it to function like you suggest, for
consistency with other gufuncs. But to get there we have to deprecate the
current behaviour first.)

While I'm wishing for things I'll also mention that it would be really neat
if binary gufuncs would have a .outer method like regular ufuncs do, so
anyone currently using ndim>2 dot could just switch to that. But that's a
lot more work than just deprecating something :-).

-n
On 20 Feb 2014 09:27, "Eric Moore"  wrote:

>
>
> On Thursday, February 20, 2014, Eelco Hoogendoorn <
> hoogendoorn.ee...@gmail.com> wrote:
>
>> If the standard semantics are not affected, and the most common
>> two-argument scenario does not take more than a single if-statement
>> overhead, I don't see why it couldn't be a replacement for the existing
>> np.dot; but others mileage may vary.
>>
>>
>> On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte wrote:
>>
>>> Hey,
>>>
>>> so I propose the following.  I'll implement a new function `mdot`.
>>> Incorporating the changes in `dot` are unlikely. Later, one can still
>>> include
>>> the features in `dot` if desired.
>>>
>>> `mdot` will have a default parameter `optimize`.  If `optimize==True` the
>>> reordering of the multiplication is done.  Otherwise it simply chains the
>>> multiplications.
>>>
>>> I'll test and benchmark my implementation and create a pull request.
>>>
>>> Cheers,
>>>  Stefan
>>
>> Another consideration here is that we need a better way to work with
> stacked matrices such as np.linalg handles now.  Ie I want to compute the
> matrix product of two (k, n, n) arrays producing a (k,n,n) result.  Near
> as  I can tell there isn't a way to do this right now that doesn't involve
> an explicit loop. Since dot will return a (k, n, k, n) result. Yes this
> output contains what I want but it also computes a lot of things that I
> don't want too.
>
> It would also be nice to be able to do a matrix product reduction, (k, n,
> n) -> (n, n) in a single line too.
>
> Eric
>


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Julian Taylor
On Thu, Feb 20, 2014 at 3:50 PM, Olivier Grisel
 wrote:
> Thanks for sharing, this is all very interesting.
>
> Have you tried to have a look at the memory usage and import time of
> numpy when linked against libopenblas.dll?
>
> --

this is probably caused by the memory warmup
it can be disabled with NO_WARMUP=1 in some configuration file.


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
Thanks for sharing, this is all very interesting.

Have you tried to have a look at the memory usage and import time of
numpy when linked against libopenblas.dll?

-- 
Olivier


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Carl Kleffner
Hi,

Some days ago I put some preliminary mingw-w64 binaries and code based on
Python 2.7 on my Google Drive to discuss them with Matthew Brett. Maybe it's
time for a broader discussion. IMHO it is ready for testing but not for
consumption.

url:
https://drive.google.com/folderview?id=0B4DmELLTwYmldUVpSjdpZlpNM1k&usp=sharing

contains:

(1) patches used

  numpy.patch
  scipy.patch


(2) 64 bit GCC toolchain
amd64/
 mingw-w64-toolchain-static_amd64-gcc-4.8.2_vc90_rev-20140131.7z
 libpython27.a

(3) numpy-1.8.0 linked against OpenBLAS
amd64/numpy-1.8.0/
  numpy-1.8.0.win-amd64-py2.7.exe
  numpy-1.8.0-cp27-none-win_amd64.whl
  numpy_amd64_fcompiler.log
  numpy_amd64_build.log
  numpy_amd64_test.log
  _numpyconfig.h
  config.h

(4) scipy-0.13.3 linked against OpenBLAS
amd64/scipy-0.13.3/
  scipy-0.13.3.win-amd64-py2.7.exe
  scipy-0.13.3-cp27-none-win_amd64.whl
  scipy_amd64_fcompiler.log
  scipy_amd64_build.log
  scipy_amd64_build_cont.log
  scipy_amd64_test._segfault.log
  scipy_amd64_test.log


(5) 32 bit GCC toolchain
win32/
  mingw-w64-toolchain-static_win32-gcc-4.8.2_vc90_rev-20140131.7z
  libpython27.a

(6) numpy-1.8.0 linked against OpenBLAS
win32/numpy-1.8.0/
  numpy-1.8.0.win32-py2.7.exe
  numpy-1.8.0-cp27-none-win32.whl
  numpy_win32_fcompiler.log
  numpy_win32_build.log
  numpy_win32_test.log
  _numpyconfig.h
  config.h

(7) scipy-0.13.3 linked against OpenBLAS
win32/scipy-0.13.3/
  scipy-0.13.3.win32-py2.7.exe
  scipy-0.13.3-cp27-none-win32.whl
  scipy_win32_fcompiler.log
  scipy_win32_build.log
  scipy_win32_build_cont.log
  scipy_win32_test.log

Summary to compile numpy:

(1) \bin and python should be in the PATH. Choose 32 bit or 64 bit
architecture.
(2) copy libpython27.a to \libs
 check, that \libs does not contain libmsvcr90.a
(3) apply numpy.patch
(4) copy libopenblas.dll from \bin to numpy\core
of course don't ever mix 32bit and 64 bit code
(5) create a site.cfg in the numpy folder with the absolute path to the
mingw import
files/header files. I copied the openblas header files, importlibs into
the GCC toolchain.
(6) create a mingw distutils.cfg file
(7) test the configuration
python setup.py config_fc --verbose
and
python setup.py build --help-fcompiler
(8) build
python setup.py build --fcompiler=gnu95
(9) make a distro
python setup.py bdist --format=wininst
(10) make a wheel
 wininst2wheel numpy-1.8.0.win32-py2.7.exe  (for 32 bit)
(11) install
 wheel install numpy-1.8.0-cp27-none-win32.whl
(12) import numpy; numpy.test()

Summary to compile scipy:

(1) apply scipy.patch
(2) python setup.py build --fcompiler=gnu95
and a second time
python setup.py build --fcompiler=gnu95
(3) python setup.py bdist --format=wininst
(4) install
(5) import scipy; scipy.test()

Hints:

(1) libpython import file:

The libpython27.a import file has been generated with gendef and dlltool
according to the recommendations on the mingw-w64 FAQ site. It is essential
not to use import libraries from other sources, but to create them with the
tools in the GCC toolchain. The GCC toolchain contains correctly generated
msvcrXX import files by default.
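As a sketch of that recipe (run from a directory containing python27.dll,
which normally sits in the Windows system directory or the Python install
directory):

```shell
# Generate a .def file listing python27.dll's exports,
# then build the import library from it (mingw-w64 FAQ recipe).
gendef python27.dll
dlltool --dllname python27.dll --def python27.def --output-lib libpython27.a
# Afterwards, copy libpython27.a into Python's \libs directory.
```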

(2) OpenBLAS:

The OpenBLAS DLL must be copied to numpy/core before building numpy. All
BLAS and LAPACK code will be linked dynamically against this DLL, so the
overall distro size is much smaller compared to numpy-MKL or scipy-MKL. It
is not necessary to add numpy/core to the PATH (at least on my machine): to
load libopenblas.dll into the process space it is only necessary to import
numpy, nothing else. libopenblas.dll is linked against msvcr90.dll, just
like Python itself. The DLL is a fat binary containing optimized kernels
for all supported platforms. The DLL, headers and import files have been
included in the toolchain.

(3) mingw-w64 toolchain:

In short, it is an extended version of the 'recommended' mingw-builds
toolchain, with some minor patches and customizations. I used
https://github.com/niXman/mingw-builds for my build. It is a static build,
so all GCC-related runtimes are linked statically into the resulting
binaries.

(4) Results:

There are some FAILs - see the corresponding log files. I got a segfault
with scipy.test() (64 bit) with multithreaded OpenBLAS (test_ssygv_1) but
not in single-threaded mode. Due to time constraints I haven't investigated
further yet.

Regards

Carl


2014-02-20 14:28 GMT+01:00 Sturla Molden :

> Will this mean NumPy, SciPy et al. can start using OpenBLAS in the
> "official" binary packages, e.g. on Windows and Mac OS X? ATLAS is slow and
> Accelerate conflicts with fork as well.
>
> Will dotblas be built against OpenBLAS? AFAIK, it is only built against
> ATLAS or MKL, not any other BLAS, but it should just be a matter of
> changing the build/link process.
>
> Sturla
>
>
>
> Nathaniel Smith  wrote:
> > Hey all,
> >
> > Just a heads up: thanks to the tireless work of Olivier Grisel, the
> > OpenBLAS development branch is now fork-safe when built with its default
> > threading support. (It is still not thread-safe when built using OMP
> > for threading and gcc, but this is not the default.)

Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
FYI: to build scipy against OpenBLAS I used the following site.cfg at
the root of my scipy source folder:

[DEFAULT]
library_dirs = /opt/OpenBLAS-noomp/lib:/usr/local/lib
include_dirs = /opt/OpenBLAS-noomp/include:/usr/local/include

[blas_opt]
libraries = openblas

[lapack_opt]
libraries = openblas

But this is unrelated to the numpy memory pattern reported previously, as
it occurs independently of scipy.

-- 
Olivier
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Eelco Hoogendoorn
Erik; take a look at np.einsum

The only argument against such dot semantics is that there isn't much
elegance to be gained beyond what np.einsum already provides. For plain
chaining, multiple arguments to dot would be an improvement; but if you
want more complex products, the elegance of np.einsum will be hard to beat.
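To make that concrete, here is a small sketch of both uses (my own
illustration, not code from the thread):

```python
import numpy as np

rng = np.random.RandomState(0)

# Stacked matrix product: two (k, n, n) arrays -> one (k, n, n) array,
# without the (k, n, k, n) blow-up that np.dot would produce.
k, n = 4, 3
A = rng.rand(k, n, n)
B = rng.rand(k, n, n)
C = np.einsum('kij,kjl->kil', A, B)
C_loop = np.array([a.dot(b) for a, b in zip(A, B)])  # explicit loop
print(np.allclose(C, C_loop))  # True

# Plain chaining of three 2-D matrices in a single call:
X, Y, Z = rng.rand(3, 4), rng.rand(4, 5), rng.rand(5, 2)
D = np.einsum('ij,jk,kl->il', X, Y, Z)
print(np.allclose(D, X.dot(Y).dot(Z)))  # True
```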


On Thu, Feb 20, 2014 at 3:27 PM, Eric Moore  wrote:

>
>
> On Thursday, February 20, 2014, Eelco Hoogendoorn <
> hoogendoorn.ee...@gmail.com> wrote:
>
>> If the standard semantics are not affected, and the most common
>> two-argument scenario does not take more than a single if-statement
>> overhead, I don't see why it couldn't be a replacement for the existing
>> np.dot; but others' mileage may vary.
>>
>>
>> On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte wrote:
>>
>>> Hey,
>>>
>>> so I propose the following.  I'll implement a new function `mdot`.
>>> Incorporating the changes in `dot` are unlikely. Later, one can still
>>> include
>>> the features in `dot` if desired.
>>>
>>> `mdot` will have a default parameter `optimize`.  If `optimize==True` the
>>> reordering of the multiplication is done.  Otherwise it simply chains the
>>> multiplications.
>>>
>>> I'll test and benchmark my implementation and create a pull request.
>>>
>>> Cheers,
>>>  Stefan
>>>
>>
> Another consideration here is that we need a better way to work with
> stacked matrices such as np.linalg handles now. I.e., I want to compute the
> matrix product of two (k, n, n) arrays producing a (k, n, n) result. As
> near as I can tell, there isn't a way to do this right now that doesn't
> involve an explicit loop, since dot will return a (k, n, k, n) result. Yes,
> this output contains what I want, but it also computes a lot of things
> that I don't want.
>
> It would also be nice to be able to do a matrix product reduction, (k, n,
> n) -> (n, n) in a single line too.
>
> Eric
>
>
>


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
2014-02-20 14:28 GMT+01:00 Sturla Molden :
> Will this mean NumPy, SciPy et al. can start using OpenBLAS in the
> "official" binary packages, e.g. on Windows and Mac OS X? ATLAS is slow and
> Accelerate conflicts with fork as well.

This is what I would like to do personally, ideally as a distribution of
wheel packages.

To do so I built the current develop branch of OpenBLAS with:

  make USE_OPENMP=0 NUM_THREADS=32 NO_AFFINITY=1
  make PREFIX=/opt/OpenBLAS-noomp install

Then I added a site.cfg file in the numpy source folder with the lines:

  [openblas]
  libraries = openblas
  library_dirs = /opt/OpenBLAS-noomp/lib
  include_dirs = /opt/OpenBLAS-noomp/include


> Will dotblas be built against OpenBLAS?

Yes:

  $ ldd numpy/core/_dotblas.so
  linux-vdso.so.1 =>  (0x7fff24d04000)
  libopenblas.so.0 => /opt/OpenBLAS-noomp/lib/libopenblas.so.0
(0x7f432882f000)
  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f4328449000)
  libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f432814c000)
  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f4327f2f000)
  libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3
(0x7f4327c18000)
  /lib64/ld-linux-x86-64.so.2 (0x7f43298d3000)
  libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0
(0x7f43279e1000)


However when testing this I noticed the following strange slow import
and memory usage in an IPython session:

>>> import os, psutil
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
20.324352
>>> %time import numpy
CPU times: user 1.95 s, sys: 1.3 s, total: 3.25 s
Wall time: 530 ms
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
349.507584

The libopenblas.so file is just 14 MB, so I don't understand where those
330 MB come from.

It's even worse when using static linking (libopenblas.a instead of
libopenblas.so under Linux).

With Atlas or MKL I get import times under 50 ms and the memory
overhead of the numpy import is just ~15MB.

I would be very interested in any help on this:

- can you reproduce this behavior?
- do you have an idea of a possible cause?
- how to investigate?
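One way to start investigating on Linux (a sketch of my own, assuming the
/proc filesystem is available) is to break the process's resident memory down
by mapped file, to see whether libopenblas's mappings account for the extra
RSS after `import numpy`:

```python
import os

# Sum the resident set size per mapped file from /proc/<pid>/smaps
# (Linux only). Mapping header lines contain an address range like
# "7f43...-7f43..."; the "Rss:" lines that follow give each mapping's
# resident size in kB.
usage = {}
current = '[anon]'
with open('/proc/%d/smaps' % os.getpid()) as f:
    for line in f:
        fields = line.split()
        if '-' in fields[0]:          # a new mapping header line
            current = fields[5] if len(fields) > 5 else '[anon]'
        elif fields[0] == 'Rss:':     # resident size of that mapping, in kB
            usage[current] = usage.get(current, 0) + int(fields[1])

# Print the five biggest contributors to RSS.
for path, kb in sorted(usage.items(), key=lambda kv: -kv[1])[:5]:
    print('%8d kB  %s' % (kb, path))
```

Running this before and after `import numpy` should show which shared object
(or anonymous mappings) the extra memory belongs to.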

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Eric Moore
On Thursday, February 20, 2014, Eelco Hoogendoorn <
hoogendoorn.ee...@gmail.com> wrote:

> If the standard semantics are not affected, and the most common
> two-argument scenario does not take more than a single if-statement
> overhead, I don't see why it couldn't be a replacement for the existing
> np.dot; but others' mileage may vary.
>
>
> On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte wrote:
>
>> Hey,
>>
>> so I propose the following.  I'll implement a new function `mdot`.
>> Incorporating the changes in `dot` are unlikely. Later, one can still
>> include
>> the features in `dot` if desired.
>>
>> `mdot` will have a default parameter `optimize`.  If `optimize==True` the
>> reordering of the multiplication is done.  Otherwise it simply chains the
>> multiplications.
>>
>> I'll test and benchmark my implementation and create a pull request.
>>
>> Cheers,
>>  Stefan
>>
>
Another consideration here is that we need a better way to work with
stacked matrices such as np.linalg handles now. I.e., I want to compute the
matrix product of two (k, n, n) arrays producing a (k, n, n) result. As
near as I can tell, there isn't a way to do this right now that doesn't
involve an explicit loop, since dot will return a (k, n, k, n) result. Yes,
this output contains what I want, but it also computes a lot of things that
I don't want.

It would also be nice to be able to do a matrix product reduction,
(k, n, n) -> (n, n), in a single line.
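Until such a vectorized call exists, the (k, n, n) -> (n, n) reduction can at
least be written compactly with an explicit reduce (my own sketch, still a
Python-level loop under the hood):

```python
import functools
import numpy as np

# Reduce a stack of matrices (k, n, n) -> (n, n) by repeated matrix
# product: A[0] . A[1] . ... . A[k-1].
rng = np.random.RandomState(42)
A = rng.rand(5, 3, 3)

P = functools.reduce(np.dot, A)   # one-line left-to-right reduction

# Check against a hand-rolled loop:
Q = A[0]
for M in A[1:]:
    Q = Q.dot(M)
print(np.allclose(P, Q))  # True
```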

Eric


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Sturla Molden
Will this mean NumPy, SciPy et al. can start using OpenBLAS in the
"official" binary packages, e.g. on Windows and Mac OS X? ATLAS is slow and
Accelerate conflicts with fork as well. 

Will dotblas be built against OpenBLAS? AFAIK, it is only built against
ATLAS or MKL, not any other BLAS, but it should just be a matter of
changing the build/link process.

Sturla



Nathaniel Smith  wrote:
> Hey all,
> 
> Just a heads up: thanks to the tireless work of Olivier Grisel, the
> OpenBLAS development branch is now fork-safe when built with its default
> threading support. (It is still not thread-safe when built using OMP for
> threading and gcc, but this is not the default.)
> 
> Gory details: https://github.com/xianyi/OpenBLAS/issues/294
> 
> Check it out - if it works you might want to consider lobbying your
> favorite distro to backport it.
> 
> -n
> 



[Numpy-discussion] PyViennaCL

2014-02-20 Thread Toby St Clere Smithe
Hi all,

Apologies for posting across lists; I thought that this might be of
interest to both groups.

I have just released PyViennaCL 1.0.0, which is a set of largely
NumPy-compatible Python bindings to the ViennaCL linear algebra and
numerical computation library for GPGPU and heterogeneous systems.

PyViennaCL aims to make powerful GPGPU computing really transparently
easy, especially for users already using NumPy for representing
matrices.

Please see my announcement below for links to source and packages and
documentation, a list of features, and a list of missing pieces. I hope
to iron out all those missing bits over the coming months, and work on
closer integration, especially with PyOpenCL / PyCUDA, over the summer.

Best wishes,

Toby St Clere Smithe


--- Begin Message ---
Dear ViennaCL users,


If you've ever used Python for your numerical applications, you know
what joy it can be. Now, the easy power of ViennaCL 1.5.1 is at last
married to that experience. I am pleased to announce the first release
of PyViennaCL!


Download links for source and Ubuntu binaries are found at the usual
place: http://viennacl.sourceforge.net/viennacl-download.html
 * If you are or know anyone who could help with building PyViennaCL for
   other systems (Windows, Mac OS X, CentOS / RHEL, Fedora, SuSE, ...),
   please get in touch!


See the following link for documentation and example code:
http://viennacl.sourceforge.net/pyviennacl/doc/


PyViennaCL 1.0.0 exposes most of the functionality of ViennaCL:
 + sparse (compressed, co-ordinate, ELL, and hybrid) and dense
 (row-major and column-major) matrices, vectors and scalars on your
 compute device using OpenCL;
 
 + standard arithmetic operations and mathematical functions;

 + fast matrix products for sparse and dense matrices, and inner and
 outer products for vectors;

 + direct solvers for dense triangular systems;

 + iterative solvers for sparse and dense systems, using the BiCGStab,
 CG, and GMRES algorithms;

 + iterative algorithms for eigenvalue estimation problems.


PyViennaCL has also been designed for straightforward use in the context
of NumPy and SciPy: PyViennaCL objects can be constructed using NumPy
arrays, and arithmetic operations and comparisons in PyViennaCL are
type-agnostic.


Some ViennaCL functionality is not yet available, and these features are
planned for a release in the coming months:
 + preconditioners and QR factorization;
 + additional solvers and other algorithms, such as FFT computation;
 + structured matrices;
 + CUDA support (use OpenCL for now!);
 + advanced OpenCL integration.



Spread the word!


Toby St Clere Smithe

--- End Message ---


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Olivier Grisel
2014-02-20 11:32 GMT+01:00 Julian Taylor :
> On Thu, Feb 20, 2014 at 1:25 AM, Nathaniel Smith  wrote:
>> Hey all,
>>
>> Just a heads up: thanks to the tireless work of Olivier Grisel, the OpenBLAS
>> development branch is now fork-safe when built with its default threading
>> support. (It is still not thread-safe when built using OMP for threading and
>> gcc, but this is not the default.)
>>
>> Gory details: https://github.com/xianyi/OpenBLAS/issues/294
>>
>> Check it out - if it works you might want to consider lobbying your favorite
>> distro to backport it.
>>
>
> debian unstable and the upcoming ubuntu 14.04 are already fixed.

Nice!

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Eelco Hoogendoorn
If the standard semantics are not affected, and the most common
two-argument scenario does not take more than a single if-statement
overhead, I don't see why it couldn't be a replacement for the existing
np.dot; but others' mileage may vary.


On Thu, Feb 20, 2014 at 11:34 AM, Stefan Otte  wrote:

> Hey,
>
> so I propose the following.  I'll implement a new function `mdot`.
> Incorporating the changes in `dot` are unlikely. Later, one can still
> include
> the features in `dot` if desired.
>
> `mdot` will have a default parameter `optimize`.  If `optimize==True` the
> reordering of the multiplication is done.  Otherwise it simply chains the
> multiplications.
>
> I'll test and benchmark my implementation and create a pull request.
>
> Cheers,
>  Stefan
>


Re: [Numpy-discussion] Proposal: Chaining np.dot with mdot helper function

2014-02-20 Thread Stefan Otte
Hey,

so I propose the following.  I'll implement a new function `mdot`.
Incorporating the changes in `dot` are unlikely. Later, one can still include
the features in `dot` if desired.

`mdot` will have a default parameter `optimize`.  If `optimize==True` the
reordering of the multiplication is done.  Otherwise it simply chains the
multiplications.

I'll test and benchmark my implementation and create a pull request.

Cheers,
 Stefan
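For the curious, an `mdot` with an `optimize` flag along these lines could be
sketched as follows. This is my own illustration using the textbook
matrix-chain-order dynamic program, not Stefan's actual implementation:

```python
import functools
import numpy as np

def mdot(*arrays, **kwargs):
    """Chain np.dot over several 2-D arrays. With optimize=True, pick
    the multiplication order via the classic matrix-chain DP; otherwise
    simply chain left to right. Illustrative sketch only."""
    optimize = kwargs.get('optimize', True)
    if not optimize or len(arrays) < 3:
        return functools.reduce(np.dot, arrays)

    # Matrix i has shape (dims[i], dims[i + 1]).
    dims = [a.shape[0] for a in arrays] + [arrays[-1].shape[1]]
    n = len(arrays)
    cost = [[0] * n for _ in range(n)]   # min scalar-multiplication counts
    split = [[0] * n for _ in range(n)]  # optimal split points
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            cost[i][j] = float('inf')
            for k in range(i, j):
                c = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k

    def combine(i, j):
        # Multiply the subchain arrays[i..j] at its optimal split point.
        if i == j:
            return arrays[i]
        k = split[i][j]
        return np.dot(combine(i, k), combine(k + 1, j))

    return combine(0, n - 1)

rng = np.random.RandomState(0)
A, B, C = rng.rand(10, 100), rng.rand(100, 5), rng.rand(5, 50)
print(np.allclose(mdot(A, B, C), A.dot(B).dot(C)))  # True
```

For these shapes the DP picks (A·B)·C (7,500 scalar multiplications) over
A·(B·C) (75,000), which is exactly the kind of saving the proposal is after.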


Re: [Numpy-discussion] Default builds of OpenBLAS development branch are now fork safe

2014-02-20 Thread Julian Taylor
On Thu, Feb 20, 2014 at 1:25 AM, Nathaniel Smith  wrote:
> Hey all,
>
> Just a heads up: thanks to the tireless work of Olivier Grisel, the OpenBLAS
> development branch is now fork-safe when built with its default threading
> support. (It is still not thread-safe when built using OMP for threading and
> gcc, but this is not the default.)
>
> Gory details: https://github.com/xianyi/OpenBLAS/issues/294
>
> Check it out - if it works you might want to consider lobbying your favorite
> distro to backport it.
>

Debian unstable and the upcoming Ubuntu 14.04 are already fixed.