Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:13, Sturla Molden wrote: Most of the test failures with OpenBLAS and Carl Kleffner's toolchain on Windows are due to differences between Microsoft and MinGW runtime libraries ... and also differences in FPU precision. Sturla

Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:19, Matthew Brett wrote: ATLAS compiled with gcc also gives us some more license complication: http://numpy-discussion.10968.n7.nabble.com/Copyright-status-of-NumPy-binaries-on-Windows-OS-X-tp38793p38824.html Ok, then I have a question regarding OpenBLAS: Do we use the f2c'd

Re: [Numpy-discussion] IDE's for numpy development?

2015-04-01 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: I'd be interested in information from anyone with experience in using such an IDE and ideas of how Numpy might make using some of the common IDEs easier. Thoughts? I guess we could include project files for Visual Studio (and perhaps

Re: [Numpy-discussion] How to Force Storage Order

2015-04-01 Thread Sturla Molden
Klemm, Michael michael.kl...@intel.com wrote: I have found that the numpy.linalg.svd algorithm creates the resulting U, sigma, and V matrices with Fortran storage. Is there any way to force these kinds of algorithms to not change the storage order? That would make passing the matrices to the
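A minimal sketch (not from this thread) of the usual workaround, assuming the goal is simply a C-ordered copy of the LAPACK-produced factors:

    import numpy as np

    a = np.random.rand(4, 3)
    u, s, vt = np.linalg.svd(a, full_matrices=False)

    # LAPACK-backed routines may hand back Fortran-ordered (column-major) arrays.
    print(u.flags['F_CONTIGUOUS'])

    # Force row-major storage by copying; the original arrays are left untouched.
    u_c = np.ascontiguousarray(u)
    vt_c = np.ascontiguousarray(vt)
    print(u_c.flags['C_CONTIGUOUS'])   # True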

[Numpy-discussion] SIMD programming in Cython

2015-03-11 Thread Sturla Molden
So I just learned a new trick. This is a very nice one to know about, so I thought I should share it: https://groups.google.com/forum/#!msg/cython-users/nTnyI7A6sMc/a6_GnOOsLuQJ Regards, Sturla

Re: [Numpy-discussion] Introductory mail and GSoc Project Vector math library integration

2015-03-11 Thread Sturla Molden
On 11/03/15 23:20, Dp Docs wrote: So we are supposed to integrate just one of these libraries? As a Mac user I would be annoyed if we only supported MKL and not Accelerate Framework. AMD LibM should be supported too. MKL is non-free, but we use it for BLAS and LAPACK. AMD LibM is non-free

Re: [Numpy-discussion] Introductory mail and GSoc Project Vector math library integration

2015-03-11 Thread Sturla Molden
There are several vector math libraries NumPy could use, e.g. MKL/VML, Apple Accelerate (vecLib), ACML, and probably others. They all suffer from requiring dense arrays and specific array alignments, whereas NumPy arrays have very flexible strides and flexible alignment. NumPy also has ufuncs

Re: [Numpy-discussion] Would like to patch docstring for numpy.random.normal

2015-03-10 Thread Sturla Molden
On 09/03/15 21:34, Paul Hobson wrote: I feel your pain. Making it worse, numpy.random.lognormal takes mean and sigma as input. If there's ever a backwards incompatible release, I hope these things will be cleared up. The question is how... The fix is obvious, but the consequences are

Re: [Numpy-discussion] Would like to patch docstring for numpy.random.normal

2015-03-03 Thread Sturla Molden
Daniel Sank sank.dan...@gmail.com wrote: It seems unnecessarily convoluted to name the input arguments loc and scale, then immediately define them as the mean and standard deviation in the Parameters section, and then again rename them as mu and sigma in the written formula. I propose to
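For readers following the naming discussion, a small illustration (mine, not from the email) of how the current loc/scale arguments map onto mu and sigma:

    import numpy as np

    mu, sigma = 2.0, 0.5

    # 'loc' is the mean (mu) and 'scale' is the standard deviation (sigma).
    samples = np.random.normal(loc=mu, scale=sigma, size=100000)

    print(samples.mean(), samples.std())   # close to 2.0 and 0.5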

[Numpy-discussion] So I found a bug...

2015-02-27 Thread Sturla Molden
Somewhere... But where is it? NumPy, SciPy, Matplotlib, Cython or ipython? I am suspecting ipython, but proving it is hard... http://nbviewer.ipython.org/urls/dl.dropboxusercontent.com/u/12464039/lenna-bug.ipynb Sturla

Re: [Numpy-discussion] One-byte string dtype: third time's the charm?

2015-02-22 Thread Sturla Molden
On 22/02/15 19:21, Aldcroft, Thomas wrote: Problems like this are now showing up in the wild [3]. Workarounds are also showing up, like a way to easily convert from 'S' to 'U' within astropy Tables [4], but this is really not a desirable way to go. Gigabyte-sized string data arrays are not

Re: [Numpy-discussion] One-byte string dtype: third time's the charm?

2015-02-22 Thread Sturla Molden
On 22/02/15 21:04, Robert Kern wrote: Python 3's `str` type is opaque, so it can freely choose how to represent the data in memory. numpy dtypes transparently describe how the data is represented in memory. Hm, yes, that is a good point. Sturla

Re: [Numpy-discussion] One-byte string dtype: third time's the charm?

2015-02-22 Thread Sturla Molden
On 22/02/15 20:57, Nathaniel Smith wrote: This is a discussion about how strings are represented as bit-patterns inside ndarrays; the internal storage representation used by 'str' is irrelevant. I thought it would be clever to just use the same internal representation as Python would choose.

[Numpy-discussion] Nature says 'Pick up Python'

2015-02-13 Thread Sturla Molden
A recent article in Nature advises scientists to use Python, Cython and the SciPy stack. http://www.nature.com/news/programming-pick-up-python-1.16833 Sturla

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-10 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: The strongest use-case seems to be for teaching that involves linear algebra concepts, not real production code. Not really. SymPy is a better teaching tool. Some find A*B easier to read than dot(A,B). But with the @ operator in Python 3.5 it does

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-10 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: Well, splitting it off is a good idea, seeing as how it hasn't gotten much love. But if the rest of numpy does not work well with it, then it becomes even less useful. PEP 3118 takes care of that. Sturla

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-09 Thread Sturla Molden
On 09/02/15 08:34, Stefan Reiterer wrote: So maybe the better way would be not to add warnings to braodcasting operations, but to overhaul the matrix class to make it more attractive for numerical linear algebra(?) I think you underestimate the amount of programming this would take. Take an

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-09 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: Do you realize that: arr = np.ones((5,)) ar2 = arr * 5 is broadcasting, too? Perhaps we should only warn for a subset of broadcastings? E.g. avoid the warning on scalar times array. I prefer we don't warn about this though, because it might
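A tiny sketch (mine, not from the thread) of the distinction being drawn: scalar-times-array is broadcasting too, so warning on every broadcast would be very noisy.

    import numpy as np

    arr = np.ones((5,))

    # Scalar broadcasting: the scalar 5 is virtually stretched to shape (5,).
    ar2 = arr * 5

    # Shape broadcasting: (3, 1) combined with (1, 4) gives a (3, 4) result.
    a = np.arange(3).reshape(3, 1)
    b = np.arange(4).reshape(1, 4)
    print((a + b).shape)   # (3, 4)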

Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-02-09 Thread Sturla Molden
Two quick comments: - You need MSYS or Cygwin to build OpenBLAS. MSYS has uname and perl. Carl probably used MSYS. - BLAS and LAPACK are Fortran libs, hence there are no header files. NumPy and SciPy include their own cblas headers. Sturla Olivier Grisel olivier.gri...@ensta.org wrote: Hi

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-08 Thread Sturla Molden
Simon Wood sgwoo...@gmail.com wrote: Not quite the same. This is not so much about language semantics as mathematical definitions. You (the Numpy community) have decided to overload certain mathematical operators to act in a way that is not consistent with linear algebra teachings. We have

Re: [Numpy-discussion] Silent Broadcasting considered harmful

2015-02-08 Thread Sturla Molden
On 08/02/15 23:17, Stefan Reiterer wrote: Actually I use numpy for several years now, and I love it. The reason that I think silent broadcasting of sums is bad comes simply from the fact, that I had more trouble with it, than it helped me. In Fortran 90, broadcasting allows us to write

Re: [Numpy-discussion] Any interest in a 'heaviside' ufunc?

2015-02-04 Thread Sturla Molden
On 04/02/15 06:18, Warren Weckesser wrote: By discrete form, do you mean discrete time (i.e. a function defined on the integers)? Then I agree, the discrete time unit step function is defined as ... It is the cumulative integral of the delta function, and thus it can never obtain the value 0.5.

Re: [Numpy-discussion] Any interest in a 'heaviside' ufunc?

2015-02-03 Thread Sturla Molden
Warren Weckesser warren.weckes...@gmail.com wrote: heaviside(x) = 0 if x < 0, 0.5 if x == 0, 1 if x > 0. This is not correct. The discrete form of the Heaviside step function has the value 1 for x == 0. heaviside = lambda x : 1 - (x
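A hedged sketch (not from the post) contrasting the two conventions under discussion: the half-maximum definition used by the proposed ufunc, and the form Sturla describes, which equals 1 at x == 0. (Much later, NumPy gained np.heaviside(x, h0), which takes the value at zero as an explicit second argument.)

    import numpy as np

    def heaviside_half(x):
        # Half-maximum convention: H(0) == 0.5 (the proposed ufunc's behaviour).
        x = np.asarray(x, dtype=float)
        return np.where(x < 0, 0.0, np.where(x > 0, 1.0, 0.5))

    def heaviside_one(x):
        # Convention argued for here: H(0) == 1.
        x = np.asarray(x)
        return 1.0 - (x < 0)

    print(heaviside_half([-1.0, 0.0, 1.0]))  # [0.  0.5 1. ]
    print(heaviside_one([-1.0, 0.0, 1.0]))   # [0. 1. 1.]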

Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Sturla Molden
On 27/01/15 11:32, Carl Kleffner wrote: OpenBLAS in the test wheels is built with DYNAMIC_ARCH, that is, all assembler based kernels are included and are chosen at runtime. Ok, I wasn't aware of that option. Last time I built OpenBLAS I think I had to specify the target CPU. Non optimized

Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-25 Thread Sturla Molden
On 25/01/15 22:15, Matthew Brett wrote: I agree, that shipping openblas with both numpy and scipy seems perfectly reasonable to me - I don't think anyone will much care about the 30M, and I think our job is to make something that works with the least complexity and likelihood of error. Yes.

Re: [Numpy-discussion] Aligned array allocations

2015-01-22 Thread Sturla Molden
Antoine Pitrou solip...@pitrou.net wrote: By always using an aligned allocator there is some overhead: - all arrays occupy a bit more memory by a small average amount (probably 16 bytes average on a 64-bit machine, for a 16 byte guaranteed alignment) NumPy arrays are Python objects. They

Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-22 Thread Sturla Molden
Were there any failures with the 64 bit build, or did all tests pass? Sturla On 22/01/15 22:29, Carl Kleffner wrote: I took time to create mingw-w64 based wheels of numpy-1.9.1 and scipy-0.15.1 source distributions and put them on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads

Re: [Numpy-discussion] Characteristic of a Matrix.

2015-01-06 Thread Sturla Molden
On 06/01/15 02:08, cjw wrote: This is not a comment on any present matrix support, but deals with the matrix class, which existed back when Todd Miller of the Space Telescope Group supported numpy. Matrix is a sub-class of ndarray. Since this Matrix class is (more or less) deprecated and

Re: [Numpy-discussion] The future of ndarray.diagonal()

2015-01-05 Thread Sturla Molden
Konrad Hinsen konrad.hin...@fastmail.net wrote: Scientific communication depends more and more on scripts as the only precise documentation of a computational method. Our programming languages are becoming a major form of scientific notation, alongside traditional mathematics. To me it

Re: [Numpy-discussion] The future of ndarray.diagonal()

2015-01-04 Thread Sturla Molden
On 04/01/15 17:22, Konrad Hinsen wrote: There are two different scenarios to consider here, and perhaps I didn't make that distinction clear enough. One scenario is that of a maintained library or application that depends on NumPy. The other scenario is a set of scripts written for a specific

Re: [Numpy-discussion] The future of ndarray.diagonal()

2015-01-04 Thread Sturla Molden
On 03/01/15 20:49, Nathaniel Smith wrote: i.e., slow-incremental-change has actually worked well in his experience. (And in particular, the np.diagonal issue only comes in as an example to illustrate what he means by the phrase slow continuous change -- this particular change hasn't actually

[Numpy-discussion] Correct C string handling in the NumPy C API?

2015-01-03 Thread Sturla Molden
Here is an example: NPY_NO_EXPORT NpyIter_IterNextFunc * NpyIter_GetIterNext(NpyIter *iter, char **errmsg) { npy_uint32 itflags = NIT_ITFLAGS(iter); int ndim = NIT_NDIM(iter); int nop = NIT_NOP(iter); if (NIT_ITERSIZE(iter) < 0) { if (errmsg == NULL) {

Re: [Numpy-discussion] Correct C string handling in the NumPy C API?

2015-01-03 Thread Sturla Molden
Sturla Molden sturla.mol...@gmail.com wrote: Thanks. That explains it. 20 years after learning C I still discover new things... On the other hand, Fortran is Fortran, and seems to be free of these gotchas... Python is better as well. I hate to say it but C++ would also be less confusing

Re: [Numpy-discussion] Correct C string handling in the NumPy C API?

2015-01-03 Thread Sturla Molden
Nathaniel Smith n...@pobox.com wrote: No, this code is safe (fortunately!). C string literals have static storage (see paragraph 6.4.5.5 in C99), which means that their lifetime is the same as the lifetime of a 'static char[]'. They aren't stack allocated. Thanks. That explains it. Sturla

Re: [Numpy-discussion] diag, diagonal, ravel and all that

2015-01-03 Thread Sturla Molden
On 03/01/15 03:04, Charles R Harris wrote: The diag, diagonal, and ravel functions have recently been changed to preserve subtypes. However, this causes lots of backward compatibility problems for matrix users, in particular, scipy.sparse. One possibility for fixing this is to special case

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-02 Thread Sturla Molden
Yuxiang Wang yw...@virginia.edu wrote: 4) I wanted to say that it seems to me, as the project gradually scales up, Cython is easier to deal with, especially when I am using a lot of numpy arrays. If it is even higher dimensional data, it would be verbose while it is really succinct to use

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-02 Thread Sturla Molden
Yuxiang Wang yw...@virginia.edu wrote: 1) @Sturla Sorry about my stupid mistake! That piece of code totally gave away how green I am in coding C :) Don't worry. C is a high-level assembler. It will bite you again and again; it happens to everyone. Those who say they have never made a stupid

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Sturla Molden
You can pretend double** is an array of dtype np.intp. This is because on all modern systems, double** has the size of void*, and np.intp is an integer with the size of void* (np.intp maps to Py_intptr_t). Now you just need to fill in the addresses. If you have a 2d ndarray in C order, or at
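A hedged ctypes sketch of the address-filling idea, for a hypothetical C function void foobar(int m, int n, double **x) in a hypothetical libdummy.so (the library name and function are illustrative, not from the thread):

    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libdummy.so")          # hypothetical library
    lib.foobar.restype = None
    lib.foobar.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.POINTER(ctypes.POINTER(ctypes.c_double))]

    m, n = 3, 4
    a = np.zeros((m, n), dtype=np.float64)      # C-ordered 2D array

    # "Pretend double** is an array of dtype np.intp": store the address of
    # each row in an intp array, then pass its data pointer as double**.
    rows = np.array([a[i, :].ctypes.data for i in range(m)], dtype=np.intp)
    xpp = rows.ctypes.data_as(ctypes.POINTER(ctypes.POINTER(ctypes.c_double)))

    lib.foobar(m, n, xpp)                       # C side sees x[i][j]

The rows array must stay alive for as long as the C code uses the pointer table; here it does because it is a local in the same scope as the call.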

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Sturla Molden
On 01/01/15 19:00, Yuxiang Wang wrote: Could anyone please give any thoughts to help? Say you want to call void foobar(int m, int n, double **x) from dummy.so or dummy.dll with ctypes. Here is a fully worked out example (not tested, but it will work unless I made a typo): import numpy as np

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Sturla Molden
On 01/01/15 19:56, Nathaniel Smith wrote: However, I suspect that this question can't really be answered in a useful way without more information about why exactly the C code wants a **double (instead of a *double) and what it expects to do with it. E.g., is it going to throw away the passed

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Sturla Molden
On 01/01/15 20:25, Yuxiang Wang wrote: #include <stdlib.h> __declspec(dllexport) void foobar(const int m, const int n, const double **x, double **y) { size_t i, j; y = (double **)malloc(sizeof(double *) * m); for(i=0; i<m; i++) y[i] = (double *)calloc(sizeof(double),

Re: [Numpy-discussion] Pass 2d ndarray into C **double using ctypes

2015-01-01 Thread Sturla Molden
On 01/01/15 19:30, Sturla Molden wrote: You can pretend double** is an array of dtype np.intp. This is because on all modern systems, double** has the size of void*, and np.intp is an integer with the size of void* (np.intp maps to Py_intptr_t). Well, it also requires that the user space

Re: [Numpy-discussion] npymath on Windows

2015-01-01 Thread Sturla Molden
On 28/12/14 17:17, David Cournapeau wrote: This is not really supported. You should avoid mixing compilers when building C extensions using numpy C API. Either all mingw, or all MSVC. That is not really good enough. Even if we build binary wheels with MinGW (see link) the binary npymath

Re: [Numpy-discussion] npymath on Windows

2015-01-01 Thread Sturla Molden
On 28/12/14 01:59, Matthew Brett wrote: As far as I can see, 'acosf' is defined in the msvc runtime library. I guess that '_acosf' is defined in some mingw runtime library? AFAIK it is a GCC built-in function. When the GCC compiler or linker sees it the binary code will be inlined.

Re: [Numpy-discussion] Fast sizes for FFT

2014-12-24 Thread Sturla Molden
On 24/12/14 04:33, Robert McGibbon wrote: Alex Griffing pointed out on github that this feature was recently added to scipy in https://github.com/scipy/scipy/pull/3144. Sweet! I use a different padsize search than the one in SciPy. It would be interesting to see which is faster. from numpy

Re: [Numpy-discussion] Fast sizes for FFT

2014-12-24 Thread Sturla Molden
On 24/12/14 13:07, Sturla Molden wrote: cdef intp_t checksize(intp_t n): while not (n % 5): n /= 5 while not (n % 3): n /= 3 while not (n % 2): n /= 2 return (1 if n == 1 else 0) def _next_regular(target): cdef intp_t n = target while not checksize
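The same brute-force idea in plain Python (a sketch assuming "regular" means 5-smooth, i.e. no prime factors other than 2, 3 and 5):

    def is_regular(n):
        # Divide out the allowed factors; a 5-smooth number reduces to 1.
        for p in (2, 3, 5):
            while n % p == 0:
                n //= p
        return n == 1

    def next_regular(target):
        # Walk upwards until a 5-smooth number is found.
        n = target
        while not is_regular(n):
            n += 1
        return n

    print(next_regular(1000))   # 1000 (= 2**3 * 5**3)
    print(next_regular(1001))   # 1024 (= 2**10)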

Re: [Numpy-discussion] Fast sizes for FFT

2014-12-24 Thread Sturla Molden
On 24/12/14 13:23, Julian Taylor wrote: hm this is a brute force search, probably fast enough but slower than scipy's code (if it also were cython) That was what I thought as well when I wrote it. But it turned out that regular numbers are so close and abundant that it was damn fast, even in

Re: [Numpy-discussion] Fast sizes for FFT

2014-12-24 Thread Sturla Molden
On 24/12/14 04:33, Robert McGibbon wrote: Alex Griffing pointed out on github that this feature was recently added to scipy in https://github.com/scipy/scipy/pull/3144. Sweet! I would rather have SciPy implement this with the overlap-and-add method rather than padding the FFT. Overlap-and-add

Re: [Numpy-discussion] Fast sizes for FFT

2014-12-24 Thread Sturla Molden
On 24/12/14 14:34, Sturla Molden wrote: I would rather have SciPy implement this with the overlap-and-add method rather than padding the FFT. Overlap-and-add is more memory efficient for large n: (eh, the list should be) - Overlap-and-add is more memory efficient for large n. - It scales

Re: [Numpy-discussion] error: no matching function for call to 'PyArray_DATA'

2014-12-11 Thread Sturla Molden
Jack Howarth howarth.mailing.li...@gmail.com wrote: What is the correct coding to eliminate this error? I have found some threads which seems to suggest that PyArray_DATA is still available in numpy 1.9 as an inline but I haven't found any examples of projects patching their code to convert

Re: [Numpy-discussion] Should ndarray be a context manager?

2014-12-10 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: I haven't managed to trigger a segfault yet but it sure looks like I could... You can also trigger random errors. If the array is small, Python's memory manager might keep the memory in the heap for reuse by PyMem_Malloc. And then you can actually

[Numpy-discussion] Should ndarray be a context manager?

2014-12-09 Thread Sturla Molden
) as x: [...] Sturla # (C) 2014 Sturla Molden from cpython cimport PyMem_Malloc, PyMem_Free from libc.string cimport memset cimport numpy as cnp cnp.import_array() cdef class Heapmem: cdef: void *_pointer cnp.intp_t _size def __cinit__(Heapmem self

Re: [Numpy-discussion] Should ndarray be a context manager?

2014-12-09 Thread Sturla Molden
On 09/12/14 18:39, Julian Taylor wrote: A context manager will also not help you with reference cycles. It will, because __exit__ is always executed. Even if the PyArrayObject struct lingers, the data buffer will be released. Sturla

Re: [Numpy-discussion] Should ndarray be a context manager?

2014-12-09 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: my first thought iust that you can just do: x = np.zeros(n) [... your code here ] del x x's ref count will go down, and it will be deleted if there are no other references to it. 1. This depends on reference counting. PyPy supports numpy too

Re: [Numpy-discussion] Should ndarray be a context manager?

2014-12-09 Thread Sturla Molden
Nathaniel Smith n...@pobox.com wrote: @contextmanager def tmp_zeros(*args, **kwargs): arr = np.zeros(*args, **kwargs) try: yield arr finally: arr.resize((0,), check_refs=False) That one is interesting. I have actually never used ndarray.resize(). It did not

Re: [Numpy-discussion] Should ndarray be a context manager?

2014-12-09 Thread Sturla Molden
Nathaniel Smith n...@pobox.com wrote: This should be pretty trivial to implement. AFAICT you don't need any complicated cython I have a bad habit of thinking in terms of too complicated C instead of just using NumPy. @contextmanager def tmp_zeros(*args, **kwargs): arr =
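A runnable variant of the snippet quoted above; note that the actual keyword of ndarray.resize is refcheck (the check_refs spelling in the quote appears to be a slip), and the only assumption here is that we want to free the buffer eagerly when the block exits:

    import numpy as np
    from contextlib import contextmanager

    @contextmanager
    def tmp_zeros(*args, **kwargs):
        arr = np.zeros(*args, **kwargs)
        try:
            yield arr
        finally:
            # Shrink to zero elements; refcheck=False skips the reference-count
            # check, so the buffer is released even if views still exist
            # (any surviving views then become unsafe to use).
            arr.resize((0,), refcheck=False)

    with tmp_zeros(10**6) as x:
        x[:] = 1.0
        print(x.sum())      # 1000000.0
    print(x.shape)          # (0,) -- the data buffer is gone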

Re: [Numpy-discussion] Scipy 0.15.0 beta 1 release

2014-11-25 Thread Sturla Molden
David Cournapeau courn...@gmail.com wrote: Shall we consider https://github.com/scipy/scipy/issues/4168 to be a blocker (the issue arises on scipy master as well as 0.14.1)? It is really bad, but does anyone know what is really going on?

Re: [Numpy-discussion] Initializing array from buffer

2014-11-21 Thread Sturla Molden
On 18/11/14 04:21, Robert McGibbon wrote: The np.ndarray constructor http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html takes a strides argument argument, and a buffer. Is it not sufficiently flexible? -Robert AFAIK the buffer argument is not a memory address but an

Re: [Numpy-discussion] Numpy 1.9.1, zeros and alignement

2014-11-19 Thread Sturla Molden
On 18/11/14 23:10, Pauli Virtanen wrote: The second question is whether F2py actually *needs* to check the dtype-size alignment, or is just something like sizeof(double) enough for Fortran compilers. The Fortran standard does not specify an ABI so this is likely compiler dependent. The

Re: [Numpy-discussion] Initializing array from buffer

2014-11-17 Thread Sturla Molden
Andrea Arteaga andyspi...@gmail.com wrote: My use case is the following: we have a some 3D arrays in our C++ framework. The ordering of the elements in these arrays is neither C nor Fortran style: it might be IJK (i.e. C style, 3rd dimension contiguous in memory), KJI (i.e. Fortran style,
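A small sketch (not from the thread) of wrapping externally owned memory with explicit strides through the np.ndarray constructor; a bytearray stands in for the C++ framework's buffer, and the strides below describe a KJI layout (first index fastest in memory):

    import numpy as np

    ni, nj, nk = 2, 3, 4
    itemsize = np.dtype(np.float64).itemsize

    # Stand-in for memory owned by the C++ framework.
    buf = bytearray(ni * nj * nk * itemsize)

    # KJI (Fortran-like) layout: element (i, j, k) lives at
    # byte offset (i + j*ni + k*ni*nj) * itemsize.
    strides = (itemsize, itemsize * ni, itemsize * ni * nj)

    a = np.ndarray(shape=(ni, nj, nk), dtype=np.float64,
                   buffer=buf, strides=strides)

    a[1, 2, 3] = 7.0                    # writes straight into buf
    print(a.flags['F_CONTIGUOUS'])      # True for this stride pattern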

Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-30 Thread Sturla Molden
Nathaniel Smith n...@pobox.com wrote: [*] Actually, we could, but the binaries would be tainted with a viral license. And binaries linked with MKL are tainted by a proprietary license... They have very similar effects, The MKL license is proprietary but not viral. Sturla

Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-29 Thread Sturla Molden
On 29/10/14 10:48, Eelco Hoogendoorn wrote: Id rather have us discuss how to facilitate the integration of as many possible fft libraries with numpy behind a maximally uniform interface, rather than having us debate which fft library is 'best'. I am happy with the NumPy interface. There

Re: [Numpy-discussion] numpy.i and std::complex

2014-10-28 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: The polite, welcoming response to someone coming along with a straightforward, obviously-correct contribution to our SWIG capabilities is "Thank you!", not "perhaps you overestimate the number of NumPy users who use Swig". That was a response to

Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread Sturla Molden
Jerome Kieffer jerome.kief...@esrf.fr wrote: Because the plan creation was taking ages with FFTw, numpy's FFTPACK was often faster (overall) Matlab switched from FFTPACK to FFTW because the latter was faster in general. If FFTW guesses a plan it does not take very long. Actual measurements can

Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread Sturla Molden
David Cournapeau courn...@gmail.com wrote: The real issue with fftw (besides the license) is the need for plan computation, which is expensive (but not needed for each transform). This is not a problem if you tell FFTW to guess a plan instead of making measurements. FFTPACK needs to set

Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-28 Thread Sturla Molden
Pierre Barbier de Reuille pie...@barbierdereuille.net wrote: I would add one element to the discussion: for some (odd) reasons, SciPy is lacking the functions `rfftn` and `irfftn`, functions using half the memory space compared to their non-real equivalent `fftn` and `ifftn`. In both NumPy
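For reference, a trivial usage sketch of the NumPy-side real transforms being discussed:

    import numpy as np

    x = np.random.rand(8, 8, 8)

    # Real-input transforms: the last transformed axis has length n//2 + 1,
    # since the spectrum of real data is Hermitian-symmetric.
    X = np.fft.rfftn(x)
    print(X.shape)                      # (8, 8, 5)

    y = np.fft.irfftn(X, s=x.shape)
    print(np.allclose(x, y))            # True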

Re: [Numpy-discussion] multi-dimensional c++ proposal

2014-10-28 Thread Sturla Molden
Neal Becker ndbeck...@gmail.com wrote: That's harsh! Do you have any specific features you dislike? Are you objecting to the syntax? I have programmed C++ for almost 15 years. But I cannot look at the proposed code and get a mental image of what it does. It is not a specific feature, but

Re: [Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-28 Thread Sturla Molden
Eelco Hoogendoorn hoogendoorn.ee...@gmail.com wrote: Perhaps the 'batteries included' philosophy made sense in the early days of numpy; but given that there are several fft libraries with their own pros and cons, and that most numpy projects will use none of them at all, why should numpy

Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Glen Mabey gma...@swri.org wrote: I'd really like for this to be included alongside numpy.i -- but maybe I overestimate the number of numpy users who use complex data (let your voice be heard!) and who also end up using std::complex in C++ land. I don't think you do. But perhaps you

Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Glen Mabey gma...@swri.org wrote: I chose swig after reviewing the options listed here, and I didn't see cython on the list: http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html It's because that list is old and has not been updated. It has the predecessor to Cython, Pyrex, but

Re: [Numpy-discussion] [EXTERNAL] Re: numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Bill Spotz wfsp...@sandia.gov wrote: Oops, I meant 'Cython is its own language,' not Python (although Python qualifies, too, just not in context). Also, Pyrex, listed in the c-info.python-as-glue.html page, was the pre-cursor to Cython. But when it comes to interfacing NumPy, they are

Re: [Numpy-discussion] multi-dimensional c++ proposal

2014-10-27 Thread Sturla Molden
On 27/10/14 13:14, Neal Becker wrote: The multi-dimensional c++ stuff is interesting (about time!) http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3851.pdf OMG, that API is about as awful as it gets. Obviously it is written by two computer scientists who do not understand what

Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: Please stop haranguing the new guy for not knowing things that you know. I am not doing any of that. You are the only one haranguing here. I usually ignore your frequent impolite comments, but I will make an exception this time and ask you to shut up.

Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
josef.p...@gmail.com wrote: For fft I use mostly scipy, IIRC. (scipy's fft imports numpy's fft, partially?) No. SciPy uses the Fortran library FFTPACK (wrapped with f2py) and NumPy uses a smaller C library called fftpack_lite. Algorithmically they are are similar, but fftpack_lite has fewer

Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
Sturla Molden sturla.mol...@gmail.com wrote: If we really need a kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or Apple's Accelerate Framework, I should perhaps also mention FFTS here, which claims to be faster than FFTW and has a BSD licence: http://anthonix.com/ffts

Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
josef.p...@gmail.com wrote: https://github.com/scipy/scipy/blob/e758c482efb8829685dcf494bdf71eeca3dd77f0/scipy/signal/signaltools.py#L13 doesn't seem to mind mixing numpy and

Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
Matthew Brett matthew.br...@gmail.com wrote: Is this an option for us? Aren't we a little behind the performance curve on FFT after we lost FFTW? It does not run on Windows because it uses POSIX to allocate executable memory for tasklets, as I understand it. By the way, why did we lose

Re: [Numpy-discussion] parallel compilation with numpy.distutils in numpy 1.10

2014-10-10 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote: There is still one problem in regards to parallelizing fortran 90. The ccompiler.py contains following comment: # build any sources in same order as they were originally specified # especially important for fortran .f90 files using

Re: [Numpy-discussion] parallel compilation with numpy.distutils in numpy 1.10

2014-10-10 Thread Sturla Molden
Sturla Molden sturla.mol...@gmail.com wrote: When a Fortran module is compiled, the compiler emits an object file (.o) and a module file (.mod). The module file plays the role of a header file in C. So when another Fortran file imports the module with a use statement, the compiler looks

Re: [Numpy-discussion] parallel compilation with numpy.distutils in numpy 1.10

2014-10-10 Thread Sturla Molden
Sturla Molden sturla.mol...@gmail.com wrote: So the Fortran 90 files create a directed acyclic graph. To compute in parallel Eh, *compile* in parallel. Sturla

Re: [Numpy-discussion] parallel compilation with numpy.distutils in numpy 1.10

2014-10-10 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote: thanks for the explanation. Modules are only available with f90 right? f77 files do not have these generated interdependencies? being able to handle f77 would already be quite good, as it should at least cover current scipy. One can look at

Re: [Numpy-discussion] Copyright status of NumPy binaries on Windows/OS X

2014-10-09 Thread Sturla Molden
Travis Oliphant tra...@continuum.io wrote: A good mingw64 stack for Windows would be great and benefits many communities. Carl Kleffner has made 32- and 64-bit mingw stacks compatible with Python. E.g. the stack alignment in the 32-bit version is different from the vanilla mingw distribution.

Re: [Numpy-discussion] Copyright status of NumPy binaries on Windows/OS X

2014-10-09 Thread Sturla Molden
Travis Oliphant tra...@continuum.io wrote: A good mingw64 stack for Windows would be great and benefits many communities. BTW: Carl Kleffner's mingw toolchains are here: Documentation: https://github.com/numpy/numpy/wiki/Mingw-static-toolchain Downloads:

Re: [Numpy-discussion] Copyright status of NumPy binaries on Windows/OS X

2014-10-08 Thread Sturla Molden
Travis Oliphant tra...@continuum.io wrote: Microsoft has actually released their Visual Studio 2008 compiler stack so that OpenBLAS and ATLAS could be compiled on Windows for these platforms as well. I would be very interested to see conda packages for these libraries which should be pretty

Re: [Numpy-discussion] skip samples in random number generator

2014-10-02 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: No one needs small jumps of arbitrary size. The real use case for jumping is to make N parallel streams that won't overlap. You pick a number, let's call it `jump_steps`, much larger than any single run of your system could possibly consume (i.e. the
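For illustration only: the fixed-jump scheme Robert describes is what later became available in NumPy's newer bit-generator API (Generator/PCG64, added long after this 2014 thread and separate from the RandomState machinery being discussed here):

    import numpy as np

    n_streams = 4
    base = np.random.PCG64(12345)

    # Stream i is jumped far ahead of the base state (a fixed, very large
    # jump per step), so parallel streams cannot overlap in practice.
    streams = [np.random.Generator(base.jumped(i)) for i in range(n_streams)]

    for i, rng in enumerate(streams):
        print(i, rng.standard_normal(3))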

Re: [Numpy-discussion] skip samples in random number generator

2014-10-02 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: Yes, but that would require rewriting much of numpy.random to allow replacing the core generator. This would work out-of-box because it's just manipulating the state of the current core generator. Yes, then we just need to sacrifice a year's worth of

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-10 Thread Sturla Molden
Pierre-Andre Noel noel.pierre.an...@gmail.com wrote: Why not do something like the C++11 <random>? In <random>, a generator is the engine producing randomness, and a distribution decides what is the type of outputs that you want. This is what randomkit is doing internally, which is why it is

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-10 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote: But as already mentioned by Robert, we know what we can do, what is missing is someone writting the code. This is actually a part of NumPy I know in detail, so I will be able to contribute. Robert Kern's last post about objects like

Re: [Numpy-discussion] @ operator

2014-09-10 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: Note also that the dot cblas versions are not generally blocked, so the size of the arrays is limited (and not checked). But it is possible to create a blocked dot function with the current cblas, even though they use C int for array

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-09 Thread Sturla Molden
On 09/09/14 20:08, Nathaniel Smith wrote: There's also another reason why generator decisions should be part of the RandomState object itself: we may want to change the distribution methods themselves over time (e.g., people have been complaining for a while that we use a suboptimal method

Re: [Numpy-discussion] Generalize hstack/vstack -- stack; Block matrices like in matlab

2014-09-08 Thread Sturla Molden
Stefan Otte stefan.o...@gmail.com wrote: stack([[a, b], [c, d]]) In my case `stack` replaced `hstack` and `vstack` almost completely. If you're interested in including it in numpy I created a pull request [1]. I'm looking forward to getting some feedback! As far as I can see, it uses
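The proposed stack([[a, b], [c, d]]) can be emulated with nested hstack/vstack; later NumPy releases added np.block for exactly this. A tiny sketch:

    import numpy as np

    a = np.zeros((2, 2))
    b = np.eye(2)
    c = np.eye(2)
    d = np.ones((2, 2))

    # Equivalent of the proposed stack([[a, b], [c, d]]):
    m = np.vstack([np.hstack([a, b]),
                   np.hstack([c, d])])
    print(m.shape)              # (4, 4)

    # Modern NumPy spells this np.block([[a, b], [c, d]]).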

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-07 Thread Sturla Molden
Benjamin Root ben.r...@ou.edu wrote: In addition to issues with reproducibility, think of all of the unit tests that would break! That is a reproducibility problem :)

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-05 Thread Sturla Molden
On 05/09/14 19:19, Neal Becker wrote: It's a variant of the standard MT rather than just an implementation of it, so we can't just drop it in. You will need to build the infrastructure to support multiple PRNGs first (or rather, build the infrastructure to reuse the non-uniform

Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-05 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: No, that is not what I meant. If the SFMT can be made to output the same bitstream for the same seed, we can use it (modifying it if necessary to avoid global state), but it does not look so to me. I welcome corrections on that count (in

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Matti Picus matti.pi...@gmail.com wrote: Thanks for your prompt reply. I think numpy is a wonderful project, and you all do a great job moving it forward. If you ask what would my vision for maturing numpy, I would like to see a grouping of linalg matrix-operation functionality into a

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: - Consider using blas_lite instead of cblas, but that is now independent of the previous steps. It should also be possible to build reference cblas on top of blas_lite. (Or just create a wrapper for the parts of cblas we need.) Sturla

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: - Move _dotblas down into multiarray 1. When there is cblas, add cblas implementations of descr->f->dot. 2. Reimplement API matrixproduct2 3. Make ndarray.dot a first class method and use it for numpy.dot. - Implement matmul

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Ralf Gommers ralf.gomm...@gmail.com wrote: That's not possible. The only way you can do that is move the hard dependency on BLAS & LAPACK to numpy, which we don't want to do. But NumPy already depends on BLAS and LAPACK, right?

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Ralf Gommers ralf.gomm...@gmail.com wrote: No. Numpy uses those libs when they're detected, but it falls back on its own dot implementation if they're not found. From first bullet under a

Re: [Numpy-discussion] NumPy-Discussion OpenBLAS and dotblas

2014-08-12 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote: BLAS/LAPACK are heavy dependencies that often give problems, which is why you don't want to require them for the casual user that only needs numpy arrays to make some plots for examples. Maybe we are not talking about the same thing, but isn't
