Re: [Numpy-discussion] allocated memory cache for numpy

2014-02-18 Thread Sturla Molden
I am cross-posting this to the Cython user group to make sure they see this. Sturla Nathaniel Smith wrote: > On 18 Feb 2014 10:21, "Julian Taylor" wrote: > > On Mon, Feb 17, 2014 at 9:42 PM, Nathaniel Smith wrote: > On 17 Feb 2014 15:17, "Sturla Molden"

Re: [Numpy-discussion] Suggestion: Port Theano RNG implementation to NumPy

2014-02-18 Thread Sturla Molden
AFAIK, CMRG (MRG31k3p) is more equidistributed than Mersenne Twister, but the period is much shorter. However, MT is getting acceptance as the PRNG of choice for numerical work. And when we are doing stochastic simulations in Python, the speed of the PRNG is unlikely to be the bottleneck. Sturla

Re: [Numpy-discussion] Proposal to make power return float, and other such things.

2014-02-18 Thread Sturla Molden
Nathaniel Smith wrote: > Perhaps integer power should raise an error on negative powers? That way > people will at least be directed to use arr ** -1.0 instead of silently > getting nonsense from arr ** -1. That sounds far better than silently returning erroneous results. Sturla
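For illustration, a minimal sketch of the distinction being discussed (the array values are made up; note that later NumPy releases did in fact make negative integer exponents on integer arrays raise an error):

```python
import numpy as np

# A float exponent forces float output, so the reciprocals come out right,
# instead of the silent integer nonsense the thread is worried about.
arr = np.array([1, 2, 4])
recip = arr ** -1.0   # float result: [1.0, 0.5, 0.25]
```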

Re: [Numpy-discussion] Proposal to make power return float, and other such things.

2014-02-18 Thread Sturla Molden
Robert Kern wrote: > We're talking about numpy.power(), not just ndarray.__pow__(). The > equivalence of the two is indeed an implementation detail, but I do > think that it is useful to maintain the equivalence. If we didn't, it > would be the only exception, to my knowledge. But in this case i

Re: [Numpy-discussion] Proposal to make power return float, and other such things.

2014-02-18 Thread Sturla Molden
Robert Kern wrote: > That's fine if you only have one value for each operand. When you have > multiple values for each operand, say an exponent array containing > both positive and negative integers, that becomes a problem. I don't really see why. If you have a negative integer in there you get

Re: [Numpy-discussion] Proposal to make power return float, and other such things.

2014-02-18 Thread Sturla Molden
Charles R Harris wrote: > This is apropos issue #899 <https://github.com/numpy/numpy/issues/899>, > where it is suggested that power promote integers to float. That sounds > reasonable to me, but such a change in behavior makes it a bit iffy. > >

Re: [Numpy-discussion] allocated memory cache for numpy

2014-02-18 Thread Sturla Molden
Julian Taylor wrote: > I was thinking of something much simpler, just a layer of pointer stacks > for different allocations sizes, the larger the size the smaller the > cache with pessimistic defaults. > e.g. the largest default cache layer is 128MB and with one or two > entries so we can cache te

Re: [Numpy-discussion] allocated memory cache for numpy

2014-02-17 Thread Sturla Molden
Nathaniel Smith wrote: > Also, I'd be pretty wary of caching large chunks of unused memory. People > already have a lot of trouble understanding their program's memory usage, > and getting rid of 'greedy free' will make this even worse. A cache would only be needed when there is a lot of computin

Re: [Numpy-discussion] allocated memory cache for numpy

2014-02-17 Thread Sturla Molden
Julian Taylor wrote: > When an array is created it tries to get its memory from the cache and > when it's deallocated it returns it to the cache. Good idea, however there is already a C function that does this. It uses a heap to keep the cached memory blocks sorted according to size. You know it
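The "stacks per allocation size" idea from the thread can be sketched in a few lines of Python (a toy illustration only; the real proposal is C code inside NumPy, and all names here are made up):

```python
import numpy as np
from collections import defaultdict

# Size-keyed cache of freed buffers: freed arrays go onto a per-size
# stack and are handed back on the next same-size request.
_cache = defaultdict(list)   # size -> stack of free arrays
_MAX_PER_SIZE = 2            # pessimistic default, as suggested in the thread

def cached_empty(n):
    stack = _cache[n]
    return stack.pop() if stack else np.empty(n)

def cached_free(arr):
    stack = _cache[arr.size]
    if len(stack) < _MAX_PER_SIZE:
        stack.append(arr)    # cache it; otherwise drop it and let it be freed

a = cached_empty(1024)
cached_free(a)
b = cached_empty(1024)   # served from the cache, no fresh allocation
```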

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
Sturla Molden wrote: > wrote: > maybe -1 >> >> statsmodels is using np.linalg.pinv which uses svd >> I never ran heard of any crash (*), and the only time I compared with >> scipy I didn't like the slowdown. > > If you did care about speed in least-

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
wrote: maybe -1 > > statsmodels is using np.linalg.pinv which uses svd > I never heard of any crash (*), and the only time I compared with > scipy I didn't like the slowdown. If you did care about speed in least-squares fitting you would not call QR or SVD directly, but use the builtin LAPA

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
Sturla Molden wrote: > Dave Hirschfeld wrote: > >> Even if lapack_lite always performed the isfinite check and threw a python >> error if False, it would be much better than either hanging or segfaulting >> and >> people who care about the isfinite cost proba

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
Dave Hirschfeld wrote: > Even if lapack_lite always performed the isfinite check and threw a python > error if False, it would be much better than either hanging or segfaulting > and > people who care about the isfinite cost probably would be linking to a fast > lapack anyway. +1 (if I have
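The suggested guard can be sketched as a thin wrapper (illustrative only, not NumPy's API; the function name is made up):

```python
import numpy as np

def checked_svd(a):
    """Fail fast with a Python exception instead of handing NaN/Inf to LAPACK."""
    a = np.asarray(a)
    if not np.isfinite(a).all():
        raise ValueError("array must not contain infs or NaNs")
    return np.linalg.svd(a)

u, s, vt = checked_svd(np.eye(3))   # singular values of the identity are all 1
```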

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
wrote: > I use official numpy release for development, Windows, 32bit python, > i.e. MingW 3.5 and whatever old ATLAS the release includes. > > a constant 13% cpu usage is 1/8 th of my 8 virtual cores. Based on this and Alex' message it seems the offender is the f2c generated lapack_lite librar

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
Jason Grout wrote: > For what my vote is worth, -1. I thought this was pretty much the > designed difference between the scipy and numpy linalg routines. Scipy > does the checking, and numpy provides the raw speed. Maybe this is > better resolved as a note in the documentation for numpy ab

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Sturla Molden
Dave Hirschfeld wrote: > It certainly shouldn't crash or hang though and for me at least it doesn't - > it returns NaN which immediately suggests to me that I've got bad input > (maybe just because I've seen it before). It might be dependent on the BLAS or LAPACK version. Since you are on Anac

Re: [Numpy-discussion] svd error checking vs. speed

2014-02-15 Thread Sturla Molden
wrote: > copy of np.pinv used in linear regression > https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/tools.py#L348 > (it's a recent change to streamline some of the linalg in regression, > and master only) Why not call lapack routine DGELSS instead? It does exactly this,
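The "one LAPACK call instead of pinv-then-multiply" point can be shown with `np.linalg.lstsq`, which wraps an SVD-based LAPACK least-squares driver (xGELSD, a close cousin of the DGELSS mentioned here); the data is a made-up line fit:

```python
import numpy as np

# Fit y = c0 + c1*x by least squares in a single LAPACK-backed call,
# rather than forming pinv(A) explicitly and multiplying.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
coef, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```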

Re: [Numpy-discussion] libflatarray

2014-02-13 Thread Sturla Molden
Neal Becker wrote: > I thought this was interesting: > > http://www.libgeodecomp.org/libflatarray.html This is mostly flawed thinking. Nowadays, CPUs are much faster than memory access, and the gap is just increasing. In addition, CPUs have hierarchical memory (several layers of cache). Most alg

Re: [Numpy-discussion] Overlapping time series

2014-02-11 Thread Sturla Molden
Daniele Nicolodi wrote: > Sure, I realize that, thank for the clarification. The arrays are quite > small, then the three loops and the temporary take negligible time and > memory in the overall processing. If they are small, a Python loop would do the job as well. And if it doesn't, it is just

Re: [Numpy-discussion] Overlapping time series

2014-02-11 Thread Sturla Molden
Daniele Nicolodi wrote: > That's more or less my current approach (except that I use the fact that > the data is evenly sampled to use np.where(np.diff(t1) != dt) to detect > the regions of continuous data, to avoid the loop. I hope you realize that np.where(np.diff(t1) != dt) generates three lo

Re: [Numpy-discussion] Overlapping time series

2014-02-11 Thread Sturla Molden
Daniele Nicolodi wrote: > I was probably not that clear: I have two 2xN arrays, one for each data > recording, one column for time (taken from the same clock for both > measurements) and one with data values. Each array has some gaps. If you want all subarrays where both timeseries are sampled,
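One vectorized way to keep only the samples present in both recordings is `np.intersect1d` on the timestamp columns (a sketch with made-up data; `return_indices` is available in NumPy 1.15+):

```python
import numpy as np

# Hypothetical data: two recordings on a shared clock, each with gaps.
t1 = np.array([0, 1, 2, 5, 6, 7]);  x1 = t1 * 10.0
t2 = np.array([1, 2, 3, 5, 6]);     x2 = t2 * 100.0

# Keep only the samples whose timestamps occur in both recordings.
common, i1, i2 = np.intersect1d(t1, t2, return_indices=True)
x1c, x2c = x1[i1], x2[i2]
```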

Re: [Numpy-discussion] Overlapping time series

2014-02-11 Thread Sturla Molden
Daniele Nicolodi wrote: > Correct me if I'm wrong, but this assumes that missing data points are > represented with NaN. In my case missing data points are just missing. Then your data cannot be stored in a 2 x N array as you indicated. Sturla

Re: [Numpy-discussion] Overlapping time series

2014-02-11 Thread Sturla Molden
Daniele Nicolodi wrote: > I can imagine strategies about how to approach the problem, but none > that would be efficient. Ideas? I would just loop from the start and loop from the end and find out where to clip. Then slice in between. If Python loops take too much time, JIT compile them with
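A vectorized stand-in for the clip-from-both-ends loops, using `np.searchsorted` on sorted time arrays (the helper name and data are made up):

```python
import numpy as np

def clip_overlap(t1, t2):
    """Slices selecting the overlapping time span of two sorted time arrays."""
    start = max(t1[0], t2[0])    # latest start
    stop = min(t1[-1], t2[-1])   # earliest end
    s1 = slice(np.searchsorted(t1, start), np.searchsorted(t1, stop, side='right'))
    s2 = slice(np.searchsorted(t2, start), np.searchsorted(t2, stop, side='right'))
    return s1, s2

t1 = np.arange(0, 10)
t2 = np.arange(5, 15)
s1, s2 = clip_overlap(t1, t2)   # both slices select times 5..9
```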

Re: [Numpy-discussion] deprecate numpy.matrix

2014-02-11 Thread Sturla Molden
Pauli Virtanen wrote: > [1] http://fperez.org/py4science/numpy-pep225/numpy-pep225.html I have to agree with Robert Kern here. One operator that we can (ab)use for matrix multiplication would suffice. Sturla

Re: [Numpy-discussion] deprecate numpy.matrix

2014-02-11 Thread Sturla Molden
Pauli Virtanen wrote: > It is not a good thing that there is no well defined > "domain specific language" for matrix algebra in Python. Perhaps Python should get some new operators? __dot__ __cross__ __kronecker__ Obviously, using them in real Python code would require unicode. ;-) On the se

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-07 Thread Sturla Molden
Thomas Unterthiner wrote: > Sorry for going a bit off-topic, but: do you have any links to the > benchmarks? I googled around, but I haven't found anything. FWIW, on my > own machines OpenBLAS is on par with MKL (on an i5 laptop and an older > Xeon server) and actually slightly faster than A

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-06 Thread Sturla Molden
Charles R Harris wrote: > The Eigen license is MPL-2. That doesn't look to be incompatible with > BSD, but it may complicate things. Q8: I want to distribute (outside my > organization) executable programs or libraries that I have compiled from > someone else's unchanged MPL-licensed source code,

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-06 Thread Sturla Molden
I just thought I'd mention this: Why not use Eigen? It has a full BLAS implementation and passes the BLAS test suite, and is generally faster than MKL except on Core i7. Also, Eigen requires no build process, just include the header files. Yes, Eigen is based on C++, but OpenBLAS is partially cod

Re: [Numpy-discussion] striding through arbitrarily large files

2014-02-04 Thread Sturla Molden
RayS wrote: > Thanks Daniele, I'll be trying mmap with Python64. With 32 bit the > mmap method throws MemoryError with 2.5GB files... > The idea is that we allow the users to inspect the huge files > graphically, then they can "zoom" into regions of interest and then > load a ~100 MB en block
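The "zoom graphically, then load a small block" workflow maps directly onto `np.memmap`, which reads only the pages actually touched (a sketch; the file name and sizes are made up):

```python
import os
import tempfile
import numpy as np

# Write a sample binary file, then view it through np.memmap so only
# the touched pages are read from disk.
path = os.path.join(tempfile.mkdtemp(), "big.dat")
np.arange(1_000_000, dtype=np.float32).tofile(path)

m = np.memmap(path, dtype=np.float32, mode="r")
decimated = m[::1000]                  # cheap strided view for an overview plot
chunk = np.array(m[500_000:500_100])   # copy a small region en bloc for zooming in
```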

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-02 Thread Sturla Molden
Sturla Molden wrote: > > Yes, it seems to be a GNU problem: > > http://bisqwit.iki.fi/story/howto/openmp/#OpenmpAndFork > > This Howto also claims Intel compilers are not affected. It seems another patch has been proposed to the libgomp team today: http://gcc.gnu.org/bugzil

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-02 Thread Sturla Molden
Carl Kleffner wrote: > If you work in an academia world it can be relevant once third parties > are involved in a bigger project. A situation may be reached, where you > just have to prove the license situation of all of your software components. If you involve third parties outside academia, y

Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-02 Thread Sturla Molden
Carl Kleffner wrote: > In the case of numpy-MKL the MKL binaries are statically linked to the > pyd-files. Given the usefulness, performance and robustness of the > MKL-based binaries a definite answer to this question would be desirable. > Say: "Can I use and redistribute a product with a pre

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-30 Thread Sturla Molden
On 26/01/14 13:44, Dinesh Vadhia wrote: > This conversation gets discussed often with Numpy developers but since > the requirement for optimized Blas is pretty common these days, how > about distributing Numpy with OpenBlas by default? People who don't > want optimized BLAS or OpenBLAS can then

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-30 Thread Sturla Molden
-based numpy/scipy > - The licence question of numpy-MKL is unclear. I know that MKL is > linked in statically. But can I redistribute it myself or use it in a > commercial context without buying an Intel licence? > > Carl > > > 2014-01-30 Sturla Molden <mailto:sturla.mo

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-30 Thread Sturla Molden
On 30/01/14 12:01, Carl Kleffner wrote: > My conclusion is: mixing different compiler architectures for building > Python extensions on Windows is possible but makes it necessary to build > a 'vendor' gcc toolchain. Right. This makes a nice twist on the infamous XML and Regex story: - There onc

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-30 Thread Sturla Molden
On 27/01/14 12:01, Carl Kleffner wrote: > Did you consider to check the experimental binaries on > https://code.google.com/p/mingw-w64-static/ for Python-2.7? These > binaries have been built with a customized mingw-w64 toolchain. > These builds are fully statically built and are linked against t

Re: [Numpy-discussion] windows and C99 math

2014-01-27 Thread Sturla Molden
Julian Taylor wrote: > Are our binary builds for windows not correct or does windows just not > support C99 math? Microsoft's C compiler does not support C99. It is not an OS issue. Use gcc, clang or Intel icc instead, and C99 is supported. Sturla

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-26 Thread Sturla Molden
Julian Taylor wrote: > the use of gnu openmp is probably be the problem, forking and gomp is > only possible in very limited circumstances. > see e.g. https://github.com/xianyi/OpenBLAS/issues/294 > > maybe it will work with clangs intel based openmp which should be coming > soon. > the current

Re: [Numpy-discussion] MKL and OpenBLAS

2014-01-26 Thread Sturla Molden
Julian Taylor wrote: > if this issue disqualifies accelerate, it also disqualifies openblas as > a default. openblas has the same issue, we stuck a big fat warning into > the docs (site.cfg) for this now as people keep running into it. What? Last time I checked, OpenBLAS (and GotoBLAS2) used Ope

Re: [Numpy-discussion] Numpy arrays vs typed memoryviews

2014-01-25 Thread Sturla Molden
I think I have said this before, but it's worth a repeat: Pickle (including cPickle) is a slow hog! That might not be the overhead you see, you just haven't noticed it yet. I saw this some years ago when I worked on shared memory arrays for NumPy (cf. my account on Github). Shared memory really

Re: [Numpy-discussion] algorithm for faster median calculation ?

2013-01-15 Thread Sturla Molden
On 15.01.2013 20:50, Sturla Molden wrote: You might want to look at this first: https://github.com/numpy/numpy/issues/1811 Yes it is possible to compute the median faster by doing quickselect instead of quicksort. Best case O(n) for quickselect, O(n log n) for quicksort. But adding selection

Re: [Numpy-discussion] algorithm for faster median calculation ?

2013-01-15 Thread Sturla Molden
You might want to look at this first: https://github.com/numpy/numpy/issues/1811 Yes it is possible to compute the median faster by doing quickselect instead of quicksort. Best case O(n) for quickselect, O(n log n) for quicksort. But adding selection and partial sorting to NumPy is a bigger is
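Partial sorting did eventually land in NumPy as `np.partition` (1.8), which makes the quickselect-based median the thread asks for easy to sketch (the helper name is made up):

```python
import numpy as np

def median_via_partition(a):
    """Expected O(n) median via introselect, instead of a full O(n log n) sort."""
    a = np.asarray(a)
    n = a.size
    k = n // 2
    if n % 2:
        # Odd length: the k-th order statistic is the median.
        return np.partition(a, k)[k]
    # Even length: need both middle order statistics.
    p = np.partition(a, [k - 1, k])
    return 0.5 * (p[k - 1] + p[k])

m = median_via_partition([5, 1, 4, 2, 3])   # median of 1..5 is 3
```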

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 21:24, Henry Gomersall wrote: > I didn't know that. It's a real pain having so many libc libs knocking > around. I have little experience of Windows, as you may have guessed! Originally there was only one system-wide CRT on Windows (msvcrt.dll), which is why MinGW links with that

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 21:13, Sturla Molden wrote: > Because if CRT resources are shared between different CRT versions, bad > things will happen (the ABIs are not equivalent, errno and other globals > are at different addresses, etc.) For example, PyErr_SetFromErrno will return garbage if

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 21:03, Henry Gomersall wrote: > Why is it important? (for my own understanding) Because if CRT resources are shared between different CRT versions, bad things will happen (the ABIs are not equivalent, errno and other globals are at different addresses, etc.) Cython code tends to s

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 20:57, Sturla Molden wrote: > On 20.12.2012 20:52, Henry Gomersall wrote: > >> Perhaps the DLL should go and read MS's edicts! > > Do you link with the same CRT as Python? (msvcr90.dll) > > You should always use -lmsvcr90. > > If you don't,

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 20:52, Henry Gomersall wrote: > Perhaps the DLL should go and read MS's edicts! Do you link with the same CRT as Python? (msvcr90.dll) You should always use -lmsvcr90. If you don't, you will link with msvcrt.dll. Sturla

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 18:38, Henry Gomersall wrote: > Except I build with MinGW. Please don't tell me I need to install Visual > Studio... I have about 1GB free on my windows partition! The same DLL is used as CRT. Sturla

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 20.12.2012 17:47, Henry Gomersall wrote: > On Thu, 2012-12-20 at 17:26 +0100, Sturla Molden wrote: >>return tmp[offset:offset+N]\ >> .view(dtype=d)\ >> .reshape(shape, order=order) > > Also, just for the e
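The over-allocate-and-slice trick quoted above can be filled out into a complete helper (a reconstruction under assumptions, not the pyFFTW code itself; the function name is made up):

```python
import numpy as np

def aligned_empty(shape, dtype=np.float64, alignment=32, order='C'):
    """Allocate an array whose data pointer is aligned to `alignment` bytes,
    by over-allocating raw bytes and slicing at the right offset."""
    d = np.dtype(dtype)
    nbytes = int(np.prod(shape)) * d.itemsize
    tmp = np.empty(nbytes + alignment, dtype=np.uint8)
    # Offset that moves the data pointer up to the next aligned address.
    offset = (-tmp.ctypes.data) % alignment
    return tmp[offset:offset + nbytes].view(dtype=d).reshape(shape, order=order)

a = aligned_empty((4, 4), alignment=32)   # suitable for SSE/AVX loads
```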

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 19.12.2012 19:25, Henry Gomersall wrote: > That is not true at least under Windows 32-bit. I think also it's not > true for Linux 32-bit from my vague recollections of testing in a > virtual machine. (disclaimer: both those statements _may_ be out of > date). malloc is required to return memor

Re: [Numpy-discussion] Byte aligned arrays

2012-12-20 Thread Sturla Molden
On 19.12.2012 09:40, Henry Gomersall wrote: > I've written a few simple cython routines for assisting in creating > byte-aligned numpy arrays. The point being for the arrays to work with > SSE/AVX code. > > https://github.com/hgomersall/pyFFTW/blob/master/pyfftw/utils.pxi Why use Cython? http://m

Re: [Numpy-discussion] Building numpy with OpenBLAS

2012-12-14 Thread Sturla Molden
On 14.12.2012 10:17, Sergey Bartunov wrote: > Now things went even worse. I assume that numpy built with BLAS and > LAPACK should do dot operation faster than "clean" installation on > relatively large matrices (say 2000 x 2000). Here I don't use OpenBLAS > anyway. No, _dotblas is only built agai

Re: [Numpy-discussion] Proposal to drop python 2.4 support in numpy 1.8

2012-12-14 Thread Sturla Molden
So when upgrading everything you prefer to keep the bugs in 2.6 that were squashed in 2.7? Who has taught IT managers that older and more buggy versions of software are more "professional" and better for corporate environments? Sturla On 14 Dec 2012, at 05:14, Raul Cota wrote: > > +1 from

Re: [Numpy-discussion] Proposal to drop python 2.4 support in numpy 1.8

2012-12-13 Thread Sturla Molden
Yes, and ditto for SciPy. With dropped 2.4 support we can also use the new memoryview syntax instead of ndarray syntax in Cython. That is more important for SciPy, but it has some relevance for NumPy too. Sturla Sent from my iPad On 13 Dec 2012, at 17:34, Charles R Harris wrote: > Time t

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-21 Thread Sturla Molden
On 21.11.2012 15:55, Nathaniel Smith wrote: > I think the point is that it's easy for programmers to decide to avoid > GCD if they want to use multiprocessing. But it's not so easy for them > to decide to avoid BLAS. Actually the answer from Apple was that no API except POSIX is supported on bot

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-21 Thread Sturla Molden
But if this issue is in the GCD, it will probably affect other applications that use the GCD and fork without exec as well. Unless you are certain the GCD is not used, a fork would never be safe without an exec. Sturla On 21.11.2012 15:45, Sturla Molden wrote: > Ok, so using BLAS on each s

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-21 Thread Sturla Molden
GIL sucks 10x more on Windows than Mac or Linux. Sturla On 21.11.2012 15:37, David Cournapeau wrote: > On Wed, Nov 21, 2012 at 2:31 PM, Sturla Molden wrote: >> On 21.11.2012 15:01, David Cournapeau wrote: >>> On Wed, Nov 21, 2012 at 12:00 PM, Sturla Molden wrote: >>&g

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-21 Thread Sturla Molden
On 21.11.2012 15:01, David Cournapeau wrote: > On Wed, Nov 21, 2012 at 12:00 PM, Sturla Molden wrote: >> But do we need a binary OpenBLAS on Mac? Isn't there an accelerate framework >> with BLAS and LAPACK on that platform? > > Because of this: https://gist.gi

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-21 Thread Sturla Molden
But do we need a binary OpenBLAS on Mac? Isn't there an Accelerate framework with BLAS and LAPACK on that platform? Sturla Sent from my iPad On 21 Nov 2012, at 12:44, David Cournapeau wrote: > On Wed, Nov 21, 2012 at 10:56 AM, Henry Gomersall wrote: >> On Wed, 2012-11-21 at 10:49 +, D

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-20 Thread Sturla Molden
On 20.11.2012 15:38, David Cournapeau wrote: > I support this as well in principle for our binary release: one issue > is that we don't have the infrastructure on mac to build an installer > with multi-arch support, and we can't assume every mac out there has > SSE 3 or 4 available. Perhaps we co

Re: [Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-19 Thread Sturla Molden
On 19.11.2012 18:42, Dag Sverre Seljebotn wrote: > Even on CPUs that are not directly supported, this is at least better > than reference BLAS. > > (On our AMD CPUs, which are too new to have a separate OpenBLAS > implementation, the implementations for older AMD CPUs still outperform > at least I

[Numpy-discussion] Use OpenBLAS for the binary releases?

2012-11-19 Thread Sturla Molden
I think NumPy and SciPy should consider to use OpenBLAS (a fork of GotoBLAS2) instead of ATLAS or f2c'd Netlib BLAS for the binary releases. Here are its virtues: * Very easy to build: Just a makefile, no configuration script or special build tools. * Building ATLAS can be a PITA. So why bothe

Re: [Numpy-discussion] Fwd: [numpy] ENH: Initial implementation of a 'neighbor' calculation (#303)

2012-10-12 Thread Sturla Molden
If the language is confusing, now is the time to change the names. > > Kindest regards, > Tim > > On Fri, Oct 12, 2012 at 8:33 AM, Sturla Molden wrote: >> On 10.10.2012 15:42, Nathaniel Smith wrote: >> > This PR submitted a few months ago adds a substantial new API t

Re: [Numpy-discussion] Fwd: [numpy] ENH: Initial implementation of a 'neighbor' calculation (#303)

2012-10-12 Thread Sturla Molden
On 10.10.2012 15:42, Nathaniel Smith wrote: > This PR submitted a few months ago adds a substantial new API to numpy, > so it'd be great to get more review. No-one's replied yet, though... > > Any thoughts, anyone? Is it useful, could it be better...? Fast neighbor search is what scipy.spatial.cKD

Re: [Numpy-discussion] Double-ended queues

2012-09-25 Thread Sturla Molden
On 25.09.2012 11:38, Nathaniel Smith wrote: > Implementing a ring buffer on top of ndarray would be pretty > straightforward and probably work better than a linked-list > implementation. Amazingly, many do not know that a ring buffer is simply an array indexed modulo its length: foo = np.zeros(
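The array-indexed-modulo-its-length idea can be sketched in full (a toy illustration; the buffer size and values are made up):

```python
import numpy as np

n = 8
buf = np.zeros(n)   # the ring buffer is just a fixed-size array
head = 0            # total number of pushes so far

def push(x):
    """Append one value, overwriting the oldest once the buffer is full."""
    global head
    buf[head % n] = x   # index modulo the length -- that is the whole trick
    head += 1

for v in range(12):
    push(float(v))   # the first 4 values get overwritten
```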

Re: [Numpy-discussion] A sad day for our community. John Hunter: 1968-2012.

2012-08-30 Thread Sturla Molden
. Hunter in the Acknowledgement. I encourage everyone else who is using Matplotlib for their research to do the same. Sturla Molden Ph.D. On 30.08.2012 04:57, Fernando Perez wrote: > Dear friends and colleagues, > > [please excuse a possible double-post of this message, in-flight &

Re: [Numpy-discussion] Licensing question

2012-08-06 Thread Sturla Molden
But the Fortran FFTPACK is GPL, or has the licence been changed? http://people.sc.fsu.edu/~jburkardt/f77_src/fftpack5.1/fftpack5.1.html Sturla Sent from my iPad On 3 Aug 2012, at 07:52, Travis Oliphant wrote: > This should be completely fine. The fftpack.h file indicates that fftpack >

Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Sturla Molden
On 04.07.2012 01:59, Sturla Molden wrote: > But neither was the case here. The allocatable was a dummy variable in > a subroutine's interface, declared with intent(out). That is an error > the compiler should trap, because it is doomed to segfault. Ok, so the answer here se

Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Sturla Molden
On 03.07.2012 20:38, Casey W. Stark wrote: > > Sturla, this is valid Fortran, but I agree it might just be a bad > idea. The Fortran 90/95 Explained book mentions this in the > allocatable dummy arguments section and has an example using an array > with allocatable, intent(out) in a subroutine

Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Sturla Molden
On 03.07.2012 19:24, Pearu Peterson wrote: > > One can have allocatable arrays in a module data block, for instance, where > they are global In Fortran 2003 one can also have allocatable arrays as members in derived types. But neither was the case here. The allocatable was a dummy variable in a su

Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Sturla Molden
On 03.07.2012 11:54, George Nurser wrote: >> module zp >>implicit none >>contains >>subroutine ics(..., num_particles, particle_mass, positions, velocities) >> use data_types, only : dp >> implicit none >> ... inputs ... >> integer, intent(out) :: num_particles >>

Re: [Numpy-discussion] f2py with int8

2012-04-19 Thread Sturla Molden
On 17.04.2012 07:32, John Mitchell wrote: > Hi, > > I am using f2py to pass a numpy array of type numpy.int8 to fortran. It > seems like I am misunderstanding something because I just can't make it > work. > > Here is what I am doing. > > PYTHON > b=numpy.array(numpy.zeros(shape=(10,),dtype=numpy.

Re: [Numpy-discussion] numpy videos

2012-03-13 Thread Sturla Molden
he desktop computer. Sturla /* (C) Sturla Molden, University of Oslo */ #include #include #include #include typedef struct { TCHAR filename [MAX_PATH + 1]; HANDLE hFile, hMap; SIZE_T size; void*data; } blob; #define PAGE_SIZE 4096 /* #define EXPORT __declspe

Re: [Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

2012-03-11 Thread Sturla Molden
On 11.03.2012 23:11, Travis Oliphant wrote: * Numba will be much closer to Cython in spirit than Unladen Swallow (or PyPy) --- people who just use Cython for a loop or two will be able to use Numba instead This is perhaps the most important issue for scientific and algorithmic codes. Not

Re: [Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

2012-03-11 Thread Sturla Molden
On 11.03.2012 15:52, Pauli Virtanen wrote: > To get speed gains, you need to optimize not only the bytecode > interpreter side, but also the object space --- Python classes, > strings and all that. Keeping in mind Python's dynamism, there are > potential side effects everywhere. I guess this is

Re: [Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

2012-03-11 Thread Sturla Molden
On 11 March 2012, at 15:52, Pauli Virtanen wrote: > To get speed gains, you need to optimize not only the bytecode > interpreter side, but also the object space --- Python classes, strings > and all that. Keeping in mind Python's dynamism, there are potential > side effects everywhere. I guess

Re: [Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-10 Thread Sturla Molden
On 10.03.2012 22:56, Sturla Molden wrote: I am not sure why NumPy uses f2c'd routines instead of a dependency on BLAS and LAPACK like SciPy. Actually, np.dot does depend on the CBLAS interface to BLAS (_dotblas.c). But the lapack methods in lapack_lite seem to use f2c'd code

Re: [Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-10 Thread Sturla Molden
On 07.03.2012 21:02, "V. Armando Solé" wrote: I had already used the information Robert Kern provided on the 2009 thread and obtained the PyCObject as: from scipy.linalg.blas import fblas dgemm = fblas.dgemm._cpointer sgemm = fblas.sgemm._cpointer but I did not find a way to obtain those po

Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Sturla Molden
On 7 March 2012, at 00:43, Charles R Harris wrote: > > > I don't see generics as the main selling point of C++ for Numpy. What I > expect to be really useful is exception handling, smart pointers, and RAII. > And maybe some careful use of classes and inheritance. Having a standard > inli

Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Sturla Molden
Upcoming Cython releases will have a generics system called "fused types". Sturla Sent from my iPad On 6 March 2012, at 23:26, Chris Barker wrote: > On Sun, Mar 4, 2012 at 2:18 PM, Luis Pedro Coelho wrote: >> At least last time I read up on it, cython was not able to do multi-type >> code,

Re: [Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-06 Thread Sturla Molden
On 06.03.2012 21:33, David Cournapeau wrote: > Of course it does make his life easier. This way he does not have to > distribute his own BLAS/LAPACK/etc... > > Please stop presenting as truth things which are at best highly > opinionated. You already made such statements many times, and it is not >

Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Sturla Molden
On 06.03.2012 21:45, Matthieu Brucher wrote: > This is your opinion, but there are a lot of numerical code now in C++ > and they are far more maintainable than in Fortran. And they are faster > for exactly this reason. That is mostly because C++ makes tasks that are non-numerical easier. But tha

Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Sturla Molden
On 03.03.2012 17:07, Luis Pedro Coelho wrote: > I sort of missed the big C++ discussion, but I'd like to give some examples of > how writing code can become much simpler if you are based on C++. This is from > my mahotas package, which has a thin C++ wrapper around numpy's C API Here you are usin

Re: [Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-06 Thread Sturla Molden
On 05.03.2012 14:26, "V. Armando Solé" wrote: > In 2009 there was a thread in this mailing list concerning the access to > BLAS from C extension modules. > > If I have properly understood the thread: > > http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046567.html > > the answer by t

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 21:12, Sturla Molden wrote: > > If you need to control the lifetime of an object, make an inner block > with curly brackets, and declare it on top of the block. Don't call new > and delete to control where you want it to be allocated and deallocated. > Noth

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 20:14, Daniele Nicolodi wrote: > Hello Sturla, unrelated to the numpy rewrite debate, can you please > suggest some resources you think can be used to learn how to program > C++ "the proper way"? Thank you. Cheers, This is totally OT on this list, however ... Scott Meyers' book

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 18:34, Christopher Jordan-Squire wrote: > I don't follow this. Could you expand a bit more? (Specifically, I > wasn't aware that numpy could be 10-20x slower than a cython loop, if > we're talking about the base numpy library--so core operations. I'm > also not totally sure why a

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 18:18, Dag Sverre Seljebotn wrote: > > I think it is moot to focus on improving NumPy performance as long as in > practice all NumPy operations are memory bound due to the need to take a > trip through system memory for almost any operation. C/C++ is simply > "good enough". JIT is wh

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 18:14, Charles R Harris wrote: > > Would that work for Ruby also? One of the advantages of C++ is that > the code doesn't need to be refactored to start with, just modified > step by step going into the future. I think PyPy is close to what you > are talking about. > If we plant

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 17:42, Sturla Molden wrote: > There are still other options than C or C++ that are worth considering. > One would be to write NumPy in Python. E.g. we could use LLVM as a > JIT-compiler and produce the performance critical code we need on the fly. > > LLVM and its

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 19.02.2012 00:09, David Cournapeau wrote: > There are better languages than C++ that have most of the technical > benefits stated in this discussion (rust and D being the most > "obvious" ones), but whose usage is unrealistic today for various > reasons: knowledge, availability on "esoteric"

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 08:35, Paul Anton Letnes wrote: > As far as I can understand, implementing element-wise operations, slicing, > and a host of other NumPy features is in some sense pointless - the Fortran > compiler authors have already done it for us. Only if you know the array dimensions in advan

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 08:35, Paul Anton Letnes wrote: > In the language wars, I have one question. Why is Fortran not being > considered? Fortran already implements many of the features that we want in > NumPy: Yes ... but it does not make Fortran a systems programming language. Making NumPy is differ

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 10:54, Pauli Virtanen wrote: > Fortran is OK for simple numerical algorithms, but starts to suck > heavily if you need to do any string handling, I/O, complicated logic, > or data structures. For string handling, C is actually worse than Fortran. In Fortran a string can be sliced

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Sturla Molden
On 20.02.2012 12:43, Charles R Harris wrote: > > > There also used to be a problem with unsigned types not being > available. I don't know if that is still the case. > Fortran -- like Python and Java -- does not have built-in unsigned integer types. It is never really a problem though. One can

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-19 Thread Sturla Molden
On 20.02.2012 00:39, Nathaniel Smith wrote: > But there's an order-of-magnitude difference in compile times between > most real-world C projects and most real-world C++ projects. It might > not be a deal-breaker and it might not apply for the subset of C++ you're > planning to use, but AFAICT that'

Re: [Numpy-discussion] How a transition to C++ could work

2012-02-19 Thread Sturla Molden
On 19.02.2012 16:45, Adam Klein wrote: > > Just to add, with respect to acceptable compilation times, a judicious > choice of C++ features is critical. > I use Python to avoid recompiling my code all the time. I don't recompile NumPy every time I use it. (I know you are thinking about developm

Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-19 Thread Sturla Molden
On 19.02.2012 10:28, Mark Wiebe wrote: > > Particular styles of using templates can cause this, yes. To properly > do this kind of advanced C++ library work, it's important to think > about the big-O notation behavior of your template instantiations, not > just the big-O notation of run-time. C

Re: [Numpy-discussion] How a transition to C++ could work

2012-02-19 Thread Sturla Molden
On 19.02.2012 10:52, Mark Wiebe wrote: C++ removes some of this advantage -- now there is extra code generated by the compiler to handle constructors, destructors, operators etc. which can make a material difference to fast inner loops. So you end up just writing "C-s
