should
be exposed to C extensions without calling back to Python.
Can I open a ticket for this and take care of it? At least 1, 2 and 4
should only take me an hour or so to write, so it might even be ready for
1.3.0.
Sturla Molden
On Sat, Mar 14, 2009 at 1:37 PM, josef.p...@gmail.com wrote:
OK. One more question: how often do the tests fail? I want to include a
note
to repeat testing if the test fails.
I don't like this. I think the PRNGs should use fixed seeds known to pass
the test. Depending on confidence intervals
On Sat, Mar 14, 2009 at 2:24 PM, Charles R Harris
Give it a shot. Note that the fft transforms also use int instead of
intp,
which limits the maximum transform size to 32 bits. Fixing that is
somewhere
on my todo list but I would be happy to leave it to you ;) Although I
expect
transforms
On Sat, Mar 14, 2009 at 3:58 PM, Sturla Molden stu...@molden.no wrote:
There is also a ticket (#579) to add an implementation of the Bluestein
algorithm for doing prime order fft's. This could also be used for zoom
type fft's. There is lots of fft stuff to be done. I wonder if some
Fortran,
this is not the case.
2) In C, indexing arrays with unsigned integers is much more efficient
(cf. AMD's optimization guide). I think the use of signed integers as array
indices is inherited from Fortran77 FFTPACK. We should probably index the
arrays with unsigned longs.
Sturla Molden
On Sat, Mar 14, 2009 at 7:23 PM, Sturla Molden stu...@molden.no wrote:
We can't count on C99 at this point. Maybe David will add something so we
can use C99 when it is available.
Ok, but most GNU compilers have a __restrict__ extension for C89 and C++
that we could use. And MSVC has a compiler
convention, so I just changed that to Vreal array.
I changed all ints to longs for 64-bit support.
Well, it compiles and runs ok on my computer now. I'll open a ticket for
the FFT. I'll attach the C files to the ticket.
Sturla Molden
If you just want i to be unordered, use numpy.argsort on j.
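A minimal sketch of that approach, using the sample pairs from the question
quoted below (numpy.argsort(j) gives the permutation that sorts j; applying
it to both columns keeps the pairs together):

import numpy as np
i = np.array([6940, 6940, 6940, 7007, 7007, 7007])
j = np.array([22886, 38277, 43788, 0, 2362, 34])
perm = np.argsort(j)     # indices that sort j
i, j = i[perm], j[perm]  # reorder both columns by the same permutation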
S.M.
I have a large number (> 1bn) of (32-bit) integer co-ordinates (i, j) in a
file. The i are ordered and the j unordered, e.g.
...
6940, 22886
6940, 38277
6940, 43788
7007, 0
7007, 2362
7007, 34
etc.
...
I want to create
with this. I feel that cdef long i, j, k is a request to step
into C. But here I feel the Cython team is trying to make me step into a
broken C.
Sturla Molden
On 3/13/2009 12:47 PM, Dag Sverre Seljebotn wrote:
(Introducing a new set of types for typed Python is an idea that could
please everybody, but I fear the confusion it would bring myself...)
AFAIK, Python 3 has optional type annotations.
Sturla Molden
source of confusion on my part.
If I want the behaviour of Python integers, Cython lets me use Python
objects. I don't declare a variable cdef long if I want it to behave like
a Python int.
Sturla Molden
bool(NaN) has no obvious interpretation, so it should be considered an
error.
Sturla Molden
standard, I'd say this is system and
compiler dependent.
Should NumPy rely on a specific binary representation of NaN?
A related issue is the boolean value of Inf and -Inf.
Sturla Molden
Charles R Harris wrote:
Raising exceptions in ufuncs is going to take some work as the inner
loops are void functions without any means of indicating an error.
Exceptions also need to be thread safe. So I am not opposed but it is
something for the future.
I just saw David Cournapeau's
the binary data is stored by Fortran
(experimenting, hex editor, etc.) and read them using numpy.fromfile. You
can also use numpy.memmap with a recarray.
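A hedged sketch of that workflow; the field names and the little-endian
float64 record layout are assumptions standing in for whatever the hex
editor reveals:

import numpy as np
dt = np.dtype([('x', '<f8'), ('y', '<f8')])
data = np.fromfile('output.bin', dtype=dt)          # read the whole file
data = np.memmap('output.bin', dtype=dt, mode='r')  # or map it lazily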
Sturla Molden
On 3/5/2009 8:51 AM, Dag Sverre Seljebotn wrote:
What's your take on Blitz++? Around here when you say C++ and numerical
in the same sentence, Blitz++ is what they mean.
I have not looked at it for a long time (8 years or so). It is based on
profane C++ templates that make debugging
On 3/5/2009 10:11 AM, Dag Sverre Seljebotn wrote:
Cython can relatively easily transform things like
cdef int[:,:] a = ..., b = ...
c = a + b * b
Now you are wandering far into Fortran territory...
If a and b are declared as contiguous arrays and restrict, I suppose
the C compiler
On Thursday 05 March 2009, Dag Sverre Seljebotn wrote:
Good point. I was not aware of this subtlety. In fact, numexpr does
not work well with transposed views of NumPy arrays. Filed the bug in:
http://code.google.com/p/numexpr/issues/detail?id=18
Not sure whether it is possible with
On 2/11/2009 6:40 AM, A B wrote:
Hi,
How do I write a loadtxt command to read in the following file and
store each data point as the appropriate data type:
12|h|34.5|44.5
14552|bbb|34.5|42.5
dt = {'names': ('gender','age','weight','bal'), 'formats': ('i4',
'S4','f4', 'f4')}
Does this work for you?
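For reference, a sketch of how that dtype might be used (the filename is a
placeholder; '|' is taken as the delimiter from the sample rows):

import numpy as np
dt = {'names': ('gender', 'age', 'weight', 'bal'),
      'formats': ('i4', 'S4', 'f4', 'f4')}
data = np.loadtxt('data.txt', dtype=dt, delimiter='|')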
On 3/4/2009 12:57 PM, Sturla Molden wrote:
Does this work for you?
Never mind, it seems my e-mail got messed up. I ought to keep them
sorted by date...
S.M.
On 3/4/2009 7:50 AM, Hoyt Koepke wrote:
In cython, the above would be (something like):
It also helps to turn off bounds checks:
from numpy cimport ndarray
cdef extern from "math.h":
double cos(double)
double sin(double)
@cython.boundscheck(False)
cpdef ndarray[double,
On 2/23/2009 1:11 PM, Nicolas Pinto wrote:
Dear all,
I'd like to generate equivalent sequences of 'random' numbers in matlab
and numpy, is there any way I can do that? ...
Asked and answered on scipy-user.
S.M.
On 2/19/2009 7:04 AM, David Cournapeau wrote:
Does pyprocessing work well on Windows as well? I have 0 experience
with it.
Yes, it works well on Windows, albeit process creation is a bit slower
than on Unix (there is no os.fork in Windows, so more Python objects have
to be pickled). From
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Yes, forever/8 = forever.
Sturla Molden
is fast enough for
most purposes.
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html
Sturla Molden
On 2/13/2009 4:07 PM, Michael S. Gilbert wrote:
I'm not obsessed with speed, but a 2x improvement is quite
significant.
Honestly, I don't care about 2x differences here. How many milliseconds
do you save?
The PRNG in SciPy is fast enough for 99% of any use I can conceive. I
have yet to see
On 2/13/2009 4:51 PM, Michael S. Gilbert wrote:
It's not about saving milliseconds, it's about taking half the time to
run the same simulation. So if my runs currently take 2 hours, they
will take 1 hour instead; and if they take 2 days, they will take 1
day instead. It may not impact your
On Fri, 13 Feb 2009 17:04:48 +0100 Sturla Molden wrote:
Yes, running a lot of Monte Carlo simulations back-to-back. If the
PRNG were twice as fast, my code would be twice as fast. It isn't that
unbelievable...
Profile before you make such bold statements. You are implying that your
On 2/12/2009 7:15 AM, David Cournapeau wrote:
Since OpenMP also exists on Windows, I doubt that it is required that
OpenMP uses pthreads :)
On Windows, MSVC uses Win32 threads and GCC (Cygwin and MinGW) uses
pthreads. If you use OpenMP with MinGW, the executable becomes dependent
on
the names in the C code, I had to put the OpenMP part in a separate C file.
OpenMP does not need to be a part of the Cython language. It can be
special comments in the code as in Fortran. After all, #pragma omp
parallel is a comment in Cython.
Sturla Molden
GPUs can yield hundreds of gigaflops, that is going to be hard to match
(unless we make an ndarray that uses the GPU). But again, as the
performance of GPUs comes from massive multithreading, immutability may
be the key here as well.
Sturla Molden
suggestion. Easy to implement as you don't need to learn
OpenMP first (not that it is difficult).
Sturla Molden
On 2/12/2009 1:44 PM, Sturla Molden wrote:
Here is an example of SciPy's ckdtree.pyx modified to use OpenMP.
It seems I managed to post an erroneous C file. :(
S.M.
/*
* Parallel query for faster kd-tree searches on SMP computers.
* This function
On 2/12/2009 5:24 PM, Gael Varoquaux wrote:
My two cents: go for cython objects/statements. Not only does code in
comments looks weird and a hack, but also it means to you have to hack
the parser.
I agree with this. Particularly because Cython uses indentation as
syntax. With comments you
Sturla Molden wrote:
IMO there's a problem with using literal variable names here, because
Python syntax implies that the value is passed. One shouldn't make
syntax where private=(i,) is legal but private=(f(),) isn't.
The latter would be illegal in OpenMP as well. OpenMP pragmas only take
).
Sturla Molden
I'm trying to test out f2py in Windows (python 2.5.4 32-bit
for now + most recent Numpy). I'd like to use the Intel
compilers, but msvc is fine if needed. I get the output below
about which I have a question re: the warning about VS
version. I have VS 2008 currently which should
On 1/30/2009 2:18 PM, Neal Becker wrote:
A nit, but it would be nice if 'ones' could fill with a value other than 1.
Maybe an optional val= keyword?
I am -1 on this. Ones should fill with ones, zeros should fill with
zeros. Anything else is counter-intuitive. Calling numpy.ones to fill
On 1/30/2009 3:07 PM, Grissiom wrote:
Does the fill function have any advantage over ones(size)*x?
You avoid filling with ones, all the multiplications, and creating a
temporary array. It can be done like this:
a = empty(size)
a[:] = x
Which would be slightly faster and more memory efficient.
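A minimal comparison of the two idioms (size and x are placeholders):

import numpy as np
size, x = 1000000, 3.14
a = np.ones(size) * x  # allocates ones, then an extra multiplication pass
b = np.empty(size)     # uninitialized allocation
b[:] = x               # one fill pass, no temporary; b.fill(x) also works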
those of numpy.
Sturla Molden
*ny, dtype=np.float32).reshape((nx,ny), order='F')
assuming real(kind=4) is equivalent to np.float32.
Sturla Molden
On 1/30/2009 5:23 PM, Sturla Molden wrote:
ux = np.fromfile(nx*ny, dtype=np.float32).reshape((nx,ny), order='F')
oops.. this should be
ux = np.fromfile(file, count=nx*ny, dtype=np.float32).reshape((nx,ny),
order='F')
S.M.
Careful -- the last time I read a Fortran-written binary file, I found
that the various structures (is that what you call them in Fortran?)
were padded with I think 4 bytes.
That is precisely why I suggested using f2py. If you let Fortran read the
file (being careful to use the same compiler!), it
On 1/27/2009 6:03 AM, Jochen wrote:
BTW memmap arrays have
the same problem
if I create a memmap array and later do something like
a=a+1
all later changes will not be written to the file.
= is Python's rebinding operator.
a = a + 1 rebinds a to a different object.
As for ndarray's, I'd
in-place operations?
a = a + 1 # rebinds the name 'a' to another array
a[:] = a + 1 # fills a with the result of a + 1
This has to do with Python syntax, not NumPy per se. You cannot overload
the behaviour of Python's name binding operator (=).
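A short sketch of the distinction with a throwaway memmap file:

import numpy as np
a = np.memmap('scratch.dat', dtype=np.float64, mode='w+', shape=(10,))
a[:] = a + 1  # in-place: the change goes to the mapped file (a.flush() forces it)
a = a + 1     # rebinding: 'a' is now an ordinary in-memory array; file unchanged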
Sturla Molden
On 1/27/2009 12:37 PM, Sturla Molden wrote:
address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
if (address = 0):
if (address == 0): raise MemoryError, 'fftw_malloc returned NULL'
Sorry for the typo.
S.M.
On 1/27/2009 12:37 PM, Sturla Molden wrote:
It is easy to use functions like fftw_malloc with NumPy:
Besides this, if I were to write a wrapper for FFTW in Python, I would
consider wrapping FFTW's Fortran interface with f2py.
It is probably safer, as well as faster, than using ctypes
On 1/27/2009 12:37 PM, Sturla Molden wrote:
import ctypes
import numpy
fftw_malloc = ctypes.cdll.fftw.fftw_malloc
fftw_malloc.argtypes = [ctypes.c_ulong,]
fftw_malloc.restype = ctypes.c_ulong
def aligned_array(N, dtype):
    d = dtype()
    address = fftw_malloc(N * d.nbytes)
On Tue, 2009-01-27 at 14:16 +0100, Sturla Molden wrote:
def aligned_array(N, dtype):
    d = dtype()
    tmp = numpy.empty(N * d.nbytes + 16, dtype=numpy.uint8)
    address = tmp.__array_interface__['data'][0]
    offset = (16 - address % 16) % 16
    return tmp[offset:offset+N*d.nbytes].view(dtype)
{'descr': [('', '<f8')], 'strides': None, 'shape': (10,), 'version': 3,
'typestr': '<f8', 'data': (20388752, False)}
>>> b = fromaddress(20388752, numpy.float64, (10,))
>>> b
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> b[0] = 1.0
>>> a
array([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
Sturla
On 1/21/2009 2:38 PM, Sturla Molden wrote:
If you can get a pointer (as integer) to your C++ data, and the shape
and dtype is known, you may use this (rather unsafe) 'fromaddress' hack:
And opposite, if you need to get the address referenced to by an
ndarray, you can do this:
def addressof
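The original post is truncated above; a hedged reconstruction of both
helpers, built on the __array_interface__ protocol, might look like this:

import numpy

def fromaddress(address, dtype, shape):
    # Unsafe by construction: NumPy cannot validate the pointer, and the
    # memory must stay alive as long as the returned array does.
    dtype = numpy.dtype(dtype)
    class _Wrapper(object):
        pass
    w = _Wrapper()
    w.__array_interface__ = {
        'data': (address, False),  # (pointer as int, read-only flag)
        'typestr': dtype.str,
        'shape': shape,
        'strides': None,           # C contiguous
        'version': 3,
    }
    return numpy.asarray(w)

def addressof(a):
    # The opposite direction: the address of an ndarray's data buffer.
    return a.__array_interface__['data'][0]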
a wonderful tool.
Sturla Molden
Is it possible to make f2py raise an exception if a fortran routine
signals an error?
If I e.g. have
subroutine foobar(a, ierr)
Can I get an exception automatically raised if ierr != 0?
Sturla Molden
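As far as I know f2py has no built-in translation of an ierr argument into
an exception, so a thin Python wrapper is one hedged workaround (_flib and
the intent(out) ierr are assumptions about the generated module):

import _flib  # hypothetical f2py-generated extension module

def foobar(a):
    ierr = _flib.foobar(a)  # ierr returned because it is intent(out)
    if ierr != 0:
        raise RuntimeError("foobar failed with ierr=%d" % ierr)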
I simplified the code to focus only on what I need, rather to bother you
with the full code.
def test():
    w = 3096
    h = 2048
    a = numpy.zeros((h,w), order='F')   # normally loaded with real data
    b = numpy.zeros((h,w,3), order='F')
    w0 = slice(0,w-2)
    w1 = slice(1,w-1)
-2- Now with the code below I have a strange result.
With w=h=400:
- Using slice= 0.99 sec
- Using numpy.ogrid = 0.01 sec
It is not equivalent. The ogrid version only uses diagonal elements, and
does less work.
It seems ogrid got better performance, but broadcasting is not
However, just using the slices on the matrix instead of passing the
slices through ogrid is faster.
So what is ogrid useful for?
For the same problems where you would use meshgrid in Matlab. That is
certain graphics problem for example; e.g. evaluating a surface z = f(x,y)
over a grid of x,y
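A small example of that use: evaluate z = f(x,y) with ogrid's open
(broadcastable) axes instead of two full meshgrid matrices:

import numpy as np
x, y = np.ogrid[0:1:100j, 0:1:100j]        # shapes (100, 1) and (1, 100)
z = np.sin(np.pi * x) * np.cos(np.pi * y)  # broadcasts to (100, 100)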
Sturla Molden wrote:
For the same problems where you would use meshgrid in Matlab.
well, I used to use meshgrid a lot because MATLAB could not do
broadcasting. Which is probably why the OP has been trying to use it.
mgrid and ogrid are both meshgrids, with ogrid having a sparse
it used to be).
Sturla Molden
.
Fortran 90/95 has array slicing as well.
Sturla Molden
On 1/7/2009 7:52 PM, josef.p...@gmail.com wrote:
But, I think,
matlab is ahead in parallelization (which I haven't used much)
Not really. There is e.g. nothing like Python's multiprocessing package
in Matlab. Matlab is generally single-threaded. Python is multi-threaded
but there is a GIL.
I am wondering if not scipy.signal.lfilter ought to be a part of the
core NumPy. Note that it is similar to the filter function found in
Matlab, and it makes a complement to numpy.convolve.
May I suggest that it is renamed or aliased to numpy.filter?
Sturla Molden
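A quick usage sketch, mirroring Matlab's filter(b, a, x); here a 4-tap
moving average:

import numpy as np
from scipy.signal import lfilter
x = np.random.randn(100)
y = lfilter([0.25, 0.25, 0.25, 0.25], [1.0], x)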
There was a discussion about this on c.l.p a while ago. Using a sort
will scale like O(n log n) or worse, whereas using a set (hash table) will
scale like amortized O(n). How to use a Python set to get a unique
collection of objects I'll leave to your imagination.
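A minimal sketch of the set-based approach: amortized O(n), at the cost of
losing the original order (sort the result afterwards if order matters):

def unique_hashed(seq):
    return list(set(seq))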
Sturla Molden
On Mon
See the module docs for pickle and cPickle.
Sturla Molden
Dear all
I have a class that contains various data arrays and constants
Is there a way of using numpy.save() to save the class so that when I
reload it back in I have access to all the member arrays?
Thanks
Ross
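A hedged sketch of the pickle route (Model and the filename are
placeholders standing in for Ross's class):

import numpy
import cPickle  # Python 2, matching the era of this list

class Model(object):
    def __init__(self):
        self.data = numpy.zeros(10)  # ndarray attributes pickle along
        self.scale = 2.5

m = Model()
f = open('model.pkl', 'wb')
cPickle.dump(m, f, protocol=2)
f.close()

f = open('model.pkl', 'rb')
m2 = cPickle.load(f)  # m2.data and m2.scale restored
f.close()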
, and it occurs prior to any fork.
This is also system dependent by the way. On Windows multiprocessing
does not fork() and does not produce this problem.
Sturla Molden
you? So if you remember to seed
the random number generators after forking, this race condition should
be of no significance.
mtrand.pyx seems pretty clear about that: on import.
In which case they are initialized prior to forking.
Sturla Molden
familiar with Windows programming. But what is needed is a
fork handler (similar to a system hook in Windows jargon) that sets a
new seed in the child process.
Could pthread_atfork be used?
Sturla Molden
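A hedged sketch of reseeding by hand right after the fork, so parent and
child do not continue from the same Mersenne Twister state:

import os
import numpy as np

pid = os.fork()
if pid == 0:
    np.random.seed()  # no argument: reseed from the OS entropy source
    # ... child's work ...
    os._exit(0)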
On 12/11/2008 6:21 PM, Sturla Molden wrote:
It would not help, as the seeding is done prior to forking.
I am mostly familiar with Windows programming. But what is needed is a
fork handler (similar to a system hook in Windows jargon) that sets a
new seed in the child process.
Actually I
process responsible for making the random numbers and write those to
a queue. It would scale if generating the deviates is the least costly
part of the algorithm.
Sturla Molden
=== test.py ===
from test_helper import task, generator
from multiprocessing import Pool, Process, Queue
q = Queue()
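The original test_helper module is truncated above; a hedged reconstruction
of the idea (one dedicated process produces deviates, consumers drain the
queue) might look like this:

from multiprocessing import Process, Queue
import numpy as np

def generator(q, n, chunks):
    # stand-in for the generator from the original, truncated post
    for _ in xrange(chunks):
        q.put(np.random.normal(size=n))
    q.put(None)  # sentinel: no more data

if __name__ == '__main__':
    q = Queue()
    p = Process(target=generator, args=(q, 10000, 100))
    p.start()
    while True:
        chunk = q.get()
        if chunk is None:
            break
        # ... consume chunk here ...
    p.join()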
In the docs I found this:
We used a hypothesis that a set of PRNGs based on linear recurrences is
mutually 'independent' if the characteristic polynomials are relatively
prime to each other. There is no rigorous proof of this hypothesis...
S.M.
Here is the c program and the description
? If not, RandomState cannot be used for this particular purpose.
Cf. what the creators of MT wrote about dynamically creating MT generators
at http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/DC/dc.html
Sturla Molden
which match 960 - i.e.
it would return 3 and 6
>>> import numpy
>>> a = numpy.array([0,1,2,960,5,6,960,7])
>>> a == 960
array([False, False, False, True, False, False, True, False], dtype=bool)
>>> idx, = numpy.where(a == 960)
>>> idx
array([3, 6])
>>> idx.tolist()
[3, 6]
Sturla Molden
should perhaps be NumPy's
default (over no alignment), as MMX and SSE extensions depend on it.
nVidia's CUDA also requires alignment on 2 byte boundaries.
Sturla Molden
On Thu, Apr 24, 2008 at 4:57 PM, Zachary Pincus [EMAIL PROTECTED]
wrote:
The reason that one must slice before .view()ing
. This example cannot be replicated using take.
So I think this strange behaviour is actually correct.
Sturla Molden
On 6/19/2007 12:14 PM, Sturla Molden wrote:
h[0,:,numpy.arange(14)] is a case of advanced indexing. You can also
see that
>>> h[0,:,[0,1,2,3,4,5,6,7,8,9,10,11,12,13]].shape
(14, 4)
Another way to explain this is that numpy.arange(14) and
[0,1,2,3,4,5,6,7,8,9,10,11,12,13] are sequences (i.e
,:,numpy.arange(5)].shape
(5,10)
hm...
Sturla Molden
not use it (the high offset
DWORD is always zero). Only files smaller than 4 GB can be memory mapped.
Regards,
Sturla Molden
the offset problem is trivial to solve (it requires a small
change to the memmap object), this is not the case with the i/o error
problem. It is anything but trivial.
Sturla Molden
memory
mapped files.
Sturla Molden
by using copy-on-write, so it will be efficient in some
cases; excessive cycles of copy-in and copy-out are usually what you get.
Sturla Molden
On 4/26/2007 2:42 PM, David Cournapeau wrote:
You are right about the semantics, but wrong about the consequences for
copying, as Matlab is using COW, and this works well in Matlab.
It works well only if you don't change your input arguments. Never try
to write to a matrix received as an argument
well. Even the name Matlab reflects that: it is an abbreviation for
'matrix laboratory'. I am not surprised that most Matlab users do not
use facilities like function handles or closures. But Matlab does have
that, nevertheless.
Sturla Molden
On Wednesday 18 April 2007 20:14, Sturla Molden wrote:
This case will require some extra work. The array needs to remember,
that there is an unevaluated expression depending on it.
This is certainly a complication that needs to be worked out. An array
could store a set of dependent
, and storing it in a
linked list. Finally, the stored arrays are retrieved as a single
contiguous array. Example code below (cf. class scalar_list).
Sturla Molden
import numpy
class ndarray_list:
    """A singly linked list of numpy ndarrays."""
    class node:
        def __init__(self, data
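The class is truncated above; a hedged reconstruction (not the original
code) of the linked-list accumulator it describes:

import numpy

class ndarray_list:
    """A singly linked list of numpy ndarrays."""
    class node:
        def __init__(self, data):
            self.data = data
            self.next = None
    def __init__(self):
        self.head = self.tail = None
    def append(self, arr):
        n = ndarray_list.node(numpy.asarray(arr))
        if self.tail is None:
            self.head = self.tail = n
        else:
            self.tail.next = n
            self.tail = n
    def array(self):
        # retrieve everything as one contiguous array
        # (assumes at least one array was appended)
        chunks = []
        n = self.head
        while n is not None:
            chunks.append(n.data)
            n = n.next
        return numpy.concatenate(chunks)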
On 3/5/2007 2:13 PM, David Koch wrote:
- Am I correct in assuming that all arrays have to be initialized to
their final number of elements in NumPy (using empty/zero for instance)?
You can also create an array from a Python list, data in a file, another
array or a memory mapping. In these
it
as a class attribute? What happens if there is a thread switch between
__new__ and __array_finalize__? This design is not thread safe and can
produce strange race conditions.
IMHO, the preferred way to set an instance attribute is to use the
__init__ method, which is the 'Pythonic' way to do it.
Sturla
I don't pretend to know all the inner workings of subclassing, but I
don't think that would work, given the following output:
In [6]: x+y
This is where __array_finalize__ is called
Out[6]: MyArray([4, 5, 6])
Why is __new__ not called for the return value of x + y? Does it call
__new__ for
). But in the case of
complex_arange(0+1j,0+5j,1+1j) the return value is an empty array, as the
extent along the real axis is 0.
Regards,
Sturla Molden
compiler
will pass a pointer to a struct referred to as a 'dope vector', which by
the way is very similar to a NumPy ndarray. But the ABI for a 'dope
vector' is implementation (compiler) dependent. So what does f2py do? Can
it use Charm?
Sturla Molden