On 11.06.2010 03:02, Charles R Harris wrote:
But for an initial refactoring it probably falls in the category of
premature optimization. Another thing to avoid on the first go around
is micro-optimization, as it tends to complicate the code and often
doesn't do much for performance.
I also tried to install numpy with Intel MKL 9.1.
I still used gfortran for the numpy installation, as Intel MKL 9.1 supports the GNU
compiler.
I would suggest using GotoBLAS instead of ATLAS. It is easier to build
than ATLAS (basically no configuration), and has even better performance
than MKL.
Sturla Molden wrote:
I would suggest using GotoBLAS instead of ATLAS.
http://www.tacc.utexas.edu/tacc-projects/
That does look promising -- any idea what the license is? They don't
make it clear on the site
UT TACC Research License (Source Code)
The Texas Advanced Computing Center
Colin J. Williams skrev:
When one has a smallish sample size, what gives the best estimate of the
variance?
What do you mean by best estimate?
Unbiased? Smallest standard error?
In the widely used Analysis of Variance (ANOVA), the degrees of freedom
are reduced for each mean estimated,
Colin J. Williams skrev:
suggested that 1 (one) would be a better default but Robert Kern told
us that it won't happen.
I don't even see the need for this keyword argument, as you can always
multiply the variance by n/(n-1) to get what you want.
Also, normalization by n gives the ML
to the right.
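The n/(n-1) relation quoted above is easy to check directly; a minimal sketch (the sample values are made up for illustration):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = x.size

v_ml = np.var(x)                 # default: divide by n (the ML estimate)
v_unbiased = np.var(x, ddof=1)   # divide by n - 1 (unbiased)

# the two estimates differ only by the factor n/(n-1)
assert np.isclose(v_unbiased, v_ml * n / (n - 1))
```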
- Hit 7 or better, with no bias.
Do you think it can be shown that the latter option is the better?
No?
Sturla Molden
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
David Cournapeau skrev:
If every python package starts to put its extensions (*.pyd) into a
directory, what happens when two different packages have an extension
with the same name (e.g. package foo ships a multiarray.pyd)? I
would also be really annoyed if a 3rd party extension starts
David Cournapeau skrev:
We are talking about the numpy extensions here, which are not
installed through the install_data command. The problem is about how
windows looks for dll with the manifest mechanism, and how to
build/install extensions when the C runtime (or any other system
dll) is not
Pauli Virtanen skrev:
XXX: 3K: numpy.random is disabled for now, uses PyString_*
XXX: 3K: numpy.ma is disabled for now -- some issues
I thought numpy.random uses Cython? Is it just a matter of recompiling
the pyx-file?
I remember Dag was working on this a bit: how far did it go?
Robin skrev:
I had assumed when matlab unloads the mex function it would also
unload python - but it looks like other dynamic libs pulled in from
the mex function (in this case python and in turn numpy) aren't
unloaded...
Matlab MEX functions are DLLs, the Python interpreter is a DLL, NumPy
Robin skrev:
So far the only remotely tricky thing I did was redirect sys.stdout
and sys.stderr to a wrapper that uses mexPrintf so output goes to the
matlab console.
Be careful when you are using file handles. You have to be sure that
Matlab, Python and NumPy are all linked against the
Robin skrev:
Ah, I hadn't realised it was an OS constraint - I thought it was
possible to unload dlls - and that was why matlab provides the clear
function. mex automatically clears a function when you rebuild it - I
thought that was how you can rebuild and reload mex functions without
Alexey Tigarev skrev:
I have implemented multiple regression in a following way:
You should be using QR or SVD for this.
Sturla
David Cournapeau wrote:
On Fri, Nov 6, 2009 at 6:54 AM, David Goldsmith d.l.goldsm...@gmail.com
wrote:
Interesting thread, which leaves me wondering two things: is it documented
somewhere (e.g., at the IEEE site) precisely how many *decimal* mantissae
are representable using the 64-bit
Jake VanderPlas wrote:
Does anybody know a
way to directly access the numpy.linalg routines from a C extension,
without the overhead of a python callback? Thanks for the help.
You find a C function pointer wrapped in a CObject in the ._cpointer
attribute.
Anne Archibald skrev:
The short answer is, you can't.
Not really true. It is possible to create an array (sub)class that stores
memory addresses (pointers) instead of values. It is doable, but I am
not wasting my time implementing it.
Sturla
Bill Blinn skrev:
v = multiview((3, 4))
#the idea of the following lines is that the 0th row of v is
#a view on the first row of a. the same would hold true for
#the 1st and 2nd row of v and the 0th rows of b and c, respectively
v[0] = a[0]
This would not even work, because a[0] does not
Thomas Robitaille skrev:
np.random.random_integers(np.iinfo(np.int32).min, high=np.iinfo(np.int32).max, size=10)
which gives
array([-1506183689, 662982379, -1616890435, -1519456789, 1489753527,
-604311122, 2034533014, 449680073, -444302414,
-1924170329])
This fails
Robert Kern skrev:
64-bit and larger integers could be done, but it requires
modification. The integer distributions were written to support C
longs, not anything larger. You could also use .bytes() and
np.fromstring().
But as of Python 2.6.4, even 32-bit integers fail, at least on
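The .bytes()-and-reinterpret route Robert mentions can be sketched like this (np.frombuffer is the modern spelling of np.fromstring for reinterpreting raw bytes):

```python
import numpy as np

# draw 10 full-range 64-bit integers from raw random bytes
n = 10
raw = np.random.bytes(8 * n)                # a byte string of random bits
draws = np.frombuffer(raw, dtype=np.int64)  # reinterpret as int64
assert draws.size == n
```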
Robert Kern skrev:
Then let me clarify: it was written to support integer ranges up to
sys.maxint. Absolutely, it would be desirable to extend it.
I know, but look at this:
>>> import sys
>>> sys.maxint
2147483647
>>> 2**31-1
2147483647L
sys.maxint becomes a long, which is what confuses mtrand.
Robert Kern skrev:
Then let me clarify: it was written to support integer ranges up to
sys.maxint. Absolutely, it would be desirable to extend it.
Actually it only supports integers up to sys.maxint-1, as
random_integers calls randint. random_integers includes the upper range,
but randint
Sturla Molden skrev:
Robert Kern skrev:
Then let me clarify: it was written to support integer ranges up to
sys.maxint. Absolutely, it would be desirable to extend it.
Actually it only supports integers up to sys.maxint-1, as
random_integers calls randint. random_integers
up Python highlighting for KDE. Not done yet,
but I will post a Python with NumPy highlighter later on if this is
interesting.
P.P.S. This also covers Pyrex, but add in some Cython stuff.
Sturla Molden
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE language
<!-- Python syntax highlighting v0.9
Sturla Molden skrev:
and Cython with NumPy shows up under Sources. Anyway, this is the
syntax highlighter I use to write Cython.
It seems I posted the wrong file. :-(
S.M.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE language
<!-- Python syntax highlighting v0.9 by Per Wigren -->
<!-- Python
Lisandro Dalcin skrev:
Is there any specific naming convention for these XML files to work
with KATE? Would it be fine to call it 'cython-mode-kate.xml' to push
it to the repo? Will it still work (I mean, with that name) when
placed appropriately in KATE config dirs or whatever? ... Just
Ralf Gommers skrev:
If anyone with knowledge of the differences between the C and Fortran
versions could add a few notes at the above link, that would be great.
The most notable difference (from a user perspective) is that the
Fortran version has more transforms, such as discrete sine and
Dag Sverre Seljebotn skrev:
Microsoft's compilers don't support C99 (or, at least, the versions that
still have to be used don't).
Except for automatic arrays, they do support some of the more important
parts of C99 as extensions to C89:
- inline functions
- restrict qualifier
- for (int i=0;
Robert Kern skrev:
No, I think you're right. Using SIMD to refer to numpy-like
operations is an abuse of the term not supported by any outside
community that I am aware of. Everyone else uses SIMD to describe
hardware instructions, not the application of a single syntactical
element of a high
Matthieu Brucher skrev:
I agree with Sturla, for instance nVidia GPUs do SIMD computations
with blocks of 16 values at a time, but the hardware behind can't
compute on so much data at a time. It's SIMD from our point of view,
just like Numpy does ;)
A computer with a CPU and a GPU is a
Mathieu Blondel skrev:
Peter Norvig suggested to merge Numpy into Cython but he didn't
mention SIMD as the reason (this one is from me).
I don't know what Norvig said or meant.
However:
There is NumPy support in Cython. Cython has a general syntax applicable
to any PEP 3118 buffer. (As
Mathieu Blondel skrev:
As I wrote earlier in this thread, I confused Cython and CPython. PN
was suggesting to include Numpy in the CPython distribution (not
Cython). The reason why was also given earlier.
First, that would currently not be possible, as NumPy does not support
Py3k.
C90, it certainly behaves
like one performance wise.
I'd say that a MIMD machine running NumPy is a Turing machine emulating
a SIMD/vector machine.
And now I am done with this stupid discussion...
Sturla Molden
) % 16
return tmp[offset:offset+N].view(dtype=dtype)
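The fragment above is the tail of the usual over-allocate-and-slice alignment trick; filled out, it looks roughly like this (the function name and exact bookkeeping are my reconstruction):

```python
import numpy as np

def aligned_empty(N, dtype=np.float64, align=16):
    # over-allocate as raw bytes, then slice so the view starts
    # at a 16-byte aligned address
    itemsize = np.dtype(dtype).itemsize
    tmp = np.empty(N * itemsize + align, dtype=np.uint8)
    offset = (align - tmp.ctypes.data % align) % align
    return tmp[offset:offset + N * itemsize].view(dtype=dtype)

a = aligned_empty(100)
assert a.ctypes.data % 16 == 0
```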
Sturla Molden
Skipper Seabold skrev:
I'm curious about this as I use ss, which is just np.sum(a*a, axis),
in statsmodels and didn't much think about it.
Do the number of loops matter in the timings and is dot always faster
even without the blas dot?
The thing is that a*a returns a temporary array with
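The point about the temporary can be seen directly: both expressions compute the same sum of squares, but np.dot does it in one pass without materializing a*a:

```python
import numpy as np

a = np.arange(1000.0)

ss_sum = np.sum(a * a)   # allocates a temporary array for a*a
ss_dot = np.dot(a, a)    # single pass, no temporary, may use BLAS

assert np.isclose(ss_sum, ss_dot)
```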
Francesc Alted skrev:
The response is clear: avoid memcpy() if you can. It is true that memcpy()
performance has improved quite a lot in recent gcc (it has been quite good on
Windows for many years), but working with data in-place (i.e.
avoiding a memory copy) is always faster
Robert Kern skrev:
While this description is basically true of numpy arrays, I would
caution you that every language has a different lexicon, and the same
word can mean very different things in each. For example, Python lists
are *not* linked lists; they are like C++'s std::vectors with a
Robert Kern skrev:
collections.deque() is a linked list of 64-item chunks.
Thanks for that useful information. :-) But it would not help much for a
binary tree...
Since we are on the NumPy list... One could imagine making linked lists
using NumPy arrays with dtype=object. They are storage
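A minimal sketch of that idea: object arrays can hold references to other arrays, so a cons cell is just a 2-element object array (a toy illustration, not an endorsement of doing this in practice):

```python
import numpy as np

def cons(value, nxt):
    # a cons cell as a 2-element object array: [value, next]
    cell = np.empty(2, dtype=object)
    cell[0], cell[1] = value, nxt
    return cell

lst = cons(1, cons(2, cons(3, None)))

# walk the list, collecting values
node, out = lst, []
while node is not None:
    out.append(node[0])
    node = node[1]
print(out)   # [1, 2, 3]
```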
Sebastian Haase skrev:
I know that cython's numpy is still getting better and better over
time, but is it already today possible to have numpy support when
using Cython in pure python mode?
I'm not sure. There is this odd memoryview syntax:
import cython
view = cython.int[:,:](my2darray)
René Dudfield skrev:
Another way is to make your C function then load it with ctypes(or
wrap it with something else) and pass it pointers with
array.ctype.data.
numpy.ctypeslib.ndpointer is preferred when using ndarrays with ctypes.
You can find the shape of the array in python, and
pass
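A sketch of how ndpointer is typically used; the C function scale and its shared library are hypothetical here, so only the argument-validation step is exercised live:

```python
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# argument spec for a hypothetical C function:
#     void scale(double *x, int n, double factor);
array_1d_double = ndpointer(dtype=np.float64, ndim=1, flags='C_CONTIGUOUS')

# with a real shared library you would write something like:
#     lib = ctypes.CDLL('./libscale.so')
#     lib.scale.argtypes = [array_1d_double, ctypes.c_int, ctypes.c_double]
#     lib.scale.restype = None
#     lib.scale(x, x.size, 2.0)

# ndpointer validates arrays before the call is made:
x = np.zeros(3, dtype=np.float64)
array_1d_double.from_param(x)          # accepted
try:
    array_1d_double.from_param(np.zeros(3, dtype=np.int32))
except TypeError:
    print("rejected: wrong dtype")
```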
René Dudfield skrev:
Another way is to make your C function then load it with ctypes
Also one should beware that ctypes is a stable part of the Python
standard library.
Cython is still unstable and in rapid development.
Pyrex is more stable than Cython, but interfacing with ndarrays is
Xavier Gnata skrev:
I have a large 2D numpy array as input and a 1D array as output.
In between, I would like to use C code.
C is requirement because it has to be fast and because the algorithm
cannot be written in a numpy oriented way :( (no way...really).
There are certain algorithms
http://home.online.no/%7Epjacklam/matlab/doc/mtt/doc/mtt.pdf
Sturla Molden
V. Armando Solé skrev:
import numpy
import time
a = numpy.arange(1000000.)
a.shape = 1000, 1000
t0 = time.time()
b = numpy.dot(a.T, a)
print "Elapsed time =", time.time() - t0
reports an Elapsed time of 1.4 seconds under python 2.5 and 15 seconds
under python 2.6
My computer reports 0.34 seconds
V. Armando Solé skrev:
In python 2.6:
import numpy.core._dotblas as dotblas
...
ImportError: No module named _dotblas
import numpy.core._dotblas as dotblas
dotblas.__file__
'C:\\Python26\\lib\\site-packages\\numpy\\core\\_dotblas.pyd'
to a specialized SciPy JIT-compiler. It would be fun to make if I could
find time for it.
Sturla Molden
Rohit Garg skrev:
gtx280--141GBps--has 1GB
ati4870--115GBps--has 1GB
ati5870--153GBps (launches sept 22, 2009)--2GB models will be there too
That is going to help if buffers are kept in graphics memory. But the
problem is that graphics memory is a scarce resource.
S.M.
could e.g. consider linking with a
BLAS wrapper that directs these special cases to the GPU and the rest to
ATLAS / MKL / netlib BLAS.
Sturla Molden
Sturla Molden
Daniel Platz skrev:
data1 = numpy.zeros((256,200),dtype=int16)
data2 = numpy.zeros((256,200),dtype=int16)
This works for the first array data1. However, it returns with a
memory error for array data2. I have read somewhere that there is a
2GB limit for numpy arrays on a 32 bit
Mark Wendell skrev:
for i in range(5):
for j in range(5):
a[i,j].myMethod(var3,var4)
print a[i,j].attribute1
Again, is there a quicker way than the above to call myMethod or access attribute1?
One option is to look up the name of the method unbound, and then use
built-in
Alan G Isaac skrev:
http://article.gmane.org/gmane.comp.python.general/630847
Yes, but here you still have to look up the name 'f' from locals in each
iteration. map is written in C; once it has a PyObject* to the callable
it does not need to look up the name anymore. The dictionary
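The look-up-once idea can be illustrated with a toy class (the class and method names here are made up, standing in for the poster's objects):

```python
import numpy as np

class Cell:
    def __init__(self, v):
        self.v = v
    def myMethod(self):
        return self.v * 2

a = np.empty((2, 2), dtype=object)
for i in range(2):
    for j in range(2):
        a[i, j] = Cell(i + j)

# fetch the method from the class once; map then applies it
# without repeating the attribute lookup per element
m = Cell.myMethod
results = list(map(m, a.flat))
print(results)   # [0, 2, 2, 4]
```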
David Warde-Farley skrev:
The odd values might be from the format code in the error message:
PyErr_Format(PyExc_ValueError,
"%ld requested and %ld written",
(long) size, (long) n);
Yes, I saw that. My C is rusty, but wouldn't
Charles R Harris skrev:
The size of long depends on the compiler as well as the operating
system. On linux x86_64, IIRC, it is 64 bits, on Windows64 I believe
it is 32. Ints always seem to be 32 bits.
If I remember the C standard correctly, a long is guaranteed to be at
least 32 bits,
an efficient median filter using a 3D ndarray. For example if you
use an image of 640 x 480 pixels and want a 9 pixel median filter, you
can put shifted images in an 640 x 480 x 9 ndarray, and call median
with axis=2.
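The same shifted-stack trick in one dimension, which is easier to show compactly (a 3-point median filter on the interior samples; names are illustrative):

```python
import numpy as np

def median3(x):
    # stack the signal with its left and right shifts along a new
    # last axis, then take the median along that axis
    stacked = np.stack([x[:-2], x[1:-1], x[2:]], axis=-1)
    return np.median(stacked, axis=-1)

sig = np.array([1.0, 9.0, 2.0, 3.0, 8.0])
print(median3(sig))   # [2. 3. 3.]
```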
Sturla Molden
a contiguous input. This is also why I used a C pointer instead of
your buffer syntax in the first version. Then I changed my mind, not
sure why. So I'll try with a local copy first then. I don't think we
want close to a megabyte of Cython generated gibberish C just for the
median.
Sturla Molden
Citi, Luca skrev:
Hello Sturla,
In _median how can you, if n==2, use s[] if s is not defined?
What if n==1?
That was a typo.
Also, I think when returning an empty array, it should be of
the same type you would get in the other cases.
Currently median returns numpy.nan for empty input
V. Armando Solé skrev:
I am looking for a way to have a non contiguous array C in which the
left (1, 2000) elements point to A and the right (1, 4000)
elements point to B.
Any hint will be appreciated.
If you know in advance that A and B are going to be duplicated, you can
use
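The suggestion presumably amounts to allocating the combined array first and letting A and B be views into it; a sketch under that assumption:

```python
import numpy as np

# allocate the combined array first, then let A and B be views into it
C = np.empty((1, 6000))
A = C[:, :2000]    # the left 2000 elements
B = C[:, 2000:]    # the right 4000 elements

A[...] = 1.0       # writes through to C, no copy involved
assert C[0, 0] == 1.0
assert np.shares_memory(A, C) and np.shares_memory(B, C)
```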
Sebastian Haase skrev:
A mockarray is initialized with a list of nd-arrays. The result is a
mock array having one additional dimension in front.
This is important, because often in the case of 'concatenation' a real
concatenation is not needed. But then there is a common tool called
Matlab,
file
store C structs written successively.
Sturla Molden
tested the Cython code /thoroughly/, but at least it does compile.
Sturla Molden
Robert Kern skrev:
When he is talking about 2D, I believe he is referring to median
filtering rather than computing the median along an axis. I.e.,
replacing each pixel with the median of a specified neighborhood
around the pixel.
That's not something numpy's median function should be
disabled overwrite_input because the median function calls
numpy.apply_along_axis.
Regards,
Sturla Molden
median.py.gz
Description: application/gzip
Dag Sverre Seljebotn skrev:
Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the
right type to use in this case?
Yup. You are right. Thanks.
Sturla
http://projects.scipy.org/numpy/attachment/ticket/1213/quickselect.pyx
Cython needs something like Java's generics by the way :-)
Regards,
Sturla Molden
Sturla Molden skrev:
By the way, here is a more polished version, does it look ok?
No it doesn't... Got to keep the GIL for the general case (sorting
object arrays). Fixing that.
SM
Sturla Molden skrev:
http://projects.scipy.org/numpy/attachment/ticket/1213/generate_qselect.py
http://projects.scipy.org/numpy/attachment/ticket/1213/quickselect.pyx
My suggestion for a new median function is here:
http://projects.scipy.org/numpy/attachment/ticket/1213/median.py
as template), and use a dict as jump table.
Chad, you can continue to write quick select using NumPy's C quick sort
in numpy/core/src/_sortmodule.c.src. When you are done, it might be
about 10% faster than this. :-)
Reference:
http://ndevilla.free.fr/median/median.pdf
Best regards,
Sturla
Sturla Molden skrev:
We recently had a discussion regarding an optimization of NumPy's median
to average O(n) complexity. After some searching, I found out there is a
selection algorithm competitive in speed with Hoare's quick select.
Reference:
http://ndevilla.free.fr/median/median.pdf
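For reference, selection of this kind was later exposed in NumPy itself as np.partition (introselect based, added in NumPy 1.8, i.e. after this thread); an average O(n) median built on it looks like:

```python
import numpy as np

def median_via_partition(x):
    # average O(n) median using NumPy's introselect-based partition
    x = np.asarray(x)
    n = x.size
    if n % 2:
        return np.partition(x, n // 2)[n // 2]
    part = np.partition(x, [n // 2 - 1, n // 2])
    return 0.5 * (part[n // 2 - 1] + part[n // 2])

data = np.array([7.0, 1.0, 5.0, 3.0, 9.0, 4.0])
print(median_via_partition(data))   # 4.5
```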
myself. I was
thinking along the same lines, except I don't store those two arrays. I
just keep track of counts in them. For the even case, I also keep track of
the elements closest to the pivot (smaller and bigger). It's incredibly
simple actually. So let's see who gets there first :-)
Sturla Molden
any view on this? Is there any way of creating multiple
independent random states that will work correctly? I know of SPRNG
(Scalable PRNG), but it is made to work with MPI (which I don't use).
Regards,
Sturla Molden
not release the GIL either, but preserves determinism
in presence of multiple threads. Thanks. :-)
Regards,
Sturla Molden
Sturla Molden skrev:
It seems there is a special version of the Mersenne Twister for this.
The code is LGPL (annoying for SciPy but ok for me).
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/DC/dgene.pdf
Basically it encodes
Xavier Saint-Mleux skrev:
Of course, the mathematically correct way would be to use a correct
jumpahead function, but all the implementations that I know of are GPL.
A recent article about this is:
www.iro.umontreal.ca/~lecuyer/myftp/papers/jumpmt.pdf
I know of no efficient jumpahead
. through lazy evaluation and JIT compilation
of expressions - can give up to a tenfold increase in performance. That
is where we must start optimising to get a faster NumPy. Incidentally,
this will also make it easier to leverage on modern GPUs.
Sturla Molden
Olivier Grisel wrote:
As usual, MS reinvents the wheel with DirectX Compute but vendors such
as AMD and nvidia propose both the OpenCL API +runtime binaries for
windows and their DirectX Compute counterpart, based on mostly the
same underlying implementation, e.g. CUDA in nvidia's case.
and solvers for linear
algebra and differential equations. Ufuncs with transcendental functions
might also benefit. SciPy would certainly benefit more from GPGPUs than
NumPy.
Just my five cents :-)
Regards,
Sturla Molden
Robert Kern wrote:
I believe that is exactly the point that Erik is making. :-)
I wasn't arguing against him, just suggesting a solution. :-)
I have big hopes for lazy evaluation, if we can find a way to do it right.
Sturla
Charles R Harris wrote:
Whether the code that gets compiled is written using lazy evaluation
(ala Sturla), or is expressed some other way seems like an independent
issue. It sounds like one important thing would be having arrays that
reside on the GPU.
Memory management is slow compared to
Sturla Molden wrote:
Memory management is slow compared to computation. Operations like
malloc, free and memcpy are not faster for VRAM than for RAM.
Actually it's not VRAM anymore, but whatever you call the memory
dedicated to the GPU.
It is cheap to put 8 GB of RAM into a computer
Charles R Harris wrote:
I mean, once the computations are moved elsewhere numpy is basically a
convenient way to address memory.
That is how I mostly use NumPy, though. Computations I often do in
Fortran 95 or C.
NumPy arrays on the GPU memory is an easy task. But then I would have to
James Bergstra wrote:
The plan you describe is a good one, and Theano
(www.pylearn.org/theano) almost exactly implements it. You should
check it out. It does not use 'with' syntax at the moment, but it
could provide the backend machinery for your mechanism if you want to
go forward with
If x and y are numpy
arrays of bools, I'd like to be able to create expressions like the
following:
not x (to invert each element of x)
x and y
x or y
x xor y
(not x) or y
The usual array broadcasting rules should apply. Is there any chance of
getting something like this into NumPy?
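Most of this already exists, spelled with the bitwise operators, which act elementwise (and broadcast) on bool arrays; only the `and`/`or`/`not` keywords themselves cannot be overloaded for arrays:

```python
import numpy as np

x = np.array([True, True, False, False])
y = np.array([True, False, True, False])

# elementwise boolean algebra via the bitwise operators
assert np.array_equal(~x,     [False, False, True, True])   # not x
assert np.array_equal(x & y,  [True, False, False, False])  # x and y
assert np.array_equal(x | y,  [True, True, True, False])    # x or y
assert np.array_equal(x ^ y,  [False, True, True, False])   # x xor y
assert np.array_equal(~x | y, [True, False, True, True])    # (not x) or y
```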
are shared. That is -lmsvcr71 for Python 2.5 and -lmsvcr90 for
Python 2.6. If crt resources are unshared, link with whatever crt you want.
Sturla Molden
for
sure that gfortran + MS compiler can mix on 32 bits
http://gcc.gnu.org/wiki/GFortranBinaries
I use this 32 bit mingw binary to build my Cython and f2py extensions. It
works like a charm. I have licenses for Intel compilers at work, but I
prefer gfortran 4.4.
Sturla Molden
On 4/17/2009 11:50 AM, Gael Varoquaux wrote:
Could you elaborate on your reason.
Probably silly reasons though...
I have a distutils.cfg file and a build system set up that works. I
don't want to bother setting up a different build system when I already
have one that works. I can use the
reading the NumPy and SciPy docs today that dpss windows
are missing.)
Sturla Molden
).view(dtype=dtype)
You will have to make sure the address is an integer.
Also, could you elaborate why dtype=float would work better?
Because there is no such thing as a double type in Python?
Sturla Molden
How did you import the function? f2py? What did you put in your .pyf file?
*My Fortran code:*
subroutine print_string (a, c)
implicit none
character(len=255), dimension(c), intent(inout):: a
integer, intent(in) :: c
integer :: i
do i = 1, size(a)
print*, a(i)
end do
end subroutine
Sturla Molden wrote:
def fromaddress(address, nbytes, dtype=double):
I guess dtype=float works better...
S.M.
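A sketch of the kind of helper being discussed, assuming the address is an integer and the caller keeps the underlying buffer alive (the body here is my reconstruction, using ctypes and np.frombuffer):

```python
import ctypes
import numpy as np

def fromaddress(address, nbytes, dtype=float):
    # view nbytes of raw memory at an integer address as an ndarray;
    # dtype=float is NumPy's float64, i.e. a C double
    buf = (ctypes.c_char * nbytes).from_address(address)
    return np.frombuffer(buf, dtype=dtype)

# round-trip through an existing array's data pointer
a = np.arange(4.0)
b = fromaddress(a.ctypes.data, a.nbytes)
assert np.array_equal(a, b)
```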
')
Does this help?
Sturla Molden
(_x)
shape = _y.shape[::-1]
return y.reshape(shape, order='C')
else:
return _f2py_wrapper(x)
And then preferably never use Fortran ordered arrays directly.
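The reverse-the-shape trick being alluded to can be stated in a few lines; the key fact is that the transpose of a C ordered array is already Fortran ordered, so no copy is needed:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)     # C ordered
f = np.asfortranarray(a)             # Fortran ordered copy
assert f.flags['F_CONTIGUOUS']

# the transpose of a C ordered array is Fortran ordered, with no copy,
# so a Fortran (m, n) array can be handled as a C (n, m) array and
# transposed back on the way out
assert a.T.flags['F_CONTIGUOUS']
```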
Sturla Molden
. If you do compile with OpenMP, they
will make certain FFTs run in parallel. I can comment them out if you
prefer.
Sturla Molden
Would it be possible to make the changes as a patch (svn diff) - this
makes things easier to review.
I've added diff files to ticket #1055.
Yes, I would be more comfortable without them (for 1.3). This is
typically the kind of small changes which can be a PITA to deal with
just before a
Mon, 16 Mar 2009 00:33:28 +0900, David Cournapeau wrote:
Also, you could post the patch on the http://codereview.appspot.com site.
Then it would be easier to both review and to keep track of its
revisions
I have posted the files here:
http://projects.scipy.org/numpy/ticket/1055
Sturla
Well, that's nearly as good. (Though submitting a single svn diff
containing all changes could have been a bit more easy to handle than
separate patches for each file. But a small nitpick only.)
The problem is I am really bad at using these tools. I have TortoiseSVN
installed, but no idea how
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
Sturla Molden
Will memmap be fixed to use offsets correctly before 1.3?
hi,
Just a friendly reminder that I will close the trunk for 1.3.0 at the
end of 15th March (I will more likely do it at the end of Monday Japan
time which roughly corresponds to 15th March midnight Pacific time),
cheers,
David
, order=order)
Regards,
Sturla Molden
__all__ = ['memmap']
import warnings
from numeric import uint8, ndarray, dtype
import sys
dtypedescr = dtype
valid_filemodes = ["r", "c", "r+", "w+"]
writeable_filemodes = ["r+", "w+"]
mode_equivalents = {
    "readonly":"r",
    "copyonwrite":"c",
    "readwrite":"r
Can you open a ticket for this?
Done. Ticket #1053
Sturla