Fri, 25 Mar 2011 13:23:11 +0100, Sturla Molden wrote:
FFTs should keep the GIL locked, because sharing the interpreter is not
funny.
Array indices should be sizeof(void*), because following C standard and
Python C API is lame.
Median should be calculated in O(n log n) instead of O(n) time,
Fri, 25 Mar 2011 13:40:54 +0100, Sturla Molden wrote:
Den 25.03.2011 13:33, skrev Pauli Virtanen:
That npy_intp will not be redefined as ssize_t does not mean that the
type of array indices could not be changed.
By the way, what is the reasonable array index for AMD64, where 32-bit
, you'd need to detect when you are working with Numpy
arrays, and get the half-float type information from the Numpy dtype
rather than from the exported buffer.
--
Pauli Virtanen
.
--
Pauli Virtanen
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Hi,
Thanks!
On Sat, 26 Mar 2011 13:11:46 +0100, Paul Anton Letnes wrote:
[clip]
I hope you find this useful! Is there some way of submitting the patches
for review in a more convenient fashion than e-mail?
You can attach them on the trac to each ticket. That way they'll be easy
to find later
is the
approach to use if you want to share the data.
--
Pauli Virtanen
temporaries, and
probably easier to write than doing things via the out= arguments.
For sparse matrices, things then depend on how they are laid out in
memory. You can probably alter the `.data` attribute of the arrays
directly, if you know how the underlying representation works.
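For example, applying an elementwise operation to a CSR matrix can look like this (a sketch, assuming `scipy.sparse` is available; note this only works for functions that map 0 to 0, so the implicit zeros stay zeros):

```python
import numpy as np
from scipy import sparse

a = sparse.csr_matrix(np.array([[0.0, -2.0],
                                [3.0,  0.0]]))

# Operate on the stored (nonzero) entries directly via .data;
# no dense array is ever materialized.
s = a.copy()
s.data = np.sign(s.data)

print(s.toarray())
```

The same trick works for CSC and DIA formats, since they also expose their stored values as a flat `.data` array.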
--
Pauli
On Sat, 26 Mar 2011 19:13:43 +0000, Pauli Virtanen wrote:
[clip]
If you want to have control over temporaries, you can make use of the
out= argument of ufuncs (`numpy.dot` will gain it in 1.6.1 --- you can
call LAPACK routines from scipy.lib in the meantime, if your data is in
Fortran order
Tue, 29 Mar 2011 04:16:00 -0400, josef.pktd wrote:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Programs\Python32\lib\site-packages\numpy\lib\npyio.py", line 222, in __getitem__
    return format.read_array(value)
  File
Tue, 29 Mar 2011 08:27:52 +, Pauli Virtanen wrote:
Tue, 29 Mar 2011 04:16:00 -0400, josef.pktd wrote:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Programs\Python32\lib\site-packages\numpy\lib\npyio.py", line 222, in __getitem__
    return format.read_array
.
Personally, I don't think going unicode makes much sense here. First, it
would be a Py3-only feature. Second, there is a real need for it only
when dealing with multibyte encodings, which are seldom used these days
with utf-8 rightfully dominating.
--
Pauli Virtanen
Thu, 31 Mar 2011 11:15:08 -0700, Mark Wiebe wrote:
[clip]
My reading of Pauli's thoughts was that putting it in unilaterally is
undesirable, something I definitely agree with. I think with Eli doing
the legwork of getting input and acceptance from the relevant parties,
we should help him out
Fri, 08 Apr 2011 16:24:58 +0530, dileep kunjaai wrote:
I defined a function hit_rate(); I want to import this
function (by name) and use it.
Please read the Python tutorial first:
http://docs.python.org/tutorial/
http://docs.python.org/tutorial/modules.html
are somewhat difficult to track. You
could try switching to another BLAS library (or, if you use ATLAS,
compile it differently) and checking if the problem disappears.
--
Pauli Virtanen
Tue, 26 Apr 2011 11:36:19 -0500, Jason Grout wrote:
[clip]
Okay, just one more data point. Our people that are seeing the problem
with numpy returning a non-unitary V also see a non-unitary V being
returned by the test C call to zgesdd. In other words, it really
appears that zgesdd follows
On Tue, 26 Apr 2011 10:33:07 -0500, Jason Grout wrote:
[clip]
I was just looking up the documentation for ZGESDD and noticed that the
value we have for rwork in the numpy call [1] does not match the Lapack
docs. This was changed in Lapack 3.2.2 [2]. I've submitted a pull
request:
On Tue, 26 Apr 2011 14:52:52 -0500, Jason Grout wrote:
[clip]
The updated rwork calculation makes no difference with a 3x4 matrix
(both the old calculation and the new calculation give 66 in the 3x4
case), so I don't think that is affecting anything.
Actually, there *is* a difference for the
/+source/atlas/+bug/778217
So I thought people might like a heads up in case they run into it as
well.
It's terminating with SIGILL, which means that Ubuntu no longer ships
a precompiled version of Atlas usable on machines without SSE3.
--
Pauli Virtanen
On Fri, 06 May 2011 10:06:20 -0700, Nathaniel Smith wrote:
On Fri, May 6, 2011 at 3:04 AM, Pauli Virtanen p...@iki.fi wrote:
It's terminating with SIGILL, which means that Ubuntu no longer ships a
precompiled version of Atlas usable on machines without SSE3.
Yes, that's right.
BTW
Sun, 08 May 2011 14:45:45 -0700, Keith Goodman wrote:
I'm writing a function that accepts four possible dtypes: int32, int64,
float32, float64. The function will call a C extension (wrapped in
Cython). What are the equivalent C types? int, long, float, double,
respectively? Will that work on
Fri, 13 May 2011 17:39:26 +0200, Ondrej Marsalek wrote:
[clip]
while this does not (i.e. still produces just a warning):
$ python -W error -c 'import numpy; x=numpy.ones(2); x+=1j'
numpy.core.numeric.ComplexWarning: Casting complex values to real
discards the imaginary part
This is with
On Sat, 14 May 2011 09:45:12 -0600, Charles R Harris wrote:
[clip]
These are generated by the .. autosummary:: command, so the error
probably lies there.
The problem is that the signature of these routines is written as
remainder(x1, x2[, out])
and not as
remainder(x1, x2,
On Sun, 15 May 2011 10:32:17 +0200, Ralf Gommers wrote:
OK, the format for that part of the signature is in line 4910 in
ufunc_object.c. The question is, which should we fix, the format or the
autosummary?
The format please. That [, out] never made sense to me.
The problem here is that there
Wed, 18 May 2011 07:39:18 -0400, Neal Becker wrote:
The file is pickle saved on i386 and loaded on x86_64. It contains a
numpy array (amongst other things).
On load it says:
RuntimeError: invalid signature
There's no such message in Numpy source code, so the error does
not come directly
On Wed, 18 May 2011 15:09:31 -0700, G Jones wrote:
[clip]
import numpy as np
x = np.memmap('mybigfile.bin', mode='r', dtype='uint8')
print x.shape  # prints (42940071360,) in my case
ndat = x.shape[0]
for k in range(1000):
    y = x[k*ndat/1000:(k+1)*ndat/1000].astype('float32') #The astype
.
--
Pauli Virtanen
to it in the changelog or release notes.
Looks like a bug -- the change was not intentional.
--
Pauli Virtanen
Tue, 24 May 2011 11:41:28 +0200, Jan Wuyts wrote:
bmidata=np.asarray([x for x in data if x['paramname']=='bmi'],
dtype=data.dtype)
Use indexing:
bmidata = data[data['paramname'] == 'bmi']
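In full, with a made-up structured array in the spirit of the example (a sketch; the field names are hypothetical):

```python
import numpy as np

# Hypothetical records: (paramname, value)
data = np.array([(b'bmi', 21.5), (b'height', 180.0), (b'bmi', 24.0)],
                dtype=[('paramname', 'S10'), ('value', 'f8')])

# Boolean indexing keeps the matching records, dtype included,
# without the Python-level list comprehension
bmidata = data[data['paramname'] == b'bmi']
print(len(bmidata))  # 2
```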
%20Numeric/24.2/
No pre-made binaries for Python 2.7, for obvious reasons, however.
--
Pauli Virtanen
at the same time as their
product.
--
Pauli Virtanen
the new page.
--
Pauli Virtanen
Fri, 27 May 2011 21:06:45 +0300, Talla wrote:
[clip: AbinitBandStructuremaker.py]
FYI: try getting a newer version of the abinit software from abinit.org
They seem to have an updated version of this particular script
in their source package.
Any further discussion should probably go to their
Tue, 31 May 2011 11:44:15 -0500, Mark Wiebe wrote:
[clip]
I find it very commendable to strive for consistency, mind you. I'm just
not very comfortable with the idea of modifying old records a
posteriori to adjust to new policies...
I was under the impression this already was the policy,
. Maybe they can be memcpy'd to make aliases,
although that sounds dirty...
--
Pauli Virtanen
On Fri, 10 Jun 2011 18:51:14 -0700, Keith Goodman wrote:
[clip]
Maybe this is the same question, but are you maybe yes, maybe no on this
too:
type(np.sum([1, 2, 3], dtype=np.int32)) == np.int32
False
Note that this is a comparison between two Python types...
Ben, what happens if
On Mon, 13 Jun 2011 11:08:18 -0500, Bruce Southey wrote:
[clip]
OSError:
/usr/local/lib/python3.2/site-packages/numpy/core/multiarray.pyd: cannot
open shared object file: No such file or directory
I think that's a result of Python 3.2 changing the extension module
file naming scheme (IIRC to a
On Tue, 14 Jun 2011 15:37:36 -0500, Mark Wiebe wrote:
[clip]
It would be nice to get a fix to this newly reported regression in:
http://projects.scipy.org/numpy/ticket/1867
I see that regression only in master, not in 1.6.x, so I don't think
it will delay 1.6.1.
Pauli
On Fri, 17 Jun 2011 15:39:21 -0500, Robert Kern wrote:
[clip]
File
/home/bvr/Programs/scipy/scipy/sparse/linalg/eigen/arpack/tests/test_arpack.py,
line 378 in test_ticket_1459_arpack_crash
Well, if it's testing that it crashes, test passed!
Here is the referenced ticket and the relevant
Tue, 21 Jun 2011 16:43:13 -0500, Mark Wiebe wrote:
[clip: __array_wrap__]
Those could stay as they are, and just the ufunc usage of __array_wrap__
can be deprecated. For classes which currently use __array_wrap__, they
would just need to also implement __numpy_ufunc__ to eliminate any
Hi,
Fri, 01 Jul 2011 16:45:47 +0200, Marc Labadens wrote:
I am trying to interface some python code using numpy array with some C
code.
You can read:
http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html#writing-an-extension-module
However, using Cython often saves you from writing
Sat, 16 Jul 2011 12:42:36 +0200, Stefan Krah wrote:
x = ndarray(buffer=bytearray([1,2,3,4,5,6,7,8,9,10]),
            shape=[10], strides=[-1], dtype="B", offset=9)
[clip]
I do not understand the PyBUF_SIMPLE result. According to the C-API docs
a consumer would be allowed to access buf[9], which
Tue, 19 Jul 2011 11:05:18 +0200, Carlos Becker wrote:
Hi, I started with numpy a few days ago. I was timing some array
operations and found that numpy takes 3 or 4 times longer than Matlab on
a simple array-minus-scalar operation.
This looks as if there is a lack of vectorization, even though
On Tue, 19 Jul 2011 17:49:14 +0200, Carlos Becker wrote:
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041
To check if it's an issue with building without
operation
runs exactly at the same speed as C, so this issue must have
a platform-dependent explanation.
--
Pauli Virtanen
Tue, 19 Jul 2011 21:55:28 +0200, Ralf Gommers wrote:
On Sun, Jul 17, 2011 at 11:48 PM, Darren Dale dsdal...@gmail.com
wrote:
In numpy.distutils.system_info:
default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib',
'/usr/lib'], platform_bits)
the same speed
as an equivalent C implementation, which *does* pre-allocation, using
exactly the same benchmark codes as you have posted?
--
Pauli Virtanen
Wed, 20 Jul 2011 09:04:09 +, Pauli Virtanen wrote:
Wed, 20 Jul 2011 08:49:21 +0200, Carlos Becker wrote:
Those are very interesting examples. I think that pre-allocation is
very important, and something similar happens in Matlab if no
pre-allocation is done: it takes 3-4x longer than
Wed, 20 Jul 2011 11:31:41 +, Pauli Virtanen wrote:
[clip]
There is a sharp order-of-magnitude change of speed in malloc+memset of
an array, which is not present in memset itself. (This is then also
reflected in the Numpy performance -- floating point operations probably
don't cost much
doesn't vectorize the slice operations such as
b[:,j-g]
but falls back to Numpy on them. You'll need to convert the slice
notation to a loop to get speed gains.
--
Pauli Virtanen
Fri, 29 Jul 2011 10:52:12 +0200, Yoshi Rokuko wrote:
[clip]
A, B = mod.meth(C, prob=.95)
is it possible to return two arrays?
The way to do this in Python is to build a tuple with
Py_BuildValue("OO", A, B) and return that.
)
--
Pauli Virtanen
Sat, 13 Aug 2011 18:14:11 +0200, Ralf Gommers wrote:
[clip]
We should check if there's still any code in SVN branches that is
useful.
If so the people who are interested in it should move it somewhere else.
Anyone?
All the SVN branches are available in Git, though some are hidden. Do
Sat, 13 Aug 2011 22:00:33 -0400, josef.pktd wrote:
[clip]
Does Trac require svn access to dig out old information? for example
links to old changesets, annotate/blame, ... ?
It does not require HTTP access to SVN, as it looks directly at the
SVN repo on the local disk.
It also probably doesn't
Fri, 19 Aug 2011 12:48:29 +0200, Ralf Gommers wrote:
[clip]
Hi Ognen,
Could you please disable http access to numpy and scipy svn?
Turns out also I had enough permissions to disable this. Now:
$ svn co http://svn.scipy.org/svn/numpy/trunk numpy
svn: Repository moved permanently to
of lines instead of a number of bytes?
First use standard Python I/O functions to determine the number of
bytes to skip at the beginning and the number of data items. Then pass
in `offset` and `shape` parameters to numpy.memmap.
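A minimal sketch of this approach (the filename and header size are made up; a small example file is created first so the snippet is self-contained):

```python
import numpy as np
import os, tempfile

# Made-up layout: a 100-byte text header followed by float64 samples.
fname = os.path.join(tempfile.mkdtemp(), 'data.bin')
header_size = 100

with open(fname, 'wb') as f:          # create a small example file
    f.write(b'#' * header_size)
    np.arange(5, dtype=np.float64).tofile(f)

# offset= skips the header; shape= gives the number of items, so
# numpy does not have to infer the data extent from the file size.
x = np.memmap(fname, dtype=np.float64, mode='r',
              offset=header_size, shape=(5,))
print(x.sum())  # 10.0
```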
--
Pauli Virtanen
in several Python-for-science distributions),
and how difficult would it be for you to ship them with your stuff,
or to require the users to install them.
--
Pauli Virtanen
Hey,
Sat, 27 Aug 2011 12:28:14 +0200, David Cournapeau wrote:
I am finally at a stage where bento can do most of what numscons could
do. I would rather avoid having 3 different set of build scripts
(distutils+bento+numscons) to maintain in the long term, so I would
favor removing numscons
in exact arithmetic, for a computer it may still be
invertible (by a given algorithm). This type of things are not
unusual in floating-point computations.
The matrix condition number (`np.linalg.cond`) is a better measure
of whether a matrix is invertible or not.
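A quick illustration of the point (a sketch; the threshold is arbitrary):

```python
import numpy as np

# Two nearly linearly dependent rows: the determinant is tiny, and
# the huge condition number shows the matrix is numerically singular
# even though it is formally invertible in exact arithmetic.
a = np.array([[1.0, 2.0],
              [1.0, 2.0 + 1e-12]])

print(np.linalg.cond(a))  # ~1e13: results from np.linalg.inv(a)
                          # lose about 13 digits of accuracy
```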
--
Pauli Virtanen
Wed, 07 Sep 2011 12:52:44 -0700, Chris.Barker wrote:
[clip]
In [9]: temp['x'] = 3
In [10]: temp['y'] = 4
In [11]: temp['z'] = 5
[clip]
maybe it wouldn't be any faster, but with re-using temp, and one less
list-tuple conversion, and fewer python type to numpy type conversions,
maybe it
Sun, 11 Sep 2011 03:03:26 -0500, Pengkui Luo wrote:
[clip]
However, converting a large sparse matrix to dense would easily eat up
the memory. Is there a way for np.sign (as well as other ufunc) to take
a sparse matrix as parameter, and return a sparse matrix?
For CSR, CSC, and DIA you can do
Thu, 22 Sep 2011 08:12:12 +0200, Han Genuit wrote:
[clip]
I also noticed that it does strange things when using a list:
c[[True, False, True]]
array([[3, 4, 5],
[0, 1, 2],
[3, 4, 5]])
It casts the list with booleans to an integer array. Probably shouldn't
work like that...
Hi,
Thu, 22 Sep 2011 13:09:51 +0200, Marijn Verkerk wrote:
[clip]
ImportError: libptf77blas.so.3: cannot open shared object file: No such
file or directory
In the __config.py file the folder where libptf77 should be is present.
Any suggestions?
You need to make the dynamic linker able to
On Fri, 23 Sep 2011 11:57:15 -0600, Charles R Harris wrote:
[clip]
We're falling behind. The real world is impacting hard at the moment.
Matthew's suggestion is a good one.
Should we stop recommending using Trac for contributions, and just
say use git? (I think we recommend it somewhere.)
I'd
Yes, this is a known shortcoming of .tofile().
--
Pauli Virtanen
: your qrupdate code is licensed under the
GPL. To include it in Scipy, we need to have it distributable under a
BSD-compatible license. Would you be willing to relicense it?
--
Pauli Virtanen
(words):
i = np.arange(len(word))
canvas[i+j, 2*i] = list(word)
canvas[:,-1] = '\n'
return canvas.tostring().rstrip()
--
Pauli Virtanen
10.10.2011 23:02, Jesper Larsen kirjoitti:
Hi numpy-users
I have a 2d array of shape (ny, nx). I want to broadcast (copy) this
array to a target array of shape (nt, nz, ny, nz) or (nt, ny, nx) so
that the 2d array is repeated for each t and z index (corresponding to
nt and nz). I am not sure
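With a current numpy this can be done without any explicit loop (a sketch; the dimensions are made up, and `np.broadcast_to` returns a read-only view while `np.tile` makes a writable copy):

```python
import numpy as np

ny, nx, nt, nz = 3, 4, 2, 5
a = np.arange(ny * nx, dtype=float).reshape(ny, nx)

# Read-only view: the 2-d array is repeated along the new leading axes
b = np.broadcast_to(a, (nt, nz, ny, nx))

# Writable copy, if the target must be modified afterwards
c = np.tile(a, (nt, nz, 1, 1))

print(b.shape, c.shape)
```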
that it can be vectorized. Without knowing
more about the actual algorithm you are trying to implement, it's not
easy to give more detailed help.
--
Pauli Virtanen
as all the lattice points inside
the bounding box are checked.
The only way to vectorize this I see is to do write the floodfill
algorithm on rectangular supercells, so that the constant costs are
amortized. Sounds a bit messy to do, though.
--
Pauli Virtanen
' in it on the upper right
corner of the page.
--
Pauli Virtanen
13.10.2011 12:59, Alex van der Spek kirjoitti:
gives me a confusing result. I only asked to name the columns and change
their
types to half precision floats.
Structured arrays shouldn't be thought of as arrays with named columns,
as they are somewhat different.
What am I missing? How to do
17.10.2011 15:48, josef.p...@gmail.com kirjoitti:
On Mon, Oct 17, 2011 at 6:17 AM, Pauli Virtanen p...@iki.fi wrote:
[clip]
What am I missing? How to do this?
np.rec.fromarrays(arr.T, dtype=dt)
y.astype(float16).view(dt)
I think this will give surprises if the original array is not in C
understand.
No, master is supposed to be the integration branch with only finished
stuff in it.
--
Pauli Virtanen
(or explicitly creates masked arrays), but
as far as I understand that's about the same situation as with np.ma.
.. [1] http://docs.scipy.org/doc/numpy/reference/arrays.maskna.html
--
Pauli Virtanen
25.10.2011 06:59, Matthew Brett kirjoitti:
res = np.longdouble(2)**64
res-1
36893488147419103231.0
Can you check if long double works properly (not a given) in C on that
platform:
long double x;
x = powl(2, 64);
x -= 1;
printf("%g %Lg\n", (double)x, x);
or, in
gives something wrong.
Can you try to check this by doing something like:
- do some set of calculations using np.longdouble in Numpy
(that requires the extra accuracy)
- at the end, cast the result back to double
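Something along these lines (a sketch; whether the extra precision is actually present depends on the platform's long double type):

```python
import numpy as np

# 2**64 - 1 needs 64 mantissa bits; double has 53, x87 long double 64.
res = np.longdouble(2)**64 - np.longdouble(1)

# Cast back to double only at the very end; if longdouble carries
# real extra precision, the subtraction happened before any rounding.
back = float(res)

# How many mantissa bits this platform's longdouble really has
print(np.finfo(np.longdouble).nmant)
```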
--
Pauli Virtanen
before formatting one. In the latter case, one would have to maintain a
list of broken C libraries (ugh).
--
Pauli Virtanen
why the error occurs in
only some of the cases.
--
Pauli Virtanen
26.10.2011 10:07, Nathaniel Smith kirjoitti:
On Tue, Oct 25, 2011 at 4:49 PM, Matthew Brett matthew.br...@gmail.com
wrote:
I guess from your answer that such a warning would be complicated to
implement, and if that's the case, I can imagine it would be low
priority.
I assume the problem
C library. It's been in a
merge-limbo for some time now, however.
--
Pauli Virtanen
31.10.2011 09:44, mark florisson kirjoitti:
[clip]
Ah, that's too bad. Is it anywhere near ready, or was it abandoned for
ironclad? Could you point me to the code?
It's quite ready and working, and as far as I understand, Enthought is
shipping it. I haven't used it, though.
The code is here:
04.11.2011 17:31, Gary Strangman kirjoitti:
[clip]
The question does still remain what to do when performing operations like
those above in IGNORE cases. Perform the operation underneath? Or not?
I have a feeling that if you don't start by mathematically defining the
scalar operations first,
a[0,0] = 2.0
print a[0,0]
[ 2. 2. 2. 2.]
a[1,0] = 3.0
a[0,1] = a[0,0] * a[1,0]
print a[0,1]
[ 6. 6. 6. 6.]
--
Pauli Virtanen
autoreload.run()
to some startup file.
Created: Thomas Heller, 2000-04-17
Modified: Pauli Virtanen, 2006
# $Id: autoreload.py 3117 2006-09-27 20:28:46Z pauli $
#
# $Log: autoreload.py,v $
# Revision 1.9 2001/11/15 18:41:18 thomas
# Cleaned up and made working again before posting to c.l.p
wrapper library) Maybe this should be fixed?
--
Pauli Virtanen
Neal Becker [EMAIL PROTECTED] kirjoitti:
Suppose I have a function F(), which is defined for 1-dim arguments. If the
user passes an n > 1 dim array, I want to apply F to each 1-dim view.
For example, for a 2-d array, apply F to each row and return a 2-d result.
For a 3-d array, select each
pointers,
type(Typename), pointer, from the Python side to the Fortran side, as opaque
handles.
--
Pauli Virtanen
su, 2007-11-04 kello 11:13 -1000, Eric Firing kirjoitti:
A quick scan of the tickets did not show me anything like the following,
but I might have missed it. The attached script generates a segfault on
my ubuntu feisty system with svn numpy. Running inside of ipython, the
segfault occurs
tm pbe nx constant_tsc
pni monitor est tm2 xtpr
so imho I do have SSE2.
If it's not wrong instructions for the processor type, this reminds me
of http://scipy.org/scipy/numpy/ticket/551 which also seemed to be
SSE2-specific on Debian/Ubuntu in my tests.
--
Pauli Virtanen
:
LD_LIBRARY_PATH=/usr/lib/atlas python crasher.py
--
Pauli Virtanen
function is interpreting numbers beginning with
zero as octal, and recognizing also hexadecimals.
This is a bit surprising, and whether this is the desired behavior is
questionable.
--
Pauli Virtanen
la, 2008-01-19 kello 14:15 +0900, David Cournapeau kirjoitti:
Pauli Virtanen wrote:
pe, 2008-01-18 kello 18:06 +0900, David Cournapeau kirjoitti:
Hi there,
I got a mercurial mirror of numpy available. I put some basic info
on the wiki
http://scipy.org/scipy/numpy/wiki/HgMirror
([809119794, 825375289])
I guess some more testcases should be written...
--
Pauli Virtanen
ti, 2008-02-19 kello 13:38 -0500, Neal Becker kirjoitti:
Does numpy/scipy have a partial_sum and adj_difference function?
partial_sum[i] = \sum_{j=0}^{i} x[j]
adj_diff[i] = x[i] - x[i-1] : i > 1, x[i] otherwise
cumsum and diff do something like this:
import numpy as np
a = np.array([1, 2, 3, 4, 5, 3, 1])
np.cumsum(a)  # partial sums
np.diff(a)    # adjacent differences, one element shorter
%) in the complete nonlinear solution
loop than the one using the F90 code wrapped with f2py.
A silly question: did you check directly that the pure-numpy code and
the F90 code give the same results for the Jacobian-vector product
J(z0) z for some randomly chosen vectors z0, z?
--
Pauli Virtanen
functions to npy_math for 1.4.0: could be useful
for the next Scipy? This is pretty quick to do, I can just write up some
more tests one evening and commit.
--
Pauli Virtanen
, and noticed that
they were different (sha1sum).
--
Pauli Virtanen
to axis=None. These
seemed to be required by at least the masked arrays unit tests...
--
Pauli Virtanen
((3, 4), dtype=np.object)
filler(a, a);
array([[[], [], [], []],
[[], [], [], []],
[[], [], [], []]], dtype=object)
a[0,3].append(9)
a
array([[[], [], [], [9]],
[[], [], [], []],
[[], [], [], []]], dtype=object)
--
Pauli Virtanen
ma, 2009-11-09 kello 23:13 +, Neil Crighton kirjoitti:
I've written some release notes (below) describing the changes to
arraysetops.py. If someone with commit access could check that these sound ok
and add them to the release notes file, that would be great.
Thanks, added!
Pauli