On Tue, Mar 10, 2009 at 2:49 PM, Charles R Harris charlesr.har...@gmail.com
wrote:
On Tue, Mar 10, 2009 at 3:16 PM, Stéfan van der Walt ste...@sun.ac.za wrote:
2009/3/10 Pauli Virtanen p...@iki.fi:
Nonzero Python object, hence True. Moreover, it's also True in Python:
Also in C:
On Sun, Jul 20, 2008 at 3:47 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Sun, Jul 20, 2008 at 17:42, Charles R Harris
[EMAIL PROTECTED] wrote:
Hi All,
I fixed ticket #754, but it leads to a ton of problems. The original
discussion is here. The problems that arise come from conversion to
On Wed, May 21, 2008 at 12:32 AM, Alexandra Geddes [EMAIL PROTECTED]
wrote:
Hi.
1. Is there a module or other code to write arrays to databases (they want
access databases)?
If you have $$, I think you can use mxODBC. Otherwise, I believe that you
have to use COM as Chris suggested.
2.
On Sat, May 10, 2008 at 1:37 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/5/10 Nathan Bell [EMAIL PROTECTED]:
On Sat, May 10, 2008 at 3:05 PM, Anne Archibald
[EMAIL PROTECTED] wrote:
I don't expect my opinion to prevail, but the point is that we do not
even have enough consensus to
On Fri, May 9, 2008 at 6:43 AM, Travis Oliphant [EMAIL PROTECTED]
wrote:
Hi all,
I'm having trouble emailing this list from work, so I'm using a
different email address.
After Nathan Bell's recent complaints, I'm a bit more uncomfortable with
the matrix change to scalar indexing. It
On Tue, May 6, 2008 at 9:31 AM, Andy Cheesman [EMAIL PROTECTED]
wrote:
Hi nice numpy people
I was wondering if anyone could shed some light on how to distinguish an
empty array of a given shape and a zeros array of the same dimensions.
An empty array is just uninitialized, while a zeros
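The distinction the reply is drawing can be shown directly (a minimal sketch; `np.empty` contents are arbitrary garbage, so only shape and the `zeros` guarantee are checked):

```python
import numpy as np

z = np.zeros((2, 3))   # allocated and filled with 0.0
e = np.empty((2, 3))   # allocated but NOT initialized: contents are
                       # whatever bytes happened to be in that memory

# Both have identical shape and dtype; only the guaranteed contents differ.
assert z.shape == e.shape == (2, 3)
assert (z == 0).all()  # zeros() guarantees this; empty() does not
```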
On Mon, May 5, 2008 at 5:44 AM, David Cournapeau
[EMAIL PROTECTED] wrote:
Hi,
While working again on the fftpack module, to clean things up and
speed some backends (in particular fftw3, which is really sub-optimal
right now), I remembered how much unaligned data pointer in numpy arrays
On Sat, May 3, 2008 at 5:31 PM, Keith Goodman [EMAIL PROTECTED] wrote:
On Sat, May 3, 2008 at 5:05 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
Robert Kern wrote:
I can get a ~20% improvement with the following:
In [9]: def mycut(x, i):
   ...:     A = x[:i,:i]
   ...:
On Wed, Apr 30, 2008 at 8:16 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/4/30 Charles R Harris [EMAIL PROTECTED]:
Some operations on stacks of small matrices are easy to get, for
instance,
+,-,*,/, and matrix multiply. The last is the interesting one. If A and B
are stacks of
On Tue, Apr 29, 2008 at 2:07 PM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
On Tue, Apr 29, 2008 at 11:03:58PM +0200, Anne Archibald wrote:
I am puzzled by this. What is the rationale for x[i,:] not being a 1-d
object?
It breaks A*B[i, :] where A and B are matrices.
Shouldn't that be
On Tue, Apr 29, 2008 at 2:43 PM, Christopher Barker [EMAIL PROTECTED]
wrote:
Timothy Hochberg wrote:
However, there is matrix related
stuff that is at best poorly supported now, namely operations on stacks
of arrays (or vectors).
Tim, this is important, but also appears
, indeed we have it with plain old
arrays, so I think that's really beside the point.
Not entirely, there's no good way to deal with arrays of matrices at
present. This could be fixed by tweaking dot, but it could also be part of a
reform of the matrix class.
[CHOP]
Timothy Hochberg wrote
[CHOP]
The proposals thus far don't address two of the major issues I have with the
matrix class:
1. The matrices and arrays should become more alike if possible and
should share more of the same code base. From what I've seen, the people who
write the code (for numpy) don't actually
On Wed, Apr 9, 2008 at 7:01 AM, David Huard [EMAIL PROTECTED] wrote:
Hello Jarrod and co.,
here is my personal version of the histogram saga.
The current version of histogram puts in the rightmost bin all values
larger than range, but does not put in the leftmost bin all values smaller
On Mon, Apr 7, 2008 at 9:57 AM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
On Mon, Apr 07, 2008 at 06:22:28PM +0200, Stéfan van der Walt wrote:
You're only a beginner for a short while, and after that the lack of
namespaces really starts to bite. I am all in favour of catering for
those who
On Mon, Apr 7, 2008 at 10:30 AM, Gael Varoquaux
[EMAIL PROTECTED] wrote:
On Mon, Apr 07, 2008 at 10:16:22AM -0700, Timothy Hochberg wrote:
I prefer 'all' for this since it has the correct meaning. 'api'
assuming
that one can remember what it means doesn't fit. The 'all' module
would
On Fri, Apr 4, 2008 at 12:47 PM, Robert Kern [EMAIL PROTECTED] wrote:
On Fri, Apr 4, 2008 at 9:56 AM, Will Lee [EMAIL PROTECTED] wrote:
I understand the implication for the floating point comparison and the
need
for allclose. However, I think in a doctest context, this behavior
makes
On Fri, Apr 4, 2008 at 3:31 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
On 04/04/2008, Alan G Isaac [EMAIL PROTECTED] wrote:
On Fri, 4 Apr 2008, Gael Varoquaux apparently wrote:
I really think numpy should be as thin as possible, so
that you can really say that it is only an array
[SNIP]
The text is getting kind of broken up so I'm chopping it and starting from
scratch.
To the question of whether it's a good idea to change the default behavior
of mean and friends to not reduce over the chosen axis, I have to agree with
Robert: too much code breakage for too little gain, so
On Mon, Mar 10, 2008 at 11:50 AM, Francesc Altet [EMAIL PROTECTED] wrote:
A Monday 10 March 2008, Charles R Harris escrigué:
On Mon, Mar 10, 2008 at 11:08 AM, Francesc Altet [EMAIL PROTECTED]
wrote:
Hi,
In order to allow in-kernel queries in PyTables (www.pytables.org)
work with
On Mon, Mar 3, 2008 at 12:57 PM, Ray Schumacher [EMAIL PROTECTED]
wrote:
I'm trying to figure out what numpy.correlate does, and, what are people
using to calculate the phase shift of 1D signals?
(I coded one routine that uses rfft, conjugate, ratio, irfft, and argmax
based on a paper by
On Mon, Mar 3, 2008 at 2:45 PM, Ray Schumacher [EMAIL PROTECTED]
wrote:
At 01:24 PM 3/3/2008, you wrote:
If you use 'same' or 'full' you'll end up with different
amounts of offset. I imagine that this is due to the way the data is
padded.
The offset should be deterministic based on the
On Wed, Feb 27, 2008 at 4:10 PM, Stuart Brorson [EMAIL PROTECTED] wrote:
I have been poking at the limits of NumPy's handling of powers of
zero. I find some results which are disturbing, at least to me.
Here they are:
[SNIP]
** 0^(x+y*i): This one is tricky; please bear with me and
On Thu, Feb 28, 2008 at 1:47 PM, Andrea Gavana [EMAIL PROTECTED]
wrote:
Hi All,
I have some problems in figuring out a solution for an issue I am
trying to solve. I have a 3D grid of dimension Nx, Ny, Nz; for every
cell of this grid, I calculate the cell centroids (with the cell
On Mon, Feb 4, 2008 at 6:56 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
Can this be changed:
If I have a list L the usual N.asarray( L ) works well -- however I
just discovered that N.asarray( reversed( L ) ) breaks my code
Apparently reversed( L ) returns an iterator object,
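The workaround the thread converges on can be sketched like this (assuming current NumPy; `reversed()` returns an iterator with no length, which `asarray` cannot turn into a proper array):

```python
import numpy as np

L = [1.0, 2.0, 3.0]

# reversed() returns an iterator, not a sequence, so materialize it first:
a = np.asarray(list(reversed(L)))

# Or build directly from the iterator when the dtype is known up front:
b = np.fromiter(reversed(L), dtype=float)

assert a.tolist() == [3.0, 2.0, 1.0]
assert b.tolist() == [3.0, 2.0, 1.0]
```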
On Mon, Feb 4, 2008 at 11:59 AM, Stuart Brorson [EMAIL PROTECTED] wrote:
round - works fine.
ceil - throws exception: 'complex' object has no attribute 'ceil'
floor - throws exception: 'complex' object has no attribute 'floor'
fix - throws exception: 'complex' object has no
On Mon, Feb 4, 2008 at 10:34 AM, Stuart Brorson [EMAIL PROTECTED] wrote:
Hi --
I'm fiddling with NumPy's chopping and truncating operators: round,
fix, ceil, and floor. In the case where they are passed real args,
they work just fine. However, I find that when they are passed
complex
On Jan 30, 2008 10:10 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
[SNIP]
IIRC, the way to do closures in Python is something like
In [5]: def factory(x):
   ...:     def f():
   ...:         print x
   ...:     f.x = x
   ...:     return f
   ...:
In [6]: f = factory(Hello
On Jan 30, 2008 12:43 PM, Anne Archibald [EMAIL PROTECTED] wrote:
On 30/01/2008, Francesc Altet [EMAIL PROTECTED] wrote:
A Wednesday 30 January 2008, Nadav Horesh escrigué:
In the following piece of code:
import numpy as N
R = N.arange(9).reshape(3,3)
ax = [1,2]
R
On Jan 14, 2008 12:37 PM, Neal Becker [EMAIL PROTECTED] wrote:
I've never liked that python silently ignores slices with out of range
indexes. I believe this is a source of bugs (it has been for me). It
goes
completely counter to the python philosophy.
I vote to ban them from numpy.
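The behavior being complained about is plain-Python slice clipping, which is easy to demonstrate (a sketch of the language semantics, not a NumPy-specific rule):

```python
x = [0, 1, 2]

# Out-of-range *indexing* raises, but out-of-range *slicing* is clipped:
assert x[1:100] == [1, 2]   # silently truncated to the valid range
assert x[5:10] == []        # entirely out of range -> empty, no error

try:
    x[5]
except IndexError:
    pass                    # plain indexing does raise
```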
On Jan 11, 2008 9:59 PM, Basilisk96 [EMAIL PROTECTED] wrote:
On Jan 11, 2008, Colin J. Williams wrote:
You make a good case that it's good not
to need to ponder what sort of
vector you are dealing with.
My guess is that the answer to your
question is no but I would need to
play
On Jan 10, 2008 8:53 AM, Stefan van der Walt [EMAIL PROTECTED] wrote:
Hi all,
We currently use an array scalar of value False as the mask in
MaskedArray. I would like to make sure that the mask value cannot be
modified, but when I try
import numpy as np
x = np.bool_(False)
Another possible approach is to treat downcasting similar to underflow. That
is, give it its own flag in the errstate and people can set it to ignore,
warn or raise on downcasting as desired. One could potentially have two
flags, one for downcasting across kinds (float-int, int-bool) and one for
On Jan 7, 2008 2:00 PM, Charles R Harris [EMAIL PROTECTED] wrote:
Hi,
On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:
Another possible approach is to treat downcasting similar to underflow.
That is, give it its own flag in the errstate and people can set it to
ignore
On Jan 4, 2008 3:28 PM, Scott Ransom [EMAIL PROTECTED] wrote:
On Friday 04 January 2008 05:17:56 pm Stuart Brorson wrote:
I realize NumPy != Matlab, but I'd wager that most users would
think that this is the natural behavior..
Well, that behavior won't happen. We won't mutate the
Here's a baroque way to do it using generated code:
def cg_combinations(seqs):
n = len(seqs)
chunks = ["def f(%s):" % ', '.join('s%s' % i for i in range(n))]
for i in reversed(range(n)):
    chunks.append("    " * (n - i) + "for x%s in s%s:" % (i, i))
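The generated nested-for-loop function above produces a Cartesian product; for reference, the stdlib provides the same iteration directly (a hedged equivalent for comparison, not the original author's final code):

```python
from itertools import product

seqs = [(1, 2), ('a', 'b'), (True, False)]

# itertools.product yields the same tuples the generated nested loops would,
# with the last sequence varying fastest.
combos = list(product(*seqs))

assert len(combos) == 8
assert combos[0] == (1, 'a', True)
assert combos[-1] == (2, 'b', False)
```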
On Dec 12, 2007 7:29 AM, Søren Dyrsting [EMAIL PROTECTED] wrote:
Hi all
I need to perform computations involving large arrays. A lot of rows and
no more than e.g. 34 columns. My first choice is python/numpy because I'm
already used to code in matlab.
However I'm experiencing memory
On Dec 10, 2007 7:21 AM, Hans Meine [EMAIL PROTECTED] wrote:
Hi again,
I noticed that clip() needs two parameters, but wouldn't it be nice and
straightforward to just pass min= or max= as keyword arg?
In [2]: a = arange(10)
In [3]: a.clip(min = 2, max = 5)
Out[3]: array([2, 2, 2, 3, 4,
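For the record, current NumPy does accept one-sided clipping; pass `None` for the unconstrained bound (a sketch against modern NumPy semantics, which postdate this thread):

```python
import numpy as np

a = np.arange(10)

low  = np.clip(a, 2, None)    # only a lower bound
high = np.clip(a, None, 5)    # only an upper bound
both = a.clip(min=2, max=5)   # the keyword form asked about

assert low.tolist()  == [2, 2, 2, 3, 4, 5, 6, 7, 8, 9]
assert high.tolist() == [0, 1, 2, 3, 4, 5, 5, 5, 5, 5]
assert both.tolist() == [2, 2, 2, 3, 4, 5, 5, 5, 5, 5]
```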
On Dec 4, 2007 3:05 AM, David Cournapeau [EMAIL PROTECTED]
wrote:
Gael Varoquaux wrote:
On Tue, Dec 04, 2007 at 02:13:53PM +0900, David Cournapeau wrote:
With recent kernels, you can get really good latency if you do it right
(around 1-2 ms worst case under high load, including high IO
On Nov 28, 2007 12:59 AM, Stefan van der Walt [EMAIL PROTECTED] wrote:
On Tue, Nov 27, 2007 at 11:07:30PM -0700, Charles R Harris wrote:
This is not a trivial problem, as you can see by googling mixed integer
least
squares (MILS). Much will depend on the nature of the parameters, the
On Nov 26, 2007 2:30 PM, Hans-Andreas Engel [EMAIL PROTECTED]
wrote:
Dear all:
After using numpy for several weeks, I am very happy about it and
deeply impressed about the performance improvements it brings in my
python code. Now I have stumbled upon a problem, where I cannot use
numpy to
and use some internal
function I'd probably get even faster.
Could .flat() help me somehow?
I doubt it. I'll look at this a little later if I can find some time and see
what I can come up with.
Timothy Hochberg wrote:
On Nov 27, 2007 11:38 AM, Giorgio F. Gilestro [EMAIL PROTECTED
On Nov 15, 2007 9:11 AM, Hans Meine [EMAIL PROTECTED] wrote:
Am Donnerstag, 15. November 2007 16:29:12 schrieb Warren Focke:
On Thu, 15 Nov 2007, George Nurser wrote:
It looks to me like
a,b = (zeros((2,)),)*2
is equivalent to
x= zeros((2,))
a,b=(x,)*2
Correct.
If this
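The aliasing confirmed above ("Correct") can be demonstrated in a few lines; tuple repetition copies the reference, not the array:

```python
import numpy as np

# (x,) * 2 repeats the *reference*, so a and b are the same object:
a, b = (np.zeros(2),) * 2
assert a is b
a[0] = 99.0
assert b[0] == 99.0            # mutating a is visible through b

# To get independent arrays, create one per name:
c, d = np.zeros(2), np.zeros(2)
assert c is not d
```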
On Nov 14, 2007 9:08 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
First here is some test I ran, where I think the last command shows a bug:
a = N.arange(4); a.shape=2,2; a
[[0 1]
[2 3]]
aa = N.array((a,a,a)); aa
[[[0 1]
[2 3]]
[[0 1]
[2 3]]
[[0 1]
[2 3]]]
On Nov 13, 2007 6:57 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
On Nov 13, 2007 2:18 PM, Stefan van der Walt [EMAIL PROTECTED] wrote:
Hi Sebastian
On Tue, Nov 13, 2007 at 01:11:33PM +0100, Sebastian Haase wrote:
Hi,
I need to check the array dtype in a way that it is ignoring
On Nov 10, 2007 3:33 PM, Michael McNeil Forbes [EMAIL PROTECTED]
wrote:
Why are numpy warnings printed rather than issued using the standard
warnings library? I know that the behaviour can be controlled by
seterr(), but it seem rather unpythonic not to use the warnings library.
Is there an
On Nov 13, 2007 11:48 AM, Michael McNeil Forbes [EMAIL PROTECTED]
wrote:
On 13 Nov 2007, at 8:46 AM, Travis E. Oliphant wrote:
Michael McNeil Forbes wrote:
Why are numpy warnings printed rather than issued using the standard
warnings library? ... in util.py ...
The warn option
On Nov 8, 2007 3:28 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
On Nov 7, 2007 6:46 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:
On Nov 7, 2007 10:35 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
On Nov 7, 2007 5:23 PM, Matthieu Brucher [EMAIL PROTECTED]
wrote:
I
On Nov 7, 2007 10:35 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
On Nov 7, 2007 5:23 PM, Matthieu Brucher [EMAIL PROTECTED]
wrote:
I don't understand. I'm thinking of most math functions in the
C-library. In C a boolean is just an integer of 0 or 1 (quasi, by
definition).
On Nov 6, 2007 7:22 AM, Lisandro Dalcin [EMAIL PROTECTED] wrote:
Mmm...
It looks as if 'mask' is being internally converted from
[True, False, False, False, True]
to
[1, 0, 0, 0, 1]
so you are finally getting
x[1], x[0], x[0], x[0], x[1]
That would be my guess as well. And, it looks
On Oct 31, 2007 3:18 AM, Francesc Altet [EMAIL PROTECTED] wrote:
[SNIP]
Incidentally, all the improvements of the PyTables flavor of numexpr
have been reported to the original authors, but, for the sake of
keeping numexpr simple, they decided to implement only some of them.
However, people
On 10/28/07, Matthieu Brucher [EMAIL PROTECTED] wrote:
Little correction, only c[(2,3)] gives me what I expect, not c[[2,3]],
which
is even stranger.
c[(2,3)] is the same as c[2,3] and obviously works as you expected.
Well, this is not indicated in the documentation.
This is true
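The distinction at issue is tuple indexing versus list (fancy) indexing, which is easy to verify:

```python
import numpy as np

c = np.arange(16).reshape(4, 4)

# A tuple index is multidimensional indexing: c[(2, 3)] == c[2, 3].
assert c[(2, 3)] == c[2, 3] == 11

# A *list* index is fancy indexing along the first axis: it selects rows.
rows = c[[2, 3]]
assert rows.shape == (2, 4)
assert (rows == c[2:4]).all()
```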
On 10/26/07, Sebastian Haase [EMAIL PROTECTED] wrote:
On 10/26/07, David Cournapeau [EMAIL PROTECTED] wrote:
P.S: IMHO, this is one of the main limitation of numpy (or any language
using arrays for speed; and this is really difficult to optimize: you
need compilation, JIT or similar to
On 10/16/07, Julien Hillairet [EMAIL PROTECTED] wrote:
2007/10/16, Bill Baxter [EMAIL PROTECTED]:
dot() also serves as Numpy's matrix multiply function. So it's trying
to interpret that as a (3,N) matrix times a (3,N) matrix.
See examples here:
On 10/10/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 11/10/2007, Robert Kern [EMAIL PROTECTED] wrote:
Appending to a list then converting the list to an array is the most
straightforward way to do it. If the performance of this isn't a
problem, I
recommend leaving it alone.
Just a
On 10/3/07, Christopher Barker [EMAIL PROTECTED] wrote:
Stefan van der Walt wrote:
The current behavior is consistent and well
defined:
a[x] == a[int(x)]
This is all possible because of PEP 357:
I think that the current behavior has always been possible; arbitrary
objects can be passed
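The PEP 357 mechanism referred to is the `__index__` protocol; a toy sketch (the class name here is illustrative, not from the thread):

```python
import numpy as np

class AsTwo:
    """Toy object usable as an index via PEP 357's __index__ protocol."""
    def __index__(self):
        return 2

a = np.arange(10)
i = AsTwo()

# Any object implementing __index__ is accepted where an integer index is:
assert a[i] == a[2] == 2
assert [10, 20, 30][i] == 30   # plain Python sequences honor it too
```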
On 10/2/07, Christopher Barker [EMAIL PROTECTED] wrote:
Jarrod Millman wrote:
I am hoping that most of you agree with the general principle of
bringing NumPy and SciPy into compliance with the standard naming
conventions.
+1
3. When we release NumPy 1.1, we will convert all (or
On 10/1/07, Eagle Jones [EMAIL PROTECTED] wrote:
New to python and numpy; hopefully I'm missing something obvious. I'd
like to be able to slice an array with a name. For example:
_T = 6:10
_R = 10:15
A = identity(20)
foo = A[_T, _R]
This doesn't work. Nor does _T=range(6:10); _R =
If you take a look at the source of numpy's linalg.py, you'll see that
solve uses dgesv/zgesv for real/complex solves. If you Google dgesv, you
get:
DGESV computes the solution to a real system of linear equations
A * X = B,
where A is an N-by-N matrix and X and B are N-by-NRHS
On 9/14/07, Joris De Ridder [EMAIL PROTECTED] wrote:
the question is how to reduce user astonishment.
IMHO this is exactly the point. There seems to be two questions here:
1) do we want to reduce user astonishment, and 2) if yes, how could
we do this? Not everyone seems to be convinced of
On 9/11/07, Robert Kern [EMAIL PROTECTED] wrote:
Mike Ressler wrote:
The following seems to be a wart: is it expected?
Set up a 10x10 array and some indexing arrays:
a=arange(100)
a.shape=(10,10)
q=array([0,2,4,6,8])
r=array([0,5])
Suppose I want to extract only the even
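The snippet is truncated, but the wart it describes is the classic one: `a[q, r]` pairs the index arrays elementwise (shapes `(5,)` and `(2,)` don't broadcast), while `np.ix_` builds the cross product, which is usually what is wanted here:

```python
import numpy as np

a = np.arange(100).reshape(10, 10)
q = np.array([0, 2, 4, 6, 8])
r = np.array([0, 5])

# np.ix_ builds open meshes so q and r index rows and columns independently:
sub = a[np.ix_(q, r)]

assert sub.shape == (5, 2)
assert sub[0].tolist() == [0, 5]
assert sub[1].tolist() == [20, 25]
```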
On 9/9/07, Orest Kozyar [EMAIL PROTECTED] wrote:
In the following output (see below), why would x[1,None] work, but
x[1,None,2] or even x[1,2,None] not work?
None is the same thing as newaxis (newaxis is just an alias for None). Armed
with that tidbit, a little perusing of the docs should
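The alias mentioned above is literal, and its effect on shapes is easy to see:

```python
import numpy as np

assert np.newaxis is None      # newaxis is literally an alias for None

x = np.arange(12).reshape(3, 4)

# Using None in an index inserts a new length-1 axis at that position:
assert x[1, None].shape == (1, 4)
assert x[:, None, :].shape == (3, 1, 4)
```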
On 9/6/07, George Sakkis [EMAIL PROTECTED] wrote:
On Sep 5, 12:29 pm, Francesc Altet [EMAIL PROTECTED] wrote:
A Wednesday 05 September 2007, George Sakkis escrigué:
I was surprised to see that an in-place modification of a 2-d array
turns out to be slower from the respective
On 8/29/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 29/08/2007, Timothy Hochberg [EMAIL PROTECTED] wrote:
The main inconsistency I see above is that resize appears to only
require
ownership of the data if in fact the number of items changes. I don't
think
that's actually a bug, but I
On 8/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
I still don't see why the method is needed at all. Given the conditions on
the array, the only thing it buys you over the resize function or a reshape
is the automatic deletion of the old memory if new memory is allocated.
Can you
On 8/24/07, Christopher Barker [EMAIL PROTECTED] wrote:
[SNIP]
You can have several different NaN,
You can? I thought NaN was defined by IEEE 754 as a particular bit
pattern (one for each precision, anyway).
There's more than one way to spell NaN in binary and they tend to mean
different
On 8/24/07, Timothy Hochberg [EMAIL PROTECTED] wrote:
On 8/24/07, Christopher Barker [EMAIL PROTECTED] wrote:
[SNIP]
You can have several different NaN,
You can? I thought NaN was defined by IEEE 754 as a particular bit
pattern (one for each precision, anyway).
There's more than
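The "more than one way to spell NaN" claim can be checked at the bit level: IEEE 754 defines NaN as exponent all ones plus any nonzero mantissa payload, so many bit patterns qualify (the payload value below is arbitrary, chosen only to differ from the default quiet NaN):

```python
import math
import struct

def bits(x):
    """Raw 64-bit pattern of a double."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

# The default quiet NaN...
nan_a = float('nan')
# ...and a quiet NaN with a different (arbitrary) mantissa payload:
nan_b = struct.unpack('<d', struct.pack('<Q', 0x7FF8DEADBEEF0001))[0]

assert math.isnan(nan_a) and math.isnan(nan_b)
assert bits(nan_a) != bits(nan_b)  # same "NaN-ness", different bit patterns
```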
On 8/21/07, Charles R Harris [EMAIL PROTECTED] wrote:
On 8/20/07, Geoffrey Zhu [EMAIL PROTECTED] wrote:
Hi Everyone,
I am wondering if there is an extended outer product. Take the
example in Guide to Numpy. Instead of doing an multiplication, I
want to call a custom function for
On 8/21/07, Anne Archibald [EMAIL PROTECTED] wrote:
On 21/08/07, Timothy Hochberg [EMAIL PROTECTED] wrote:
This is just a general comment on recent threads of this type and not
directed specifically at Chuck or anyone else.
IMO, the emphasis on avoiding FOR loops at all costs
On 8/8/07, mark [EMAIL PROTECTED] wrote:
I am trying to figure out a way to define a vectorized function inside
a class.
This is what I tried:
class test:
    def __init__(self):
        self.x = 3.0
    def func(self, y):
        rv = self.x
        if y
Nicely done Travis. Working code is always better than theory. I copied your
interface and used the brute-force, non-numpy approach to construct the
pivot table. On the one hand, it doesn't preserve the order that the entries
are discovered in as the original does. On the other hand, it's about
On 8/2/07, Lisandro Dalcin [EMAIL PROTECTED] wrote:
using numpy-1.0.3, I believe there is a reference leak somewhere.
Using a debug build of Python 2.5.1 (--with-pydebug), I get the
following
import sys, gc
import numpy
def testleaks(func, args=(), kargs={}, repeats=5):
    for i in
On 7/30/07, Geoffrey Zhu [EMAIL PROTECTED] wrote:
Hi Timothy,
On 7/30/07, Timothy Hochberg [EMAIL PROTECTED] wrote:
On 7/30/07, Geoffrey Zhu [EMAIL PROTECTED] wrote:
Hi Everyone,
I am wondering what is the best (and fast) way to build a pivot table
aside from the 'brute
On 7/31/07, Eric Emsellem [EMAIL PROTECTED] wrote:
Hi,
I discovered a bug in one of my program probably due to a round-off
problem in a arange statement.
I use something like:
step = (end - start) / (npix - 1.)
gridX = num.arange(start-step/2., end+step/2., step)
where I wish to get a
On 7/31/07, Fernando Perez [EMAIL PROTECTED] wrote:
Hi all,
consider this little script:
from numpy import poly1d, float, float32
p=poly1d([1.,2.])
three=float(3)
three32=float32(3)
print 'three*p:',three*p
print 'three32*p:',three32*p
print 'p*three32:',p*three32
which produces
On 7/30/07, Geoffrey Zhu [EMAIL PROTECTED] wrote:
Hi Everyone,
I am wondering what is the best (and fast) way to build a pivot table
aside from the 'brute force way?'
What's the brute force way? It's easier to offer an improved suggestion if
we know what we're trying to beat.
I want to
On 7/20/07, Vincent Nijs [EMAIL PROTECTED] wrote:
Gael,
Sounds very interesting! Would you mind sharing an example (with code if
possible) of how you organize your experimental data in pytables. I have
been thinking about how I might organize my data in pytables and would luv
to hear how an
On 7/20/07, Charles R Harris [EMAIL PROTECTED] wrote:
[SNIP]
I expect using sqrt(x) will be faster than x**.5.
You might want to check that. I believe that x**0.5 is one of the magic
special cases that is optimized to run fast (by calling sqrt in this case).
IIRC the full set is [-1, 0,
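Whether or not a given NumPy version special-cases the `**` exponent, the two spellings are numerically interchangeable, which is straightforward to check (a sketch; the fast-path claim itself is the poster's recollection, not verified here):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 5)

# Whatever the internal fast path does, the results should agree:
assert np.allclose(np.sqrt(x), x ** 0.5)
assert np.allclose(1.0 / (x[1:] ** 2), x[1:] ** -2.0)
```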
On 7/20/07, Kevin Jacobs [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:
On 7/20/07, Kevin Jacobs [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:
On 7/20/07, Charles R Harris [EMAIL PROTECTED] wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also
On 7/17/07, Keith Goodman [EMAIL PROTECTED] wrote:
On 7/17/07, David Cournapeau [EMAIL PROTECTED] wrote:
I noticed that min and max already ignore Nan, which raises the
question: why are there nanmin and nanmax functions ?
Using min and max when you have NaNs is dangerous. Here's an example:
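The example itself is cut off above; a hedged reconstruction of the point against current NumPy (older versions could silently return a wrong non-NaN answer depending on the NaN's position):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])

# In current NumPy, min/max propagate NaN rather than ignoring it:
assert np.isnan(np.min(a))

# The nan* variants skip NaNs, which is usually what you want:
assert np.nanmin(a) == 1.0
assert np.nanmax(a) == 3.0
```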
On 7/15/07, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
I compared for a 256x256 float32 normal-noise (x0=100,sigma=1) array
the times to do
1./ (a*a)
vs.
a**-2
U.timeIt('1./(a*a)', 1000)
(0.00090877471871, 0.00939644563778, 0.00120674694689, 0.00068554628)
U.timeIt('a**-2', 1000)
FWIW, I'm fairly certain that the binaries for win32 are compiled using
mingw, so I'm pretty certain that it's possible. I use MSVC myself, so I
can't be of much help though.
On 7/13/07, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
Hi,
By making replacing line 807 in fcompiler/__init__.py
[CHOP: lots of examples]
It looks like bool_s could use some general rejiggering. Let me put forth a
concrete proposal that's based on matching bool_ behaviour to that of
Python's bools. There is another route that could be taken where bool_ and
bool are completely decoupled, but I'll skip over
On 7/9/07, Xavier Saint-Mleux [EMAIL PROTECTED] wrote:
Hi all,
The conversion from a numpy scalar to a python int is not consistent
with python's native conversion (or numarray's): if the scalar is out
of bounds for an int, python and numarray automatically create a long
while numpy still
On 7/10/07, Mark.Miller [EMAIL PROTECTED] wrote:
Just ran across something that doesn't quite make sense to me at the
moment.
Here's some code:
numpy.__version__
'1.0.2'
def f1(b,c):
    b = b.astype(int)
    c = c.astype(int)
    return b, c
b,c = numpy.fromfunction(f1,(5,5))
On 7/10/07, Mark.Miller [EMAIL PROTECTED] wrote:
Sorry...can you clarify? I think that some of your message got cut off.
-Mark
Timothy Hochberg wrote:
It's because you are using arrays as indices (aka Fancy-Indexing). When
you do this everything works differently. In this case, everything
On 7/9/07, Alan G Isaac [EMAIL PROTECTED] wrote:
On Mon, 9 Jul 2007, Timothy Hochberg apparently wrote:
Why not simply use and | instead of + and *?
A couple reasons, none determinative.
1. numpy is right and Python is wrong in this case
I don't think I agree with this. Once you've decided
performance for some cases. But even then, with many cols,
it's nearly impossible to know.
That sounds sensible. I have an interesting thought on how to do this that's a
bit hard to describe. I'll try to throw it together and post another version
today or tomorrow.
//Torgil
On 7/9/07, Timothy Hochberg
On 7/9/07, Timothy Hochberg [EMAIL PROTECTED] wrote:
On 7/9/07, Torgil Svensson [EMAIL PROTECTED] wrote:
Elegant solution. Very readable and takes care of row0 nicely.
I want to point out that this is much more efficient than my version
for random/late string representation changes
On 7/8/07, Vincent Nijs [EMAIL PROTECTED] wrote:
Thanks for looking into this Torgil! I agree that this is a much more
complicated setup. I'll check if there is anything I can do on the data
end.
Otherwise I'll go with Timothy's suggestion and read in numbers as floats
and convert to int later
On 7/6/07, Travis Oliphant [EMAIL PROTECTED] wrote:
Timothy Hochberg wrote:
I'm working on getting some old code working with numpy and I noticed
that bool_ is not a subclass of int. Given that python's bool
subclasses int and that the other scalar types are subclasses of
their respective
[EMAIL PROTECTED] wrote:
El dv 25 de 05 del 2007 a les 14:19 -0700, en/na Timothy Hochberg va
escriure:
Don't feel bad, when I had a very similar problem early on when we
were first adding multiple types and it mystified me for considerably
longer than this seems to have stumped you.
Well, I wouldn't say
On 6/25/07, Hanno Klemm [EMAIL PROTECTED] wrote:
Tim,
Thank you very much, the code does what's it expected to do.
Unfortunately the thing is still pretty slow on large data sets.
This does seem like the kind of thing that there should be a faster way to
compute, particularly since you are
On 6/25/07, Giorgio F. Gilestro [EMAIL PROTECTED] wrote:
I find myself in a situation where an array may contain not-Numbers
that I set as NaN.
Yet, whatever operation I do on that array( average, sum...) will
treat the NaN as infinite values rather than ignoring them as I'd
like it'd do.
On 6/22/07, Hanno Klemm [EMAIL PROTECTED] wrote:
Hi,
I have an array which represents regularly spaced spatial data. I now
would like to compute the (semi-)variogram, i.e.
gamma(h) = 1/N(h) \sum_{i,j\in N(h)} (z_i - z_j)**2,
where h is the (approximate) spatial difference between the
except IndexError:
pass
y_result /= y_denominators
return x_result, y_result
Thanks,
Hanno
Timothy Hochberg [EMAIL PROTECTED] said:
On 5/25/07, Francesc Altet [EMAIL PROTECTED] wrote:
A Dijous 24 Maig 2007 20:33, Francesc Altet escrigué:
[SNIP]
Just for the record: I've found the culprit. The problem here was the use
of
the stride1 variable that was declared just above the main switch for
opcodes
as:
intp stride1 =
It's always helpful if you can include a self-contained example so it's easy
to figure out exactly what you are getting at. I say that because I'm not
entirely sure of the context here -- it appears that this is not numpy
related issue at all, but rather a general python question. If so, I think
On 5/7/07, Martin Spacek [EMAIL PROTECTED] wrote:
I want to find the indices of all the None objects in an object array:
import numpy as np
i = np.array([0, 1, 2, None, 3, 4, None])
np.where(i == None)
()
Using == doesn't work the same way on object arrays as it does on, say,
an array of
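The reliable fix for this object-array quirk is an identity test rather than `==` (a sketch; `is None` sidesteps elementwise-equality semantics entirely):

```python
import numpy as np

i = np.array([0, 1, 2, None, 3, 4, None], dtype=object)

# Build a boolean mask with `is`, then locate the True positions:
mask = np.fromiter((v is None for v in i), dtype=bool)
idx = np.where(mask)[0]

assert idx.tolist() == [3, 6]
```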
On 4/30/07, David Goldsmith [EMAIL PROTECTED] wrote:
(hint what is arctan(0+1j)?)
Well, at the risk of embarrassing myself, using arctan(x+iy) = I get:
arctan(0+1i) = -i*log((0+i*1)/sqrt(0^2 + 1^2)) = -i*log(i/1) = -i*log(i)
= -i*log(exp(i*pi/2)) = -i*i*pi/2 = pi/2...
Is there some reason
1 - 100 of 123 matches