> > It seems to me that at the python level, there's not much reason to
> > choose the dtypes proposal over ctypes.
I disagree. I can see a point in unifying ctypes and dtypes, but in my
mind they have two different scopes:
ctypes is an interface to the C language;
dtype is a data description.
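As a rough illustration of the two scopes (the struct and field names below are my own toy example, not from the thread): ctypes describes a layout so Python can call into C, while a dtype describes the same layout as data inside an array.

```python
import ctypes
import numpy as np

# ctypes: describes a C struct for the purpose of calling C code
class CPoint(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

# dtype: describes the same memory layout as data, for use in arrays
point_dtype = np.dtype([("x", np.float64), ("y", np.float64)])

# both agree on the layout, but they serve different purposes
print(ctypes.sizeof(CPoint), point_dtype.itemsize)  # 16 16
```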
> generating such a PDF, with some color and formatting tweaks so that
> it prints legibly on black and white as well as looking nice on
> screen.
This sounds promising. I have actually had problems printing the
guide: every information box showed up completely black except for the
emblem.
On 11/
Sourceforge mailing list doesn't like me. Here's another try.
I've managed to use VisIt through its CLI, feeding it data through
pyvtk, so that this code
if __name__=='__main__':
    from numpy import *
    from numpy.random import randn
    n=1
    t=linspace(0,1,n)+randn(n)*0.01
    a=t+t
> Think about gridding physical problems expressed in cylindrical or
> spherical coordinates. The natural slices are not rectangles. You can
> use rectangular storage but only with O(n^3) waste.
I don't get this argument. Are you slicing your spherical coordinates
with a cartesian coordinate sys
<[EMAIL PROTECTED]> wrote:
> On 8/29/06, Torgil Svensson <[EMAIL PROTECTED]> wrote:
> > something like this?
> >
> > def list2index(L):
> >     uL=sorted(set(L))
> >     idx=dict((y,x) for x,y in enumerate(uL))
> >     return uL,asmatrix(fromiter((idx
fromiter(iterable, dtype, count) works.
>
> > If both is given it could
> > preallocate memory and we only have to iterate over L once.
> >
> Regardless, L is only iterated over once. In general you can't rewind
> iterators, so that's a requirement. This is accomplished
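A minimal sketch of that single-pass behavior (toy data of my own): when `count` is given, `fromiter` can preallocate the output array, and the generator is still consumed only once.

```python
import numpy as np

L = [3, 1, 4, 1, 5]
# with count given, fromiter preallocates the result array
# and still consumes the generator in a single pass
a = np.fromiter((x * 2 for x in L), dtype=int, count=len(L))
print(a)
```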
> Yes, because you are adding a signed scalar to an unsigned scalar and a
> float64 is the only thing that can handle it
>
> t+numpy.uint64(1)
Thanks, this makes sense. This is a good thing to keep in the back of my head.
//Torgil
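A small demonstration of the promotion rule discussed above (scalar printing details vary between numpy versions, but the uint64/int64 promotion itself is stable):

```python
import numpy as np

# mixing uint64 with a signed integer type: no integer type can hold
# both ranges, so numpy promotes the result to float64
print(np.result_type(np.uint64, np.int64))  # float64

# the workaround from the thread: keep everything unsigned
t = np.uint64(63292539433000)
print((t + np.uint64(1)).dtype)  # uint64
```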
On 8/31/06, Travis Oliphant <[EMAIL PROTECTED]>
I'm using Windows datetimes (100-nanosecond intervals since 0001-01-01)
as time in a numpy array and was hit by this behaviour.
>>> numpy.__version__
'1.0b4'
>>> a=numpy.array([63292539433000L],numpy.uint64)
>>> t=a[0]
>>> t
63292539433000L
>>> type(t)
<type 'numpy.uint64'>
>>> t+1
6.3292539433e+017
>>> type(t+1)
<type 'numpy.float64'>
>>> t=
something like this?
def list2index(L):
    uL=sorted(set(L))
    idx=dict((y,x) for x,y in enumerate(uL))
    return uL,asmatrix(fromiter((idx[x] for x in L),dtype=int))
//Torgil
On 8/29/06, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 8/29/06, Tim Hochberg <[EMAIL PROTECTED]> wrote:
> > Keith Go
def list2index(L):
    idx=dict((y,x) for x,y in enumerate(set(L)))
    return asmatrix(fromiter((idx[x] for x in L),dtype=int))
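A quick sanity check of the sorted variant (toy input of my own); note that the shorter variant above relies on `set` iteration order, so only the sorted one gives reproducible codes:

```python
from numpy import asmatrix, fromiter

def list2index(L):
    # map each unique value to its rank in sorted order
    uL = sorted(set(L))
    idx = dict((y, x) for x, y in enumerate(uL))
    return uL, asmatrix(fromiter((idx[x] for x in L), dtype=int))

uniques, codes = list2index(['b', 'a', 'c', 'a'])
print(uniques)  # ['a', 'b', 'c']
print(codes)    # [[1 0 2 0]]
```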
# old
$ python test.py
Numbers: 29.4062280655 seconds
Characters: 84.6239070892 seconds
Dates: 117.560418844 seconds
# new
$ python test.py
Numbers: 1.79700994492 seconds
06, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Torgil Svensson wrote:
> > Hi
> >
> > ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I
> > first thought I had a performance issue but discovered that std() used
> > lots of memory and th
This is really a matplotlib problem.
From matplotlib users mailing-list archives:
> From: Charlie Moad <[EMAIL PROTECTED]>
> Snapshot build for use with numpy-1.0b3
> 2006-08-23 06:11
>
> Here is a snapshot of svn this morning for those wanting to work with
> the numpy beta. Both builds are for
Hi
ndarray.std(axis=1) seems to have memory issues on large 2D-arrays. I
first thought I had a performance issue but discovered that std() used
lots of memory and therefore caused lots of swapping.
I want to get an array where element i is the standard deviation of row
i in the 2D array. Using val
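One common workaround (my own sketch, not from the thread) is to compute the row-wise standard deviation in chunks, so the temporaries that `std` builds stay proportional to the chunk size rather than the whole array:

```python
import numpy as np

def row_std(a, chunk=256):
    # process a few rows at a time so std()'s intermediate arrays
    # stay small, avoiding the heavy memory use and swapping
    out = np.empty(a.shape[0])
    for i in range(0, a.shape[0], chunk):
        out[i:i + chunk] = a[i:i + chunk].std(axis=1)
    return out
```

The result matches `a.std(axis=1)`, only the peak memory differs.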
Not really recommended, but it might "work" if you just run the
script twice. I'm doing that with beta1 and the matplotlib that was
current at the time of that release. Laziness, I guess.
//Torgil
On 8/25/06, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > Message: 4
>>> import numpy
>>> numpy.__version__
'1.0b1'
>>> from numpy import *
>>> A = [1,2,3,4,5,6,7,8,9]
>>> B = asmatrix(reshape(A,(3,3)))
>>> B
matrix([[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]])
>>> B**0
matrix([[ 1.,  0.,  0.],
        [ 0.,  1.,  0.],
        [ 0.,  0.,  1.]])
>>> power(B,0)
m
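The contrast in the session above is matrix power versus elementwise power; the same distinction can be shown on a plain ndarray (avoiding the `matrix` class):

```python
import numpy as np

B = np.arange(1, 10).reshape(3, 3)

# matrix power: "multiply B by itself 0 times" gives the identity,
# which is what B**0 means for a matrix object
print(np.linalg.matrix_power(B, 0))

# elementwise power: every entry raised to 0 gives all ones,
# which is what power(B, 0) computes
print(np.power(B, 0))
```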
> What do people think? Is it worth it? This could be a coding-sprint
> effort at SciPy.
>
>
> -Travis
Sounds like a good idea. This should make old code work while not
imposing unnecessary restrictions on numpy due to backward
compatibility.
//Torgil
> They are supposed to have different defaults because the functional
> forms are largely for backward compatibility where axis=0 was the default.
>
> -Travis
Isn't backwards compatibility what "oldnumeric" is for?
+1 for consistent defaults.
-
I did something similar a few years ago (numarray, Numeric). I
started roughly at the middle and sampled 64 points around a reference
point (xc,yc). This point, together with a point at the edge of the
image (xp,yp), also defined a reference angle (a0). (ysize,xsize) is
the shape of the intensity image.
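A sketch of that setup (the function and parameter names are my own; only the geometry is taken from the description above):

```python
import numpy as np

def ring_coords(xc, yc, xp, yp, r, n=64):
    # reference angle a0: direction from the center (xc, yc)
    # to the edge point (xp, yp)
    a0 = np.arctan2(yp - yc, xp - xc)
    # n equally spaced angles starting at the reference angle
    angles = a0 + np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # pixel coordinates on a circle of radius r around the center
    return xc + r * np.cos(angles), yc + r * np.sin(angles)
```

The returned coordinates could then be used to sample the intensity image along the ring.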