[Numpy-discussion] Array of Arrays

2007-03-23 Thread Alexander Michael
How can I do something like the following?

a = empty((5,7), dtype=<4 element array of floats>)

c = a[:,-1] # last column of 4 element arrays

a[0,0] = 2.0
print a[0,0]
[2. 2. 2. 2.]

a[1,0] = 3.0
a[0,1] = a[0,0] * a[1,0]

print a[0,1]
[6. 6. 6. 6.]

etc.

As always, thanks for the help!
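(One way to get this behaviour, if the inner type is always a fixed-length float vector, is a plain 3-D array whose trailing axis holds the 4 values -- a sketch of that approach, not necessarily what you had in mind:)

```python
import numpy as np

# A (5, 7) array of 4-element float "cells", stored as a plain 3-D array.
a = np.empty((5, 7, 4))

c = a[:, -1]                   # last column of 4-element arrays, shape (5, 4)

a[0, 0] = 2.0                  # scalar assignment broadcasts over the 4 elements
a[1, 0] = 3.0
a[0, 1] = a[0, 0] * a[1, 0]    # element-wise product of two 4-vectors

assert c.shape == (5, 4)
assert (a[0, 1] == 6.0).all()
```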
Alex
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] histogramdd shapes

2007-03-23 Thread David Huard

Thanks Ben,

I submitted a patch with your fix.
Ticket 189 http://projects.scipy.org/scipy/numpy/ticket/189

David

2007/3/17, Ben Granett [EMAIL PROTECTED]:


Hi,
There seems to be a problem with histogramdd (numpy 1.0.1).  Depending on
the number of bins in each axis, it may not produce an array with the
dimensions oriented correctly.  The bug is in the axis swapping code at
the end of the routine.

To reproduce, you can run:

>>> x = random.randn(100,3)
>>> H, edg = histogramdd(x, bins=(7,5,6))
>>> print H.shape
(5, 7, 6)



A fix is to replace this:
    # Shape into a proper matrix
    hist = hist.reshape(sort(nbin))
    for i, j in enumerate(ni):
        hist = hist.swapaxes(i, j)
        if (hist.shape == nbin).all():
            break


with this:
    for i in arange(nbin.size):
        j = ni[i]
        hist = hist.swapaxes(i, j)
        ni[i], ni[j] = ni[j], ni[i]


That is, elements of ni need to be swapped too.
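The shape bookkeeping of the fix can be checked on its own (assuming, as in the histogramdd source, that ni holds the argsort order of nbin):

```python
import numpy as np

nbin = np.array([7, 5, 6])              # requested bins per axis
ni = nbin.argsort()                     # order in which the flat histogram was reshaped
hist = np.zeros(tuple(np.sort(nbin)))   # starts out with sorted shape (5, 6, 7)

# Corrected un-swapping loop: swap the entries of ni along with the axes.
for i in np.arange(nbin.size):
    j = ni[i]
    hist = hist.swapaxes(i, j)
    ni[i], ni[j] = ni[j], ni[i]

assert hist.shape == tuple(nbin)        # (7, 5, 6), matching bins=(7,5,6)
```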

I hope this wasn't a known bug, but I couldn't find any mention of
it.  Thanks,
Ben



Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-23 Thread Travis Oliphant
Stefan van der Walt wrote:

On Thu, Mar 22, 2007 at 04:33:53PM -0700, Travis Oliphant wrote:

I would rather opt for changing the spline fitting algorithm than for
padding with zeros.

 From what I understand, the splines used in ndimage have the implicit 
mirror-symmetric boundary condition which also allows them to be 
computed rapidly.  There may be ways to adapt other boundary conditions 
and maintain rapid evaluation, but it is not trivial as far as I know.  
Standard spline-fitting allows multiple boundary conditions because 
matrix inversion is used.  I think the spline-fitting done in ndimage 
relies on equal-spacing and mirror-symmetry to allow simple IIR filters 
to be used to compute the spline coefficients very rapidly.



Thanks, Travis.  I wasn't aware of these restrictions.

Would it be possible to call fitpack to do the spline fitting?  I
noticed that it doesn't exhibit the same mirror-property:
  

It doesn't.   It uses standard spline-fitting techniques (which are also 
slower).   So, yes you could call fitpack but for a large 
multi-dimensional array it's going to take longer. 

-Travis



Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-23 Thread Travis Oliphant
James Turner wrote:

By the way, ringing at sharp edges is an intrinsic feature of higher-
order spline interpolation, right? I believe this kind of interpolant
is really intended for smooth (band-limited) data. I'm not sure why
the pre-filtering makes a difference though; I don't yet understand
well enough what the pre-filter actually does.
  

Yes, ringing at edges is an intrinsic feature of higher-order spline 
interpolation.   Eventually, the spline interpolant becomes a 
band-limited sinc-interpolator which will assume that edges are really 
points sampled from a sinc.  So, if you re-sample at different points 
you get the ringing effect.  But, you shouldn't see a lot of ringing 
below order 7.

The pre-filter obtains the spline-interpolation coefficients.  The 
spline assumes the continuous function represented by the samples at x_i is

f(x,y) = sum(c_ij beta^o(x-x_i) beta^o(y-y_i))

The pre-filter is computing the coefficients c_ij.  You then evaluate 
at any point you like using the continuous function implied by the 
spline.   The function beta^o is a spline function and depends on the 
order of the spline.
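(In one dimension, the pre-filter step can be sketched in plain numpy. A dense linear solve stands in here for ndimage's fast IIR filter, and the boundary handling is simplified to zero coefficients outside the grid rather than mirror symmetry:)

```python
import numpy as np

def beta3(x):
    """Cubic B-spline basis beta^3, supported on [-2, 2]."""
    ax = np.abs(x)
    return np.where(ax < 1, 2.0/3.0 - ax**2 + ax**3/2.0,
                    np.where(ax < 2, (2.0 - ax)**3 / 6.0, 0.0))

# Samples f_i taken at integer positions 0..n-1.
n = 16
f = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
i = np.arange(n)

# The pre-filter finds coefficients c with sum_j c_j beta^3(x_i - x_j) = f_i.
B = beta3(i[:, None] - i[None, :])   # B[i, j] = beta^3(i - j)
c = np.linalg.solve(B, f)            # spline coefficients

# Evaluating the spline back at the sample points recovers the samples.
assert np.allclose(np.dot(B, c), f)
```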

I'm not sure what people normally do in computer graphics, since I'm
working more with natural band-limited images, but in the case of
Stefan's rotate_artifacts example, wouldn't it be appropriate to
use the 1st- or 0th-order spline instead of the 2nd order? If your
real-life data are smooth enough, however, then in theory the
ringing with higher orders should go away.
  

Yes, that is true.  But, you really shouldn't see much ringing with a 
3rd order spline, though.

I studied these splines for a while, but my memory can fail me.  You can 
look at the papers of M. Unser for much more information.  Here is a 
link to a good review paper.

http://bigwww.epfl.ch/publications/unser0001.pdf


-Travis



Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-23 Thread Sebastian Haase
On 3/22/07, Bill Baxter [EMAIL PROTECTED] wrote:
 On 3/23/07, Eric Firing [EMAIL PROTECTED] wrote:
  Sebastian Haase wrote:
   On 3/22/07, Stefan van der Walt [EMAIL PROTECTED] wrote:
   On Thu, Mar 22, 2007 at 08:13:22PM -0400, Brian Blais wrote:
   Hello,
  
   I'd like to concatenate a couple of 1D arrays to make it a 2D array, 
   with two columns
   (one for each of the original 1D arrays).  I thought this would work:
  
  
   In [47]:a=arange(0,10,1)
  
   In [48]:a
   Out[48]:array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
  
   In [49]:b=arange(-10,0,1)
  
   In [51]:b
   Out[51]:array([-10,  -9,  -8,  -7,  -6,  -5,  -4,  -3,  -2,  -1])
  
   In [54]:concatenate((a,b))
   Out[54]:
   array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
-7,  -6,  -5,  -4,  -3,  -2,  -1])
  
   In [55]:concatenate((a,b),axis=1)
   Out[55]:
   array([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9, -10,  -9,  -8,
-7,  -6,  -5,  -4,  -3,  -2,  -1])
  
  
   but it never expands the dimensions.  Do I have to do this...
  
   In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
   Out[65]:
   array([[  0, -10],
   [  1,  -9],
   [  2,  -8],
   [  3,  -7],
   [  4,  -6],
   [  5,  -5],
   [  6,  -4],
   [  7,  -3],
   [  8,  -2],
   [  9,  -1]])
  
  
   ?
  
   I thought there would be an easier way.  Did I overlook something?
   How about
  
   N.vstack((a,b)).T
  
   Also mentioned here should be the use of
   newaxis.
   As in
   a[:,newaxis]
  
   However I never got a good feel for how to use it, so I can't
   complete the code you would need.
 
  In [9]: c = N.concatenate((a[:,N.newaxis], b[:,N.newaxis]), axis=1)
 
  In [10]:c
  Out[10]:
  array([[  0, -10],
  [  1,  -9],
  [  2,  -8],
  [  3,  -7],
  [  4,  -6],
  [  5,  -5],
  [  6,  -4],
  [  7,  -3],
  [  8,  -2],
  [  9,  -1]])
 

 Then of course, there's r_ and c_:

 c = numpy.c_[a,b]

 c = numpy.r_[a[None],b[None]].T

 --bb
So,
None is the same as newaxis, right?

But what is a[None]  vs.  a[:,N.newaxis] ?

-Sebastian


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-23 Thread Zachary Pincus
Hello all,

On Mar 23, 2007, at 3:04 AM, Stefan van der Walt wrote:

 On Thu, Mar 22, 2007 at 11:20:37PM -0700, Zachary Pincus wrote:
 The actual transform operators then use these coefficients to
 (properly) compute pixel values at different locations. I just
 assumed that the pre-filtering was broken (even on natural images
 with smooth variations) because images pre-filtered with the spline
 filter didn't look like traditional pre-filtered images ought.

 IMO, ticket 213 should be closed as PEBCAK (I'll do that forthwith);
 further I'm sorry to have caused Peter to be further bothered about
 this non-issue.

 The ticket is more concerned with the usability of ndimage.
 Currently, the principle of least surprise is violated.  If you use
 the default arguments of ndimage.rotate (except for axes, which should
 still be fixed to be (1,0) by default) and rotate Lena by 30 degrees,
 you get something with green splotches on (attached to the ticket).

 IMO, we should either change the default parameters or update the
 documentation before closing the ticket.

Hmm, this is worrisome. There really shouldn't be ringing on  
continuous-tone images like Lena  -- right? (And at no step in an  
image like that should gaussian filtering be necessary if you're  
doing spline interpolation -- also right?)

I assumed the problem was a non-issue after some testing with a few  
natural images didn't introduce any artifacts like those, but now  
with the Lena example I'm not sure. This is frustrating, to be sure,  
mostly because of my limited knowledge on the subject -- just enough  
to be dangerous.

Zach





Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-23 Thread Stefan van der Walt
On Fri, Mar 23, 2007 at 11:09:03AM -0400, Robert Pyle wrote:
  In [65]:concatenate((a.reshape(10,1),b.reshape(10,1)),axis=1)
  Out[65]:
  array([[  0, -10],
  [  1,  -9],
  [  2,  -8],
  [  3,  -7],
  [  4,  -6],
  [  5,  -5],
  [  6,  -4],
  [  7,  -3],
  [  8,  -2],
  [  9,  -1]])
 
 
  ?
 
  I thought there would be an easier way.  Did I overlook something?
 
 What's wrong with zip? Or did *I* miss the point?  (I'm just getting  
 the hang of numpy.)

If you use 'zip' you don't make use of numpy's fast array mechanisms.
I attach some code you can run as a benchmark.  From my ipython
session:

In [1]: run vsbench.py

In [2]: timeit using_vstack(x,y)
1000 loops, best of 3: 997 µs per loop

In [3]: timeit using_zip(x,y)
10 loops, best of 3: 503 ms per loop

In [4]: timeit using_custom_iteration(x,y)
1000 loops, best of 3: 1.64 ms per loop

Cheers
Stéfan
import numpy as N

x = N.random.random(10)
y = N.random.random(10)

def using_vstack(*args):
    return N.vstack(args).T

def using_zip(*args):
    return N.array(zip(*args))

def using_custom_iteration(*args):
    x = N.empty((len(args[0]), len(args)))
    for i, vals in enumerate(args):
        x[:, i] = vals
    return x

check = N.vstack((x,y)).T
assert N.all(using_vstack(x,y) == check)
assert N.all(using_zip(x,y) == check)
assert N.all(using_custom_iteration(x,y) == check)


Re: [Numpy-discussion] concatenating 1-D arrays to 2D

2007-03-23 Thread Bill Baxter
On 3/24/07, Sebastian Haase [EMAIL PROTECTED] wrote:
 
  Then of course, there's r_ and c_:
 
  c = numpy.c_[a,b]
 
  c = numpy.r_[a[None],b[None]].T
 
  --bb
 So,
 None is the same as newaxis - right?

Yes, newaxis is None.  None is newaxis.  Same thing.  I just don't see
much advantage in spelling it numpy.newaxis, since it's pretty common
and not likely to ever change.

 But what is a[None]  vs.  a[:,N.newaxis] ?

a[None] is the same as a[None,:].  It prepends the new axis, so a
shape of (5,) becomes (1,5), a row vector.

a[:,None] puts the new axis after the first axis, so a shape of (5,)
becomes (5,1), a column vector.

a[None,:,None] puts a new axis both before and after, so (5,) becomes (1,5,1).

If 'a' is higher dimensional, the same kind of thing goes.  Wherever
None/newaxis appears in the index, insert a new axis there in the
result.

Say A is shape (2,3,4), then
A[:,None,:,None] is shape (2,1,3,1,4).  (Same as A[:,None,:,None,:]
since unspecified trailing indices are always taken to be ':')

--bb


[Numpy-discussion] swig svn commits

2007-03-23 Thread Sebastian Haase
Hi,
this is regarding the svn commit by wfspotz.

Author: [EMAIL PROTECTED]
Date: 2007-03-23 13:04:37 -0500 (Fri, 23 Mar 2007)
New Revision: 3593

Modified:
  trunk/numpy/doc/swig/numpy.i
Log:
Added typecheck typemaps to INPLACE_ARRAY typemap suites

Hi wfspotz,
I was just wondering if you have considered checking for native byte order
as part of your inplace typemaps.
I found that to be a problem in my SWIG typemaps,
which I hijacked / boiled down from the older numpy swig files.
(They are mostly for 3D image data, only for a small number of types)

-Sebastian


Re: [Numpy-discussion] Array of Arrays

2007-03-23 Thread Sebastian Haase
On 3/23/07, Alexander Michael [EMAIL PROTECTED] wrote:
 On 3/23/07, Nadav Horesh [EMAIL PROTECTED] wrote:
  How about
 
   a = empty((5,7,4))
   c = a[...,-1]

 Solely because I want to use the array with code that assumes it is
 working with two-dimensional arrays but yet only performs operations
 on the outer two-dimensional array that would be consistent with an
 inner array type (i.e. scalar assignment, element-wise
 multiplication, etc.). I own all the code, so perhaps I can replace
 a[mask,-1] with a[mask,-1,...] and such. Hmm. Not a bad reminder,
 thanks.

Hold on -- aren't the '...' at the *end* always implicit?
If you have
a.shape = (6,5,4,3)
a[3,2] is the same as a[3,2,:,:]  is the same as a[3,2,...]

only if you wanted a[...,3,2]   you would have to change your code !?
Am I confused !?
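
(A quick check of the implicit trailing slices:)

```python
import numpy as np

a = np.arange(6 * 5 * 4 * 3).reshape(6, 5, 4, 3)

# Trailing ':' slices are implicit, so these are all the same:
assert a[3, 2].shape == (4, 3)
assert (a[3, 2] == a[3, 2, :, :]).all()
assert (a[3, 2] == a[3, 2, ...]).all()

# A *leading* ellipsis does need to be written out explicitly:
assert a[..., 3, 2].shape == (6, 5)
```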


-Sebastian


Re: [Numpy-discussion] Array of Arrays

2007-03-23 Thread Alexander Michael
On 3/23/07, Sebastian Haase [EMAIL PROTECTED] wrote:
 Hold on -- aren't the '...' at the *end* always implicit?
 If you have
 a.shape = (6,5,4,3)
 a[3,2] is the same as a[3,2,:,:]  is the same as a[3,2,...]

 only if you wanted a[...,3,2]   you would have to change your code !?
 Am I confused !?

Ha! I knew numpy had some trick up its sleeve! I missed that part from
the numpy book (still digesting it all (the recycled paper doesn't go
down so well ;))), but it says it right there:

If the number of objects in the selection tuple is less than N, then ':' is
assumed for any remaining dimensions.

I think that is exactly what I am looking for-- thanks for making me
aware of it!

Alex


[Numpy-discussion] Detect subclass of ndarray

2007-03-23 Thread Charles R Harris

Anyone,

What is the easiest way to detect in python/C if an object is a subclass of
ndarray?

Chuck


[Numpy-discussion] Simple multi-arg wrapper for dot()

2007-03-23 Thread Bill Baxter
I mentioned in another thread Travis started on the scipy list that I
would find it useful if there were a function like dot() that could
multiply more than just two things.

Here's a sample implementation called 'mdot'.

mdot(a,b,c,d) == dot(dot(dot(a,b),c),d)
mdot(a,(b,c),d) == dot(dot(a,dot(b,c)),d)
mdot(a,(b,(c,d))) == dot(a,dot(b,dot(c,d)))

---
import types
import numpy

def mdot(*args):
    """Multiply all the arguments using matrix product rules.
    The output is equivalent to multiplying the arguments one by one
    from left to right using dot().
    Precedence can be controlled by creating tuples of arguments,
    for instance mdot(a,((b,c),d)) multiplies as a*((b*c)*d).
    Note that this means the output of dot(a,b) and mdot(a,b) will differ if
    a or b is a pure tuple of numbers.
    """
    if len(args) == 1:
        return args[0]
    elif len(args) == 2:
        return _mdot_r(args[0], args[1])
    else:
        return _mdot_r(args[:-1], args[-1])

def _mdot_r(a, b):
    """Recursive helper for mdot."""
    if type(a) == types.TupleType:
        if len(a) > 1:
            a = mdot(*a)
        else:
            a = a[0]
    if type(b) == types.TupleType:
        if len(b) > 1:
            b = mdot(*b)
        else:
            b = b[0]
    return numpy.dot(a, b)
---

It does about twice as many function calls as just using dot, which I'm
guessing is mostly why it's a good bit slower than just doing the
nested dot calls.  Maybe one of you smart folks can figure out a way
to bring it closer to dot's speed.

--bb


Re: [Numpy-discussion] Simple multi-arg wrapper for dot()

2007-03-23 Thread Anne Archibald
On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote:
 I mentioned in another thread Travis started on the scipy list that I
 would find it useful if there were a function like dot() that could
 multiply more than just two things.

 Here's a sample implementation called 'mdot'.

 mdot(a,b,c,d) == dot(dot(dot(a,b),c),d)
 mdot(a,(b,c),d) == dot(dot(a,dot(b,c)),d)
 mdot(a,(b,(c,d))) == dot(a,dot(b,dot(c,d)))

 [mdot implementation snipped]

You can do better:
In [1]: from numpy import *

In [2]: a = array([[0,-1],[1,0]])

In [3]: reduce(dot,(a,a,a,a))
Out[3]:
array([[1, 0],
   [0, 1]])

In [4]: def mdot(*args):
   ...:     return reduce(dot,args)
   ...:

In [5]: mdot(a,a,a,a)
Out[5]:
array([[1, 0],
   [0, 1]])


   Not every two-line Python function has to come pre-written
   --Tim Peters

Anne
P.S. reduce isn't even a numpy thing, it's one of python's
much-neglected lispy functions.


Re: [Numpy-discussion] Detect subclass of ndarray

2007-03-23 Thread Anne Archibald
On 23/03/07, Charles R Harris [EMAIL PROTECTED] wrote:
 Anyone,

 What is the easiest way to detect in python/C if an object is a subclass of
 ndarray?

Um, how about isinstance or issubclass? (if you want strictness you
can look at whether x.__class__ is zeros(1).__class__)
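
(For instance, with a toy subclass; at the C level the counterparts are
PyArray_Check(), which accepts subclasses, and PyArray_CheckExact(), which
does not:)

```python
import numpy as np

class MyArray(np.ndarray):
    """Toy ndarray subclass, purely for illustration."""
    pass

m = np.zeros(3).view(MyArray)

assert isinstance(m, np.ndarray)       # True for ndarray and any subclass
assert issubclass(MyArray, np.ndarray)
assert type(m) is not np.ndarray       # strict check: exact class only
```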

Anne


[Numpy-discussion] nd_image.affine_transform edge effects

2007-03-23 Thread James Turner
Hi Zach,

 Hmm, this is worrisome. There really shouldn't be ringing on
 continuous-tone images like Lena  -- right? (And at no step in an
 image like that should gaussian filtering be necessary if you're
 doing spline interpolation -- also right?)

That's hard to say. Just because it's mainly a continuous-tone image
doesn't necessarily mean it is well sampled everywhere. This depends
both on the subject and the camera optics. Unlike the data I usually
work with, I think everyday digital photographs (probably a photo
scan in the case of Lena) do not generally have the detector sampling
frequency matched to the optical resolution of the image. If that's
true, the presence of aliasing in interpolated images depends on the
structure of the subject and whether the scene has edges or high-
frequency patterns in it.

Stefan's rotated Lena example is indeed a bit bizarre on zooming in!
However, the artefacts are clearly localized to distinct edges, so I
suspect this is indeed some kind of aliasing. Moreover, it looks like
Lena has been decimated (reduced in size) prior to the rotation. That
is definitely a good way to get artefacts, unless an anti-aliasing
filter is applied before shrinking the image. My impression is that
this image is probably somewhat undersampled (to understand exactly
what that means, read up on the Sampling Theorem).

I can't say that the green blobs are *not* caused by a flaw in the
algorithm. In particular, I am not used to working with colour images
of this kind, so I don't have a good feeling for what aliasing looks
like in a case like this. However, I definitely would not rule out
intrinsic aliasing effects as the cause of this problem without
investigating it further. One experiment might be to blur the original
Lena with a Gaussian whose sigma is 1 pixel of the shrunken image
before actually shrinking her, then do the rotation.

 The first was on Stefan's artificial data which had sharp edges, and
 got very nasty ringing artifacts even with 3rd order splines. From
 your recollection, is this expected behavior based on splines and the
 nature of Stefan's image, or more likely to be a bug?

Your question was aimed at Travis, so I don't want to discourage him
from answering it :-), but looking at this in more detail, I do think
the amplitude of the artefacts here is greater than I might expect due
to ringing with a quadratic b-spline kernel, which I think has minima
with amplitudes <10% of the central peak. There has to be SOME
oscillation, but in Stefan's rotate_artifacts example it seems to be
at the level of ~100%. Also, it is not present on one of the inner
edges for some reason. So I do wonder if the algorithm in nd_image is
making this worse than it needs to be. These thoughts seem consistent
with Travis's comments. Is it possible to transform the same data
using the fitpack routines that Stefan mentioned in post 026641 and
compare the results? I just tried doing a similar rotation in PyRAF on
a monochrome image with a bicubic spline, and see considerably smaller
artefacts (just a compact overshoot of probably a few % at the edge).

Cheers,

James.
