[Numpy-discussion] Sorting objects with ndarrays

2010-02-28 Thread Gael Varoquaux
Hi,

I need lists of objects that contain ndarrays to be sorted. The
reason I want them sorted is that these lists are populated in an
arbitrary order; their order really doesn't matter, but I am trying
to make it reproducible for debugging and hashing.

The problem is that ndarrays cannot be compared. So I have tried to
override the 'cmp' in the 'sorted' function; however, I am comparing
fairly complex objects, and I am having a hard time predicting which
member of the object will contain the array. So I am building a more and
more complex 'cmp' replacement.

Does anybody have a good idea of what a better strategy would be?

Cheers,

Gaël
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy Windows Superpack

2010-02-28 Thread David Cournapeau
Hi Patrick,

On Sun, Feb 28, 2010 at 1:35 PM, Patrick Marsh patrickmars...@gmail.com wrote:
 Greetings,
 I have been trying to build the numpy superpack on windows using the
 binaries posted by David.

Could you post *exactly* the sequence of commands you executed ?
Especially at the beginning, building things can be frustrating
because the cause of failures can be hard to diagnose.

FWIW, I've just built the nosse version with mingw on Windows 7; there
was no issue at all.

cheers,

David


Re: [Numpy-discussion] Sorting objects with ndarrays

2010-02-28 Thread Pauli Virtanen
On Sun, 2010-02-28 at 10:25 +0100, Gael Varoquaux wrote:
[clip]
 The problem is that ndarrays cannot be compared. So I have tried to
 override the 'cmp' in the 'sorted' function, however I am comparing
 fairly complex objects, and I am having a hard time predicting which
 member of the object will contain the array. 

I don't understand what "predicting which member of the object" means?
Do you mean that in the array, you have classes that contain ndarrays as
their attributes, and the classes have __cmp__ implemented?

If not, can you tell why

def xcmp(a, b):
    a_nd = isinstance(a, ndarray)
    b_nd = isinstance(b, ndarray)

    if a_nd and b_nd:
        pass  # compare ndarrays in some way
    elif a_nd:
        return 1   # sort ndarrays first
    elif b_nd:
        return -1  # sort ndarrays first
    else:
        return cmp(a, b)  # ordinary compare

does not work?

Cheers,
Pauli




Re: [Numpy-discussion] Sorting objects with ndarrays

2010-02-28 Thread Gael Varoquaux
On Sun, Feb 28, 2010 at 01:05:15PM +0200, Pauli Virtanen wrote:
 On Sun, 2010-02-28 at 10:25 +0100, Gael Varoquaux wrote:
 [clip]
  The problem is that ndarrays cannot be compared. So I have tried to
  override the 'cmp' in the 'sorted' function, however I am comparing
  fairly complex objects, and I am having a hard time predicting which
  member of the object will contain the array. 

 I don't understand what "predicting which member of the object" means?
 Do you mean that in the array, you have classes that contain ndarrays as
 their attributes, and the classes have __cmp__ implemented?

Well, I might not have to compare ndarrays, but fairly arbitrary
structures (dictionaries, classes and lists), as I am dealing with
semi-structured data coming from a stack of unorganised experimental
data. Python has some logic for comparing these structures by comparing
their members, but if these are ndarrays, I am back to my original
problem.

 If not, can you tell why

 def xcmp(a, b):
     a_nd = isinstance(a, ndarray)
     b_nd = isinstance(b, ndarray)

     if a_nd and b_nd:
         pass  # compare ndarrays in some way
     elif a_nd:
         return 1   # sort ndarrays first
     elif b_nd:
         return -1  # sort ndarrays first
     else:
         return cmp(a, b)  # ordinary compare

 does not work?

Because I have things like lists of ndarrays, on which this fails. If I
could say "use xcmp recursively instead of cmp for this sort", it would
work, but the only way I can think of doing this is by temporarily
monkey-patching __builtins__.cmp, which I'd like to avoid, as it is not
thread-safe.
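For what it's worth, the same goal (a reproducible, if arbitrary, ordering of nested containers holding ndarrays) can also be reached with a recursive key function rather than by patching cmp, at least on Pythons where sorted() takes a key argument. This is a sketch, not code from the thread; the sort_key helper and its encoding choices are hypothetical:

```python
import numpy as np

def sort_key(obj):
    """Build a deterministic, comparable key for nested structures.

    The ordering is arbitrary but reproducible: ndarrays are keyed by
    dtype, shape and raw bytes, containers recursively, and everything
    else by type name and repr.  Hypothetical helper, not from the thread.
    """
    if isinstance(obj, np.ndarray):
        return ('ndarray', str(obj.dtype), obj.shape, obj.tobytes())
    if isinstance(obj, dict):
        return ('dict', tuple(sorted((sort_key(k), sort_key(v))
                                     for k, v in obj.items())))
    if isinstance(obj, (list, tuple)):
        return ('seq', tuple(sort_key(x) for x in obj))
    return ('other', type(obj).__name__, repr(obj))

items = [[np.array([3, 1])], [np.array([1, 2])], {'a': np.zeros(2)}]
items.sort(key=sort_key)   # same result for any initial shuffle
```

Because each key starts with a type tag, mixed types never get compared directly, which sidesteps the "ndarrays cannot be compared" problem entirely.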

Cheers,

Gaël


Re: [Numpy-discussion] Sorting objects with ndarrays

2010-02-28 Thread Gael Varoquaux
On Sun, Feb 28, 2010 at 03:01:18PM +0100, Friedrich Romstedt wrote:
  Well, I might not have to compare ndarrays, but fairly arbitrary
  structures (dictionnaries, classes and lists) as I am dealing with
  semi-structured data coming from a stack of unorganised experimental
  data. Python has some logic for comparing these structures by comparing
  their members, but if these are ndarrays, I am back to my original
  problem.

 I also do not understand how to build an order on such a thing at all;
 maybe you can give a simple example?

Well, you can't really build an order in the mathematical sense of
ordering. All I care about is that if you give me the same shuffled list
of elements twice, it comes out identical. I am fighting the fact that
dictionaries in Python have no order, and thus shuffle the data from run
to run.

 Hmm, you could also replace numpy.greater and similar temporarily with
 an with statement like:

 # Everything as usual, comparing ndarrays results in ndarrays here.

 with monkeypatched_operators:
 # Comparing ndarrays may result in scalars or what you need.
 pass  # Perform the sorting

 # Everything as usual ...

 Though that's maybe not threadsafe too.

Yes, it's not threadsafe either.

 Then you could use ndarray.flatten().tolist() to compare them using
 usual Python semantics?

That solves the local problem of comparing two arrays (though it will be
quite slow), but not the general problem of sorting objects containing
arrays in a reproducible (if arbitrary) order.

Anyhow, I solved the problem by implementing a subclass of dict and using
it everywhere in my code. Right now it seems to be working for what I need.

Cheers,

Gaël


Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 12:30 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
 2010/2/27 Sebastian Walter sebastian.wal...@gmail.com:
 I'm sorry this comment turns out to be confusing.

 Maybe it's not important.

 It has apparently quite the contrary effect of what I wanted to achieve:
 Since there is already a polynomial module  in numpy I wanted to
 highlight their difference
 and stress that they are used to do arithmetic, e.g. compute

 f([x],[y]) = [x] * (sin([x])**2 + [y])

 in Taylor arithmetic.

 That's cool!  You didn't mention that.  Now I step by step find out
 what your module (package?) is for.  You are a mathematician?  Many
 physicists complain that mathematicians cannot make their point ;-)

I studied physics but switched to applied maths.


 I think I can use that to make my upy accept arbitrary functions, but
 how do you apply sin() to a TTP?

perform truncated Taylor expansion of  [y]_D = sin([x]_D), i.e.
y_d = d^d/dt^d  sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0}
to obtain an explicit algorithm.
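For readers wondering what such an explicit algorithm looks like, here is a minimal sketch using the standard coupled sin/cos recurrences of Taylor arithmetic (this is my illustration, not taylorpoly's actual code, and it uses the usual factorial-normalized coefficient convention):

```python
import math
import numpy as np

def utp_sin(x):
    """Taylor coefficients of sin(x(t)), given the coefficients x of x(t).

    Uses the standard coupled recurrences for s = sin(x), c = cos(x)
    (see e.g. the algorithmic-differentiation literature):
        s_d =  (1/d) * sum_{k=1..d} k * x_k * c_{d-k}
        c_d = -(1/d) * sum_{k=1..d} k * x_k * s_{d-k}
    Sketch only; the real implementation may differ.
    """
    D = len(x)
    s = np.zeros(D)
    c = np.zeros(D)
    s[0], c[0] = math.sin(x[0]), math.cos(x[0])
    for d in range(1, D):
        s[d] = sum(k * x[k] * c[d - k] for k in range(1, d + 1)) / d
        c[d] = -sum(k * x[k] * s[d - k] for k in range(1, d + 1)) / d
    return s, c

# coefficients of the Taylor series of sin around x0, from x(t) = x0 + t
s, c = utp_sin([0.5, 1.0, 0.0, 0.0])
```

Each degree costs a convolution-like sum over lower-degree coefficients, which is why such algorithms are dominated by convolutions.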


 One more question: You said, t is an external parameter.  I, and
 maybe not only me, interpreted this as a complicated name for
 variable.  So I assumed it will be a parameter to some method of the
 TTP.  But it isn't?  It's just the way to define the ring?  You could
 define it the same in Fourier space, except that you have to make the
 array large enough from the beginning?  Why not doing that, and
 saying, your computation relies on the Fourier transform of the
 representation?  Can this give insight why TTPs are a ring and why
 they have zero divisors?
it has zero divisors because, for instance, multiplying the two
polynomials t and t^{D-1} and truncating at t^{D-1} yields
zero.
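This zero-divisor claim is easy to check numerically; a small sketch with truncated multiplication via np.convolve (illustration only, not the package's actual data layout):

```python
import numpy as np

def truncated_mul(a, b, D):
    """Multiply two polynomials (coefficient arrays, lowest degree
    first) and truncate, keeping only the first D coefficients."""
    return np.convolve(a, b)[:D]

D = 4
t1  = np.array([0., 1., 0., 0.])     # t
tD1 = np.array([0., 0., 0., 1.])     # t**(D-1)
product = truncated_mul(t1, tD1, D)  # t**D is truncated away: all zeros
```

Two nonzero elements multiply to zero, so the ring of truncated Taylor polynomials cannot be an integral domain.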


  In fact, you /have/ to provide
 external binary operators, because I guess you also want to have
 numpy.ndarrays as left operand.  In that case, the undarray will have
 higher precedence, and will treat your data structure as a scalar,
 applying it to all the undarray's elements.

 well, actually it should treat it as a scalar since the Taylor
 polynomial is something like a real or complex number.

 Maybe I misunderstood you completely, but didn't you want to implement
 arrays of polynomials using numpy?  So I guess you want to apply a
 vector from numpy pairwise to the polynomials in the P-object?

 no, the vectorization is something different. Its purpose becomes
 only clear when applied in Algorithmic Differentiation.

 Hey folks, here's a cool package, but the maintainer didn't tell us! ;-)

well, thanks :)


  E.g. if you have a function
 f: R^N -> R
 x -> y = f(x)
 where x = [x1,...,xN]

 and you want to compute the gradient g(x) of f(x), then you can compute
 df(x)/dxn by propagating  the following array of Taylor polynomials:

 x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ...,
 UTPS([xN_0,0]), dtype=object)
 y = f(x)

 So what is the result of applying f to some UTPS instance, is it a
 plain number, is an UTPS again?  How do you calculate?

 Can one calculate the derivative of some function using your method at
 a certain point without knowledge of the analytical derivative?  I
 guess that's the purpose.
Yes, that's the whole point: Obtaining (higher order) derivatives of
functions at machine precision for which no symbolic representation is
readily available.
That includes computer codes with recursions (e.g. for loops) that are
a no-go for symbolic differentiation. Supposedly (I've never done
that) you can even differentiate Monte Carlo simulations in that way.
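To make the forward-propagation idea concrete, here is a minimal first-order sketch with dual numbers, i.e. the D=2 special case of truncated Taylor arithmetic. The Dual class is my illustration, not the UTPS API:

```python
import math

class Dual:
    """Minimal truncated Taylor number a + b*t with t**2 = 0 (the D=2
    case).  Illustration only, not the actual UTPS API."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 t)(a2 + b2 t) = a1 a2 + (a1 b2 + b1 a2) t + O(t**2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def sin(x):
    # sin(a + b t) = sin(a) + cos(a) b t + O(t**2)
    if isinstance(x, Dual):
        return Dual(math.sin(x.a), math.cos(x.a) * x.b)
    return math.sin(x)

def f(x, y):
    return x * (sin(x) * sin(x) + y)   # the example function from the thread

# one propagation per direction: seed the x-slot with 1 for df/dx
x0, y0 = 1.3, 0.5
dfdx = f(Dual(x0, 1.0), Dual(y0)).b
```

The derivative falls out of ordinary arithmetic on the coefficients, with no symbolic manipulation anywhere; higher orders work the same way with longer coefficient arrays.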


 if you want to have the complete gradient, you will have to repeat N
 times. Each time for the same zero'th coefficients [x1,...,xN].

 Using the vecorized version, you would do only one propagation
 x = numpy.array( UTPS([x1_0, 1,0,...,0]), ..., UTPS([xn_0,
 0,...,1,...0]), ..., UTPS([xN_0,0,,1]), dtype=object)
 y = f(x)

 i.e. you save the overhead of calling the same function N times.

 Ok, I understand.  Today it's too late, I will reason tomorrow about it.

 Why don't you use multidimensional arrays?  Has it reasons in the C
 accessibility?  Now, as I see it, you implement your strides manually.
  With a multidimensional array, you could even create arrays of shape
 (10, 12) of D-polynomials by storing an ndarray of shape (10, 12, D)
 or (D, 10, 12).

 the goal is to have several Taylor polynomials evaluated in the same
 base point, e.g.
 [x_0, x_{11}, x_{21}, x_{31}]
 [x_0, x_{12}, x_{22}, x_{32}]
 [x_0, x_{13}, x_{23}, x_{33}]

 i.e. P=3, D=3
 One could use an (P,D) array. However, one would do unnecessary
 computations since x_0 is the same for all P polynomials.
 I.e. one implements the data structure as
 [x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}].

 This results in a non-const stride access.

 No?  I think the D axis stride should be P with offset 1 and the P
 axis stride 1?  Is there a specific reason to store it raveled?  And
 not x_0 and ndarray(shape = (P, D - 1))?

1) cosmetic reasons

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 12:47 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
 Sebastian, and, please, be not offended by what I wrote.  I regret a
 bit my jokes ... It's simply too late at night.
no offense taken

 Friedrich



Re: [Numpy-discussion] Building Numpy Windows Superpack

2010-02-28 Thread Ralf Gommers
On Sun, Feb 28, 2010 at 12:35 PM, Patrick Marsh patrickmars...@gmail.com wrote:

 Greetings,

 I have been trying to build the numpy superpack on windows using the
 binaries posted by David.  Unfortunately,  I haven't even been able to
 correctly write the site.cfg file to locate all three sets of binaries
 needed for the superpack.  When I manually specify to use only the sse3
 binaries, I can get numpy to build from trunk, but it fails miserably when
 running the test suite.  In fact, in python26, the tests freeze python and
 causes it to exit.  I figured I'd try to get this set up correctly before
 even trying to compile the cpucaps nsis plugin.

 If someone has successfully used David's binaries would they be willing to
 share their site.cfg?  Thanks in advance.


I haven't been able to finish the binaries just yet, but I got this to work.
Without needing a site.cfg file, the paver script should be enough. In
pavement.py:
NOSSE_CFG = {'BLAS': r'/Users/rgommers/.wine/drive_c/local/bin/yop/nosse',
 'LAPACK': r'/Users/rgommers/.wine/drive_c/local/bin/yop/nosse'}


Then:
$ paver bdist_superpack
SNIP

  FOUND:
libraries = ['lapack', 'blas']
library_dirs = ['/Users/rgommers/.wine/drive_c/local/bin/yop/nosse']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

etc


Cheers,
Ralf


Re: [Numpy-discussion] Python 3 porting

2010-02-28 Thread Xavier Gnata
Hi,

Do you plan to make some noise about this when numpy 2.0 is released?
IMHO you should.
Do you, for instance, plan to have a clear announcement on the scipy web site?

Xavier
 Hi,

 The test suite passes now on Pythons 2.4 - 3.1. Further testing is very
 welcome -- also on Python 2.x. Please check that your favourite software
 still builds and works with SVN trunk Numpy.

 Currently, Scipy has some known failures because of

 (i) removed new= keyword in numpy.histogram
 (ii) Cython supports only native size/alignment PEP 3118 buffers, and
  Numpy arrays are most naturally expressed in the standardized
  sizes. Supporting the full struct module alignment stuff appears
  to be a slight PITA. I'll try to take a look at how to address
  this.

 But everything else seems to work on Python 2.6.

 ***

 Python version 2.4.6 (#2, Jan 21 2010, 23:27:36) [GCC 4.4.1]
 Ran 2509 tests in 18.892s
 OK (KNOWNFAIL=4, SKIP=2)

 Python version 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) [GCC 4.4.1]
 Ran 2512 tests in 18.531s
 OK (KNOWNFAIL=4)

 Python version 2.6.4 (r264:75706, Dec  7 2009, 18:45:15) [GCC 4.4.1]
 Ran 2519 tests in 19.367s
 OK (KNOWNFAIL=4)

 Python version 3.1.1+ (r311:74480, Nov  2 2009, 14:49:22) [GCC 4.4.1]
 Ran 2518 tests in 23.239s
 OK (KNOWNFAIL=5)


 Cheers,
 Pauli

   



Re: [Numpy-discussion] Apply a function to all indices

2010-02-28 Thread Ernest Adrogué
28/02/10 @ 01:56 (-0500), thus spake David Warde-Farley:
 On 26-Feb-10, at 8:12 AM, Ernest Adrogué wrote:
 
  Thanks for the tip. I didn't know that...
  Also, frompyfunc appears to crash python when the last argument is 0:
 
  In [9]: func=np.frompyfunc(lambda x: x, 1, 0)
 
  In [10]: func(np.arange(5))
  Violació de segment  [segmentation fault]
 
  This with Python 2.5.5, Numpy 1.3.0 on GNU/Linux.
 
 
 (previous reply mysteriously didn't make it to the list...)
 
 Still happening to me in latest svn. Can you file a ticket? 
 http://projects.scipy.org/numpy/report

Filed.
http://projects.scipy.org/numpy/ticket/1416
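For reference, the crash is specific to nout=0; with nout >= 1, frompyfunc behaves as documented in current NumPy. A minimal sketch:

```python
import numpy as np

# np.frompyfunc(f, nin, nout) with nout >= 1 is the supported use;
# the segfault reported above was triggered by nout=0.
func = np.frompyfunc(lambda x: 2 * x, 1, 1)
result = func(np.arange(5))   # returns an object-dtype ndarray
```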

Cheers.


[Numpy-discussion] SciPy Mail List and Contacting Dave Kuhlman

2010-02-28 Thread Wayne Watson




Google shows there is a mail list for SciPy, but when I go to the web
page it shows GMANE, and various feeds for SciPy-Dev and User. Maybe
I'm missing something? 

Information about gmane.comp.python.scientific.user
The archive for this list can be read the following ways:

  - On the web, using frames and threads.
  - On the web, using a blog-like, flat interface.
  - Using an NNTP newsreader.
  - RSS feeds:
      - All messages from the list, with excerpted texts.
      - Topics from the list, with excerpted texts.
      - All messages from the list, with complete texts.
      - Topics from the list, with complete texts.

D. Kuhlman wrote an interesting Tutorial about SciPy (course outline)
in June 2006. Has it ever been updated? 
-- 
"There is nothing so annoying as to have two people 
 talking when you're busy interrupting." -- Mark Twain




Re: [Numpy-discussion] Reading and Comparing Two Files

2010-02-28 Thread Robert Love

On Feb 28, 2010, at 6:24 AM, Friedrich Romstedt wrote:

 2010/2/28 Robert Love rblove_li...@comcast.net:
 What is the efficient numpy way to compare data from two different files?  
 For the nth line in each file I want to operate on the numbers.   I've been 
 using loadtxt()
 
 data_5x5 = N.loadtxt(file5)
 
 data_8x8 = N.loadtxt(file8)
 
 for line in data_5x5:
pos5 = N.array([line[0], line[1],  line[2]])
 
 I believe there are several ways of doing that, and mine might not be
 the most efficient at all:
 
 for line5, line8 in zip(data_5x5, data_8x8):
# line5 and line8 are row vectors of paired lines
pass
 
 complete = numpy.hstack((data_5x5, data_8x8))  # If data_5x5.shape[0] ==
 data_8x8.shape[0], i.e., same number of rows.
 for line in complete:
# complete is comprised of concatenated row vectors.
pass
 
 for idx in xrange(0, min(data_5x5.shape[0], data_8x8.shape[0])):
line5 = data_5x5[idx]
line8 = data_8x8[idx]
# Do sth with the vectors. Or:
a1 = data_5x5[idx, (0, 1, 2)]  # Extract items 0, 1, 2 of line idx
 of first file.
a2 = data_8x8[idx, (0, 42)]   # Extract items 0, 42 of line idx of
 second file.
 


Thank you, I will try the last method listed.  I need to actually compute with
the values from the two files to perform my comparison, and the time tag is in
different formats.  Your method will get me access to the contents of the two files.


Re: [Numpy-discussion] SciPy Mail List and Contacting Dave Kuhlman

2010-02-28 Thread josef . pktd
On Sun, Feb 28, 2010 at 11:37 AM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
 Google shows there is a mail list for SciPy, but when I go to the web page
 it shows GMANE, and various feeds for SciPy-Dev and User. Maybe I'm missing
 something?

 Information about gmane.comp.python.scientific.user

that's the gmane mirror/interface to scipy-user

the original location of scipy lists is here
http://mail.scipy.org/mailman/listinfo


 The archive for this list can be read the following ways:

 On the web, using frames and threads.
 On the web, using a blog-like, flat interface.
 Using an NNTP newsreader.
 RSS feeds:

 All messages from the list, with excerpted texts.
 Topics from the list, with excerpted texts.
 All messages from the list, with complete texts.
 Topics from the list, with complete texts.

 D. Kuhlman wrote an interesting Tutorial about SciPy (course outline) in
 June 2006. Has it ever been updated?

Not that I know of; the basics haven't changed much, but the scipy
part is largely an index to content which is out of date. The most
up-to-date documentation and index is http://docs.scipy.org/doc/

As general scipy tutorial, I also like
http://johnstachurski.net/lectures/scipy.html (also the other parts
with numpy tutorials)

Josef


 --
 "There is nothing so annoying as to have two people
  talking when you're busy interrupting." -- Mark Twain





Re: [Numpy-discussion] Speedup a code using apply_along_axis

2010-02-28 Thread Xavier Gnata
On 02/28/2010 08:17 PM, josef.p...@gmail.com wrote:
 On Sun, Feb 28, 2010 at 1:51 PM, Xavier Gnata xavier.gn...@gmail.com wrote:
   
 Hi,

 I'm sure I reinventing the wheel with the following code:
 from numpy import *
 from scipy import polyfit,stats

 def f(x,y,z):
return x+y+z
 M=fromfunction(f,(2000,2000,10))

 def foo(M):
     ramp = where(M < 1000)[0]
 
 is this really what you want? I think this returns the indices not the values

   
Correct! It should be M[where(M < 1000)]
    l = len(ramp)
    t = arange(l)
    if l > 1:
        return polyfit(t, ramp, 1)[0]
    else:
        return 0

 print apply_along_axis(foo,2,M)


 In real life M is not the result of one fromfunction call but it does
 not matter.
 The basic idea is to compute the slope (and only the slope) along one
 axis of 3D array.
 Only the values below a given threshold should be taken into account.

 The current code is ugly and slow.
 How to remove the len and the if statement?
 How to rewrite the code in a numpy oriented way?
 
 Getting the slope or the linear fit can be done completely vectorized;
 see the numpy-discussion threads from last April with the titles
 "polyfit on multiple data points" and "polyfit performance".

 Josef


   
Ok, but the problem is that I also want to apply a threshold.
In some cases I end up with fewer than 2 values below the threshold: there is
nothing to fit, and it should return 0.

Hum... sounds like masked arrays could help... but I'm not familiar with
masked arrays...

Xavier
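One way the masked idea can be fleshed out: compute the least-squares slope along an axis from masked sums, returning 0 wherever fewer than two samples pass the threshold. This is a sketch in modern NumPy; note it fits each value against its original index along the axis, whereas foo() above re-indexes the kept values consecutively, so it is not a drop-in replacement:

```python
import numpy as np

def masked_slope(M, thresh, axis=-1):
    """Least-squares slope of M against the sample index along `axis`,
    using only entries with M < thresh; 0 where fewer than two valid
    samples remain.  Sketch of the masked approach, not a drop-in
    replacement for foo() above."""
    M = np.moveaxis(M, axis, -1).astype(float)
    t = np.arange(M.shape[-1], dtype=float)
    w = M < thresh                      # mask of valid samples
    Mw = np.where(w, M, 0.0)            # invalid entries contribute 0
    n = w.sum(axis=-1)
    st = (w * t).sum(axis=-1)           # masked sums for the normal
    sy = Mw.sum(axis=-1)                # equations of a line fit
    stt = (w * t * t).sum(axis=-1)
    sty = (t * Mw).sum(axis=-1)
    denom = n * stt - st * st
    slope = np.zeros(M.shape[:-1])
    ok = (n >= 2) & (denom != 0)
    slope[ok] = (n * sty - st * sy)[ok] / denom[ok]
    return slope
```

This removes both apply_along_axis and the per-pixel polyfit call: everything is whole-array arithmetic, and the n >= 2 condition replaces the if statement.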


Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Friedrich Romstedt
2010/2/28 Sebastian Walter sebastian.wal...@gmail.com:
 I think I can use that to make my upy accept arbitrary functions, but
 how do you apply sin() to a TTP?

 perform truncated Taylor expansion of  [y]_D = sin([x]_D), i.e.
 y_d = d^d/dt^d  sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0}
 to obtain an explicit algorithm.


 One more question: You said, t is an external parameter.  I, and
 maybe not only me, interpreted this as a complicated name for
 variable.  So I assumed it will be a parameter to some method of the
 TTP.  But it isn't?  It's just the way to define the ring?

I guess you overlooked this question?

 You could
 define it the same in Fourier space, except that you have to make the
 array large enough from the beginning?  Why not doing that, and
 saying, your computation relies on the Fourier transform of the
 representation?  Can this give insight why TTPs are a ring and why
 they have zero divisors?
 it has zero divisors because for instance multiplication of the two
 polynomials t*t^{D-1}
 truncated at t^{D-1} yields is zero.

Yes, but I wanted to have a look from Fourier space of view.  Because
there everything is just a multiplication, and one does not have to
perform the convolution in mind.  I have to give up here.  In fact, I
do not really understand why my approach also works with DFT and not
only analytically with steady FT.  Consider the snippet:

>>> p1 = gdft_polynomials.Polynomial([1])
>>> p1.get_dft(3)
array([ 1.+0.j,  1.+0.j,  1.+0.j])
>>> p2 = gdft_polynomials.Polynomial([0, 1])
>>> p2.get_dft(3)
array([ 1.0+0.j   , -0.5+0.8660254j, -0.5-0.8660254j])
>>> p2.get_dft(4)
array([  1.e+00 +0.e+00j,
         6.12303177e-17 +1.e+00j,
        -1.e+00 +1.22460635e-16j,  -1.83690953e-16 -1.e+00j])

As one increases the number of padding zeros, one increases Fourier
space resolution, without affecting result:

>>> p3 = gdft_polynomials.Polynomial([0, 1, 0, 0, 0])
>>> print p2 * p2
Polynomial(real part =
[  1.85037171e-16  -7.40148683e-17   1.e+00]
imaginary part =
[  2.59052039e-16   7.40148683e-17   0.e+00])
>>> print p2 * p3
Polynomial(real part =
[  1.66533454e-16   1.48029737e-16   1.e+00  -7.40148683e-17
  -4.44089210e-16  -3.70074342e-17]
imaginary part =
[  9.25185854e-17   1.48029737e-16   2.96059473e-16   1.11022302e-16
  -3.70074342e-16  -1.44497045e-16])


It's a bit of a mystery to me.  Of course, one can argue, well, DFT is
information-maintaining, and thus one can feel that it should work,
but this is only a gut feeling.
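The observation can at least be verified mechanically: as long as the DFT length is at least deg(p) + deg(q) + 1, the circular convolution has no wrap-around, so the pointwise product in Fourier space recovers the exact polynomial product, and extra zero padding only appends (numerically) zero coefficients. A small sketch with numpy.fft (not the gdft_polynomials API):

```python
import numpy as np

def fft_polymul(p, q):
    """Polynomial product via the DFT: zero-pad to the full product
    length, multiply pointwise in Fourier space, transform back."""
    n = len(p) + len(q) - 1          # number of coefficients of p*q
    return np.fft.ifft(np.fft.fft(p, n) * np.fft.fft(q, n)).real

exact = fft_polymul([0., 1.], [0., 1.])      # t * t -> t**2
# a longer transform changes nothing but the number of trailing zeros
padded = np.fft.ifft(np.fft.fft([0., 1.], 8) * np.fft.fft([0., 1.], 8)).real
```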

  E.g. if you have a function
 f: R^N -> R
 x -> y = f(x)
 where x = [x1,...,xN]

 and you want to compute the gradient g(x) of f(x), then you can compute
 df(x)/dxn by propagating  the following array of Taylor polynomials:

 x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ...,
 UTPS([xN_0,0]), dtype=object)
 y = f(x)

 So what is the result of applying f to some UTPS instance, is it a
 plain number, is an UTPS again?  How do you calculate?

 Can one calculate the derivative of some function using your method at
 a certain point without knowledge of the analytical derivative?  I
 guess that's the purpose.
 Yes, that's the whole point: Obtaining (higher order) derivatives of
 functions at machine precision for which no symbolic representation is
 readily available.
 That includes computer codes with recursions (e.g. for loops) that are
 a no-go for symbolic differentiation. Supposedly (I've never done
 that) you can even differentiate Monte Carlo simulations in that way.

http://en.wikipedia.org/wiki/Automatic_differentiation:
Both classical methods have problems with calculating higher
derivatives, where the complexity and errors increase. Finally, both
classical methods are slow at computing the partial derivatives of a
function with respect to many inputs, as is needed for gradient-based
optimization algorithms. Automatic differentiation solves all of these
problems.

Yeah.

Note at this point that there is no chance for your project to be
integrated into scipy, because you maybe HAVE TO PUBLISH UNDER GPL/CPAL
(ADOL-C is licensed GPL or CPAL).  I cannot find CPL on
www.opensource.org, but I guess it has been renamed to CPAL?  Anyway,
CPAL looks long enough to be GPL style ;-).  I also published my
projects under GPL first, and switched now to MIT, because Python,
numpy, scipy, matplotlib, ... are published under BSD-kind licenses too, and in
fact I like MIT/BSD more.  Please check that you aren't violating GPL-style
licenses by publishing under BSD style.

  E.g. if you have a function
 f: R^N - R
 x - y=f(x)
 where x = [x1,...,xN]

 and you want to compute the gradient g(x) of f(x), then you can compute
 df(x)/dxn by propagating  the following array of Taylor polynomials:

 x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ...,
 UTPS([xN_0,0]), dtype=object)
 y = f(x)

But doesn't the call f(x) with x.shape = (N,) result in an array too?
But you want a scalar number?

 if you want to have the complete gradient, 

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 9:06 PM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
 2010/2/28 Sebastian Walter sebastian.wal...@gmail.com:
 I think I can use that to make my upy accept arbitrary functions, but
 how do you apply sin() to a TTP?

 perform truncated Taylor expansion of  [y]_D = sin([x]_D), i.e.
 y_d = d^d/dt^d  sin( \sum_{k=0}^{D-1} x_k t^k) |_{t=0}
 to obtain an explicit algorithm.


 One more question: You said, t is an external parameter.  I, and
 maybe not only me, interpreted this as a complicated name for
 variable.  So I assumed it will be a parameter to some method of the
 TTP.  But it isn't?  It's just the way to define the ring?

 I guess you overlooked this question?

I thought this was a rhetorical question.
Tbh, I don't know what the standard name for such a formal variable is.


 You could
 define it the same in Fourier space, except that you have to make the
 array large enough from the beginning?  Why not doing that, and
 saying, your computation relies on the Fourier transform of the
 representation?  Can this give insight why TTPs are a ring and why
 they have zero divisors?
 it has zero divisors because for instance multiplication of the two
 polynomials t*t^{D-1}
 truncated at t^{D-1} yields is zero.

 Yes, but I wanted to have a look from Fourier space of view.  Because
 there everything is just a multiplication, and one does not have to
 perform the convolution in mind.  I have to give up here.  In fact, I
 do not really understand why my approach also works with DFT and not
 only analytically with steady FT.  Consider the snippet:

 p1 = gdft_polynomials.Polynomial([1])
 p1.get_dft(3)
 array([ 1.+0.j,  1.+0.j,  1.+0.j])
 p2 = gdft_polynomials.Polynomial([0, 1])
 p2.get_dft(3)
 array([ 1.0+0.j       , -0.5+0.8660254j, -0.5-0.8660254j])
 p2.get_dft(4)
 array([  1.e+00 +0.e+00j,
         6.12303177e-17 +1.e+00j,
        -1.e+00 +1.22460635e-16j,  -1.83690953e-16 -1.e+00j])

 As one increases the number of padding zeros, one increases Fourier
 space resolution, without affecting result:

 p3 = gdft_polynomials.Polynomial([0, 1, 0, 0, 0])
 print p2 * p2
 Polynomial(real part =
 [  1.85037171e-16  -7.40148683e-17   1.e+00]
 imaginary part =
 [  2.59052039e-16   7.40148683e-17   0.e+00])
 print p2 * p3
 Polynomial(real part =
 [  1.66533454e-16   1.48029737e-16   1.e+00  -7.40148683e-17
  -4.44089210e-16  -3.70074342e-17]
 imaginary part =
 [  9.25185854e-17   1.48029737e-16   2.96059473e-16   1.11022302e-16
  -3.70074342e-16  -1.44497045e-16])


 It's a bit of mystery to me.  Of course, one can argue, well, DFT is
 information maintaining, and thus one can feel that it should work,
 but this is only a gut feeling.

I'm of no help here; I'm not familiar enough with the DFT. All I know is that
F(conv(x, y)) = F(x) * F(y), and that one can speed up the convolution
in that way.
And most operations on truncated Taylor polynomials result in
algorithms that contain convolutions.



  E.g. if you have a function
 f: R^N -> R
 x -> y = f(x)
 where x = [x1,...,xN]

 and you want to compute the gradient g(x) of f(x), then you can compute
 df(x)/dxn by propagating  the following array of Taylor polynomials:

 x = numpy.array( UTPS([x1_0, 0]), ..., UTPS([xn_0, 1]), ...,
 UTPS([xN_0,0]), dtype=object)
 y = f(x)

 So what is the result of applying f to some UTPS instance, is it a
 plain number, is an UTPS again?  How do you calculate?

 Can one calculate the derivative of some function using your method at
 a certain point without knowledge of the analytical derivative?  I
 guess that's the purpose.
 Yes, that's the whole point: Obtaining (higher order) derivatives of
 functions at machine precision for which no symbolic representation is
 readily available.
 That includes computer codes with recursions (e.g. for loops) that are
 a no-go for symbolic differentiation. Supposedly (I've never done
 that) you can even differentiate Monte Carlo simulations in that way.

 http://en.wikipedia.org/wiki/Automatic_differentiation:
 Both classical methods have problems with calculating higher
 derivatives, where the complexity and errors increase. Finally, both
 classical methods are slow at computing the partial derivatives of a
 function with respect to many inputs, as is needed for gradient-based
 optimization algorithms. Automatic differentiation solves all of these
 problems.

 Yeah.

 Note at this point, that there is no chance for your project to be
 integrated in scipy, because you maybe HAVE TO PUBLISH UNDER GPL/CPAL
 (the ADOL-C is licensed GPL or CPAL).  I cannot find CPL on
 www.opensource.org, but I guess it has been renamed to CPAL?  Anyway,
 CPAL looks long enough to be GPL style ;-).  I also published my
 projects under GPL first, and switched now to MIT, because Python,
 numpy, scipy, matplotlib, ... are published under BSD kind too, and in
 fact I like MIT/BSD more.  Please check that you aren't violating
 GPL-style licenses.

[Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Ian Mallett
Hi,

I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every
single one of the vec3 sublists, I am currently applying transformations.  I
need to optimize this with numpy.

To get proper results, as far as I can tell, the vec3 lists must be
expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] ->
[[1,2,3,1],[4,5,6,1],[7,8,9,1],...].   Each of these needs to be multiplied
by either a translation matrix, or a rotation and translation matrix.

I don't really know how to approach the problem . . .

Thanks,
Ian
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy Windows Superpack

2010-02-28 Thread Patrick Marsh
Hi David,

There really isn't much in the way of commands that I've used - I haven't
gotten that far.  So far, I've downloaded your binaries and then attempted
to set up my numpy site.cfg file to use your binaries.  I used the following
as my site.cfg

[atlas]
library_dirs = d:\svn\BlasLapack\binaries\nosse,d:\svn\BlasLapack\binaries\sse2,d:\svn\BlasLapack\binaries\sse3
atlas_libs = lapack, f77blas, cblas, atlas

However, when invoking 'setup.py config', it won't recognize a list of
directories, even though the example site.cfg shows one.  As soon as I
use only a single path instead of a list, I can get setup.py
bdist_wininst to run without error.

I'm going to play around with the paver script and follow Ralf's
instructions in the previous example and see what happens.

Patrick




On Sun, Feb 28, 2010 at 4:31 AM, David Cournapeau courn...@gmail.comwrote:

 Hi Patrick,

 On Sun, Feb 28, 2010 at 1:35 PM, Patrick Marsh patrickmars...@gmail.com
 wrote:
  Greetings,
  I have been trying to build the numpy superpack on windows using the
  binaries posted by David.

 Could you post *exactly* the sequence of commands you executed ?
 Especially at the beginning, building things can be frustrating
 because the cause of failures can be hard to diagnose.

 FWIW, I've just built the nosse version with mingw on windows 7, there
 was no issue at all,

 cheers,

 David




-- 
Patrick Marsh
Ph.D. Student / NSSL Liaison to the HWT
School of Meteorology / University of Oklahoma
Cooperative Institute for Mesoscale Meteorological Studies
National Severe Storms Laboratory
http://www.patricktmarsh.com


Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread josef . pktd
On Sun, Feb 28, 2010 at 7:53 PM, Ian Mallett geometr...@gmail.com wrote:
 Hi,

 I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every
 single one of the vec3 sublists, I am currently applying transformations.  I
 need to optimize this with numpy.

 To get proper results, as far as I can tell, the vec3 lists must be
 expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] ->
 [[1,2,3,1],[4,5,6,1],[7,8,9,1],...].   Each of these needs to be multiplied
 by either a translation matrix, or a rotation and translation matrix.

 I don't really know how to approach the problem . . .

I'm not sure what exactly you need, but it sounds similar to the
"Row-wise dot product" thread on numpy-discussion, Sept 2009.

there are several threads on rotation matrix, which might be useful
depending on the structure of your arrays

Josef



 Thanks,
 Ian





Re: [Numpy-discussion] Building Numpy Windows Superpack

2010-02-28 Thread David Cournapeau
Patrick Marsh wrote:
 Hi David,
 
 There really isn't much in the way of commands that I've used - I 
 haven't gotten that far.  So far, I've downloaded your binaries and then 
 attempted to set up my numpy site.cfg file to use your binaries.  I used 
 the following as my site.cfg
 
 [atlas]
 library_dirs = d:\svn\BlasLapack\binaries\nosse,d:\svn\BlasLapack\binaries\sse2,d:\svn\BlasLapack\binaries\sse3
 atlas_libs = lapack, f77blas, cblas, atlas
 
 However, when invoking 'setup.py config' it won't recognize a list of 
 directories, even though the example site.cfg has an example with one.

First, you should not put the three paths into library_dirs; it does not
make much sense here (I am not sure exactly what distutils does in this
case, whether it takes the first path or the last one, but it will only
take one of them into account).

Then, I would advise to bypass site.cfg altogether, and just use env 
variables, as done in the paver script. E.g.:

set LAPACK=d:\svn\BlasLapack\binaries\nosse
python setup.py build -c mingw32 bdist_wininst

because then you can easily control which one gets included from the 
command line. It is also much easier to script it this way.

 I'm going to play around with the paver script and follow Ralf's 
 instructions in the previous example and see what happens.

In general, you should use the paver script as a reference. It contains 
a lot of small best-practice things I have ended up after quite a while.

cheers,

David


[Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Peter Shinners
I have a 2D masked array that has indices into a 1D array. I want to use 
some form of take to fetch the values into the 2D array. I've tried 
both numpy.take and numpy.ma.take, but they both return a new unmasked 
array.

I can get it working by converting the take results into a masked array 
and applying the original mask. But the values that are masked are 
actually illegal indices. This means I need to switch the take mode away
from "raise", but I actually want "raise".

I'm still new to numpy, so it's likely I've overlooked something. Is 
there a masked take?



Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Charles R Harris
On Sun, Feb 28, 2010 at 5:53 PM, Ian Mallett geometr...@gmail.com wrote:

 Hi,

 I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every
 single one of the vec3 sublists, I am currently applying transformations.  I
 need to optimize this with numpy.

 To get proper results, as far as I can tell, the vec3 lists must be
 expressed as vec4s: [[1,2,3],[4,5,6],[7,8,9],...] ->
 [[1,2,3,1],[4,5,6,1],[7,8,9,1],...].   Each of these needs to be
 multiplied by either a translation matrix, or a rotation and translation
 matrix.

 I don't really know how to approach the problem . . .


As I understand it, you want *different* matrices applied to each vector?
There are generalized ufuncs, which I haven't tried, but for small vectors
there is a trick. Let's see... heck, gmane looks to be dead at the moment.
Anyway, I posted the method on the list here a couple of years ago and I'll
put up a link if I can find it when gmane comes back.

Chuck


Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Ian Mallett
On Sun, Feb 28, 2010 at 6:31 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 As I understand it, you want *different* matrices applied to each vector?

Nope--I need the same matrix applied to each vector.

Because 3D translation matrices must, if I understand correctly, be 4x4, the
vectors must first be changed to length 4 (adding a 1 for the last term).
Then, the matrices would be applied.  Then, the resulting n*4 array would be
converted back into a n*3 array (simply by dropping the last term).

Ian


Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Charles R Harris
On Sun, Feb 28, 2010 at 7:35 PM, Ian Mallett geometr...@gmail.com wrote:

 On Sun, Feb 28, 2010 at 6:31 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 As I understand it, you want *different* matrices applied to each vector?

 Nope--I need the same matrix applied to each vector.

 Because 3D translation matrices must, if I understand correctly, be 4x4, the
 vectors must first be changed to length 4 (adding a 1 for the last term).
 Then, the matrices would be applied.  Then, the resulting n*4 array would be
 converted back into a n*3 array (simply by dropping the last term).


Why not just add a vector to get translation? There is no need to go the
homogeneous form. Or you can just leave the vectors at length 4 and use a
slice to access the first three components. That way you can leave the ones
in place.

Chuck
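The slice trick Chuck describes can be sketched like this (illustrative numbers, not from the thread): a view of the first three columns is translated in place while the trailing ones are untouched.

```python
import numpy as np

# Keep n*4 homogeneous vectors and translate only the first three
# components; the slice is a view, so the trailing ones stay in place.
v = np.array([[1.0, 2.0, 3.0, 1.0],
              [4.0, 5.0, 6.0, 1.0]])
t = np.array([10.0, 20.0, 30.0])     # translation vector

v[:, :3] += t                        # broadcasts over all rows
print(v)
# [[11. 22. 33.  1.]
#  [14. 25. 36.  1.]]
```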


Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Ian Mallett
On Sun, Feb 28, 2010 at 6:54 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Why not just add a vector to get translation? There is no need to go the
 homogeneous form. Or you can just leave the vectors at length 4 and use a
 slice to access the first three components. That way you can leave the ones
 in place.

Oh . . . duh :D

Excellent--and a 3D rotation matrix is 3x3--so the list can remain n*3.  Now
the question is how to apply a rotation matrix to the array of vec3?

  Chuck

Ian


Re: [Numpy-discussion] Iterative Matrix Multiplication

2010-02-28 Thread Charles R Harris
On Sun, Feb 28, 2010 at 7:58 PM, Ian Mallett geometr...@gmail.com wrote:

 On Sun, Feb 28, 2010 at 6:54 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Why not just add a vector to get translation? There is no need to go the
 homogeneous form. Or you can just leave the vectors at length 4 and use a
 slice to access the first three components. That way you can leave the ones
 in place.

 Oh . . . duh :D

 Excellent--and a 3D rotation matrix is 3x3--so the list can remain n*3.
 Now the question is how to apply a rotation matrix to the array of vec3?


It looks like you want something like

res = dot(vec, rot) + tran

You can avoid an extra copy being made by separating the parts

res = dot(vec, rot)
res += tran

where I've used arrays, not matrices. Note that the rotation matrix
multiplies every vector in the array.

Chuck
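As a concrete sketch of the recipe above (example rotation and translation chosen for illustration): note that with row vectors, `dot(vec, rot)` applies the transpose of `rot` to each column vector, so pass `rot.T` if you want `M @ v` per row.

```python
import numpy as np

# Rotation matrix (90 degrees about z) and a translation.
rot = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]])
tran = np.array([10.0, 0.0, 0.0])

vec = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])    # n*3 array of row vectors

res = np.dot(vec, rot)   # one matrix product rotates every row at once
res += tran              # in-place add avoids an extra temporary

print(res)
# [[10. -1.  0.]
#  [11.  0.  0.]]
```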


Re: [Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Pierre GM
On Feb 28, 2010, at 8:59 PM, Peter Shinners wrote:
 I have a 2D masked array that has indices into a 1D array. I want to use 
 some form of take to fetch the values into the 2D array. I've tried 
 both numpy.take and numpy.ma.take, but they both return a new unmasked 
 array.


Mmh. Surprising.  np.ma.take should return a masked array if it's given a 
masked array as input. Can you pastebin the array that gives you trouble ? I 
need to investigate that.
As a temporary workaround, use np.take on first the _data, then the _mask and 
construct a new masked array from the two results.
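A sketch of that workaround, using the arrays from the original post (filling masked indices with any in-range value before the take, since they will be masked out again anyway):

```python
import numpy as np

values = np.array((40, 18, 37, 9, 22))
index = np.arange(3)[None, :] + np.arange(5)[:, None]
mask = index >= len(values)

idx = np.ma.array(index, mask=mask)

# Take the data with masked/illegal indices replaced by a safe in-range
# value, then reapply the original mask on the result.
data = np.take(values, idx.filled(0))
lookup = np.ma.array(data, mask=np.ma.getmaskarray(idx))
print(lookup)
```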


Re: [Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Charles R Harris
On Sun, Feb 28, 2010 at 9:01 PM, Pierre GM pgmdevl...@gmail.com wrote:

 On Feb 28, 2010, at 8:59 PM, Peter Shinners wrote:
  I have a 2D masked array that has indices into a 1D array. I want to use
  some form of take to fetch the values into the 2D array. I've tried
  both numpy.take and numpy.ma.take, but they both return a new unmasked
  array.


 Mmh. Surprising.  np.ma.take should return a masked array if it's given a
 masked array as input. Can you pastebin the array that gives you trouble ? I
 need to investigate that.
 As a temporary workaround, use np.take on first the _data, then the _mask
 and construct a new masked array from the two results.


Ah, Pierre, now that you are here... ;) Can you take a look at the invalid
value warnings in the masked array tests and maybe fix them up by turning
off the warnings where appropriate? I'd do it myself except that I hesitate
to touch masked array stuff.

Chuck


[Numpy-discussion] how to work with numpy.int8 in c

2010-02-28 Thread James Bergstra
Could someone point me to documentation (or even numpy src) that shows
how to allocate a numpy.int8 in C, or check to see if a PyObject is a
numpy.int8?

Thanks,

James
-- 
http://www-etud.iro.umontreal.ca/~bergstrj


Re: [Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Peter Shinners
On 02/28/2010 08:01 PM, Pierre GM wrote:
 On Feb 28, 2010, at 8:59 PM, Peter Shinners wrote:

 I have a 2D masked array that has indices into a 1D array. I want to use
 some form of take to fetch the values into the 2D array. I've tried
 both numpy.take and numpy.ma.take, but they both return a new unmasked
 array.
  

 Mmh. Surprising.  np.ma.take should return a masked array if it's given a 
 masked array as input. Can you pastebin the array that gives you trouble ? I 
 need to investigate that.
 As a temporary workaround, use np.take on first the _data, then the _mask and 
 construct a new masked array from the two results.


Here is the code as I would like it to work. 
http://python.pastebin.com/CsEnUrSa


import numpy as np

values = np.array((40, 18, 37, 9, 22))
index = np.arange(3)[None,:] + np.arange(5)[:,None]
mask = index >= len(values)

maskedindex = np.ma.array(index, mask=mask)

lookup = np.ma.take(values, maskedindex)
# This fails with an index error, but illegal indices are masked.
# It succeeds when mode=clip, but it does not return a masked array.
print lookup



Re: [Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Pierre GM
On Feb 28, 2010, at 11:12 PM, Charles R Harris wrote:
 
 
 
 Ah, Pierre, now that you are here... ;) Can you take a look at the invalid 
 value warnings in the masked array tests and maybe fix them up by turning  
 off the warnings where appropriate? I'd do it myself except that I hesitate 
 to touch masked array stuff.

Chuck, did you just hijack the thread ;) ?
To put things in context: a few weeks ago, we had a global flag in numpy.ma 
that prevented warnings from being printed. Now, the warnings are handled on 
a case-by-case basis in the numpy.ma functions. The problem is that when a 
numpy function is called on a masked array instead of its numpy.ma equivalent, 
the warnings are not trapped (that's what happens in the tests). In order to 
trap them, I'll have to use a new approach (something like __array_prepare__), 
which is not necessarily difficult but not trivial either. I should have plenty 
of free time in the next weeks. I'm afraid that won't be in time for the 2.0 
release, though, sorry.


Re: [Numpy-discussion] how to work with numpy.int8 in c

2010-02-28 Thread David Cournapeau
On Mon, Mar 1, 2010 at 1:35 PM, James Bergstra
bergs...@iro.umontreal.ca wrote:
 Could someone point me to documentation (or even numpy src) that shows
 how to allocate a numpy.int8 in C, or check to see if a PyObject is a
 numpy.int8?

In numpy, the type is described in the dtype type object, so you
should create the appropriate PyArray_Descr when creating an array.
The exact procedure depends on how you create the array, but a simple
way to create arrays is PyArray_SimpleNew, where you don't need to
create your own dtype, and just pass the correponding typenum (C
enum), something like PyArray_SimpleNew(nd, dims, NPY_INT8).

If you need to create from a function which only takes PyArray_Descr,
you can easily create a simple descriptor object from the enum using
PyArray_DescrFromType.

You can see examples in numpy/core/src/multiarray/ctors.c

cheers,

David
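The C-level calls David mentions have straightforward Python-level analogues, which are handy for checking an extension's behavior from the interpreter (this sketch is an addition, not part of the original mail):

```python
import numpy as np

# numpy.int8 scalars are instances of the np.int8 type, and carry a dtype:
x = np.int8(7)
assert isinstance(x, np.int8)
assert x.dtype == np.dtype(np.int8)

# What PyArray_SimpleNew(2, dims, NPY_INT8) creates, seen from Python:
a = np.zeros((2, 3), dtype=np.int8)
assert a.dtype == np.int8
print(type(x), a.dtype)
```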


Re: [Numpy-discussion] take not respecting masked arrays?

2010-02-28 Thread Pierre GM
On Mar 1, 2010, at 1:02 AM, Peter Shinners wrote:
 Here is the code as I would like it to work. 
 http://python.pastebin.com/CsEnUrSa
 
 
 import numpy as np
 
 values = np.array((40, 18, 37, 9, 22))
 index = np.arange(3)[None,:] + np.arange(5)[:,None]
 mask = index >= len(values)
 
 maskedindex = np.ma.array(index, mask=mask)
 
 lookup = np.ma.take(values, maskedindex)
 # This fails with an index error, but illegal indices are masked.

OK, but this doesn't even work on a regular ndarray: np.take(values, index) 
raises an IndexError as well. Not much I can do there, then.

 # It succeeds when mode=clip, but it does not return a masked array.
 print lookup


Oh, I get it... The problem is that we use `take` on an ndarray (values) with a 
masked array as indices (maskedindex). OK, I could modify some of the mechanics 
so that a masked array is output even if an ndarray was passed.
Now, about masked indices: OK, you're right, the result should be masked 
accordingly. Can you open a ticket, then ?