[Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
Hi,
I just noticed some weird behavior in operations with uint64 and int,
here's an example:

>>> numpy.uint64(3) + 1
4.0
>>> type(numpy.uint64(3) + 1)
<type 'numpy.float64'>

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy re-factoring project

2010-06-13 Thread Pauli Virtanen
Sun, 13 Jun 2010 06:54:29 +0200, Sturla Molden wrote:
[clip: memory management only in the interface]

You forgot views: if memory management is done in the interface layer, it 
must also make sure that the memory pointed to by a view is never moved 
around, and not freed before all the views are freed.

This probably cannot be done with existing Python objects, since IIRC, 
they always own their own memory. So one should refactor the memory 
management out of ndarray on the interface side.

The core probably also needs to decide when to create views (indexing), 
so a callback to the interface is needed.

 Second: If we cannot figure out how much to allocate before starting to
 use C (very unlikely), 

The point is that while you in principle can figure it out, you don't 
*want to* do this work in the interface layer. Computing the output size 
is just code duplication.

 we can always use a callback to Python. Or we can

This essentially amounts to having a

NpyArray_resize

in the interface layer, which the core can call.

Here, however, one has the requirement that all arrays needed by core are 
always passed in to it, including temporary arrays. Sounds a bit like 
Fortran 77 -- it's not going to be bad, if the number of temporaries is 
not very large.

If the core needs to allocate arrays by itself, this will inevitably lead 
to memory management that cannot be avoided by having the allocation done in 
the interface layer, as the garbage collector must not mistake a 
temporary array referenced only by the core for unused.

An alternative here is to have an allocation routine

NpyArray_new_tmp
NpyArray_del_tmp

whose allocated memory is released automatically, if left dangling, after 
the call to the core is completed. This would be useful for temporary 
arrays, although then one would have to be careful not ever to return 
memory allocated by this back to the caller.
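
A rough sketch in Python of that new_tmp/del_tmp idea (the names and the
pool semantics here are only illustrative, not an actual NumPy API):

```python
# Illustrative sketch only: a pool that owns temporaries allocated during
# a "core" call and frees whatever is left dangling when the call completes.
class TmpPool:
    def __init__(self):
        self._live = []

    def new_tmp(self, nbytes):
        # Stand-in for NpyArray_new_tmp: allocate and remember the buffer.
        buf = bytearray(nbytes)
        self._live.append(buf)
        return buf

    def del_tmp(self, buf):
        # Stand-in for NpyArray_del_tmp: explicit early release by the core.
        self._live.remove(buf)

    def release_dangling(self):
        # Called by the interface layer after the core call returns;
        # anything never del_tmp'd is dropped here. Returns the count.
        n = len(self._live)
        self._live.clear()
        return n

pool = TmpPool()
a = pool.new_tmp(64)
b = pool.new_tmp(32)
pool.del_tmp(a)                     # core released this one itself
leaked = pool.release_dangling()    # b was left dangling
print(leaked)  # 1
```

The caveat from above applies: memory obtained this way must never be
handed back to the caller, since the pool will reclaim it.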

 have two C functions, the first determining the amount of allocation,
 the second doing the computation.

That sounds like a PITA, think about LAPACK.

-- 
Pauli Virtanen



[Numpy-discussion] Numpy, swig and docstring

2010-06-13 Thread Fabrice Silva
Hi folks,
I am trying to wrap a library with swig, distutils and numpy, and am
facing troubles. In fact, the SWIG documentation mentions that it is
possible to specify a module docstring in the %module directive:

%module(docstring="This is the example module's docstring") example

where example is the name of the module to be generated.
When using numpy.distutils, I get the following error:

building extension _example sources
error: mismatch of extension names: example.i provides None but expected
'example'

The build_src command tries to get the module name by using a regular
expression (see numpy/distutils/command/build_src.py):

_swig_module_name_match = re.compile(
    r'\s*%module\s*(.*\(\s*package\s*=\s*"(?P<package>[\w_]+)".*\)|)'
    r'\s*(?P<name>[\w_]+)', re.I).match

which in fact does not cope with the docstring option.
Is there a simple workaround, or should I compile manually (which is
possible in this simple project)?

Thanks



Re: [Numpy-discussion] PyArray_Scalar() and Unicode

2010-06-13 Thread Pauli Virtanen
Sat, 12 Jun 2010 17:33:13 -0700, Dan Roberts wrote:
[clip: refactoring PyArray_Scalar]
 There are a few problems with this.  The biggest problem for me is
 that it appears PyUCS2Buffer_FromUCS4() doesn't produce UCS2 at all, but
 rather UTF-16 since it produces surrogate pairs for code points above
 0xFFFF.  My first question is: is there any time when the data produced
 by PyUCS2Buffer_FromUCS4() wouldn't be parseable by a standards
 compliant UTF-16 decoder?  

Since UTF-16 = UCS-2 + surrogate pairs, as far as I know, the data 
produced should always be parseable by DecodeUTF16.

Conversion to real UCS-2 from UCS-4 would be a lossy procedure, since not 
all code points can be represented with 2 bytes.
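
For instance (in Python, just to illustrate the surrogate-pair mechanics):
a code point above 0xFFFF encodes to two 16-bit units, while a BMP code
point needs only one and is identical in UCS-2:

```python
# U+1D11E (musical symbol G clef) is above 0xFFFF, so UTF-16 represents it
# as a surrogate pair: high surrogate 0xD834, low surrogate 0xDD1E.
clef = '\U0001D11E'
encoded = clef.encode('utf-16-be')
print(encoded.hex())  # d834dd1e

# A BMP code point occupies a single 16-bit unit, same as in UCS-2:
print('A'.encode('utf-16-be').hex())  # 0041
```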

-- 
Pauli Virtanen



Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Nadav Horesh
A Python int can be larger than numpy.int64, therefore it should be coerced 
to float64 (or float96/float128).

  Nadav



Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
On Sun, Jun 13, 2010 at 4:45 PM, Nadav Horesh nad...@visionsense.com wrote:
 int can be larger than numpy.int64 therefore it should be coerced to float64 
 (or float96/float128)

Ok, I see. The result's type is defined by the types of the operands, not
by their values. I guess this has been discussed earlier, but with small
operands this behavior may be unexpected. For example, by the same rule
the result of int64 + int should be float64, while currently it is int64.
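
That int64 + int case is quick to check:

```python
import numpy as np

# int64 + Python int keeps an integer result type, unlike uint64 + int:
x = np.int64(3) + 1
print(type(x).__name__)  # int64
```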

Pearu


Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Charles R Harris
On Sun, Jun 13, 2010 at 9:20 AM, Pearu Peterson pearu.peter...@gmail.comwrote:

 [clip: the result type is defined by the operand types, not their values]


It's the combination of unsigned with signed that causes the promotion. The
int64 type can't hold the largest values in uint64. Strictly speaking,
doubles can't hold either of the 64 bit integer types without loss of
precision but at least the degradation is more gradual. Another possibility
here would be to raise an error/warning because no suitable integer type is
available, that would be perhaps less surprising than the promotion to
float. IIRC, there is also a case where numeric arrays get promoted to
objects, but I don't see my old table of such things floating about at the
moment.
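
The promotion table can be queried directly with np.promote_types (a
function added in later NumPy releases), which shows the fallback:

```python
import numpy as np

# No integer dtype holds both int64 and uint64 extremes, so mixing the
# signed and unsigned 64-bit types falls back to float64:
print(np.promote_types(np.uint64, np.int64))   # float64

# Unsigned combined with a *wider* signed type still has an integer home:
print(np.promote_types(np.uint32, np.int64))   # int64
```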

Chuck


Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Sturla Molden
On 13.06.2010 18:19, Charles R Harris wrote:

 [clip: promotion of uint64 + int; possibly raise an error instead]

As I see it, the 'correct' solution would be coercion to an 
arbitrary-precision integer, such as 'long' in Python 2 and 'int' in Python 3.

Sturla



Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread Friedrich Romstedt
2010/6/13 Pauli Virtanen p...@iki.fi:
 def tensor_contraction_single(tensor, dimensions):
     """Perform a single tensor contraction over the dimensions given"""
     swap = [x for x in range(tensor.ndim)
             if x not in dimensions] + list(dimensions)
     x = tensor.transpose(swap)
     for k in range(len(dimensions) - 1):
         x = np.diagonal(x, axis1=-2, axis2=-1)
     return x.sum(axis=-1)

 def _preserve_indices(indices, removed):
     """Adjust values of indices after some items are removed"""
     for r in reversed(sorted(removed)):
         indices = [j if j <= r else j - 1 for j in indices]
     return indices

 def tensor_contraction(tensor, contractions):
     """Perform several tensor contractions"""
     while contractions:
         dimensions = contractions.pop(0)
         tensor = tensor_contraction_single(tensor, dimensions)
         contractions = [_preserve_indices(c, dimensions)
                         for c in contractions]
     return tensor

Pauli,

I choked on your code for 10 minutes or so.  I believe it could use some
more comments.

Alan,

Do you really need multiple tensor contractions in one step?  If yes,
I'd like to put in my 2 cents in coding such one using a different
approach, doing all the contractions in one step (via broadcasting).
It's challenging.  We can generalise this problem as much as we want,
e.g. to contracting three instead of only two dimensions.  But first,
in case you have only two dimensions to contract at one single time
instance, then Josef's first suggestion would be fine I think.  Simply
push out the diagonal dimension to the end via .diagonal() and sum
over the last so created dimension.  E.g.:

# First we create some bogus array to play with:
>>> a = numpy.arange(5 ** 4).reshape(5, 5, 5, 5)

# Let's see how .diagonal() acts (just FYI, I haven't verified that it
# is what we want):
>>> a.diagonal(axis1=0, axis2=3)
array([[[  0, 126, 252, 378, 504],
        [  5, 131, 257, 383, 509],
        [ 10, 136, 262, 388, 514],
        [ 15, 141, 267, 393, 519],
        [ 20, 146, 272, 398, 524]],

       [[ 25, 151, 277, 403, 529],
        [ 30, 156, 282, 408, 534],
        [ 35, 161, 287, 413, 539],
        [ 40, 166, 292, 418, 544],
        [ 45, 171, 297, 423, 549]],

       [[ 50, 176, 302, 428, 554],
        [ 55, 181, 307, 433, 559],
        [ 60, 186, 312, 438, 564],
        [ 65, 191, 317, 443, 569],
        [ 70, 196, 322, 448, 574]],

       [[ 75, 201, 327, 453, 579],
        [ 80, 206, 332, 458, 584],
        [ 85, 211, 337, 463, 589],
        [ 90, 216, 342, 468, 594],
        [ 95, 221, 347, 473, 599]],

       [[100, 226, 352, 478, 604],
        [105, 231, 357, 483, 609],
        [110, 236, 362, 488, 614],
        [115, 241, 367, 493, 619],
        [120, 246, 372, 498, 624]]])
# Here, you can see (obviously :-) that the last dimension is the
# diagonal ... just believe in the semantics.
>>> a.diagonal(axis1=0, axis2=3).shape
(5, 5, 5)

# Sum over the diagonal shape parameter:
# Again I didn't check this result's numbers.
>>> a.diagonal(axis1=0, axis2=3).sum(axis=-1)
array([[1260, 1285, 1310, 1335, 1360],
       [1385, 1410, 1435, 1460, 1485],
       [1510, 1535, 1560, 1585, 1610],
       [1635, 1660, 1685, 1710, 1735],
       [1760, 1785, 1810, 1835, 1860]])

The .diagonal() approach has the benefit that one doesn't have to care
about where the diagonal dimension ends up, it's always the last
dimension of the resulting array.  With my solution, this was not so
fine, because it could also become the first dimension of the
resulting array.
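
The same single contraction can also be cross-checked against numpy.trace,
which sums the diagonal over two axes in one call (assuming I read the
semantics right):

```python
import numpy as np

a = np.arange(5 ** 4).reshape(5, 5, 5, 5)

# .diagonal(...).sum(-1) and np.trace over the same two axes agree:
c1 = a.diagonal(axis1=0, axis2=3).sum(axis=-1)
c2 = np.trace(a, axis1=0, axis2=3)
assert np.array_equal(c1, c2)
print(c1[0, 0])  # 1260
```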

For the challenging part, I'll await your response first ...

Friedrich


Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread Alan Bromborsky
Friedrich Romstedt wrote:
 [clip: multiple contractions via .diagonal()]

I am writing a symbolic tensor package for general relativity.  In making
symbolic tensors concrete I generate numpy arrays stuffed with sympy
functions and symbols.  The operations are tensor product
(numpy.multiply.outer), permutation of indices (swapaxes), partial and
covariant differentiation (both vector operators that increase the array
dimension by one), and contraction.  I think I need to do the contraction
last to make sure everything comes out correctly.  Thus in many cases I
would be performing multiple contractions on the tensor resulting from all
the other operations.  One question to ask: considering that I am stuffing
the arrays with symbolic objects and all the operations on the objects are
done by the sympy modules, would using numpy operations to perform the
contractions really save any time over just doing the contraction in
Python code with a numpy array?
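
On the timing question: a numpy array with dtype=object still dispatches
every element operation back to a Python-level call, so the win over a
hand-written loop is mostly bookkeeping, not raw speed.  A small sketch
with fractions.Fraction standing in for sympy objects (sympy itself is not
assumed here):

```python
import numpy as np
from fractions import Fraction

# Object arrays delegate arithmetic to the stored Python objects, so
# swapaxes, outer products and sums all work, but each scalar operation
# is a Python call rather than vectorized C arithmetic.
a = np.array([[Fraction(1, 2), Fraction(1, 3)],
              [Fraction(1, 4), Fraction(1, 5)]], dtype=object)

contracted = a.trace()   # contraction of the two indices
print(contracted)        # 7/10
assert contracted == Fraction(1, 2) + Fraction(1, 5)
```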


Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread Friedrich Romstedt
2010/6/13 Alan Bromborsky abro...@verizon.net:
 I am writing symbolic tensor package for general relativity.  In making
 symbolic tensors concrete
 I generate numpy arrays stuffed with sympy functions and symbols.

That sounds interesting.

 The
 operations are tensor product
 (numpy.multiply.outer), permutation of indices (swapaxes),  partial and
 covariant (both vector operators that
 increase array dimensions by one) differentiation, and contraction.

I would like to know more precisely what these differentiations do, and
how it comes that they add an index to the tensor.

 I think I need to do the contraction last
 to make sure everything comes out correctly.  Thus in many cases I would
 be performing multiple contractions
 on the tensor resulting from all the other operations.

Hm, ok, so I guess I shall give my 1 cent now.

Ok.

# First attempt (FYI, failed):

# The general procedure is, to extract a multi-dimensional diagonal array.
# The sum \sum_{ij = 0}^{M} \sum_{kl = 0}^{N} is actually the sum over a
# 2D array with indices I \equiv i \equiv j and K \equiv k \equiv
l.  Meaning:
# \sum_{(I, K) = (0, 0)}^{(M, N)}.
# Thus, if we extract the indices with 2D arrays [[0], [1], ...,
[N - 1]] for I and
# [[0, 1, ..., M - 1]] on the other side for K, then numpy's broadcasting
# mechanism will broadcast them to the same shape, yielding (N, M) arrays.
# Then finally we sum over this X last dimensions when there were X
# contractions, and we're done.

# Hmmm, when looking closer at the problem, it seems that this isn't
# adequate.  Because we would have to insert open slices, but cannot
# create them outside of the [] operator ...

# So now follows the second attempt:

def contract(arr, *contractions):
    """*CONTRACTIONS is e.g.:
        (0, 1), (2, 3)
    meaning two contractions, one of 0 & 1, and one of 2 & 3,
    but also:
        (0, 1, 2),
    is allowed, meaning contract 0 & 1 & 2."""

    # First, we check if we can contract using the *contractions* given ...

    for contraction in contractions:
        # Extract the dimensions used.
        dimensions = numpy.asarray(arr.shape)[list(contraction)]

        # Check if they are all the same.
        dimensionsdiff = dimensions - dimensions[0]
        if numpy.abs(dimensionsdiff).sum() != 0:
            raise ValueError('Contracted indices must be of same dimension.')

    # So now, we can contract.
    #
    # First, pull the contracted dimensions all to the front ...

    # The names of the indices.
    names = list(range(arr.ndim))

    # Pull all of the contractions.
    names_pulled = []
    for contraction in contractions:
        names_pulled = names_pulled + list(contraction)
        # Remove the pulled index names from the pool:
        for used_index in contraction:
            # Some more sanity check
            if used_index not in names:
                raise ValueError('Each index can only be used in one '
                                 'contraction.')
            names.remove(used_index)

    # Concatenate the pulled indices and the left-over indices.
    names_final = names_pulled + names

    # Perform the swap.
    arr = arr.transpose(names_final)

    # Perform the contractions ...

    for contraction in contractions:
        # The respective indices are now, since we pulled them, the
        # frontmost indices:
        ncontraction = len(contraction)
        # The index array:
        # shape[0] = shape[1] = ... = shape[ncontraction - 1]
        I = numpy.arange(0, arr.shape[0])
        # Perform contraction:
        index = [I] * ncontraction
        arr = arr[tuple(index)].sum(axis=0)

    # If I made no mistake, we should be done now.
    return arr

Ok, it didn't get much shorter than Pauli's solution, so you decide ...
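
As a sanity check on multiple contractions, one route can be compared
against nested traces and an explicit Python loop:

```python
import numpy as np

# Double contraction over axes (0, 1) and (2, 3) of a rank-4 tensor,
# done as two nested traces; compare against an explicit Python loop.
a = np.arange(3 ** 4, dtype=float).reshape(3, 3, 3, 3)

via_trace = np.trace(np.trace(a, axis1=0, axis2=1), axis1=0, axis2=1)
explicit = sum(a[i, i, j, j] for i in range(3) for j in range(3))
assert via_trace == explicit
print(via_trace)  # 360.0
```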

 One question to
 ask would be considering that I am stuffing
 the arrays with symbolic objects and all the operations on the objects
 would be done using the sympy modules,
 would using numpy operations to perform the contractions really save any
 time over just doing the contraction in
 python code with a numpy array.

I don't know anything about sympy.  I think there's some typo around:
I guess you mean creating some /sympy/ array and doing the operations
using that instead of using a numpy array having sympy dtype=object
content?

Friedrich


Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread David Goldsmith
Is this not what core.numeric.tensordot
(http://docs.scipy.org/numpy/docs/numpy.core.numeric.tensordot/) does?

DG

On Sun, Jun 13, 2010 at 12:37 PM, Friedrich Romstedt 
friedrichromst...@gmail.com wrote:

 [clip: contract() implementation and sympy question]



-- 
Mathematician: noun, someone who disavows certainty when their uncertainty
set is non-empty, even if that set has measure zero.

Hope: noun, that delusive spirit which escaped Pandora's jar and, with her
lies, prevents mankind from committing a general suicide.  (As 

Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-13 Thread Vincent Davis
As an update: I was able to get the py 3.1.2 current source to build/install.
After that I was able to build and install the current numpy. Everything
seems OK.
I did try to run the tests but get an error. It looks like nose has a
problem.

>>> np.test()
Running unit tests for numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 321, in test
    self._show_system_info()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 187, in _show_system_info
    nose = import_nose()
  File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 56, in import_nose
    import nose
  File "nose/__init__.py", line 1, in <module>
    from nose.core import collector, main, run, run_exit, runmodule
  File "nose/core.py", line 142
    print "%s version %s" % (os.path.basename(sys.argv[0]), __version__)

Vincent


On Tue, Jun 8, 2010 at 7:02 PM, Vincent Davis vinc...@vincentdavis.netwrote:

 Have any of you built py 3.1.2 from source on a mac?
 Vincent

 On Tue, Jun 8, 2010 at 6:56 PM, David da...@silveregg.co.jp wrote:
  On 06/09/2010 08:04 AM, Vincent Davis wrote:
  I do have limits.h in 10.4 sdk
 
   So what next? Any ideas?
  I had tried to build py 3.1.2 from source but that did not work either.
 
  I had the same issue when I tried the python 3 branch on mac os x. I
  have not found the issue yet, but I am afraid it is quite subtle, and
  possibly a bug in python 3 distutils (plus some issues in
 numpy.distutils).
 
  cheers,
 
  David
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 


Vincent Davis
720-301-3003
vinc...@vincentdavis.net
my blog: http://vincentdavis.net | LinkedIn: http://www.linkedin.com/in/vincentdavis


Re: [Numpy-discussion] Tensor contraction

2010-06-13 Thread Alan Bromborsky
Friedrich Romstedt wrote:
 [clip: contract() implementation and sympy question]

No, I stuff a numpy array with sympy objects.  Except for contraction and
differentiation, numpy does everything I need to do with the tensors.
Namely -

addition
subtraction
multiplication by a scalar (sympy object)
tensor product (multiply.outer)
index permutation (swapaxes)

Plus -

contraction
differentiation

There are two types of vector derivatives (think 

Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-13 Thread David Cournapeau
On Mon, Jun 14, 2010 at 5:17 AM, Vincent Davis vinc...@vincentdavis.net wrote:

 As an update: I was able to get the Python 3.1.2 current source to build/install.
 After that I was able to build and install the current numpy. Everything
 seems ok.
 I did try to run the tests but get an error. It looks like nose has a
 problem.

  >>> np.test()
 Running unit tests for numpy
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 321, in test
     self._show_system_info()
   File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 187, in _show_system_info
     nose = import_nose()
   File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 56, in import_nose
     import nose
   File "nose/__init__.py", line 1, in <module>
     from nose.core import collector, main, run, run_exit, runmodule
   File "nose/core.py", line 142
     print "%s version %s" % (os.path.basename(sys.argv[0]), __version__)


You need to install a nose version compatible with python 3
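(The traceback above ends in a SyntaxError: nose's core.py uses the Python 2 print statement, which Python 3 cannot parse at all, so the import fails before anything runs. A quick way to see this under Python 3:)

```python
# The offending line from nose/core.py, as a string; compiling it with
# a Python 3 interpreter raises SyntaxError because "print" is a
# function, not a statement, in Python 3.
py2_line = 'print "%s version %s" % (name, version)'
try:
    compile(py2_line, '<nose/core.py>', 'exec')
    is_py2_syntax_error = False
except SyntaxError:
    is_py2_syntax_error = True
print(is_py2_syntax_error)  # True under Python 3
```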

David


Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-13 Thread Vincent Davis
On Sun, Jun 13, 2010 at 5:06 PM, David Cournapeau courn...@gmail.com wrote:


 On Mon, Jun 14, 2010 at 5:17 AM, Vincent Davis vinc...@vincentdavis.net
 wrote:

  As an update: I was able to get the Python 3.1.2 current source to build/install.
  After that I was able to build and install the current numpy. Everything
  seems ok.
 I did try to run the tests but get an error. It looks like nose has a
 problem.
  >>> np.test()
  Running unit tests for numpy
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 321, in test
      self._show_system_info()
    File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 187, in _show_system_info
      nose = import_nose()
    File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/numpy/testing/nosetester.py", line 56, in import_nose
      import nose
    File "nose/__init__.py", line 1, in <module>
      from nose.core import collector, main, run, run_exit, runmodule
    File "nose/core.py", line 142
      print "%s version %s" % (os.path.basename(sys.argv[0]), __version__)

 You need to install a nose version compatible with python 3

I kinda get that; I posted on the nose list asking what source/version to
install. I installed the most recent.

Thanks
Vincent

 David





[Numpy-discussion] Serializing numpy record array

2010-06-13 Thread Vishal Rana
I created a record array (from strings and floats) with no dtype defined as:

ra = np.core.records.fromrecords(sq_list, names=('a', 'b', 'c'))

ra.a is found to be of type numpy.float64. When I serialize the array using
pyamf under Mac OS X it works great, but under Ubuntu 10.04 it fails; it
looks like serialization is failing for type numpy.float64. Is there any
workaround? I was trying to set dtype=object; is that ok?

Also, how can I set the same dtype (e.g. object) for all of a, b, and c?

Thanks
Vishal Rana


Re: [Numpy-discussion] Installing numpy on py 3.1.2 , osx

2010-06-13 Thread David
On 06/14/2010 08:10 AM, Vincent Davis wrote:


 I kinda get that, I posted on the nose list ask what source/version to
 install. I installed the most recent.

They have a special branch for py3k support; I think you have to use that.

cheers,

David


Re: [Numpy-discussion] Serializing numpy record array

2010-06-13 Thread Robert Kern
On Sun, Jun 13, 2010 at 19:52, Vishal Rana ranavis...@gmail.com wrote:
 I created a record array (from strings and floats) with no dtype defined as:
 ra = np.core.records.fromrecords(sq_list, names=('a', 'b', 'c'))
 ra.a is found to be of type numpy.float64, when I serialize it using pyamf
 under Mac OS X it works great but under ubuntu 10.04 it fails. Looks like
 serialization is failing for type numpy.float64.

Please don't just say that something fails. Show us exactly what you
did (copy-and-paste a minimal, but complete example of the code that
fails) and show us exactly what errors you get (copy-and-paste the
tracebacks).

Since I'm sure most of us here aren't familiar with the details of
PyAMF, perhaps you can read its documentation or its code to determine
exactly how it is trying to serialize objects and tell us that. Is it
using pickle? Is it using its own scheme?

 Is there any workaround? I
 was trying to set dtype=object; is that ok?

Probably not what you want.

 Also how can I set same dtype (for eg. object) for all a, b, and c?

In [3]: x = np.rec.fromrecords([('string', 10, 15.5)], names=['a', 'b', 'c'])

In [4]: x
Out[4]:
rec.array([('string', 10, 15.5)],
  dtype=[('a', '|S6'), ('b', 'i4'), ('c', 'f8')])

In [5]: x.astype(np.dtype([('a', object), ('b', object), ('c', object)]))
Out[5]:
rec.array([('string', 10, 15.5)],
  dtype=[('a', '|O4'), ('b', '|O4'), ('c', '|O4')])

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Serializing numpy record array

2010-06-13 Thread Vishal Rana
Robert,

There is no error on the Python side, but the Flex client fails with
Channel.Call.Failed, faultDetail=NetConnection.Call.Failed: HTTP: Status 503.

Here is the documentation for the type mapping between Flex and Python:
http://pyamf.org/architecture/typemap.html. PyAMF takes Python data types, so
I am not sure whether numpy dtypes will work with it, but it is working on
Mac OS X.

Thanks
Vishal Rana


On Sun, Jun 13, 2010 at 6:04 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sun, Jun 13, 2010 at 19:52, Vishal Rana ranavis...@gmail.com wrote:
  I created a record array (from strings and floats) with no dtype defined
 as:
  ra = np.core.records.fromrecords(sq_list, names=('a', 'b', 'c'))
  ra.a is found to be of type numpy.float64, when I serialize it using
 pyamf
  under Mac OS X it works great but under ubuntu 10.04 it fails. Looks like
  serialization is failing for type numpy.float64.

 Please don't just say that something fails. Show us exactly what you
 did (copy-and-paste a minimal, but complete example of the code that
 fails) and show us exactly what errors you get (copy-and-paste the
 tracebacks).

 Since I'm sure most of us here aren't familiar with the details of
 PyAMF, perhaps you can read its documentation or its code to determine
 exactly how it is trying to serialize objects and tell us that. Is it
 using pickle? Is it using its own scheme?

  Is there any workaround? I
  was trying to set dtype=object; is that ok?

 Probably not what you want.

  Also how can I set same dtype (for eg. object) for all a, b, and c?

 In [3]: x = np.rec.fromrecords([('string', 10, 15.5)], names=['a', 'b',
 'c'])

 In [4]: x
 Out[4]:
 rec.array([('string', 10, 15.5)],
  dtype=[('a', '|S6'), ('b', 'i4'), ('c', 'f8')])

 In [5]: x.astype(np.dtype([('a', object), ('b', object), ('c', object)]))
 Out[5]:
 rec.array([('string', 10, 15.5)],
  dtype=[('a', '|O4'), ('b', '|O4'), ('c', '|O4')])

 --
 Robert Kern




Re: [Numpy-discussion] Serializing numpy record array

2010-06-13 Thread Robert Kern
On Sun, Jun 13, 2010 at 20:10, Vishal Rana ranavis...@gmail.com wrote:
 Robert,
 There is no error on python side but on flex client which
 is Channel.Call.Failed faultDetail=NetConnection.Call.Failed: HTTP:
 Status 503
 Here is the documentation for type mapping flex vs
 python: http://pyamf.org/architecture/typemap.html, pyamf takes python data
 types so I am not sure if numpy dtypes will work with it or not, but it
 working on Mac OS X.

It looks like you will need to convert to plain Python builtin types
that appear on that list in order to pass anything to your Flex
client. Use the .tolist() method.
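A minimal sketch of that conversion (the field names and values below are made up for illustration; the point is that .tolist() yields builtin Python types that a serializer like PyAMF can map):

```python
import numpy as np

# A record array mixing a string field and a float field.
ra = np.rec.fromrecords([('x', 1.5), ('y', 2.5)], names=['a', 'b'])

# .tolist() converts records to plain tuples of builtin Python types.
plain = ra.tolist()
print(plain)              # [('x', 1.5), ('y', 2.5)]
print(type(plain[0][1]))  # <class 'float'>, not numpy.float64
```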

-- 
Robert Kern



Re: [Numpy-discussion] numpy for jython

2010-06-13 Thread David
On 06/14/2010 10:23 AM, Jarl Haggerty wrote:
 Does anyone have any interest in a port of numpy to jython?

I am interested, and I think interest is growing thanks to the cloud 
computing fad, since a lot of that infrastructure is based on Java 
(Hadoop, HBase, Cassandra to name a few). Being able to use numpy on top 
of those technologies would be extremely useful.

Now, I think reimplementing numpy from scratch is the wrong way to go if 
you want to be able to reuse existing code (scipy, etc...). There is a 
starting effort to refactor numpy to make it easier to port to new VMs, 
and the JVM would be an obvious candidate for testing those things.

cheers,

David

P.S.: Nitpick: please avoid sending rar archives; only a few people can 
read them easily. Use zip or tarballs.


Re: [Numpy-discussion] numpy for jython

2010-06-13 Thread Robert Kern
On Sun, Jun 13, 2010 at 20:58, David da...@silveregg.co.jp wrote:
 On 06/14/2010 10:23 AM, Jarl Haggerty wrote:
 Does anyone have any interest in a port of numpy to jython?

 I am interested, and I think interest is growing thanks to the cloud
 computing fad, since a lot of that infrastructure is based on Java
 (Hadoop, HBase, Cassandra to name a few). Being able to use numpy on top
 of those technologies would be extremely useful.

Disco!

  http://discoproject.org/

This has to be the most interesting, apparently-mature project that no
one is talking about that I have seen in a long time.

-- 
Robert Kern



Re: [Numpy-discussion] Serializing numpy record array

2010-06-13 Thread Vishal Rana
Thanks Robert, it worked.


On Sun, Jun 13, 2010 at 6:23 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sun, Jun 13, 2010 at 20:10, Vishal Rana ranavis...@gmail.com wrote:
  Robert,
  There is no error on python side but on flex client which
  is Channel.Call.Failed faultDetail=NetConnection.Call.Failed: HTTP:
  Status 503
  Here is the documentation for type mapping flex vs
  python: http://pyamf.org/architecture/typemap.html, pyamf takes python
 data
  types so I am not sure if numpy dtypes will work with it or not, but it
  working on Mac OS X.

 It looks like you will need to convert to plain Python builtin types
 that appear on that list in order to pass anything to your Flex
 client. Use the .tolist() method.

 --
 Robert Kern

