[Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Hi,

I've started using complex192 for some calculations and came across two things
that seem to be bugs:

In [1]: sqrt(array([-1.0],dtype = complex192))
Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192)

In [2]: sqrt(array([-1.0],dtype = complex128))
Out[2]: array([ 0.+1.j])

In [3]: x
Out[3]: array([0.0+0.0j, 1.0+0.0j], dtype=complex192)

In [4]: -x
Out[4]: array([3.3621031e-4932+0.0012454e-5119j,
1.6794099e-4932+0.0011717e-5119j], dtype=complex192)

So, sqrt and negation do not seem to work. Has anyone else come across
this as well?

Using numpy 1.0.3 on Ubuntu.

Any help appreciated.

/Matts
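
A possible stop-gap while the extended-precision routines are broken (a sketch
only; it assumes the build actually defines complex192 and that the casts
themselves behave) is to do the arithmetic in complex128 and cast back
afterwards:

import numpy as np

x192 = np.array([-1.0, 4.0], dtype=np.complex192)  # complex192 exists only on some 32-bit builds
r = np.sqrt(x192.astype(np.complex128))            # do the math in the well-behaved dtype
r192 = r.astype(np.complex192)                     # cast back if the wide storage type is wanted
print(r)                                           # [ 0.+1.j  2.+0.j]

This gives up the extra precision for the affected operations, but keeps the
rest of the code working with the same dtype.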



Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread lorenzo bolla
It doesn't work on Windows, either.

In [35]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192))
Out[35]: array([0.0+2.9996087e-305j], dtype=complex192)

In [36]: numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex128))
Out[36]: array([ 0.+1.j])

In [37]: numpy.__version__
Out[37]: '1.0.5.dev4567'

hth,
L.

On 1/7/08, Matthieu Brucher [EMAIL PROTECTED] wrote:

 Hi,

 I managed to reproduce your bugs on an FC6 box:
 >>> import numpy as n

 >>> n.sqrt(n.array([-1.0],dtype = n.complex192))
 array([0.0+9.2747134e+492j], dtype=complex192)

 >>> n.sqrt(n.array([-1.0],dtype = n.complex128))
 array([ 0.+1.j])

 >>> x = n.array([0.0+0.0j, 1.0+0.0j], dtype=n.complex192)
 >>> x
 array([0.0+0.0j, 1.0+0.0j], dtype=complex192)

 >>> -x
 array([3.3621031e-4932+-1.0204727e+2057j, 1.6794099e-4932+5.5355029e-4930j],
 dtype=complex192)

 >>> n.__version__
 '1.0.5.dev4494'

 Matthieu

 2008/1/7, Matts Bjoerck  [EMAIL PROTECTED]:
 
  Hi,
 
  I've started using complex192 for some calculations and came across two
  things
  that seem to be bugs:
 
  In [1]: sqrt(array([-1.0],dtype = complex192))
  Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192)
 
  In [2]: sqrt(array([-1.0],dtype = complex128))
  Out[2]: array([ 0.+1.j])
 
  In [3]: x
  Out[3]: array([0.0+0.0j, 1.0+0.0j], dtype=complex192)
 
  In [4]: -x
  Out[4]: array([3.3621031e-4932+0.0012454e-5119j,
  1.6794099e-4932+0.0011717e-5119j], dtype=complex192)
 
  So, sqrt and negation do not seem to work. Has anyone else come across
  this as well?
 
  Using numpy 1.0.3 on Ubuntu.
 
  Any help appreciated.
 
  /Matts
 
 



 --
 French PhD student
 Website : http://matthieu-brucher.developpez.com/
 Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
 LinkedIn : http://www.linkedin.com/in/matthieubrucher




Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matthieu Brucher
Hi,

I managed to reproduce your bugs on an FC6 box:
>>> import numpy as n

>>> n.sqrt(n.array([-1.0],dtype = n.complex192))
array([0.0+9.2747134e+492j], dtype=complex192)

>>> n.sqrt(n.array([-1.0],dtype = n.complex128))
array([ 0.+1.j])

>>> x = n.array([0.0+0.0j, 1.0+0.0j], dtype=n.complex192)
>>> x
array([0.0+0.0j, 1.0+0.0j], dtype=complex192)

>>> -x
array([3.3621031e-4932+-1.0204727e+2057j, 1.6794099e-4932+5.5355029e-4930j],
dtype=complex192)

>>> n.__version__
'1.0.5.dev4494'

Matthieu

2008/1/7, Matts Bjoerck [EMAIL PROTECTED]:

 Hi,

 I've started using complex192 for some calculations and came across two
 things
 that seem to be bugs:

 In [1]: sqrt(array([-1.0],dtype = complex192))
 Out[1]: array([0.0+-6.1646549e-4703j], dtype=complex192)

 In [2]: sqrt(array([-1.0],dtype = complex128))
 Out[2]: array([ 0.+1.j])

 In [3]: x
 Out[3]: array([0.0+0.0j, 1.0+0.0j], dtype=complex192)

 In [4]: -x
 Out[4]: array([3.3621031e-4932+0.0012454e-5119j,
 1.6794099e-4932+0.0011717e-5119j], dtype=complex192)

 So, sqrt and negation do not seem to work. Has anyone else come across
 this as well?

 Using numpy 1.0.3 on Ubuntu.

 Any help appreciated.

 /Matts





-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Matts Bjoerck
Thanks for the fast replies; now I know it's not my machine that's giving me
trouble.
In the meantime I tested a couple of other functions. It seems that all of them
fail with complex192.

/Matts

In [19]: x192 = arange(0,2,0.5,dtype = complex192)*pi

In [20]: x128 = arange(0,2,0.5,dtype = complex128)*pi

In [21]: sin(x192)
Out[21]: 
array([0.0+0.0j, 1.6794099e-4932+0.0j, 1.5925715e-4932+4.2044193e+3906j,
   1.6794099e-4932+2.3484019e-4941j], dtype=complex192)

In [22]: sin(x128)
Out[22]: 
array([  0.00000000e+00+0.j,   1.00000000e+00+0.j,   1.22460635e-16-0.j,
        -1.00000000e+00-0.j])

In [23]: cos(x192)
Out[23]: 
array([1.6794099e-4932+-2.7540704e+667j, 1.5909299e-4932+0e-5119j,
   1.6794099e-4932+0.074e-5119j, 1.5934769e-4932+0e-5119j],
dtype=complex192)

In [24]: cos(x128)
Out[24]: 
array([  1.00000000e+00-0.j,   6.12303177e-17-0.j,  -1.00000000e+00-0.j,
        -1.83690953e-16+0.j])

In [25]: exp(x192)
Out[25]: 
array([1.6794099e-4932+0.0j, 1.6830259e-4932+1.5656013e-4940j,
   1.6867092e-4932+1.741791e-2343j, 1.6904736e-4932+0e-5119j], 
dtype=complex192)

In [26]: exp(x128)
Out[26]: 
array([   1.+0.j,4.81047738+0.j,   23.14069263+0.j,
111.31777849+0.j])

In [27]: log10(x192)
Warning: divide by zero encountered in log10
Out[27]: 
array([3.3604615e-4932+0.0j, 1.675419e-4932+1.5656013e-4940j,
   1.6777496e-4932+7.1009005e-2357j, 1.6783371e-4932+0.0017686e-5119j],
dtype=complex192)

In [28]: log10(x128)
Warning: divide by zero encountered in log10
Out[28]: 
array([ -inf+0.j,   1.96119877e-001+0.j,   4.97149873e-001+0.j,
 6.73241132e-001+0.j])

In [29]: log(x192)
Warning: divide by zero encountered in log
Out[29]: 
array([3.3604615e-4932+-2.7540703e+667j, 1.6774503e-4932+0.0007074e-5119j,
   1.6796475e-4932+7.0728523e-2357j, 1.6803131e-4932+1.5656013e-4940j],
dtype=complex192)

In [30]: log(x128)
Warning: divide by zero encountered in log
Out[30]: 
array([ -inf+0.j,   4.51582705e-001+0.j,   1.14472989e+000+0.j,
 1.55019499e+000+0.j])







Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Ryan May
Charles R Harris wrote:
 
 
 On Jan 7, 2008 8:47 AM, Ryan May [EMAIL PROTECTED] wrote:
 
 Stuart Brorson wrote:
  I realize NumPy != Matlab, but I'd wager that most users would think
  that this is the natural behavior..
  Well, that behavior won't happen. We won't mutate the dtype of
 the array because
  of assignment. Matlab has copy(-on-write) semantics for things
 like slices while
  we have view semantics. We can't safely do the reallocation of
 memory [1].
 
  That's fair enough.  But then I think NumPy should consistently
   typecheck all assignments and throw an exception if the user attempts
   an assignment which loses information.
 
 
 Yeah, there's no doubt in my mind that this is a bug, if for no other
 reason than this inconsistency:
 
 
 One place where Numpy differs from MatLab is the way memory is handled.
 MatLab is always generating new arrays, so for efficiency it is worth
 preallocating arrays and then filling in the parts. This is not the case
 in Numpy where lists can be used for things that grow and subarrays are
 views. Consequently, preallocating arrays in Numpy should be rare and
 used only when the values have to be generated explicitly, which is
 what you see when using the indexes in your first example. As to
 assignment between arrays, it is a mixed question. The problem again is
 memory usage. For large arrays, it makes sense to do automatic
 conversions, as is also the case in functions taking output arrays,
 because the typecast can be pushed down into C where it is time and
 space efficient, whereas explicitly converting the array uses up
 temporary space. However, I can imagine an explicit typecast function,
 something like
 
 a[...] = typecast(b)
 
 that would replace the current behavior. I think the typecast function
 could be implemented by returning a view of b with a castable flag set
 to true, that should supply enough information for the assignment
 operator to do its job. This might be a good addition for Numpy 1.1.

While that seems like an ok idea, I'm still not sure what's wrong with
raising an exception when there will be information loss.  The exception
is already raised with standard python complex objects.  I can think of
many times in my code where explicit looping is a necessity, so
pre-allocating the array is the only way to go.
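
For reference, a minimal sketch of the inconsistency being referred to
(behaviour as described in this thread; newer releases may warn on the array
case rather than staying silent):

import numpy as np

a = np.zeros(3)                      # pre-allocated float64 array
try:
    a[0] = 1 + 2j                    # Python complex scalar: raises TypeError
except TypeError as err:
    print(err)
a[:] = np.array([1 + 2j, 0, 0])      # complex array: imaginary parts silently dropped
print(a)                             # [ 1.  0.  0.]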

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Zachary Pincus
 For large arrays, it makes sense to do automatic
 conversions, as is also the case in functions taking output arrays,
 because the typecast can be pushed down into C where it is time and
 space efficient, whereas explicitly converting the array uses up
 temporary space. However, I can imagine an explicit typecast  
 function,
 something like

 a[...] = typecast(b)

 that would replace the current behavior. I think the typecast  
 function
 could be implemented by returning a view of b with a castable flag  
 set
 to true, that should supply enough information for the assignment
 operator to do its job. This might be a good addition for Numpy 1.1.

 While that seems like an ok idea, I'm still not sure what's wrong with
 raising an exception when there will be information loss.  The  
 exception
 is already raised with standard python complex objects.  I can  
 think of
 many times in my code where explicit looping is a necessity, so
 pre-allocating the array is the only way to go.

The issue Charles is dealing with here is how to *suppress* the  
proposed exception in cases (as the several that I described) where  
the information loss is explicitly desired.

With what's currently in numpy now, you would have to do something  
like this:
A[...] = B.astype(A.dtype)
to set a portion of A to B, unless you are *certain* that A and B are  
of compatible types.

This is ugly and also bug-prone, seeing as how there's some violation  
of the don't-repeat-yourself principle. (I.e. A is mentioned twice,  
and to change the code to use a different array, you need to change  
the variable name A twice.)

Moreover, and worse, the phrase 'A[...] = B.astype(A.dtype)' creates and
destroys a temporary array the same size as B. It's equivalent to:
temp = B.astype(A.dtype)
A[...] = temp

Which is no good if B is very large. Currently, the type conversion  
in 'A[...] = B' cases is handled implicitly, deep down in the C code  
where it is very very fast, and no temporary array is made.
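
To make the contrast concrete, a small sketch (the array names are arbitrary):

import numpy as np

A = np.zeros(5, dtype=np.float32)
B = np.arange(5, dtype=np.float64)

A[...] = B.astype(A.dtype)   # allocates a float32 temporary the shape of B, then copies it
A[...] = B                   # casts element by element inside the C loop; no temporary

For a large B the first form briefly needs an extra array of the same shape,
which is exactly the cost being described.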

Charles suggests a 'typecast' operator that would set a flag on the  
array B so that trying to convert it would *not* raise an exception,  
allowing for the fast, C-level conversion. (This assumes your  
proposed change in which by default such conversions would raise  
exceptions.) This 'typecast' operation wouldn't do anything but set a  
flag, so it doesn't create a temporary array or do any extra work.

But still, every time that you are not *certain* what the type of a  
result from a given operation is, any code that looks like:
A[i] = calculate(...)
will need to look like this instead:
A[i] = typecast(calculate(...))

I agree with others that such assignments aren't highly common, but  
there will be broken code from this. And as Charles demonstrates,  
getting the details right of how to implement such a change is non-trivial.

Zach



[Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi All,

I'm thinking that one way to make the automatic type conversion a bit safer
to use would be to add a CASTABLE flag to arrays. Then we could write
something like

a[...] = typecast(b)

where typecast returns a view of b with the CASTABLE flag set so that the
assignment operator can check whether to implement the current behavior or
to raise an error. Maybe this could even be part of the dtype scalar,
although that might cause a lot of problems with the current default
conversions. What do folks think?
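
To make the idea concrete, here is a rough pure-Python sketch of the proposed
semantics (numpy has no CASTABLE flag; the wrapper and helper below are
invented purely for illustration):

import numpy as np

class typecast(object):
    """Mark an array as explicitly allowed to be down-cast on assignment."""
    def __init__(self, arr):
        self.arr = np.asarray(arr)

def checked_assign(dst, src):
    # stands in for what the assignment operator itself would do
    if isinstance(src, typecast):
        dst[...] = src.arr          # explicitly blessed: cast silently
    elif np.can_cast(np.asarray(src).dtype, dst.dtype):
        dst[...] = src              # safe cast: nothing can be lost
    else:
        raise TypeError("implicit down-cast from %s to %s"
                        % (np.asarray(src).dtype, dst.dtype))

a = np.zeros(3, dtype=np.float32)
checked_assign(a, typecast(np.arange(3, dtype=np.float64)))  # allowed
# checked_assign(a, np.arange(3, dtype=np.float64))          # would raise TypeError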

Chuck


Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2008-01-07 Thread Travis E. Oliphant
Bruce Sherwood wrote:
 Okay, I've implemented the scheme below that was proposed by Scott 
 Daniels on the VPython mailing list, and it solves my problem. It's also 
 much faster than using numpy directly: even with the def and if 
 overhead: sqrt(scalar) is over 3 times faster than the numpy sqrt, and 
 sqrt(array) is very nearly as fast as the numpy sqrt.
   
Using math.sqrt short-circuits the ufunc approach of returning numpy 
scalars (with all the methods and attributes of 0-d arrays --- which is 
really their primary reason for being).

It is also faster because it avoids the numpy ufunc machinery (which is 
some overhead --- the error setting control and broadcasting facility 
doesn't happen for free).
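
The scheme being referred to is essentially a thin dispatching wrapper along
these lines (a sketch; the name fast_sqrt is made up here):

import math
import numpy as np

def fast_sqrt(x):
    # plain Python scalars skip the ufunc machinery entirely
    if isinstance(x, (int, float)):
        return math.sqrt(x)
    return np.sqrt(x)

fast_sqrt(5.5)             # fast scalar path, returns a plain Python float
fast_sqrt(np.arange(4.0))  # array path, identical to numpy.sqrt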
 Thanks to those who made suggestions. There remains the question of why 
 operator overloading of the kind I've described worked with Numeric and 
 Boost but not with numpy and Boost. 
Basically, it boils down to the fact that I took a shortcut with 
implementing generic multiplication (which all the scalars use for now) 
on numpy scalars so that the multiplication ufunc is called when they 
are encountered.  

Thus, when the numpy.float64 is first (its multiply implementation gets 
called first) and it uses the equivalent of

ufunc.multiply(scalar, vector)

I suspect that because your vector can be converted to an array, this 
procedure works (albeit more slowly than you would like), and so the 
vector object never gets a chance to try. 

A quick fix is to make it so that ufunc.multiply(scalar, vector) raises 
NotImplemented, which may not be desirable, either.

Alternatively, the generic scalar operations should probably not be so 
inclusive and should allow the other object a chance to perform the 
operation more often (by returning NotImplemented).


-Travis O.



Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Travis E. Oliphant
Charles R Harris wrote:
 Hi All,

 I'm thinking that one way to make the automatic type conversion a bit 
 safer to use would be to add a CASTABLE flag to arrays. Then we could 
 write something like

 a[...] = typecast(b)

 where typecast returns a view of b with the CASTABLE flag set so that 
 the assignment operator can check whether to implement the current 
 behavior or to raise an error. Maybe this could even be part of the 
 dtype scalar, although that might cause a lot of problems with the 
 current default conversions. What do folks think?

That is an interesting approach. The issue raised of having to
convert lines of code that currently work (which does implicit casting) 
would still be there (like ndimage), but it would not cause unnecessary 
data copying, and would help with this complaint (that I've heard before 
and have some sympathy towards).

I'm intrigued.

-Travis O.



Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Nils Wagner
On Mon, 7 Jan 2008 19:42:40 +0100
  Francesc Altet [EMAIL PROTECTED] wrote:
 On Monday 07 January 2008, Nils Wagner wrote:
  >>> numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192))
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'module' object has no attribute 'complex192'

  >>> numpy.__version__
  '1.0.5.dev4673'
 
 It seems like you are using a 64-bit platform, and they tend to have
 complex256 (quad-precision) types instead of complex192
 (extended-precision) typical in 32-bit platforms.
 
 Cheers,
 
 -- 
0,0   Francesc Altet http://www.carabos.com/
 V   V   Cárabos Coop. V.   Enjoy Data
 -

>>> import numpy
>>> numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex256))
Segmentation fault

Shall I file a bug report?

Nils


Re: [Numpy-discussion] Bugs using complex192

2008-01-07 Thread Francesc Altet
On Monday 07 January 2008, Nils Wagner wrote:
  >>> numpy.sqrt(numpy.array([-1.0], dtype=numpy.complex192))
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: 'module' object has no attribute 'complex192'

  >>> numpy.__version__
  '1.0.5.dev4673'

It seems like you are using a 64-bit platform, and they tend to have 
complex256 (quad-precision) types instead of complex192 
(extended-precision) typical in 32-bit platforms.
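
One way to sidestep the naming difference is to ask for the type by its C name
rather than its bit width (a sketch; it assumes np.clongdouble is exposed in
the numpy at hand, and the resulting width depends on the platform):

import numpy as np

ctype = np.clongdouble               # complex192 on many 32-bit builds, complex256 on 64-bit
x = np.array([-1.0], dtype=ctype)
print(np.dtype(ctype))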

Cheers,

-- 
0,0   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 -


Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Anne Archibald
On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:

 One place where Numpy differs from MatLab is the way memory is handled.
 MatLab is always generating new arrays, so for efficiency it is worth
 preallocating arrays and then filling in the parts. This is not the case in
 Numpy where lists can be used for things that grow and subarrays are views.
 Consequently, preallocating arrays in Numpy should be rare and used only
 when the values have to be generated explicitly, which is what you see
 when using the indexes in your first example. As to assignment between
 arrays, it is a mixed question. The problem again is memory usage. For large
 arrays, it makes sense to do automatic conversions, as is also the case in
 functions taking output arrays, because the typecast can be pushed down into
 C where it is time and space efficient, whereas explicitly converting the
 array uses up temporary space. However, I can imagine an explicit typecast
 function, something like

 a[...] = typecast(b)

 that would replace the current behavior. I think the typecast function could
 be implemented by returning a view of b with a castable flag set to true,
 that should supply enough information for the assignment operator to do its
 job. This might be a good addition for Numpy 1.1.

This is introducing a fairly complex mechanism to cover, as far as I
can see, two cases: conversion of complex to real, and conversion of
float to integer. Conversion of complex to real can already be done
explicitly without a temporary:

a[...] = b.real

That leaves only conversion of float to integer.

Does this single case cause enough confusion to warrant an exception
and a way around it?

Anne


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote:

 Charles R Harris wrote:
  Hi All,
 
  I'm thinking that one way to make the automatic type conversion a bit
  safer to use would be to add a CASTABLE flag to arrays. Then we could
  write something like
 
  a[...] = typecast(b)
 
  where typecast returns a view of b with the CASTABLE flag set so that
  the assignment operator can check whether to implement the current
  behavior or to raise an error. Maybe this could even be part of the
  dtype scalar, although that might cause a lot of problems with the
  current default conversions. What do folks think?

  That is an interesting approach. The issue raised of having to
 convert lines of code that currently work (which does implicit casting)
 would still be there (like ndimage), but it would not cause unnecessary
 data copying, and would help with this complaint (that I've heard before
 and have some sympathy towards).

 I'm intrigued.


Maybe  we could also set a global flag, typesafe, that in the current Numpy
version would default to false, giving current behavior, but could be set
true to get the new behavior. Then when Numpy 1.1 comes out we could make
the global default true. That way folks could keep the current code working
until they are ready to use the typesafe feature.

Chuck


Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

2008-01-07 Thread Charles R Harris
On Jan 7, 2008 12:08 PM, Anne Archibald [EMAIL PROTECTED] wrote:

 On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
 
  One place where Numpy differs from MatLab is the way memory is handled.
  MatLab is always generating new arrays, so for efficiency it is worth
  preallocating arrays and then filling in the parts. This is not the case
 in
  Numpy where lists can be used for things that grow and subarrays are
 views.
  Consequently, preallocating arrays in Numpy should be rare and used only
  when the values have to be generated explicitly, which is what you see
  when using the indexes in your first example. As to assignment between
  arrays, it is a mixed question. The problem again is memory usage. For
 large
  arrays, it makes sense to do automatic conversions, as is also the case
 in
  functions taking output arrays, because the typecast can be pushed down
 into
  C where it is time and space efficient, whereas explicitly converting
 the
  array uses up temporary space. However, I can imagine an explicit
 typecast
  function, something like
 
  a[...] = typecast(b)
 
  that would replace the current behavior. I think the typecast function
 could
  be implemented by returning a view of b with a castable flag set to
 true,
  that should supply enough information for the assignment operator to do
 its
  job. This might be a good addition for Numpy 1.1.

 This is introducing a fairly complex mechanism to cover, as far as I
 can see, two cases: conversion of complex to real, and conversion of
 float to integer. Conversion of complex to real can already be done
 explicitly without a temporary:

 a[...] = b.real

 That leaves only conversion of float to integer.

 Does this single case cause enough confusion to warrant an exception
 and a way around it?


I think it could be something like C++ where implicit downcasts in general
are flagged. If we went that way, conversions like uint32 to int32 would
also be flagged. The question is how much care we want to take with types.
Since Numpy is much more type oriented than, say, MatLab, such type safety
might be a useful addition. Python scalar types would remain castable, so
you wouldn't have to typecast every darn thing.

Chuck


[Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Hi,

I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
having troubles accessing a numpy scalar with the __array_interface__.
Is this supposed to work? Or should __array_interface__ trigger an
AttributeError on a numpy scalar? Note that I haven't done any digging
on this myself...

-Andrew
---BeginMessage---
Bugs item #1827190, was opened at 2007-11-06 17:35
Message generated for change (Comment added) made by mcfletch
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105988&aid=1827190&group_id=5988

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: GL
Group: v3.0.0
Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: Chris Waters (crwaters)
Assigned to: Mike C. Fletcher (mcfletch)
Summary: Default (Numpy) array return values not accepted

Initial Comment:
The values returned by the default array type, numpy, are not accepted by 
OpenGL; more importantly, they are not accepted by the respective set/bind 
functions.

This output is from the attached test file, run with the latest CVS revision:

glGenTextures(1) -> 1 (<type 'long'>)
glGenTextures(2) -> [2 3] (list: <type 'numpy.ndarray'>, elements: <type
'numpy.uint32'>)
Calling: glBindTexture(GL_TEXTURE_2D, 1)
 (created from glGenTextures(1))
No Exceptions
Calling: glBindTexture(GL_TEXTURE_2D, 2)
 (created from glGenTextures(2), element 0)
Exception Caught: argument 2: <type 'exceptions.TypeError'>: wrong type

The returned type of the array is numpy.ndarray, with each element having the 
type numpy.uint32. This element type is also not immediately convertible to a
function argument type such as GLuint.

The return type of glGenTextures(1), however, is of the type long due to the 
special-case functionality. This is not the case for functions that do not 
handle special cases similar to this, such as 
OpenGL.GL.EXT.framebuffer_object.glGenFramebuffersEXT

A quick global work-around is to change the array type to ctypes after 
importing OpenGL:
>>> from OpenGL.arrays import formathandler
>>> formathandler.FormatHandler.chooseOutput( 'ctypesarrays' )

--

Comment By: Mike C. Fletcher (mcfletch)
Date: 2008-01-07 13:32

Message:
Logged In: YES 
user_id=34901
Originator: NO

I've added an optional flag to the top-level module which allows for using
numpy scalars. ALLOW_NUMPY_SCALARS which when true allows your test case to
succeed.

Coded in Python, however, it's rather slow (and rather poorly
implemented).  I looked into implementing this using the
__array_interface__ on scalars, but the data-pointer there appears to be
randomly generated.  Without that, a conversion at the Python level and
then passing onto the original function seems the only solution.

I doubt we'll get a *good* solution to this in the near term.

--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105988&aid=1827190&group_id=5988

---End Message---


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Scott Ransom
On Monday 07 January 2008 02:13:56 pm Charles R Harris wrote:
 On Jan 7, 2008 12:00 PM, Travis E. Oliphant [EMAIL PROTECTED] 
wrote:
  Charles R Harris wrote:
   Hi All,
  
   I'm thinking that one way to make the automatic type conversion a
   bit safer to use would be to add a CASTABLE flag to arrays. Then
   we could write something like
  
   a[...] = typecast(b)
  
   where typecast returns a view of b with the CASTABLE flag set so
   that the assignment operator can check whether to implement the
   current behavior or to raise an error. Maybe this could even be
   part of the dtype scalar, although that might cause a lot of
   problems with the current default conversions. What do folks
   think?
 
   That is an interesting approach. The issue raised of having to
  convert lines of code that currently work (which does implicit
  casting) would still be there (like ndimage), but it would not
  cause unnecessary data copying, and would help with this complaint
  (that I've heard before and have some sympathy towards).
 
  I'm intrigued.

 Maybe  we could also set a global flag, typesafe, that in the current
 Numpy version would default to false, giving current behavior, but
 could be set true to get the new behavior. Then when Numpy 1.1 comes
 out we could make the global default true. That way folks could keep
 the current code working until they are ready to use the typesafe
 feature.

I'm a bit confused as to which types of casting you are proposing to 
change.  As has been pointed out by several people, users very often 
_want_ to lose information.  And as I pointed out, it is one of the 
reasons why we are all using numpy today as opposed to numeric!

I'd bet that the vast majority of the people on this list believe that 
the OP's problem of complex numbers being automatically cast into floats
is a real problem.  Fine.  We should be able to raise an exception in 
that case.

However, two other very common cases of lost information are not
obviously problems, and are (for many of us) the _preferred_ actions.
These examples are:

1.  Performing floating point math in higher precision, but casting to a 
lower-precision float if that float is on the lhs of the assignment.  
For example:

In [22]: a = arange(5, dtype=float32)

In [23]: a += arange(5.0)

In [24]: a
Out[24]: array([ 0.,  2.,  4.,  6.,  8.], dtype=float32)

To me, that is fantastic.  I've obviously explicitly requested that I 
want a to hold 32-bit floats.  And if I'm careful and use in-place 
math, I get 32-bit floats at the end (and no problems with large 
temporaries or memory doubling by an automatic cast to float64).

In [25]: a = a + arange(5.0)

In [26]: a
Out[26]: array([  0.,   3.,   6.,   9.,  12.])

In this case, I'm reassigning a from 32-bits to 64-bits because I'm 
not using in-place math.  The temporary array created on the rhs 
defines the type of the new assignment.  Once again, I think this is 
good.

2.  Similarly, if I want to stuff floats into an int array:

In [28]: a
Out[28]: array([0, 1, 2, 3, 4])

In [29]: a += 2.5

In [30]: a
Out[30]: array([2, 3, 4, 5, 6])

Here, I get C-like rounding/casting of my originally integer array 
because I'm using in-place math.  This is often a very useful behavior.

In [31]: a = a + 2.5

In [32]: a
Out[32]: array([ 4.5,  5.5,  6.5,  7.5,  8.5])

But here, without the in-place math, a gets converted to doubles.

I can certainly say that in my code (which is used by a fair number of 
people in my field), each of these use cases are common.  And I think 
they are one of the _strengths_ of numpy.

I will be very disappointed if this default behavior changes.

Scott



-- 
Scott M. RansomAddress:  NRAO
Phone:  (434) 296-0320   520 Edgemont Rd.
email:  [EMAIL PROTECTED] Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B  FFCA 9BFA B6FF FFD3 2989


[Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
One of my collaborators would like to use 16-bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
numpy.core.numerictypes, it appears that this should be possible, but the
following doesn't work:

a=arange(10, dtype='float16')
TypeError: data type not understood

Can anyone offer some advice?

Thanks,
Darren

-- 
Darren S. Dale, Ph.D.
Staff Scientist
Cornell High Energy Synchrotron Source
Cornell University
275 Wilson Lab
Rt. 366  Pine Tree Road
Ithaca, NY 14853

[EMAIL PROTECTED]
office: (607) 255-3819
fax: (607) 255-9001
http://www.chess.cornell.edu


Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Matthieu Brucher
float16 is not defined in my version of Numpy :(

Matthieu

2008/1/7, Darren Dale [EMAIL PROTECTED]:

 One of my collaborators would like to use 16-bit float arrays. According to
 http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
 numpy.core.numerictypes, it appears that this should be possible, but the
 following doesn't work:

 a=arange(10, dtype='float16')
 TypeError: data type not understood

 Can anyone offer some advice?

 Thanks,
 Darren

 --
 Darren S. Dale, Ph.D.
 Staff Scientist
 Cornell High Energy Synchrotron Source
 Cornell University
 275 Wilson Lab
 Rt. 366  Pine Tree Road
 Ithaca, NY 14853

 [EMAIL PROTECTED]
 office: (607) 255-3819
 fax: (607) 255-9001
 http://www.chess.cornell.edu




-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Travis E. Oliphant
Darren Dale wrote:
 One of my collaborators would like to use 16-bit float arrays. According to
 http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
 numpy.core.numerictypes, it appears that this should be possible, but the
 following doesn't work:
   

No, it's only possible if the C implementation that NumPy was compiled
with has a 16-bit float type.

Only float, double, and long double are implemented.  These translate to 
various bit-widths on different platforms.  numerictypes is overly 
aggressive at guessing what they might translate to.
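
A quick way to see what a given build actually provides (a sketch using the
1.x-era introspection helpers):

import numpy as np

print(np.sctypes['float'])            # the float widths this build actually defines
print(np.finfo(np.longdouble).bits)   # bit width of 'long double' on this platform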

-Travis O.




[Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
Some days ago a parallel numpy being developed by Brian Granger was
mentioned.

Does the project have a blog or website? Is there any description of its
API and abilities? When is the first release intended?

Regards, D




Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
Another possible approach is to treat downcasting similarly to underflow. That
is, give it its own flag in the errstate, and people can set it to ignore,
warn or raise on downcasting as desired. One could potentially have two
flags, one for downcasting across kinds (float->int, int->bool) and one for
downcasting within kinds (float64->float32). In this case, I personally
would set the first to raise and the second to ignore, and would suggest that
as the default.

IMO:

   1. It's a no-brainer to raise an exception when assigning a complex
   value to a float or integer target. Using Z.real or Z.imag is
   clearer and has the same performance.
   2. I'm fairly dubious about assigning float to ints as is. First off
   it looks like a bug magnet to me due to accidentally assigning a floating
   point value to a target that one believes to be float but is in fact
   integer. Second, C-style rounding is pretty evil; it's not always consistent
   across platforms, so relying on it for anything other than truncating
   already integral values is asking for trouble.
   3. Downcasting within kinds seems much less hazardous than downcasting
   across kinds, although I'd still be happy to be able to regulate it with
   errstate.
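
For comparison, the existing errstate machinery already gives this kind of
per-block control for floating-point conditions; the 'downcast' key below is
only the proposal, not something numpy currently accepts:

import numpy as np

# What exists today: scoped control of floating-point error handling.
with np.errstate(under='warn'):
    tiny = np.array([1e-30], dtype=np.float32)
    tiny * tiny          # underflows to 0.0 and emits a RuntimeWarning

# What is being proposed (hypothetical key, shown only to illustrate the idea):
# with np.errstate(downcast='raise'):
#     anint[...] = afloat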


-tim


Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Andrew Straw
Travis E. Oliphant wrote:
 Andrew Straw wrote:
 Hi,

 I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
 having troubles accessing a numpy scalar with the __array_interface__.
 Is this supposed to work? Or should __array_interface__ trigger an
 AttributeError on a numpy scalar? Note that I haven't done any digging
 on this myself...
   
 
 Yes, the __array_interface__ approach works (but read-only).  In fact, 
 what happens is that a 0-d array is created and you are given the 
 information for it.
 

Thanks. How is the array reference kept alive? Maybe that's Mike's issue?


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Anne Archibald
On 07/01/2008, Timothy Hochberg [EMAIL PROTECTED] wrote:

 I'm fairly dubious about assigning float to ints as is. First off it looks
 like a bug magnet to me due to accidentally assigning a floating point value
 to a target that one believes to be float but is in fact integer. Second,
 C-style rounding is pretty evil; it's not always consistent across
 platforms, so relying on it for anything other than truncating already
 integral values is asking for trouble.

I'm not sure I agree that this is a bug magnet: if your array is
integer and you think it's float, you're going to get burned sooner
rather than later, whether you assign to it or not. The only case I
can see would be a problem would be if zeros() and ones() created
integers - which they don't.

Anne


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Zachary Pincus
Hello all,

In order to help make things regarding this casting issue more  
explicit, let me present the following table of potential down-casts.

(Also, for the record, nobody is proposing automatic up-casting of  
any kind. The proposals on the table focus on preventing some or all  
implicit down-casting.)

(1) complex -> anything else:
Data is lost wholesale.

(2) large float -> smaller float (also large complex -> smaller complex):
Precision is lost.

(3) float -> integer:
Precision is lost, to a much greater degree than (2).

(4) large integer -> smaller integer (also signed/unsigned conversions):
Data gets lost/mangled/wrapped around.
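
Concretely, the four cases can be exercised like this (behaviour of the numpy
discussed in this thread; all of them are silent, though newer releases warn
on case (1)):

import numpy as np

f = np.zeros(1)
f[:] = np.array([1 + 2j])            # (1) complex -> float: imaginary part dropped
f32 = np.zeros(1, dtype=np.float32)
f32[:] = np.array([1 / 3.0])         # (2) float64 -> float32: low-order digits lost
i32 = np.zeros(1, dtype=np.int32)
i32[:] = np.array([2.7])             # (3) float -> int: truncated to 2
i8 = np.zeros(1, dtype=np.int8)
i8[:] = np.array([1000])             # (4) int64 -> int8: wraps around to -24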

The original requests for exceptions to be raised focused on case  
(1), where it is most clear that loss-of-data is happening in a way  
that is unlikely to be intentional.

Robert Kern suggested that exceptions might be raised for cases (1)  
and (3), which are cross-kind casts, but not for within-kind  
conversions like (2) and (4). However, I personally don't think that  
down-converting from float to int is any more or less fraught than  
converting from int32 to int8: if you need a warning/exception in one  
case, you'd need it in the rest. Moreover, there's the principle of  
least surprise, which would suggest that complex rules for when  
exceptions get raised based on the kind of conversion being made is  
just asking for trouble.

So, as a poll, if you are in favor of exceptions instead of automatic  
down-conversion, where do you draw the line? What causes an error?  
Robert seemed to be in favor of (1) and (3). Anne seemed to think  
that only (1) was problematic enough to worry about. I am personally  
cool toward the exceptions, but I think that case (4) is just as  
bad as case (3) in terms of data-loss, though I agree that case (1)  
seems the worst (and I don't really find any of them particularly  
bad, though case (1) is something of an oddity for newcomers, I agree.)

Finally, if people really want these sort of warnings, I would  
suggest that they be rolled into the get/setoptions mechanism, so  
that there's a fine-grained mechanism for turning them to off/warn/raise
exception.

Zach


Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Charles R Harris
Hi,

On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote:

 One of my collaborators would like to use 16-bit float arrays. According to
 http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
 numpy.core.numerictypes, it appears that this should be possible, but the
 following doesn't work:

 a=arange(10, dtype='float16')
 TypeError: data type not understood

 Can anyone offer some advice?


Does he care about speed? I think some graphics cards might support float16,
but any normal implementation in C would require software floating point, a
new type, and you couldn't rely on the normal operators. It might be doable
in C++ with operator overloading and templates, but doing it in C would be a
real hassle.


 Thanks,
 Darren


Chuck


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Charles R Harris
Hi,

On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:


  Another possible approach is to treat downcasting similarly to underflow.
  That is, give it its own flag in the errstate, and people can set it to
  ignore, warn or raise on downcasting as desired. One could potentially have
  two flags, one for downcasting across kinds (float->int, int->bool) and one
  for downcasting within kinds (float64->float32). In this case, I personally
 would set the first to raise and the second to ignore and would suggest that
 as the default.

 IMO:

1. It's a no-brainer to raise an exception when assigning a complex
value to a float or integer target. Using Z.real or Z.imag is
clearer and has the same performance.
2. I'm fairly dubious about assigning float to ints as is. First off
it looks like a bug magnet to me due to accidentally assigning a floating
point value to a target that one believes to be float but is in fact
integer. Second, C-style rounding is pretty evil; it's not always 
 consistent
across platforms, so relying on it for anything other than truncating
already integral values is asking for trouble.
3. Downcasting within kinds seems much less hazardous than
downcasting across kinds, although I'd still be happy to be able to
 regulate it
with errstate.

 Maybe a combination of a typecast function and the errstate would work
well. The typecast function would provide a clear local override of the
default errstate flags, while the user would have the option to specify what
sort of behavior they care about in general.

Chuck


Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote:
 Travis E. Oliphant wrote:
   
 Andrew Straw wrote:
 
 Hi,

 I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
 having troubles accessing a numpy scalar with the __array_interface__.
 Is this supposed to work? Or should __array_interface__ trigger an
 AttributeError on a numpy scalar? Note that I haven't done any digging
 on this myself...
   
   
 Yes, the __array_interface__ approach works (but read-only).  In fact, 
 what happens is that a 0-d array is created and you are given the 
 information for it.

 

 Thanks. How is the array reference kept alive? Maybe that's Mike's issue?
   
Hmm... I thought that I thought through this one.  But, as I look at the 
code now, I don't think it is kept alive.  I suspect that is the problem.

-Travis


   



[Numpy-discussion] Unexpected integer overflow handling

2008-01-07 Thread Zachary Pincus
Hello all,

On my (older) version of numpy (1.0.4.dev3896), I found several  
oddities in the handling of assignment of long-integer values to  
integer arrays:

In : numpy.array([2**31], dtype=numpy.int8)
 
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/Users/zpincus/<ipython console>
ValueError: setting an array element with a sequence.

While this might be reasonable to be an error condition, the precise  
error raised seems not quite right! But not all overflow errors are  
caught in this way:

In : numpy.array([2**31-1], dtype=numpy.int8)
Out: array([-1], dtype=int8)

As above, numpy is quite happy allowing overflows; it just breaks  
when doing a python long-int to int conversion. The conversion from  
numpy-long-int to int does the right thing though (if by "right
thing" you mean "allows silent overflow", which is a matter of
discussion elsewhere right now...):

In : numpy.array(numpy.array([2**31], dtype=numpy.int64),  
dtype=numpy.int8)
Out: array([0], dtype=int8)

At least on item assignment, the overflow exception is less odd:

In : a = numpy.empty(shape=(1,), dtype=numpy.int8)
In : a[0] = 2**31
 
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
/Users/zpincus/<ipython console>
OverflowError: long int too large to convert to int

Things work right with array element assignment:
In : a[0] = numpy.array([2**31], dtype=numpy.int64)[0]

But break again with array scalars, and with the strange ValueError  
again!
In : a[0] = numpy.array(2**31, dtype=numpy.int64)
 
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/Users/zpincus/<ipython console>
ValueError: setting an array element with a sequence.

Note that non-long-int-to-int array scalar conversions work:
In : a[0] = numpy.array(2**31-1, dtype=numpy.int64)

Is this still the case for the current version of numpy?

Best,
Zach


Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:53:06 pm Charles R Harris wrote:
 Hi,

 On Jan 7, 2008 1:00 PM, Darren Dale [EMAIL PROTECTED] wrote:
  One of my collaborators would like to use 16-bit float arrays. According
  to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to
  float16 in
  numpy.core.numerictypes, it appears that this should be possible, but the
  following doesn't work:
 
  a=arange(10, dtype='float16')
  TypeError: data type not understood
 
  Can anyone offer some advice?

 Does he care about speed? I think some graphics cards might support
 float16, but any normal implementation in C would require software floating
 point, a new type, and you couldn't rely on the normal operators. It might
 be doable in C++ with operator overloading and templates, but doing it in C
 would be a real hassle.

He is analyzing many sets of 2D data, each array is 3450x3450 pixels. I think 
memory/space is a greater concern at the moment than speed, 24 vs 48 
megabytes. They are more concerned with processing these arrays than viewing 
them.

Darren


Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Brian Granger
Dmitrey,

This work is being funded by a new NASA grant that I have at Tech-X
Corporation where I work.  The grant officially begins as of Jan 18th,
so not much has been done as of this point.  We have however been
having some design discussions with various people.

Here is a broad overview of the proposed work:

1) Distributed arrays (dense) based on numpy+pyrex+mpi

2) Parallel algorithms that work with those arrays (ffts, linear
algebra, ufuncs, stats, etc)

The code will be BSD and will end up mostly in the ipython1 project,
but parts of it might end up also in numpy and mpi4py as well.

I am more than willing to share more details about the work if you are
interested.  But, I will surely post to the numpy list as things move
forward.

Brian

On Jan 7, 2008 1:13 PM, dmitrey [EMAIL PROTECTED] wrote:
 Some days ago there was mentioned a parallel numpy that is developed by
 Brian Granger.

 Does the project have any blog or website? Has it any description about
 API and abilities? When 1st release is intended?

 Regards, D




Re: [Numpy-discussion] Does float16 exist?

2008-01-07 Thread Darren Dale
On Monday 07 January 2008 03:09:33 pm Travis E. Oliphant wrote:
 Darren Dale wrote:
  One of my collaborators would like to use 16-bit float arrays. According
  to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to
  float16 in numpy.core.numerictypes, it appears that this should be
  possible, but the following doesn't work:

 No, it's only possible if the C implementation that NumPy was compiled
 with has a 16-bit float type.

 Only float, double, and long double are implemented.  These translate to
 various bit-widths on different platforms.  numerictypes is overly
 aggressive at guessing what they might translate to.

Thanks for the clarification, Travis.

Darren


Re: [Numpy-discussion] Is __array_interface__ supposed to work on numpy scalars?

2008-01-07 Thread Travis E. Oliphant
Andrew Straw wrote:
 Hi,

 I'm forwarding a bug from PyOpenGL. The developer, Mike Fletcher, is
 having troubles accessing a numpy scalar with the __array_interface__.
 Is this supposed to work? Or should __array_interface__ trigger an
 AttributeError on a numpy scalar? Note that I haven't done any digging
 on this myself...
   

The other approach would be to get the data attribute directly (which 
returns a pointer to a read-only buffer object containing the data).

I just fixed the reference problem with the scalar array_interface, though.
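
For anyone following along, the two approaches look roughly like this (a
sketch; the exact buffer type returned differs between versions):

import numpy as np

x = np.float64(3.0)              # a numpy scalar
info = x.__array_interface__     # describes a temporary 0-d array made on the fly
buf = x.data                     # read-only buffer holding the scalar's bytes

arr = np.asarray(x)              # or hold an explicit 0-d array yourself
info = arr.__array_interface__   # buffer lifetime is then tied to arr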

-Travis



Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread dmitrey
The one thing I'm most interested in for now is why the simplest matrix
operations are not yet implemented to run in parallel in numpy (for
multi-CPU computers, like my AMD Athlon X2). First of all this concerns
matrix multiplication and division, either elementwise ("point") or
matrix-wise (i.e. A\B, A*B, dot(A,B)).
Another highly convenient and rather simple thing to implement would be
direct Cartesian-product evaluation, as mentioned in
pyslice (IIRC I had some trouble installing that one):
http://scipy.org/Topical_Software#head-cf472934357fda4558aafdf558a977c4d59baecb
I guess for ~95% of users this would be enough, and only 5% will require
message passing between subprocesses etc.
BTW, IIRC the latest MATLAB can already use a 2-processor CPU, and the next
version is promised to handle 4 processors as well.
Regards, D.

Brian Granger wrote:
 Dmitrey,

 This work is being funded by a new NASA grant that I have at Tech-X
 Corporation where I work.  The grant officially begins as of Jan 18th,
 so not much has been done as of this point.  We have however been
 having some design discussions with various people.

 Here is a broad overview of the proposed work:

 1) Distributed arrays (dense) based on numpy+pyrex+mpi

 2) Parallel algorithms that work with those arrays (ffts, linear
 algebra, ufuncs, stats, etc)

 The code will be BSD and will end up mostly in the ipython1 project,
 but parts of it might end up also in numpy and mpi4py as well.

 I am more than willing to share more details about the work if you are
 interested.  But, I will surely post to the numpy list as things move
 forward.

 Brian

 On Jan 7, 2008 1:13 PM, dmitrey [EMAIL PROTECTED] wrote:
   
 Some days ago there was mentioned a parallel numpy that is developed by
 Brian Granger.

 Does the project have any blog or website? Has it any description about
 API and abilities? When 1st release is intended?

 Regards, D





   



Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Robert Kern
dmitrey wrote:
 The only one thing I'm very interested in for now - why the most 
 simplest matrix operations are not implemented to be parallel in numpy 
 yet (for several-CPU computers, like my AMD Athlon X2). First of all 
 it's related to matrix multiplication and division, either point or 
 matrix (i.e. like A\B, A*B, dot(A,B)).

Eric Jones has made an attempt.

   http://svn.scipy.org/svn/numpy/branches/multicore/

Unfortunately, the overhead of starting the threads and acquiring/releasing 
thread locks wipes out most of the performance gains until you get fairly large 
arrays. It is possible that this comes from the particular implementation, 
rather than being intrinsic to the problem.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth.
   -- Umberto Eco


Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Matthieu Brucher
2008/1/7, dmitrey [EMAIL PROTECTED]:

 The only one thing I'm very interested in for now - why the most
 simplest matrix operations are not implemented to be parallel in numpy
 yet (for several-CPU computers, like my AMD Athlon X2). First of all
 it's related to matrix multiplication and division, either point or
 matrix (i.e. like A\B, A*B, dot(A,B)).
 Another one highly convenient and rather simple thing to be implemented
 is direct Decart multiplication like it is mentioned in
 pyslice (IIRC I had some troubles with installing the one)

 http://scipy.org/Topical_Software#head-cf472934357fda4558aafdf558a977c4d59baecb
 I guess for ~95% of users it will be enough, and only 5% will require
 message-pass between subprocesses etc.
 BTW, IIRC latest MATLAB can uses 2-processors CPU already, and next
 version is promised to handle 4-processors as well.
 Regards, D.


Matlab surely relies on MKL to do this (Matlab ships with MKL or ACML now).
The latest Intel library handles multiprocessing, so if you want to use
multithreading, use MKL (and it can handle quad-cores with no sweat). So a
Numpy built against MKL is multithreaded for those operations.
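
One way to check whether a particular numpy build is linked against MKL (or
another threaded BLAS), and hence gets that threading for dot() and the linear
algebra routines, is:

import numpy as np

np.show_config()   # lists the BLAS/LAPACK libraries this build was compiled against

The element-wise ufuncs still run on a single core in stock numpy, which is
what Eric Jones's multicore branch (mentioned elsewhere in this thread) tries
to address.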

On a side note, one of my friends published an article on a Markovian
predictor for prefetching data objects over a network to speed up
computations. It was implemented in Java (search Google for Jackal and
Java), but it could help in the long term if it is manageable.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] CASTABLE flag

2008-01-07 Thread Timothy Hochberg
On Jan 7, 2008 2:00 PM, Charles R Harris [EMAIL PROTECTED] wrote:

 Hi,

 On Jan 7, 2008 1:16 PM, Timothy Hochberg [EMAIL PROTECTED] wrote:

 
  Another possible approach is to treat downcasting similarly to underflow.
  That is, give it its own flag in the errstate, and people can set it to
  ignore, warn or raise on downcasting as desired. One could potentially have
  two flags, one for downcasting across kinds (float->int, int->bool) and one
  for downcasting within kinds (float64->float32). In this case, I personally
  would set the first to raise and the second to ignore and would suggest that
  as the default.
 
  IMO:
 
  1. It's a no-brainer to raise an exception when assigning a
 complex value to a float or integer target. Using Z.real or 
 Z.imag is clearer and has the same performance.
 2. I'm fairly dubious about assigning float to ints as is. First
 off it looks like a bug magnet to me due to accidentally assigning a
 floating point value to a target that one believes to be float but is in
 fact integer. Second, C-style rounding is pretty evil; it's not always
 consistent across platforms, so relying on it for anything other than
 truncating already integral values is asking for trouble.
 3. Downcasting within kinds seems much less hazardous than
  downcasting across kinds, although I'd still be happy to be able to
  regulate it with errstate.
 
  Maybe a combination of a typecast function and the errstate would work
 well. The typecast function would provide a clear local override of the
 default errstate flags, while the user would have the option to specify what
 sort of behavior they care about in general.


Note that using the 'with' statement, you can have reasonably lightweight
local control using errstate. For example, assuming the hypothetical
downcasting flag was named 'downcast':

 with errstate(downcast='ignore'):
  anint[:] = afloat

That would be local enough for me although it may not be to everyone's
taste.
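For reference, the existing errstate flags already support exactly this
local-override pattern today (the 'downcast' key above is hypothetical), and
the silent float-to-int truncation under discussion looks like this in a
short, illustrative session:

import numpy as np

# divide/over/under/invalid are the flags errstate knows about today;
# 'downcast' in the example above does not exist yet.
with np.errstate(divide='ignore'):
    np.array([1.0]) / 0.0        # -> inf, with the divide-by-zero warning suppressed

# Current behaviour being debated: float values assigned into an int array
# are silently truncated toward zero (C-style).
a = np.zeros(3, dtype=int)
a[:] = np.array([1.7, -2.3, 3.9])
print(a)                          # [ 1 -2  3]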

If we were to go the a[:] = typecast(b) route, I have a hankering for some
more fine-grained control of the rounding. Something like a[:] =
lazy_floor(b), a[:] = lazy_truncate(b) or a[:] = lazy_ceil(b), only
with better names. However, I admit that it's not obvious how to implement
that.

Yet another approach is to add an 'out' argument to astype, such as already
exists for many of the other methods. Then a.astype(int, out=b) will
efficiently stick an integerized version of a into b.

FWIW, I suspect that the usefulness of casting in avoiding temporaries is
probably overstated. If you really want to avoid temporaries, you either
need to program in a very tedious fashion, suitable only for very localized
hot spots, or you need to use something like numexpr. If someone comes back
with some measurements from real code that show a big time hit, I'll concede
the case, but so far it all sounds like guessing and my guess is that it'll
rarely make a difference. (And, the cases where it does make a difference
will be places where you're already doing crazy things to optimize the snot out of
the code, and an extra astype or such won't matter.)
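For comparison, a minimal numexpr sketch (assuming numexpr is installed;
illustrative only) that evaluates a compound expression in one blocked pass
instead of materialising the intermediate arrays the plain NumPy version
creates:

import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Plain NumPy: 2*a and 3*b are allocated as full-size temporaries
# before the final addition.
r1 = 2*a + 3*b

# numexpr compiles the whole expression and evaluates it block by block,
# so no full-size temporaries are needed.
r2 = ne.evaluate("2*a + 3*b")

assert np.allclose(r1, r2)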

-- 
[EMAIL PROTECTED]


Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread eric jones
Robert Kern wrote:
 dmitrey wrote:
   
 The only thing I'm very interested in for now is why the simplest 
 matrix operations are not yet implemented to run in parallel in numpy 
 (for several-CPU computers, like my AMD Athlon X2). First of all 
 it's related to matrix multiplication and division, either pointwise or 
 matrix (i.e. like A\B, A*B, dot(A,B)).
 

 Eric Jones has made an attempt.

http://svn.scipy.org/svn/numpy/branches/multicore/

 Unfortunately, the overhead of starting the threads and acquiring/releasing 
 thread locks wipes out most of the performance gains until you get fairly 
 large 
 arrays. It is possible that this comes from the particular implementation, 
 rather than being intrinsic to the problem.

   
Yes, the problem in this implementation is that it uses pthreads for 
synchronization instead of spin locks with a work pool implementation 
tailored to numpy.  The thread synchronization overhead is horrible 
(300,000-400,000 clock cycles) and swamps anything other than very large 
arrays. I have played with spin-lock based solutions that cut this to, 
on average 3000-4000 cycles.  With that, arrays of 1e3-1e4 elements can 
start seeing parallelization benefits.  However, this code hasn't passed 
the mad-scientist tinkering stage...  I haven't touched it in at least 6 
months, and I doubt I'll get back to it very soon (go Brian!).  It did 
look promising for up to 4 processors on some operations (sin, cos, 
etc.) and worth-it-but-less-beneficial on simple operations (+,-,*, 
etc.).  Coupled with something like weave.blitz or numexpr that can (or 
could) compile multiple binary operations into a single kernel, the 
scaling for expressions with multiple simple operations would scale very 
well. 

My tinkering was aimed at a framework that would allow you to write 
little computational kernels in a prescribed way, and then let a numpy 
load-balancer automatically split the work up between worker threads 
that execute these little kernels.  Ideally, this load-balancer would be 
pluggable.  The inner loop for numpy's universal functions is probably 
very close or exactly the interface for these little kernels. Also, it would 
be nice to couple this with weave so that kernels written with weave 
could execute in parallel without user effort.  (This is all like the 
"map" part of a map-reduce architecture... The "reduce" part also needs to 
fit in the architecture to generically handle things like sum, etc.)
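A very rough sketch of that map/reduce split (purely illustrative; run_kernel
and worker are names invented here, and this is not the actual branch or the
weave integration): a "kernel" runs on a contiguous slice, a trivial balancer
hands equal chunks to worker threads, and a reduce step combines the per-chunk
results:

import threading
import numpy as np

def run_kernel(kernel, x, reduce_op=sum, nworkers=2):
    # 'map': hand out equal contiguous chunks of x to worker threads.
    n = x.size
    bounds = [i * n // nworkers for i in range(nworkers + 1)]
    results = [None] * nworkers

    def worker(i, lo, hi):
        # NumPy releases the GIL inside the kernel's array operations,
        # so the workers can run concurrently.
        results[i] = kernel(x[lo:hi])

    threads = [threading.Thread(target=worker, args=(i, lo, hi))
               for i, (lo, hi) in enumerate(zip(bounds[:-1], bounds[1:]))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # 'reduce': combine the per-chunk results.
    return reduce_op(results)

x = np.random.rand(1000000)
total = run_kernel(lambda chunk: np.sin(chunk).sum(), x)
print(total)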

Getting it to work with all flavors of numpy arrays (contiguous, 
non-contiguous, buffered, etc.) is quite a chore, but the contiguous 
arrays (and perhaps some non-contiguous) offer some relatively low 
hanging fruit.  Here's to hoping Brian's project bears fruit.

I haven't thought about matrix ops much, so I don't know if they would 
fit this (minimally) described architecture.  I am sure that they would 
scale well.


eric






Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread Andrew Straw
dmitrey wrote:
 The only thing I'm very interested in for now is why the simplest 
 matrix operations are not yet implemented to run in parallel in numpy 
 (for several-CPU computers, like my AMD Athlon X2).
For what it's worth, sometimes I *want* my numpy operations to happen 
only on one core. (So that I can do more important stuff with the others.)


Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-07 Thread David Cournapeau
Andrew Straw wrote:
 dmitrey wrote:
   
 The only thing I'm very interested in for now is why the simplest 
 matrix operations are not yet implemented to run in parallel in numpy 
 (for several-CPU computers, like my AMD Athlon X2).
 
 For what it's worth, sometimes I *want* my numpy operations to happen 
 only on one core. (So that I can do more important stuff with the others.)
   
I have not investigated really deeply, but matlab, at least the version 
I tried, had the choice between multi-core and mono-core (at runtime). 
Personally, I have not been really impressed by the speed enhancement: 
nothing scientific, but I tried it on several of my old matlab scripts 
(I rarely use matlab since I am using numpy), and for some things it was 
a bit faster, for some a bit slower. But the multi-core support was said 
to be experimental for that version (it was not enabled by default for 
sure).

cheers,

David


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote:
 Hi,

 for my work related on scons, I have a branch build_with_scons in
 the numpy trunk, which I have initialized exactly as documented on the
 numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches).
 When I try to update my branch with the trunk, I got surprising merge
 requests; in particular, it tried to merge all trunk revisions up to 2871,
 whereas I created my branch with a copy from the trunk at revision 4676.
 Am I missing something ? Shouldn't it try to merge from the revision I
 started the branch (since this revision is a common ancestor) ?

AFAIK, the merge command must ALWAYS be given with explicit revision
brackets, since this is precisely the information SVN does not track
at all.  Quoting:

http://svnbook.red-bean.com/en/1.0/ch04s04.html

But as discussed in the section called Best Practices for Merging,
you don't want to merge the changes you've already merged before; you
only want to merge everything new on your branch since the last time
you merged. The trick is to figure out what's new.

The first step is to run svn log on the trunk, and look for a log
message about the last time you merged from the branch:

$ cd calc/trunk
$ svn log
…

r406 | user | 2004-02-08 11:17:26 -0600 (Sun, 08 Feb 2004) | 1 line

Merged my-calc-branch changes r341:405 into the trunk.

…

Aha! Since all branch-changes that happened between revisions 341 and
405 were previously merged to the trunk as revision 406, you now know
that you want to merge only the branch changes after that—by comparing
revisions 406 and HEAD.


As you can see, they recommend that you indicate in that particular
log message the revision brackets of what you've already merged, so
you can find that information easily later. Once you have this
information on record, you start doing all your future updates on the
branch with specific revision ranges, and that seems to work
reasonably well.

HTH,

f


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote:
 On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED] wrote:
   
 Hi,

 for my work related on scons, I have a branch build_with_scons in
 the numpy trunk, which I have initialized exactly as documented on the
 numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches).
 When I try to update my branch with the trunk, I got surprising merge
 requests; in particular, it tried to merge all trunk revisions up to 2871,
 whereas I created my branch with a copy from the trunk at revision 4676.
 Am I missing something ? Shouldn't it try to merge from the revision I
 started the branch (since this revision is a common ancestor) ?
 

 AFAIK, the merge command must ALWAYS be given with explicit revision
 brackets, since this is precisely the information SVN does not track
 at all.  Quoting:

 http://svnbook.red-bean.com/en/1.0/ch04s04.html
 
 But as discussed in the section called Best Practices for Merging,
 you don't want to merge the changes you've already merged before; you
 only want to merge everything new on your branch since the last time
 you merged. The trick is to figure out what's new.

 The first step is to run svn log on the trunk, and look for a log
 message about the last time you merged from the branch:

 $ cd calc/trunk
 $ svn log
 …
 
 r406 | user | 2004-02-08 11:17:26 -0600 (Sun, 08 Feb 2004) | 1 line

 Merged my-calc-branch changes r341:405 into the trunk.
 
 …

 Aha! Since all branch-changes that happened between revisions 341 and
 405 were previously merged to the trunk as revision 406, you now know
 that you want to merge only the branch changes after that—by comparing
 revisions 406 and HEAD.
 

 As you can see, they recommend that you indicate in that particular
 log message the revision brackets of what you've already merged, so
 you can find that information easily later. Once you have this
 information on record, you start doing all your future updates on the
 branch with specific revision ranges, and that seems to work
 reasonably well.

   
I understand this if doing the merge at hand with svn merge (that's what 
I did previously), but I am using svnmerge, which is supposed to avoid 
all this (I find the whole process extremely error-prone). More 
specifically, I am surprised by the svnmerge starting revisions,

cheers,

David


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Fernando Perez
On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote:

 I understand this if doing the merge at hand with svn merge (that's what
 I did previously), but I am using svnmerge, which is supposed to avoid
 all this (I find the whole process extremely error-prone). More
 specifically, I am surprised by the svnmerge starting revisions,

OK, I've always just used manual revision tracking, since I basically
don't trust svnmerge.  Seems I was right, since the manual process at
least works, even if it's clunky and requires lots of care.

Maybe you should be using a DVCS. Oh wait... ;-)

Cheers,


f


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Fernando Perez wrote:
 On Jan 7, 2008 10:54 PM, David Cournapeau [EMAIL PROTECTED] wrote:

   
 I understand this if doing the merge at hand with svn merge (that's what
 I did previously), but I am using svnmerge, which is supposed to avoid
 all this (I find the whole process extremely error-prone). More
 specifically, I am surprised by the svnmerge starting revisions,
 

 OK, I've always just used manual revision tracking, since I basically
 don't trust svnmerge.  Seems I was right, since the manual process at
 least works, even if it's clunky and requires lots of care.
   
Between a working clunky solution and a clunky non-working solution, the 
choice is not difficult :) I just wanted to check whether I did 
something wrong or if svnmerge was fragile (I was a bit surprised, 
because I could swear I managed to make it work at some point in another 
branch).
 Maybe you should be using a DVCS. Oh wait... ;-)
   
Don't worry, I have never used subversion for my own projects, I've 
never understood the complexity of the thing :)

cheers,

David


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
Hi David,

How did you initialize svnmerge ?

Matthieu

2008/1/8, David Cournapeau [EMAIL PROTECTED]:

 Fernando Perez wrote:
  On Jan 7, 2008 10:41 PM, David Cournapeau [EMAIL PROTECTED]
 wrote:
 
  Hi,
 
  for my work related on scons, I have a branch build_with_scons in
  the numpy trunk, which I have initialized exactly as documented on the
  numpy wiki (http://projects.scipy.org/scipy/numpy/wiki/MakingBranches).
  When I try to update my branch with the trunk, I got surprising merge
  requests; in particular, it tried to merge all trunk revisions up to
 2871,
  whereas I created my branch with a copy from the trunk at revision
 4676.
  Am I missing something ? Shouldn't it try to merge from the revision I
  started the branch (since this revision is a common ancestor) ?
 
 
  AFAIK, the merge command must ALWAYS be given with explicit revision
  brackets, since this is precisely the information SVN does not track
  at all.  Quoting:
 
  http://svnbook.red-bean.com/en/1.0/ch04s04.html
  
  But as discussed in the section called Best Practices for Merging,
  you don't want to merge the changes you've already merged before; you
  only want to merge everything new on your branch since the last time
  you merged. The trick is to figure out what's new.
 
  The first step is to run svn log on the trunk, and look for a log
  message about the last time you merged from the branch:
 
  $ cd calc/trunk
  $ svn log
  …
  
  r406 | user | 2004-02-08 11:17:26 -0600 (Sun, 08 Feb 2004) | 1 line
 
  Merged my-calc-branch changes r341:405 into the trunk.
  
  …
 
  Aha! Since all branch-changes that happened between revisions 341 and
  405 were previously merged to the trunk as revision 406, you now know
  that you want to merge only the branch changes after that—by comparing
  revisions 406 and HEAD.
  
 
  As you can see, they recommend that you indicate in that particular
  log message the revision brackets of what you've already merged, so
  you can find that information easily later. Once you have this
  information on record, you start doing all your future updates on the
  branch with specific revision ranges, and that seems to work
  reasonably well.
 
 
 I understand this if doing the merge at hand with svn merge (that's what
 I did previously), but I am using svnmerge, which is supposed to avoid
 all this (I find the whole process extremely error-prone). More
 specifically, I am surprised by the svnmerge starting revisions,

 cheers,

 David




-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote:
 Hi David,

 How did you initialize svnmerge ?
As said in the numpy wiki. More precisely:
- In a svn checkout of the trunk, do svn up to be up to date
- svn copy TRUNK MY_BRANCH
- use svnmerge init MY_BRANCH
- svn ci -F svnmerge-commit.txt
- svn switch MY_BRANCH
- svnmerge init TRUNK
- svn ci -F svnmerge-commit.txt

One thing which is strange is that in the numpy trunk, you can see the 
following as svnmerge-integrated:

Property svnmerge-integrated set to /branches/build_with_scons:1-4676 
/branches/cleanconfig_rtm:1-4610 /branches/distutils-revamp:1-2752 
/branches/distutils_scons_command:1-4619 /branches/multicore:1-3687 
/branches/numpy.scons:1-4484 /trunk:1-2871

Does that mean that you cannot use svnmerge with several branches at the 
same time ?

cheers,

David


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, David Cournapeau [EMAIL PROTECTED]:

 Matthieu Brucher wrote:
  Hi David,
 
  How did you initialize svnmerge ?
 As said in the numpy wiki. More precisely:
 - In a svn checkout of the trunk, do svn up to be up to date
 - svn copy TRUNK MY_BRANCH
 - use svnmerge init MY_BRANCH
 - svn ci -F svnmerge-commit.txt
 - svn switch MY_BRANCH
 - svnmerge init TRUNK
 - svn ci -F svnmerge-commit.txt

 One thing which is strange is that in the numpy trunk, you can see the
 following as svnmerge-integrated:

 Property svnmerge-integrated set to /branches/build_with_scons:1-4676
 /branches/cleanconfig_rtm:1-4610 /branches/distutils-revamp:1-2752
 /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687
 /branches/numpy.scons:1-4484 /trunk:1-2871

 Does that mean that you cannot use svnmerge with several branches at the
 same time ?


No, this is what I expected. Could you send us the content of
svnmerge-integrated for the build_with_scons branch before the merge ?

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher
2008/1/8, Matthieu Brucher [EMAIL PROTECTED]:



 2008/1/8, David Cournapeau [EMAIL PROTECTED]:
 
  Matthieu Brucher wrote:
   Hi David,
  
   How did you initialize svnmerge ?
  As said in the numpy wiki. More precisely:
  - In a svn checkout of the trunk, do svn up to be up to date
  - svn copy TRUNK MY_BRANCH
  - use svnmerge init MY_BRANCH
  - svn ci -F svnmerge-commit.txt
  - svn switch MY_BRANCH
  - svnmerge init TRUNK
  - svn ci -F svnmerge-commit.txt
 
  One thing which is strange is that in the numpy trunk, you can see the
  following as svnmerge-integrated:
 
  Property svnmerge-integrated set to /branches/build_with_scons:1-4676
  /branches/cleanconfig_rtm:1-4610 /branches/distutils-revamp:1-2752
  /branches/distutils_scons_command:1-4619 /branches/multicore:1-3687
  /branches/numpy.scons:1-4484 /trunk:1-2871
 
  Does that mean that you cannot use svnmerge with several branches at the
  same time ?


 No, this is what I expected. Could you send us the content of
 svnmerge-integrated for the build_with_scons branch before the merge ?

 Matthieu


Oops, save for the /trunk:1-2871 part. This should be deleted before a
commit to the trunk, I think.


-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread David Cournapeau
Matthieu Brucher wrote:


  2008/1/8, Matthieu Brucher [EMAIL PROTECTED]:



  2008/1/8, David Cournapeau [EMAIL PROTECTED]:

 Matthieu Brucher wrote:
  Hi David,
 
  How did you initialize svnmerge ?
 As said in the numpy wiki. More precisely:
 - In a svn checkout of the trunk, do svn up to be up to date
 - svn copy TRUNK MY_BRANCH
 - use svnmerge init MY_BRANCH
 - svn ci -F svnmerge-commit.txt
 - svn switch MY_BRANCH
 - svnmerge init TRUNK
 - svn ci -F svnmerge-commit.txt

 One thing which is strange is that in the numpy trunk, you can
 see the
 following as svnmerge-integrated:

 Property svnmerge-integrated set to
 /branches/build_with_scons:1-4676
 /branches/cleanconfig_rtm:1-4610 /branches/distutils-revamp:1-2752
 /branches/distutils_scons_command:1-4619
 /branches/multicore:1-3687
 /branches/numpy.scons:1-4484 /trunk:1-2871

 Does that mean that you cannot use svnmerge with several
 branches at the
 same time ?


 No, this is what I expected. Could you send us the content of
 svnmerge-integrated for the build_with_scons branch before the
 merge ?

 Matthieu


 Oops, save for the /trunk:1-2871 part. This should be deleted before 
 a commit to the trunk, I think.
Yes, that's what I (quite unclearly) meant: since revision numbers are 
per-repository in svn, I don't understand the point of tracking trunk 
revisions: I would think that tracking the last merged revision for each 
branch would be enough (for the kind of merge svn does, at least). If trunk 
revisions are tracked, then I would expect two branches using svnmerge to 
clash with each other,

cheers,

David


Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-07 Thread Matthieu Brucher

  Oups, safe for the /trunk:1-2871 part. This should be deleted before
  a commit to the trunk, I think.
 Yes, that's what I (quite unclearly) meant: since revision numbers are
 per- repository in svn, I don't understand the point of tracking trunk
 revisions: I would think that tracking the last merged version for each
 branch to be enough (for the kind of merge svn does, at least). If trunk
 version are tracked, then I would expect two branches using svnmerge to
 clash each other,


In fact, the trunk should be tracked from all the branches, although merging
the different branches back into the trunk can then be a problem (I did not
have much trouble with that, but I only tried it with a few differences).
I don't think only one branch wants to be up to date with the trunk ;). But
each time you create a branch, you must delete the svnmerge-integrated
property and set it correctly.
Or you can merge the trunk into each branch by hand, but I don't know how
hard that is.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher