Re: [Numpy-discussion] More on doc-ing new functions

2009-06-16 Thread Pauli Virtanen
Mon, 15 Jun 2009 09:52:05 -0700, David Goldsmith wrote:
[clip]
> Is there a protocol for making sure these things get done?  (Just don't
> want to reinvent the wheel.)

I don't think so.

The current way is that people forget to do it, and then someone fixes it 
afterwards :)

I'm not sure how much rubber hose we can use on developers. But please,

* When you add new functions to the Python API, please insert the
function into the correct autosummary:: directive in the corresponding
RST file under doc/source/reference/routines.*

* When you add new functions to the C API, please insert at least stub
function documentation (cfunction:: directives) in the corresponding
RST file doc/source/reference/c-api.*

Sphinx has some coverage-assessment features that we should probably try
out. There's also doc/summarize.py in Numpy's SVN that should report the
status, but it's currently broken.
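
For illustration, enabling the Sphinx coverage builder is mostly a conf.py
tweak; a minimal sketch (assuming the docs keep the usual layout under
doc/source) might look like:

# doc/source/conf.py -- sketch only; the real conf.py already defines extensions
extensions = [
    'sphinx.ext.autosummary',   # already used for the routines.* pages
    'sphinx.ext.coverage',      # enables the "coverage" builder
]
# then build with:
#   sphinx-build -b coverage doc/source doc/build/coverage
# and inspect doc/build/coverage/python.txt for undocumented objects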

-- 
Pauli Virtanen



[Numpy-discussion] state of G3F2PY?

2009-06-16 Thread william ratcliff
Hi!  I'm looking at trying to bind a rather large (>150K lines of code)
crystallography library to python and would like to know what the state of
F2py is.  Are allocatable arrays supported?  Derived types?  Modules,
Pointers, etc.?  Is there a list somewhere?  Has anyone else looked into
wrapping such a large code base?  If so, any pointers?


Thanks,
William


[Numpy-discussion] Array resize question

2009-06-16 Thread Cristi Constantin
Good day.
I have this array:

a = array([[u'0', u'0', u'0', u'0', u'0', u' '],
           [u'1', u'1', u'1', u'1', u'1', u' '],
           [u'2', u'2', u'2', u'2', u'2', u' '],
           [u'3', u'3', u'3', u'3', u'3', u' '],
           [u'4', u'4', u'4', u'4', u'4', u'']],
          dtype='<U1')

I want to resize it, but i don't want to alter the order of elements.

a.resize((5,10)) # Will result in
array([[u'0', u'0', u'0', u'0', u'0', u' ', u'1', u'1', u'1', u'1'],
       [u'1', u' ', u'2', u'2', u'2', u'2', u'2', u' ', u'3', u'3'],
       [u'3', u'3', u'3', u' ', u'4', u'4', u'4', u'4', u'4', u''],
       [u'', u'', u'', u'', u'', u'', u'', u'', u'', u''],
       [u'', u'', u'', u'', u'', u'', u'', u'', u'', u'']],
      dtype='<U1')

That means all my values are mutilated. What i want is the order to be
kept and only the last elements to become empty. Like this:
array([[u'0', u'0', u'0', u'0', u'0', u' ', u'', u'', u'', u''],
       [u'1', u'1', u'1', u'1', u'1', u' ', u'', u'', u'', u''],
       [u'2', u'2', u'2', u'2', u'2', u' ', u'', u'', u'', u''],
       [u'3', u'3', u'3', u'3', u'3', u' ', u'', u'', u'', u''],
       [u'4', u'4', u'4', u'4', u'4', u' ', u'', u'', u'', u'']],
      dtype='<U1')

I tried to play with resize like this:
a.resize((5,10), refcheck=True, order=False)
# SystemError: NULL result without error in PyObject_Call

vCont1.resize((5,10),True,False)
# TypeError: an integer is required

Can anyone tell me how this "resize" function works ?
I already checked the help file :
http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.resize.html
Thank you in advance.






Re: [Numpy-discussion] Array resize question

2009-06-16 Thread Neil Martinsen-Burrell
On 2009-06-16 09:42 , Cristi Constantin wrote:
> Good day.
> I have this array:
>
> a = array([[u'0', u'0', u'0', u'0', u'0', u' '],
> [u'1', u'1', u'1', u'1', u'1', u' '],
> [u'2', u'2', u'2', u'2', u'2', u' '],
> [u'3', u'3', u'3', u'3', u'3', u' '],
> [u'4', u'4', u'4', u'4', u'4', u'']],
> dtype='<U1')
> I want to resize it, but i don't want to alter the order of elements.
>
> a.resize((5,10)) # Will result in
> array([[u'0', u'0', u'0', u'0', u'0', u' ', u'1', u'1', u'1', u'1'],
> [u'1', u' ', u'2', u'2', u'2', u'2', u'2', u' ', u'3', u'3'],
> [u'3', u'3', u'3', u' ', u'4', u'4', u'4', u'4', u'4', u''],
> [u'', u'', u'', u'', u'', u'', u'', u'', u'', u''],
> [u'', u'', u'', u'', u'', u'', u'', u'', u'', u'']],
> dtype='<U1')
> That means all my values are mutilated. What i want is the order to be
> kept and only the last elements to become empty. Like this:
> array([[u'0', u'0', u'0', u'0', u'0', u' ', u'', u'', u'', u''],
> [u'1', u'1', u'1', u'1', u'1', u' ', u'', u'', u'', u''],
> [u'2', u'2', u'2', u'2', u'2', u' ', u'', u'', u'', u''],
> [u'3', u'3', u'3', u'3', u'3', u' ', u'', u'', u'', u''],
> [u'4', u'4', u'4', u'4', u'4', u' ', u'', u'', u'', u'']],
> dtype='<U1')
> I tried to play with resize like this:
> a.resize((5,10), refcheck=True, order=False)
> # SystemError: NULL result without error in PyObject_Call
>
> vCont1.resize((5,10),True,False)
> # TypeError: an integer is required
>
> Can anyone tell me how this "resize" function works ?
> I already checked the help file :
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.resize.html
> Thank you in advance.

Resize keeps the elements in their flattened (C-order) sequence and
refills the new shape from that sequence, padding the end with empty
elements, so it does not preserve how the rows line up.  What you want
to do instead is to stack two arrays together to make a new array:

In [4]: x = np.random.random((5,5))

In [6]: x.shape
Out[6]: (5, 5)

In [7]: y = np.zeros((5,5))

In [8]: y.shape
Out[8]: (5, 5)

In [10]: z = np.hstack((x,y))

In [11]: z.shape
Out[11]: (5, 10)

In [12]: z
Out[12]:
array([[ 0.72215359,  0.32388934,  0.24858866,  0.40907379,  0.26072476,
  0.,  0.,  0.,  0.,  0.],
[ 0.59085241,  0.88075534,  0.2288914 ,  0.49258006,  0.28175061,
  0.,  0.,  0.,  0.,  0.],
[ 0.50355137,  0.30180634,  0.09177751,  0.08608373,  0.04114688,
  0.,  0.,  0.,  0.,  0.],
[ 0.06053053,  0.80426792,  0.21038812,  0.28098004,  0.88956146,
  0.,  0.,  0.,  0.,  0.],
[ 0.17359959,  0.4629072 ,  0.30100704,  0.45434713,  0.86597028,
  0.,  0.,  0.,  0.,  0.]])

This should work the same with your unicode arrays in place of the 
floating point arrays here.  (To make an array with all empty strings, 
you can do y = np.empty((5,5), dtype='U'); y[:] = '')
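
For the unicode case specifically, a minimal sketch (assuming the original
array has a short unicode dtype such as '<U1') could be:

import numpy as np

a = np.array([[u'0', u'0', u'0', u'0', u'0', u' '],
              [u'1', u'1', u'1', u'1', u'1', u' '],
              [u'2', u'2', u'2', u'2', u'2', u' '],
              [u'3', u'3', u'3', u'3', u'3', u' '],
              [u'4', u'4', u'4', u'4', u'4', u'']])   # dtype is '<U1'

pad = np.zeros((a.shape[0], 10 - a.shape[1]), dtype=a.dtype)  # filled with u''
b = np.hstack((a, pad))   # shape (5, 10), original rows kept intact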

-Neil


Re: [Numpy-discussion] state of G3F2PY?

2009-06-16 Thread Kurt Smith
On Tue, Jun 16, 2009 at 8:21 AM, william
ratcliff wrote:
> Hi!  I'm looking at trying to bind a rather large (>150K lines of code)
> crystallography library to python and would like to know what the state of
> F2py is.  Are allocatable arrays supported?  Derived types?  Modules,
> Pointers, etc.?  Is there a list somewhere?  Has anyone else looked into
> wrapping such a large code base?  If so, any pointers?

I've never used the current distributed version of f2py (numpy.f2py I
believe is the default one) to wrap a library as large as this, and I
don't believe that f2py can handle assumed-shape arrays as arguments
to routines -- I haven't checked its support for allocatable arrays,
but I don't think so.  I'm certainly open to correction, though.  In
my experience, f2py excels at wrapping Fortran 77-style programs with
older array passing conventions, where the shape information is passed
in as arguments to a subroutine/function.

It won't solve your problem currently, but you might be interested in
my GSoC project which aims to do just what you want -- binding a
fortran library to Cython/Python, with support for
allocatable/assumed-shape/assumed-size array arguments, pointers,
derived types, modules, etc.  It is being heavily developed as we
speak, but will (hopefully) be usable by sometime this fall.

Your library seems pretty large, which would be an excellent test for
the GSoC project.  If you are willing, and when the project is ready
to tackle such a library, we'd be glad to work with you to get your
library wrapped.  As mentioned, we won't be at this point until the
fall, though.

Pearu has been kind enough to let us use the 'fparser' module from the
G3F2PY project as the fortran parser, so the work of f2py continues on
in our GSoC work.

There are a few software requirements -- we've tried to keep
dependencies to a minimum:

1) a fortran compiler that supports the intrinsic ISO_C_BINDING module
-- gfortran 4.3.3 supports it, as do pretty much all current versions
of other compilers.
2) The GSoC project is distributed with Cython.
3) Python version >= 2.5.
4) Hopefully you're not on windows -- we certainly plan on supporting
Windows at some point in the future, but we don't have access to it
for testing right now.

Hope this helps,

Kurt Smith


Re: [Numpy-discussion] state of G3F2PY?

2009-06-16 Thread william ratcliff
I would be interested in testing your GSoC project and will do what I can in
the meantime.  I do develop on Windows, but the library lives on Linux,
Mac OS, and Windows, so we can test on any of them -- it also builds with
ifort, gfortran, etc., so it seems rather robust.

Cheers,
William

On Tue, Jun 16, 2009 at 12:13 PM, Kurt Smith  wrote:

> On Tue, Jun 16, 2009 at 8:21 AM, william
> ratcliff wrote:
> > Hi!  I'm looking at trying to bind a rather large (>150K lines of code)
> > crystallography library to python and would like to know what the state
> of
> > F2py is.  Are allocatable arrays supported?  Derived types?  Modules,
> > Pointers, etc.?  Is there a list somewhere?  Has anyone else looked into
> > wrapping such a large code base?  If so, any pointers?
>
> I've never used the current distributed version of f2py (numpy.f2py I
> believe is the default one) to wrap a library as large as this, and I
> don't believe that f2py can handle assumed-shape arrays as arguments
> to routines -- I haven't checked its support for allocatable, though,
> but I don't think so.  I'm certainly open to correction, though.  In
> my experience, f2py excels at wrapping Fortran 77-style programs with
> older array passing conventions, where the shape information is passed
> in as arguments to a subroutine/function.
>
> It won't solve your problem currently, but you might be interested in
> my GSoC project which aims to do just what you want -- binding a
> fortran library to Cython/Python, with support for
> allocatable/assumed-shape/assumed-size array arguments, pointers,
> derived types, modules, etc.  It is being heavily developed as we
> speak, but will (hopefully) be usable by sometime this fall.
>
> Your library seems pretty large, which would be an excellent test for
> the GSoC project.  If you are willing, and when the project is ready
> to tackle such a library, we'd be glad to work with you to get your
> library wrapped.  As mentioned, we won't be at this point until the
> fall, though.
>
> Pearu has been kind enough to let us use the 'fparser' module from the
> G3F2PY project as the fortran parser, so the work of f2py continues on
> in our GSoC work.
>
> There are a few software requirements -- we've tried to keep
> dependencies to a minimum:
>
> 1) a fortran compiler that supports the intrinsic ISO_C_BINDING module
> -- gfortran 4.3.3 supports it, as do pretty much all current versions
> of other compilers.
> 2) The GSoC project is distributed with Cython.
> 3) Python version >= 2.5.
> 4) Hopefully you're not on windows -- we certainly plan on supporting
> Windows at some point in the future, but we don't have access to it
> for testing right now.
>
> Hope this helps,
>
> Kurt Smith


[Numpy-discussion] ticket #1096

2009-06-16 Thread Brian Lewis
http://projects.scipy.org/numpy/ticket/1096

Is the fix for this to check whether (at line 95 of
trunk/numpy/core/src/umathmodule.c.src)

   const @type@ tmp = x - y;

is -inf or not, and if it is, just return -inf?  If so, is there any
chance someone can commit this quick fix?  I want to use it :)


Re: [Numpy-discussion] ticket #1096

2009-06-16 Thread Pauli Virtanen
On 2009-06-16, Brian Lewis  wrote:
> http://projects.scipy.org/numpy/ticket/1096
>
> Is the fix for this to check whether (at line 95 of
> trunk/numpy/core/src/umathmodule.c.src)
>
>    const @type@ tmp = x - y;
>
> is -inf or not, and if it is, just return -inf?

That's not the correct fix.

Anyway, fixed in r7059.

-- 
Pauli Virtanen



Re: [Numpy-discussion] Interleaved Arrays and

2009-06-16 Thread Robert
Ian Mallett wrote:

> 
> n = #blah
> testlist = []
> for x in xrange(n):
>     for y in xrange(n):
>         testlist.append([x,y])
>         testlist.append([x+1,y])
> 
> If "testlist" is an array (i.e., I could go: "array(testlist)"), it 
> works nicely.  However, my Python method is certainly improveable with 
> numpy.  I suspect the best way is interleaving the arrays [x,y->yn] and 
> [x+1,y->yn] n times, but I couldn't figure out how to do that...
> 


e.g. with column_stack

 >>> n = 10
 >>> xx = np.ones(n)
 >>> yy = np.arange(n)
 >>> aa = np.column_stack((xx,yy))
 >>> bb = np.column_stack((xx+1,yy))
 >>> aa
array([[ 1.,  0.],
[ 1.,  1.],
[ 1.,  2.],
[ 1.,  3.],
[ 1.,  4.],
[ 1.,  5.],
[ 1.,  6.],
[ 1.,  7.],
[ 1.,  8.],
[ 1.,  9.]])
 >>> bb
array([[ 2.,  0.],
[ 2.,  1.],
[ 2.,  2.],
[ 2.,  3.],
[ 2.,  4.],
[ 2.,  5.],
[ 2.,  6.],
[ 2.,  7.],
[ 2.,  8.],
[ 2.,  9.]])
 >>> np.column_stack((aa,bb))
array([[ 1.,  0.,  2.,  0.],
[ 1.,  1.,  2.,  1.],
[ 1.,  2.,  2.,  2.],
[ 1.,  3.,  2.,  3.],
[ 1.,  4.,  2.,  4.],
[ 1.,  5.,  2.,  5.],
[ 1.,  6.,  2.,  6.],
[ 1.,  7.,  2.,  7.],
[ 1.,  8.,  2.,  8.],
[ 1.,  9.,  2.,  9.]])
 >>> cc = _
 >>> cc.reshape((n*2,2))
array([[ 1.,  0.],
[ 2.,  0.],
[ 1.,  1.],
[ 2.,  1.],
[ 1.,  2.],
[ 2.,  2.],
[ 1.,  3.],
[ 2.,  3.],
[ 1.,  4.],
[ 2.,  4.],
[ 1.,  5.],
[ 2.,  5.],
[ 1.,  6.],
[ 2.,  6.],
[ 1.,  7.],
[ 2.,  7.],
[ 1.,  8.],
[ 2.,  8.],
[ 1.,  9.],
[ 2.,  9.]])
 >>>


However, I too feel that an intuitive convenience function like
'interleave' is missing from numpy (in shape_base, perhaps).


Robert





Re: [Numpy-discussion] ticket #1096

2009-06-16 Thread Brian Lewis
On Tue, Jun 16, 2009 at 11:47 AM, Pauli Virtanen  wrote:

> Anyway, fixed in r7059.
>

Thanks!


Re: [Numpy-discussion] Interleaved Arrays and

2009-06-16 Thread Neil Martinsen-Burrell
On 06/16/2009 02:18 PM, Robert wrote:
>   >>>  n = 10
>   >>>  xx = np.ones(n)
>   >>>  yy = np.arange(n)
>   >>>  aa = np.column_stack((xx,yy))
>   >>>  bb = np.column_stack((xx+1,yy))
>   >>>  aa
> array([[ 1.,  0.],
>  [ 1.,  1.],
>  [ 1.,  2.],
>  [ 1.,  3.],
>  [ 1.,  4.],
>  [ 1.,  5.],
>  [ 1.,  6.],
>  [ 1.,  7.],
>  [ 1.,  8.],
>  [ 1.,  9.]])
>   >>>  bb
> array([[ 2.,  0.],
>  [ 2.,  1.],
>  [ 2.,  2.],
>  [ 2.,  3.],
>  [ 2.,  4.],
>  [ 2.,  5.],
>  [ 2.,  6.],
>  [ 2.,  7.],
>  [ 2.,  8.],
>  [ 2.,  9.]])
>   >>>  np.column_stack((aa,bb))
> array([[ 1.,  0.,  2.,  0.],
>  [ 1.,  1.,  2.,  1.],
>  [ 1.,  2.,  2.,  2.],
>  [ 1.,  3.,  2.,  3.],
>  [ 1.,  4.,  2.,  4.],
>  [ 1.,  5.,  2.,  5.],
>  [ 1.,  6.,  2.,  6.],
>  [ 1.,  7.,  2.,  7.],
>  [ 1.,  8.,  2.,  8.],
>  [ 1.,  9.,  2.,  9.]])
>   >>>  cc = _
>   >>>  cc.reshape((n*2,2))
> array([[ 1.,  0.],
>  [ 2.,  0.],
>  [ 1.,  1.],
>  [ 2.,  1.],
>  [ 1.,  2.],
>  [ 2.,  2.],
>  [ 1.,  3.],
>  [ 2.,  3.],
>  [ 1.,  4.],
>  [ 2.,  4.],
>  [ 1.,  5.],
>  [ 2.,  5.],
>  [ 1.,  6.],
>  [ 2.,  6.],
>  [ 1.,  7.],
>  [ 2.,  7.],
>  [ 1.,  8.],
>  [ 2.,  8.],
>  [ 1.,  9.],
>  [ 2.,  9.]])
>   >>>
>
>
> However I feel too, there is a intuitive abbrev function like
> 'interleave' or so missing in numpy shape_base or so.

Using fancy indexing, you can set strided portions of an array equal to 
another array.  So::

In [2]: aa = np.empty((10,2))

In [3]: aa[:, 0] = 1

In [4]: aa[:,1] = np.arange(10)

In [5]: bb = np.empty((10,2))

In [6]: bb[:,0] = 2

In [7]: bb[:,1] = aa[:,1] # this works

In [8]: cc = np.empty((20,2))

In [9]: cc[::2,:] = aa

In [10]: cc[1::2,:] = bb

In [11]: cc
Out[11]:
array([[ 1.,  0.],
[ 2.,  0.],
[ 1.,  1.],
[ 2.,  1.],
[ 1.,  2.],
[ 2.,  2.],
[ 1.,  3.],
[ 2.,  3.],
[ 1.,  4.],
[ 2.,  4.],
[ 1.,  5.],
[ 2.,  5.],
[ 1.,  6.],
[ 2.,  6.],
[ 1.,  7.],
[ 2.,  7.],
[ 1.,  8.],
[ 2.,  8.],
[ 1.,  9.],
[ 2.,  9.]])

Using this syntax, interleave could be a one-liner.
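
For example, a minimal sketch of such a helper for two equal-shape arrays
(not literally one line, and the name interleave2 is just made up here)
could be:

def interleave2(a, b):
    # rows alternate a[0], b[0], a[1], b[1], ... along the first axis
    out = np.empty((2*a.shape[0],) + a.shape[1:], dtype=a.dtype)
    out[::2] = a
    out[1::2] = b
    return out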

-Neil


Re: [Numpy-discussion] Interleaved Arrays and

2009-06-16 Thread Robert
Neil Martinsen-Burrell wrote:
> On 06/16/2009 02:18 PM, Robert wrote:
>>   >>>  n = 10
>>   >>>  xx = np.ones(n)
>>   >>>  yy = np.arange(n)
>>   >>>  aa = np.column_stack((xx,yy))
>>   >>>  bb = np.column_stack((xx+1,yy))
>>   >>>  aa
>> array([[ 1.,  0.],
>>  [ 1.,  1.],
>>  [ 1.,  2.],
>>  [ 1.,  3.],
>>  [ 1.,  4.],
>>  [ 1.,  5.],
>>  [ 1.,  6.],
>>  [ 1.,  7.],
>>  [ 1.,  8.],
>>  [ 1.,  9.]])
>>   >>>  bb
>> array([[ 2.,  0.],
>>  [ 2.,  1.],
>>  [ 2.,  2.],
>>  [ 2.,  3.],
>>  [ 2.,  4.],
>>  [ 2.,  5.],
>>  [ 2.,  6.],
>>  [ 2.,  7.],
>>  [ 2.,  8.],
>>  [ 2.,  9.]])
>>   >>>  np.column_stack((aa,bb))
>> array([[ 1.,  0.,  2.,  0.],
>>  [ 1.,  1.,  2.,  1.],
>>  [ 1.,  2.,  2.,  2.],
>>  [ 1.,  3.,  2.,  3.],
>>  [ 1.,  4.,  2.,  4.],
>>  [ 1.,  5.,  2.,  5.],
>>  [ 1.,  6.,  2.,  6.],
>>  [ 1.,  7.,  2.,  7.],
>>  [ 1.,  8.,  2.,  8.],
>>  [ 1.,  9.,  2.,  9.]])
>>   >>>  cc = _
>>   >>>  cc.reshape((n*2,2))
>> array([[ 1.,  0.],
>>  [ 2.,  0.],
>>  [ 1.,  1.],
>>  [ 2.,  1.],
>>  [ 1.,  2.],
>>  [ 2.,  2.],
>>  [ 1.,  3.],
>>  [ 2.,  3.],
>>  [ 1.,  4.],
>>  [ 2.,  4.],
>>  [ 1.,  5.],
>>  [ 2.,  5.],
>>  [ 1.,  6.],
>>  [ 2.,  6.],
>>  [ 1.,  7.],
>>  [ 2.,  7.],
>>  [ 1.,  8.],
>>  [ 2.,  8.],
>>  [ 1.,  9.],
>>  [ 2.,  9.]])
>>   >>>
>>
>>
>> However I feel too, there is a intuitive abbrev function like
>> 'interleave' or so missing in numpy shape_base or so.
> 
> Using fancy indexing, you can set strided portions of an array equal to 
> another array.  So::
> 
> In [2]: aa = np.empty((10,2))
> 
> In [3]: aa[:, 0] = 1
> 
> In [4]: aa[:,1] = np.arange(10)
> 
> In [5]: bb = np.empty((10,2))
> 
> In [6]: bb[:,0] = 2
> 
> In [7]: bb[:,1] = aa[:,1] # this works
> 
> In [8]: cc = np.empty((20,2))
> 
> In [9]: cc[::2,:] = aa
> 
> In [10]: cc[1::2,:] = bb
> 
> In [11]: cc
> Out[11]:
> array([[ 1.,  0.],
> [ 2.,  0.],
> [ 1.,  1.],
> [ 2.,  1.],
> [ 1.,  2.],
> [ 2.,  2.],
> [ 1.,  3.],
> [ 2.,  3.],
> [ 1.,  4.],
> [ 2.,  4.],
> [ 1.,  5.],
> [ 2.,  5.],
> [ 1.,  6.],
> [ 2.,  6.],
> [ 1.,  7.],
> [ 2.,  7.],
> [ 1.,  8.],
> [ 2.,  8.],
> [ 1.,  9.],
> [ 2.,  9.]])
> 
> Using this syntax, interleave could be a one-liner.
> 
> -Neil

that method of 'filling an empty with a pattern' was mentioned in 
the other (general) interleaving question. It requires however a 
lot of particular numbers and :'s in the code, and requires even 
more statements which can hardly be written in functional style - 
in one line?. The other approach is more jount, free of fancy 
indexing assignments.

The general interleaving should work efficiently in one line, like this:

np.column_stack/concatenate((r,g,b,), axis=...).reshape(..)
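
Concretely, for three 1-D channels that could read (r, g and b are just
made-up example arrays):

r = np.arange(5.)
g = r + 10
b = r + 20
rgb = np.column_stack((r, g, b)).reshape(-1)
# rgb is r[0], g[0], b[0], r[1], g[1], b[1], ... -- the channels interleaved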


But as all this is not intuitive, perhaps something like this should be
in numpy:

def interleave( tup_arrays, axis = None )


Robert



Re: [Numpy-discussion] Interleaved Arrays and

2009-06-16 Thread Neil Martinsen-Burrell
On 2009-06-16 16:05 , Robert wrote:
> Neil Martinsen-Burrell wrote:
>> On 06/16/2009 02:18 PM, Robert wrote:
>>>>>>   n = 10
>>>>>>   xx = np.ones(n)
>>>>>>   yy = np.arange(n)
>>>>>>   aa = np.column_stack((xx,yy))
>>>>>>   bb = np.column_stack((xx+1,yy))
>>>>>>   aa
>>> array([[ 1.,  0.],
>>>   [ 1.,  1.],
>>>   [ 1.,  2.],
>>>   [ 1.,  3.],
>>>   [ 1.,  4.],
>>>   [ 1.,  5.],
>>>   [ 1.,  6.],
>>>   [ 1.,  7.],
>>>   [ 1.,  8.],
>>>   [ 1.,  9.]])
>>>>>>   bb
>>> array([[ 2.,  0.],
>>>   [ 2.,  1.],
>>>   [ 2.,  2.],
>>>   [ 2.,  3.],
>>>   [ 2.,  4.],
>>>   [ 2.,  5.],
>>>   [ 2.,  6.],
>>>   [ 2.,  7.],
>>>   [ 2.,  8.],
>>>   [ 2.,  9.]])
>>>>>>   np.column_stack((aa,bb))
>>> array([[ 1.,  0.,  2.,  0.],
>>>   [ 1.,  1.,  2.,  1.],
>>>   [ 1.,  2.,  2.,  2.],
>>>   [ 1.,  3.,  2.,  3.],
>>>   [ 1.,  4.,  2.,  4.],
>>>   [ 1.,  5.,  2.,  5.],
>>>   [ 1.,  6.,  2.,  6.],
>>>   [ 1.,  7.,  2.,  7.],
>>>   [ 1.,  8.,  2.,  8.],
>>>   [ 1.,  9.,  2.,  9.]])
>>>>>>   cc = _
>>>>>>   cc.reshape((n*2,2))
>>> array([[ 1.,  0.],
>>>   [ 2.,  0.],
>>>   [ 1.,  1.],
>>>   [ 2.,  1.],
>>>   [ 1.,  2.],
>>>   [ 2.,  2.],
>>>   [ 1.,  3.],
>>>   [ 2.,  3.],
>>>   [ 1.,  4.],
>>>   [ 2.,  4.],
>>>   [ 1.,  5.],
>>>   [ 2.,  5.],
>>>   [ 1.,  6.],
>>>   [ 2.,  6.],
>>>   [ 1.,  7.],
>>>   [ 2.,  7.],
>>>   [ 1.,  8.],
>>>   [ 2.,  8.],
>>>   [ 1.,  9.],
>>>   [ 2.,  9.]])
>>>>>>
>>>
>>>
>>> However I feel too, there is a intuitive abbrev function like
>>> 'interleave' or so missing in numpy shape_base or so.
>>
>> Using fancy indexing, you can set strided portions of an array equal to
>> another array.  So::
>>
>> In [2]: aa = np.empty((10,2))
>>
>> In [3]: aa[:, 0] = 1
>>
>> In [4]: aa[:,1] = np.arange(10)
>>
>> In [5]: bb = np.empty((10,2))
>>
>> In [6]: bb[:,0] = 2
>>
>> In [7]: bb[:,1] = aa[:,1] # this works
>>
>> In [8]: cc = np.empty((20,2))
>>
>> In [9]: cc[::2,:] = aa
>>
>> In [10]: cc[1::2,:] = bb
>>
>> In [11]: cc
>> Out[11]:
>> array([[ 1.,  0.],
>>  [ 2.,  0.],
>>  [ 1.,  1.],
>>  [ 2.,  1.],
>>  [ 1.,  2.],
>>  [ 2.,  2.],
>>  [ 1.,  3.],
>>  [ 2.,  3.],
>>  [ 1.,  4.],
>>  [ 2.,  4.],
>>  [ 1.,  5.],
>>  [ 2.,  5.],
>>  [ 1.,  6.],
>>  [ 2.,  6.],
>>  [ 1.,  7.],
>>  [ 2.,  7.],
>>  [ 1.,  8.],
>>  [ 2.,  8.],
>>  [ 1.,  9.],
>>  [ 2.,  9.]])
>>
>> Using this syntax, interleave could be a one-liner.
>>
>> -Neil
>
> that method of 'filling an empty with a pattern' was mentioned in
> the other (general) interleaving question. It requires however a
> lot of particular numbers and :'s in the code, and requires even
> more statements which can hardly be written in functional style -
> in one line?. The other approach is more jount, free of fancy
> indexing assignments.

jount?  I think that assigning to a strided index is very clear, but 
that is a difference of opinion.  All of the calls to np.empty are the 
equivalent of the column_stack's in your example.  I think that 
operations on segments of arrays are fundamental to an array-processing 
language such as NumPy.  Using ";" you can put as many of those 
statements as you would like on one line. :)

> The general interleaving should work efficiently in one like this:
>
> np.column_stack/concatenate((r,g,b,), axis=...).reshape(..)
>
> But as all this is not intuitive, something like this should be in
> numpy perhaps? :
>
> def interleave( tup_arrays, axis = None )

Here is a minimally tested implementation.  If anyone really wants this 
for numpy, I'll gladly add comments and tests.  I couldn't figure out 
how to automatically find the greatest dtype, so I added an argument to 
specify it; otherwise it uses the type of the first array.

def interleave(arrays, axis=0, dtype=None):
    assert len(arrays) > 0
    first = arrays[0]
    assert all([arr.shape == first.shape for arr in arrays])
    new_shape = list(first.shape)
    new_shape[axis] *= len(arrays)
    if dtype is None:
        new_dtype = first.dtype
    else:
        new_dtype = dtype
    interleaved = np.empty(new_shape, new_dtype)
    # index that picks every len(arrays)-th slot along `axis`
    axis_slice = [slice(None, None, None)]*axis + \
                 [slice(0, None, len(arrays))] + [Ellipsis]
    for i, arr in enumerate(arrays):
        axis_slice[axis] = slice(i, None, len(arrays))
        interleaved[tuple(axis_slice)] = arr
    return interleaved
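
For example, usage might look like this (a quick sketch, not a tested part
of the code above):

r = np.zeros(5)
g = np.ones(5)
b = 2*np.ones(5)
c = interleave((r, g, b))            # shape (15,): 0., 1., 2., 0., 1., 2., ...
parts = [c[i::3] for i in range(3)]  # the same strided slicing splits it back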

Re: [Numpy-discussion] Interleaved Arrays and

2009-06-16 Thread Anne Archibald
I'm not sure it's worth having a function to replace a one-liner
(column_stack followed by reshape). But if you're going to implement
this with slice assignment, you should take advantage of the
flexibility this method allows and offer the possibility of
interleaving "raggedly", that is, where the size of the arrays drops
at some point, so that you could interleave arrays of size 4, 4, and 3
to get one array of size 11. This allows split and join operations,
for example for multiprocessing. On the other hand you should also
include a documentation warning that this can be slow when
interleaving large numbers of small arrays.
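
For example, a rough sketch of that ragged variant (assuming the inputs are
ordered so their lengths are non-increasing and differ by at most one,
e.g. 4, 4 and 3) could be:

import numpy as np

def ragged_interleave(arrays):
    # round-robin interleave; later arrays may be one element shorter
    n = len(arrays)
    total = sum(len(a) for a in arrays)
    out = np.empty((total,) + arrays[0].shape[1:], dtype=arrays[0].dtype)
    for i, a in enumerate(arrays):
        out[i::n][:len(a)] = a   # every n-th slot, starting at offset i
    return out

# the split/join inverse is the same strided selection:
#   parts = [out[i::n] for i in range(n)]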

Anne

2009/6/16 Neil Martinsen-Burrell :
> On 2009-06-16 16:05 , Robert wrote:
>> Neil Martinsen-Burrell wrote:
>>> On 06/16/2009 02:18 PM, Robert wrote:
    >>>   n = 10
    >>>   xx = np.ones(n)
    >>>   yy = np.arange(n)
    >>>   aa = np.column_stack((xx,yy))
    >>>   bb = np.column_stack((xx+1,yy))
    >>>   aa
 array([[ 1.,  0.],
           [ 1.,  1.],
           [ 1.,  2.],
           [ 1.,  3.],
           [ 1.,  4.],
           [ 1.,  5.],
           [ 1.,  6.],
           [ 1.,  7.],
           [ 1.,  8.],
           [ 1.,  9.]])
    >>>   bb
 array([[ 2.,  0.],
           [ 2.,  1.],
           [ 2.,  2.],
           [ 2.,  3.],
           [ 2.,  4.],
           [ 2.,  5.],
           [ 2.,  6.],
           [ 2.,  7.],
           [ 2.,  8.],
           [ 2.,  9.]])
    >>>   np.column_stack((aa,bb))
 array([[ 1.,  0.,  2.,  0.],
           [ 1.,  1.,  2.,  1.],
           [ 1.,  2.,  2.,  2.],
           [ 1.,  3.,  2.,  3.],
           [ 1.,  4.,  2.,  4.],
           [ 1.,  5.,  2.,  5.],
           [ 1.,  6.,  2.,  6.],
           [ 1.,  7.,  2.,  7.],
           [ 1.,  8.,  2.,  8.],
           [ 1.,  9.,  2.,  9.]])
    >>>   cc = _
    >>>   cc.reshape((n*2,2))
 array([[ 1.,  0.],
           [ 2.,  0.],
           [ 1.,  1.],
           [ 2.,  1.],
           [ 1.,  2.],
           [ 2.,  2.],
           [ 1.,  3.],
           [ 2.,  3.],
           [ 1.,  4.],
           [ 2.,  4.],
           [ 1.,  5.],
           [ 2.,  5.],
           [ 1.,  6.],
           [ 2.,  6.],
           [ 1.,  7.],
           [ 2.,  7.],
           [ 1.,  8.],
           [ 2.,  8.],
           [ 1.,  9.],
           [ 2.,  9.]])
    >>>


 However I feel too, there is a intuitive abbrev function like
 'interleave' or so missing in numpy shape_base or so.
>>>
>>> Using fancy indexing, you can set strided portions of an array equal to
>>> another array.  So::
>>>
>>> In [2]: aa = np.empty((10,2))
>>>
>>> In [3]: aa[:, 0] = 1
>>>
>>> In [4]: aa[:,1] = np.arange(10)
>>>
>>> In [5]: bb = np.empty((10,2))
>>>
>>> In [6]: bb[:,0] = 2
>>>
>>> In [7]: bb[:,1] = aa[:,1] # this works
>>>
>>> In [8]: cc = np.empty((20,2))
>>>
>>> In [9]: cc[::2,:] = aa
>>>
>>> In [10]: cc[1::2,:] = bb
>>>
>>> In [11]: cc
>>> Out[11]:
>>> array([[ 1.,  0.],
>>>          [ 2.,  0.],
>>>          [ 1.,  1.],
>>>          [ 2.,  1.],
>>>          [ 1.,  2.],
>>>          [ 2.,  2.],
>>>          [ 1.,  3.],
>>>          [ 2.,  3.],
>>>          [ 1.,  4.],
>>>          [ 2.,  4.],
>>>          [ 1.,  5.],
>>>          [ 2.,  5.],
>>>          [ 1.,  6.],
>>>          [ 2.,  6.],
>>>          [ 1.,  7.],
>>>          [ 2.,  7.],
>>>          [ 1.,  8.],
>>>          [ 2.,  8.],
>>>          [ 1.,  9.],
>>>          [ 2.,  9.]])
>>>
>>> Using this syntax, interleave could be a one-liner.
>>>
>>> -Neil
>>
>> that method of 'filling an empty with a pattern' was mentioned in
>> the other (general) interleaving question. It requires however a
>> lot of particular numbers and :'s in the code, and requires even
>> more statements which can hardly be written in functional style -
>> in one line?. The other approach is more jount, free of fancy
>> indexing assignments.
>
> jount?  I think that assigning to a strided index is very clear, but
> that is a difference of opinion.  All of the calls to np.empty are the
> equivalent of the column_stack's in your example.  I think that
> operations on segments of arrays are fundamental to an array-processing
> language such as NumPy.  Using ";" you can put as many of those
> statements as you would like one line. :)
>
>> The general interleaving should work efficiently in one like this:
>>
>> np.column_stack/concatenate((r,g,b,), axis=...).reshape(..)
>>
>> But as all this is not intuitive, something like this should be in
>> numpy perhaps? :
>>
>> def interleave( tup_arrays, axis = None )
>
> Here is a minimally tested implementation.  If anyone really wants this
> for numpy, I'll gladly add comments and tests.  I couldn't figure out
> how to automatically find the greatest dtype, so I added an argument to
> specify, otherwise it uses the type of the first array.

Re: [Numpy-discussion] Scipy 0.6.0 to 0.7.0, sparse matrix change

2009-06-16 Thread Nathan Bell
On Mon, Jun 15, 2009 at 6:47 AM, Fadhley Salim
 wrote:
>
> I'm trying to track down a numerical discrepancy in our project. We
> noticed that a certain set of results are different having upgraded from
> scipy 0.6.0 to 0.7.0.
>
> The following item from the Scipy change-log is our current number-one
> suspect. Could anybody who knows suggest what was actually involved in
> the change which I have highlighted with stars below?
>
> [...]
>
> The handling of diagonals in the ``spdiags`` function has been changed.
> It now agrees with the MATLAB(TM) function of the same name.
>
> *** Numerous efficiency improvements to format conversions and sparse
> matrix arithmetic have been made.  Finally, this release contains
> numerous bugfixes. ***
>

Can you elaborate on why you think sparse matrices may be the culprit?
 None of the changes between 0.6 and 0.7 should produce different
numerical results (beyond standard floating point margins).
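
For what it's worth, one quick way to narrow it down is to run the same
spdiags call under both versions and diff the dense result (a minimal
sketch; the data, offsets and shape below are made up):

import numpy as np
from scipy.sparse import spdiags

data = np.array([[1.,  2.,  3.,  4.],
                 [5.,  6.,  7.,  8.],
                 [9., 10., 11., 12.]])
offsets = np.array([0, -1, 2])   # main, first sub- and second super-diagonal
A = spdiags(data, offsets, 4, 4)
print(A.toarray())               # compare this output between 0.6.0 and 0.7.0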

--
Nathan Bell wnb...@gmail.com
http://www.wnbell.com/