Re: [Numpy-discussion] Keyword argument support for vectorize.

2012-04-09 Thread Michael McNeil Forbes
On 8 Apr 2012, at 12:09 PM, Ralf Gommers wrote:
 That looks like a useful enhancement.  Integrating it into the existing
 vectorize class should be the way to go.

Okay.  I will push forward.  I would also like to add support for  
freezing (or excluding) certain arguments from the vectorization.   
Any ideas for a good argument name?  (I am just using exclude=['p']  
for now).

The use case I have is vectorizing polynomial evaluation `polyval(p,  
x)`.  The coefficient array `p` should not be vectorized over, only  
the variable `x`, so something like:

@vectorize(exclude=set(['p']))
def mypolyval(p, x):
    return np.polyval(p, x)

would work like np.polyval currently behaves:

  >>> mypolyval([1.0,2.0],[0.0,3.0])
  array([ 2.,  5.])

(Of course, numpy already has polyval: I am actually trying to wrap  
similar functions that use Theano for automatic differentiation, but  
the idea is the same).

It seems like functools.partial is the appropriate tool to use here.  This
will require overcoming the issues with how vectorize deduces the number of
parameters, but if I integrate this with the vectorize class, then this
should be easy to patch as well:

http://mail.scipy.org/pipermail/numpy-discussion/2010-September/052642.html
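
As a rough sketch of the idea (the decorator name, and the restriction that
the excluded argument be passed by keyword, are simplifications for
illustration only -- not the proposed API; it also assumes a vectorize that
can cope with a functools.partial, which is exactly the argument-deduction
issue referenced above):

    import functools
    import numpy as np

    def vectorize_excluding(exclude=()):
        # Hypothetical sketch: vectorize over all arguments except those
        # named in `exclude`, which are frozen with functools.partial.
        exclude = set(exclude)
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                frozen = dict((k, kwargs.pop(k)) for k in list(kwargs)
                              if k in exclude)
                vfunc = np.vectorize(functools.partial(func, **frozen))
                return vfunc(*args, **kwargs)
            return wrapper
        return decorator

    @vectorize_excluding(exclude=['p'])
    def mypolyval(x, p):
        return np.polyval(p, x)

    mypolyval([0.0, 3.0], p=[1.0, 2.0])   # -> array([ 2.,  5.])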

Michael.

 On Sat, Apr 7, 2012 at 12:18 AM, Michael McNeil Forbes michael.for...@gmail.com wrote:
 Hi,

 I added a simple enhancement patch to provide vectorize with simple
 keyword argument support.  (I added a new kwvectorize decorator, but
 suspect this could/should easily be rolled into the existing  
 vectorize.)

 http://projects.scipy.org/numpy/ticket/2100
...



Re: [Numpy-discussion] Keyword argument support for vectorize.

2012-04-09 Thread Nathaniel Smith
On Mon, Apr 9, 2012 at 10:53 AM, Michael McNeil Forbes
michael.for...@gmail.com wrote:
 It seems like functools.partial is the appropriate tool to use here.

functools was added in Python 2.5, and so far numpy is still trying to
maintain 2.4 compatibility. (Not that this is particularly complicated
code to reimplement.)
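
For reference, a minimal stand-in that works on 2.4 is indeed only a few
lines (just a sketch, not necessarily what would go into numpy):

    def partial(func, *frozen_args, **frozen_kwargs):
        # Pre-bind some arguments of func; additional positional/keyword
        # arguments are appended/merged at call time.
        def wrapper(*args, **kwargs):
            merged = dict(frozen_kwargs)
            merged.update(kwargs)
            return func(*(frozen_args + args), **merged)
        return wrapper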

- N


[Numpy-discussion] NpyAccessLib method documentation?

2012-04-09 Thread William Johnston

Hello,

Is there NpyAccessLib documentation available?

I need to use DLLImport for a C# IronPython DLR app and am not sure which 
methods to include.

Thank you.

Regards,
William Johnston


Re: [Numpy-discussion] Slice specified axis

2012-04-09 Thread Jonathan T. Niehof
On 04/06/2012 06:54 AM, Benjamin Root wrote:

 Take a peek at how np.gradient() does it. It creates a list of None with
 a length equal to the number of dimensions, and then inserts a slice
 object in the appropriate spot in the list.

List of slice(None), correct? At least that's what I see in the source, and:

  >>> a = numpy.array([[1,2],[3,4]])
  >>> operator.getitem(a, (None, slice(1, 2)))
  array([[[3, 4]]])
  >>> operator.getitem(a, (slice(None), slice(1, 2)))
  array([[2],
         [4]])
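
In other words, the np.gradient trick amounts to something like this (a
small illustrative helper, not numpy API):

    import numpy as np

    def slice_along_axis(arr, axis, start, stop):
        # Build a ':' slice for every dimension, then narrow only `axis`.
        index = [slice(None)] * arr.ndim
        index[axis] = slice(start, stop)
        return arr[tuple(index)]

    a = np.array([[1, 2], [3, 4]])
    slice_along_axis(a, axis=1, start=1, stop=2)
    # array([[2],
    #        [4]])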

-- 
Jonathan Niehof
ISR-3 Space Data Systems
Los Alamos National Laboratory
MS-D466
Los Alamos, NM 87545

Phone: 505-667-9595
email: jnie...@lanl.gov

Correspondence /
Technical data or Software Publicly Available


Re: [Numpy-discussion] Slice specified axis

2012-04-09 Thread Benjamin Root
On Mon, Apr 9, 2012 at 12:14 PM, Jonathan T. Niehof jnie...@lanl.gov wrote:

 On 04/06/2012 06:54 AM, Benjamin Root wrote:

  Take a peek at how np.gradient() does it. It creates a list of None with
  a length equal to the number of dimensions, and then inserts a slice
  object in the appropriate spot in the list.

 List of slice(None), correct? At least that's what I see in the source,
 and:

   >>> a = numpy.array([[1,2],[3,4]])
   >>> operator.getitem(a, (None, slice(1, 2)))
   array([[[3, 4]]])
   >>> operator.getitem(a, (slice(None), slice(1, 2)))
   array([[2],
          [4]])


Correct, sorry, I was working from memory.

Ben Root


Re: [Numpy-discussion] [OFFTOPIC] creating/working NumPy-ndarrays in C++

2012-04-09 Thread Chris Barker
2012/4/8 Hänel Nikolaus Valentin valentin.hae...@epfl.ch:
 http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html

that looks like it hasn't been updated since 2006 -- I'd say that
makes it a non-starter.

The new numpy-boost project looks promising, though.

 which was also mentioned on this list, I now know of four alternatives
 for boost+NumPy (including the built-in support).

4?

Given the pace of change in numpy, it looks to me like the new
numpy-boost is the only viable (Boost) option, other than rolling your
own.

For the record, Cython (with or without another lib like Blitz++)
doesn't preclude strong OOP design patterns.

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] [OFFTOPIC] creating/working NumPy-ndarrays in C++

2012-04-09 Thread Hänel Nikolaus Valentin
Hi Chris,

thanks for your answer.

* Chris Barker chris.bar...@noaa.gov [2012-04-09]:
 2012/4/8 Hänel Nikolaus Valentin valentin.hae...@epfl.ch:
 http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html

 that looks like it hasn't been updated since 2006 -- Id say that
 makes it a non-starter

Yeah, that's what I thought... Until I found it in several production
codes...

 The new numpy-boost project looks promising, though.

  which was also mentioned on this list, I now know of four alternatives
  for boost+NumPy (including the built-in support).

 4?

http://www.boost.org/doc/libs/1_49_0/libs/python/doc/v2/numeric.html
(old)

https://github.com/ndarray/Boost.NumPy
(new)

http://code.google.com/p/numpy-boost/

http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html

Or am I missing something?

V-


Re: [Numpy-discussion] [OFFTOPIC] creating/working NumPy-ndarrays in C++

2012-04-09 Thread Chris Barker
2012/4/9 Hänel Nikolaus Valentin valentin.hae...@epfl.ch:

http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html

 that looks like it hasn't been updated since 2006 -- Id say that
 makes it a non-starter

 Yeah, thats what I thought... Until I found it in several production
 codes...

are they maintaining it?


 4?

 http://www.boost.org/doc/libs/1_49_0/libs/python/doc/v2/numeric.html
 (old)

 https://github.com/ndarray/Boost.NumPy
 (new)

 http://code.google.com/p/numpy-boost/
 (also pretty old -- I see this:)

- Numpy (numpy.scipy.org) (Tested versions: 1.1.1, though >= 1.0 should work)
- Python (www.python.org) (Tested versions: 2.5.2, though >= 2.3 should work)

both pretty old versions.

http://www.eos.ubc.ca/research/clouds/software/pythonlibs/num_util/num_util_release2/Readme.html


also pretty old.

So I'd go with the actively maintained one -- or Cython -- what I can
tell you is that Cython is being very widely used in the
numerical/scientific computing community -- but I haven't seen a lot
of Boost users. Maybe they use different mailing lists, and don't go to
SciPy or PyCon...

I'm not sure you made your use case clear -- are you writing C++
specifically for calling from Python? Or are you working on a C++ lib
that will be used in C++ apps as well as Python apps?

-Chris



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-09 Thread Zachary Pincus
 That all sounds like no option -- sad.
 Cython is no solution cause, all I want is to leave Python Syntax in
 favor for strong OOP design patterns.

What about ctypes?

For straight numerical work, sometimes all one needs to hand across the
Python-to-C/C++/Fortran boundary is a pointer to some memory and the size of
the memory region. So I will often just write a very thin C interface to
whatever I'm using and then call into it with ctypes.

So you could just design your algorithm in C++ with all the strong OOP design
patterns you want, and then write a minimal C interface on top with one
or two functions that receive pointers to memory. Then allocate numpy arrays in
Python with whatever memory layout you need, and use the array's ctypes
attribute to find the pointer, etc., that you need to pass over.
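
A minimal sketch of that pattern, assuming a hypothetical shared library
libmylib.so that exposes void scale_inplace(double *data, size_t n, double
factor):

    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libmylib.so")   # hypothetical library
    lib.scale_inplace.restype = None
    lib.scale_inplace.argtypes = [ctypes.POINTER(ctypes.c_double),
                                  ctypes.c_size_t, ctypes.c_double]

    a = np.ascontiguousarray(np.arange(5, dtype=np.float64))
    ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))  # pointer into a's buffer
    lib.scale_inplace(ptr, a.size, 2.0)  # the C code writes directly into `a`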

Zach



Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-09 Thread Dag Sverre Seljebotn
On 04/08/2012 08:25 PM, Holger Herrlich wrote:

 That all sounds like no option -- sad.
 Cython is no solution cause, all I want is to leave Python Syntax in
 favor for strong OOP design patterns.

I'm sorry, I'm trying and trying to make heads or tails of this 
paragraph, but I don't manage to. If you could rephrase it that would be 
very helpful. (But I'm afraid that if you believe that C++ is more 
object-oriented than Python, you'll find most people disagree. Perhaps 
you meant that you want static typing?)

Any wrapping tool (Cython, ctypes, probably SWIG but don't know) will 
allow you to pass the pointer to the data array, the npy_intp* shape 
array, and the npy_intp* strides array.

Really, that's all you need. And given those three, writing a simple C++ 
class wrapping the arrays and allowing you to conveniently index into 
the array is done in 15 minutes.
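
Concretely, those three things are directly available from the array's
ctypes attribute (illustration only):

    import ctypes
    import numpy as np

    a = np.arange(12, dtype=np.float64).reshape(3, 4)
    data_ptr = a.ctypes.data_as(ctypes.c_void_p)  # pointer to the buffer
    shape = a.ctypes.shape                        # npy_intp[ndim]
    strides = a.ctypes.strides                    # npy_intp[ndim], in bytes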

If you need more than that -- that is, what you want is essentially to 
use NumPy from C++, with slicing and ufuncs and reductions and so on 
-- then you should probably look into a C++ array library (such as Eigen 
or Blitz++ or the array stuff in boost). Then, pass the aforementioned 
data and shape/stride arrays to the array library.

Dag


Re: [Numpy-discussion] YouTrack testbed

2012-04-09 Thread Bryan Van de Ven
On 4/3/12 4:18 PM, Ralf Gommers wrote:
 Here some first impressions.

 The good:
 - It's responsive!
 - It remembers my preferences (view type, # of issues per page, etc.)
 - Editing multiple issues with the command window is easy.
 - Search and filter functionality is powerful

 The bad:
 - Multiple projects are supported, but issues are then really mixed. 
 The way this works doesn't look very useful for combined admin of 
 numpy/scipy trackers.
 - I haven't found a way yet to make versions and subsystems appear in 
 the one-line issue overview.
 - Fixed issues are still shown by default. There are several open 
 issues filed against youtrack about this, with no reasonable answers.
 - Plain text attachments (.txt, .diff, .patch) can't be viewed, only 
 downloaded.
 - No direct VCS integration, only via Teamcity (not set up, so can't 
 evaluate).
 - No useful default views as in Trac 
 (http://projects.scipy.org/scipy/report).

Ralf,  regarding some of the issues:

I think for numpy/scipy trackers, we could simply run separate instances 
of YouTrack for each. Also we can certainly create some standard 
queries. It's a small pain not to have useful defaults, but it's only a 
one-time pain. :)

Also, what kind of integration are you looking for with github? There 
does appear to be the ability to issue commands to youtrack through git 
commits, which does not depend on TeamCity, as best I can tell:

http://confluence.jetbrains.net/display/YTD3/GitHub+Integration
http://blogs.jetbrains.com/youtrack/tag/github-integration/

I'm not sure this is what you were thinking about though.

For the other issues, Maggie or I can try and see what we can find out 
about implementing them, or working around them, this week.

Of course, we'd like to evaluate any other viable issue trackers as 
well. Do you have any suggestions for other systems besides YouTrack?

Bryan


[Numpy-discussion] Getting C-function pointers from Python to C

2012-04-09 Thread Travis Oliphant
Hi all, 

Some of you are aware of Numba.   Numba allows you to create the equivalent of 
C functions dynamically from Python.   One purpose of this system is to allow 
NumPy to take these functions and use them in operations like ufuncs, 
generalized ufuncs, file-reading, fancy-indexing, and so forth.  There are 
actually many use-cases that one can imagine for such things. 

One question is: how do you pass this function pointer to the C side?   On the 
Python side, Numba allows you to get the raw integer address of the equivalent 
C-function pointer that it just created out of the Python code.   One can 
think of this as a 32- or 64-bit integer that you can cast to a C-function 
pointer.

Now, how should this C-function pointer be passed from Python to NumPy?   One 
approach is just to pass it as an integer --- in other words have an API in C 
that accepts an integer as the first argument that the internal function 
interprets as a C-function pointer.

This is essentially what ctypes does when creating a ctypes function pointer 
out of: 

  func = ctypes.CFUNCTYPE(restype, *argtypes)(integer)

Of course, the problem with this is that you can easily hand it integers which 
don't make sense and which will cause a segfault when control is passed to this 
function.

We could also piggy-back on top of ctypes and assume that a ctypes 
function-pointer object is passed in.   This allows some error checking at 
least and also has the benefit that one could use ctypes to access a C-function 
library where these functions were defined.   I'm leaning towards this 
approach.

Now, the issue is how to get the C-function pointer (that npy_intp integer) 
back and hand it off internally.   Unfortunately, ctypes does not make it very 
easy to get this address (that I can see).   There is no ctypes C-API, for 
example.   There are two potential options: 

1) Create an API for such ctypes function pointers in NumPy and use the 
ctypes object structure.  If ctypes were to ever change its object structure 
we would have to adapt this API.   

Something like this is what is envisioned here: 

 typedef struct {
PyObject_HEAD
char *b_ptr;
 } _cfuncptr_object;

then the function pointer is:

(*((void **)(((_sp_cfuncptr_object *)(obj))->b_ptr)))

which could be wrapped-up into a nice little NumPy C-API call like

void * Npy_ctypes_funcptr(obj)


2) Use the Python API of ctypes to do the same thing.   This has the 
advantage of not needing to mirror the simple _cfuncptr_object structure in 
NumPy but it is *much* slower to get the address.   It basically does the 
equivalent of 

ctypes.cast(obj, ctypes.c_void_p).value


There is working code for this in the ctypes_callback branch of my 
scipy fork on github.
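
For reference, the round trip being discussed looks like this at the Python
level (a minimal sketch; the lambda stands in for a compiled function):

    import ctypes

    FUNC = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ctypes.c_double)
    cfunc = FUNC(lambda x, y: x + y)   # stand-in for a Numba-compiled function

    # Option 2: recover the raw address through the Python API of ctypes.
    addr = ctypes.cast(cfunc, ctypes.c_void_p).value

    # The reverse direction mentioned above: rebuild a callable from the integer.
    cfunc_again = FUNC(addr)
    cfunc_again(1.0, 2.0)              # -> 3.0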


I would like to propose two things: 

* creating a Npy_ctypes_funcptr(obj) function in the C-API of NumPy and
* implementing it with the simple pointer dereference above (option #1)


Thoughts?

-Travis






 


Re: [Numpy-discussion] Getting C-function pointers from Python to C

2012-04-09 Thread Nathaniel Smith
...isn't this an operation that will be performed once per compiled
function? Is the overhead of the easy, robust method (calling ctypes.cast)
actually measurable as compared to, you know, running an optimizing
compiler?

I mean, I doubt there'd be any real problem with adding this extra API to
numpy, but it does seem like there might be higher priority targets :-)


Re: [Numpy-discussion] Getting C-function pointers from Python to C

2012-04-09 Thread Travis Oliphant

On Apr 9, 2012, at 7:21 PM, Nathaniel Smith wrote:

 ...isn't this an operation that will be performed once per compiled function? 
 Is the overhead of the easy, robust method (calling ctypes.cast) actually 
 measurable as compared to, you know, running an optimizing compiler?
 
 

Yes, there can be significant overhead.   The compiler is run once and creates 
the function.   This function is then potentially used many, many times.
Also, it is entirely conceivable that the build step happens at a separate 
compilation time, and Numba actually loads a pre-compiled version of the 
function from disk which it then uses at run-time.

I have been playing with a version of this using scipy.integrate, and 
unfortunately the overhead of ctypes.cast is rather significant --- to the 
point of making the code path that uses these function pointers useless, when 
without the ctypes.cast overhead the speed-up is 3-5x. 

In general, I think NumPy will need its own simple function-pointer object to 
use when handing over raw function pointers between Python and C.   SciPy can 
then re-use this object, which also has a useful C-API for things like 
signature checking.   In my experience, ctypes is nice but very slow and 
without a compelling C-API. 


The kind of new C-level cfuncptr object I imagine has attributes: 

void *func_ptr;
char *signature;   /* a string like 'dd->d' to indicate a 
function that takes two doubles and returns a double */

methods would be:

from_ctypes  (classmethod)
to_ctypes 

and simple inline functions to get the function pointer and the signature. 
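
A Python-level prototype of that interface might look roughly like this
(purely illustrative; the real object would live at the C level):

    import ctypes

    class CFuncPtr(object):
        def __init__(self, func_ptr, signature):
            self.func_ptr = func_ptr    # raw address as an integer
            self.signature = signature  # e.g. 'dd->d'

        @classmethod
        def from_ctypes(cls, cfunc, signature):
            addr = ctypes.cast(cfunc, ctypes.c_void_p).value
            return cls(addr, signature)

        def to_ctypes(self):
            # Only the 'dd->d' case is handled in this sketch.
            assert self.signature == 'dd->d'
            proto = ctypes.CFUNCTYPE(ctypes.c_double,
                                     ctypes.c_double, ctypes.c_double)
            return proto(self.func_ptr)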

 I mean, I doubt there'd be any real problem with adding this extra API to 
 numpy, but it does seem like there might be higher priority targets :-)
 
 

Not if you envision doing a lot of code-development using Numba ;-)

-Travis





Re: [Numpy-discussion] Slice specified axis

2012-04-09 Thread Tony Yu
On Mon, Apr 9, 2012 at 12:22 PM, Benjamin Root ben.r...@ou.edu wrote:



 On Mon, Apr 9, 2012 at 12:14 PM, Jonathan T. Niehof jnie...@lanl.gov wrote:

 On 04/06/2012 06:54 AM, Benjamin Root wrote:

  Take a peek at how np.gradient() does it. It creates a list of None with
  a length equal to the number of dimensions, and then inserts a slice
  object in the appropriate spot in the list.

 List of slice(None), correct? At least that's what I see in the source,
 and:

   >>> a = numpy.array([[1,2],[3,4]])
   >>> operator.getitem(a, (None, slice(1, 2)))
   array([[[3, 4]]])
   >>> operator.getitem(a, (slice(None), slice(1, 2)))
   array([[2],
          [4]])


 Correct, sorry, I was working from memory.

 Ben Root


I guess I wasn't reading very carefully and assumed that you meant a list
of `slice(None)` instead of a list of `None`. In any case, both your
solution and Matthew's solution work (and both are more readable than my
original implementation).

After I got everything cleaned up (and wrote documentation and tests), I
found out that numpy already has a function to do *exactly* what I wanted
in the first place: `np.split` (the slicing was just one component of
this). I was initially misled by the docstring
(https://github.com/numpy/numpy/pull/249),
but with a list of indices, you can split an array into subarrays of
variable length (I wanted to use this to save and load ragged arrays).
Well, I guess it was a learning experience, at least.
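
For anyone curious, the indices form of `np.split` looks like this (a small
example of the ragged-array use):

    import numpy as np

    flat = np.arange(10)
    lengths = [3, 5, 2]                            # row lengths of the ragged array
    rows = np.split(flat, np.cumsum(lengths)[:-1])
    # [array([0, 1, 2]), array([3, 4, 5, 6, 7]), array([8, 9])]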

In case anyone is wondering about the original question, `np.split` (and
`np.array_split`) uses `np.swapaxes` to specify the slicing axis.

Thanks for all your help.
-Tony


[Numpy-discussion] Masked Arrays in NumPy 1.x

2012-04-09 Thread Travis Oliphant
Hey all, 

I've been waiting for Mark Wiebe to arrive in Austin where he will spend 
several weeks, but I also know that masked arrays will be only one of the 
things he and I are hoping to make headway on while he is in Austin.
Nevertheless, we need to make progress on the masked array discussion and if we 
want to finalize the masked array implementation we will need to finish the 
design.  

I've caught up on most of the discussion including Mark's NEP, Nathaniel's NEP 
and other writings and the very-nice mailing list discussion that included a 
somewhat detailed discussion on the algebra of IGNORED.   I think there are 
some things still to be decided.  However, I think some things are pretty 
clear: 

1) Masked arrays are going to be fundamental in NumPy and these should 
replace most people's use of numpy.ma.   The numpy.ma code will remain as a 
compatibility layer.

2) The reality of #1 and NumPy's general philosophy to date means that 
masked arrays in NumPy should support the common use-cases of masked arrays 
(including getting and setting of the mask from the Python and C layers).  
However, the semantics of what the mask implies may change from what numpy.ma 
uses (illustrated in the short snippet after this list) to having a True value 
mean selected.

3) There will be missing-data dtypes in NumPy.   Likely only a limited 
sub-set (string, bytes, int64, int32, float32, float64, complex64, complex32, 
and object) with an API that allows more to be defined if desired.   These will 
most likely use Mark's nice machinery for managing the calculation structure 
without requiring new C-level loops to be defined. 

4) I'm still not sure about whether the IGNORED concept is necessary or 
not.   I really like the separation that was emphasized between implementation 
(masks versus bit-patterns) and operations (propagating versus 
non-propagating).   Pauli even created another dimension which I don't totally 
grok and therefore can't remember.   Pauli?  Do you still feel that is a 
necessary construction?  But, do we need the IGNORED concept to indicate what 
amounts to different default keyword arguments to functions that operate on 
NumPy arrays containing missing data (however that is represented)?   My 
current weak view is that it is not really necessary.   But, I could be 
convinced otherwise. 
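
(For reference on item 2, numpy.ma's current convention -- True means
masked/ignored -- and the mask being directly readable and settable from
Python:)

    import numpy.ma as ma

    x = ma.array([1.0, 2.0, 3.0], mask=[False, True, False])
    x.mean()          # 2.0 -- the element with mask=True is ignored
    x.mask[0] = True  # the mask can be read and set directly
    # x is now masked_array(data=[--, --, 3.0], ...)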

I think the good news is that given Mark's hard work and Nathaniel's follow-up 
we are really quite far along.   I would love to get Nathaniel's opinion about 
what remains undone in the current NumPy code base.   I would also appreciate 
knowing (from anyone with an interest) opinions of items 1-4 above and anything 
else I've left out.   

Thanks,

-Travis



