Re: [Numpy-discussion] Numpy/Cython Google Summer of Code project idea

2008-03-08 Thread Fernando Perez
On Fri, Mar 7, 2008 at 11:41 AM, william ratcliff
<[EMAIL PROTECTED]> wrote:
> Will Cython be compatible with OpenMP?  I tried with weave some time back
> and failed miserably.  Has anyone tried with ctypes?

As far as I know, cython has no explicit OpenMP support, but it *may*
be possible to get it to generate the proper directives, using tricks
similar to those that C++ wrapping uses:

http://wiki.cython.org/WrappingCPlusPlus

Note that this is just an idea, I haven't actually tried to do it.

cheers

f
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy/Cython Google Summer of Code project idea

2008-03-08 Thread Fernando Perez
Hi Joris,

On Fri, Mar 7, 2008 at 8:10 AM, Joris De Ridder
<[EMAIL PROTECTED]> wrote:

>  Thanks. I've a few questions concerning the objections against ctypes.
>  It's part of the Python standard library, brand new from v2.5, and it
>  allows creating extensions. Disregarding it therefore requires good
>  arguments, I think. I trust you that there are, but I would like to
>  understand them better.  For ctypes, your extension needs to be
>  compiled as a shared library, but as numpy is moving towards Scons,
>  which seems to facilitate this quite a lot, is this still a difficulty/
>  objection? Secondly, looking at the examples given by Travis in his
>  Numpy Book, neither pyrex nor ctypes seem to be particularly user-
>  friendly concerning Numpy ndarrays (although ctypes does seem slightly
>  easier). From your email, I understand it's possible to remedy this
>  for Cython. From a technical point of view, would it also be possible
>  to make ctypes work better with Numpy, and if yes, do you have any
>  idea whether it would be more or less work than for Cython?

As Chris B. said,  I also think that ctypes and cython are simply
different, complementary tools.  Cython allows you to create complete
functions that can potentially run at C speed entirely, by letting you
bypass some of the more dynamic (but expensive) features of python,
while retaining a python-like syntax and having to learn a lot less of
the Python C API.  Ctypes is pure python, so while you can access
arbitrary shared libraries, it won't help you one bit if you need to
write new looping code and the execution speed in pure python isn't
enough.  At that point, if ctypes is your only tool, you'd need to
write a pure C library (against the raw Python C API, with manual
memory management included) and access it via ctypes.
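A minimal sketch of the "access an existing shared library" side of that
trade-off, assuming a Unix-like system where the C math library can be
located (the library name and signatures below are for illustration):

```python
import ctypes
import ctypes.util

# Locate and load the C math library (assumes a Unix-like system).
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# ctypes assumes int arguments/returns by default,
# so declare the real signature of cos() explicitly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Note that no compilation step was needed here; the flip side is that any
new inner loop would still have to be written in C (or Cython) first.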

The point we're trying to reach is one where most of the extension
code for numpy is Cython, to improve its long-term maintainability, to
make it easier for non-experts in the C API to contribute 'low level'
tools, and to open up future possibilities for fast code generation.
I don't want to steal Travis' thunder, but I've heard him make some
very interesting comments about his long-term ideas for novel tools
that turn high-level routines written in python/cython into highly
efficient low-level representations, something that code written
explicitly against the python C API would make very difficult.

I hope this (Travis' ideas teaser and all :) provides some better
perspective on the recent enthusiasm regarding cython, as a tool
complementary to ctypes that could greatly benefit numpy and scipy.
If it doesn't it just means I did a poor job of communicating, so keep
on asking.  We all really want to make sure that this is something
where we reach technical consensus; the fact that Sage has been so
successful with this approach is a very strong argument in favor (and
they've done LOTS of non-trivial work on cython to further their
goal), but we still need to ensure that the numpy/scipy community is
equally on board with the decision.

Cheers,

f


Re: [Numpy-discussion] Numpy/Cython Google Summer of Code project idea

2008-03-08 Thread Fernando Perez
Hi Robin,

On Fri, Mar 7, 2008 at 8:36 AM, Robin <[EMAIL PROTECTED]> wrote:

>  I hadn't seen the link Ondrej provided, although the 40 hour week
>  seems to be a Python/PSF requirement. Prior to posting I had checked
>  the Google information, where they say the time commitment depends on
>  both the scope of your project and the requirements of your mentoring
>  organisation. They also say they have had successful applicants in
>  previous years from full-time students at non-US universities (who
>  don't get a summer break), so I thought it might be possible for me to
>  be considered. I also asked in #gsoc where I was advised 20 hours per
>  week would be a good baseline, again depending on the project.
>
>  Of course, I hope to contribute to Numpy/Scipy anyway - but this
>  scheme would be a great way to kick-start that.
>
>  I look forward to seeing Numpy/Scipy accepted as a mentor organisation
>  this year anyway, even if I am unable to take part.

I don't want to mislead anyone because I'm not directly involved with
the actual mentoring, so forgive any confusion I may have caused.  My
current understanding is that we just don't have the time and
resources right now for numpy/scipy to be a separate mentor
organization, and thus we'd go in under the PSF umbrella.  In that
case, we'd probably be bound to the PSF guidelines, I imagine.

I offered to get the ball rolling on the cython idea because time is
tight and at the Sage/Scipy meeting there was a lot of interest in this
topic from everyone present.  But the actual mentoring will need to
come from others who are much more directly involved with cython and
numpy at the C API level than myself, so I'll try not to answer
anything too specifically on that front to avoid spreading
misinformation.

Cheers,

f


[Numpy-discussion] ANN: PyTables 2.0.3 released

2008-03-08 Thread Francesc Altet
===========================
 Announcing PyTables 2.0.3
===========================

PyTables is a library for managing hierarchical datasets and designed to
efficiently cope with extremely large amounts of data with support for
full 64-bit file addressing.  PyTables runs on top of the HDF5 library
and NumPy package for achieving maximum throughput and convenient use.

This is a maintenance release that mainly fixes a couple of important
bugs (bad updates of multidimensional columns in table objects, and
problems using large indexes on 32-bit platforms), adds some small
enhancements and, most importantly, brings support for the latest HDF5
1.8.0 library.  Also, binaries have been compiled against the latest
stable version of HDF5, 1.6.7, released this past February.  Thanks to
the broadening PyTables community for all the valuable feedback.

In case you want to know more in detail what has changed in this
version, have a look at ``RELEASE_NOTES.txt``.  Find the HTML version
for this document at:
http://www.pytables.org/moin/ReleaseNotes/Release_2.0.3

You can download a source package of the version 2.0.3 with
generated PDF and HTML docs and binaries for Windows from
http://www.pytables.org/download/stable/

For an on-line version of the manual, visit:
http://www.pytables.org/docs/manual-2.0.3

Migration Notes for PyTables 1.x users
======================================

If you are a user of PyTables 1.x, it is probably worth having a look
at the ``MIGRATING_TO_2.x.txt`` file, where you will find directions on
how to migrate your existing PyTables 1.x apps to the 2.x series.  You
can find an HTML version of this document at
http://www.pytables.org/moin/ReleaseNotes/Migrating_To_2.x

Resources
=========

Go to the PyTables web site for more details:

http://www.pytables.org

About the HDF5 library:

http://hdfgroup.org/HDF5/

About NumPy:

http://numpy.scipy.org/

To know more about the company behind the development of PyTables, see:

http://www.carabos.com/

Acknowledgments
===============

Thanks to many users who provided feature improvements, patches, bug
reports, support and suggestions.  See the ``THANKS`` file in the
distribution package for an (incomplete) list of contributors.  Many
thanks also to SourceForge who have helped to make and distribute this
package!  And last, but not least thanks a lot to the HDF5 and NumPy
(and numarray!) makers. Without them, PyTables simply would not exist.

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.

-- 
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"


[Numpy-discussion] [ANN] numscons 0.5.1: building scipy

2008-03-08 Thread David Cournapeau
Hi,

Numscons 0.5.1 is available through pypi (eggs and tarballs).  This
is the first version which can build the whole scipy source tree.  To
build scipy with numscons, you should first get the code in the branch:

svn co http://svn.scipy.org/svn/scipy/branches/build_with_scons

And then build it like numpy:

python setupscons.py install

Technically speaking, you can build scipy with numscons on top of a
numpy built the standard way, but that's not a good idea (because of
potential library and compiler mismatches between distutils and
numscons).  See
http://projects.scipy.org/scipy/numpy/wiki/NumScons for more details.

The only tested platforms for now are:
 - linux + gcc; other compilers on linux should work as well.
 - solaris + sunstudio with sunperf.

On both those platforms, only a few tests fail.  I don't expect
windows or mac OS X to work yet, but I cannot test those platforms ATM.
I am releasing the current state of numscons because, unfortunately, I
won't have much time to work on it over the next few weeks.

PLEASE DO NOT USE IT IN PRODUCTION!  There are still some serious 
issues:
- I painfully discovered that at least g77 is extremely sensitive to 
different orders of linker flags (can cause crashes). I don't have any 
problem anymore on my workstation (Ubuntu 32 bits, atlas + gcc/g77), but 
this needs more testing.
- there are some race conditions with f2py which I do not fully 
understand yet, and which prevent parallel builds from working (so do 
not use the scons --jobs option)
- optimization flags of proprietary compilers: they are a PITA. They 
often break IEEE conformance in quite a hard way, and this causes 
crashes or wrong results (for example, the -fast option of sun compilers 
breaks the argsort function of numpy).

So again, this is really just a release for people to test things if 
they want, but nothing else.

cheers,

David


[Numpy-discussion] numpy.distutils bug, fix, comments?

2008-03-08 Thread Matthew Brett
Hi,

I think I found a bug in numpy/distutils/ccompiler.py - and wanted to
check that no-one has any objections before I fix it.

These lines (390ff, numpy/distutils/ccompiler.py)

  for _cc in ['msvc', 'bcpp', 'cygwinc', 'emxc', 'unixc']:
      _m = sys.modules.get('distutils.' + _cc + 'compiler')
      if _m is not None:
          setattr(getattr(_m, _cc + 'compiler'), 'gen_lib_options',
                  gen_lib_options)

occasionally cause an error with a message of the form "module has no
attribute 'unixccompiler'".

As far as I can see, the expression on the line beginning '_m' can only
return None or, in my case, the distutils.unixccompiler module.  The
getattr call then requests an attribute 'unixccompiler' from the
distutils.unixccompiler module, causing an error.

I'm suggesting changing the relevant line to:

setattr(_m, 'gen_lib_options',
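
A small self-contained sketch of the failure mode and the proposed fix,
using a throwaway stand-in module (``fake.unixccompiler`` is made up
here; the real distutils module is left untouched):

```python
import sys
import types

# Stand-in for a distutils compiler module.  The real module defines a
# class with a differently spelled name, so a getattr for the lowercase
# module-like name fails.
_m = types.ModuleType('fake.unixccompiler')
sys.modules['fake.unixccompiler'] = _m

def gen_lib_options():
    return 'patched'

# The buggy lookup: asking the module for an attribute it doesn't have.
try:
    getattr(_m, 'unixccompiler')
    raised = False
except AttributeError:
    raised = True
print(raised)  # True

# The proposed fix: patch the module object itself.
setattr(_m, 'gen_lib_options', gen_lib_options)
print(_m.gen_lib_options())  # patched
```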

Any objections?  If not I'll commit soon...

Matthew


Re: [Numpy-discussion] numpy.distutils bug, fix, comments?

2008-03-08 Thread Robert Kern
On Sat, Mar 8, 2008 at 4:10 PM, Matthew Brett <[EMAIL PROTECTED]> wrote:
> Hi,
>
>  I think I found a bug in numpy/distutils/ccompiler.py - and wanted to
>  check that no-one has any objections before I fix it.
>
>  These lines (390ff distutils.ccompiler.py)
>
>   for _cc in ['msvc', 'bcpp', 'cygwinc', 'emxc', 'unixc']:
>  _m = sys.modules.get('distutils.'+_cc+'compiler')
>  if _m is not None:
> setattr(getattr(_m, _cc+'compiler'), 'gen_lib_options',
> gen_lib_options)
>
>  occasionally cause an error with message of form module has no
>  attribute 'unixccompiler'.
>
>  As far as I can see, the line beginning '_m' can only return None, or,
>  in my case, the
>   distutils.unixccompiler module.  Then the getattr phrase will request
>   an attribute 'unixccompiler' from the distutils.unixccompiler module,
>  causing an error.
>
>  I'm suggesting changing the relevant line to:
>
> setattr(_m, 'gen_lib_options',
>
>  Any objections?  If not I'll commit soon...

I believe you are correct.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


[Numpy-discussion] Slice and assign into new NDarray...

2008-03-08 Thread Vince Fulco
* This may be a dupe as gmail hotkeys sent a draft prematurely...

After scouring material and books I remain stumped with this one as a
new Numpy user-

I have an ND array with shape (10,15) and want to slice or subset(?) the data
into a new 2D array with the following criteria:

1) Separate each 5 observations along axis=0 (row) and transpose them to
the new array with shape (50,3)


Col1 Col2 Col3

Slice1 Slice2 Slice3
...
...
...

Slice1 should have the coordinates [0:5, 0], Slice2 [0:5, 1], and so
on.  I've tried initializing the target ND array with

D = NP.zeros((50,3), dtype='int') and then assigning into it with
something like:

for s in xrange(original_array.shape[0]):
    D = NP.transpose([data[s, i:i+step]
                      for i in range(0, data.shape[1], step)])

with step = 5, but I get errors such as "IndexError: invalid index".

Also tried various combos of explicitly referencing D coordinates but
to no avail.


TIA,

Vince Fulco


Re: [Numpy-discussion] Slice and assign into new NDarray...

2008-03-08 Thread Anne Archibald
On 08/03/2008, Vince Fulco <[EMAIL PROTECTED]> wrote:

>  I have an ND array with shape (10,15) and want to slice or subset(?) the data
>  into a new 2D array with the following criteria:
>
>  1) Separate each 5 observations along axis=0 (row) and transpose them to
>  the new array with shape (50,3)
>
>
>  Col1 Co2 Col3
>
>  Slice1 Slice2 Slice3
>  ...
>  ...
>  ...
>
>  Slice1 should have the coordinates[0:5,0], Slice2[0:5,1] and so
>  on...I've tried initializing the target ND array with
>
>  D = NP.zeros((50,3), dtype='int') and then assigning into it with
>  something like:
>
>  for s in xrange(original_array.shape[0]):
> D= NP.transpose([data[s,i:i+step] for i in range(0,data.shape[1], 
> step)])
>
>  with step = 5 but I get errors i.e. IndexError: invalid index
>
>  Also tried various combos of explicitly referencing D coordinates but
>  to no avail.

You're not going to get a slice - in the sense of a view on the same
underlying array, and through which you can modify the original array
- but this is perfectly possible without for loops.

First set up the array:
In [12]: a = N.arange(150)

In [13]: a = N.reshape(a, (-1,15))

You can check that the values are sensible. Now reshape it so that you
can split up slice1, slice2, and slice3:
In [14]: b = N.reshape(a, (-1, 3, 5))

slice1 is b[:,0,:]. Now we want to flatten the first and third
coordinates together. reshape() doesn't do that, exactly, but if we
swap the axes around we can use reshape to put them together:
In [15]: c = N.reshape(b.swapaxes(1,2),(-1,3))

This reshape necessarily involves copying the original array. You can
check that it gives you the value you want.
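
Putting those steps together as one runnable sketch (same hypothetical
(10, 15) example array as above):

```python
import numpy as N

a = N.arange(150).reshape(-1, 15)    # the original (10, 15) array
b = a.reshape(-1, 3, 5)              # each row split into 3 slices of 5
c = b.swapaxes(1, 2).reshape(-1, 3)  # -> shape (50, 3); copies the data

print(c.shape)   # (50, 3)
print(c[:5, 0])  # [0 1 2 3 4] -- the first row's slice1, a[0, 0:5]
```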

I recommend reading
http://www.scipy.org/Numpy_Functions_by_Category for all those times
you know what you want to do but can't find the function to make numpy
do it.

Anne