Re: [Numpy-discussion] Including .f2py_f2cmap in numpy.distutils?

2015-09-28 Thread Pearu Peterson
Hi,

Currently, .f2py_f2cmap must be located in the directory where setup.py or
f2py.py is called (to be exact, where numpy.f2py.capi_maps is imported).
This location is hardcoded, and there is no way to specify the file location
from within setup.py scripts.

However, you don't need to use the .f2py_f2cmap file to specify the correct
mapping. Read the code in numpy/f2py/capi_maps.py and you'll find that
inserting the following codelet into your setup.py file might work (untested code):

from numpy.f2py.capi_maps import f2c_map
f2c_map['real'].update(sp='float', dp='double', qp='long_double')
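
For reference, the .f2py_f2cmap file itself is just a Python dictionary
literal that capi_maps reads from the current working directory; a minimal
sketch of its contents (the kind names sp/dp/qp are assumptions matching the
codelet above):

# contents of .f2py_f2cmap -- maps Fortran <type>(kind=<kind>) to C type names
{'real': {'sp': 'float', 'dp': 'double', 'qp': 'long_double'}}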

HTH,
Pearu


On Fri, Sep 25, 2015 at 10:04 PM, Eric Hermes  wrote:

> Hello,
>
>
>
> I am attempting to set up a numpy.distutils setup.py for a small python
> program that uses a Fortran module. Currently, the setup is able to compile
> and install the program seemingly successfully, but the f2py script
> erroneously maps the data types I am using to float, rather than double. I
> have the proper mapping set up in a .f2py_f2cmap in the source directory,
> but it does not seem to be copied to the build directory at compile time,
> and I cannot figure out how to make it get copied. Is there a simple way to
> do what I am trying to do? Alternatively, is there a way to specify the
> mapping in my setup.py scripts?
>
>
>
> Here's a github repo with the project:
>
>
>
> https://github.com/ehermes/ased3
>
>
>
> Thanks,
>
> Eric Hermes
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py and callbacks with variables

2015-08-13 Thread Pearu Peterson
Hi Casey,

On Wed, Aug 12, 2015 at 11:46 PM, Casey Deen  wrote:

> Hi Pearu-
>
>Thanks so much!  This works!  Can you point me to a reference for the
> format of the .pyf files?  My ~day of searching found a few pages on the
> scipy website, but nothing which went into this amount of detail.
>
>
Try this:

  https://sysbio.ioc.ee/projects/f2py2e/usersguide/index.html#signature-file


> I also asked Stackoverflow, and unless you object, I'd like to add your
> explanation and mark it as SOLVED for future poor souls wrestling with
> this problem.  I'll also update the github repository with before and
> after versions of the .pyf file.
>
>
Go ahead with stackoverflow.

Best regards,
Pearu

Cheers,
> Casey
>
> On 08/12/2015 09:34 PM, Pearu Peterson wrote:
> > Hi Casey,
> >
> > What you observe is not an f2py bug. When f2py sees code like
> >
> > subroutine foo
> >   call bar
> > end subroutine foo
> >
> > then it will not attempt to analyze bar, because of the implicit
> > assumption that statements which have no references to foo's arguments
> > are irrelevant for wrapper function generation.
> > For your example, f2py needs some help. Try the following signature in
> > .pyf file:
> >
> > subroutine barney ! in :flintstone:nocallback.f
> > use test__user__routines, fred=>fred, bambam=>bambam
> > intent(callback, hide) fred
> > external fred
> > intent(callback,hide) bambam
> > external bambam
> > end subroutine barney
> >
> > Btw, instead of
> >
> >   f2py -c -m flintstone flintstone.pyf callback.f nocallback.f
> >
> > use
> >
> >   f2py -c flintstone.pyf callback.f nocallback.f
> >
> > because module name comes from the .pyf file.
> >
> > HTH,
> > Pearu
> >
> > On Wed, Aug 12, 2015 at 7:12 PM, Casey Deen  > <mailto:d...@mpia.de>> wrote:
> >
> > Hi all-
> >
> >I've run into what I think might be a bug in f2py and callbacks to
> > python.  Or, maybe I'm not using things correctly.  I have created a
> > very minimal example which illustrates my problem at:
> >
> > https://github.com/soylentdeen/fluffy-kumquat
> >
> > The issue seems to affect callbacks with variables, but only when they
> > are called indirectly (i.e. from other Fortran routines).  For example,
> > if I have a python function
> >
> > def show_number(n):
> > print("%d" % n)
> >
> > and I setup a callback in a fortran routine:
> >
> >   subroutine cb
> > cf2py intent(callback, hide) blah
> >   external blah
> >   call blah(5)
> >   end
> >
> > and connect it to the python routine
> > fortranObject.blah = show_number
> >
> > I can successfully call the cb routine from python:
> >
> > >fortranObject.cb
> > 5
> >
> > However, if I call the cb routine from within another Fortran routine,
> > it seems to lose its marbles
> >
> >   subroutine no_cb
> >   call cb
> >   end
> >
> > capi_return is NULL
> > Call-back cb_blah_in_cb__user__routines failed.
> >
> > For more information, please have a look at the github repository.  I've
> > reproduced the behavior on both linux and mac.  I'm not sure if this is
> > an error in the way I'm using the code, or if it is an actual bug.  Any
> > and all help would be very much appreciated.
> >
> > Cheers,
> > Casey
> >
> >
> > --
> > Dr. Casey Deen
> > Post-doctoral Researcher
> > d...@mpia.de <mailto:d...@mpia.de>
> >  +49-6221-528-375 
> > Max Planck Institut für Astronomie (MPIA)
> > Königstuhl 17  D-69117 Heidelberg, Germany
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org <mailto:NumPy-Discussion@scipy.org>
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> >
> >
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
>
> --
> Dr. Casey Deen
> Post-doctoral Researcher
> d...@mpia.de   +49-6221-528-375
> Max Planck Institut für Astronomie (MPIA)
> Königstuhl 17  D-69117 Heidelberg, Germany
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py and callbacks with variables

2015-08-12 Thread Pearu Peterson
Hi Casey,

What you observe is not an f2py bug. When f2py sees code like

subroutine foo
  call bar
end subroutine foo

then it will not attempt to analyze bar, because of the implicit assumption
that statements which have no references to foo's arguments are irrelevant
for wrapper function generation.
For your example, f2py needs some help. Try the following signature in .pyf
file:

subroutine barney ! in :flintstone:nocallback.f
use test__user__routines, fred=>fred, bambam=>bambam
intent(callback, hide) fred
external fred
intent(callback,hide) bambam
external bambam
end subroutine barney

Btw, instead of

  f2py -c -m flintstone flintstone.pyf callback.f nocallback.f

use

  f2py -c flintstone.pyf callback.f nocallback.f

because the module name comes from the .pyf file.
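
On the Python side, hidden callbacks declared this way are attached as
attributes of the extension module, mirroring the fortranObject.blah =
show_number pattern in the example below; a hedged usage sketch (the callback
signatures here are assumptions):

import flintstone                  # the f2py-built extension module

def fred(n):                       # Python implementations of the hidden callbacks
    print(n)

def bambam(n):
    print(n)

flintstone.fred = fred             # attach the callbacks before calling Fortran
flintstone.bambam = bambam
flintstone.barney()                # indirect Fortran calls now reach Python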

HTH,
Pearu

On Wed, Aug 12, 2015 at 7:12 PM, Casey Deen  wrote:

> Hi all-
>
>I've run into what I think might be a bug in f2py and callbacks to
> python.  Or, maybe I'm not using things correctly.  I have created a
> very minimal example which illustrates my problem at:
>
> https://github.com/soylentdeen/fluffy-kumquat
>
> The issue seems to affect callbacks with variables, but only when they
> are called indirectly (i.e. from other Fortran routines).  For example,
> if I have a python function
>
> def show_number(n):
> print("%d" % n)
>
> and I setup a callback in a fortran routine:
>
>   subroutine cb
> cf2py intent(callback, hide) blah
>   external blah
>   call blah(5)
>   end
>
> and connect it to the python routine
> fortranObject.blah = show_number
>
> I can successfully call the cb routine from python:
>
> >fortranObject.cb
> 5
>
> However, if I call the cb routine from within another fortran routine,
> it seems to lose its marbles
>
>   subroutine no_cb
>   call cb
>   end
>
> capi_return is NULL
> Call-back cb_blah_in_cb__user__routines failed.
>
> For more information, please have a look at the github repository.  I've
> reproduced the behavior on both linux and mac.  I'm not sure if this is
> an error in the way I'm using the code, or if it is an actual bug.  Any
> and all help would be very much appreciated.
>
> Cheers,
> Casey
>
>
> --
> Dr. Casey Deen
> Post-doctoral Researcher
> d...@mpia.de   +49-6221-528-375
> Max Planck Institut für Astronomie (MPIA)
> Königstuhl 17  D-69117 Heidelberg, Germany
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py problem with multiple fortran source files

2015-06-13 Thread Pearu Peterson
Hi,

On Fri, Jun 12, 2015 at 7:23 PM, 石头  wrote:

> Hi everybody,
> I'm new to f2py, and I ran into some trouble when wrapping some Fortran files
> for Python.
> I have downloaded a Fortran library (
> https://github.com/brianlockwood/ForK); I want to compile these files
> into a library and call the library from another Fortran file I wrote myself.
> Here are my questions:
> 1. How should I compile the library (in this case, ForK), and what command
> should I use?
>

In ForK, try

  make all

that should produce kriginglib.a. You might need to add the -fPIC option to
FFLAGS in the Makefile before executing make.


> 2. How can I use the library and my own Fortran source file (All.f90)
> with the f2py command to generate a module I can use from Python?
>

Try

f2py -c -m mylib All.f90 /path/to/kriginglib.a
python
>>> import mylib
>>> print mylib.__doc__

HTH,
Pearu




> Thanks!
> Shishijie
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py and debug mode

2014-10-03 Thread Pearu Peterson
Hi,

When you run f2py without the -c option, the wrapper source files are
generated but not compiled.
With these source files and fortranobject.c, you can build the extension
module with your specific compiler options using the compiler framework of
your choice.
I am not familiar enough with Visual Studio specifics to suggest a more
detailed solution, but after the wrapper source files are generated there is
nothing f2py-specific about building the extension module; in fact, `f2py -c`
itself relies on distutils compilation/linkage methods.
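
As a hedged helper for setting up such a project, the f2py support sources
that must be compiled together with the generated wrapper can be located from
Python (the src/ layout is an assumption based on a standard numpy install):

import os
import numpy.f2py

# fortranobject.c/.h ship next to the f2py package sources and must be
# compiled and linked together with the generated <module>module.c wrapper
src_dir = os.path.join(os.path.dirname(numpy.f2py.__file__), 'src')
print(os.path.join(src_dir, 'fortranobject.c'))
print(os.path.join(src_dir, 'fortranobject.h'))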

HTH,
Pearu

On Tue, Sep 30, 2014 at 4:15 PM, Bayard  wrote:

> Hello to all.
> I'm aiming to wrap a Fortran program into Python. I started to work with
> f2py, and am trying to setup a debug mode where I could reach
> breakpoints in Fortran module launched by Python. I've been looking in
> the existing post, but not seeing things like that.
>
> I'm used to work with visual studio 2012 and Intel Fortran compiler, I
> have tried to get that point doing :
> 1) Run f2py -m to get *.c wrapper
> 2) Embed it in a C Project in Visual Studio, containing also with
> fortranobject.c and fortranobject.h,
> 3) Create a solution which also contains my fortran files compiled as a lib
> 4) Generate in debug mode a "dll" with extension pyd (to get to that
> point name of the "main" function in Fortran by "_main").
>
> I compiled without any error, and reach break point in C Wrapper, but
> not in Fortran, and the fortran code seems not to be executed (whereas
> it is when compiling with f2py -c). Trying to understand f2py code, I
> noticed that f2py is not only writing c-wrapper, but compiling it in a
> specific way. Is there a way to get a debug mode in Visual Studio with
> f2py (some members of the team are used to it) ? Any alternative tool we
> should use for debugging ?
>
> Thanks for answering
> Ferdinand
>
>
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] f2py and Fortran STOP statement issue

2013-11-20 Thread Pearu Peterson
Hi,

The issue with wrapping Fortran codes that contain STOP statements has been
raised several times in the past with no good working solution proposed.

Recently the issue was raised again in the f2py issue tracker. Since the user
was willing to test out a few ideas, with positive results, I decided to
describe the outcome (a working solution) on the following wiki page:

  https://code.google.com/p/f2py/wiki/FAQ2ed

Just FYI,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Pearu Peterson
On Tue, Jul 3, 2012 at 5:20 PM, Sturla Molden  wrote:

>
> As for f2py: Allocatable arrays are local variables for internal use,
> and they are not a part of the subroutine's calling interface. f2py only
> needs to know about the interface, not the local variables.
>

One can have allocatable arrays in a module data block, for instance, where
they are global. f2py supports wrapping these allocatable arrays to Python.
See, for example,


http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#allocatable-arrays
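
A hedged usage sketch from the Python side, assuming a Fortran module named
mod with a module-level allocatable array x has been wrapped by f2py into a
(hypothetical) extension module flib:

import numpy as np
import flib                    # hypothetical f2py-built extension module

flib.mod.x = np.arange(5.0)    # assigning allocates the Fortran array
print(flib.mod.x)              # the module array is exposed to numpy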

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Question on F/C-ordering in numpy svd

2012-01-13 Thread Pearu Peterson


On 01/12/2012 04:21 PM, Ivan Oseledets wrote:
> Dear all!
>
> I am quite new to numpy and python.
> I am a matlab user, my work is mainly
> on multidimensional arrays, and I have a question on the svd function
> from numpy.linalg
>
> It seems that
>
> u,s,v=svd(a,full_matrices=False)
>
> returns u and v in the F-contiguous format.

The reason for this is that the underlying computational routine
is in Fortran (when using the system lapack library, for instance), which
requires and returns F-contiguous arrays, and the current behaviour
guarantees the most memory-efficient computation of svd.

> That is not in good agreement with other numpy stuff, where
> C-ordering is the default.
> For example, matrix multiplication, dot(), ignores ordering and always
> returns its result in C-ordering (which is documented), but the svd
> behaviour is not documented.

In generic numpy operation, the particular ordering of arrays
should not matter as the underlying code should know how to
compute array operation results from different input orderings
efficiently.

This behaviour of svd should be documented. However, one
should check that, when using the svd from numpy's lapack_lite (which is
f2c code and could, in principle, also use C-ordering),
F-contiguous arrays are actually returned.
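
A quick way to check the described behaviour (the result may depend on the
LAPACK backend numpy was built against):

import numpy as np

a = np.random.rand(5, 3)
u, s, v = np.linalg.svd(a, full_matrices=False)
# with a Fortran LAPACK backend, u and v come back F-contiguous
print(u.flags['F_CONTIGUOUS'], v.flags['F_CONTIGUOUS'])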

Regards,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to start at line # x when using numpy.memmap

2011-08-19 Thread Pearu Peterson


On 08/19/2011 05:01 PM, Brent Pedersen wrote:
> On Fri, Aug 19, 2011 at 7:29 AM, Jeremy Conlin  wrote:
>> On Fri, Aug 19, 2011 at 7:19 AM, Pauli Virtanen  wrote:
>>> Fri, 19 Aug 2011 07:00:31 -0600, Jeremy Conlin wrote:
 I would like to use numpy's memmap on some data files I have. The first
 12 or so lines of the files contain text (header information) and the
 remainder has the numerical data. Is there a way I can tell memmap to
 skip a specified number of lines instead of a number of bytes?
>>>
>>> First use standard Python I/O functions to determine the number of
>>> bytes to skip at the beginning and the number of data items. Then pass
>>> in `offset` and `shape` parameters to numpy.memmap.
>>
>> Thanks for that suggestion. However, I'm unfamiliar with the I/O
>> functions you are referring to. Can you point me to do the
>> documentation?
>>
>> Thanks again,
>> Jeremy
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> this might get you started:
>
>
> import numpy as np
>
> # make some fake data with 12 header lines.
> with open('test.mm', 'w') as fhw:
>  print>>  fhw, "\n".join('header' for i in range(12))
>  np.arange(100, dtype=np.uint).tofile(fhw)
>
> # use normal python io to determine of offset after 12 lines.
> with open('test.mm') as fhr:
>  for i in range(12): fhr.readline()
>  offset = fhr.tell()

I think that before reading a line the program should
check whether the line starts with "#". Otherwise fhr.readline()
may return a very large chunk of data (perhaps the rest of the file
content) that ought to be read only via memmap.
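
For completeness, a hedged continuation of the example above: once the offset
is known, the numeric block can be mapped directly (the dtype must match what
tofile() wrote):

# map everything after the 12 header lines; the shape is inferred from the
# remaining file size when it is left unspecified
mm = np.memmap('test.mm', dtype=np.uint, mode='r', offset=offset)
print(mm[:5])   # -> [0 1 2 3 4]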

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Build of current Git HEAD for NumPy fails

2011-08-19 Thread Pearu Peterson


On 08/19/2011 02:26 PM, Dirk Ullrich wrote:
> Hi,
>
> when trying to build current Git HAED of NumPy with - both for
> $PYTHON=python2 or $PYTHON=python3:
>
> $PYTHON setup.py config_fc --fcompiler=gnu95 install --prefix=$WHATEVER
>
> I get the following error - here for PYTHON=python3.2

The command works fine here with Numpy HEAD and Python 2.7.
Btw, why do you specify --fcompiler=gnu95 for numpy? Numpy
has no Fortran sources, so a Fortran compiler is not needed
for building Numpy (unless you use Fortran libraries
for numpy.linalg).

> running build_clib
...
>File 
> "/common/packages/build/makepkg-du/python-numpy-git/src/numpy-build/build/py3k/numpy/distutils/command/build_clib.py",
> line 179, in build_a_library
>  fcompiler.extra_f77_compile_args =
> build_info.get('extra_f77_compile_args') or []
> AttributeError: 'str' object has no attribute 'extra_f77_compile_args'

Reading the code, I don't see how this can happen. Very strange.
Anyway, I cleaned up build_clib to follow a similar coding convention
to build_ext. Could you try numpy HEAD again?

Regards,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [f2py] How to specify compile options in setup.py

2011-08-16 Thread Pearu Peterson

On Tue, Aug 16, 2011 at 7:50 PM, Jose Gomez-Dans wrote:

> Hi,
>
> Up to now, I have managed to build Fortran extensions with f2py by ussing
> the following command:
> $ python setup.py config_fc --fcompiler=gnu95
>  --f77flags='-fmy_flags' --f90flags='-fmy_flags' build
>
> I think that these options should be able to go in a setup.py file, and use
> the f2py_options file. One way of doing this is to extend sys.argv with the
> required command line options:
> import sys
> sys.argv.extend ( ['config_fc', '--fcompiler=gnu95',
>  '--f77flags="-fmy_flags"', "--f90flags='-fmy_flags"] )
>
> This works well if all the extensions require the same flags. In my case,
> however, One of the extensions requires a different set of flags (in
> particular, it requires that flag  -fdefault-real-8 isn't set, which is
> required by the extensions). I tried setting the f2py_options in the
> add_extension method call:
>
> config.add_extension( 'my_extension', sources = my_sources,
>  f2py_options=['f77flags="-ffixed-line-length-0" -fdefault-real-8',
> 'f90flags="-fdefault-real-8"']  )
>
> This compiles the extensions (using the two dashes in front of the f2py
> option, e.g. --f77flags, results in an unrecognised option), but the f2py_options
> go unheeded. Here's the relevant bit of the output from python setup.py
> build:
>
> compiling Fortran sources
> Fortran f77 compiler: /usr/bin/gfortran -ffixed-line-length-0 -fPIC -O3
> -march=native
> Fortran f90 compiler: /usr/bin/gfortran -ffixed-line-length-0 -fPIC -O3
> -march=native
> Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form
> -fno-second-underscore -ffixed-line-length-0 -fPIC -O3 -march=native
> compile options: '-Ibuild/src.linux-i686-2.7
> -I/usr/lib/pymodules/python2.7/numpy/core/include -I/usr/include/python2.7
> -c'
> extra options: '-Jbuild/temp.linux-i686-2.7/my_dir
> -Ibuild/temp.linux-i686-2.7/my_dir'
>
> How can I disable (or enable) one option for compiling one particular
> extension?
>
>
You cannot do it unless you update numpy from the git repo.
I just implemented support for the extra_f77_compile_args and
extra_f90_compile_args options, which can be used in config.add_extension
as well as in config.add_library.
See
  https://github.com/numpy/numpy/commit/43862759

So, with recent numpy the following will work

config.add_extension('my_extension', sources=my_sources,
                     extra_f77_compile_args=["-ffixed-line-length-0",
                                             "-fdefault-real-8"],
                     extra_f90_compile_args=["-fdefault-real-8"],
                     )

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py - undefined symbol: _intel_fast_memset [SEC=UNCLASSIFIED]

2011-08-16 Thread Pearu Peterson


On 08/16/2011 02:32 PM, Jin Lee wrote:
> Hello,
>
> This is my very first attempt at using f2py but I have come across a problem. 
> If anyone can assist me I would appreciate it very much.
>
> I have a very simple test Fortran source, sub.f90 which is:
>
> subroutine sub1(x,y)
> implicit none
>
> integer, intent(in) :: x
> integer, intent(out) :: y
>
> ! start
> y = x
>
> end subroutine sub1
>
>
> I then used f2py to produce a shared object file, sub.so:
>
> f2py -c -m sub sub.f90 --fcompiler='gfortran'
>
> After starting a Python interactive session I tried to import the 
> Fortran-derived Python module but I get an error message:
>
> >>> import sub
> Traceback (most recent call last):
>File "", line 1, in
> ImportError: ./sub.so: undefined symbol: _intel_fast_memset
>
>
> Can anyone suggest what this error message means and how I can overcome it, 
> please?

Try
   f2py -c -m sub sub.f90 --fcompiler=gnu95

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ULONG not in UINT16, UINT32, UINT64 under 64-bit windows, is this possible?

2011-08-15 Thread Pearu Peterson
Hi,

A student of mine, using 32-bit numpy 1.5 under 64-bit Windows 7, noticed that
when a numpy array with dtype=uint32 is given to an extension module, the
following codelet fails:

switch(PyArray_TYPE(ARR)) {
  case PyArray_UINT16: /* do smth */ break;
  case PyArray_UINT32: /* do smth */ break;
  case PyArray_UINT64: /* do smth */ break;
  default: /* raise type error exception */
}

The same test worked fine under Linux.

Checking the value of PyArray_TYPE(ARR) (=8) showed that it corresponds to
NPY_ULONG (when counting the items in the enum definition).

Is this situation possible, where NPY_ULONG does not correspond to a 16-, 32-,
or 64-bit integer?
Or does this indicate a bug somewhere for this particular platform?
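
A small diagnostic that can be run from Python to see how the type numbers
map to dtypes on a given platform (illustrative only):

import numpy as np

print(np.dtype(np.uint32).num)               # type number carried by uint32 arrays
print(np.dtype('L').num)                     # 'L' is the C unsigned long (NPY_ULONG)
print(np.dtype(np.uint32) == np.dtype('L'))  # True where unsigned long is 32-bit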

Thanks,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] AttributeError in numpy.distutils

2011-05-30 Thread Pearu Peterson
On Sun, May 29, 2011 at 9:19 PM, Ralf Gommers
wrote:

>
>
> On Sun, May 22, 2011 at 8:14 PM, Branimir Sesar 
> wrote:
>
>> On 05/22/2011 04:17 AM, Ralf Gommers wrote:
>> >
>> >
>> > On Thu, May 19, 2011 at 8:28 PM, Branimir Sesar
>> > mailto:bse...@astro.caltech.edu>> wrote:
>> >
>> > Dear Numpy users,
>> >
>> > I've encountered an AttributeError in numpy.distutils
>> >
>> >File
>> >
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/build_src.py",
>> > line 646, in swig_sources
>> >  extension.swig_opts.remove('-c++')
>> > AttributeError: 'str' object has no attribute 'remove'
>> >
>> > while compiling some code with Python 2.7.1 and Numpy 1.6.0.
>> >
>> >
>> > What are you doing here, compiling numpy? Building some of your own
>> > swig-ed code? Please give the details needed to reproduce your issue.
>>
>> I've been trying to compile Scikits ANN
>> (http://projects.scipy.org/scikits/wiki/AnnWrapper) with Python 2.7.1,
>> numpy 1.6.0, and SWIG 2.0.3 but the compilation breaks down down with
>> the error given below. Previously, I was able to compile Scikits ANN
>> using Enthought Python Distribution 7.0.2 (Python 2.7.1, numpy 1.5.1,
>> swig 1.3.40).
>>
>> running install
>> running bdist_egg
>> running egg_info
>> running build_src
>> build_src
>> building extension "scikits.ann._ANN" sources
>> Traceback (most recent call last):
>>File "setup.py", line 139, in 
>>  test_suite = 'nose.collector',
>>File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/core.py",
>>
>> line 186, in setup
>>  return old_setup(**new_attr)
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/core.py", line
>> 152, in setup
>>  dist.run_commands()
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/dist.py", line
>> 953, in run_commands
>>  self.run_command(cmd)
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/dist.py", line
>> 972, in run_command
>>  cmd_obj.run()
>>File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/install.py",
>>
>> line 57, in run
>>  r = self.setuptools_run()
>>File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/install.py",
>>
>> line 51, in setuptools_run
>>  self.do_egg_install()
>>File "build/bdist.linux-x86_64/egg/setuptools/command/install.py",
>> line 96, in do_egg_install
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/cmd.py", line
>> 326, in run_command
>>  self.distribution.run_command(command)
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/dist.py", line
>> 972, in run_command
>>  cmd_obj.run()
>>File "build/bdist.linux-x86_64/egg/setuptools/command/bdist_egg.py",
>> line 167, in run
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/cmd.py", line
>> 326, in run_command
>>  self.distribution.run_command(command)
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/dist.py", line
>> 972, in run_command
>>  cmd_obj.run()
>>File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/egg_info.py",
>>
>> line 8, in run
>>  self.run_command("build_src")
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/cmd.py", line
>> 326, in run_command
>>  self.distribution.run_command(command)
>>File "/home/bsesar/usr/pydebug/lib/python2.7/distutils/dist.py", line
>> 972, in run_command
>>  cmd_obj.run()
>> File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/build_src.py",
>>
>> line 152, in run
>>  self.build_sources()
>> File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/build_src.py",
>>
>> line 169, in build_sources
>>  self.build_extension_sources(ext)
>> File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/build_src.py",
>>
>> line 332, in build_extension_sources
>>  sources = self.swig_sources(sources, ext)
>> File
>>
>> "/home/bsesar/usr/pydebug/lib/python2.7/site-packages/numpy/distutils/command/build_src.py",
>>
>> line 646, in swig_sources
>>  extension.swig_opts.remove('-c++')
>> AttributeError: 'str' object has no attribute 'remove'
>>
>> Looks like this is a bug introduced in numpy 1.6.0 by commit ff0822c4.
>
>
This is not a bug as explained in

  http://projects.scipy.org/numpy/ticket/1851


> Right above this line (numpy/distutils/command/build_src.py, line 646) add
> this:
>
> if isinstance(extension.swig_opts, basestring):
> extension.swig_opts = extension.swig_opts.split()
>
> Then you should be able to compile scikits.ann.
>
>
The bug is in scikits/ann/setup.py. Applying the following patch

--- scikits/ann/setup.py(revision 2267)
+++ scikits/ann/setup.py(working copy)
@@ -45,11 +45,11 @@
 library_dirs = [ann_library_dir],

Re: [Numpy-discussion] Numpy steering group?

2011-05-26 Thread Pearu Peterson
On Fri, May 27, 2011 at 7:39 AM, Matthew Brett wrote:

> Hi,
>
> On Thu, May 26, 2011 at 9:32 PM, Pearu Peterson
>  wrote:
> > Hi,
> >
> > Would it be possible to set up a signing system where anyone who would like
> > to support Clint could sign, and to advertise the system on relevant mailing
> > lists?
> > This would provide a larger body of supporters for this letter and perhaps
> > it will have a greater impact on whoever the letter is addressed to.
> > Personally, I would be happy to sign such a letter.
> >
> > On the letter: it should also mention the scipy community, as they benefit
> > most from the ATLAS speed.
>
> Maybe it would be best phrased then as 'numpy and scipy developers'
> instead of the steering group?
>
> I'm not sure how this kind of thing works for tenure letters, I would
> guess that if there are a very large number of signatures it might be
> difficult to see who is being represented...  I'm open to suggestions.
>  I can also ask Clint.
>
> I've added you as an editor - would you consider adding your name at
> the end, and maybe something about scipy? - you know the scipy blas /
> lapack stuff much better than I do.
>

Done, I have added my name. The document is currently numpy-oriented and I am
not sure where scipy should come in..

Technical summary of the situation with scipy blas/lapack stuff:
The main difference between numpy and scipy, lapack-wise, is that numpy has
a lite C version of a few lapack routines for the case where the lapack
libraries are not available at build time, while for scipy the lapack
libraries are prerequisites, as scipy provides interfaces to a much larger
number of lapack routines. Having ATLAS in addition would greatly increase
the performance of these routines.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy steering group?

2011-05-26 Thread Pearu Peterson
Hi,

Would it be possible to set up a signing system where anyone who would like
to support Clint could sign, and to advertise the system on relevant mailing
lists?
This would provide a larger body of supporters for this letter and perhaps
it will have a greater impact on whoever the letter is addressed to.
Personally, I would be happy to sign such a letter.

On the letter: it should also mention the scipy community, as they benefit
most from the ATLAS speed.

Best regards,
Pearu


On Fri, May 27, 2011 at 12:03 AM, Matthew Brett wrote:

> Hi,
>
> On Wed, May 4, 2011 at 9:24 AM, Robert Kern  wrote:
> > On Wed, May 4, 2011 at 11:14, Matthew Brett 
> wrote:
> >> Hi,
> >>
> >> On Tue, May 3, 2011 at 7:58 PM, Robert Kern 
> wrote:
> >
> >>> I can't speak for the rest of the group, but as for myself, if you
> >>> would like to draft such a letter, I'm sure I will agree with its
> >>> contents.
> >>
> >> Thank you - sadly I am not confident in deserving your confidence, but
> >> I will do my best to say something sensible.   Any objections to a
> >> public google doc?
> >
> > Even better!
>
> I've put up a draft here:
>
> numpy-whaley-support -
>
> https://docs.google.com/document/d/1gPhUUjWqNpRatw90kCqL1WPWvn1yicf2VAowWSyHlno/edit?hl=en_US&authkey=CPv49_cK
>
> I didn't know who to put as signatories.  Maybe an extended steering
> group like (from http://scipy.org/Developer_Zone):
>
> Jarrod Millman
> Eric Jones
> Robert Kern
> Travis Oliphant
> Stefan van der Walt
>
> plus:
>
> Pauli
> Ralf
> Chuck
>
> or something like that?  Anyone else care to sign / edit?  Mark W for
> example?  Sorry, I haven't been following the numpy commits very
> carefully of late.
>
> Best,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
On Tue, May 17, 2011 at 8:05 AM, Pearu Peterson wrote:

>
>
> On Tue, May 17, 2011 at 12:04 AM, Nikolas Tautenhahn wrote:
>
>> Hi,
>>
>> > Here
>> >
>> >
>> http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy
>> >
>> > you can find bitarray with numpy support.
>> >
>>
>> Thanks, that looks promising - to get a numpy array, I need to do
>>
>> numpy.array(bitarray.bitarray(numpy.binary_repr(i, l)))
>>
>> for an integer i and l with i < 2**l, right?
>>
>>
> If l < 64 and little endian is assumed then you can use the
>
>   fromword(i, l)
>
> method:
>
> >>> from libtiff import bitarray
> >>> barr = bitarray.bitarray(0, 'little')
> >>> barr.fromword(3,4)
> >>> barr
> bitarray('1100')
>
> that will append 4 bits of the value 3 to the bitarray barr.
>

>>> numpy.array(barr)
array([ True,  True, False, False], dtype=bool)

to complete the example...

Pearu


> Also check out various bitarray `to*` and `from*` methods.
>
> HTH,
> Pearu
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
On Tue, May 17, 2011 at 12:04 AM, Nikolas Tautenhahn  wrote:

> Hi,
>
> > Here
> >
> >
> http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy
> >
> > you can find bitarray with numpy support.
> >
>
> Thanks, that looks promising - to get a numpy array, I need to do
>
> numpy.array(bitarray.bitarray(numpy.binary_repr(i, l)))
>
> for an integer i and l with i < 2**l, right?
>
>
If l < 64 and little endian is assumed then you can use the

  fromword(i, l)

method:

>>> from libtiff import bitarray
>>> barr = bitarray.bitarray(0, 'little')
>>> barr.fromword(3,4)
>>> barr
bitarray('1100')

that will append 4 bits of the value 3 to the bitarray barr.

Also check out various bitarray `to*` and `from*` methods.

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
Hi,

I have used bitarray for that

  http://pypi.python.org/pypi/bitarray/

Here


http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy

you can find bitarray with numpy support.
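
If pulling in an extra dependency is undesirable, a pure-numpy sketch of the
same conversion (most significant bit first, matching the order of
numpy.binary_repr) might look like this:

import numpy as np

def int_to_bits(i, l):
    # unpack the lowest l bits of integer i into a boolean array, MSB first
    bits = (i >> np.arange(l - 1, -1, -1)) & 1
    return bits.astype(bool)

print(int_to_bits(23, 5))   # [ True False  True  True  True]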

HTH,
Pearu

On Mon, May 16, 2011 at 9:55 PM, Nikolas Tautenhahn  wrote:

> Hi,
>
> for some research, I need to convert lots of integers into their bit
> representation - but as a bit array, not a string like
> numpy.binary_repr() returns it.
>
> So instead of
> In [22]: numpy.binary_repr(23)
> Out[22]: '10111'
>
>
> I'd need:
> numpy.binary_magic(23)
> Out: array([ True, False,  True,  True,  True], dtype=bool)
>
> is there any way to do this efficiently?
>
> best regards,
> Nik
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py complications

2011-05-12 Thread Pearu Peterson
On Thu, May 12, 2011 at 4:14 PM, Jose Gomez-Dans wrote:

> Hi,
>
> We have some fortran code that we'd like to wrap using f2py. The code
> consists of a library that we compile into an .so file, and a file that
> wraps that library, which is then wrapped by f2py to provide a clean
> interface to the library. The process thus includes two steps:
> 1.- Use a makefile to compile the library
> 2.- Use f2py to create the python bindings from a fortran file, and link
> dynamically to the library created in the previous step.
>
> Now, I don't see why just a call to f2py shouldn't suffice (we don't really
> need to have an .so lying around, and it implies that we have to set eg
> $LD_LIBRARY_PATH to search for it).
>

It would be sufficient to just call f2py once.


> I thought about using a pyf file for this, and use the only :: option:
> $ f2py -h my_model.pyf -m my_model  my_wrapper.f90 only: func1 func2 func3
> : all_my_other_files.f even_more_files.f90
>

The above command (using the -h option) will just create the my_model.pyf
file; no extra magic here,


> $ f2py -c  -m my_model  --f90flags="-fdefault-real-8 -O3 -march=native
> -m32" --f90exec=/usr/bin/gfortran  --f77exec=/usr/bin/gfortran --opt=-O3
>  my_model.pyf
>
>
You need to include all .f and .f90 files in the f2py command, and -m has no
effect when a .pyf file is specified:

f2py -c   --f90flags="-fdefault-real-8 -O3 -march=native -m32"
--f90exec=/usr/bin/gfortran  --f77exec=/usr/bin/gfortran --opt=-O3
 my_model.pyf all_my_other_files.f even_more_files.f90

This command (with a .pyf file on the command line) reads only the my_model.pyf
file and creates the wrapper code. It does not scan any Fortran files; it only
compiles them (because of -c) and links them into the extension module.

In fact, IIRC, the two above command lines can be joined to one:

  f2py  -m my_model  my_wrapper.f90 only: func1 func2 func3 :
all_my_other_files.f even_more_files.f90  --f90flags="-fdefault-real-8 -O3
-march=native -m32" --f90exec=/usr/bin/gfortran  --f77exec=/usr/bin/gfortran
--opt=-O3


> This, however, doesn't seem to work, with python complaining about missing
> things. If I put all my *.f and *.f90 files after my_model.pyf (which
> doesn't seem to list them in the file), I get undefined symbol errors when
> importing the .so in python.
>
>
Are you sure that you specified all needed Fortran files in the f2py command
line? Where are these symbols defined that are reported to be undefined?

Additionally, it would be great to have this compilation in a
> distutils-friendly package, but how do you specify all these compiler flags?
>
>
It is possible. See numpy/distutils/tests for examples. To use gfortran, run

  python setup.py build --fcompiler=gnu95
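
A minimal numpy.distutils sketch for reference (file names are placeholders
taken from the commands above); the compiler and its flags can then be chosen
on the command line, e.g. python setup.py config_fc --fcompiler=gnu95 build:

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration(None, parent_package, top_path)
    # the .pyf signature file plus all Fortran sources go into one extension
    config.add_extension('my_model',
                         sources=['my_model.pyf', 'my_wrapper.f90',
                                  'all_my_other_files.f',
                                  'even_more_files.f90'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)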

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Sat, May 7, 2011 at 12:00 AM, DJ Luscher  wrote:

> Pearu Peterson  gmail.com> writes:
> >
> > On Fri, May 6, 2011 at 10:18 PM, DJ Luscher  lanl.gov> wrote:
> >
> > I have encountered another minor hangup.  For assumed-shape array-valued
> > functions defined within a fortran module there seems to be some trouble in
> > the autogenerated subroutine wrapper interface.  I think it has to do with
> > the order in which variables are declared in the interface specification.
> >
> > in the subroutine interface specification the size(a) and size(b) are used to
> > dimension outer above (before) the declaration of a and b themselves.  This
> > halts my compiler.  The wrapper seems to compile OK if a and b are declared
> > above outer in the interface.
> > thanks again for your help,
> >
> > DJ
> >
> > Your example works fine here:
> > $ f2py  -m foo foo_out.f90 -c
> > $ python -c 'import foo; print foo.foo.outer([1,2],[3,4])'
> > [[ 3.  4.]
> >  [ 6.  8.]]
> > with outer defined before a and b. I would presume that the compiler would
> > give a warning, at least, when this would be a problem. Anyway, try to apply
> > the following patch to see if changing the order will fix the hang.
> > Pearu
> >
> indeed - it works fine as is when I compile with gfortran, but not ifort.
>  I
> suppose there may be some compiler option for ifort to overcome that, but I
> couldn't tell from a brief scan of the doc.
>
> the patch works when I add in two separate loops over args: (~line 138 in
> func2subr.py):
>
>for a in args:
>if a in dumped_args: continue
> if isscalar(vars[a]):
> add(var2fixfortran(vars,a,f90mode=f90mode))
>dumped_args.append(a)
>for a in args:
>if a in dumped_args: continue
>if isintent_in(vars[a]):
>add(var2fixfortran(vars,a,f90mode=f90mode))
>dumped_args.append(a)
>
> not sure if that was your intention,


yes, that is what the patch was generated from.


> but when I tried to use just "isintent_in"
> or to include both conditions in same loop,


that would not work as you noticed..


> the input arrays (a and b) were
> declared ahead of the derived shape-array (outer), but also ahead of the
> integers used to define a and b (e.g. f2py_a_d0).
>
>
I have committed the patch:

  https://github.com/numpy/numpy/commit/6df2ac21

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Fri, May 6, 2011 at 10:18 PM, DJ Luscher  wrote:

>
> I have encountered another minor hangup.  For assumed-shape array-valued
> functions defined within a fortran module there seems to be some trouble in
> the
> autogenerated subroutine wrapper interface.  I think it has to do with the
> order
> in which variables are declared in the interface specification.
>
> for example:
> 
>  ! -*- fix -*-
>  module foo
>  contains
>  function outer(a,b)
>implicit none
>real, dimension(:), intent(in)   :: a, b
>real, dimension(size(a),size(b)) :: outer
>
>outer = spread(a,dim=2,ncopies=size(b) ) *
> &  spread(b,dim=1,ncopies=size(a) )
>
>  end function outer
>  end module
>
> when compiled by f2py creates the file:
> 
> ! -*- f90 -*-
> ! This file is autogenerated with f2py (version:2)
> ! It contains Fortran 90 wrappers to fortran functions.
>
>  subroutine f2pywrap_foo_outer (outerf2pywrap, a, b, f2py_a_d0, f2p&
> &y_b_d0)
>  use foo, only : outer
>  integer f2py_a_d0
>  integer f2py_b_d0
>  real a(f2py_a_d0)
>  real b(f2py_b_d0)
>  real outerf2pywrap(size(a),size(b))
>  outerf2pywrap = outer(a, b)
>  end subroutine f2pywrap_foo_outer
>
>  subroutine f2pyinitfoo(f2pysetupfunc)
>  interface
>  subroutine f2pywrap_foo_outer (outerf2pywrap, outer, a, b, f2py_a_&
> &d0, f2py_b_d0)
>  integer f2py_a_d0
>  integer f2py_b_d0
>  real outer(size(a),size(b))
>  real a(f2py_a_d0)
>  real b(f2py_b_d0)
>  real outerf2pywrap(size(a),size(b))
>  end subroutine f2pywrap_foo_outer
>  end interface
>  external f2pysetupfunc
>  call f2pysetupfunc(f2pywrap_foo_outer)
>  end subroutine f2pyinitfoo
>
> in the subroutine interface specification the size(a) and size(b) are used
> to
> dimension outer above (before) the declaration of a and b themselves.  This
> halts my compiler.  The wrapper seems to compile OK if a and b are declared
> above outer in the interface.
> thanks again for your help,
> DJ
>
>
Your example works fine here:

$ f2py  -m foo foo_out.f90 -c
$ python -c 'import foo; print foo.foo.outer([1,2],[3,4])'
[[ 3.  4.]
 [ 6.  8.]]

with outer defined before a and b. I would presume that the compiler would
give a warning, at least, when this would be a problem. Anyway, try to apply
the following patch:


diff --git a/numpy/f2py/func2subr.py b/numpy/f2py/func2subr.py
index 0f76920..f746108 100644
--- a/numpy/f2py/func2subr.py
+++ b/numpy/f2py/func2subr.py
@@ -90,7 +90,6 @@ def createfuncwrapper(rout,signature=0):
 v['dimension'][i] = dn
 rout['args'].extend(extra_args)
 need_interface = bool(extra_args)
-

 ret = ['']
 def add(line,ret=ret):
@@ -143,8 +142,13 @@ def createfuncwrapper(rout,signature=0):
 dumped_args.append(a)
 for a in args:
 if a in dumped_args: continue
+if isintent_in(vars[a]):
+add(var2fixfortran(vars,a,f90mode=f90mode))
+dumped_args.append(a)
+for a in args:
+if a in dumped_args: continue
 add(var2fixfortran(vars,a,f90mode=f90mode))
-
+
 add(l)

 if need_interface:


to see if changing the order will fix the hang.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Fri, May 6, 2011 at 5:55 PM, DJ Luscher  wrote:

> Pearu Peterson  gmail.com> writes:
> >
> >
> > Thanks for the bug report!These issues are now fixed in:
> https://github.com/numpy/numpy/commit/f393b604  Ralf, feel free to apply
> this
> changeset to 1.6.x branch if appropriate.Regards,Pearu
> >
>
> Excellent! Thank you.
>
> I'll cautiously add another concern because I believe it is related.  Using
> f2py to compile subroutines where the dimensions of the result variable are
> derived from the two-argument usage of size() on an assumed-shape input
> variable does not compile for me.
>
> example:
> 
>
>subroutine trans(x,y)
>
>  implicit none
>
>  real, intent(in), dimension(:,:) :: x
>  real, intent(out), dimension( size(x,2), size(x,1) ) :: y
>
>  integer :: N, M, i, j
>
>  N = size(x,1)
>  M = size(x,2)
>
>  DO i=1,N
>do j=1,M
>  y(j,i) = x(i,j)
>END DO
>  END DO
>
>end subroutine trans
>
> For this example the autogenerated fortran wrapper is:
> 
>  subroutine f2pywraptrans (x, y, f2py_x_d0, f2py_x_d1)
>  integer f2py_x_d0
>  integer f2py_x_d1
>  real x(f2py_x_d0,f2py_x_d1)
>  real y(shape(x,-1+2),shape(x,-1+1))
>  interface
>  subroutine trans(x,y)
>  real, intent(in),dimension(:,:) :: x
>  real, intent(out),dimension( size(x,2), size(x,1) ) :: y
>  end subroutine trans
>  end interface
>  call trans(x, y)
>  end
>
> which (inappropriately?) translates the fortran SIZE(var,dim) into fortran
> SHAPE(var, kind).  Please let me know if it is poor form for me to follow on
> with this secondary issue, but it seems like it is related and a
> straightforward fix.
>

This issue is now fixed in

  https://github.com/numpy/numpy/commit/a859492c

I had to implement a size function in C that can be called both as size(var)
and size(var, dim).
The size-to-shape mapping feature is now removed. I have updated the
corresponding release notes in

https://github.com/numpy/numpy/commit/1f2e751b

Thanks for testing these new f2py features,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-05 Thread Pearu Peterson
On Thu, May 5, 2011 at 11:51 PM, DJ Luscher  wrote:

>
> Ralf Gommers  googlemail.com> writes:
>
> >
> > Hi,
> >
> > I am pleased to announce the availability of the second release
> > candidate of NumPy 1.6.0.
> >
> > Compared to the first release candidate, one segfault on (32-bit
> > Windows + MSVC) and several memory leaks were fixed. If no new
> > problems are reported, the final release will be in one week.
> >
> > Sources and binaries can be found at
> > http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/
> > For (preliminary) release notes see below.
> >
> > Enjoy,
> > Ralf
> >
> > =
> > NumPy 1.6.0 Release Notes
> > =
> >
> > Fortran assumed shape array and size function support in ``numpy.f2py``
> > ---
> >
> > F2py now supports wrapping Fortran 90 routines that use assumed shape
> > arrays.  Before such routines could be called from Python but the
> > corresponding Fortran routines received assumed shape arrays as zero
> > length arrays which caused unpredicted results. Thanks to Lorenz
> > Hüdepohl for pointing out the correct way to interface routines with
> > assumed shape arrays.
> >
> > In addition, f2py interprets Fortran expression ``size(array, dim)``
> > as ``shape(array, dim-1)`` which makes it possible to automatically
> > wrap Fortran routines that use two argument ``size`` function in
> > dimension specifications. Before users were forced to apply this
> > mapping manually.
> >
>
>
> Regarding the f2py support for assumed shape arrays:
>
> I'm just struggling along trying to learn how to use f2py to interface with
> fortran source, so please be patient if I am missing something obvious.
>  That
> said, in test cases I've run the new f2py assumed-shape-array support in
> Numpy
> 1.6.0.rc2 seems to conflict with the support for f90-style modules.  For
> example:
>
> 
>
>   ! -*- fix -*-
>
>   module easy
>
>   real, parameter :: anx(4) = (/1.,2.,3.,4./)
>
>   contains
>
>   subroutine sum(x, res)
> implicit none
> real, intent(in) :: x(:)
> real, intent(out) :: res
>
> integer :: i
>
> !print *, "sum: size(x) = ", size(x)
>
> res = 0.0
>
> do i = 1, size(x)
>   res = res + x(i)
> enddo
>
>   end subroutine sum
>
>   end module easy
>
>
> when compiled with:
> f2py -c --fcompiler=intelem foo_mod.f90  -m e
>
> then:
>
> python
> import e
> print e.easy.sum(e.easy.anx)
>
> returns: 0.0
>
> Also (and I believe related) f2py can no longer compile source with
> assumed-shape array-valued functions within a module.  Even though the
> python-wrapped code did not function properly when called from python, it did
> work when called from other fortran code.  It seems that the interface has
> been broken.  The previous version of Numpy I was using was 1.3.0, all on
> Ubuntu 10.04, Python 2.6, and using the Intel fortran compiler.
>
> thanks for your consideration and feedback.
>
>
Thanks for the bug report!

These issues are now fixed in:

  https://github.com/numpy/numpy/commit/f393b604

Ralf, feel free to apply this changeset to 1.6.x branch if appropriate.

Regards,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
Note that hello.foo(a) returns the value of the Fortran `a`. This explains
the printed value 2.
So, use

>>> a = hello.foo(a)

and not

>>> hello.foo(a)

As Sameer noted in a previous mail, passing Python scalar values to Fortran
by reference is not possible because Python scalars are immutable. Hence the
need to use `a = foo(a)`.

HTH,
Pearu

On Tue, Apr 12, 2011 at 9:52 PM, Mathew Yeates  wrote:

> bizarre
> I get
> =
> >>> hello.foo(a)
>  Hello from Fortran!
>  a= 1
> 2
> >>> a
> 1
> >>> hello.foo(a)
>  Hello from Fortran!
>  a= 1
> 2
> >>> print a
> 1
> >>>
> =====
>
> i.e. The value of 2 gets printed! This is numpy 1.3.0
>
> -Mathew
>
>
> On Tue, Apr 12, 2011 at 11:45 AM, Pearu Peterson
>  wrote:
> >
> >
> > On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates 
> wrote:
> >>
> >> I have
> >> subroutine foo (a)
> >>  integer a
> >>  print*, "Hello from Fortran!"
> >>  print*, "a=",a
> >>  a=2
> >>  end
> >>
> >> and from python I want to do
> >> >>> a=1
> >> >>> foo(a)
> >>
> >> and I want a's value to now be 2.
> >> How do I do this?
> >
> > With
> >
> >  subroutine foo (a)
> >  integer a
> > !f2py intent(in, out) a
> >  print*, "Hello from Fortran!"
> >  print*, "a=",a
> >  a=2
> >  end
> >
> > you will have desired effect:
> >
> >>>> a=1
> >>>> a = foo(a)
> >>>> print a
> > 2
> >
> > HTH,
> > Pearu
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> >
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates  wrote:

> I have
> subroutine foo (a)
>  integer a
>  print*, "Hello from Fortran!"
>  print*, "a=",a
>  a=2
>  end
>
> and from python I want to do
> >>> a=1
> >>> foo(a)
>
> and I want a's value to now be 2.
> How do I do this?
>

With

 subroutine foo (a)
 integer a
!f2py intent(in, out) a
 print*, "Hello from Fortran!"
 print*, "a=",a
 a=2
 end

you will have desired effect:

>>> a=1
>>> a = foo(a)
>>> print a
2

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] division operator

2011-04-04 Thread Pearu Peterson


On 04/04/2011 01:49 PM, Alex Ter-Sarkissov wrote:
> I have 2 variables, say var1=10,var2=100. To divide I do either
> divide(float(var1),float(var2)) or simply float(var1)/float(var2). I'm
> just wondering if there's a smarter way of doing this?

 >>> from __future__ import division
 >>> var1 = 10
 >>> var2 = 100
 >>> print var1/var2
 0.1
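
For reference, a couple of equivalent alternatives without the __future__
import (a small sketch):

import numpy as np

var1, var2 = 10, 100
print(np.true_divide(var1, var2))   # 0.1
print(var1 / float(var2))           # 0.1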

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-Dev] ANN: Numpy 1.6.0 beta 1

2011-03-31 Thread Pearu Peterson
On Thu, Mar 31, 2011 at 1:00 PM, Scott Sinclair  wrote:

> On 31 March 2011 11:37, Pearu Peterson  wrote:
> >
> >
> > On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau 
> > wrote:
> >>
> >> On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen  wrote:
> >> > In article
> >> > ,
> >> >  Ralf Gommers  wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> I am pleased to announce the availability of the first beta of NumPy
> >> >> 1.6.0. Due to the extensive changes in the Numpy core for this
> >> >> release, the beta testing phase will last at least one month. Please
> >> >> test this beta and report any problems on the Numpy mailing list.
> >> >>
> >> >> Sources and binaries can be found at:
> >> >> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
> >> >> For (preliminary) release notes see below.
> >>
> >> I see a segfault on Ubuntu 64 bits for the test
> >> TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
> >> Am I the only one seeing it ?
> >>
> >
> > The test work here ok on Ubuntu 64 with numpy master. Could you try the
> > maintenance/1.6.x branch where the related bugs are fixed.
>
> For what it's worth, the maintenance/1.6.x branch works for me on 64-bit
> Ubuntu:
>
> (numpy-1.6.x)scott@godzilla:~$ python -c "import numpy; numpy.test()"
>

You might want to run

 python -c "import numpy; numpy.test('full')"

as the corresponding test is decorated as slow.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-Dev] ANN: Numpy 1.6.0 beta 1

2011-03-31 Thread Pearu Peterson
On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau wrote:

> On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen  wrote:
> > In article
> > ,
> >  Ralf Gommers  wrote:
> >
> >> Hi,
> >>
> >> I am pleased to announce the availability of the first beta of NumPy
> >> 1.6.0. Due to the extensive changes in the Numpy core for this
> >> release, the beta testing phase will last at least one month. Please
> >> test this beta and report any problems on the Numpy mailing list.
> >>
> >> Sources and binaries can be found at:
> >> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
> >> For (preliminary) release notes see below.
>
> I see a segfault on Ubuntu 64 bits for the test
> TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
> Am I the only one seeing it ?
>
>
The test works fine here on Ubuntu 64 with numpy master. Could you try the
maintenance/1.6.x branch, where the related bugs are fixed?

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array views

2011-03-29 Thread Pearu Peterson
On Tue, Mar 29, 2011 at 11:03 AM, Dag Sverre Seljebotn <
d.s.seljeb...@astro.uio.no> wrote:

>
> I think it should be a(1:n*stride:stride) or something.
>
>
Yes, it was my typo and I assumed that n is the length of the original
array.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array views

2011-03-29 Thread Pearu Peterson
On Tue, Mar 29, 2011 at 8:13 AM, Pearu Peterson wrote:

>
>
> On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden  wrote:
>
>> Den 28.03.2011 19:12, skrev Pearu Peterson:
>> >
>> > FYI, f2py in numpy 1.6.x supports also assumed shape arrays.
>>
>> How did you do that? Chasm-interop, C bindings from F03, or marshalling
>> through explicit-shape?
>>
>
> The latter.
>  Basically, if you have
>
> subroutine foo(a)
> real a(:)
> end
>
> then f2py automatically creates a wrapper subroutine
>
> subroutine wrapfoo(a, n)
> real a(n)
> integer n
> !f2py intent(in) :: a
> !f2py intent(hide) :: n = shape(a,0)
> interface
> subroutine foo(a)
> real a(:)
> end
> end interface
> call foo(a)
> end
>
> that can be wrapped with f2py in ordinary way.
>
>
>> Can f2py pass strided memory from NumPy to Fortran?
>>
>>
> No. I haven't thought about it.
>
>
Now, after a little bit of thinking and testing, I think supporting strided
arrays in f2py is easily doable. For the example above, f2py just has to
generate the following wrapper subroutine

subroutine wrapfoo(a, stride, n)
real a(n)
integer n, stride
!f2py intent(in) :: a
!f2py intent(hide) :: n = shape(a,0)
!f2py intent(hide) :: stride = getstrideof(a)
interface
subroutine foo(a)
real a(:)
end
end interface
call foo(a(1:stride:n))
end

Now the question is: how important would this feature be? How high should I
put it on my todo list?
If there is interest, the corresponding numpy ticket should be assigned to
me.

Best regards,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array views

2011-03-28 Thread Pearu Peterson
On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden  wrote:

> Den 28.03.2011 19:12, skrev Pearu Peterson:
> >
> > FYI, f2py in numpy 1.6.x supports also assumed shape arrays.
>
> How did you do that? Chasm-interop, C bindings from F03, or marshalling
> through explicit-shape?
>

The latter.
 Basically, if you have

subroutine foo(a)
real a(:)
end

then f2py automatically creates a wrapper subroutine

subroutine wrapfoo(a, n)
real a(n)
integer n
!f2py intent(in) :: a
!f2py intent(hide) :: n = shape(a,0)
interface
subroutine foo(a)
real a(:)
end
end interface
call foo(a)
end

that can be wrapped with f2py in ordinary way.


> Can f2py pass strided memory from NumPy to Fortran?
>
>
No. I haven't thought about it.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Array views

2011-03-28 Thread Pearu Peterson
On Mon, Mar 28, 2011 at 6:01 PM, Sturla Molden  wrote
>
>
> I'll try to clarify this:
>
> ** Most Fortran 77 compilers (and beyond) assume explicit-shape and
> assumed-size arrays are contiguous blocks of memory. That is, arrays
> declared like a(m,n) or a(m,*). They are usually passed as a pointer to
> the first element. These are the only type of Fortran arrays f2py supports.
>

FYI, f2py in numpy 1.6.x supports also assumed shape arrays.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Fri, Mar 25, 2011 at 1:44 AM, Derek Homeier <
de...@astro.physik.uni-goettingen.de> wrote:

> On 24.03.2011, at 9:11AM, Pearu Peterson wrote:
> >
> > Intel-64bit:
> > ERROR: test_assumed_shape.TestAssumedShapeSumExample.test_all
> > --
> > Traceback (most recent call last):
> >  File "/sw/lib/python3.2/site-packages/nose/case.py", line 372, in setUp
> >try_run(self.inst, ('setup', 'setUp'))
> >  File "/sw/lib/python3.2/site-packages/nose/util.py", line 478, in
> try_run
> >return func()
> >  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line
> 352, in setUp
> >module_name=self.module_name)
> >  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line
> 73, in wrapper
> >memo[key] = func(*a, **kw)
> >  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line
> 134, in build_module
> >% (cmd[4:], asstr(out)))
> > RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_free.f90',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_use.f90',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/precision.f90']
> >Reading .f2py_f2cmap ...
> >Mapping "real(kind=rk)" to "double"
> >
> > Hmm, this should not happen as real(kind=rk) should be mapped to "float".
> > It seems that you have .f2py_f2cmap file lying around. Could you remove
> it and try again.
>
> Yes, it's in the tarball and was installed together with the f2py tests!
>

Indeed, the f2py test suite contains the .f2py_f2cmap file. Its effect should
be local to the corresponding test, but it seems it is not.
I'll look into it.
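
For reference, the .f2py_f2cmap file is just a plain text file containing a
Python dict literal that extends f2py's Fortran-to-C type map. Judging from
the log below, the test suite's copy presumably contains something along the
lines of (a sketch, the exact contents are an assumption):

dict(real = dict(rk = 'double'))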


> building extension "_test_ext_module_5403" sources
> f2py options: []
> f2py:>
> /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403module.c
> creating /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum
> creating
> /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7
> getctype: "real(kind=rk)" is mapped to C "float" (to override define
> dict(real = dict(rk="")) in
> /private/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpPo744G/.f2py_f2cmap
> file).
> ...
> gfortran:f77:
> /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f
>
> /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f:26.21:
>
>  real :: res
> 1
> Error: Symbol 'res' at (1) already has basic type of REAL
>
> /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f:26.21:
>
>  real :: res
> 1
> Error: Symbol 'res' at (1) already has basic type of REAL
> ...
>
> f2py is the one installed with this numpy version, gfortran is
> COLLECT_GCC=/sw/bin/gfortran
>
> COLLECT_LTO_WRAPPER=/sw/lib/gcc4.5/libexec/gcc/x86_64-apple-darwin10.6.0/4.5.2/lto-wrapper
> Target: x86_64-apple-darwin10.6.0
> Configured with: ../gcc-4.5.2/configure --prefix=/sw
> --prefix=/sw/lib/gcc4.5 --mandir=/sw/share/man --infodir=/sw/lib/gcc4.5/info
> --enable-languages=c,c++,fortran,objc,obj-c++,java --with-gmp=/sw
> --with-libiconv-prefix=/sw --with-ppl=/sw --with-cloog=/sw --with-mpc=/sw
> --with-system-zlib --x-includes=/usr/X11R6/include
> --x-libraries=/usr/X11R6/lib --program-suffix=-fsf-4.5 --enable-lto
> Thread model: posix
> gcc version 4.5.2 (GCC)
>
>
Can you send me the _test_ext_module_5403-f2pywrappers.f file? It should
still exist there when the compilation fails.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Thu, Mar 24, 2011 at 10:11 AM, Pearu Peterson
wrote:

>
> Regarding this test failure, could you hack the
> numpy/f2py/tests/test_kind.py script by adding the following code
>
> for i in range(20):
>   print '%s -> %s, %s' % (i, selected_real_kind(i), selectedrealkind(i))
>
> and send me the output? Also, what Fortran compiler version has been used
> to build the test modules?
>

Even better, you can report the result to

  http://projects.scipy.org/numpy/ticket/1767

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Thu, Mar 24, 2011 at 2:04 AM, Derek Homeier <
de...@astro.physik.uni-goettingen.de> wrote:

> On 24 Mar 2011, at 00:34, Derek Homeier wrote:
>
> > tests with the fink-installed pythons on MacOS X mostly succeeded,
> > with one failure in python2.4 and a couple of issues seemingly
> > related to PPC floating point accuracy, as below:
> >
> Probably last update for tonight: with the 'full' test suite, there's one
> additional failure and error, respectively under 10.5/ppc and
> 10.6/x86_64 (in all Python versions):
>
> PowerPC:
>
> FAIL: test_kind.TestKind.test_all
> --
> Traceback (most recent call last):
>   File "/sw/lib/python2.5/site-packages/nose/case.py", line 187, in runTest
>self.test(*self.arg)
>  File "/sw/lib/python2.5/site-packages/numpy/f2py/tests/test_kind.py", line
> 30, in test_all
>'selectedrealkind(%s): expected %r but got %r' %  (i,
> selected_real_kind(i), selectedrealkind(i)))
>  File "/sw/lib/python2.5/site-packages/numpy/testing/utils.py", line 34, in
> assert_
>raise AssertionError(msg)
> AssertionError: selectedrealkind(16): expected 10 but got 16
>

Regarding this test failure, could you hack the
numpy/f2py/tests/test_kind.py script by adding the following code

for i in range(20):
  print '%s -> %s, %s' % (i, selected_real_kind(i), selectedrealkind(i))

and send me the output? Also, what Fortran compiler version has been used to
build the test modules?



> Intel-64bit:
> ERROR: test_assumed_shape.TestAssumedShapeSumExample.test_all
> --
> Traceback (most recent call last):
>   File "/sw/lib/python3.2/site-packages/nose/case.py", line 372, in setUp
>try_run(self.inst, ('setup', 'setUp'))
>  File "/sw/lib/python3.2/site-packages/nose/util.py", line 478, in try_run
>return func()
>  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 352,
> in setUp
>module_name=self.module_name)
>  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 73,
> in wrapper
>memo[key] = func(*a, **kw)
>  File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 134,
> in build_module
>% (cmd[4:], asstr(out)))
> RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_free.f90',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_use.f90',
> '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/precision.f90']
>
   Reading .f2py_f2cmap ...
   Mapping "real(kind=rk)" to "double"

Hmm, this should not happen as real(kind=rk) should be mapped to "float".
It seems that you have a .f2py_f2cmap file lying around. Could you remove it
and try again?

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.6: branching and release notes

2011-03-13 Thread Pearu Peterson
On Sun, Mar 13, 2011 at 11:22 AM, Ralf Gommers
wrote:

> Hi all,
>
> On Tuesday (~2am GMT) I plan to create the 1.6.x branch and tag the
> first beta. So please get your last commits for 1.6 in by Monday
> evening.
>
> Also, please review and add to the 1.6.0 release notes. I put in
> headers for several items that need a few lines in the notes, I hope
> this can be filled in by the authors of those features (Charles:
> Legendre polynomials, Pearu: assumed shape arrays, Mark: a bunch of
> stuff).
>

Done for assumed shape arrays and size function support.

Best regards,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] how to compile Fortran using setup.py

2011-03-12 Thread Pearu Peterson
On Fri, Mar 11, 2011 at 3:58 AM, Ondrej Certik  wrote:

> Hi,
>
> I spent about an hour googling and didn't figure this out. Here is my
> setup.py:
>
> setup(
>name = "libqsnake",
>cmdclass = {'build_ext': build_ext},
>version = "0.1",
>packages = [
>'qsnake',
>'qsnake.calculators',
>'qsnake.calculators.tests',
>'qsnake.data',
>'qsnake.mesh2d',
>'qsnake.tests',
>],
>package_data = {
>'qsnake.tests': ['phaml_data/domain.*'],
>},
>include_dirs=[numpy.get_include()],
>ext_modules = [Extension("qsnake.cmesh", [
>"qsnake/cmesh.pyx",
>"qsnake/fmesh.f90",
>])],
>description = "Qsnake standard library",
>license = "BSD",
> )
>
>
You can specify Fortran code that you don't want to process with f2py in the
libraries list, and then use the corresponding library in the extension,
for example:

setup(...
    libraries = [('foo', dict(sources=['qsnake/fmesh.f90']))],
    ext_modules = [Extension("qsnake.cmesh",
                             sources = ["qsnake/cmesh.pyx"],
                             libraries = ['foo'])],
    ...
)

See also scipy/integrate/setup.py, which resolves the same issue using the
configuration function approach instead; a sketch of that variant follows below.
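
For completeness, a rough sketch of what that configuration-function variant
could look like for the layout described above (untested; file names are taken
from the question, and cmesh.c is assumed to have been generated from
cmesh.pyx beforehand):

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration(None, parent_package, top_path)
    # plain Fortran sources that should not be processed with f2py
    config.add_library('foo', sources=['qsnake/fmesh.f90'])
    # the extension just links against the helper library
    config.add_extension('qsnake.cmesh', sources=['qsnake/cmesh.c'],
                         libraries=['foo'])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)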

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Thanks!
Pearu

On Thu, Dec 2, 2010 at 11:08 PM, Charles R Harris
 wrote:
>
>
> On Thu, Dec 2, 2010 at 1:52 PM, Pearu Peterson 
> wrote:
>>
>> Hi,
>>
>> I have followed Development workflow instructions in
>>
>>  http://docs.scipy.org/doc/numpy/dev/gitwash/
>>
>> but I am having a problem with the last step:
>>
>> $ git push upstream ticket1679:master
>> fatal: remote error:
>>  You can't push to git://github.com/numpy/numpy.git
>>  Use g...@github.com:numpy/numpy.git
>>
>
> Do what the message says, the first address is readonly. You can change the
> settings in .git/config, mine looks like
>
> [core]
>     repositoryformatversion = 0
>     filemode = true
>     bare = false
>     logallrefupdates = true
> [remote "origin"]
>     fetch = +refs/heads/*:refs/remotes/origin/*
>     url = g...@github.com:charris/numpy
> [branch "master"]
>     remote = origin
>     merge = refs/heads/master
> [remote "upstream"]
>     url = g...@github.com:numpy/numpy
>     fetch = +refs/heads/*:refs/remotes/upstream/*
> [alias]
>     mb = merge --no-ff
>
> Where upstream is the numpy repository.
>
> Chuck
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Hi,

I have followed Development workflow instructions in

  http://docs.scipy.org/doc/numpy/dev/gitwash/

but I am having a problem with the last step:

$ git push upstream ticket1679:master
fatal: remote error:
  You can't push to git://github.com/numpy/numpy.git
  Use g...@github.com:numpy/numpy.git

What I am doing wrong?

Here's some additional info:
$ git remote -v show
origin  g...@github.com:pearu/numpy.git (fetch)
origin  g...@github.com:pearu/numpy.git (push)
upstreamgit://github.com/numpy/numpy.git (fetch)
upstreamgit://github.com/numpy/numpy.git (push)
$ git branch -a
  master
* ticket1679
  remotes/origin/HEAD -> origin/master
  remotes/origin/maintenance/1.0.3.x
  remotes/origin/maintenance/1.1.x
  remotes/origin/maintenance/1.2.x
  remotes/origin/maintenance/1.3.x
  remotes/origin/maintenance/1.4.x
  remotes/origin/maintenance/1.5.x
  remotes/origin/master
  remotes/origin/ticket1679
  remotes/upstream/maintenance/1.0.3.x
  remotes/upstream/maintenance/1.1.x
  remotes/upstream/maintenance/1.2.x
  remotes/upstream/maintenance/1.3.x
  remotes/upstream/maintenance/1.4.x
  remotes/upstream/maintenance/1.5.x
  remotes/upstream/master


Thanks,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Atlas build issues

2010-10-20 Thread Pearu Peterson


On 10/20/2010 11:48 AM, Gael Varoquaux wrote:
> On Wed, Oct 20, 2010 at 04:41:38PM +0900, David wrote:
>>> - ldd /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so: does it
>>> load the libraries you think are you loading ?
>
> For the dynamic libraries, it seems so. I am using static libraries for
> atlas/blas (.a)
>
>>> - nm atlas_libraries | grep zgesdd_ for every library in atlas (I don't
>>> know how the recent ones work, but this function should normally be in
>>> libf77blas.so)

Hmm, zgesdd.f is part of LAPACK, so why should the function be in a BLAS
library?

>
> Nothing. That's clearly my problem, but I don't know how to fix it.
>
> Thanks for your suggestions,

I haven't built atlas libraries for a while and maybe something
has changed in atlas. That said, I would try the instructions from

http://www.scipy.org/Installing_SciPy/Linux#head-89e1f6afaa3314d98a22c79b063cceee2cc6313c

and maybe use some older atlas version.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Atlas build issues

2010-10-20 Thread Pearu Peterson


On 10/20/2010 10:23 AM, Gael Varoquaux wrote:
> I am really sorry to be landing on the mailing list with Atlas build
> issues. I usually manage be myself to build Atlas, but this time I have
> been fighting for a couple of days with little success.
>
> The reason I need to build Atlas is that our work computers are stuck on
> Mandriva 2008.0, in which the version of Atlas packaged by the system is
> not usable.
>
> Anyhow, I lost quite a while with Atlas 3.9.26, for which I was never
> able to build libaries that could be used in a '.so' (seemed that the
> -fPIC was not working, but really I don't understand what was going on in
> the maze of makefiles). After that I switched to 3.9;25, which I have
> already have gotten working on another system (Mandriva 2008, but 32
> bits). Know everything builds, but I get a missing symbol at the import:
>
> from numpy.linalg import lapack_lite
> ImportError: /volatile/varoquau/dev/numpy/numpy/linalg/lapack_lite.so:
> undefined symbol: zgesdd_
>
> I don't have g77 installed on the system at all, to avoid gfortran/g77
> mixups.
>
> I am attaching the stout of 'python setup.py build_ext -i', in case it
> contains useful information.
>
> Would anybody have suggestions with regards to what I am doing wrong?

I could not read the stdout but perhaps you didn't build atlas with 
complete lapack. The basic steps for that are described in
   http://math-atlas.sourceforge.net/errata.html#completelp

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Another merge at github

2010-10-18 Thread Pearu Peterson


On 10/16/2010 09:53 PM, Charles R Harris wrote:
> Here
> .
> This looks harmless but it makes the history really ugly. We need to get
> the word out *not* to do things this way.

Sorry, that was me and my git ignorance. I was trying to commit a small
patch but the push was rejected. I presumed that my tree was not up to date,
so I did `git pull` and then `git push` again, which resulted in the
unintended merge above.

I see that there are long discussions on the numpy mailing list about git
usage and misuse. I wonder whether this has converged to something that
could be used as a reference for git beginners like me.

Thanks and sorry about messing up the history,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] compile fortran from python

2010-10-10 Thread Pearu Peterson
Hi,

You can create a setup.py file containing

def configuration(parent_package='',top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration(None,parent_package,top_path)
    config.add_library('flib', sources = ['test.f95'])
    config.add_extension('ctest', sources = ['ctest.c'],
                         libraries = ['flib'])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)

#eof

Running

   python setup.py config_fc --fcompiler=gnu95 build

will build a Fortran library flib.a and link it to
an extension module ctest.so. Use build_ext --inplace
if ctest.so should end up in the current working directory.

Is that what you wanted to achieve?

HTH,
Pearu

On 10/09/2010 02:18 PM, Ioan Ferencik wrote:
> I would like to compile some Fortran  code using python,  build a
> shared library, and link to it using python. But I get a message
> saying the compiler does not recognise the extension of my file.
>
> this is my command:
>
> gcc -fPIC -c -shared -fno-underscoring test.f95 -o ./lib/libctest.so.1.0
>
> what is the easiest method to achieve this?
>I suspect  I could create a custom extension and customise the
> unixccompiler object or could I just use the compiler object defined
> by f2py(fcompiler).
>
> Cheers
>
>
> Ioan Ferencik
> PhD student
> Aalto University
> School of Science and Technology
> Faculty Of Civil and Env. Engineering
> Lahti Center
> Tel: +358505122707
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Please help on compilation fortran module using f2py

2010-10-06 Thread Pearu Peterson
Hi,

On 10/06/2010 04:57 AM, Jing wrote:
>   Hi, everyone:
>
> I am new to the python numpy and f2py. I really need help on compiling
> FORTRAN module using f2py. I have been searched internet without any
> success. Here is my setup: I have a Ubuntu 10.04 LTS with python 2.6,
> numpy 1.3.0 and f2py 2 (installed from ubuntu) and gfortran compiler
> 4.4. I have a simple Fortran subroutine (in CSM_CH01_P1_1a_F.f95 file)
> as shown below:
> 
> !
>
> subroutine sim_model_1(ts, n, a)
>
> !
>
> !f2py integer, intent(in) :: ts, n
>
> !f2py real,dimension(n), intent(inout) :: a
>
> implicit none

The problem is in the Fortran code. The ts, n, and a variables need to be
declared for the Fortran compiler too. The f2py directives are invisible
to the Fortran compiler; they are just comments that are used by f2py.
So, try adding these lines to the Fortran code:

   integer, intent(in) :: ts, n
   real, dimension(n) :: a

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py problem with complex inout in subroutine

2010-07-24 Thread Pearu Peterson
Hi Mark,

On Mon, Jul 19, 2010 at 11:49 AM, Mark Bakker  wrote:
> Thanks for fixing this, Pearu.
> Complex arrays with intent(inout) don't seem to work either.
> They compile, but a problem occurs when calling the routine.

What problem?

> Did you fix that as well?

I guess so, see below.

> Here's an example that doesn't work (sorry, I cannot update to svn 8478 on
> my machine right now):
>
>     subroutine test3(nlab,omega)
>     implicit none
>     integer, intent(in) :: nlab
>     complex(kind=8), dimension(nlab), intent(inout) :: omega
>     integer :: n
>     do n = 1,nlab
>     omega(n) = cmplx(1,1,kind=8)
>     end do
>     end subroutine test3

The example works fine here:

$ f2py -c -m foo test3.f90
>>> import foo
>>> from numpy import *
>>> omega=array([1,2,3,4],dtype='D')
>>> foo.test3(omega)
>>> print omega
--> print(omega)
[ 1.+1.j  1.+1.j  1.+1.j  1.+1.j]

If you cannot update numpy to the required revision, you can also modify
the broken file directly. It only involves replacing four lines with
one line in the numpy/f2py/cfuncs.py file.
See

  http://projects.scipy.org/numpy/changeset/8478

for details.

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py problem with complex inout in subroutine

2010-07-10 Thread Pearu Peterson


On 07/09/2010 02:03 PM, Mark Bakker wrote:
> Hello list. The following subroutine fails to compile with f2py.
> I use a complex variable with intent(inout). It works fine with two real
> variables, so I have a workaround, but it would be nicer with a complex
> variable.
> Any thoughts on what I am doing wrong?

The compilation failed because of typos in the generated code. This is fixed
in svn revision 8478.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
On Sun, Jun 13, 2010 at 4:45 PM, Nadav Horesh  wrote:
> int can be larger than numpy.int64 therefore it should be coerced to float64 
> (or float96/float128)

Ok, I see. The result's type is defined by the types of the operands, not
by their values. I guess this has been discussed earlier, but with small
operands this feature may be unexpected.
For example, by the same rule the result of int64 + int should be float64,
while currently it is int64.
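
A small sketch of the asymmetry, for the numpy versions discussed in this
thread (the exact reprs may differ between versions):

import numpy
print(type(numpy.int64(3) + 1))    # numpy.int64  -- the dtype is kept
print(type(numpy.uint64(3) + 1))   # numpy.float64 -- coerced, as discussed above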

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
Hi,
I just noticed some weird behavior in operations with uint64 and int,
heres an example:

>>> numpy.uint64(3)+1
4.0
>>> type(numpy.uint64(3)+1)
<type 'numpy.float64'>

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to resize numpy.memmap?

2010-06-06 Thread Pearu Peterson
Hi again,
To answer the second part of my question, here follows an example
demonstrating how to "resize" a memmap:

>>> fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
>>> fp._mmap.resize(11)
>>> cp = numpy.ndarray.__new__(numpy.memmap, (fp._mmap.size(),), 
>>> dtype=fp.dtype, buffer=fp._mmap, offset=0, order='C')
>>> cp[-1] = 99
>>> cp[1] = 33
>>> cp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0, 99], dtype=uint8)
>>> fp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0], dtype=uint8)
>>> del fp, cp
>>> fp = numpy.memmap('test.dat', mode='r')
>>> fp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0, 99], dtype=uint8)

Would there be any interest in turning the above code into a numpy.memmap
method, say resized(newshape)? For example, to resolve the original
problem, one could write

fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
fp = fp.resized(11)
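
A standalone helper along those lines, assembled from the steps shown in the
session above (the name and signature are hypothetical, and it relies on the
private _mmap attribute):

import numpy

def resized(fp, newsize):
    # grow the underlying mmap buffer and return a new memmap view of it
    fp._mmap.resize(newsize)
    return numpy.ndarray.__new__(numpy.memmap, (fp._mmap.size(),),
                                 dtype=fp.dtype, buffer=fp._mmap,
                                 offset=0, order='C')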

Regards,
Pearu

On Sun, Jun 6, 2010 at 10:19 PM, Pearu Peterson
 wrote:
> Hi,
>
> I am creating a rather large file (typically 100MBi-1GBi) with numpy.memmap
> but in some cases the initial estimate to the file size is just few
> bytes too small.
> So, I was trying to resize the memmap with a failure as demonstrated
> with the following
> example:
>
>>>> fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
>>>> fp.resize(11, refcheck=False)
> ...
> ValueError: cannot resize this array:  it does not own its data
>
> My question is, is there a way to "fix" this or may be there exist some other
> technique to resize memmap. I have tried resizing memmap's _mmap attribute
> directly:
>
>>>> fp._mmap.resize(11)
>>>> fp._mmap.size()
>    11
>
> but the size of memmap instance remains unchanged:
>>>> fp.size
>    10
>
> Thanks,
> Pearu
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] How to resize numpy.memmap?

2010-06-06 Thread Pearu Peterson
Hi,

I am creating a rather large file (typically 100 MiB - 1 GiB) with numpy.memmap
but in some cases the initial estimate of the file size is just a few
bytes too small.
So, I was trying to resize the memmap with a failure as demonstrated
with the following
example:

>>> fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
>>> fp.resize(11, refcheck=False)
...
ValueError: cannot resize this array:  it does not own its data

My question is, is there a way to "fix" this, or maybe there exists some other
technique to resize a memmap? I have tried resizing the memmap's _mmap attribute
directly:

>>> fp._mmap.resize(11)
>>> fp._mmap.size()
11

but the size of memmap instance remains unchanged:
>>> fp.size
10

Thanks,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py: "could not crack entity declaration"

2010-03-25 Thread Pearu Peterson
Try renaming GLMnet.f90 to GLMnet.f.
HTH,
Pearu

David Warde-Farley wrote:
> I decided to give wrapping this code a try:
> 
>   http://morrislab.med.utoronto.ca/~dwf/GLMnet.f90
> 
> I'm afraid my Fortran skills are fairly limited, but I do know that  
> gfortran compiles it fine. f2py run on this file produces lots of  
> errors of the form,
> 
> Reading fortran codes...
>   Reading file 'GLMnet.f90' (format:fix)
> Line #263 in GLMnet.f90:"  real  
> x(no,ni),y(no),w(no),vp(ni),ca(nx,nlam)  353"
>   updatevars: could not crack entity declaration "ca(nx,nlam)353".  
> Ignoring.
> Line #264 in GLMnet.f90:"  real  
> ulam(nlam),a0(nlam),rsq(nlam),alm(nlam)  354"
>   updatevars: could not crack entity declaration "alm(nlam)354".  
> Ignoring.
> Line #265 in GLMnet.f90:"  integer  
> jd(*),ia(nx),nin(nlam)355"
>   updatevars: could not crack entity declaration "nin(nlam)355".  
> Ignoring.
> Line #289 in GLMnet.f90:"  real  
> x(no,ni),y(no),w(no),vp(ni),ulam(nlam)   378"
>   updatevars: could not crack entity declaration "ulam(nlam)378".  
> Ignoring.
> Line #290 in GLMnet.f90:"  real  
> ca(nx,nlam),a0(nlam),rsq(nlam),alm(nlam) 379"
>   updatevars: could not crack entity declaration "alm(nlam)379".  
> Ignoring.
> Line #291 in GLMnet.f90:"  integer  
> jd(*),ia(nx),nin(nlam)380"
>   updatevars: could not crack entity declaration "nin(nlam)380".  
> Ignoring.
> Line #306 in GLMnet.f90:"  call  
> chkvars(no,ni,x,ju)  392"
>   analyzeline: No name/args pattern found for li
> 
> Is it the numbers that it is objecting to (I'm assuming these are some  
> sort of punchcard thing)? Do I need to modify the code in some way to  
> make it f2py-friendly?
> 
> Thanks,
> 
> David
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py compiler version errors

2010-03-17 Thread Pearu Peterson
Hi,
You are running a rather old numpy version (1.0.1).
Try upgrading numpy; at least recent numpy from svn detects
this compiler fine.
Regards,
Pearu

Peter Brady wrote:
> Hello all,
> 
> The version of f2py that's installed on our system doesn't appear to 
> handle version numbers correctly.  I've attached the relevant output of 
> f2py below:
> 
> customize IntelFCompiler
> Couldn't match compiler version for 'Intel(R) Fortran Intel(R) 64
> Compiler Professional for applications running on Intel(R) 64,
> Version 11.0Build 20090318 \nCopyright (C) 1985-2009 Intel
> Corporation.  All rights reserved.\nFOR NON-COMMERCIAL USE ONLY\n\n
> Intel Fortran 11.0-1578'
> IntelFCompiler instance properties:
>   archiver= ['ar', '-cr']
>   compile_switch  = '-c'
>   compiler_f77=
> ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
> 72', '-w90', '-w95', '-KPIC', '-cm', '-O3',
> '-unroll', '-
> tpp7', '-xW', '-arch SSE2']
>   compiler_f90=
> ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
> FR', '-KPIC', '-cm', '-O3', '-unroll', '-tpp7',
> '-xW', '-
> arch SSE2']
>   compiler_fix=
> ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
> FI', '-KPIC', '-cm', '-O3', '-unroll', '-tpp7',
> '-xW', '-
> arch SSE2']
>   libraries   = []
>   library_dirs= []
>   linker_so   =
> ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
> shared', '-tpp7', '-xW', '-arch SSE2']
>   object_switch   = '-o '
>   ranlib  = ['ranlib']
>   version = None
>   version_cmd =
> ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-FI
> -V -c /tmp/tmpx6aZa8__dummy.f -o
> /tmp/tmpx6aZa8__dummy.o']
> 
> 
> The output of f2py is:
> 
> Version: 2_3473
> numpy Version: 1.0.1
> Requires:Python 2.3 or higher.
> License: NumPy license (see LICENSE.txt in the NumPy source code)
> Copyright 1999 - 2005 Pearu Peterson all rights reserved.
> http://cens.ioc.ee/projects/f2py2e/
> 
> 
> We're running 64bit linux with python 2.4.  How do I make this work?
> 
> thanks,
> Peter.
>  
> 
> 
> 
> 
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Getting Callbacks with arrays to work

2010-01-12 Thread Pearu Peterson
Hi,

The problem is that f2py does not support callbacks that
return arrays. There is an easy workaround: provide
returnable arrays as arguments to the callback functions.
Using your example:

SUBROUTINE CallbackTest(dv,v0,Vout,N)
  IMPLICIT NONE

  !F2PY intent( hide ):: N
  INTEGER:: N, ic
  EXTERNAL:: dv

  DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
  DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout

  DOUBLE PRECISION, DIMENSION( N ):: Vnow
  DOUBLE PRECISION, DIMENSION( N )::  temp

  Vnow = v0
  !f2py intent (out) temp
  call dv(temp, Vnow, N)

  DO ic = 1, N
 Vout( ic ) = temp(ic)
  END DO

END SUBROUTINE CallbackTest

$ f2py -c test.f90 -m t --fcompiler=gnu95

>>> from numpy import *
>>> from t import *
>>> arr = array([2.0, 4.0, 6.0, 8.0])
>>> def dV(v):
print 'in Python dV: V is: ',v
ret = v.copy()
ret[1] = 100.0
return ret
...
>>> output = callbacktest(dV, arr)
in Python dV: V is:  [ 2.  4.  6.  8.]
>>> output
array([   2.,  100.,6.,8.])

What problems do you have with implicit none? It works
fine here. Check the format of your source code;
if it is free form, then use the `.f90` extension, not `.f`.

HTH,
Pearu

Jon Moore wrote:
>  Hi,
> 
> I'm trying to build a differential equation integrator and later a
> stochastic differential equation integrator.
> 
> I'm having trouble getting f2py to work where the callback itself
> receives an array from the Fortran routine does some work on it and then
> passes an array back.  
> 
> For the stoachastic integrator I'll need 2 callbacks both dealing with
> arrays.
> 
> The idea is the code that never changes (ie the integrator) will be in
> Fortran and the code that changes (ie the callbacks defining
> differential equations) will be different for each problem.
> 
> To test the idea I've written basic code which should pass an array back
> and forth between Python and Fortran if it works right.
> 
> Here is some code which doesn't work properly:-
> 
> SUBROUTINE CallbackTest(dv,v0,Vout,N)
> !IMPLICIT NONE
> 
> cF2PY intent( hide ):: N
> INTEGER:: N, ic
> 
> EXTERNAL:: dv
> 
> DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
> DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout
> 
> DOUBLE PRECISION, DIMENSION( N ):: Vnow
> DOUBLE PRECISION, DIMENSION( N )::  temp
> 
> Vnow = v0
> 
> 
> temp = dv(Vnow, N)
> 
> DO ic = 1, N
> Vout( ic ) = temp(ic)
> END DO
> 
> END SUBROUTINE CallbackTest
> 
> 
> 
> When I test it with this python code I find the code just replicates the
> first term of the array!
> 
> 
> 
> 
> from numpy import *
> import callback as c
> 
> def dV(v):
> print 'in Python dV: V is: ',v
> return v.copy()
> 
> arr = array([2.0, 4.0, 6.0, 8.0])
> 
> print 'Arr is: ', arr
> 
> output = c.CallbackTest(dV, arr)
> 
> print 'Out is: ', output
> 
> 
> 
> 
> Arr is:  [ 2.  4.  6.  8.]
> 
> in Python dV: V is:  [ 2.  4.  6.  8.]
> 
> Out is:  [ 2.  2.  2.  2.]
> 
> 
> 
> Any ideas how I should do this, and also how do I get the code to work
> with implicit none not commented out?
> 
> Thanks
> 
> Jon
> 
> 
> 
> 
> 
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson

Hi James,

To answer the second question, use:

   j = 1+numpy.array([2], numpy.int32)

The answer to the first question is that
the type of 1+numpy.array([2]) is
numpy.int64, but the Fortran function expects
an array of type numpy.int32, and hence
the wrapper makes a copy of the input
array (which is also returned by the wrapper)
before passing it to Fortran.
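
To see the difference, a small sketch using the arrays from your example:

import numpy
j64 = 1 + numpy.array([2])               # default integer type (numpy.int64 on a
                                         # 64-bit build), so the wrapper must copy
j32 = 1 + numpy.array([2], numpy.int32)  # matches the Fortran integer argument,
                                         # so it is passed through and updated in place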

Regards,
Pearu

James McEnerney wrote:
> Pearu,
> Thanks. a follow question.
> Using fortran
> 
>   subroutine calc(j)
> Cf2py intent(callback) pycalc
>   external pycalc
> Cf2py integer dimension(1), intent(in,out):: j
> 
>   integer j(1)
>   print *, 'in fortran before pycalc ', 'j=', j(1)
>   call pycalc(j)
>   print *, 'in fortran after pycalc ', ' j=', j(1)
>   end
> 
> in python
> 
> import foo,numpy
> def pycalc(j):
>
> print ' in pycalc ', 'j=', j
> j[:] = 20*j
> return
> 
> print foo.calc.__doc__
> j = 1+numpy.array([2])
> print foo.calc(j, pycalc)
> print j
> 
> the output is
> 
> calc - Function signature:
>   j = calc(j,pycalc,[pycalc_extra_args])
> Required arguments:
>   j : input rank-1 array('i') with bounds (1)
>   pycalc : call-back function
> Optional arguments:
>   pycalc_extra_args := () input tuple
> Return objects:
>   j : rank-1 array('i') with bounds (1)
> Call-back functions:
>   def pycalc(j): return j
>   Required arguments:
> j : input rank-1 array('i') with bounds (1)
>   Return objects:
> j : rank-1 array('i') with bounds (1)
> 
>  in fortran before pycalc j=   3
>  in pycalc  j= [3]
>  in fortran after pycalc  j=  60
> [60]
> [3]
> 
> Why is the return from foo.calc different from j?
> How do I make them the same?
> "return j" in pycalc doesn't change things.
> 
> Thanks again!
> 
> At 12:06 AM 11/25/2009, you wrote:
> 
> 
>> Pearu Peterson wrote:
>>
>> > Hmm, regarding `intent(in, out) j`, this should work. I'll check what
>> > is going on..
>>
>> The `intent(in, out) j` works when pycalc is defined as subroutine:
>>
>>call pycalc(i, j)
>>
>> instead of
>>
>>pyreturn = pycalc(i, j)
>>
>> Pearu
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://*mail.scipy.org/mailman/listinfo/numpy-discussion
> 
> Jim McEnerney
> Lawrence Livermore National Laboratory
> 7000 East Ave.
> Livermore, Ca. 94550-9234
> 
> USA
> 
> 925-422-1963
> 
> 
> 
> 
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson


Pearu Peterson wrote:

> Hmm, regarding `intent(in, out) j`, this should work. I'll check what
> is going on..

The `intent(in, out) j` works when pycalc is defined as subroutine:

   call pycalc(i, j)

instead of

   pyreturn = pycalc(i, j)

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-24 Thread Pearu Peterson
Hi,

What you are seeing is not really a bug.

In pycalc when assigning

j = 20 * j

you create a new object `j`, and the argument object `j`, which links
back to the Fortran data, gets discarded.
Instead, change j in place, for example:

   j[:] = 20*j

The first argument `i` is an int object, which in Python is immutable;
hence it cannot be changed in place, and the corresponding Fortran
scalar object cannot be changed from the callback (in fact, `i`
is a copy of the corresponding Fortran data value).

To change `i` in the Python callback, define it as an array
(similar to `j`) and do in-place operations on it.

Regarding pyreturn, and assuming Fortran 77:
you cannot return arrays or multiple objects from Fortran functions.
Fortran functions may return only scalars, so the pyreturn trick will
never work. The solution is to change arguments in place, as with `j`.
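
A minimal sketch of the callback rewritten along these lines (it assumes the
Fortran side is changed so that i, like j, is passed as a length-1 array):

def pycalc(i, j):
    # in-place updates are visible on the Fortran side
    i[:] = 10 * i
    j[:] = 20 * j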

Hmm, regarding `intent(in, out) j`, this should work. I'll check what
is going on..

HTH,
Pearu


James McEnerney wrote:
>   While using the call-back feature of f2py I stumbled across what appears
> to be a bug and I'm asking the community to look into this.
> 
> Background: I'm in the middle of converting some legacy fortran to python.
> There is one routine that is particulary thorny that calls more easily
> convertible service routines and my intention is to convert the latter
> and use the callback feature of f2py to execute them within the fortran
> followed by a systematic conversion of what remains.  This seems to be
> doable from what I've read on callback. I have not seen an example
> of using callback where the python actually changes parameters that are
> returned to the fortran; this is a requirement for me. While setting up
> an example to illustrate this I came across a syntactically correct
> situation(this means it compiles & executes) but gives the wrong answer.
> Here's the code:
> In fortran, source foo.f
>   subroutine calc(i, j)
> Cf2py intent(callback) pycalc
>   external pycalc
> Cf2py integer intent(in,out,copy):: i
> Cf2py integer dimension(1), intent(in,out):: j
>   integer pyreturn
> 
>   integer i, j(1)
>   print *, 'in fortran before pycalc ','i=',i, ' j=', j(1)
>   pyreturn = pycalc(i, j)
>   print *, 'in fortran after pycalc ','i=', i, ' j=', j(1)
> 
>   end
>  
> Standard build: f2py -c -m foo foo.f
> 
> In python, execute
> import foo,numpy
> 
> def pycalc(i, j):
> print ' in pycalc ', 'i=',i, 'j=', j
> i=10*i
> j = 20*j
> return i, j
> 
> print foo.calc.__doc__
> i=2
> j = 1+numpy.array([i])
> print foo.calc(i,j, pycalc)
> 
> Here's the output:
> calc - Function signature:
>   i,j = calc(i,j,pycalc,[pycalc_extra_args])
> Required arguments:
>   i : input int
>   j : input rank-1 array('i') with bounds (1)
>   pycalc : call-back function
> Optional arguments:
>   pycalc_extra_args := () input tuple
> Return objects:
>   i : int
>   j : rank-1 array('i') with bounds (1)
> Call-back functions:
>   def pycalc(i,j): return pyreturn,i,j
>   Required arguments:
> i : input int
> j : input rank-1 array('i') with bounds (1)
>   Return objects:
> pyreturn : int
> i : int
> j : rank-1 array('i') with bounds (1)
> 
> 
>  in fortran before pycalc i= 2 j= 3
>  in pycalc  i= 2 j= [3]
>  in fortran after pycalc i= 60 j= 3
> (60, array([3]))
> 
> The bug:
> on return to the fortran why is i=60 & j=3?
> shouldn't it be i=10 & j=60
> 
> While that's what I expect, I might not be defining the
> interface properly; but this compiles & executes. If this
> is incorrect, what is?  In the fortran, pyreturn appears
> to be an address; how do I get the retuned values?
> 
> I'm running 
> Redhat Linux
> python version 2.5
> f2py version 2_3979
> numpy version 1.0.3.1
> Thanks
> 
> Jim McEnerney
> Lawrence Livermore National Laboratory
> 7000 East Ave.
> Livermore, Ca. 94550-9234
> 
> USA
> 
> 925-422-1963
> 
> 
> 
> 
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py function callback: error while using repeated arguments in function call

2009-11-09 Thread Pearu Peterson

Yves Frederix wrote:
> Hi,
> 
> I am doing a simple function callback from fortran to python for which
> the actual function call in fortran has repeated arguments.
> 
> ! callback_error.f90:
> subroutine testfun(x)
>double precision, intent(in) :: x
>double precision :: y
> !f2py intent(callback) foo
> !f2py double precision :: arg1
> !f2py double precision :: arg2
> !f2py double precision :: y
> !f2py external y = foo(arg1, arg2)
>external foo
>y = foo(x, x) !  <-- this causes problems
>print *, 'y:', y
> end subroutine testfun

..

> Is this expected behavior?

No. The bug is now fixed in numpy svn (rev 7712).

Thanks for pointing out this corner case.
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: a journal paper about F2PY has been published

2009-10-05 Thread Pearu Peterson


 Original Message 
Subject: [f2py] ANN: a journal paper about F2PY has been published
Date: Mon, 05 Oct 2009 11:52:20 +0300
From: Pearu Peterson 
Reply-To: For users of the f2py program 
To: For users of the f2py program 

Hi,

A journal paper about F2PY has been published in International Journal
of Computational Science and Engineering:

  Peterson, P. (2009) 'F2PY: a tool for connecting Fortran and Python
  programs', Int. J. Computational Science and Engineering.
  Vol.4, No. 4, pp.296-305.

So, if you would like to cite F2PY in a paper or presentation, using
this reference is recommended.

Interscience Publishers will update their web pages with the new journal
number within few weeks. A softcopy of the article
available in my homepage:
  http://cens.ioc.ee/~pearu/papers/IJCSE4.4_Paper_8.pdf

Best regards,
Pearu

___
f2py-users mailing list
f2py-us...@cens.ioc.ee
http://cens.ioc.ee/mailman/listinfo/f2py-users
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054)

2009-03-16 Thread Pearu Peterson
On Mon, March 16, 2009 4:05 pm, Sturla Molden wrote:
> On 3/16/2009 9:27 AM, Pearu Peterson wrote:
>
>> If a operation produces new array then the new array should have the
>> storage properties of the lhs operand.
>
> That would not be enough, as 1+a would behave differently from a+1. The
> former would change storage order and the latter would not.

Actually, 1+a would be handled by the __radd__ method and hence
the storage order would be defined by the rhs (the lhs of the __radd__ method).

> Broadcasting arrays adds futher to the complexity of the problem.

I guess, similar rules should be applied to storage order then.

Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054)

2009-03-16 Thread Pearu Peterson
On Sun, March 15, 2009 8:57 pm, Sturla Molden wrote:
>
> Regarding ticket #1054. What is the reason for this strange behaviour?
>
 a = np.zeros((10,10),order='F')
 a.flags
>   C_CONTIGUOUS : False
>   F_CONTIGUOUS : True
>   OWNDATA : True
>   WRITEABLE : True
>   ALIGNED : True
>   UPDATEIFCOPY : False
 (a+1).flags
>   C_CONTIGUOUS : True
>   F_CONTIGUOUS : False
>   OWNDATA : True
>   WRITEABLE : True
>   ALIGNED : True
>   UPDATEIFCOPY : False

I wonder if this behavior could be considered a bug,
because it does not seem to have any advantages; it
only hides the storage order change, and that may introduce
inefficiencies.

If an operation produces a new array then the new array should have the
storage properties of the lhs operand.
That would allow writing code

  a = zeros(, order='F')
  b = a + 1

instead of

  a = zeros(, order='F')
  b = a[:]
  b += 1

to keep the storage properties in operations.
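
A small sketch of an explicitly order-preserving variant (note that `a[:]`
above is a view of `a`, so the in-place add there modifies `a` itself; an
explicit Fortran-ordered copy avoids that):

import numpy as np

a = np.zeros((10, 10), order='F')
b = a.copy(order='F')            # explicit F-ordered copy instead of the view a[:]
b += 1                           # in-place, so the layout is preserved
print(b.flags['F_CONTIGUOUS'])   # True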

Regards,
Pearu



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-10 Thread Pearu Peterson
On Wed, March 11, 2009 7:50 am, Christopher Barker wrote:

>  > > Python does not distinguish between True and
>  > > False -- Python makes the distinction between something and nothing.
>
> In that context, NaN is nothing, thus False.

Mathematically speaking, NaN is a quantity with an undefined value. Closer
analysis of a particular case may reveal that it is some finite number,
or an infinity with some direction, or intrinsically undefined.
NaN is something that cannot be defined because its value is not unique.
Nothing, in contrast, would be the content of the empty set.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] error handling with f2py?

2009-01-16 Thread Pearu Peterson
On Thu, January 15, 2009 6:17 pm, Sturla Molden wrote:
> Is it possible to make f2py raise an exception if a fortran routine
> signals an error?
>
> If I e.g. have
>
>  subroutine foobar(a, ierr)
>
> Can I get an exception automatically raised if ierr != 0?

Yes, for that you need to provide your own Fortran call code
using the f2py callstatement construct. The initial Fortran call
code can be obtained from the f2py-generated module.c file,
for instance.

An example follows below:

Fortran file foo.f:
---

  subroutine foo(a, ierr)
  integer a
  integer ierr
  if (a.gt.10) then
ierr=2
  else
 if (a.gt.5) then
ierr=1
 else
ierr = 0
 end if
  end if
  end

Generated (f2py -m m foo.f) and then modified signature file m.pyf:
---

!-*- f90 -*-
! Note: the context of this file is case sensitive.

python module m ! in
interface  ! in :m
subroutine foo(a,ierr) ! in :m:foo.f
integer :: a
integer :: ierr
intent (in, out) a
intent (hide) ierr
callstatement '''
(*f2py_func)(&a, &ierr);
if (ierr==1)
{
  PyErr_SetString(PyExc_ValueError, "a is gt 5");
  }
if (ierr==2)
  {
PyErr_SetString(PyExc_ValueError, "a is gt 10");
  }
'''
end subroutine foo
end interface
end python module m

! This file was auto-generated with f2py (version:2_5618).
! See http://cens.ioc.ee/projects/f2py2e/

Build the extension module and use from python:
---

$ f2py -c m.pyf foo.f
$ python
>>> import m
>>> m.foo(30)
---
Traceback (most recent call last)

/home/pearu/test/f2py/exc/ in ()

ValueError: a is gt 10
>>> m.foo(6)
---
Traceback (most recent call last)

/home/pearu/test/f2py/exc/ in ()

ValueError: a is gt 5
>>> m.foo(4)
4

HTH,
Pearu



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py - a recap

2008-07-24 Thread Pearu Peterson

Hi,

A few months ago I joined a group of systems biologists and I have
been busy with new projects (mostly C++ based), so I haven't
had a chance to work on f2py.

However, I am still around to fix f2py bugs and maintain/support
numpy.f2py (as long as the current numpy maintainers allow it)
-- as a rule these tasks do not take much of my time.

I have also rewritten the f2py users guide
for numpy.f2py and submitted a paper on f2py. I'll make them
available when I get some more time.

Regards,
still-kicking-yoursly,
Pearu


On Thu, July 24, 2008 1:46 am, Fernando Perez wrote:
> Howdy,
>
> On Wed, Jul 23, 2008 at 3:18 PM, Stéfan van der Walt <[EMAIL PROTECTED]>
> wrote:
>> 2008/7/23 Fernando Perez <[EMAIL PROTECTED]>:
>
>> I agree (with your previous e-mail) that it would be good to have some
>> documentation, so if you could give me some pointers on *what* to
>> document (I haven't used f2py much), then I'll try my best to get
>> around to it.
>
> Well, I think my 'recap' message earlier in this thread points to a
> few issues that can probably be addressed quickly (the 'instead' error
> in the help, the doc/docs dichotomy needs to be cleaned up so a single
> documentation directory exists, etc).   I'm also  attaching a set of
> very old notes I wrote years ago on f2py that you are free to use in
> any way you see fit.  I gave them a 2-minute rst treatment but didn't
> edit them at all, so they may be somewhat outdated (I originally wrote
> them in 2002 I think).
>
> If Pearu has moved to greener pastures, f2py could certainly use an
> adoptive parent.  It happens to be a really important piece of
> infrastructure and  for the most part it works fairly well.   I think
> a litlte bit of cleanup/doc integration with the rest of numpy is
> probably all that's needed, so it could be a good project for someone
> to adopt that would potentially be low-demand yet quite useful.
>
> Cheers,
>
> f
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] set_local_path in test files

2008-07-02 Thread Pearu Peterson
On Wed, July 2, 2008 8:25 pm, Robert Kern wrote:
> On Wed, Jul 2, 2008 at 09:01, Alan McIntyre <[EMAIL PROTECTED]>
> wrote:
>> On Wed, Jul 2, 2008 at 9:35 AM, Pearu Peterson <[EMAIL PROTECTED]>
>> wrote:
>>> Alan McIntyre wrote:
>>>> Some test files have a set_local_path()/restore_path() pair at the
>>>> top, and some don't.  Is there any reason to be changing sys.path like
>>>> this in the test modules?  If not, I'll take them out when I see them.
>>>
>>> The idea behind set_local_path is that it allows running tests
>>> inside subpackages without the need to rebuild the entire package.
>>
>> Ah, thanks; I'd forgotten about that.  I'll leave them alone, then.  I
>> made a note for myself to make sure it's possible to run tests locally
>> without doing a full build/install (where practical).
>
> Please remove them and adjust the imports. As I've mentioned before,
> numpy and scipy can now reliably be built in-place with "python
> setup.py build_src --inplace build_ext --inplace". This is a more
> robust method to test uninstalled code than adjusting sys.path.

Note that the point of set_local_path is not to test uninstalled
code but to test only a subpackage. For example,

  cd svn/scipy/scipy/fftpack
  python setup.py build
  python tests/test_basic.py

would run the tests using the extensions from the build directory.
Well, at least it used to do that in the past, but it seems that the
feature has been removed from scipy svn :(

Scipy subpackages used to be usable as standalone packages
(not even requiring scipy itself), but this seems to have changed.
This is not good from the refactoring point of view.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] set_local_path in test files

2008-07-02 Thread Pearu Peterson

Alan McIntyre wrote:
> Some test files have a set_local_path()/restore_path() pair at the
> top, and some don't.  Is there any reason to be changing sys.path like
> this in the test modules?  If not, I'll take them out when I see them.

The idea behind set_local_path is that it allows running tests
inside subpackages without the need to rebuild the entire package.
set_local_path()/restore_path() are convenient when debugging or
developing a subpackage.
If you are sure that there are no bugs in numpy subpackages
that need such a debugging process, then the set_local_path()
restore_path() calls can be removed. (But please do not
remove them from scipy test files; rebuilding scipy just
takes too much time, and debugging subpackages globally would
be too painful.)

Pearu



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] error importing a f2py compiled module.

2008-06-23 Thread Pearu Peterson
On Mon, June 23, 2008 10:38 am, Fabrice Silva wrote:
> Dear all
> I've tried to run f2py on a fortran file which used to be usable from
> python some months ago.
> Following command lines are applied with success (no errors raised) :
> f2py -m modulename -h tmpo.pyf --overwrite-signature  tmpo.f
> f2py -m modulename -c --f90exec=/usr/bin/f95 tmpo.f

First, it is not clear what compiler f95 is. If it is gfortran, then
use the command
  f2py -m modulename -c --fcompiler=gnu95 tmpo.f

If it is something else, check the output of

  f2py -c --help-fcompiler

and use the appropriate --fcompiler switch.

Second, I hope you realize that the first command has no effect on
the second command. If you have edited the tmpo.pyf file, then use
the following second command:

  f2py tmpo.pyf  -c --fcompiler=gnu95 tmpo.f

> The output of these commands is available here:
> http://paste.debian.net/7307
>
> When importing in Python with "import modulename", I have an
> ImportError:
> Traceback (most recent call last):
>   File "Solveur.py", line 44, in 
> import modulename as Modele
> ImportError: modulename.so: failed to map segment from shared
> object: Operation not permitted
>
> How can that be fixed ? Any suggestion ?

I don't have any ideas about what is causing this import error. Try
the instructions above; maybe it is due to some compiled object
conflicts.

HTH,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] seeking help with f2py_options

2008-06-21 Thread Pearu Peterson
On Sat, June 21, 2008 3:28 pm, Helmut Rathgen wrote:
> Dear all,
>
> I am trying to write a setp.py based on numpy.distutils for a mixed
> python/fortran90 package.
>
> I'd like to specify the fortran compiler, such as the path to the
> compiler, compiler flags, etc. in setup.py.
>
> I seemed to understand that this should be done by passing 'f2py_options
> = [ "...", "..." ]' to numpy.distutils.core.Extension()
>
> However, I get errors such as
>
> [EMAIL PROTECTED]:~/src/testdistutils$ python setup.py build
> running build
> running scons
> customize UnixCCompiler
> Found executable /usr/bin/gcc
> customize GnuFCompiler
> Could not locate executable g77
> Could not locate executable f77
> customize IntelFCompiler
> Found executable /opt/intel/fc/10.0.026/bin/ifort
> customize IntelFCompiler
> customize UnixCCompiler
> customize UnixCCompiler using scons
> running config_cc
> unifing config_cc, config, build_clib, build_ext, build commands
> --compiler options
> running config_fc
> unifing config_fc, config, build_clib, build_ext, build commands
> --fcompiler options
> running build_src
> building extension "mrcwaf" sources
> f2py options: ['--fcompiler=intel',
> '--f90exec=/opt/intel/fc/10.0.026/bin/ifort', '--opt="-O3 -xW -ipo"',
> '--noarch']
> f2py: src/mrcwaf/mrcwaf.pyf
> Unknown option '--fcompiler=intel'
>
>
> How to use f2py_options - or should flags be passed in a different way?
>

Note that --fcompiler= and other such options are actually options
to numpy.distutils (the f2py script just passes these options
forward to numpy.distutils). f2py_options can contain only f2py-specific
options.
Hence you should try to modify sys.argv
at the beginning of the setup.py file to specify the fortran
compiler options. For example, in the setup.py file, insert:

import sys
sys.argv.extend('config_fc --fcompiler=intel'.split())

See
  python setup.py config_fc --help
  python setup.py build_ext --help
for more information about possible options.
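
For example, a complete (untested) sketch of such a setup.py -- the module
name mrcwaf and the .pyf path are taken from your log, the option values
are only illustrative:

import sys
# inject default command-line options before numpy.distutils parses sys.argv
sys.argv.extend(['config_fc', '--fcompiler=intel', '--opt=-O3'])

from numpy.distutils.core import setup, Extension

setup(name='mrcwaf',
      ext_modules=[Extension('mrcwaf', sources=['src/mrcwaf/mrcwaf.pyf'])])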

HTH,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] nose changes checked in

2008-06-17 Thread Pearu Peterson
On Tue, June 17, 2008 6:17 am, Robert Kern wrote:
> On Mon, Jun 16, 2008 at 21:18, Alan McIntyre <[EMAIL PROTECTED]>
> wrote:
>> On Mon, Jun 16, 2008 at 9:04 PM, Charles R Harris
>> <[EMAIL PROTECTED]> wrote:
>
>>> In [1]: numpy.test()
>>> Not implemented: Defined_Binary_Op
>>> Not implemented: Defined_Binary_Op
>>> Defined_Operator not defined used by Generic_Spec
>>> Needs match implementation: Allocate_Stmt
>>> Needs match implementation: Associate_Construct
>> 
>>
>> That stream of "Not implemented" and "Needs match implementation"
>> stuff comes from numpy/f2py/lib/parser/test_Fortran2003.py and
>> Fortran2003.py.  I can silence most of that output by disabling those
>> module-level "if 1:" blocks in those two files, but there's still
>> complaints about Defined_Binary_Op not being implemented.  If someone
>> more knowledgeable about that module could comment on that I'd
>> appreciate it.
>
> These files were for the in-development g3 version of f2py. That
> development has moved outside of numpy, so I think they can be removed
> wholesale. Some of them seem to require a Fortran 90 compiler, so I
> have had them fail for me. Removing these is a high priority.
>
> Pearu, can you confirm? Can we give you the task of removing the
> unused g3 portions of f2py for the 1.2 release?

Yes, I have created the corresponding ticket some time ago:
  http://projects.scipy.org/scipy/numpy/ticket/758

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 1:36 pm, Jarrod Millman wrote:
> On Mon, May 19, 2008 at 10:29 PM, Pearu Peterson <[EMAIL PROTECTED]>
> wrote:
>> On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
>>> Is this an important bugfix? If not, can you hold off until 1.1.0 is
>>> released?
>>
>> The patch fixes a long existing and unreported bug in f2py - I think
>> the bug was introduced when Python defined min and max functions.
>> I learned about the bug when reading a manuscript about f2py. Such bugs
>> should not end up in a paper demonstrating f2py inability to process
>> certain features as it would have not been designed to do so. So, I'd
>> consider
>> the bugfix important.
>
> I have been struggling to try and get a stable release out since
> February and every time I think that the release is almost ready some
> piece of code changes that requires me to delay.  While overall the
> code has continuously improved over this period, I think it is time to
> get these improvements to our users.
>
> That said, I am willing to leave this change on the trunk, but please
> refrain from making any more changes until we release 1.1.0.  I know
> it can be frustrating, but, I believe, this is the first time I have
> asked the community to not make commits to the trunk since I started
> handling releases almost a year ago.  The freeze has only been in
> effect since Saturday and will last less than one week in total.  I
> would have preferred if you could have made this change during any one
> of the other 51 weeks of the year.

Please, go ahead. I'll not commit non-critical changes until the trunk
is open again.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 12:59 pm, Jarrod Millman wrote:

> Commits to the trunk (1.2.x) should follow these rules:
>
> 1.  Documentation fixes are allowed and strongly encouraged.
> 2.  Bug-fixes are strongly encouraged.
> 3.  Do not break backwards compatibility.
> 4.  New features are permissible.
> 5.  New tests are highly desirable.
> 6.  If you add a new feature, it must have tests.
> 7.  If you fix a bug, it must have tests.
>
> If you want to break a rule, don't.  If you feel you absolutely have
> to, please don't--but feel free send an email to the list explain your
> problem.
...
> In particular, let me know it there is some aspect of this that
> you simply refuse to agree to in at least principle.

Since you asked, I have a problem with rule 7 when applying
it to packages like numpy.distutils and numpy.f2py, for instance.

Do you realize that there exist bugs/features for which unittests cannot
be written in principle? An example: say, a compiler vendor changes
a flag in a new version of the compiler so that numpy.distutils
is not able to detect the compiler or uses the wrong flags for the
new compiler when compiling sources. Often the required fix
is trivial to find and apply, and just by reading the code one can
easily verify that the patch does not break anything. However, to
write a unittest covering such a change would mean that one would also
need to ship the corresponding compiler in the unittest directory.
This is nonsense, of course. I can find other similar examples
that have needed attention and changes to numpy.distutils and
numpy.f2py in the past, and I know that a few more are coming up.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 12:03 pm, David Cournapeau wrote:
> Pearu Peterson wrote:
>> f2py changes are never critical to numpy users who do not use f2py.
>>
> No, but they are to scipy users if f2py cannot build scipy.

Well, I know pretty well which f2py features scipy uses and
what could break the scipy build. So, don't worry about that.

>> I have stated before that I am not developing numpy.f2py any further.
>> This also means that any changes to f2py should be essentially bug
>> fixes. Creating a branch for bug fixes is a waste of time, imho.
>>
> I was speaking about creating a branch for the unit tests changes you
> were talking about, that is things which could potentially break a lot
> of configurations.

A branch for the unit tests changes is of course reasonable.

> Is the new f2py available for users ? If yes,..

No, it is far from being usable now. The numpy.f2py and g3 f2py
are completely different software. The changeset fixed
a bug in numpy.f2py; it has nothing to do with g3 f2py.

amazing-how-paranoiac-is-todays-numpy/scipy-development'ly yours,
Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson


David Cournapeau wrote:
> Pearu Peterson wrote:
>> So I beg to be flexible with f2py related commits for now. 
> 
> Why not creating a branch for the those changes, and applying only 
> critical bug fixes to the trunk ?

How do you define a critical bug? Critical to whom?
f2py changes are never critical to numpy users who do not use f2py.
I have stated before that I am not developing numpy.f2py any further. 
This also means that any changes to f2py should be essentially bug
fixes. Creating a branch for bug fixes is a waste of time, imho.
If somebody is willing to maintain the branch, that is, periodically
sync the branch with the trunk and vice-versa, then I don't mind.

Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson


Stéfan van der Walt wrote:
> Hi Pearu
> 
> 2008/5/20 Pearu Peterson <[EMAIL PROTECTED]>:
>> CC: numpy-discussion because of other reactions on the subject.
>>
>> On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
>>> Is this an important bugfix? If not, can you hold off until 1.1.0 is
>>> released?
>> The patch fixes a long existing and unreported bug in f2py - I think
>> the bug was introduced when Python defined min and max functions.
>> I learned about the bug when reading a manuscript about f2py. Such bugs
>> should not end up in a paper demonstrating f2py inability to process
>> certain
>> features as it would have not been designed to do so. So, I'd consider
>> the bugfix important.
>>
>> On the other hand, the patch does not affect numpy users who do not
>> use f2py, in any way. So, it is not important for numpy users, in general.
> 
> Many f2py users currently get their version via NumPy, I assume.

Yes. There is no other place.

>> Hmm, I also thought that the trunk is open for development, even though
>> r5198 is only fixing a bug (and I do not plan to develop f2py in numpy
>> further, just fix bugs and maintain it). If the release process
>> is going to take for weeks and is locking the trunk, may be the
>> release candidates should live in a separate branch?
> 
> If the patch
> 
> a) Fixes an important bug and
> b) has unit tests to ensure it does what it is supposed to
> 
> then I'd be +1 for applying.  It looks like there are some tests
> included; to which degree do they cover the bugfix, and do we have
> tests to make sure that f2py still functions correctly?

Note that in the past f2py was developed using a somewhat different model
compared to what we require for numpy currently.
The g3 f2py development will be carried out outside the numpy tree
but using the numpy development model.

Switching numpy.f2py to the numpy development model requires substantial
changes; the most dramatic one would be to remove f2py ;).
A realistic change would require working out a way to test
automatically generated extension modules.
Since this requires existence of C and *Fortran* compilers,
then the test runner must be able to detect the existence of compilers
in order to decide whether to run such tests or not.

So I ask for some flexibility with f2py-related commits for now. Most changes
are tested by me to ensure that f2py works correctly (as a minimum,
after changing f2py, I always test f2py against scipy).
Some changes may need user feedback because I may
not have access to all the commercial Fortran compilers that numpy.distutils
aims at supporting. This development model has not broken f2py in the past,
as far as I am concerned. If you disallow such bug fixes
in the future, it will mean that maintaining f2py in numpy
practically stops. I am not sure that we would want that either.

Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-19 Thread Pearu Peterson
CC: numpy-discussion because of other reactions on the subject.

On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
> Is this an important bugfix? If not, can you hold off until 1.1.0 is
> released?

The patch fixes a long-existing and unreported bug in f2py - I think
the bug was introduced when Python defined min and max functions.
I learned about the bug when reading a manuscript about f2py. Such bugs
should not end up in a paper demonstrating f2py's inability to process
certain features, as if it had not been designed to handle them. So, I'd
consider the bugfix important.

On the other hand, the patch does not affect, in any way, numpy users who
do not use f2py. So, it is not important for numpy users in general.

Hmm, I also thought that the trunk was open for development, even though
r5198 only fixes a bug (and I do not plan to develop f2py in numpy
further, just fix bugs and maintain it). If the release process
is going to take weeks and lock the trunk, maybe the
release candidates should live in a separate branch?

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread Pearu Peterson
On Sun, May 18, 2008 1:14 pm, David Cournapeau wrote:
> Hi,
>
> I would like to be able to build a f2py extension in a subdir with
> distutils, that is:
>
> config.add_extension('foo/bar', source = ['foo/bar.pyf'])

A safe approach would be to create a foo/setup.py that contains
  config.add_extension('bar', sources = ['bar.pyf'])
and in the parent setup.py add
  config.add_subpackage('foo')
(you might also need to create foo/__init__.py).
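
A more complete, untested sketch of that layout (names follow the example
above):

# foo/setup.py
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('foo', parent_package, top_path)
    config.add_extension('bar', sources=['bar.pyf'])
    return config

# top-level setup.py
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration(None, parent_package, top_path)
    config.add_subpackage('foo')
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)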

> But it does not work right now because of the way numpy.distutils finds
> the name of the extension. Replacing:
>
> ext_name = extension.name.split('.')[-1]
>
> by
>
> ext_name = os.path.basename(extension.name.split('.')[-1])
>
> Seems to make it work. Could that break anything in numpy.distutils ? I
> don't see how, but I don't want to touch distutils without being sure it
> won't,

The change should not break anything that already works because
in distutils an extension name is assumed to consist of names joined with
dots.
If distutils works with / in an extension name, then I think it is by
accident. I'd recommend checking this also on a Windows system
before changing numpy.distutils; I am not sure whether it works there.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Tagging 1.1rc1 in about 12 hours

2008-05-17 Thread Pearu Peterson
On Sat, May 17, 2008 7:48 pm, Charles R Harris wrote:
> On Fri, May 16, 2008 at 1:20 AM, Jarrod Millman <[EMAIL PROTECTED]>
> wrote:

>> Once I tag 1.1.0, I will open the trunk for 1.1.1 development.
...
>> Any development for 1.2 will have to occur on a new branch.
>
> So open the new branch already.

I am waiting for it too. At least give another time target for 1.1.0.
(Ticket 752 has a patch ready and waiting for a commit;
if 1.1.0 is going to wait another few days, the commit to 1.1.0
should be safe.)

Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Tagging 1.1rc1 in about 12 hours

2008-05-16 Thread Pearu Peterson


Jarrod Millman wrote:
> Hello,
> 
> I believe that we have now addressed everything that was holding up
> the 1.1.0 release, so I will be tagging the 1.1.0rc1 in about 12
> hours.  Please be extremely conservative and careful about any commits
> you make to the trunk until we officially release 1.1.0 (now may be a
> good time to spend some effort on SciPy).  Once I tag the release
> candidate I will ask both David and Chris to create Windows and Mac
> binaries.  I will give everyone a few days to test the release
> candidate and binaries thoroughly.  If everything looks good, the
> release candidate will become the official release.
> 
> Once I tag 1.1.0, I will open the trunk for 1.1.1 development.  Any
> development for 1.2 will have to occur on a new branch.

I am working on ticket 752 at the moment and I would probably
not want to commit my work to 1.1.0 at this time, so I shall commit
when the trunk is open as 1.1.1.
My question regarding branching: how will the changes from 1.1.1 end up
in the 1.2 branch?

Thanks,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] let's use patch review

2008-05-15 Thread Pearu Peterson
On Thu, May 15, 2008 12:06 am, Ondrej Certik wrote:
> Hi,
>
> I read the recent flamebate about unittests, formal procedures for a
> commit etc. and it was amusing. :)
> I think Stefan is right about the unit tests. I also think that Travis
> is right that there is no formal procedure that can assure what we
> want.
>
> I think that a solution is a patch review.

I am -0.8 on it because the number of numpy core developers is just
too small for the patch review to be effective - there are not enough
reviewers who are qualified to review low-level code.

The number of core developers can be defined as the number of
developers who have ever been owners of numpy tickets.
It seems that the number is less than 10.
Note also that maybe only a few of them can work full time on numpy.

For adding new features, the patch review system can be reasonable though.

My 2 cents,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py and -D_FORTIFY_SOURCE=2 compilation flag

2008-05-15 Thread Pearu Peterson


Robert Kern wrote:
> On Wed, May 14, 2008 at 3:20 PM, David Huard <[EMAIL PROTECTED]> wrote:
>> I filed a patch that seems to do the trick in ticket #792.
> 
> I don't think this is the right approach. The problem isn't that
> _FORTIFY_SOURCE is set to 2 but that f2py is doing (probably) bad
> things that trip these buffer overflow checks. IIRC, Pearu wasn't on
> the f2py mailing list at the time this came up; please try him again.

I was able to reproduce the bug on a debian system. The fix with
a comment on what was causing the bug, is in svn:

   http://scipy.org/scipy/numpy/changeset/5173

I should warn that the bug fix does not have unittests because:
1) testing the bug requires a Fortran compiler, which for NumPy is
an optional requirement.
2) I have tested the fix with two different setups that should cover
all possible configurations.
3) In the case of problems with the fix, users should notice it immediately.
4) I have carefully read the patch before committing.

Regards,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] First steps with f2py and first problems...

2008-05-08 Thread Pearu Peterson
On Thu, May 8, 2008 2:06 pm, LB wrote:
> Hi,
>
> I've tried to follow the example given at :
> http://www.scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell
> but I've got errors when compiling the fortran file :
>
> 12:53 loic:~ % f2py -c -m hastings hastings.f90 --fcompiler=gnu95
...
>   File "/usr/lib/python2.5/site-packages/numpy/f2py/rules.py", line
> 1222, in buildmodule
> for l in '\n\n'.join(funcwrappers2)+'\n'.split('\n'):
> TypeError: cannot concatenate 'str' and 'list' objects
> zsh: exit 1 f2py -c -m hastings hastings.f90 --fcompiler=gnu95
...
> Have you got any clue to solve this pb ?

This issue is fixed in SVN. So, either use numpy from svn,
or wait a bit until numpy 1.0.5 is released, or change the
line #1222 in numpy/f2py/rules.py to

  for l in ('\n\n'.join(funcwrappers2)+'\n').split('\n'):

HTH,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 673

2008-04-28 Thread Pearu Peterson
Hi,

As far as I am concerned, the issue needs a cosmetic fix of renaming
pythonxerbla to python_xerbla and the rest of the issue can be
postponed to 1.2.

Note that this isn't purely a numpy issue. To fix the
issue, system- or user-provided blas/lapack libraries need to be changed;
we can only give instructions on how to do that. Doing the change
automatically requires testing the corresponding support code for many
different platforms and setups - this requires some effort and time..
and most importantly, some volunteers.

Pearu

Jarrod Millman wrote:
> On Wed, Apr 9, 2008 at 6:34 AM, Stéfan van der Walt <[EMAIL PROTECTED]> wrote:
>> Unfortunately, I couldn't get this patch to work, and my time has run
>>  out.  Maybe someone with more knowledge the precedences/order of
>>  functions during linking can take a look.  I don't know how to tell
>>  the system BLAS to call our custom xerbla, instead of the one
>>  provided.
>>
>>  The patch addresses an important issue, though, so it warrants some
>>  more attention.
> 
> Hey,
> 
> I was wondering what the status of this ticket is?  Is this something
> that should be fixed before 1.1.0?
> 
> Thanks,
> 
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] tests in distutils/exec_command.py

2008-04-26 Thread Pearu Peterson
On Sat, April 26, 2008 7:53 pm, Zbyszek Szmek wrote:
> Hi,
> while looking at test coverage statistics published Stéfan van der Walt
> at http://mentat.za.net/numpy/coverage/, I noticed that the
> least-covered file, numpy/distutils/exec_command.py has it's own
> test routines, e.g.:
> def test_svn(**kws):
> s,o = exec_command(['svn','status'],**kws)
> assert s,(s,o)
> print 'svn ok'
>
> called as
> if __name__ == "__main__":
>test_svn(use_tee=1)
>
> The sense of this test seems reversed (svn status runs just fine):
> Traceback (most recent call last):
>   File "numpy/distutils/exec_command.py", line 591, in 
> test_svn(use_tee=1)
>   File "numpy/distutils/exec_command.py", line 567, in test_svn
> assert s,(s,o)
> AssertionError: (0, '')
>
> Should the test be cleaned up and moved into a seperate file in
> numpy/distutils/tests/ ?

Note that not all tests in exec_command.py are platform independent
(as is the case with most distutils tools in general). So, be careful
when copying the tests to numpy/distutils/tests.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] New web site for the F2PY. tool

2008-04-24 Thread Pearu Peterson
Hi,

I have created a new web site for the F2PY tool:

   http://www.f2py.org

that will be used to collect F2PY-related information.
At the moment, the site contains minimal information but
hopefully this will improve in the future.
One can add content to the f2py.org site after registration
(see Register on the Login page).

Best regards,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py: optional parameters and present() bug?

2008-04-23 Thread Pearu Peterson


Garry Willgoose wrote:
> in a F90 routine called from python the present() test (for optional  
> parameters) always returns true whether the parameter is present in  
> the function call or not. My F90 test code looks like this
> 
> subroutine test(test1,test2,test3)
>integer,intent(in) :: test1
>integer,intent(in),optional :: test2
>integer,intent(inout),optional :: test3
>  write (*,*) test1,present(test2),present(test3)
>  if (present(test2)) write (*,*) 'test2',test2
>  if (present(test3)) then
>write (*,*) 'test3',test3
>test3=5
>  end if
> end subroutine test
> 
> The output from python calls to this routine (its inside a module  
> tsdtm.tsg ... just to explain why the function looks like it does). I  
> have just created this with f2py automatically generating the  
> interface with no hand modification of the interfaces. What this test  
> shows is that present() always returns true no matter (1) the intent  
> (2) whether the parameter is present in the call, or (3) the type of  
> the parameter (real, integer ... though I haven't tested arrays etc)  
> as well.

f2py generated wrappers always call Fortran functions with the
full list of arguments. This is the reason why present always
returns True. With the current f2py I can only suggest the following
workaround:

!f2py integer intent(in), optional :: test2 = -1
! assuming that -1 is the magic value that indicates that the option was not specified
if (present(test2).and.(.not.(test2.eq.-1))) ...

> Secondly I can't see how to get the returned value of 'test3'. It  
> doesn't seem to be returned as a result of the function?

Yes, because f2py treats `intent(inout)` arguments as input arguments
that are changed in place. Add the following comment to the F90 code:

!f2py intent(out) test3

which will make the wrapper return the value of test3.
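
From the Python side, the calls could then look roughly like this (an
untested sketch; the module path tsdtm.tsg follows your description and -1
is the assumed sentinel value):

from tsdtm import tsg

tsg.test(1)            # test2 omitted -> the wrapper passes the sentinel -1
tsg.test(1, test2=7)   # test2 given explicitly
# with the '!f2py intent(out) test3' comment added, the wrapper also
# returns test3, e.g. something like:  test3 = tsg.test(1, test2=7)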

> The docs say f2py handles optional parameters and other than present 
> () my testing suggests that optional parameters seem otherwise OK.  

Yes, f2py is not aware of the present function. Adding present awareness
support is not trivial as it may be compiler dependent (f2py would
need to know what value an optional argument gets so that
present would return False). However, there exists a technique to
circumvent this problem that I plan to implement for g3 f2py.

HTH,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [numscons] 0.6.1 release: it build scipy, and on windows !

2008-04-21 Thread Pearu Peterson


David Cournapeau wrote:

> - f2py has been almost entirely rewritten: it can now scan the 

> module name automatically, and should be much more reliable.

What do you mean by ^^^? ;)

Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py on windows cant find gfortran because split_quoted() removed from ccompiler.py

2008-04-21 Thread Pearu Peterson

Jarrod Millman wrote:

> Our version of split_quoted()  was added several years ago and should
> I think be submitted for inclusion upstream.  I know that distutils
> isn't exactly being maintained, but--Pearu--is it possible that we
> could get this upstream?

I looked into the distutils code again and remembered the following
circumstances:

The split_quoted hack that we have in numpy.distutils, was the
"minimal fix" of the path name handling issue - it basically adds
only two extra lines to the function body and works well.

However, the split_quoted hack is not the "right fix".
The "right fix" would require more substantial changes to distutils
(note that distutils Windows support was developed on a unix
platform and it looks like many issues were fixed for specific
usage cases).

Since the "minimal fix" did its job (consider it as a workaround
to distutils misbehavior) and the "right fix" would require
much more work as well as lots of effort to prove upstream about
the importance of required changes
(consider the current distutils development status and
the fact that the numpy.distutils expands distutils in a direction
that upstream might not be interested in because the current
issue is irrelevant for std distutils tasks), I was satisfied
with the "minimal fix" (as at the time there were more important
issues to be resolved).

I suggest leaving this issue alone for practical reasons: the workaround
works and the correct fix is too painful to work out. Let practicality
beat purity in this case. I hope distutils will retire some day :)

Regards,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] F2PY future

2008-04-11 Thread Pearu Peterson

Bill Baxter wrote:
> I'm afraid I'm not much help answering your questions.  But one thing
> I've wondered about f2py is if it could be generalized into an f2***
> tool.  How intertwined is the analysis of the fortran with the
> synthesis of the python?  There are lots of languages that could
> benefit from a fortran wrapper generator tool.

This is a very good question. In fact, the g3 f2py contains a
full parser of Fortran 77/../2003 codes that is independent of
how the parsed tree will be used. It could be used as a starting
point for many tools like Fortran to C/C++ converters, as well as
for generating wrappers for other scripting languages. So, I think
this is something worth keeping in mind indeed when developing g3
f2py. However, I would need more support to take on the task of
generalizing f2py to f2***.

Regards,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] F2PY future

2008-04-11 Thread Pearu Peterson
Hi,

I am in the process of writing a scientific paper about F2PY that will
provide an automatic solution to the Python and Fortran connection
problem. While writing it, I also need to decide what the future
of F2PY will be. In particular, I have the following main questions for
which I am looking for suggestions:
1) where the future users of F2PY should find it,
2) how the users can get support (documentation, mailing lists, etc).
3) where to continue the development of F2PY.

Currently, F2PY has three "home pages":
1) http://cens.ioc.ee/projects/f2py2e/ - this has old f2py. The old f2py 
is unique in that it covers Numeric and numarray support, but is
not being developed anymore.
2) http://www.scipy.org/F2py - this covers the current f2py included
in NumPy. f2py in numpy is rather stable and is being maintained. There
are no plans to add new functionality (like F90 derived type support)
to the numpy f2py.
3) http://projects.scipy.org/scipy/numpy/wiki/G3F2PY - this is a
wiki page for the third generation of f2py. It aims at adding full
Fortran 90/../2003 support to the f2py tool, including F90 derived types
as well as POINTER arguments. It should replace numpy f2py in future.

Obviously, three "home pages" for f2py are too much, even if they
cover three different code sets. So, now I am looking to unify these
places into one site that will cover all three code sets with software,
documentation, and support.

Currently I can think of the following options:

Use Google Code. Pros: it provides the necessary infrastructure to develop
software projects and I am used to it. Cons: in my experience Google
Code has been broken too many times (at least three times in half a
year), though this may improve in the future. Also, Google Code provides
only SVN, no hg.

Since f2py will be an important tool for numpy/scipy users,
it would be natural to continue developing f2py under these projects.
However, there are rumours of cleaning extension generation tools out of
scipy/numpy, and so in the long term f2py may need to look for another
home. So, I wonder if hosting could be provided for f2py? Say, in the
form of f2py.scipy.org or www.f2py.org? I am rather ignorant about these
matters, so any help will be appreciated.

Thanks,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ticket #587

2008-04-09 Thread Pearu Peterson
On Wed, April 9, 2008 3:25 pm, Jarrod Millman wrote:
> On Wed, Apr 9, 2008 at 4:59 AM, Pearu Peterson <[EMAIL PROTECTED]> wrote:
>>  I have fixed it in r4996.
>
> Thanks,
>
>>  However, when trying to change the ticked status, I get forbidden
>> error:
>>
>>  TICKET_APPEND privileges are required to perform this operation
>>
>>  Could track admins add required privileges to me?
>
> Hmm...  Your permissions look right (i.e., you have TICKET_ADMIN,
> which is a superset of  TICKET_APPEND) and it looks like you were able
> to change the status OK.  Are you still having troubles with the Trac
> interface?

Yes, now I can change the tickets again. I thought that somebody
had fixed the permissions; if not, then I guess my browser was in some
weird state. But all is good now.

Thanks,
Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ticket #587

2008-04-09 Thread Pearu Peterson
On Wed, April 9, 2008 2:13 pm, Jarrod Millman wrote:
> Hey Pearu,
>
> Could you take a quick look at this:
> http://projects.scipy.org/scipy/numpy/ticket/587

I have fixed it in r4996.

However, when trying to change the ticket status, I get a forbidden error:

TICKET_APPEND privileges are required to perform this operation

Could the Trac admins add the required privileges for me?

Regards,
Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py functions, docstrings, and epydoc

2008-03-27 Thread Pearu Peterson
On Thu, March 27, 2008 7:20 pm, Tom Loredo wrote:
>
> Pearu-
>
>> smll_offset = smll_offset
>> exec `smll_offset.__doc__`
>
> Thanks for the quick and helpful response!  I'll give it
> a try.  I don't grasp why it works, though.  I suppose I don't
> need to, but... I'm guessing the exec adds stuff to the current
> namespace that isn't there until a fortran object's attributes
> are explicitly accessed.
>
> While I have your attention... could you clear this up, also just
> for my curiousity?  It's probably related.

I got this idea from how epydoc gets documentation strings
for variables:

http://epydoc.sourceforge.net/whatsnew.html

according to which a string constant containing documentation must follow
the variable assignment.

In our case,

  smll_offset = smll_offset

is variable assignment and

  exec `smll_offset.__doc__`

creates a string constant after the variable assignment.

>> f2py generated functions (that, by the way, are
>> actually instances of `fortran` type and define __call__ method).
>
> I had wondered about this when I first encountered this issue,
> and thought maybe I could figure out how to put some hook into
> epydoc so it would document anything with a __call__ method.
> But it looks like 'fortran' objects *don't* have a __call__
> (here _cbmlike is my f2py-generated module):
>
> In [1]: from inference.count._cbmlike import smllike
>
> In [2]: smllike
> Out[2]: 
>
> In [3]: dir smllike
> --> dir(smllike)
> Out[3]: ['__doc__', '_cpointer']
>
> In [4]: smllike.__call__
> ---
> AttributeErrorTraceback (most recent call
> last)
>
> /home/inference/loredo/tex/meetings/head08/ in ()
>
> AttributeError: __call__
>
> Yet despite this apparent absence of __call__, I can magically
> call smllike just fine.  Would you provide a quick explanation of
> what f2py and the fortran object are doing here?

A `fortran` object is an instance of an *extension type* `fortran`.
It does not have a __call__ method; the extension type has a slot
in its C struct that holds a function that will be called when
something tries to call the `fortran` object.

If there are epydoc developers around in this list then here's
a feature request: epydoc support for extension types.

Regards,
Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py functions, docstrings, and epydoc

2008-03-27 Thread Pearu Peterson
Hi,

Tom Loredo wrote:
> Hi folks-
> 
> Can anyone offer any tips on how I can get epydoc to produce
> API documentation for functions in an f2py-produced module?
> Currently they get listed in the generated docs as "Variables":
> 
> Variables
>   psigc = 
>   sigctp = 
>   smll_offset = 
> 
> Yet each of these objects is callable, and has a docstring.
> The module itself has docs that give a 1-line signature for
> each function, but that's only part of the docstring.

epydoc 3.0 supports variable documentation strings, but only
in Python code. However, one can also let epydoc generate
documentation for f2py generated functions (which, by the way, are
actually instances of the `fortran` type and define a __call__ method).
For that one needs to create a python module containing::

from somef2pyextmodule import psigc, sigctp, smll_offset

smll_offset = smll_offset
exec `smll_offset.__doc__`

sigctp = sigctp
exec `sigctp.__doc__`

psigc = psigc
exec `psigc.__doc__`

#etc

#eof

Now, when applying epydoc to this python file, epydoc will
produce docs also to these f2py objects.

It should be easy to create a Python script that will
generate these Python files, which epydoc could then use to
generate docs for f2py extension modules.
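
An untested sketch of such a generator script (the extension module name
somef2pyextmodule is just an example, and the type-name check assumes
f2py's `fortran` object type):

# make_epydoc_stubs.py -- write a pure Python stub module that epydoc can scan
import somef2pyextmodule as ext

lines = []
for name in dir(ext):
    obj = getattr(ext, name)
    if type(obj).__name__ == 'fortran':   # f2py-generated function objects
        lines.append('from somef2pyextmodule import %s' % name)
        lines.append('%s = %s' % (name, name))
        lines.append('exec `%s.__doc__`' % name)
        lines.append('')

open('somef2pyextmodule_doc.py', 'w').write('\n'.join(lines))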

> One reason I'd like to see the full docstrings documented by epydoc
> is that, for key functions, I'm loading the functions into a
> module and *changing* the docstrings, to have info beyond the
> limited f2py-generated docstrings.
> 
> On a related question, is there a way to provide input to f2py for
> function docstrings?  The manual hints that triple-quoted multiline
> blocks in the .pyf can be used to provide documentation, but when
> I add them, they don't appear to be used.

This feature is still only partially implemented and not enabled.
When I get more time, I'll finish it.

HTH,
Pearu
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py : callbacks without callback function as an argument

2008-03-12 Thread Pearu Peterson
On Wed, March 12, 2008 8:38 am, Daniel Creveling wrote:
> Hello-
>
> Is there a way to code a callback to python from
> fortran in a way such that the calling routine does
> not need the callback function as an input argument?
> I'm using the Intel fortran compiler for linux with
> numpy 1.0.4 and f2py gives version 2_4422.  My modules
> crash on loading because the external callback
> function is not set.  I noticed in the release notes
> for f2py 2.46.243 that it was a resolved issue, but I
> don't know how that development compares to version
> 2_4422 that comes with numpy.

The development version of f2py in numpy has a fix for
callback support that was broken for a few versions
of numpy. So, either use numpy from svn or wait a bit
for the 1.0.5 release.

> The example that I was trying to follow is from some
> documentation off of the web:
>
>   subroutine f1()
> print *, "in f1, calling f2 twice.."
> call f2()
> call f2()
> return
>   end
>
>   subroutine f2()
> cf2py intent(callback, hide) fpy
> external fpy
> print *, "in f2, calling fpy.."
> call fpy()
> return
>   end
>
> f2py -c -m pfromf extcallback.f
>
> I'm supposed to be able to define the callback
> function from Python like:
> >>> import pfromf
> >>> def f(): print "This is Python"
> >>> pfromf.fpy = f
>
> but I am unable to even load the module:
> >>> import pfromf
> Traceback (most recent call last):
>   File "", line 1, in 
> ImportError: ./pfromf.so: undefined symbol: fpy_

Yes, loading the module works with f2py from numpy svn.
However, calling f1 or f2 from Python fails because
the example does not leave a way to specify the fpy function.

Depending on your specific application, there are some ways
to fix it. For example, let the fpy function propagate from f1 to
f2 using an external argument to f1:

  subroutine f1(fpy)
  external fpy
  call f2(fpy)
  call f2(fpy)
  end

  subroutine f2(fpy)
  external fpy
  call fpy()
  end

If this is not suitable for your case, then there
exist ways to influence the generated wrapper code from
signature files using special hacks. I can explain them later
when I get a better idea of what you are trying to do.
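
With that change, the Python side would be used as in the following
untested sketch (the module name pfromf follows your example):

import pfromf

def fpy():
    print("This is Python")

# the callback is now passed explicitly as the external argument
pfromf.f1(fpy)
pfromf.f2(fpy)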

HTH,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: sympycore version 0.1 released

2008-02-29 Thread Pearu Peterson
We are proud to present a new Python package:

   sympycore - an efficient pure Python Computer Algebra System

Sympycore is available for download from

   http://sympycore.googlecode.com/

Sympycore is released under the New BSD License.

Sympycore provides efficient data structures for representing symbolic
expressions and methods to manipulate them. Sympycore uses a very
clear algebra oriented design that can be easily extended.

Sympycore is a pure Python package with no external dependencies, it
requires Python version 2.5 or higher to run. Sympycore uses Mpmath
for fast arbitrary-precision floating-point arithmetic that is
included into sympycore package.

Sympycore is to our knowledge the most efficient pure Python
implementation of a Computer Algebra System. Its speed is comparable
to Computer Algebra Systems implemented in compiled languages. Some
comparison benchmarks are available in

   * http://code.google.com/p/sympycore/wiki/Performance

   * http://code.google.com/p/sympycore/wiki/PerformanceHistory

and it is our aim to continue seeking for more efficient ways to
manipulate symbolic expressions:

   http://cens.ioc.ee/~pearu/sympycore_bench/

Sympycore version 0.1 provides the following features:

   * symbolic arithmetic operations
   * basic expression manipulation methods: expanding, substituting,
and pattern matching.
   * primitive algebra to represent unevaluated symbolic expressions
   * calculus algebra of symbolic expressions, unevaluated elementary
functions, differentiation and polynomial integration methods
   * univariate and multivariate polynomial rings
   * matrix rings
   * expressions with physical units
   * SympyCore User's Guide and API Docs are available online.

Take a look at the demo for sympycore 0.1 release:

   http://sympycore.googlecode.com/svn/trunk/doc/html/demo0_1.html

However, one should be aware that sympycore does not implement many
features that other Computer Algebra Systems do. The version number
0.1 speaks for itself:)

Sympycore is inspired by many attempts to implement a CAS for Python, and
it was created to fix SymPy performance and robustness issues.
Sympycore does not yet have nearly as many features as SymPy. Our goal
is to work in the direction of merging efforts with the SymPy
project in the near future.

Enjoy!

   * Pearu Peterson
   * Fredrik Johansson

Acknowledgments:

   * The work of Pearu Peterson on the SympyCore project is supported
by a Center of Excellence grant from the Norwegian Research Council to
Center for Biomedical Computing at Simula Research Laboratory.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py: sharing F90 module data between modules

2008-02-14 Thread Pearu Peterson
On Thu, February 14, 2008 8:24 am, Garry Willgoose wrote:
> Thanks for that. The docs suggest library dl is Unix only. Does that
> mean this solution will not work on Windows? Windows is on my
> implementation roadmap but I'm not quite there yet to test it.

I have no idea whether it will work on Windows or not. I would try it,
though, as there seem to be other ways than dl to find the needed
flags, as Lisandro pointed out.

> I guess I am now thinking maybe I can assemble (using f2py) an
> (aggregated) shared library on the fly from the individual module
> shared libraries when I open my modelling environment. I could check
> the aggregated library mod dates against all the .so files of the
> components and only rebuild the aggregated library if there was a
> newer component than the aggregated library. That appears to work
> (and is fast) except for one behaviour of f2py. If I give f2py a list
> of files that are ALL .so (i.e. no fortran files) then f2py quits
> without actually doing anything, even if all the component shared
> libraries all have perfectly fine pythons interfaces from previous
> f2py builds. I can give it a trivial fortran file (module .. end
> module) and it works fine.

Note that f2py's job is not really to perform linking tasks. It is
a useful feature that simplifies creating extension modules,
but please don't complain if it cannot be used as a general linker :)

> Why is that problem? I can envisage a user that just wants to use the
> environment without writing any additional fortran modules (and thus
> may not even have an installed fortran compiler) and if they screw up
> mod dates on the files (by say a ftp from one machine to another ...
> for instance on our cluster the compiler is only installed on one
> machine and only binaries are moved around the cluster) then the
> environment might want to reassemble (with f2py) the aggregated
> library because it (erroneously) thinks there is a newer component
> shared library. This will fail because f2py quits when asked to
> process ONLY .so files. If I have a trivial fortran file to force
> f2py then this forces users to have a fortran compiler on their
> machine, even if they do not want to actually compile a new fortran
> module component, simply because f2py will not operate unless it is
> offered at least one fortran file.

This is not a typical task for f2py. f2py is not a general-purpose
linker. It's amazing that f2py could even be used for such a task,
so I don't think that the above demonstrates any bug in f2py.

However, if you are worried about whether users have Fortran compilers
installed, then can you assume that they have a C compiler installed?
If so, then instead of a trivial Fortran file try using the following
trivial .pyf file:

python module dummy
  interface
subroutine dummyfunc()
  fortranname
  callstatement ;
end subroutine dummyfunc
  end interface
end python module dummy

that should force f2py to build a shared library dummy.so
with no Fortran dependencies.
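
Following your description above, the aggregation step would then be
something like (the component library names are just an example):

  f2py -c dummy.pyf comp1.so comp2.so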

> Does this make sense or am I just being thick about this? Is there a
> way of making f2py merge a number of existing shared libraries into a
> single library without having to compile any fortran. I guess I could
> just invoke the linker directly in the case where there are no
> fortran files to compile but is nice being able to use distutils to
> get away from platform dependencies.

Hopefully the hint above works for you.

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py: sharing F90 module data between modules

2008-02-12 Thread Pearu Peterson
On Tue, February 12, 2008 7:52 am, Garry Willgoose wrote:
> I have a suite of fortran modules that I want to wrap with f2py
> independently (so they appear to python as seperate imports) but
> where each module has access to another fortran module (which
> contains global data that is shared between the suite of fortran
> modules). I currently compile all the fortran modules, including the
> global data, in a single f2py statement so all the fortran code gets
> imported in a single statement

The source of this issue boils down to

  http://bugs.python.org/issue521854

which makes your goal unachievable because of how
Python loads shared libraries *by default*; see below.

> I guess the question is there any way that I can get fm3 to be shared
> between fm1 and fm2? The reasons for wanting to do this are because
> I'm developing a plug-in like architecture for environmental
> modelling where the user can develop new fortran modules (suitably
> f2py'ed) that can just be dropped into the module search path but
> still have access to the global data (subject to fortran module
> interfaces, etc).

The link above also gives a hint on how to resolve this issue.
Try using sys.setdlopenflags(...) before importing f2py generated
extension modules and then reset the state using sys.setdlopenflags(0).
See
  http://docs.python.org/lib/module-sys.html
for more information on how to find a proper value for ...
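
A minimal untested sketch of the idea, assuming the extension modules are
called fm1, fm2 and fm3 as in your example; RTLD_GLOBAL|RTLD_NOW is the
usual flag combination for making symbols visible across extension modules
(on recent Pythons the constants live in the os module, on older ones in
the dl/DLFCN modules):

import sys, os

old_flags = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_GLOBAL | os.RTLD_NOW)
import fm1, fm2, fm3           # can now resolve to the same Fortran module symbols
sys.setdlopenflags(old_flags)  # restore the default behaviour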

HTH,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py compiled module not found by python

2008-02-06 Thread Pearu Peterson
On Wed, February 6, 2008 8:35 pm, Chris wrote:
> Hello,
>
> I'm trying to build a package on Linux (Ubuntu) that contains a fortran
> module, built using f2py. However, despite the module building and
> installing without error, python cannot seem to see it (see log below).
> This works fine on Windows and Mac; the problem only seems to
> happen on Linux:

Can you import the flib module directly? That is, what happens if you
execute
  cd .../PyMC
  PYTHONPATH=. python -c 'import flib'
?
Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [F2PY]: Allocatable Arrays

2008-02-04 Thread Pearu Peterson
On Mon, February 4, 2008 4:39 pm, Lisandro Dalcin wrote:

> Pearu, now that f2py is part of numpy, I think it would be easier for
> you and also for users to post to the numpy list for f2py-related
> issues. What do you think?

Personally, I don't have strong opinions on this.

On one hand, it would be better if f2py-related issues were
raised in one and only one list. It could be the numpy list, as f2py
issues would not add much extra traffic to it.

On the other hand, if f2py-users messages were redirected to the numpy
list and also vice versa, then I am not sure that all subscribers
to the f2py-users list would appreciate extra messages about numpy
issues that might be irrelevant to them. Currently there are
about 180 subscribers to the f2py-users list, many of whom also
subscribe to the numpy-discussion list, I guess. If most of them
subscribed to the numpy-discussion list then we could get rid of the
f2py-users list. If the administrators of the numpy-discussion list could
share (privately) the list of subscribers' emails with me then I could
check how many of the f2py users subscribe to the numpy list, and
it would be easier to make a decision on the future of the f2py-users
list.

When I get back to working on f2py g3, I'll probably
create a Google Code project for it. There we'll have a separate
list for the future version of f2py anyway.

Regards,
Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

