Re: [Numpy-discussion] Including .f2py_f2cmap in numpy.distutils?

2015-09-28 Thread Pearu Peterson
Hi,

Currently, .f2py_f2cmap must be located in the directory where setup.py or
f2py.py is called (to be exact, where numpy.f2py.capi_maps is imported).
This location is hardcoded and there is no way to specify the file location
from within setup.py scripts.

However, you don't need to use a .f2py_f2cmap file to specify the correct
mapping. Read the code in numpy/f2py/capi_maps.py and you'll find that
inserting the following codelet into the setup.py file might work (untested code):

from numpy.f2py.capi_maps import f2c_map
f2c_map['real'].update(sp='float', dp='double', qp='long_double')
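For completeness, since the thread is about where the .f2py_f2cmap file may live: the file itself contains a single Python dict literal, which f2py reads and evals. A sketch of equivalent file contents (the kind names sp/dp/qp are taken from the mapping above):

```python
# A .f2py_f2cmap file contains one Python dict literal mapping Fortran
# type names to {kind parameter: C type name}; f2py evals the file text.
# The kind names sp/dp/qp mirror the mapping suggested in this mail.
f2cmap_text = "dict(real=dict(sp='float', dp='double', qp='long_double'))"
print(eval(f2cmap_text))
```

The outer keys are Fortran type names and the inner keys are the kind parameters as they appear in the Fortran source.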

HTH,
Pearu


On Fri, Sep 25, 2015 at 10:04 PM, Eric Hermes  wrote:

> Hello,
>
>
>
> I am attempting to set up a numpy.distutils setup.py for a small python
> program that uses a Fortran module. Currently, the setup is able to compile
> and install the program seemingly successfully, but the f2py script
> erroneously maps the data types I am using to float, rather than double. I
> have the proper mapping set up in a .f2py_f2cmap in the source directory,
> but it does not seem to be copied to the build directory at compile time,
> and I cannot figure out how to make it get copied. Is there a simple way to
> do what I am trying to do? Alternatively, is there a way to specify the
> mapping in my setup.py scripts?
>
>
>
> Here's a github repo with the project:
>
>
>
> https://github.com/ehermes/ased3
>
>
>
> Thanks,
>
> Eric Hermes
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py and callbacks with variables

2015-08-13 Thread Pearu Peterson
Hi Casey,

On Wed, Aug 12, 2015 at 11:46 PM, Casey Deen d...@mpia.de wrote:

 Hi Pearu-

Thanks so much!  This works!  Can you point me to a reference for the
 format of the .pyf files?  My ~day of searching found a few pages on the
 scipy website, but nothing which went into this amount of detail.


Try this:

  https://sysbio.ioc.ee/projects/f2py2e/usersguide/index.html#signature-file


 I also asked Stackoverflow, and unless you object, I'd like to add your
 explanation and mark it as SOLVED for future poor souls wrestling with
 this problem.  I'll also update the github repository with before and
 after versions of the .pyf file.


Go ahead with stackoverflow.

Best regards,
Pearu

Cheers,
 Casey

 On 08/12/2015 09:34 PM, Pearu Peterson wrote:
  Hi Casey,
 
  What you observe is not an f2py bug. When f2py sees code like
 
  subroutine foo
    call bar
  end subroutine foo
 
  then it will not make an attempt to analyze bar, because of the implicit
  assumption that statements with no references to foo's arguments
  are irrelevant for wrapper function generation.
  For your example, f2py needs some help. Try the following signature in the
  .pyf file:
 
  subroutine barney ! in :flintstone:nocallback.f
  use test__user__routines, fred=fred, bambam=bambam
  intent(callback, hide) fred
  external fred
  intent(callback,hide) bambam
  external bambam
  end subroutine barney
 
  Btw, instead of
 
f2py -c -m flintstone flintstone.pyf callback.f nocallback.f
 
  use
 
f2py -c flintstone.pyf callback.f nocallback.f
 
  because module name comes from the .pyf file.
 
  HTH,
  Pearu
 
  On Wed, Aug 12, 2015 at 7:12 PM, Casey Deen d...@mpia.de wrote:
 
  Hi all-
 
 I've run into what I think might be a bug in f2py and callbacks to
  python.  Or, maybe I'm not using things correctly.  I have created a
  very minimal example which illustrates my problem at:
 
  https://github.com/soylentdeen/fluffy-kumquat
 
  The issue seems to affect call backs with variables, but only when
 they
  are called indirectly (i.e. from other fortran routines).  For
 example,
  if I have a python function
 
  def show_number(n):
      print("%d" % n)
 
  and I setup a callback in a fortran routine:
 
subroutine cb
  cf2py intent(callback, hide) blah
external blah
call blah(5)
end
 
  and connect it to the python routine
  fortranObject.blah = show_number
 
  I can successfully call the cb routine from python:
 
  fortranObject.cb
  5
 
  However, if I call the cb routine from within another fortran
 routine,
  it seems to lose its marbles
 
subroutine no_cb
call cb
end
 
  capi_return is NULL
  Call-back cb_blah_in_cb__user__routines failed.
 
  For more information, please have a look at the github repository.
 I've
  reproduced the behavior on both linux and mac.  I'm not sure if this
 is
  an error in the way I'm using the code, or if it is an actual bug.
 Any
  and all help would be very much appreciated.
 
  Cheers,
  Casey
 
 
  --
  Dr. Casey Deen
  Post-doctoral Researcher
  d...@mpia.de
  +49-6221-528-375
  Max Planck Institut für Astronomie (MPIA)
  Königstuhl 17  D-69117 Heidelberg, Germany
 

 --
 Dr. Casey Deen
 Post-doctoral Researcher
 d...@mpia.de   +49-6221-528-375
 Max Planck Institut für Astronomie (MPIA)
 Königstuhl 17  D-69117 Heidelberg, Germany



Re: [Numpy-discussion] f2py and callbacks with variables

2015-08-12 Thread Pearu Peterson
Hi Casey,

What you observe is not an f2py bug. When f2py sees code like

subroutine foo
  call bar
end subroutine foo

then it will not make an attempt to analyze bar, because of the implicit
assumption that statements with no references to foo's arguments are
irrelevant for wrapper function generation.
For your example, f2py needs some help. Try the following signature in the
.pyf file:

subroutine barney ! in :flintstone:nocallback.f
use test__user__routines, fred=fred, bambam=bambam
intent(callback, hide) fred
external fred
intent(callback,hide) bambam
external bambam
end subroutine barney

Btw, instead of

  f2py -c -m flintstone flintstone.pyf callback.f nocallback.f

use

  f2py -c flintstone.pyf callback.f nocallback.f

because module name comes from the .pyf file.
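For what it's worth, the Python-side wiring can be illustrated with a plain-Python stand-in (FakeFortranModule below is hypothetical; the real object is the extension module that f2py builds from flintstone.pyf):

```python
# Plain-Python stand-in for the f2py callback pattern discussed above.
# With f2py, `cb` would be the wrapped Fortran routine and `blah` the
# hidden callback slot that is assigned from Python.
class FakeFortranModule:
    def cb(self):
        self.blah(5)        # corresponds to the Fortran `call blah(5)`

    def no_cb(self):
        self.cb()           # indirect call, as in nocallback.f

def show_number(n):
    print("%d" % n)

m = FakeFortranModule()
m.blah = show_number        # mirrors `fortranObject.blah = show_number`
m.no_cb()                   # prints: 5
```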

HTH,
Pearu

On Wed, Aug 12, 2015 at 7:12 PM, Casey Deen d...@mpia.de wrote:

 Hi all-

I've run into what I think might be a bug in f2py and callbacks to
 python.  Or, maybe I'm not using things correctly.  I have created a
 very minimal example which illustrates my problem at:

 https://github.com/soylentdeen/fluffy-kumquat

 The issue seems to affect call backs with variables, but only when they
 are called indirectly (i.e. from other fortran routines).  For example,
 if I have a python function

 def show_number(n):
      print("%d" % n)

 and I setup a callback in a fortran routine:

   subroutine cb
 cf2py intent(callback, hide) blah
   external blah
   call blah(5)
   end

 and connect it to the python routine
 fortranObject.blah = show_number

 I can successfully call the cb routine from python:

 fortranObject.cb
 5

 However, if I call the cb routine from within another fortran routine,
 it seems to lose its marbles

   subroutine no_cb
   call cb
   end

 capi_return is NULL
 Call-back cb_blah_in_cb__user__routines failed.

 For more information, please have a look at the github repository.  I've
 reproduced the behavior on both linux and mac.  I'm not sure if this is
 an error in the way I'm using the code, or if it is an actual bug.  Any
 and all help would be very much appreciated.

 Cheers,
 Casey


 --
 Dr. Casey Deen
 Post-doctoral Researcher
 d...@mpia.de   +49-6221-528-375
 Max Planck Institut für Astronomie (MPIA)
 Königstuhl 17  D-69117 Heidelberg, Germany


Re: [Numpy-discussion] f2py and debug mode

2014-10-03 Thread Pearu Peterson
Hi,

When you run f2py without the -c option, the wrapper source files are generated
without being compiled.
With these source files and fortranobject.c, you can build the extension
module with your specific compiler options using the compiler framework of
your choice.
I am not familiar enough with Visual Studio specifics to suggest a more
detailed solution, but after generating the wrapper source files there is no
f2py-specific way to build the extension module; in fact, `f2py -c` relies on
distutils compilation/linkage methods.

HTH,
Pearu

On Tue, Sep 30, 2014 at 4:15 PM, Bayard ferdinand.bay...@strains.fr wrote:

 Hello to all.
 I'm aiming to wrap a Fortran program into Python. I started to work with
 f2py, and am trying to setup a debug mode where I could reach
 breakpoints in Fortran module launched by Python. I've been looking in
 the existing post, but not seeing things like that.

 I'm used to work with visual studio 2012 and Intel Fortran compiler, I
 have tried to get that point doing :
 1) Run f2py -m to get *.c wrapper
 2) Embed it in a C Project in Visual Studio, containing also with
 fortranobject.c and fortranobject.h,
 3) Create a solution which also contains my fortran files compiled as a lib
 4) Generate in debug mode a dll with extension pyd (to get to that
 point name of the main function in Fortran by _main).

 I compiled without any error, and reach break point in C Wrapper, but
 not in Fortran, and the fortran code seems not to be executed (whereas
 it is when compiling with f2py -c). Trying to understand f2py code, I
 noticed that f2py is not only writing c-wrapper, but compiling it in a
 specific way. Is there a way to get a debug mode in Visual Studio with
 f2py (some members of the team are used to it) ? Any alternative tool we
 should use for debugging ?

 Thanks for answering
 Ferdinand







[Numpy-discussion] f2py and Fortran STOP statement issue

2013-11-20 Thread Pearu Peterson
Hi,

The issue with wrapping Fortran codes that contain STOP statements has been
raised several times in past with no good working solution proposed.

Recently the issue was raised again in the f2py issue tracker. Since the user
was willing to test out a few ideas, with positive results, I decided to
describe the outcome (a working solution) on the following wiki page:

  https://code.google.com/p/f2py/wiki/FAQ2ed

Just FYI,
Pearu


Re: [Numpy-discussion] f2py with allocatable arrays

2012-07-03 Thread Pearu Peterson
On Tue, Jul 3, 2012 at 5:20 PM, Sturla Molden stu...@molden.no wrote:


 As for f2py: Allocatable arrays are local variables for internal use,
 and they are not a part of the subroutine's calling interface. f2py only
 needs to know about the interface, not the local variables.


One can have allocatable arrays in a module data block, for instance, where
they are global. f2py supports wrapping these allocatable arrays to Python.
See, for example,


http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#allocatable-arrays

Pearu


Re: [Numpy-discussion] Question on F/C-ordering in numpy svd

2012-01-13 Thread Pearu Peterson


On 01/12/2012 04:21 PM, Ivan Oseledets wrote:
 Dear all!

 I quite new to numpy and python.
 I am a matlab user, my work is mainly
 on multidimensional arrays, and I have a question on the svd function
 from numpy.linalg

 It seems that

 u,s,v=svd(a,full_matrices=False)

 returns u and v in the F-contiguous format.

The reason for this is that the underlying computational routine
is in Fortran (when using the system LAPACK library, for instance), which
requires and returns F-contiguous arrays; the current behaviour
guarantees the most memory-efficient computation of the svd.

 That is not in a good agreement with other numpy stuff, where
 C-ordering is default.
 For example, matrix multiplication, dot() ignores ordering and returns
 result always in C-ordering.
 (which is documented), but the svd feature is not documented.

In generic numpy operation, the particular ordering of arrays
should not matter as the underlying code should know how to
compute array operation results from different input orderings
efficiently.

This behaviour of svd should be documented. However, one
should check that when using the svd from numpy's lapack_lite (which is
f2c-generated code and could, in principle, also use C-ordering),
F-contiguous arrays are actually returned.
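A quick way to inspect this (a small check script; which ordering flag is set may vary across numpy versions and LAPACK backends, so treat the flags as informational):

```python
# Inspect the memory order of svd results; requires numpy.
import numpy as np

a = np.arange(15, dtype=float).reshape(5, 3)
u, s, v = np.linalg.svd(a, full_matrices=False)

# Which flag is True depends on the numpy version / LAPACK backend.
print(u.flags['F_CONTIGUOUS'], u.flags['C_CONTIGUOUS'])

# Regardless of ordering, the factors reconstruct `a`.
print(np.allclose(u @ np.diag(s) @ v, a))
```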

Regards,
Pearu


Re: [Numpy-discussion] Build of current Git HEAD for NumPy fails

2011-08-19 Thread Pearu Peterson


On 08/19/2011 02:26 PM, Dirk Ullrich wrote:
 Hi,

 when trying to build current Git HAED of NumPy with - both for
 $PYTHON=python2 or $PYTHON=python3:

 $PYTHON setup.py config_fc --fcompiler=gnu95 install --prefix=$WHATEVER

 I get the following error - here for PYTHON=python3.2

The command works fine here with Numpy HEAD and Python 2.7.
Btw, why do you specify --fcompiler=gnu95 for numpy? Numpy
has no Fortran sources, so a Fortran compiler is not needed
for building Numpy (unless you use Fortran libraries
for numpy.linalg).

 running build_clib
...
File 
 /common/packages/build/makepkg-du/python-numpy-git/src/numpy-build/build/py3k/numpy/distutils/command/build_clib.py,
 line 179, in build_a_library
  fcompiler.extra_f77_compile_args =
 build_info.get('extra_f77_compile_args') or []
 AttributeError: 'str' object has no attribute 'extra_f77_compile_args'

Reading the code, I don't see how this can happen. Very strange.
Anyway, I cleaned up build_clib to follow similar coding convention
as in build_ext. Could you try numpy head again?

Regards,
Pearu


Re: [Numpy-discussion] How to start at line # x when using numpy.memmap

2011-08-19 Thread Pearu Peterson


On 08/19/2011 05:01 PM, Brent Pedersen wrote:
 On Fri, Aug 19, 2011 at 7:29 AM, Jeremy Conlin jlcon...@gmail.com wrote:
 On Fri, Aug 19, 2011 at 7:19 AM, Pauli Virtanen p...@iki.fi wrote:
 Fri, 19 Aug 2011 07:00:31 -0600, Jeremy Conlin wrote:
 I would like to use numpy's memmap on some data files I have. The first
 12 or so lines of the files contain text (header information) and the
 remainder has the numerical data. Is there a way I can tell memmap to
 skip a specified number of lines instead of a number of bytes?

 First use standard Python I/O functions to determine the number of
 bytes to skip at the beginning and the number of data items. Then pass
 in `offset` and `shape` parameters to numpy.memmap.

 Thanks for that suggestion. However, I'm unfamiliar with the I/O
 functions you are referring to. Can you point me to do the
 documentation?

 Thanks again,
 Jeremy


 this might get you started:


 import numpy as np

 # make some fake data with 12 header lines.
 with open('test.mm', 'w') as fhw:
   print >>fhw, "\n".join('header' for i in range(12))
  np.arange(100, dtype=np.uint).tofile(fhw)

 # use normal python io to determine of offset after 12 lines.
 with open('test.mm') as fhr:
  for i in range(12): fhr.readline()
  offset = fhr.tell()

I think that before reading a line the program should
check whether the line starts with '#'. Otherwise fhr.readline()
may return a very large chunk of data (possibly the rest of the file
content) that ought to be read only via memmap.
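The offset-finding step can also be sketched with the standard library alone; numpy.memmap would then be given the resulting byte offset via its offset= parameter (file name and header text below are made up for illustration):

```python
# Find the byte offset past a 12-line text header, then read the binary
# payload from there -- stdlib-only sketch of the technique above.
import os, struct, tempfile

path = os.path.join(tempfile.mkdtemp(), "test.mm")
with open(path, "wb") as fhw:
    fhw.write(b"header\n" * 12)                    # 12 text header lines
    fhw.write(struct.pack("<4I", 10, 20, 30, 40))  # binary data

# use normal Python I/O to determine the offset after 12 lines
with open(path, "rb") as fhr:
    for _ in range(12):
        fhr.readline()
    offset = fhr.tell()

print(offset)            # 12 * len(b"header\n") == 84

with open(path, "rb") as fhr:
    fhr.seek(offset)     # numpy.memmap(path, dtype='<u4', offset=offset)
    print(struct.unpack("<4I", fhr.read(16)))      # (10, 20, 30, 40)
```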

HTH,
Pearu


Re: [Numpy-discussion] f2py - undefined symbol: _intel_fast_memset [SEC=UNCLASSIFIED]

2011-08-16 Thread Pearu Peterson


On 08/16/2011 02:32 PM, Jin Lee wrote:
 Hello,

 This is my very first attempt at using f2py but I have come across a problem. 
 If anyone can assist me I would appreciate it very much.

 I have a very simple test Fortran source, sub.f90 which is:

 subroutine sub1(x,y)
 implicit none

 integer, intent(in) :: x
 integer, intent(out) :: y

 ! start
 y = x

 end subroutine sub1


 I then used f2py to produce an object file, sub.so:

 f2py -c -m sub sub.f90 --fcompiler='gfortran'

 After starting a Python interactive session I tried to import the 
 Fortran-derived Python module but I get an error message:

  >>> import sub
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ImportError: ./sub.so: undefined symbol: _intel_fast_memset


 Can anyone suggest what this error message means and how I can overcome it, 
 please?

Try
   f2py -c -m sub sub.f90 --fcompiler=gnu95

HTH,
Pearu


Re: [Numpy-discussion] [f2py] How to specify compile options in setup.py

2011-08-16 Thread Pearu Peterson
Hi,

On Tue, Aug 16, 2011 at 7:50 PM, Jose Gomez-Dans jgomezd...@gmail.com wrote:

 Hi,

 Up to now, I have managed to build Fortran extensions with f2py by ussing
 the following command:
 $ python setup.py config_fc --fcompiler=gnu95
  --f77flags='-fmy_flags' --f90flags='-fmy_flags' build

 I think that these options should be able to go in a setup.py file, and use
 the f2py_options file. One way of doing this is to extend sys.argv with the
 required command line options:
 import sys
  sys.argv.extend( ['config_fc', '--fcompiler=gnu95',
   '--f77flags=-fmy_flags', '--f90flags=-fmy_flags'] )

 This works well if all the extensions require the same flags. In my case,
 however, One of the extensions requires a different set of flags (in
 particular, it requires that flag  -fdefault-real-8 isn't set, which is
 required by the extensions). I tried setting the f2py_options in the
 add_extension method call:

 config.add_extension( 'my_extension', sources = my_sources,
  f2py_options=['f77flags=-ffixed-line-length-0 -fdefault-real-8',
 'f90flags=-fdefault-real-8']  )

  This compiles the extensions (using the two dashes in front of the f2py
  option, e.g. --f77flags, results in an unrecognised option), but the
  f2py_options go unheeded. Here's the relevant bit of the output from
  python setup.py build:

 compiling Fortran sources
 Fortran f77 compiler: /usr/bin/gfortran -ffixed-line-length-0 -fPIC -O3
 -march=native
 Fortran f90 compiler: /usr/bin/gfortran -ffixed-line-length-0 -fPIC -O3
 -march=native
 Fortran fix compiler: /usr/bin/gfortran -Wall -ffixed-form
 -fno-second-underscore -ffixed-line-length-0 -fPIC -O3 -march=native
 compile options: '-Ibuild/src.linux-i686-2.7
 -I/usr/lib/pymodules/python2.7/numpy/core/include -I/usr/include/python2.7
 -c'
 extra options: '-Jbuild/temp.linux-i686-2.7/my_dir
 -Ibuild/temp.linux-i686-2.7/my_dir'

 How can I disable (or enable) one option for compiling one particular
 extension?


You cannot do it unless you update numpy from the git repo.
I have just implemented support for the extra_f77_compile_args and
extra_f90_compile_args options, which can be used in config.add_extension
as well as in config.add_library.
See
  https://github.com/numpy/numpy/commit/43862759

So, with recent numpy the following will work

config.add_extension( 'my_extension', sources = my_sources,
                      extra_f77_compile_args = ["-ffixed-line-length-0", "-fdefault-real-8"],
                      extra_f90_compile_args = ["-fdefault-real-8"],
                    )

HTH,
Pearu


[Numpy-discussion] ULONG not in UINT16, UINT32, UINT64 under 64-bit windows, is this possible?

2011-08-15 Thread Pearu Peterson
Hi,

A student of mine, using 32-bit numpy 1.5 under 64-bit Windows 7, noticed that
when giving a numpy array with dtype=uint32 to an extension module, the
following codelet would fail:

switch(PyArray_TYPE(ARR)) {
  case PyArray_UINT16: /* do smth */ break;
  case PyArray_UINT32: /* do smth */ break;
  case PyArray_UINT64: /* do smth */ break;
  default: /* raise type error exception */
}

The same test worked fine under Linux.

Checking the value of PyArray_TYPE(ARR) (=8) showed that it corresponds to
NPY_ULONG (when counting the items in the enum definition).

Is this situation possible where NPY_ULONG does not correspond to a 16 or 32
or 64 bit integer?
Or does this indicate a bug somewhere for this particular platform?

Thanks,
Pearu


Re: [Numpy-discussion] Numpy steering group?

2011-05-26 Thread Pearu Peterson
Hi,

Would it be possible to set up a signing system where anyone who would like
to support Clint could sign, and to advertise the system on relevant mailing
lists?
This would provide a larger body of supporters for the letter and perhaps
give it greater impact on whomever it is addressed to.
Personally, I would be happy to sign such a letter.

On the letter: it should also mention the scipy community, as they benefit
most from the ATLAS speed.

Best regards,
Pearu


On Fri, May 27, 2011 at 12:03 AM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Wed, May 4, 2011 at 9:24 AM, Robert Kern robert.k...@gmail.com wrote:
  On Wed, May 4, 2011 at 11:14, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  On Tue, May 3, 2011 at 7:58 PM, Robert Kern robert.k...@gmail.com
 wrote:
 
  I can't speak for the rest of the group, but as for myself, if you
  would like to draft such a letter, I'm sure I will agree with its
  contents.
 
  Thank you - sadly I am not confident in deserving your confidence, but
  I will do my best to say something sensible.   Any objections to a
  public google doc?
 
  Even better!

 I've put up a draft here:

 numpy-whaley-support -

 https://docs.google.com/document/d/1gPhUUjWqNpRatw90kCqL1WPWvn1yicf2VAowWSyHlno/edit?hl=en_USauthkey=CPv49_cK

 I didn't know who to put as signatories.  Maybe an extended steering
 group like (from http://scipy.org/Developer_Zone):

 Jarrod Millman
 Eric Jones
 Robert Kern
 Travis Oliphant
 Stefan van der Walt

 plus:

 Pauli
 Ralf
 Chuck

 or something like that?  Anyone else care to sign / edit?  Mark W for
 example?  Sorry, I haven't been following the numpy commits very
 carefully of late.

 Best,

 Matthew


Re: [Numpy-discussion] Numpy steering group?

2011-05-26 Thread Pearu Peterson
On Fri, May 27, 2011 at 7:39 AM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Thu, May 26, 2011 at 9:32 PM, Pearu Peterson
 pearu.peter...@gmail.com wrote:
  Hi,
 
  Would it be possible to setup a signing system where anyone who would
 like
  to support Clint could sign and advertise the system on relevant mailing
  lists?
  This would provide larger body of supporters for this letter and perhaps
  will have greater impact to whom the letter will be
  addressed. Personally, I would be happy to sign to such a letter.
 
  On the letter: the letter should also mention scipy community as they
  benefit
  most from the ATLAS speed.

 Maybe it would be best phrased then as 'numpy and scipy developers'
 instead of the steering group?

 I'm not sure how this kind of thing works for tenure letters, I would
 guess that if there are a very large number of signatures it might be
 difficult to see who is being represented...  I'm open to suggestions.
  I can also ask Clint.

 I've added you as an editor - would you consider adding your name at
 the end, and maybe something about scipy? - you know the scipy blas /
 lapack stuff much better than I do.


I have added my name. The document is currently numpy-oriented and I am
not sure where the scipy part should go.

Technical summary of the situation with the scipy blas/lapack stuff:
The main difference between numpy and scipy, lapack-wise, is that numpy has
a lite C version of a few lapack routines for the case where the lapack
libraries are not available at build time, while for scipy the lapack
libraries are prerequisites, as scipy provides interfaces to a much larger
number of lapack routines. Having ATLAS in addition would greatly increase
the performance of these routines.

Pearu


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
Hi,

I have used bitarray for that

  http://pypi.python.org/pypi/bitarray/

Here


http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy

you can find bitarray with numpy support.
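As a dependency-free alternative (a minimal sketch, not from the original thread), the big-endian list of bools that binary_repr implies can be built with plain integer shifts:

```python
# Convert an integer to a big-endian list of bits (True/False), matching
# the digit order of numpy.binary_repr -- plain-Python sketch.
def int_to_bits(i, width=None):
    bits = []
    while i:
        bits.append(bool(i & 1))   # collect least-significant bit first
        i >>= 1
    if width is not None:
        bits.extend([False] * (width - len(bits)))
    return bits[::-1]              # reverse to big-endian order

print(int_to_bits(23))   # [True, False, True, True, True]
```

Wrapping the result in numpy.array(...) yields the bool array the poster asked for.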

HTH,
Pearu

On Mon, May 16, 2011 at 9:55 PM, Nikolas Tautenhahn virt...@gmx.de wrote:

 Hi,

 for some research, I need to convert lots of integers into their bit
 representation - but as a bit array, not a string like
 numpy.binary_repr() returns it.

 So instead of
 In [22]: numpy.binary_repr(23)
  Out[22]: '10111'


 I'd need:
 numpy.binary_magic(23)
 Out: array([ True, False,  True,  True,  True], dtype=bool)

 is there any way to do this efficiently?

 best regards,
 Nik


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
On Tue, May 17, 2011 at 12:04 AM, Nikolas Tautenhahn virt...@gmx.de wrote:

 Hi,

  Here
 
 
 http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy
 
  you can find bitarray with numpy support.
 

 Thanks, that looks promising - to get a numpy array, I need to do

 numpy.array(bitarray.bitarray(numpy.binary_repr(i, l)))

  for an integer i and l with i < 2**l, right?


If l < 64 and little endian is assumed then you can use the

  fromword(i, l)

method:

>>> from libtiff import bitarray
>>> barr = bitarray.bitarray(0, 'little')
>>> barr.fromword(3, 4)
>>> barr
bitarray('1100')

that will append 4 bits of the value 3 to the bitarray barr.

Also check out various bitarray `to*` and `from*` methods.

HTH,
Pearu


Re: [Numpy-discussion] convert integer into bit array

2011-05-16 Thread Pearu Peterson
On Tue, May 17, 2011 at 8:05 AM, Pearu Peterson pearu.peter...@gmail.com wrote:



On Tue, May 17, 2011 at 12:04 AM, Nikolas Tautenhahn virt...@gmx.de wrote:

 Hi,

  Here
 
 
 http://code.google.com/p/pylibtiff/source/browse/#svn%2Ftrunk%2Flibtiff%2Fbitarray-0.3.5-numpy
 
  you can find bitarray with numpy support.
 

 Thanks, that looks promising - to get a numpy array, I need to do

 numpy.array(bitarray.bitarray(numpy.binary_repr(i, l)))

  for an integer i and l with i < 2**l, right?


  If l < 64 and little endian is assumed then you can use the

   fromword(i, l)

 method:

  from libtiff import bitarray
  barr = bitarray.bitarray(0, 'little')
  barr.fromword(3,4)
  barr
 bitarray('1100')

 that will append 4 bits of the value 3 to the bitarray barr.


>>> numpy.array(barr)
array([ True,  True, False, False], dtype=bool)

to complete the example...

Pearu


 Also check out various bitarray `to*` and `from*` methods.

 HTH,
 Pearu



Re: [Numpy-discussion] f2py complications

2011-05-12 Thread Pearu Peterson
On Thu, May 12, 2011 at 4:14 PM, Jose Gomez-Dans jgomezd...@gmail.com wrote:

 Hi,

 We have some fortran code that we'd like to wrap using f2py. The code
 consists of a library that we compile into an .so file, and a file that
 wraps that library, which is then wrapped by f2py to provide a clean
 interface to the library. The process thus includes two steps:
 1.- Use a makefile to compile the library
 2.- Use f2py to create the python bindings from a fortran file, and link
 dynamically to the library created in the previous step.

 Now, I don't see why just a call to f2py shouldn't suffice (we don't really
 need to have an .so lying around, and it implies that we have to set eg
 $LD_LIBRARY_PATH to search for it).


It would be sufficient to just call f2py once.


 I thought about using a pyf file for this, and use the only :: option:
 $ f2py -h my_model.pyf -m my_model  my_wrapper.f90 only: func1 func2 func3
 : all_my_other_files.f even_more_files.f90


The above command (with using -h option) will just create the my_model.pyf
file, no extra magic here,


  $ f2py -c -m my_model --f90flags="-fdefault-real-8 -O3 -march=native -m32"
  --f90exec=/usr/bin/gfortran --f77exec=/usr/bin/gfortran --opt=-O3
  my_model.pyf


You need to include all .f and .f90 files to the f2py command and -m has no
effect when .pyf is specified:

f2py -c --f90flags="-fdefault-real-8 -O3 -march=native -m32"
--f90exec=/usr/bin/gfortran --f77exec=/usr/bin/gfortran --opt=-O3
my_model.pyf all_my_other_files.f even_more_files.f90

This command (with .pyf file in command line) reads only the my_model.pyf
file and creates wrapper code. It does not scan any Fortran files but only
compiles them (with -c in command line) and links to the extension module.

In fact, IIRC, the two above command lines can be joined into one:

  f2py -c -m my_model my_wrapper.f90 only: func1 func2 func3 :
  all_my_other_files.f even_more_files.f90 --f90flags="-fdefault-real-8 -O3
  -march=native -m32" --f90exec=/usr/bin/gfortran --f77exec=/usr/bin/gfortran
  --opt=-O3


 This however, doesn't seem to work, with python complaining about missing
 things. If I put all my *.f and *f90 files after the my_model.pyf (which
 doesn't seem to have them in the file), I get undefined symbol errors when
 importing the .so in python.


Are you sure that you specified all the needed Fortran files on the f2py
command line? Where are the symbols defined that are reported as undefined?

Additionally, it would be great to have this compilation in a
 distutils-friendly package, but how do you specify all these compiler flags?


It is possible. See numpy/distutils/tests for examples. To use gfortran, run

  python setup.py build --fcompiler=gnu95

HTH,
Pearu


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Fri, May 6, 2011 at 5:55 PM, DJ Luscher d...@lanl.gov wrote:

 Pearu Peterson pearu.peterson at gmail.com writes:
 
 
   Thanks for the bug report! These issues are now fixed in:
   https://github.com/numpy/numpy/commit/f393b604
   Ralf, feel free to apply this changeset to the 1.6.x branch if appropriate.
   Regards, Pearu
 

 Excellent! Thank you.

 I'll cautiously add another concern because I believe it is related.  Using
 f2py to compile subroutines where the dimensions of the result variable are
 derived from the two-argument usage of size() on an assumed-shape input
 variable does not compile for me.

 example:
 foo_size.f90

subroutine trans(x,y)

  implicit none

  real, intent(in), dimension(:,:) :: x
  real, intent(out), dimension( size(x,2), size(x,1) ) :: y

  integer :: N, M, i, j

  N = size(x,1)
  M = size(x,2)

  DO i=1,N
do j=1,M
  y(j,i) = x(i,j)
END DO
  END DO

end subroutine trans

 For this example the autogenerated fortran wrapper is:
 m-f2pywrappers.f
  subroutine f2pywraptrans (x, y, f2py_x_d0, f2py_x_d1)
  integer f2py_x_d0
  integer f2py_x_d1
  real x(f2py_x_d0,f2py_x_d1)
  real y(shape(x,-1+2),shape(x,-1+1))
  interface
  subroutine trans(x,y)
  real, intent(in),dimension(:,:) :: x
  real, intent(out),dimension( size(x,2), size(x,1) ) :: y
  end subroutine trans
  end interface
  call trans(x, y)
  end

 which (inappropriately?) translates the fortran SIZE(var,dim) into fortran
 SHAPE(var, kind).  Please let me know if it is poor form for me to follow
 on
 with this secondary issue, but it seems like it is related and a
 straight-forward fix.


This issue is now fixed in

  https://github.com/numpy/numpy/commit/a859492c

I had to implement size function in C that can be called both as size(var)
and size(var, dim).
The size-to-shape mapping feature is now removed. I have updated the
corresponding
release notes in

https://github.com/numpy/numpy/commit/1f2e751b

Thanks for testing these new f2py features,
Pearu
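For readers following along: the removed mapping encoded exactly the off-by-one between Fortran's 1-based `size(array, dim)` and NumPy's 0-based `shape` indexing. A quick sketch of the correspondence in plain NumPy (no f2py involved):

```python
import numpy as np

x = np.zeros((3, 5), order='F')  # Fortran-ordered 3x5 array

# Fortran: size(x) is the total element count; size(x, d) is the extent
# of dimension d, counted from 1.  NumPy's shape is indexed from 0.
assert x.size == 15
assert x.shape[0] == 3   # Fortran size(x, 1)
assert x.shape[1] == 5   # Fortran size(x, 2)

# So dimension( size(x,2), size(x,1) ) in the trans example above declares
# the transposed shape:
assert x.T.shape == (5, 3)
```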


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Fri, May 6, 2011 at 10:18 PM, DJ Luscher d...@lanl.gov wrote:


 I have encountered another minor hangup.  For assumed-shape array-valued
 functions defined within a fortran module there seems to be some trouble in
 the
 autogenerated subroutine wrapper interface.  I think it has to do with the
 order
 in which variables are declared in the interface specification.

 for example:
 foo_out.f90
  ! -*- fix -*-
  module foo
  contains
  function outer(a,b)
implicit none
real, dimension(:), intent(in)   :: a, b
real, dimension(size(a),size(b)) :: outer

outer = spread(a,dim=2,ncopies=size(b) ) *
   spread(b,dim=1,ncopies=size(a) )

  end function outer
  end module

 when compiled by f2py creates the file:
 m-f2pywrappers2.f90
 ! -*- f90 -*-
 ! This file is autogenerated with f2py (version:2)
 ! It contains Fortran 90 wrappers to fortran functions.

  subroutine f2pywrap_foo_outer (outerf2pywrap, a, b, f2py_a_d0, f2p
 y_b_d0)
  use foo, only : outer
  integer f2py_a_d0
  integer f2py_b_d0
  real a(f2py_a_d0)
  real b(f2py_b_d0)
  real outerf2pywrap(size(a),size(b))
  outerf2pywrap = outer(a, b)
  end subroutine f2pywrap_foo_outer

  subroutine f2pyinitfoo(f2pysetupfunc)
  interface
  subroutine f2pywrap_foo_outer (outerf2pywrap, outer, a, b, f2py_a_
 d0, f2py_b_d0)
  integer f2py_a_d0
  integer f2py_b_d0
  real outer(size(a),size(b))
  real a(f2py_a_d0)
  real b(f2py_b_d0)
  real outerf2pywrap(size(a),size(b))
  end subroutine f2pywrap_foo_outer
  end interface
  external f2pysetupfunc
  call f2pysetupfunc(f2pywrap_foo_outer)
  end subroutine f2pyinitfoo

 in the subroutine interface specification the size(a) and size(b) are used
 to
 dimension outer above (before) the declaration of a and b themselves.  This
 halts my compiler.  The wrapper seems to compile OK if a and b are declared
 above outer in the interface.
 thanks again for your help,
 DJ


Your example works fine here:

$ f2py  -m foo foo_out.f90 -c
$ python -c 'import foo; print foo.foo.outer([1,2],[3,4])'
[[ 3.  4.]
 [ 6.  8.]]

with outer defined before a and b. I would presume that compiler would
give a warning, at least, when this would be a problem. Anyway, try to apply
the following patch:


diff --git a/numpy/f2py/func2subr.py b/numpy/f2py/func2subr.py
index 0f76920..f746108 100644
--- a/numpy/f2py/func2subr.py
+++ b/numpy/f2py/func2subr.py
@@ -90,7 +90,6 @@ def createfuncwrapper(rout,signature=0):
             v['dimension'][i] = dn
     rout['args'].extend(extra_args)
     need_interface = bool(extra_args)
-
 
     ret = ['']
     def add(line,ret=ret):
@@ -143,8 +142,13 @@ def createfuncwrapper(rout,signature=0):
             dumped_args.append(a)
     for a in args:
         if a in dumped_args: continue
+        if isintent_in(vars[a]):
+            add(var2fixfortran(vars,a,f90mode=f90mode))
+            dumped_args.append(a)
+    for a in args:
+        if a in dumped_args: continue
         add(var2fixfortran(vars,a,f90mode=f90mode))
-
+
     add(l)
 
     if need_interface:


to see if changing the order will fix the hang.

Pearu
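As an aside, the doubled-values output `[[ 3.  4.] [ 6.  8.]]` quoted earlier in this thread matches the plain NumPy outer product; a quick cross-check that does not use f2py at all:

```python
import numpy as np

a = np.array([1., 2.])
b = np.array([3., 4.])

# NumPy equivalent of the Fortran
#   spread(a,dim=2,ncopies=size(b)) * spread(b,dim=1,ncopies=size(a))
outer = a[:, None] * b[None, :]

assert np.array_equal(outer, np.outer(a, b))
assert np.array_equal(outer, [[3., 4.], [6., 8.]])
```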


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-06 Thread Pearu Peterson
On Sat, May 7, 2011 at 12:00 AM, DJ Luscher d...@lanl.gov wrote:

 Pearu Peterson pearu.peterson at gmail.com writes:
 
  On Fri, May 6, 2011 at 10:18 PM, DJ Luscher djl at lanl.gov wrote:
 
   I have encountered another minor hangup.  For assumed-shape array-valued
   functions defined within a fortran module there seems to be some trouble
   in the autogenerated subroutine wrapper interface.  I think it has to do
   with the order in which variables are declared in the interface
   specification.
  
   in the subroutine interface specification the size(a) and size(b) are
   used to dimension outer above (before) the declaration of a and b
   themselves.  This halts my compiler.  The wrapper seems to compile OK if
   a and b are declared above outer in the interface.
   thanks again for your help,
  
   DJ
  
   Your example works fine here:
   $ f2py -m foo foo_out.f90 -c
   $ python -c 'import foo; print foo.foo.outer([1,2],[3,4])'
   [[ 3.  4.]
    [ 6.  8.]]
   with outer defined before a and b. I would presume that compiler would
   give a warning, at least, when this would be a problem. Anyway, try to
   apply the following patch to see if changing the order will fix the hang.
  
   Pearu
 
 
 indeed - it works fine as is when I compile with gfortran, but not ifort.
 I suppose there may be some compiler option for ifort to overcome that, but
 I couldn't tell from a brief scan of the doc.

 the patch works when I add in two separate loops over args: (~line 138 in
 func2subr.py):

     for a in args:
         if a in dumped_args: continue
         if isscalar(vars[a]):
             add(var2fixfortran(vars,a,f90mode=f90mode))
             dumped_args.append(a)
     for a in args:
         if a in dumped_args: continue
         if isintent_in(vars[a]):
             add(var2fixfortran(vars,a,f90mode=f90mode))
             dumped_args.append(a)

 not sure if that was your intention,


yes, that is what the patch was generated from.


 but when I tried to use just isintent_in
 or to include both conditions in same loop,


that would not work as you noticed..


 the input arrays (a and b) were
 declared ahead of the derived shape-array (outer), but also ahead of the
 integers used to define a and b (e.g. f2py_a_d0).


I have committed the patch:

  https://github.com/numpy/numpy/commit/6df2ac21

Pearu


Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2

2011-05-05 Thread Pearu Peterson
On Thu, May 5, 2011 at 11:51 PM, DJ Luscher d...@lanl.gov wrote:


 Ralf Gommers ralf.gommers at googlemail.com writes:

 
  Hi,
 
  I am pleased to announce the availability of the second release
  candidate of NumPy 1.6.0.
 
  Compared to the first release candidate, one segfault on (32-bit
  Windows + MSVC) and several memory leaks were fixed. If no new
  problems are reported, the final release will be in one week.
 
  Sources and binaries can be found at
  http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/
  For (preliminary) release notes see below.
 
  Enjoy,
  Ralf
 
  =
  NumPy 1.6.0 Release Notes
  =
 
  Fortran assumed shape array and size function support in ``numpy.f2py``
  ---
 
  F2py now supports wrapping Fortran 90 routines that use assumed shape
  arrays.  Before such routines could be called from Python but the
  corresponding Fortran routines received assumed shape arrays as zero
  length arrays which caused unpredicted results. Thanks to Lorenz
  Hüdepohl for pointing out the correct way to interface routines with
  assumed shape arrays.
 
  In addition, f2py interprets Fortran expression ``size(array, dim)``
  as ``shape(array, dim-1)`` which makes it possible to automatically
  wrap Fortran routines that use two argument ``size`` function in
  dimension specifications. Before users were forced to apply this
  mapping manually.
 


 Regarding the f2py support for assumed shape arrays:

 I'm just struggling along trying to learn how to use f2py to interface with
 fortran source, so please be patient if I am missing something obvious.
 That said, in test cases I've run, the new f2py assumed-shape-array support
 in Numpy 1.6.0rc2 seems to conflict with the support for f90-style modules.
 For example:

 foo_mod.f90

   ! -*- fix -*-

   module easy

   real, parameter :: anx(4) = (/1.,2.,3.,4./)

   contains

   subroutine sum(x, res)
 implicit none
 real, intent(in) :: x(:)
 real, intent(out) :: res

 integer :: i

  !print *, "sum: size(x) = ", size(x)

 res = 0.0

 do i = 1, size(x)
   res = res + x(i)
 enddo

   end subroutine sum

   end module easy


 when compiled with:
 f2py -c --fcompiler=intelem foo_mod.f90  -m e

 then:

  $ python
  >>> import e
  >>> print e.easy.sum(e.easy.anx)

 returns: 0.0

 Also (and I believe related) f2py can no longer compile source with assumed
 shape array valued functions within a module.  Even though the python
 wrapped code did not function properly when called from python, it did work
 when called from other fortran code.  It seems that the interface has been
 broken.  The previous version of Numpy I was using was 1.3.0, all on Ubuntu
 10.04, Python 2.6, and using the Intel fortran compiler.

 thanks for your consideration and feedback.


Thanks for the bug report!

These issues are now fixed in:

  https://github.com/numpy/numpy/commit/f393b604

Ralf, feel free to apply this changeset to 1.6.x branch if appropriate.

Regards,
Pearu


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 I have
 subroutine foo (a)
  integer a
   print*, "Hello from Fortran!"
   print*, "a=",a
  a=2
  end

 and from python I want to do
  a=1
  foo(a)

 and I want a's value to now be 2.
 How do I do this?


With

 subroutine foo (a)
 integer a
!f2py intent(in, out) a
  print*, "Hello from Fortran!"
  print*, "a=",a
 a=2
 end

you will have desired effect:

 >>> a=1
 >>> a = foo(a)
 >>> print a
 2

HTH,
Pearu


Re: [Numpy-discussion] f2py pass by reference

2011-04-12 Thread Pearu Peterson
Note that hello.foo(a) returns the value of the Fortran variable `a`. This
explains the printed 2.
So, use

 a = hello.foo(a)

and not

 hello.foo(a)

As Sameer noted in previous mail, passing Python scalar values to Fortran by
reference is not
possible because Python scalars are immutable. Hence the need to use `a =
foo(a)`.
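The point about immutability can be seen without any Fortran at all; a 1-element NumPy array, by contrast, has writable storage that a wrapped routine declared intent(inout) could update in place (a sketch in plain NumPy only):

```python
import numpy as np

# Python scalars are immutable: "modifying" one just rebinds the name.
a = 1
b = a
b = 2
assert a == 1            # the original object is untouched

# A 1-element NumPy array owns a real buffer that can be written through,
# which is why f2py's intent(inout) requires an array argument.
arr = np.array([1], dtype=np.int32)
view = arr               # no copy: both names refer to the same buffer
view[0] = 2
assert arr[0] == 2       # the change is visible through the original name
```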

HTH,
Pearu

On Tue, Apr 12, 2011 at 9:52 PM, Mathew Yeates mat.yea...@gmail.com wrote:

 bizarre
 I get
 =
  >>> hello.foo(a)
   Hello from Fortran!
   a= 1
  2
  >>> a
  1
  >>> hello.foo(a)
   Hello from Fortran!
   a= 1
  2
  >>> print a
  1
 
 =

 i.e. The value of 2 gets printed! This is numpy 1.3.0

 -Mathew


 On Tue, Apr 12, 2011 at 11:45 AM, Pearu Peterson
 pearu.peter...@gmail.com wrote:
 
 
  On Tue, Apr 12, 2011 at 9:06 PM, Mathew Yeates mat.yea...@gmail.com
 wrote:
 
  I have
  subroutine foo (a)
   integer a
    print*, "Hello from Fortran!"
    print*, "a=",a
   a=2
   end
 
  and from python I want to do
   a=1
   foo(a)
 
  and I want a's value to now be 2.
  How do I do this?
 
  With
 
   subroutine foo (a)
   integer a
  !f2py intent(in, out) a
    print*, "Hello from Fortran!"
    print*, "a=",a
   a=2
   end
 
  you will have desired effect:
 
   >>> a=1
   >>> a = foo(a)
   >>> print a
   2
 
  HTH,
  Pearu
 
 
 



Re: [Numpy-discussion] division operator

2011-04-04 Thread Pearu Peterson


On 04/04/2011 01:49 PM, Alex Ter-Sarkissov wrote:
 I have 2 variables, say var1=10,var2=100. To divide I do either
 divide(float(var1),float(var2)) or simply float(var1)/float(var2). I'm
 just wondering if there's a smarter way of doing this?

  >>> from __future__ import division
  >>> var1 = 10
  >>> var2 = 100
  >>> print var1/var2
  0.1

HTH,
Pearu
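The same behaviour as a script, with the floor-division operator shown for contrast (on Python 3 the future import is a no-op, since / is always true division there):

```python
from __future__ import division  # no-op on Python 3

var1 = 10
var2 = 100

assert var1 / var2 == 0.1                 # true division
assert var1 // var2 == 0                  # floor division keeps integer behaviour
assert float(var1) / float(var2) == 0.1   # the explicit-cast spelling from the question
```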


Re: [Numpy-discussion] [SciPy-Dev] ANN: Numpy 1.6.0 beta 1

2011-03-31 Thread Pearu Peterson
On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau courn...@gmail.comwrote:

 On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen ro...@uw.edu wrote:
  In article
  AANLkTi=eeg8kl7639imrtl-ihg1ncqyolddsid5tf...@mail.gmail.com,
   Ralf Gommers ralf.gomm...@googlemail.com wrote:
 
  Hi,
 
  I am pleased to announce the availability of the first beta of NumPy
  1.6.0. Due to the extensive changes in the Numpy core for this
  release, the beta testing phase will last at least one month. Please
  test this beta and report any problems on the Numpy mailing list.
 
  Sources and binaries can be found at:
  http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
  For (preliminary) release notes see below.

 I see a segfault on Ubuntu 64 bits for the test
 TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
 Am I the only one seeing it ?


The test works ok here on Ubuntu 64 with numpy master. Could you try the
maintenance/1.6.x branch, where the related bugs are fixed?

Pearu


Re: [Numpy-discussion] [SciPy-Dev] ANN: Numpy 1.6.0 beta 1

2011-03-31 Thread Pearu Peterson
On Thu, Mar 31, 2011 at 1:00 PM, Scott Sinclair scott.sinclair...@gmail.com
 wrote:

 On 31 March 2011 11:37, Pearu Peterson pearu.peter...@gmail.com wrote:
 
 
  On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau courn...@gmail.com
  wrote:
 
  On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen ro...@uw.edu wrote:
   In article
   AANLkTi=eeg8kl7639imrtl-ihg1ncqyolddsid5tf...@mail.gmail.com,
Ralf Gommers ralf.gomm...@googlemail.com wrote:
  
   Hi,
  
   I am pleased to announce the availability of the first beta of NumPy
   1.6.0. Due to the extensive changes in the Numpy core for this
   release, the beta testing phase will last at least one month. Please
   test this beta and report any problems on the Numpy mailing list.
  
   Sources and binaries can be found at:
   http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/
   For (preliminary) release notes see below.
 
  I see a segfault on Ubuntu 64 bits for the test
  TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py.
  Am I the only one seeing it ?
 
 
  The test work here ok on Ubuntu 64 with numpy master. Could you try the
  maintenance/1.6.x branch where the related bugs are fixed.

 For what it's worth, the maintenance/1.6.x branch works for me on 64-bit
 Ubuntu:

 (numpy-1.6.x)scott@godzilla:~$ python -c "import numpy; numpy.test()"


You might want to run

 python -c "import numpy; numpy.test('full')"

as the corresponding test is decorated as slow.

Pearu


Re: [Numpy-discussion] Array views

2011-03-29 Thread Pearu Peterson
On Tue, Mar 29, 2011 at 8:13 AM, Pearu Peterson pearu.peter...@gmail.comwrote:



 On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden stu...@molden.no wrote:

 Den 28.03.2011 19:12, skrev Pearu Peterson:
 
  FYI, f2py in numpy 1.6.x supports also assumed shape arrays.

 How did you do that? Chasm-interop, C bindings from F03, or marshalling
 through explicit-shape?


 The latter.
  Basically, if you have

 subroutine foo(a)
 real a(:)
 end

 then f2py automatically creates a wrapper subroutine

 subroutine wrapfoo(a, n)
 real a(n)
 integer n
 !f2py intent(in) :: a
 !f2py intent(hide) :: n = shape(a,0)
 interface
 subroutine foo(a)
 real a(:)
 end
 end interface
 call foo(a)
 end

 that can be wrapped with f2py in ordinary way.


 Can f2py pass strided memory from NumPy to Fortran?


 No. I haven't thought about it.


Now, after a little bit of thinking and testing, I think supporting strided
arrays in f2py is easily doable. For the example above, f2py just needs to
generate the following wrapper subroutine:

subroutine wrapfoo(a, stride, n)
real a(n)
integer n, stride
!f2py intent(in) :: a
!f2py intent(hide) :: n = shape(a,0)
!f2py intent(hide) :: stride = getstrideof(a)
interface
subroutine foo(a)
real a(:)
end
end interface
call foo(a(1:stride:n))
end

Now the question is: how important would this feature be? How high should I
put it on my todo list?
If there is interest, the corresponding numpy ticket should be assigned to
me.

Best regards,
Pearu
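For readers unsure what "strided memory" means here, in plain NumPy terms (a sketch; classic f2py copies such input into a contiguous temporary before the call, whereas the wrapper above would forward the stride to Fortran):

```python
import numpy as np

a = np.arange(10.0)          # contiguous float64 buffer
view = a[::3]                # every 3rd element: a strided view, no copy

assert view.base is a                      # shares memory with `a`
assert view.strides == (3 * a.itemsize,)   # step of 3 elements between items
assert not view.flags['C_CONTIGUOUS']

view[0] = 99.0
assert a[0] == 99.0          # writes go through to the original array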


Re: [Numpy-discussion] Array views

2011-03-29 Thread Pearu Peterson
On Tue, Mar 29, 2011 at 11:03 AM, Dag Sverre Seljebotn 
d.s.seljeb...@astro.uio.no wrote:


 I think it should be a(1:n*stride:stride) or something.


Yes, it was my typo and I assumed that n is the length of the original
array.

Pearu


Re: [Numpy-discussion] Array views

2011-03-28 Thread Pearu Peterson
On Mon, Mar 28, 2011 at 6:01 PM, Sturla Molden stu...@molden.no wrote


 I'll try to clarify this:

 ** Most Fortran 77 compilers (and beyond) assume explicit-shape and
 assumed-size arrays are contiguous blocks of memory. That is, arrays
 declared like a(m,n) or a(m,*). They are usually passed as a pointer to
 the first element. These are the only type of Fortran arrays f2py supports.


FYI, f2py in numpy 1.6.x supports also assumed shape arrays.

Pearu


Re: [Numpy-discussion] Array views

2011-03-28 Thread Pearu Peterson
On Mon, Mar 28, 2011 at 10:44 PM, Sturla Molden stu...@molden.no wrote:

 Den 28.03.2011 19:12, skrev Pearu Peterson:
 
  FYI, f2py in numpy 1.6.x supports also assumed shape arrays.

 How did you do that? Chasm-interop, C bindings from F03, or marshalling
 through explicit-shape?


The latter.
 Basically, if you have

subroutine foo(a)
real a(:)
end

then f2py automatically creates a wrapper subroutine

subroutine wrapfoo(a, n)
real a(n)
integer n
!f2py intent(in) :: a
!f2py intent(hide) :: n = shape(a,0)
interface
subroutine foo(a)
real a(:)
end
end interface
call foo(a)
end

that can be wrapped with f2py in ordinary way.


 Can f2py pass strided memory from NumPy to Fortran?


No. I haven't thought about it.

Pearu


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Thu, Mar 24, 2011 at 2:04 AM, Derek Homeier 
de...@astro.physik.uni-goettingen.de wrote:

 On 24 Mar 2011, at 00:34, Derek Homeier wrote:

  tests with the fink-installed pythons on MacOS X mostly succeeded,
  with one failure in python2.4 and a couple of issues seemingly
  related to PPC floating point accuracy, as below:
 
 Probably last update for tonight: with the 'full' test suite, there's one
 additional failure and error, respectively under 10.5/ppc and
 10.6/x86_64 (in all Python versions):

 PowerPC:

 FAIL: test_kind.TestKind.test_all
 --
 Traceback (most recent call last):
   File /sw/lib/python2.5/site-packages/nose/case.py, line 187, in runTest
self.test(*self.arg)
  File /sw/lib/python2.5/site-packages/numpy/f2py/tests/test_kind.py, line
 30, in test_all
'selectedrealkind(%s): expected %r but got %r' %  (i,
 selected_real_kind(i), selectedrealkind(i)))
  File /sw/lib/python2.5/site-packages/numpy/testing/utils.py, line 34, in
 assert_
raise AssertionError(msg)
 AssertionError: selectedrealkind(16): expected 10 but got 16


Regarding this test failure, could you hack the
numpy/f2py/tests/test_kind.py script by adding the following code

for i in range(20):
  print '%s -> %s, %s' % (i, selected_real_kind(i), selectedrealkind(i))

and send me the output? Also, what Fortran compiler version has been used to
build the test modules?
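For readers unfamiliar with the intrinsic: `selected_real_kind(p)` returns the smallest real kind providing at least p decimal digits of precision. A rough NumPy analogue (an illustrative sketch only; actual Fortran kind numbers are compiler-specific, which is exactly the 10-vs-16 discrepancy in the failure above):

```python
import numpy as np

def approx_selected_real_kind(p):
    """Map a requested decimal precision to a byte-size 'kind' (illustrative)."""
    for dtype, kind in ((np.float32, 4), (np.float64, 8)):
        if np.finfo(dtype).precision >= p:
            return kind
    return -1  # Fortran returns a negative value when no kind suffices

assert approx_selected_real_kind(6) == 4    # single precision
assert approx_selected_real_kind(15) == 8   # double precision
assert approx_selected_real_kind(16) == -1  # would need extended precision
```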



 Intel-64bit:
 ERROR: test_assumed_shape.TestAssumedShapeSumExample.test_all
 --
 Traceback (most recent call last):
   File "/sw/lib/python3.2/site-packages/nose/case.py", line 372, in setUp
     try_run(self.inst, ('setup', 'setUp'))
   File "/sw/lib/python3.2/site-packages/nose/util.py", line 478, in try_run
     return func()
   File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 352, in setUp
     module_name=self.module_name)
   File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper
     memo[key] = func(*a, **kw)
   File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 134, in build_module
     % (cmd[4:], asstr(out)))
 RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403',
 '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_free.f90',
 '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_use.f90',
 '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/precision.f90']

   Reading .f2py_f2cmap ...
   Mapping real(kind=rk) to double

Hmm, this should not happen, as real(kind=rk) should be mapped to float.
It seems that you have a .f2py_f2cmap file lying around. Could you remove it
and try again?

Pearu
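For context, a .f2py_f2cmap file is just a text file holding a Python dict literal that f2py evaluates when building the wrappers; a minimal sketch of the format (the kind name `rk` is taken from the failing test above):

```python
# The contents of a .f2py_f2cmap file are evaluated as Python code;
# the nested dict maps Fortran type kinds to C type names:
f2cmap_text = "dict(real=dict(rk='double'))"

mapping = eval(f2cmap_text)  # mirrors how f2py reads the file
assert mapping['real']['rk'] == 'double'
```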


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Thu, Mar 24, 2011 at 10:11 AM, Pearu Peterson
pearu.peter...@gmail.comwrote:


 Regarding this test failure, could you hack the
 numpy/f2py/tests/test_kind.py script by adding the following code

 for i in range(20):
   print '%s -> %s, %s' % (i, selected_real_kind(i), selectedrealkind(i))

 and send me the output? Also, what Fortran compiler version has been used
 to build the test modules?


Even better, you can report the result to

  http://projects.scipy.org/numpy/ticket/1767

Pearu


Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1

2011-03-24 Thread Pearu Peterson
On Fri, Mar 25, 2011 at 1:44 AM, Derek Homeier 
de...@astro.physik.uni-goettingen.de wrote:

 On 24.03.2011, at 9:11AM, Pearu Peterson wrote:
 
  Intel-64bit:
  ERROR: test_assumed_shape.TestAssumedShapeSumExample.test_all
  --
  Traceback (most recent call last):
    File "/sw/lib/python3.2/site-packages/nose/case.py", line 372, in setUp
      try_run(self.inst, ('setup', 'setUp'))
    File "/sw/lib/python3.2/site-packages/nose/util.py", line 478, in try_run
      return func()
    File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 352, in setUp
      module_name=self.module_name)
    File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 73, in wrapper
      memo[key] = func(*a, **kw)
    File "/sw/lib/python3.2/site-packages/numpy/f2py/tests/util.py", line 134, in build_module
      % (cmd[4:], asstr(out)))
  RuntimeError: Running f2py failed: ['-m', '_test_ext_module_5403',
  '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_free.f90',
  '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/foo_use.f90',
  '/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpfiy1jn/precision.f90']
 Reading .f2py_f2cmap ...
 Mapping real(kind=rk) to double
 
  Hmm, this should not happen as real(kind=rk) should be mapped to float.
  It seems that you have .f2py_f2cmap file lying around. Could you remove
 it and try again.

 Yes, it's in the tarball and was installed together with the f2py tests!


Indeed, the f2py test suite contains the .f2py_f2cmap file. Its effect should
be local to the corresponding test, but it seems it is not.
I'll look into it...


 building extension _test_ext_module_5403 sources
 f2py options: []
 f2py:
 /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403module.c
 creating /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum
 creating
 /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7
 getctype: real(kind=rk) is mapped to C float (to override define
 dict(real = dict(rk=C typespec)) in
 /private/var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpPo744G/.f2py_f2cmap
 file).
 ...
 gfortran:f77:
 /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f

 /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f:26.21:

  real :: res
 1
 Error: Symbol 'res' at (1) already has basic type of REAL

 /var/folders/DC/DC7g9UNr2RWkb++8ZSn1J+++0Dk/-Tmp-/tmpS4KGum/src.macosx-10.6-x86_64-2.7/_test_ext_module_5403-f2pywrappers.f:26.21:

  real :: res
 1
 Error: Symbol 'res' at (1) already has basic type of REAL
 ...

 f2py is the one installed with this numpy version, gfortran is
 COLLECT_GCC=/sw/bin/gfortran

 COLLECT_LTO_WRAPPER=/sw/lib/gcc4.5/libexec/gcc/x86_64-apple-darwin10.6.0/4.5.2/lto-wrapper
 Target: x86_64-apple-darwin10.6.0
 Configured with: ../gcc-4.5.2/configure --prefix=/sw
 --prefix=/sw/lib/gcc4.5 --mandir=/sw/share/man --infodir=/sw/lib/gcc4.5/info
 --enable-languages=c,c++,fortran,objc,obj-c++,java --with-gmp=/sw
 --with-libiconv-prefix=/sw --with-ppl=/sw --with-cloog=/sw --with-mpc=/sw
 --with-system-zlib --x-includes=/usr/X11R6/include
 --x-libraries=/usr/X11R6/lib --program-suffix=-fsf-4.5 --enable-lto
 Thread model: posix
 gcc version 4.5.2 (GCC)


Can you send me the _test_ext_module_5403-f2pywrappers.f file? It should
still exist there when the compilation fails.

Pearu


Re: [Numpy-discussion] 1.6: branching and release notes

2011-03-13 Thread Pearu Peterson
On Sun, Mar 13, 2011 at 11:22 AM, Ralf Gommers
ralf.gomm...@googlemail.comwrote:

 Hi all,

 On Tuesday (~2am GMT) I plan to create the 1.6.x branch and tag the
 first beta. So please get your last commits for 1.6 in by Monday
 evening.

 Also, please review and add to the 1.6.0 release notes. I put in
 headers for several items that need a few lines in the notes, I hope
 this can be filled in by the authors of those features (Charles:
 Legendre polynomials, Pearu: assumed shape arrays, Mark: a bunch of
 stuff).


Done for assumed shape arrays and size function support.

Best regards,
Pearu


Re: [Numpy-discussion] how to compile Fortran using setup.py

2011-03-12 Thread Pearu Peterson
On Fri, Mar 11, 2011 at 3:58 AM, Ondrej Certik ond...@certik.cz wrote:

 Hi,

 I spent about an hour googling and didn't figure this out. Here is my
 setup.py:

 setup(
     name = "libqsnake",
     cmdclass = {'build_ext': build_ext},
     version = "0.1",
     packages = [
         'qsnake',
         'qsnake.calculators',
         'qsnake.calculators.tests',
         'qsnake.data',
         'qsnake.mesh2d',
         'qsnake.tests',
     ],
     package_data = {
         'qsnake.tests': ['phaml_data/domain.*'],
     },
     include_dirs=[numpy.get_include()],
     ext_modules = [Extension("qsnake.cmesh", [
         "qsnake/cmesh.pyx",
         "qsnake/fmesh.f90",
     ])],
     description = "Qsnake standard library",
     license = "BSD",
 )


You can specify Fortran code, that you don't want to process with f2py, in
the libraries list
and then use the corresponding library in the extension, for example:

setup(...
    libraries = [('foo', dict(sources=['qsnake/fmesh.f90']))],
    ext_modules = [Extension("qsnake.cmesh",
                             sources = ["qsnake/cmesh.pyx"],
                             libraries = ['foo'],
                             )],
    ...
)

See also scipy/integrate/setup.py that resolves the same issue but just
using the configuration function approach.

HTH,
Pearu


[Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Hi,

I have followed Development workflow instructions in

  http://docs.scipy.org/doc/numpy/dev/gitwash/

but I am having a problem with the last step:

$ git push upstream ticket1679:master
fatal: remote error:
  You can't push to git://github.com/numpy/numpy.git
  Use g...@github.com:numpy/numpy.git

What am I doing wrong?

Here's some additional info:
$ git remote -v show
origin  g...@github.com:pearu/numpy.git (fetch)
origin  g...@github.com:pearu/numpy.git (push)
upstreamgit://github.com/numpy/numpy.git (fetch)
upstreamgit://github.com/numpy/numpy.git (push)
$ git branch -a
  master
* ticket1679
  remotes/origin/HEAD -> origin/master
  remotes/origin/maintenance/1.0.3.x
  remotes/origin/maintenance/1.1.x
  remotes/origin/maintenance/1.2.x
  remotes/origin/maintenance/1.3.x
  remotes/origin/maintenance/1.4.x
  remotes/origin/maintenance/1.5.x
  remotes/origin/master
  remotes/origin/ticket1679
  remotes/upstream/maintenance/1.0.3.x
  remotes/upstream/maintenance/1.1.x
  remotes/upstream/maintenance/1.2.x
  remotes/upstream/maintenance/1.3.x
  remotes/upstream/maintenance/1.4.x
  remotes/upstream/maintenance/1.5.x
  remotes/upstream/master


Thanks,
Pearu


Re: [Numpy-discussion] Pushing changes to numpy git repo problem

2010-12-02 Thread Pearu Peterson
Thanks!
Pearu

On Thu, Dec 2, 2010 at 11:08 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Thu, Dec 2, 2010 at 1:52 PM, Pearu Peterson pearu.peter...@gmail.com
 wrote:

 Hi,

 I have followed Development workflow instructions in

  http://docs.scipy.org/doc/numpy/dev/gitwash/

 but I am having a problem with the last step:

 $ git push upstream ticket1679:master
 fatal: remote error:
  You can't push to git://github.com/numpy/numpy.git
  Use g...@github.com:numpy/numpy.git


 Do what the message says; the first address is read-only. You can change
 the settings in .git/config, mine looks like:

 [core]
     repositoryformatversion = 0
     filemode = true
     bare = false
     logallrefupdates = true
 [remote "origin"]
     fetch = +refs/heads/*:refs/remotes/origin/*
     url = g...@github.com:charris/numpy
 [branch "master"]
     remote = origin
     merge = refs/heads/master
 [remote "upstream"]
     url = g...@github.com:numpy/numpy
     fetch = +refs/heads/*:refs/remotes/upstream/*
 [alias]
     mb = merge --no-ff

 Where upstream is the numpy repository.

 Chuck
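An equivalent way to get that layout without editing .git/config by hand is `git remote set-url` (a sketch in a scratch repository; assumes the remote is named upstream, as in the thread):

```shell
# Demonstration in a throwaway repository:
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git remote add upstream git://github.com/numpy/numpy.git

# Keep fetching over the read-only git:// URL, but push over SSH:
git remote set-url --push upstream git@github.com:numpy/numpy.git
git remote -v   # the push line now shows the SSH address
```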






Re: [Numpy-discussion] compile fortran from python

2010-10-11 Thread Pearu Peterson
Hi,

You can create a setup.py file containing

def configuration(parent_package='',top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration(None,parent_package,top_path)
    config.add_library('flib', sources = ['test.f95'])
    config.add_extension('ctest', sources = ['ctest.c'],
                         libraries=['flib'])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)

#eof

Running

   python setup.py config_fc --fcompiler=gnu95 build

will build a Fortran library flib.a and link it to
an extension module ctest.so. Use build_ext --inplace
if ctest.so should end up in current working directory.

Is that what you wanted to achieve?

HTH,
Pearu

On 10/09/2010 02:18 PM, Ioan Ferencik wrote:
 I would like to compile some Fortran  code using python,  build a
 shared library, and link to it using python. But I get a message
 saying the compiler does not recognise the extension of my file.

 this is my command:

 gcc -fPIC -c -shared -fno-underscoring test.f95 -o ./lib/libctest.so.1.0

 what is the easiest method to achieve this?
I suspect  I could create a custom extension and customise the
 unixccompiler object or could I just use the compiler object defined
 by f2py(fcompiler).

 Cheers


 Ioan Ferencik
 PhD student
 Aalto University
 School of Science and Technology
 Faculty Of Civil and Env. Engineering
 Lahti Center
 Tel: +358505122707


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Please help on compilation fortran module using f2py

2010-10-06 Thread Pearu Peterson
Hi,

On 10/06/2010 04:57 AM, Jing wrote:
   Hi, everyone:

 I am new to the python numpy and f2py. I really need help on compiling
 FORTRAN module using f2py. I have been searched internet without any
 success. Here is my setup: I have a Ubuntu 10.04 LTS with python 2.6,
 numpy 1.3.0 and f2py 2 (installed from ubuntu) and gfortran compiler
 4.4. I have a simple Fortran subroutine (in CSM_CH01_P1_1a_F.f95 file)
 as shown below:
 
 !

 subroutine sim_model_1(ts, n, a)

 !

 !f2py integer, intent(in) :: ts, n

 !f2py real,dimension(n), intent(inout) :: a

 implicit none

The problem is in the Fortran code. The ts, n, and a variables need to be 
declared for the Fortran compiler too. The f2py directives are invisible
to the Fortran compiler; they are just comments that are used by f2py.
So, try adding these lines to the Fortran code:

   integer, intent(in) :: ts, n
   real, dimension(n) :: a

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py problem with complex inout in subroutine

2010-07-24 Thread Pearu Peterson
Hi Mark,

On Mon, Jul 19, 2010 at 11:49 AM, Mark Bakker mark...@gmail.com wrote:
 Thanks for fixing this, Pearu.
 Complex arrays with intent(inout) don't seem to work either.
 They compile, but a problem occurs when calling the routine.

What problem?

 Did you fix that as well?

I guess so, see below.

 Here's an example that doesn't work (sorry, I cannot update to svn 8478 on
 my machine right now):

     subroutine test3(nlab,omega)
     implicit none
     integer, intent(in) :: nlab
     complex(kind=8), dimension(nlab), intent(inout) :: omega
     integer :: n
     do n = 1,nlab
     omega(n) = cmplx(1,1,kind=8)
     end do
     end subroutine test3

The example works fine here:

$ f2py -c -m foo test3.f90
>>> import foo
>>> from numpy import *
>>> omega = array([1,2,3,4], dtype='D')
>>> foo.test3(omega)
>>> print omega
[ 1.+1.j  1.+1.j  1.+1.j  1.+1.j]

If you cannot update numpy to required revision, you can also modify
the broken file directly. It only involves replacing four lines with
one line in numpy/f2py/cfuncs.py file.
See

  http://projects.scipy.org/numpy/changeset/8478

for details.

HTH,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py problem with complex inout in subroutine

2010-07-11 Thread Pearu Peterson


On 07/09/2010 02:03 PM, Mark Bakker wrote:
 Hello list. The following subroutine fails to compile with f2py.
 I use a complex variable with intent(inout). It works fine with two real
 variables, so I have a workaround, but it would be nicer with a complex
 variable.
 Any thoughts on what I am doing wrong?

compilation failed because of typos in the generated code. This is fixed 
in svn revision 8478.

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
Hi,
I just noticed some weird behavior in operations with uint64 and int,
here's an example:

>>> numpy.uint64(3)+1
4.0
>>> type(numpy.uint64(3)+1)
<type 'numpy.float64'>

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Possible bug: uint64 + int gives float64

2010-06-13 Thread Pearu Peterson
On Sun, Jun 13, 2010 at 4:45 PM, Nadav Horesh nad...@visionsense.com wrote:
 int can be larger than numpy.int64 therefore it should be coerced to float64 
 (or float96/float128)

Ok, I see. The result's type is defined by the types of the operands, not
by their values. I guess
this has been discussed earlier, but with small operands this behavior
may be unexpected.
For example, by the same rule the result of int64 + int should be
float64, while currently
it is int64.
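For reference, the promotion NumPy applies here can be inspected without building any arrays (note that scalar-operand behavior changed with NEP 50 in NumPy 2.0, so the exact result of `uint64 + 1` can differ between versions):

```python
import numpy as np

# uint64 cannot represent all int64 values and vice versa, so their
# common type falls back to float64 -- this is what makes uint64 + int
# come out as float64.
print(np.promote_types(np.uint64, np.int64))   # float64

# Two signed integer types stay integral: the larger one wins.
print(np.promote_types(np.int64, np.int32))    # int64
```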

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] How to resize numpy.memmap?

2010-06-06 Thread Pearu Peterson
Hi,

I am creating a rather large file (typically 100 MiB to 1 GiB) with numpy.memmap,
but in some cases the initial estimate of the file size is just a few
bytes too small.
So, I was trying to resize the memmap, which fails as demonstrated
in the following
example:

>>> fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
>>> fp.resize(11, refcheck=False)
...
ValueError: cannot resize this array:  it does not own its data

My question is: is there a way to fix this, or maybe there exists some
other technique to resize a memmap? I have tried resizing the memmap's
_mmap attribute directly:

>>> fp._mmap.resize(11)
>>> fp._mmap.size()
11

but the size of the memmap instance remains unchanged:

>>> fp.size
10

Thanks,
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] How to resize numpy.memmap?

2010-06-06 Thread Pearu Peterson
Hi again,
To answer the second part of my question, here follows an example
demonstrating how to resize a memmap:

>>> fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
>>> fp._mmap.resize(11)
>>> cp = numpy.ndarray.__new__(numpy.memmap, (fp._mmap.size(),),
...                            dtype=fp.dtype, buffer=fp._mmap,
...                            offset=0, order='C')
>>> cp[-1] = 99
>>> cp[1] = 33
>>> cp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0, 99], dtype=uint8)
>>> fp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0], dtype=uint8)
>>> del fp, cp
>>> fp = numpy.memmap('test.dat', mode='r')
>>> fp
memmap([ 0, 33,  0,  0,  0,  0,  0,  0,  0,  0, 99], dtype=uint8)

Would there be any interest in turning the above code into a numpy.memmap
method, say resized(newshape)? For example, to resolve the original
problem, one could have

fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
fp = fp.resized(11)
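For what it's worth, the same effect can be had without touching the private _mmap attribute, by growing the file on disk and reopening the map. This is only a sketch; `resized_memmap` is a hypothetical helper name, not an existing numpy API:

```python
import os
import tempfile

import numpy as np

def resized_memmap(path, dtype, newshape):
    """Reopen a memmap over `path` with a new (larger) shape,
    growing the file on disk first if needed."""
    dtype = np.dtype(dtype)
    nbytes = int(np.prod(newshape)) * dtype.itemsize
    if os.path.getsize(path) < nbytes:
        with open(path, 'r+b') as f:
            f.truncate(nbytes)  # pad the file with zero bytes
    return np.memmap(path, dtype=dtype, shape=newshape, mode='r+')

path = os.path.join(tempfile.mkdtemp(), 'test.dat')
fp = np.memmap(path, dtype=np.uint8, shape=(10,), mode='w+')
fp[1] = 33
fp.flush()
del fp  # release the old, smaller mapping before reopening
fp = resized_memmap(path, np.uint8, (11,))
fp[-1] = 99
```

Reopening avoids resizing a mmap that still has exported buffers, which newer Python versions refuse to do.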

Regards,
Pearu

On Sun, Jun 6, 2010 at 10:19 PM, Pearu Peterson
pearu.peter...@gmail.com wrote:
 Hi,

 I am creating a rather large file (typically 100MBi-1GBi) with numpy.memmap
 but in some cases the initial estimate to the file size is just few
 bytes too small.
 So, I was trying to resize the memmap with a failure as demonstrated
 with the following
 example:

 fp = numpy.memmap('test.dat', shape=(10,), mode='w+')
 fp.resize(11, refcheck=False)
 ...
 ValueError: cannot resize this array:  it does not own its data

 My question is, is there a way to fix this or may be there exist some other
 technique to resize memmap. I have tried resizing memmap's _mmap attribute
 directly:

 fp._mmap.resize(11)
 fp._mmap.size()
    11

 but the size of memmap instance remains unchanged:
 fp.size
    10

 Thanks,
 Pearu

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py: could not crack entity declaration

2010-03-25 Thread Pearu Peterson
Try renaming GLMnet.f90 to GLMnet.f.
HTH,
Pearu

David Warde-Farley wrote:
 I decided to give wrapping this code a try:
 
   http://morrislab.med.utoronto.ca/~dwf/GLMnet.f90
 
 I'm afraid my Fortran skills are fairly limited, but I do know that  
 gfortran compiles it fine. f2py run on this file produces lots of  
 errors of the form,
 
 Reading fortran codes...
   Reading file 'GLMnet.f90' (format:fix)
 Line #263 in GLMnet.f90:  real  
 x(no,ni),y(no),w(no),vp(ni),ca(nx,nlam)  353
   updatevars: could not crack entity declaration ca(nx,nlam)353.  
 Ignoring.
 Line #264 in GLMnet.f90:  real  
 ulam(nlam),a0(nlam),rsq(nlam),alm(nlam)  354
   updatevars: could not crack entity declaration alm(nlam)354.  
 Ignoring.
 Line #265 in GLMnet.f90:  integer  
 jd(*),ia(nx),nin(nlam)355
   updatevars: could not crack entity declaration nin(nlam)355.  
 Ignoring.
 Line #289 in GLMnet.f90:  real  
 x(no,ni),y(no),w(no),vp(ni),ulam(nlam)   378
   updatevars: could not crack entity declaration ulam(nlam)378.  
 Ignoring.
 Line #290 in GLMnet.f90:  real  
 ca(nx,nlam),a0(nlam),rsq(nlam),alm(nlam) 379
   updatevars: could not crack entity declaration alm(nlam)379.  
 Ignoring.
 Line #291 in GLMnet.f90:  integer  
 jd(*),ia(nx),nin(nlam)380
   updatevars: could not crack entity declaration nin(nlam)380.  
 Ignoring.
 Line #306 in GLMnet.f90:  call  
 chkvars(no,ni,x,ju)  392
   analyzeline: No name/args pattern found for li
 
 Is it the numbers that it is objecting to (I'm assuming these are some  
 sort of punchcard thing)? Do I need to modify the code in some way to  
 make it f2py-friendly?
 
 Thanks,
 
 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py compiler version errors

2010-03-17 Thread Pearu Peterson
Hi,
You are running a rather old numpy version (1.0.1).
Try upgrading numpy; at least recent numpy from svn detects
this compiler fine.
Regards,
Pearu

Peter Brady wrote:
 Hello all,
 
 The version of f2py that's installed on our system doesn't appear to 
 handle version numbers correctly.  I've attached the relevant output of 
 f2py below:
 
 customize IntelFCompiler
 Couldn't match compiler version for 'Intel(R) Fortran Intel(R) 64
 Compiler Professional for applications running on Intel(R) 64,
 Version 11.0Build 20090318 \nCopyright (C) 1985-2009 Intel
 Corporation.  All rights reserved.\nFOR NON-COMMERCIAL USE ONLY\n\n
 Intel Fortran 11.0-1578'
 IntelFCompiler instance properties:
   archiver= ['ar', '-cr']
   compile_switch  = '-c'
   compiler_f77=
 ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
 72', '-w90', '-w95', '-KPIC', '-cm', '-O3',
 '-unroll', '-
 tpp7', '-xW', '-arch SSE2']
   compiler_f90=
 ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
 FR', '-KPIC', '-cm', '-O3', '-unroll', '-tpp7',
 '-xW', '-
 arch SSE2']
   compiler_fix=
 ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
 FI', '-KPIC', '-cm', '-O3', '-unroll', '-tpp7',
 '-xW', '-
 arch SSE2']
   libraries   = []
   library_dirs= []
   linker_so   =
 ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-
 shared', '-tpp7', '-xW', '-arch SSE2']
   object_switch   = '-o '
   ranlib  = ['ranlib']
   version = None
   version_cmd =
 ['/opt/intel/Compiler/11.0/083/bin/intel64/ifort', '-FI
 -V -c /tmp/tmpx6aZa8__dummy.f -o
 /tmp/tmpx6aZa8__dummy.o']
 
 
 The output of f2py is:
 
 Version: 2_3473
 numpy Version: 1.0.1
 Requires:Python 2.3 or higher.
 License: NumPy license (see LICENSE.txt in the NumPy source code)
 Copyright 1999 - 2005 Pearu Peterson all rights reserved.
 http://cens.ioc.ee/projects/f2py2e/
 
 
 We're running 64bit linux with python 2.4.  How do I make this work?
 
 thanks,
 Peter.
  
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Getting Callbacks with arrays to work

2010-01-12 Thread Pearu Peterson
Hi,

The problem is that f2py does not support callbacks that
return arrays. There is an easy workaround: provide
returnable arrays as arguments to the callback functions.
Using your example:

SUBROUTINE CallbackTest(dv,v0,Vout,N)
  IMPLICIT NONE

  !F2PY intent( hide ):: N
  INTEGER:: N, ic
  EXTERNAL:: dv

  DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
  DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout

  DOUBLE PRECISION, DIMENSION( N ):: Vnow
  DOUBLE PRECISION, DIMENSION( N )::  temp

  Vnow = v0
  !f2py intent (out) temp
  call dv(temp, Vnow, N)

  DO ic = 1, N
 Vout( ic ) = temp(ic)
  END DO

END SUBROUTINE CallbackTest

$ f2py -c test.f90 -m t --fcompiler=gnu95

>>> from numpy import *
>>> from t import *
>>> arr = array([2.0, 4.0, 6.0, 8.0])
>>> def dV(v):
...     print 'in Python dV: V is: ', v
...     ret = v.copy()
...     ret[1] = 100.0
...     return ret
...
>>> output = callbacktest(dV, arr)
in Python dV: V is:  [ 2.  4.  6.  8.]
>>> output
array([   2.,  100.,    6.,    8.])
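The calling convention above (the callback fills a caller-supplied array instead of returning a new one) can be mimicked in pure Python to see the data flow. Here `callbacktest` is a stand-in for the Fortran wrapper, not the real extension module:

```python
import numpy as np

def dV(out, v):
    # f2py-friendly style: write the result into the provided array.
    out[:] = v
    out[1] = 100.0

def callbacktest(cb, v0):
    # Stand-in for the Fortran subroutine: it allocates the work
    # array and hands it to the callback as an argument.
    temp = np.empty_like(v0)
    cb(temp, v0)
    return temp

output = callbacktest(dV, np.array([2.0, 4.0, 6.0, 8.0]))
print(output)
```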

What problems do you have with implicit none? It works
fine here. Check the format of your source code:
if it is free form, use the `.f90` extension, not `.f`.

HTH,
Pearu

Jon Moore wrote:
  Hi,
 
 I'm trying to build a differential equation integrator and later a
 stochastic differential equation integrator.
 
 I'm having trouble getting f2py to work where the callback itself
 receives an array from the Fortran routine does some work on it and then
 passes an array back.  
 
 For the stoachastic integrator I'll need 2 callbacks both dealing with
 arrays.
 
 The idea is the code that never changes (ie the integrator) will be in
 Fortran and the code that changes (ie the callbacks defining
 differential equations) will be different for each problem.
 
 To test the idea I've written basic code which should pass an array back
 and forth between Python and Fortran if it works right.
 
 Here is some code which doesn't work properly:-
 
 SUBROUTINE CallbackTest(dv,v0,Vout,N)
 !IMPLICIT NONE
 
 cF2PY intent( hide ):: N
 INTEGER:: N, ic
 
 EXTERNAL:: dv
 
 DOUBLE PRECISION, DIMENSION( N ), INTENT(IN):: v0
 DOUBLE PRECISION, DIMENSION( N ), INTENT(OUT):: Vout
 
 DOUBLE PRECISION, DIMENSION( N ):: Vnow
 DOUBLE PRECISION, DIMENSION( N )::  temp
 
 Vnow = v0
 
 
 temp = dv(Vnow, N)
 
 DO ic = 1, N
 Vout( ic ) = temp(ic)
 END DO
 
 END SUBROUTINE CallbackTest
 
 
 
 When I test it with this python code I find the code just replicates the
 first term of the array!
 
 
 
 
 from numpy import *
 import callback as c
 
 def dV(v):
 print 'in Python dV: V is: ',v
 return v.copy()
 
 arr = array([2.0, 4.0, 6.0, 8.0])
 
 print 'Arr is: ', arr
 
 output = c.CallbackTest(dV, arr)
 
 print 'Out is: ', output
 
 
 
 
 Arr is:  [ 2.  4.  6.  8.]
 
 in Python dV: V is:  [ 2.  4.  6.  8.]
 
 Out is:  [ 2.  2.  2.  2.]
 
 
 
 Any ideas how I should do this, and also how do I get the code to work
 with implicit none not commented out?
 
 Thanks
 
 Jon
 
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson


Pearu Peterson wrote:

 Hmm, regarding `intent(in, out) j`, this should work. I'll check what
 is going on..

The `intent(in, out) j` works when pycalc is defined as subroutine:

   call pycalc(i, j)

instead of

   pyreturn = pycalc(i, j)

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson

Hi James,

To answer the second question, use:

   j = 1+numpy.array([2], numpy.int32)

The answer to the first question is that
the type of 1+numpy.array([2]) is
numpy.int64, but the Fortran function expects
an array of type numpy.int32, and hence
the wrapper makes a copy of the input
array (which is also returned by the wrapper)
before passing it to Fortran.
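The dtype mismatch can be checked from Python alone (the default integer dtype is platform dependent: int64 on most 64-bit Linux/macOS builds, int32 on Windows):

```python
import numpy as np

j_default = 1 + np.array([2])        # platform default integer dtype
j_32 = 1 + np.array([2], np.int32)   # matches the Fortran INTEGER kind
print(j_default.dtype, j_32.dtype)

# Passing j_32 to the wrapper avoids the silent copy, so in-place
# changes made by the Fortran routine stay visible to the caller.
```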

Regards,
Pearu

James McEnerney wrote:
 Pearu,
 Thanks. a follow question.
 Using fortran
 
   subroutine calc(j)
 Cf2py intent(callback) pycalc
   external pycalc
 Cf2py integer dimension(1), intent(in,out):: j
 
   integer j(1)
   print *, 'in fortran before pycalc ', 'j=', j(1)
   call pycalc(j)
   print *, 'in fortran after pycalc ', ' j=', j(1)
   end
 
 in python
 
 import foo, numpy
 
 def pycalc(j):
     print ' in pycalc ', 'j=', j
     j[:] = 20*j
     return
 
 print foo.calc.__doc__
 j = 1+numpy.array([2])
 print foo.calc(j, pycalc)
 print j
 
 the output is
 
 calc - Function signature:
   j = calc(j,pycalc,[pycalc_extra_args])
 Required arguments:
   j : input rank-1 array('i') with bounds (1)
   pycalc : call-back function
 Optional arguments:
   pycalc_extra_args := () input tuple
 Return objects:
   j : rank-1 array('i') with bounds (1)
 Call-back functions:
   def pycalc(j): return j
   Required arguments:
 j : input rank-1 array('i') with bounds (1)
   Return objects:
 j : rank-1 array('i') with bounds (1)
 
  in fortran before pycalc j=   3
  in pycalc  j= [3]
  in fortran after pycalc  j=  60
 [60]
 [3]
 
 Why is the return from foo.calc different from j?
 How do I make them the same?
 return j in pycalc doesn't change things.
 
 Thanks again!
 
 At 12:06 AM 11/25/2009, you wrote:
 
 
 Pearu Peterson wrote:

  Hmm, regarding `intent(in, out) j`, this should work. I'll check what
  is going on..

 The `intent(in, out) j` works when pycalc is defined as subroutine:

call pycalc(i, j)

 instead of

pyreturn = pycalc(i, j)

 Pearu
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://*mail.scipy.org/mailman/listinfo/numpy-discussion
 
 Jim McEnerney
 Lawrence Livermore National Laboratory
 7000 East Ave.
 Livermore, Ca. 94550-9234
 
 USA
 
 925-422-1963
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py callback bug?

2009-11-24 Thread Pearu Peterson
Hi,

What you are seeing is not really a bug.

In pycalc when assigning

j = 20 * j

you create a new object `j` and the argument object `j`, that links
back to Fortran data, gets discarded.
So, you can change j inplace, for example:

   j[:] = 20*j

The first argument `i` is an int object, which in Python is immutable
and hence cannot be changed in place, so the corresponding Fortran
scalar cannot be changed from the callback (in fact, `i`
is a copy of the corresponding Fortran data value).

To change `i` in Python callback, define it as an array
(similar to `j`) and do inplace operations with it.
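The rebinding-versus-in-place distinction is ordinary Python name binding, easy to check without any Fortran; a minimal sketch:

```python
import numpy as np

def rebind(j):
    j = 20 * j      # binds the local name to a new array; caller unaffected

def inplace(j):
    j[:] = 20 * j   # writes through to the caller's buffer

a = np.array([3], dtype=np.int32)
rebind(a)
print(a)   # [3]  (unchanged)
inplace(a)
print(a)   # [60] (modified in place)
```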

Regarding pyreturn, and assuming Fortran 77:
you cannot return arrays or multiple objects from Fortran functions;
Fortran functions may return only scalars. So the pyreturn trick will
never work. The solution is to change arguments in place, as with `j`.

Hmm, regarding `intent(in, out) j`, this should work. I'll check what
is going on..

HTH,
Pearu


James McEnerney wrote:
   While using the call-back feature of f2py I stumbled across what appears
 to be a bug and I'm asking the community to look into this.
 
 Background: I'm in the middle of converting some legacy fortran to python.
 There is one routine that is particulary thorny that calls more easily
 convertible service routines and my intention is to convert the latter
 and use the callback feature of f2py to execute them within the fortran
 followed by a systematic conversion of what remains.  This seems to be
 doable from what I've read on callback. I have not seen an example
 of using callback where the python actually changes parameters that are
 returned to the fortran; this is a requirement for me. While setting up
 an example to illustrate this I came across a syntactically correct
 situation(this means it compiles  executes) but gives the wrong answer.
 Here's the code:
 In fortran, source foo.f
   subroutine calc(i, j)
 Cf2py intent(callback) pycalc
   external pycalc
 Cf2py integer intent(in,out,copy):: i
 Cf2py integer dimension(1), intent(in,out):: j
   integer pyreturn
 
   integer i, j(1)
   print *, 'in fortran before pycalc ','i=',i, ' j=', j(1)
   pyreturn = pycalc(i, j)
   print *, 'in fortran after pycalc ','i=', i, ' j=', j(1)
 
   end
  
 Standard build: f2py -c -m foo foo.f
 
 In python, execute
 import foo,numpy
 
 def pycalc(i, j):
 print ' in pycalc ', 'i=',i, 'j=', j
 i=10*i
 j = 20*j
 return i, j
 
 print foo.calc.__doc__
 i=2
 j = 1+numpy.array([i])
 print foo.calc(i,j, pycalc)
 
 Here's the output:
 calc - Function signature:
   i,j = calc(i,j,pycalc,[pycalc_extra_args])
 Required arguments:
   i : input int
   j : input rank-1 array('i') with bounds (1)
   pycalc : call-back function
 Optional arguments:
   pycalc_extra_args := () input tuple
 Return objects:
   i : int
   j : rank-1 array('i') with bounds (1)
 Call-back functions:
   def pycalc(i,j): return pyreturn,i,j
   Required arguments:
 i : input int
 j : input rank-1 array('i') with bounds (1)
   Return objects:
 pyreturn : int
 i : int
 j : rank-1 array('i') with bounds (1)
 
 
  in fortran before pycalc i= 2 j= 3
  in pycalc  i= 2 j= [3]
  in fortran after pycalc i= 60 j= 3
 (60, array([3]))
 
 The bug:
 on return to the fortran why is i=60  j=3?
 shouldn't it be i=10  j=60
 
 While that's what I expect, I might not be defining the
 interface properly; but this compiles  executes. If this
 is incorrect, what is?  In the fortran, pyreturn appears
 to be an address; how do I get the retuned values?
 
 I'm running 
 Redhat Linux
 python version 2.5
 f2py version 2_3979
 numpy version 1.0.3.1
 Thanks
 
 Jim McEnerney
 Lawrence Livermore National Laboratory
 7000 East Ave.
 Livermore, Ca. 94550-9234
 
 USA
 
 925-422-1963
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py function callback: error while using repeated arguments in function call

2009-11-09 Thread Pearu Peterson

Yves Frederix wrote:
 Hi,
 
 I am doing a simple function callback from fortran to python for which
 the actual function call in fortran has repeated arguments.
 
 ! callback_error.f90:
 subroutine testfun(x)
double precision, intent(in) :: x
double precision :: y
 !f2py intent(callback) foo
 !f2py double precision :: arg1
 !f2py double precision :: arg2
 !f2py double precision :: y
 !f2py external y = foo(arg1, arg2)
external foo
y = foo(x, x) !  -- this causes problems
print *, 'y:', y
 end subroutine testfun

..

 Is this expected behavior?

No. The bug is now fixed in numpy svn (rev 7712).

Thanks for pointing out this corner case.
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: a journal paper about F2PY has been published

2009-10-05 Thread Pearu Peterson


 Original Message 
Subject: [f2py] ANN: a journal paper about F2PY has been published
Date: Mon, 05 Oct 2009 11:52:20 +0300
From: Pearu Peterson pearu.peter...@gmail.com
Reply-To: For users of the f2py program f2py-us...@cens.ioc.ee
To: For users of the f2py program f2py-us...@cens.ioc.ee

Hi,

A journal paper about F2PY has been published in International Journal
of Computational Science and Engineering:

  Peterson, P. (2009) 'F2PY: a tool for connecting Fortran and Python
  programs', Int. J. Computational Science and Engineering.
  Vol.4, No. 4, pp.296-305.

So, if you would like to cite F2PY in a paper or presentation, using
this reference is recommended.

Interscience Publishers will update their web pages with the new journal
number within few weeks. A softcopy of the article
available in my homepage:
  http://cens.ioc.ee/~pearu/papers/IJCSE4.4_Paper_8.pdf

Best regards,
Pearu

___
f2py-users mailing list
f2py-us...@cens.ioc.ee
http://cens.ioc.ee/mailman/listinfo/f2py-users
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054)

2009-03-16 Thread Pearu Peterson
On Sun, March 15, 2009 8:57 pm, Sturla Molden wrote:

 Regarding ticket #1054. What is the reason for this strange behaviour?

 >>> a = np.zeros((10,10), order='F')
 >>> a.flags
   C_CONTIGUOUS : False
   F_CONTIGUOUS : True
   OWNDATA : True
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False
 >>> (a+1).flags
   C_CONTIGUOUS : True
   F_CONTIGUOUS : False
   OWNDATA : True
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False

I wonder if this behavior could be considered a bug,
because it does not seem to have any advantages; it
only hides the storage order change, and that may introduce
inefficiencies.

If an operation produces a new array, then the new array should have
the storage properties of the lhs operand.
That would allow writing code

  a = zeros(shape, order='F')
  b = a + 1

instead of

  a = zeros(shape, order='F')
  b = a[:]
  b += 1

to keep the storage properties in operations.
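Until the coercion rules changed (newer NumPy versions do propagate the input's layout through ufuncs), the order could also be kept explicitly with a preallocated output array; a sketch:

```python
import numpy as np

a = np.zeros((10, 10), order='F')
b = np.empty_like(a)     # empty_like preserves the Fortran layout
np.add(a, 1, out=b)      # compute a + 1 into the F-ordered buffer
print(b.flags['F_CONTIGUOUS'])  # True
```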

Regards,
Pearu



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Superfluous array transpose (cf. ticket #1054)

2009-03-16 Thread Pearu Peterson
On Mon, March 16, 2009 4:05 pm, Sturla Molden wrote:
 On 3/16/2009 9:27 AM, Pearu Peterson wrote:

 If a operation produces new array then the new array should have the
 storage properties of the lhs operand.

 That would not be enough, as 1+a would behave differently from a+1. The
 former would change storage order and the latter would not.

Actually, 1+a would be handled by the __radd__ method, and hence
the storage order would be defined by the rhs (the lhs of the __radd__ method).

 Broadcasting arrays adds further to the complexity of the problem.

I guess, similar rules should be applied to storage order then.

Pearu


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] What is the logical value of nan?

2009-03-11 Thread Pearu Peterson
On Wed, March 11, 2009 7:50 am, Christopher Barker wrote:

Python does not distinguish between True and
False -- Python makes the distinction between something and nothing.

 In that context, NaN is nothing, thus False.

Mathematically speaking, NaN is a quantity with an undefined value. Closer
analysis of a particular case may reveal that it is some finite number,
or an infinity with some direction, or intrinsically undefined.
NaN is something that cannot be defined because its value is not unique;
nothing would be the content of the empty set.
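For reference, CPython and NumPy both treat NaN as truthy, since it is a nonzero float value; a quick check:

```python
import math

nan = float('nan')
print(bool(nan))        # True: NaN is "something", not "nothing"
print(nan == nan)       # False: NaN never compares equal, even to itself
print(math.isnan(nan))  # True: the reliable way to test for NaN
```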

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] error handling with f2py?

2009-01-16 Thread Pearu Peterson
On Thu, January 15, 2009 6:17 pm, Sturla Molden wrote:
 Is it possible to make f2py raise an exception if a fortran routine
 signals an error?

 If I e.g. have

  subroutine foobar(a, ierr)

 Can I get an exception automatically raised if ierr != 0?

Yes, for that you need to provide your own Fortran call code
using the f2py callstatement construct. The initial Fortran call
code can be obtained from the f2py-generated <modulename>module.c file,
for instance.

An example follows below:

Fortran file foo.f:
---

  subroutine foo(a, ierr)
  integer a
  integer ierr
  if (a.gt.10) then
ierr=2
  else
 if (a.gt.5) then
ierr=1
 else
ierr = 0
 end if
  end if
  end

Generated (f2py -m m foo.f) and then modified signature file m.pyf:
---

!-*- f90 -*-
! Note: the context of this file is case sensitive.

python module m ! in
interface  ! in :m
subroutine foo(a,ierr) ! in :m:foo.f
integer :: a
integer :: ierr
intent (in, out) a
intent (hide) ierr
callstatement '''
(*f2py_func)(a, ierr);
if (ierr==1) {
    PyErr_SetString(PyExc_ValueError, "a is gt 5");
}
if (ierr==2) {
    PyErr_SetString(PyExc_ValueError, "a is gt 10");
}
'''
end subroutine foo
end interface
end python module m

! This file was auto-generated with f2py (version:2_5618).
! See http://cens.ioc.ee/projects/f2py2e/

Build the extension module and use from python:
---

$ f2py -c m.pyf foo.f
$ python
>>> import m
>>> m.foo(30)
---------------------------------------------------------------------------
<type 'exceptions.ValueError'>            Traceback (most recent call last)

/home/pearu/test/f2py/exc/<ipython console> in <module>()

<type 'exceptions.ValueError'>: a is gt 10
>>> m.foo(6)
---------------------------------------------------------------------------
<type 'exceptions.ValueError'>            Traceback (most recent call last)

/home/pearu/test/f2py/exc/<ipython console> in <module>()

<type 'exceptions.ValueError'>: a is gt 5
>>> m.foo(4)
4
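The same status-code-to-exception mapping can of course live on the Python side, for wrappers that hand `ierr` back instead of raising. A sketch, with `foo` as a pure-Python stand-in for the wrapped routine:

```python
_MESSAGES = {1: "a is gt 5", 2: "a is gt 10"}

def check_ierr(ierr):
    """Raise if a Fortran-style status code signals an error."""
    if ierr != 0:
        raise ValueError(_MESSAGES.get(ierr, "unknown error %d" % ierr))

def foo(a):
    # Computes the same ierr as the Fortran example above,
    # then converts it into an exception.
    ierr = 2 if a > 10 else (1 if a > 5 else 0)
    check_ierr(ierr)
    return a

print(foo(4))  # 4
```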

HTH,
Pearu



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] f2py - a recap

2008-07-24 Thread Pearu Peterson

Hi,

A few months ago I joined a group of systems biologists and I have
been busy with new projects (mostly C++ based). So I haven't
had a chance to work on f2py.

However, I am still around to fix f2py bugs and maintain/support
numpy.f2py (as long as the current numpy maintainers allow it..)
-- as a rule these tasks do not take much of my time.

I have also rewritten the f2py users guide
for numpy.f2py and submitted a paper on f2py. I'll make them
available when I get some more time..

Regards,
still-kicking-yoursly,
Pearu


On Thu, July 24, 2008 1:46 am, Fernando Perez wrote:
 Howdy,

 On Wed, Jul 23, 2008 at 3:18 PM, Stéfan van der Walt [EMAIL PROTECTED]
 wrote:
 2008/7/23 Fernando Perez [EMAIL PROTECTED]:

 I agree (with your previous e-mail) that it would be good to have some
 documentation, so if you could give me some pointers on *what* to
 document (I haven't used f2py much), then I'll try my best to get
 around to it.

 Well, I think my 'recap' message earlier in this thread points to a
 few issues that can probably be addressed quickly (the 'instead' error
 in the help, the doc/docs dichotomy needs to be cleaned up so a single
 documentation directory exists, etc).   I'm also  attaching a set of
 very old notes I wrote years ago on f2py that you are free to use in
 any way you see fit.  I gave them a 2-minute rst treatment but didn't
 edit them at all, so they may be somewhat outdated (I originally wrote
 them in 2002 I think).

 If Pearu has moved to greener pastures, f2py could certainly use an
 adoptive parent.  It happens to be a really important piece of
 infrastructure and  for the most part it works fairly well.   I think
 a litlte bit of cleanup/doc integration with the rest of numpy is
 probably all that's needed, so it could be a good project for someone
 to adopt that would potentially be low-demand yet quite useful.

 Cheers,

 f
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] set_local_path in test files

2008-07-02 Thread Pearu Peterson

Alan McIntyre wrote:
 Some test files have a set_local_path()/restore_path() pair at the
 top, and some don't.  Is there any reason to be changing sys.path like
 this in the test modules?  If not, I'll take them out when I see them.

The idea behind set_local_path is that it allows running tests
inside subpackages without the need to rebuild the entire package.
set_local_path()/restore_path() are convenient when debugging or
developing a subpackage.
If you are sure that there are no bugs in numpy subpackages
that need such debugging process, then the set_local_path()
restore_path() calls can be removed. (But please do not
remove them from scipy tests files, rebuilding scipy just
takes too much time and debugging subpackages globally would
be too painful).
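The mechanism can be sketched in pure Python (an assumed reconstruction of the idea only, not the exact numpy.testing code of that era):

```python
import os
import sys

_saved_paths = []

def set_local_path(path="."):
    """Prepend a local directory to sys.path so a subpackage's tests
    pick up the freshly built extensions instead of installed ones."""
    _saved_paths.append(list(sys.path))
    sys.path.insert(0, os.path.abspath(path))

def restore_path():
    """Undo the most recent set_local_path() call."""
    sys.path[:] = _saved_paths.pop()
```

A test module would call set_local_path() at the top, do its imports, and call restore_path() once the imports are done.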

Pearu





Re: [Numpy-discussion] set_local_path in test files

2008-07-02 Thread Pearu Peterson
On Wed, July 2, 2008 8:25 pm, Robert Kern wrote:
 On Wed, Jul 2, 2008 at 09:01, Alan McIntyre [EMAIL PROTECTED]
 wrote:
 On Wed, Jul 2, 2008 at 9:35 AM, Pearu Peterson [EMAIL PROTECTED]
 wrote:
 Alan McIntyre wrote:
 Some test files have a set_local_path()/restore_path() pair at the
 top, and some don't.  Is there any reason to be changing sys.path like
 this in the test modules?  If not, I'll take them out when I see them.

 The idea behind set_local_path is that it allows running tests
 inside subpackages without the need to rebuild the entire package.

 Ah, thanks; I'd forgotten about that.  I'll leave them alone, then.  I
 made a note for myself to make sure it's possible to run tests locally
 without doing a full build/install (where practical).

 Please remove them and adjust the imports. As I've mentioned before,
 numpy and scipy can now reliably be built in-place with python
 setup.py build_src --inplace build_ext --inplace. This is a more
 robust method to test uninstalled code than adjusting sys.path.

Note that the point of set_local_path is not to test uninstalled
code but to test only a subpackage. For example,

  cd svn/scipy/scipy/fftpack
  python setup.py build
  python tests/test_basic.py

would run the tests using the extensions from the build directory.
Well, at least it used to do that in past but it seems that the
feature has been removed from scipy svn:(

Scipy subpackages used to be usable as standalone packages
(even not requiring scipy itself) but this seems to be changed.
This is not good from the refactoring point of view.

Pearu



Re: [Numpy-discussion] error importing a f2py compiled module.

2008-06-23 Thread Pearu Peterson
On Mon, June 23, 2008 10:38 am, Fabrice Silva wrote:
 Dear all
 I've tried to run f2py on a fortran file which used to be usable from
 python some months ago.
 Following command lines are applied with success (no errors raised) :
 f2py -m modulename -h tmpo.pyf --overwrite-signature  tmpo.f
 f2py -m modulename -c --f90exec=/usr/bin/f95 tmpo.f

First, it is not clear what compiler is f95. If it is gfortran, then
use the command
  f2py -m modulename -c --fcompiler=gnu95 tmpo.f

If it is something else, check the output of

  f2py -c --help-fcompiler

and use the appropriate --fcompiler switch.

Second, I hope you realize that the first command has no effect to
the second command. If you have edited the tmpo.pyf file, then use
the following second command:

  f2py tmpo.pyf  -c --fcompiler=gnu95 tmpo.f

 The output of these commands is available here:
 http://paste.debian.net/7307

 When importing in Python with import modulename, I have an
 ImportError:
 Traceback (most recent call last):
  File "Solveur.py", line 44, in <module>
 import modulename as Modele
 ImportError: modulename.so: failed to map segment from shared
 object: Operation not permitted

 How can that be fixed ? Any suggestion ?

I don't  have ideas what is causing this import error. Try
the instructions above, may be it is due to some compile object
conflicts.

HTH,
Pearu



Re: [Numpy-discussion] seeking help with f2py_options

2008-06-21 Thread Pearu Peterson
On Sat, June 21, 2008 3:28 pm, Helmut Rathgen wrote:
 Dear all,

 I am trying to write a setup.py based on numpy.distutils for a mixed
 python/fortran90 package.

 I'd like to specify the fortran compiler, such as the path to the
 compiler, compiler flags, etc. in setup.py.

 I seemed to understand that this should be done by passing 'f2py_options
 = [ ..., ... ]' to numpy.distutils.core.Extension()

 However, I get errors such as

 [EMAIL PROTECTED]:~/src/testdistutils$ python setup.py build
 running build
 running scons
 customize UnixCCompiler
 Found executable /usr/bin/gcc
 customize GnuFCompiler
 Could not locate executable g77
 Could not locate executable f77
 customize IntelFCompiler
 Found executable /opt/intel/fc/10.0.026/bin/ifort
 customize IntelFCompiler
 customize UnixCCompiler
 customize UnixCCompiler using scons
 running config_cc
 unifing config_cc, config, build_clib, build_ext, build commands
 --compiler options
 running config_fc
 unifing config_fc, config, build_clib, build_ext, build commands
 --fcompiler options
 running build_src
 building extension mrcwaf sources
 f2py options: ['--fcompiler=intel',
 '--f90exec=/opt/intel/fc/10.0.026/bin/ifort', '--opt=-O3 -xW -ipo',
 '--noarch']
 f2py: src/mrcwaf/mrcwaf.pyf
 Unknown option '--fcompiler=intel'


 How to use f2py_options - or should flags be passed in a different way?


Note that --fcompiler= and other such options are actually options
to numpy.distutils (f2py script would just pass these options
forward to numpy.distutils). f2py_options can contain only f2py
specific options.
Hence you should try to modify sys.argv
in the beginning of the setup.py file to specify the fortran
compiler options. For example, in setup.py file, insert:

import sys
sys.argv.extend('config_fc --fcompiler=intel'.split())

See
  python setup.py config_fc --help
  python setup.py build_ext --help
for more information about possible options.
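As a slightly fuller sketch (untested; note that distutils reads its command line from sys.argv, so that is where the extra options must go):

```python
# Hypothetical top of a setup.py: inject Fortran compiler options
# before setup() parses the command line.
import sys

_extra = 'config_fc --fcompiler=intel'.split()
# Insert right after the script name so they precede any user-given commands.
sys.argv[1:1] = [opt for opt in _extra if opt not in sys.argv]
```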

HTH,
Pearu



Re: [Numpy-discussion] nose changes checked in

2008-06-17 Thread Pearu Peterson
On Tue, June 17, 2008 6:17 am, Robert Kern wrote:
 On Mon, Jun 16, 2008 at 21:18, Alan McIntyre [EMAIL PROTECTED]
 wrote:
 On Mon, Jun 16, 2008 at 9:04 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:

 In [1]: numpy.test()
 Not implemented: Defined_Binary_Op
 Not implemented: Defined_Binary_Op
 Defined_Operator not defined used by Generic_Spec
 Needs match implementation: Allocate_Stmt
 Needs match implementation: Associate_Construct
 snip

 That stream of Not implemented and Needs match implementation
 stuff comes from numpy/f2py/lib/parser/test_Fortran2003.py and
 Fortran2003.py.  I can silence most of that output by disabling those
 module-level if 1: blocks in those two files, but there's still
 complaints about Defined_Binary_Op not being implemented.  If someone
 more knowledgeable about that module could comment on that I'd
 appreciate it.

 These files were for the in-development g3 version of f2py. That
 development has moved outside of numpy, so I think they can be removed
 wholesale. Some of them seem to require a Fortran 90 compiler, so I
 have had them fail for me. Removing these is a high priority.

 Pearu, can you confirm? Can we give you the task of removing the
 unused g3 portions of f2py for the 1.2 release?

Yes, I have created the corresponding ticket some time ago:
  http://projects.scipy.org/scipy/numpy/ticket/758

Pearu



Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson


David Cournapeau wrote:
 Pearu Peterson wrote:
 So I beg to be flexible with f2py related commits for now. 
 
 Why not creating a branch for the those changes, and applying only 
 critical bug fixes to the trunk ?

How do you define a critical bug? Critical to whom?
f2py changes are never critical to numpy users who do not use f2py.
I have stated before that I am not developing numpy.f2py any further. 
This also means that any changes to f2py should be essentially bug
fixes. Creating a branch for bug fixes is a waste of time, imho.
If somebody is willing to maintain the branch, that is, periodically
sync the branch with the trunk and vice-versa, then I don't mind.

Pearu


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 12:03 pm, David Cournapeau wrote:
 Pearu Peterson wrote:
 f2py changes are never critical to numpy users who do not use f2py.

 No, but they are to scipy users if f2py cannot build scipy.

Well, I know pretty well what f2py features scipy uses and
what could break scipy build. So, don't worry about that.

 I have stated before that I am not developing numpy.f2py any further.
 This also means that any changes to f2py should be essentially bug
 fixes. Creating a branch for bug fixes is a waste of time, imho.

 I was speaking about creating a branch for the unit tests changes you
 were talking about, that is things which could potentially break a lot
 of configurations.

A branch for the unit tests changes is of course reasonable.

 Is the new f2py available for users ? If yes,..

No, it is far from being usable now. The numpy.f2py and g3 f2py
are completely different software. The changeset was fixing
a bug in numpy.f2py, it has nothing to do with g3 f2py.

amazing-how-paranoiac-is-todays-numpy/scipy-development'ly yours,
Pearu




Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 12:59 pm, Jarrod Millman wrote:

 Commits to the trunk (1.2.x) should follow these rules:

 1.  Documentation fixes are allowed and strongly encouraged.
 2.  Bug-fixes are strongly encouraged.
 3.  Do not break backwards compatibility.
 4.  New features are permissible.
 5.  New tests are highly desirable.
 6.  If you add a new feature, it must have tests.
 7.  If you fix a bug, it must have tests.

 If you want to break a rule, don't.  If you feel you absolutely have
 to, please don't--but feel free send an email to the list explain your
 problem.
...
 In particular, let me know it there is some aspect of this that
 you simply refuse to agree to in at least principle.

Since you asked, I have a problem with the rule 7 when applying
it to packages like numpy.distutils and numpy.f2py, for instance.

Do you realize that there exist bugs/features for which unittests cannot
be written in principle? An example: say, a compiler vendor changes
a flag of the new version of the compiler so that numpy.distutils
is not able to detect the compiler or it uses wrong flags for the
new compiler when compiling sources. Often, the required fix
is trivial to find and apply, also just reading the code one can
easily verify that the patch does not break anything. However, to
write a unittest covering such a change would mean that one needs
to ship also the corresponding compiler to the unittest directory.
This is nonsense, of course. I can find other similar examples
that have needed attention and changes to numpy.distutils and
numpy.f2py in past and I know that few are coming up.

Pearu



Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-20 Thread Pearu Peterson
On Tue, May 20, 2008 1:36 pm, Jarrod Millman wrote:
 On Mon, May 19, 2008 at 10:29 PM, Pearu Peterson [EMAIL PROTECTED]
 wrote:
 On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
 Is this an important bugfix? If not, can you hold off until 1.1.0 is
 released?

 The patch fixes a long existing and unreported bug in f2py - I think
 the bug was introduced when Python defined min and max functions.
 I learned about the bug when reading a manuscript about f2py. Such bugs
 should not end up in a paper demonstrating f2py inability to process
 certain features as it would have not been designed to do so. So, I'd
 consider
 the bugfix important.

 I have been struggling to try and get a stable release out since
 February and every time I think that the release is almost ready some
 piece of code changes that requires me to delay.  While overall the
 code has continuously improved over this period, I think it is time to
 get these improvements to our users.

 That said, I am willing to leave this change on the trunk, but please
 refrain from making any more changes until we release 1.1.0.  I know
 it can be frustrating, but, I believe, this is the first time I have
 asked the community to not make commits to the trunk since I started
 handling releases almost a year ago.  The freeze has only been in
 effect since Saturday and will last less than one week in total.  I
 would have preferred if you could have made this change during any one
 of the other 51 weeks of the year.

Please, go ahead. I'll not commit non-critical changes until the trunk
is open again.

Pearu



Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-19 Thread Pearu Peterson
CC: numpy-discussion because of other reactions on the subject.

On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
 Is this an important bugfix? If not, can you hold off until 1.1.0 is
 released?

The patch fixes a long existing and unreported bug in f2py - I think
the bug was introduced when Python defined min and max functions.
I learned about the bug when reading a manuscript about f2py. Such bugs
should not end up in a paper demonstrating f2py inability to process
certain
features as it would have not been designed to do so. So, I'd consider
the bugfix important.

On the other hand, the patch does not affect numpy users who do not
use f2py, in any way. So, it is not important for numpy users, in general.

Hmm, I also thought that the trunk is open for development, even though
r5198 is only fixing a bug (and I do not plan to develop f2py in numpy
further, just fix bugs and maintain it). If the release process
is going to take for weeks and is locking the trunk, may be the
release candidates should live in a separate branch?

Pearu



Re: [Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread Pearu Peterson
On Sun, May 18, 2008 1:14 pm, David Cournapeau wrote:
 Hi,

 I would like to be able to build a f2py extension in a subdir with
 distutils, that is:

 config.add_extension('foo/bar', source = ['foo/bar.pyf'])

A safe approach would be to create a foo/setup.py that contains
  config.add_extension('bar', source = ['bar.pyf'])
and in the parent setup.py add
  config.add_subpackage('foo')
(you might also need to create foo/__init__.py).
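Spelled out, the sub-package layout might look like this (an untested sketch, assuming the numpy.distutils API of that era; note the keyword is `sources`):

```python
# foo/setup.py
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('foo', parent_package, top_path)
    config.add_extension('bar', sources=['bar.pyf'])
    return config

# The parent setup.py then calls config.add_subpackage('foo').
if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)
```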

 But it does not work right now because of the way numpy.distutils finds
 the name of the extension. Replacing:

 ext_name = extension.name.split('.')[-1]

 by

 ext_name = os.path.basename(extension.name.split('.')[-1])

 Seems to make it work. Could that break anything in numpy.distutils ? I
 don't see how, but I don't want to touch distutils without being sure it
 won't,

The change should not break anything that already works because
in distutils extension name is assumed to contain names joined with a dot.
If distutils works with / in extension names, then I think it is
by accident. I'd recommend checking this also on a Windows system
before changing numpy.distutils, not sure if it works or not there..
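The effect of the proposed one-line patch can be checked in isolation (a sketch of the patched expression only, not of numpy.distutils as a whole):

```python
import os

extension_name = 'foo/bar'          # path-style name from the example
old_ext_name = extension_name.split('.')[-1]                    # 'foo/bar'
new_ext_name = os.path.basename(extension_name.split('.')[-1])  # 'bar'

# A conventional dotted name behaves the same before and after the patch:
dotted = 'pkg.sub.bar'.split('.')[-1]                           # 'bar'
print(old_ext_name, new_ext_name, dotted)
```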

Pearu



Re: [Numpy-discussion] Tagging 1.1rc1 in about 12 hours

2008-05-17 Thread Pearu Peterson
On Sat, May 17, 2008 7:48 pm, Charles R Harris wrote:
 On Fri, May 16, 2008 at 1:20 AM, Jarrod Millman [EMAIL PROTECTED]
 wrote:

 Once I tag 1.1.0, I will open the trunk for 1.1.1 development.
...
 Any development for 1.2 will have to occur on a new branch.

 So open the new branch already.

I am waiting it too. At least, give another time target for 1.1.0.
(ticket 752 has a patch ready and waiting for a commit,
if 1.1.0 is going to wait another few days, the commit to 1.1.0
should be safe).

Pearu




Re: [Numpy-discussion] Tagging 1.1rc1 in about 12 hours

2008-05-16 Thread Pearu Peterson


Jarrod Millman wrote:
 Hello,
 
 I believe that we have now addressed everything that was holding up
 the 1.1.0 release, so I will be tagging the 1.1.0rc1 in about 12
 hours.  Please be extremely conservative and careful about any commits
 you make to the trunk until we officially release 1.1.0 (now may be a
 good time to spend some effort on SciPy).  Once I tag the release
 candidate I will ask both David and Chris to create Windows and Mac
 binaries.  I will give everyone a few days to test the release
 candidate and binaries thoroughly.  If everything looks good, the
 release candidate will become the official release.
 
 Once I tag 1.1.0, I will open the trunk for 1.1.1 development.  Any
 development for 1.2 will have to occur on a new branch.

I am working with the ticket 752 at the moment and I would probably
not want to commit my work to 1.1.0 at this time, so I shall commit
when trunk is open as 1.1.1.
My question regarding branching: how will the changes from 1.1.1
end up in the 1.2 branch?

Thanks,
Pearu


Re: [Numpy-discussion] f2py and -D_FORTIFY_SOURCE=2 compilation flag

2008-05-15 Thread Pearu Peterson


Robert Kern wrote:
 On Wed, May 14, 2008 at 3:20 PM, David Huard [EMAIL PROTECTED] wrote:
 I filed a patch that seems to do the trick in ticket #792.
 
 I don't think this is the right approach. The problem isn't that
 _FORTIFY_SOURCE is set to 2 but that f2py is doing (probably) bad
 things that trip these buffer overflow checks. IIRC, Pearu wasn't on
 the f2py mailing list at the time this came up; please try him again.

I was able to reproduce the bug on a debian system. The fix with
a comment on what was causing the bug, is in svn:

   http://scipy.org/scipy/numpy/changeset/5173

I should warn that the bug fix does not have unittests because:
1) testing the bug requires Fortran compiler that for NumPy is
an optional requirement.
2) I have tested the fix with two different setups that should cover
all possible configurations.
3) In the case of problems with the fix, users should notice it immediately.
4) I have carefully read the patch before committing.

Regards,
Pearu


Re: [Numpy-discussion] First steps with f2py and first problems...

2008-05-08 Thread Pearu Peterson
On Thu, May 8, 2008 2:06 pm, LB wrote:
 Hi,

 I've tried to follow the example given at :
 http://www.scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell
 but I've got errors when compiling the fortran file :

 12:53 loic:~ % f2py -c -m hastings hastings.f90 --fcompiler=gnu95
...
   File "/usr/lib/python2.5/site-packages/numpy/f2py/rules.py", line
 1222, in buildmodule
 for l in '\n\n'.join(funcwrappers2)+'\n'.split('\n'):
 TypeError: cannot concatenate 'str' and 'list' objects
 zsh: exit 1 f2py -c -m hastings hastings.f90 --fcompiler=gnu95
...
 Have you got any clue to solve this problem?

This issue is fixed in SVN. So, either use numpy from svn,
or wait a bit until numpy 1.0.5 is released, or change the
line #1222 in numpy/f2py/rules.py to

  for l in ('\n\n'.join(funcwrappers2)+'\n').split('\n'):
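The parentheses matter because of operator precedence; a quick illustration of the bug and the fix:

```python
funcwrappers2 = ["sub a", "sub b"]

# Buggy form: .split binds tighter than +, so this tries str + list.
try:
    '\n\n'.join(funcwrappers2) + '\n'.split('\n')
except TypeError as exc:
    print("bug reproduced:", exc)

# Fixed form: parenthesize first, then split the whole string.
lines = ('\n\n'.join(funcwrappers2) + '\n').split('\n')
print(lines)  # ['sub a', '', 'sub b', '']
```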

HTH,
Pearu



Re: [Numpy-discussion] 673

2008-04-28 Thread Pearu Peterson
Hi,

As far as I am concerned, the issue needs a cosmetic fix of renaming
pythonxerbla to python_xerbla and the rest of the issue can be
postponed to 1.2.

Note that this isn't purely a numpy issue. To fix the
issue, system or user provided blas/lapack libraries need to be changed,
we can only give instructions how to do that. Doing the change
automatically requires testing the corresponding support code for many
different platforms and setups - this requires some effort and time..
and most importantly, some volunteers.

Pearu

Jarrod Millman wrote:
 On Wed, Apr 9, 2008 at 6:34 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Unfortunately, I couldn't get this patch to work, and my time has run
  out.  Maybe someone with more knowledge the precedences/order of
  functions during linking can take a look.  I don't know how to tell
  the system BLAS to call our custom xerbla, instead of the one
  provided.

  The patch addresses an important issue, though, so it warrants some
  more attention.
 
 Hey,
 
 I was wondering what the status of this ticket is?  Is this something
 that should be fixed before 1.1.0?
 
 Thanks,
 


Re: [Numpy-discussion] tests in distutils/exec_command.py

2008-04-26 Thread Pearu Peterson
On Sat, April 26, 2008 7:53 pm, Zbyszek Szmek wrote:
 Hi,
 while looking at test coverage statistics published by Stéfan van der Walt
 at http://mentat.za.net/numpy/coverage/, I noticed that the
 least-covered file, numpy/distutils/exec_command.py has its own
 test routines, e.g.:
 def test_svn(**kws):
     s,o = exec_command(['svn','status'],**kws)
     assert s,(s,o)
     print 'svn ok'

 called as
 if __name__ == "__main__":
     test_svn(use_tee=1)

 The sense of this test seems reversed (svn status runs just fine):
 Traceback (most recent call last):
   File "numpy/distutils/exec_command.py", line 591, in <module>
 test_svn(use_tee=1)
   File "numpy/distutils/exec_command.py", line 567, in test_svn
 assert s,(s,o)
 AssertionError: (0, '')

 Should the test be cleaned up and moved into a separate file in
 numpy/distutils/tests/ ?

Note that not all tests in exec_command.py are platform independent
(as it is the case with most distutils tools in general). So, be careful
when copying the tests to numpy/distutils/tests.

Pearu



[Numpy-discussion] New web site for the F2PY. tool

2008-04-24 Thread Pearu Peterson
Hi,

I have created a new web site for the F2PY tool:

   http://www.f2py.org

that will be used to collect F2PY related information.
At the moment, the site contains minimal information but
hopefully this will improve in future.
One can add content to the f2py.org site after registration
(see Register in the Login page).

Best regards,
Pearu


Re: [Numpy-discussion] [numscons] 0.6.1 release: it build scipy, and on windows !

2008-04-21 Thread Pearu Peterson


David Cournapeau wrote:

  - f2py has been almost entirely rewritten: it can now scan the
  module name automatically, and should be much more reliable.

What do you mean by ^^^? ;)

Pearu


Re: [Numpy-discussion] ticket #587

2008-04-09 Thread Pearu Peterson
On Wed, April 9, 2008 2:13 pm, Jarrod Millman wrote:
 Hey Pearu,

 Could you take a quick look at this:
 http://projects.scipy.org/scipy/numpy/ticket/587

I have fixed it in r4996.

However, when trying to change the ticket status, I get a forbidden error:

TICKET_APPEND privileges are required to perform this operation

Could the Trac admins add the required privileges for me?

Regards,
Pearu




Re: [Numpy-discussion] ticket #587

2008-04-09 Thread Pearu Peterson
On Wed, April 9, 2008 3:25 pm, Jarrod Millman wrote:
 On Wed, Apr 9, 2008 at 4:59 AM, Pearu Peterson [EMAIL PROTECTED] wrote:
  I have fixed it in r4996.

 Thanks,

  However, when trying to change the ticket status, I get a forbidden
 error:

  TICKET_APPEND privileges are required to perform this operation

  Could the Trac admins add the required privileges for me?

 Hmm...  Your permissions look right (i.e., you have TICKET_ADMIN,
 which is a superset of  TICKET_APPEND) and it looks like you were able
 to change the status OK.  Are you still having troubles with the Trac
 interface?

Yes, now I can change the tickets again. I thought that somebody
fixed the permissions, if not, then I guess my browser was in some
weird state.. But all good now.

Thanks,
Pearu




Re: [Numpy-discussion] f2py functions, docstrings, and epydoc

2008-03-27 Thread Pearu Peterson
Hi,

Tom Loredo wrote:
 Hi folks-
 
 Can anyone offer any tips on how I can get epydoc to produce
 API documentation for functions in an f2py-produced module?
 Currently they get listed in the generated docs as Variables:
 
 Variables
   psigc = <fortran object at 0xa3e46b0>
   sigctp = <fortran object at 0xa3e4698>
   smll_offset = <fortran object at 0xa3e46c8>
 
 Yet each of these objects is callable, and has a docstring.
 The module itself has docs that give a 1-line signature for
 each function, but that's only part of the docstring.

epydoc 3.0 supports variable documentation strings but only
in python codes. However, one can also let epydoc to generate
documentation for f2py generated functions (that, by the way, are
actually instances of `fortran` type and define __call__ method).
For that one needs to create a python module containing::

from somef2pyextmodule import psigc, sigctp, smll_offset

psigc = psigc
exec `psigc.__doc__`

sigctp = sigctp
exec `sigctp.__doc__`

smll_offset = smll_offset
exec `smll_offset.__doc__`

#etc

#eof

Now, when applying epydoc to this python file, epydoc will
produce docs also to these f2py objects.

It should be easy to create a python script that will
generate these python files, which epydoc could then use to
generate docs for f2py extension modules.
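Such a generator might look like this (a sketch: the emitted stub uses the Python 2 backtick syntax of the era, while the generator itself is plain Python):

```python
def make_epydoc_stub(module_name, names):
    """Return the source of a helper module that re-exports f2py
    objects so epydoc 3 sees their docstrings as variable docs."""
    lines = ["from %s import %s" % (module_name, ", ".join(names)), ""]
    for name in names:
        lines.append("%s = %s" % (name, name))     # variable assignment
        lines.append("exec `%s.__doc__`" % name)   # trailing doc constant
        lines.append("")
    return "\n".join(lines)

stub = make_epydoc_stub("somef2pyextmodule",
                        ["psigc", "sigctp", "smll_offset"])
print(stub)
```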

 One reason I'd like to see the full docstrings documented by epydoc
 is that, for key functions, I'm loading the functions into a
 module and *changing* the docstrings, to have info beyond the
 limited f2py-generated docstrings.
 
 On a related question, is there a way to provide input to f2py for
 function docstrings?  The manual hints that triple-quoted multiline
 blocks in the .pyf can be used to provide documentation, but when
 I add them, they don't appear to be used.

This feature is still implemented only partially and not enabled.
When I get more time, I'll finish it..

HTH,
Pearu


Re: [Numpy-discussion] f2py functions, docstrings, and epydoc

2008-03-27 Thread Pearu Peterson
On Thu, March 27, 2008 7:20 pm, Tom Loredo wrote:

 Pearu-

 smll_offset = smll_offset
 exec `smll_offset.__doc__`

 Thanks for the quick and helpful response!  I'll give it
 a try.  I don't grasp why it works, though.  I suppose I don't
 need to, but... I'm guessing the exec adds stuff to the current
 namespace that isn't there until a fortran object's attributes
 are explicitly accessed.

 While I have your attention... could you clear this up, also just
 for my curiosity?  It's probably related.

I got this idea from how epydoc gets documentation strings
for variables:

http://epydoc.sourceforge.net/whatsnew.html

according to which a string constant containing the documentation
must follow the variable assignment.

In our case,

  smll_offset = smll_offset

is variable assignment and

  exec `smll_offset.__doc__`

creates a string constant after the variable assignment.

 f2py generated functions (that, by the way, are
 actually instances of `fortran` type and define __call__ method).

 I had wondered about this when I first encountered this issue,
 and thought maybe I could figure out how to put some hook into
 epydoc so it would document anything with a __call__ method.
 But it looks like 'fortran' objects *don't* have a __call__
 (here _cbmlike is my f2py-generated module):

 In [1]: from inference.count._cbmlike import smllike

 In [2]: smllike
 Out[2]: <fortran object at 0x947a668>

 In [3]: dir smllike
 -- dir(smllike)
 Out[3]: ['__doc__', '_cpointer']

 In [4]: smllike.__call__
 ---
 AttributeError                  Traceback (most recent call last)

 /home/inference/loredo/tex/meetings/head08/<ipython console> in <module>()

 AttributeError: __call__

 Yet despite this apparent absence of __call__, I can magically
 call smllike just fine.  Would you provide a quick explanation of
 what f2py and the fortran object are doing here?

`fortran` object is an instance of a *extension type* `fortran`.
It does not have __call__ method, the extension type has a slot
in C struct that holds a function that will be called when
something tries to call the `fortran` object.
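The lookup rule can be mimicked in pure Python (an analogue only, not f2py's actual C implementation): an instance is callable when its *type* provides the call slot, even though the instance's own attribute dictionary never contains __call__.

```python
class FortranLike(object):
    """Pure-Python stand-in for f2py's C-level `fortran` type."""
    def __init__(self, func, doc):
        self._func = func
        self.__doc__ = doc

    def __call__(self, *args):  # lives on the type, like tp_call in C
        return self._func(*args)

smllike = FortranLike(lambda x: 2 * x, "smllike(x) -> 2*x")
print(smllike(21))                  # dispatched via type(smllike)
print('__call__' in vars(smllike))  # False: not an instance attribute
```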

If there are epydoc developers around in this list then here's
a feature request: epydoc support for extension types.

Regards,
Pearu




Re: [Numpy-discussion] f2py : callbacks without callback function as an argument

2008-03-12 Thread Pearu Peterson
On Wed, March 12, 2008 8:38 am, Daniel Creveling wrote:
 Hello-

 Is there a way to code a callback to python from
 fortran in a way such that the calling routine does
 not need the callback function as an input argument?
 I'm using the Intel fortran compiler for linux with
 numpy 1.0.4 and f2py gives version 2_4422.  My modules
 crash on loading because the external callback
 function is not set.  I noticed in the release notes
 for f2py 2.46.243 that it was a resolved issue, but I
 don't know how that development compares to version
 2_4422 that comes with numpy.

The development version of f2py in numpy has a fix for
callback support that was broken for few versions
of numpy. So, use either numpy from svn or wait a bit
for 1.0.5 release.

 The example that I was trying to follow is from some
 documentation off of the web:

   subroutine f1()
     print *, "in f1, calling f2 twice.."
     call f2()
     call f2()
     return
   end

   subroutine f2()
 cf2py intent(callback, hide) fpy
     external fpy
     print *, "in f2, calling fpy.."
     call fpy()
     return
   end

 f2py -c -m pfromf extcallback.f

 I'm supposed to be able to define the callback
 function from Python like:
 import pfromf
 def f(): print "This is Python"
 pfromf.fpy = f

 but I am unable to even load the module:
 import pfromf
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: ./pfromf.so: undefined symbol: fpy_

Yes, loading the module works with f2py from numpy svn.
However, calling f1 or f2 from Python fail because
the example does not leave a way to specify the fpy function.

Depending on your specific application, there are some ways
to fix it. For example, let the fpy function propagate from f1 to
f2 using an external argument to f1:

  subroutine f1(fpy)
  external fpy
  call f2(fpy)
  call f2(fpy)
  end

  subroutine f2(fpy)
  external fpy
  call fpy()
  end
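In pure-Python terms, the fix above just threads the callback through the call chain; a minimal sketch of that call pattern (this is an illustrative analogue, not the f2py-wrapped module itself):

```python
# Pure-Python analogue of the fixed Fortran above: f1 receives the
# callback and passes it through to f2, which finally invokes it.
# In the real module, fpy would be a Python function passed to the
# f2py-wrapped f1.
def f2(fpy):
    fpy()

def f1(fpy):
    f2(fpy)
    f2(fpy)

calls = []
f1(lambda: calls.append("This is Python"))
print(len(calls))  # f1 calls f2 twice, so the callback fires twice
```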

If this is something not suitable for your case, then there
exist ways to influence the generated wrapper codes from
signature files using special hacks. I can explain them later
when I get a better idea what you are trying to do.

HTH,
Pearu



[Numpy-discussion] ANN: sympycore version 0.1 released

2008-02-29 Thread Pearu Peterson
We are proud to present a new Python package:

   sympycore - an efficient pure Python Computer Algebra System

Sympycore is available for download from

   http://sympycore.googlecode.com/

Sympycore is released under the New BSD License.

Sympycore provides efficient data structures for representing symbolic
expressions and methods to manipulate them. Sympycore uses a very
clear algebra oriented design that can be easily extended.

Sympycore is a pure Python package with no external dependencies, it
requires Python version 2.5 or higher to run. Sympycore uses Mpmath
for fast arbitrary-precision floating-point arithmetic that is
included into sympycore package.

Sympycore is to our knowledge the most efficient pure Python
implementation of a Computer Algebra System. Its speed is comparable
to Computer Algebra Systems implemented in compiled languages. Some
comparison benchmarks are available in

   * http://code.google.com/p/sympycore/wiki/Performance

   * http://code.google.com/p/sympycore/wiki/PerformanceHistory

and it is our aim to continue seeking more efficient ways to
manipulate symbolic expressions:

   http://cens.ioc.ee/~pearu/sympycore_bench/

Sympycore version 0.1 provides the following features:

   * symbolic arithmetic operations
   * basic expression manipulation methods: expanding, substituting,
and pattern matching.
   * primitive algebra to represent unevaluated symbolic expressions
   * calculus algebra of symbolic expressions, unevaluated elementary
functions, differentiation and polynomial integration methods
   * univariate and multivariate polynomial rings
   * matrix rings
   * expressions with physical units
   * SympyCore User's Guide and API Docs are available online.

Take a look at the demo for sympycore 0.1 release:

   http://sympycore.googlecode.com/svn/trunk/doc/html/demo0_1.html

However, one should be aware that sympycore does not implement many
features that other Computer Algebra Systems do. The version number
0.1 speaks for itself:)

Sympycore is inspired by many attempts to implement CAS for Python and
it is created to fix SymPy performance and robustness issues.
Sympycore does not yet have nearly as many features as SymPy. Our goal
is to work in the direction of merging the efforts with the SymPy
project in the near future.

Enjoy!

   * Pearu Peterson
   * Fredrik Johansson

Acknowledgments:

   * The work of Pearu Peterson on the SympyCore project is supported
by a Center of Excellence grant from the Norwegian Research Council to
Center for Biomedical Computing at Simula Research Laboratory.



Re: [Numpy-discussion] f2py: sharing F90 module data between modules

2008-02-12 Thread Pearu Peterson
On Tue, February 12, 2008 7:52 am, Garry Willgoose wrote:
 I have a suite of fortran modules that I want to wrap with f2py
 independently (so they appear to python as seperate imports) but
 where each module has access to another fortran module (which
 contains global data that is shared between the suite of fortran
 modules). I currently compile all the fortran modules, including the
 global data, in a single f2py statement so all the fortran code gets
 imported in a single statement

The source of this issue boils down to

  http://bugs.python.org/issue521854

according to which your goal is unachievable because of how
Python loads shared libraries *by default*, see below.

 I guess the question is there any way that I can get fm3 to be shared
 between fm1 and fm2? The reasons for wanting to do this are because
 I'm developing a plug-in like architecture for environmental
 modelling where the user can develop new fortran modules (suitably
 f2py'ed) that can just be dropped into the module search path but
 still have access to the global data (subject to fortran module
 interfaces, etc).

The link above also gives an hint how to resolve this issue.
Try to use sys.setdlopenflags(...) before importing f2py generated
extension modules and then reset the state using sys.setdlopenflags(0).
See
  http://docs.python.org/lib/module-sys.html
for more information on how to find the proper value for ...
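With a modern Python the same dance looks roughly like this (hedged sketch; fm1/fm2 are the module names from the original post, and the os.RTLD_* constants replace the old dl module values referred to above):

```python
import os
import sys

# Save the current dlopen flags, switch to RTLD_GLOBAL so that symbols
# from the shared Fortran module become visible to subsequently loaded
# f2py extensions, then restore the previous behaviour.
saved = sys.getdlopenflags()
sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
try:
    # import fm1   # f2py-built extensions from the original post
    # import fm2   # (hypothetical names; uncomment in a real setup)
    pass
finally:
    sys.setdlopenflags(saved)
```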

HTH,
Pearu



Re: [Numpy-discussion] [F2PY]: Allocatable Arrays

2008-02-04 Thread Pearu Peterson
On Mon, February 4, 2008 4:39 pm, Lisandro Dalcin wrote:

 Pearu, now that f2py is part of numpy, I think it would be easier for
 you and also for users to post to the numpy list for f2py-related
 issues. What do you think?

Personaly, I don't have strong opinions on this.

On one hand, it would be better if f2py related issues are
raised in one and only one list. It could be numpy list as f2py
issues would not add much extra traffic to it.

On the other hand, if redirecting f2py-users messages to numpy
list and also vice versa, then I am not sure that all subscribes
in f2py-users list will appreciate extra messages about numpy
issues that might be irrelevant to them. Currently there are
about 180 subscribers to the f2py-users list, many of whom also
subscribe to the numpy-discussion list, I guess. If most of them
subscribed to the numpy-discussion list then we could get rid of the
f2py-users list. If the administrators of the numpy-discussion list could
share (privately) the list of subscriber emails with me then I could
check how many of the f2py users subscribe to the numpy list and
it would be easier to make a decision on the future of the f2py-users
list.

When I'll be back to working on f2py g3 then I'll probably
create a google code project for it. There we'll have separate
list for the future version of f2py anyway.

Regards,
Pearu



Re: [Numpy-discussion] [F2PY]: Allocatable Arrays

2008-02-01 Thread Pearu Peterson
On Fri, February 1, 2008 4:18 pm, Andrea Gavana wrote:
 Hi Lisandro,

 On Feb 1, 2008 1:59 PM, Lisandro Dalcin wrote:
 Sorry if I'm making noise, my knowledge of fortran is really little,
 but in your routine AllocateDummy your are fist allocating and next
 deallocating the arrays. Are you sure you can then access the contents
 of your arrays after deallocating them?

 Thank you for your answer.

 Unfortunately it seems that it doesn't matter whether I deallocate
 them or not, I still get the compilation warning and I can't access
 those variable in any case.

You cannot access those because they are deallocated. Try to
disable deallocate statements in your fortran code.

 It seems like f2py (or python or whatever)
 does not like having more than 1 allocatable array inside a MODULE
 declaration.

This is not true.

 How much complicated is your binary format?

 *Very* complex. The fact is, I already know how to read those files in
 Fortran, is the linking with Python via f2py that is driving me mad. I
 can't believe no one has used before allocatable arrays as outputs
 (whether from a subroutine or from a module).

You can use allocatable arrays from module in Python as described
in f2py users guide.

It could be that the problem is related to deallocating the arrays
in the fortran code.

Regards,
Pearu



Re: [Numpy-discussion] [F2PY]: Allocatable Arrays

2008-02-01 Thread Pearu Peterson
On Fri, February 1, 2008 8:39 pm, Robert Kern wrote:
 Pearu Peterson wrote:
 On Fri, February 1, 2008 1:28 pm, Andrea Gavana wrote:
 Hi All,

 I sent a couple of messages to f2py mailing list, but it seems
 like my problem has no simple solution so I thought to ask for some
 suggestions here.

 Sorry, I haven't been around there long time.

 Are you going to continue not reading the f2py list? If so, you should
 point everyone there to this list and close the list.

It is a bit embarrassing, I haven't read the list because I lost
the link to f2py list archives that I used to use in my mailing box
in the server.. but I have been also busy with non-python stuff
last years (I moved and got married..:).
Anyway, I have subscribed to the f2py list again and I'll try to respond
to any messages that have unresolved issues, also in the archives.

Thanks,
Pearu



Re: [Numpy-discussion] numpy.distutils does not output compilation warning on win32 ?

2008-01-21 Thread Pearu Peterson
Hi,

If I remember correctly then the warnings were disabled because
when compiling numpy/scipy on windows there were *lots* of
warnings, especially for pyrex generated sources.
When there is an error, all warnings will be shown. Hmm, and on
linux the warnings should also be shown (this actually depends on what
Python distutils one is using as the distutils logging interface
has been changed between Python versions). In any case, we can enable
all the warnings again with no problems (maybe only windows guys will
not be so happy about it).

Regards,
Pearu

On Mon, January 21, 2008 8:44 am, David Cournapeau wrote:
 Hi,

 I noticed a strange behaviour with distutils: when compiling C code
 on windows (using mingw), the compilation warning are not output on the
 console. For example, umathmodule.c compilation emits the following
 warnings:

 numpy\core\src\umathmodule.c.src:128: warning: static declaration of
 'acoshf' follows non-static declaration
 numpy\core\src\umathmodule.c.src:136: warning: static declaration of
 'asinhf' follows non-static declaration

 I think this is coming from distutils because when I execute the exact
 same command on the shell, I get those warnings. If there is an error
 (by inserting #error), then all the warnings appear also when using
 distutils, which would suggest that distutils checks the return status
 of gcc to decide when to output should be sent on the shell ? Anyway, if
 we can do something about it, I think it would be better to always
 output warnings (it took me quite a while to understand why I got
 warnings with scons and not with distutils...).

 cheers,

 David


Re: [Numpy-discussion] weird indexing

2008-01-04 Thread Pearu Peterson
On Fri, January 4, 2008 2:16 am, Mathew Yeates wrote:
 Hi
 Okay, here's a weird one. In Fortran you can specify the upper/lower
 bounds of an array
 e.g. REAL A(3:7)

 What would be the best way to translate this to a Numpy array? I would
 like to do something like
 A=numpy.zeros(shape=(5,))
 and have the expression A[3] actually return A[0].

 Or something. Any thoughts?

f2py wraps A(3:7) to numpy array A[0:5] and keeps it that way
as it is most natural for working on the Python side.
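If one really wants Fortran-style bounds on the Python side anyway, a thin wrapper that shifts the index is enough; a hedged sketch (the class name is made up for illustration):

```python
import numpy as np

# Emulate a Fortran declaration like REAL A(3:7) by keeping a plain
# 0-based numpy array and shifting indices in the accessors.
class FortranBounds:
    def __init__(self, lower, upper, dtype=float):
        self.lower = lower
        self.data = np.zeros(upper - lower + 1, dtype=dtype)

    def __getitem__(self, i):
        return self.data[i - self.lower]

    def __setitem__(self, i, value):
        self.data[i - self.lower] = value

a = FortranBounds(3, 7)
a[3] = 1.5          # maps to a.data[0], matching f2py's 0-based view
print(a.data[0])
```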

Pearu



Re: [Numpy-discussion] how to create an array of objects that are sequences?

2008-01-04 Thread Pearu Peterson
On Fri, January 4, 2008 8:00 pm, Pearu Peterson wrote:
 On Fri, January 4, 2008 7:33 pm, Travis E. Oliphant wrote:
 Pearu Peterson wrote:
 Hi,

 Say, one defines

 class A(tuple):
   def __repr__(self):
 return 'A(%s)' % (tuple.__repr__(self))

 and I'd like to create an array of A instances.

 So, create an empty object array and insert the entries the way you want
 them:

 a = np.empty(1,dtype=object)
 a[0] = A((1,2))

 Meantime I was reading arrayobject.c and it seems that
 before objects are checked for being sequences, their
 __array_interface__ is accessed (eg in Array_FromSequence,
 discover_depth).

 Would this provide a solution if the class A would define
 a property __array_interface__? I just don't know what
 the data field should be for an object.

Ok, I found a partial solution:


class A(tuple):
    def __repr__(self):
        return 'A(%s)' % (tuple.__repr__(self))
    @property
    def __array_interface__(self):
        import numpy
        obj = numpy.empty(1, dtype=object)
        obj[0] = self
        return obj.__array_interface__.copy()



Re: [Numpy-discussion] how to create an array of objects that are sequences?

2008-01-04 Thread Pearu Peterson
On Fri, January 4, 2008 8:00 pm, Pearu Peterson wrote:
 On Fri, January 4, 2008 7:33 pm, Travis E. Oliphant wrote:
 Pearu Peterson wrote:
 Hi,

 Say, one defines

 class A(tuple):
   def __repr__(self):
 return 'A(%s)' % (tuple.__repr__(self))

 and I'd like to create an array of A instances.

 The array function was designed a long time ago to inspect sequences and
 flatten them.

 Arguably, there should be more intelligence when an object array is
 requested, but there is ambiguity about what the right thing to do is.

 Thus, the current situation is that if you are creating object arrays,
 the advice is to populate it after the fact.

 So, create an empty object array and insert the entries the way you want
 them:

 a = np.empty(1,dtype=object)
 a[0] = A((1,2))

 Meantime I was reading arrayobject.c and it seems that
 before objects are checked for being sequences, their
 __array_interface__ is accessed (eg in Array_FromSequence,
 discover_depth).

 Would this provide a solution if the class A would define
 a property __array_interface__? I just don't know what
 the data field should be for an object.

Ok, I found a partial solution:

class A(tuple):
    def __repr__(self):
        return 'A('+tuple.__repr__(self)+')'
    @property
    def __array_interface__(self):
        import numpy
        obj = numpy.empty(1, dtype=object)
        obj[0] = self
        return obj.__array_interface__.copy()

>>> from numpy import *
>>> array([A((1,2))])
array([[1, 2]], dtype=object)

but

>>> array(A((1,2)))
array([None], dtype=object)

Pearu
PS: sorry about previous mail, Send was pressed accidentally.
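The populate-after-creation recipe given earlier in this thread does work, unlike the __array_interface__ experiment; a minimal runnable check:

```python
import numpy as np

class A(tuple):
    """Tuple subclass from the thread."""
    def __repr__(self):
        return 'A(%s)' % (tuple.__repr__(self))

# np.array([A((1, 2))]) would inspect and flatten the sequence, so
# create the empty object array first and fill it afterwards.
a = np.empty(1, dtype=object)
a[0] = A((1, 2))
print(type(a[0]).__name__, a.dtype)  # the element keeps its class
```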




Re: [Numpy-discussion] how to create an array of objects that are sequences?

2008-01-04 Thread Pearu Peterson

Just ignore this solution. It was not quite working
and I was able to get a segfault from it.

Pearu

On Fri, January 4, 2008 8:58 pm, Pearu Peterson wrote:
 On Fri, January 4, 2008 8:00 pm, Pearu Peterson wrote:
 On Fri, January 4, 2008 7:33 pm, Travis E. Oliphant wrote:
 Pearu Peterson wrote:
 Hi,

 Say, one defines

 class A(tuple):
   def __repr__(self):
 return 'A(%s)' % (tuple.__repr__(self))

 and I'd like to create an array of A instances.

 So, create an empty object array and insert the entries the way you
 want
 them:

 a = np.empty(1,dtype=object)
 a[0] = A((1,2))

 Meantime I was reading arrayobject.c and it seems that
 before objects are checked for being sequences, their
 __array_interface__ is accessed (eg in Array_FromSequence,
 discover_depth).

 Would this provide a solution if the class A would define
 a property __array_interface__? I just don't know what
 the data field should be for an object.

 Ok, I found a partial solution:


 class A(tuple):
     def __repr__(self):
         return 'A(%s)' % (tuple.__repr__(self))
     @property
     def __array_interface__(self):
         import numpy
         obj = numpy.empty(1, dtype=object)
         obj[0] = self
         return obj.__array_interface__.copy()



Re: [Numpy-discussion] ising model: f2py vs cython comparison

2007-12-23 Thread Pearu Peterson
On Sun, December 23, 2007 3:29 am, Ondrej Certik wrote:
 Hi,

 I need to write 2D Ising model simulation into my school, so I wrote
 it in Python, then rewrote it in Fortran + f2py, and also Cython:

 http://hg.sharesource.org/isingmodel/

 And Cython solution is 2x faster than f2py. I understand, that I am
 comparing many things - wrappers, my fortran coding skills
 vs Cython C code generation and gcc vs gfortran. But still any ideas
 how to speed up fortran+f2py solution is very welcomed.

Though the problem is 2D, your implementations are essentially
1D. If you treated the array A as a 2D array (and avoided calling
subroutine p) then you would gain some 7% speed-up in Fortran.

When using -DF2PY_REPORT_ATEXIT for f2py then a summary
of timings will be printed out about how much time was spent
in Fortran code and how much in the interface. In the given case
I get (nsteps=5):

Overall time spent in ...
(a) wrapped (Fortran/C) functions   : 1962 msec
(b) f2py interface,   60 calls  :0 msec
(c) call-back (Python) functions:0 msec
(d) f2py call-back interface,  0 calls  :0 msec
(e) wrapped (Fortran/C) functions (acctual) : 1962 msec

that is, most of the time is spent in the Fortran function and no time
in the wrapper. The conclusion is that the cause of the
difference in timings is not in the f2py or Cython generated
interfaces but in the Fortran and C codes and/or compilers.

Some idioms used in the Fortran code are just slower than in C.
For example, in C code you are doing calculations using
float precision but in Fortran you are forcing double precision.

HTH,
Pearu

PS: Here follows a setup.py file that I used to build the
extension modules instead of the Makefile:

#file: setup.py
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('', parent_package, top_path)
    config.add_extension('mcising', sources=['mcising.f'],
                         define_macros=[('F2PY_REPORT_ATEXIT', 1)])
    #config.add_extension('pyising', sources=['pyising.pyx'])
    return config

from numpy.distutils.core import setup
setup(configuration=configuration)




Re: [Numpy-discussion] ising model: f2py vs cython comparison

2007-12-23 Thread Pearu Peterson
On Sun, December 23, 2007 3:29 am, Ondrej Certik wrote:
 Hi,

 I need to write 2D Ising model simulation into my school, so I wrote
 it in Python, then rewrote it in Fortran + f2py, and also Cython:

 http://hg.sharesource.org/isingmodel/

 And Cython solution is 2x faster than f2py. I understand, that I am
 comparing many things - wrappers, my fortran coding skills
 vs Cython C code generation and gcc vs gfortran. But still any ideas
 how to speed up fortran+f2py solution is very welcomed.

When using g77 compiler instead of gfortran, I get a speed
up 4.8 times.

Btw, a line in an if statement of the fortran code
should read `A(p(i,j,N)) = - A(p(i,j,N))`.

Pearu



Re: [Numpy-discussion] A quick f2py question

2007-12-05 Thread Pearu Peterson
On Wed, December 5, 2007 8:38 pm, Fernando Perez wrote:
...
 And I see this message in the build:

 In: mwrep.pyf:mwrep:unknown_interface:createblocks
 _get_depend_dict: no dependence info for 'len'

This is due to a typo introduced in r4511 and is now fixed in
r4553. This bug should not affect resulting extension module.

Thanks for the issue report,
Pearu



Re: [Numpy-discussion] For review: first milestone of scons support in numpy

2007-10-11 Thread Pearu Peterson
Hi,

Examples look good. It seems that you have lots of work ahead;) to add
numpy.distutils features that are required to build numpy/scipy.

Few comments:
1) Why SConstruct does not have extension? It looks like a python file
and .py extension could be used.

2) It seems that scons does not interfere with numpy.distutils much.
If this is true and numpy/scipy builds will not break when scons is
not installed then I think you could continue the scons support
development in trunk.

3) In future, if we are going to replace using distutils with scons
then all numpy/scipy need SConstruct scripts. I think one can implement
these scripts already now and use, say, setupscons.py, containing only

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('packagename', parent_package, top_path)
    config.add_sconscript('SConstruct')
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)

to build the packages. Or can scons already be used with only the
SConstruct script, so that setupscons.py files are not needed? Implementing
these scripts now would give a good idea of what features are required to
build numpy/scipy packages with scons.
Also, it will prove that scons can replace numpy.distutils in the future.

4) Note, though, that we cannot remove numpy.distutils, for backward compatibility
with software using numpy.distutils. However, it should be possible
to enhance Configuration class to generate the corresponding SConstruct
scripts. It will save some work when converting configuration() 
functions to SConstruct scripts.

Looking forward to not using distutils,
Pearu


Re: [Numpy-discussion] For review: first milestone of scons support in numpy

2007-10-11 Thread Pearu Peterson


David Cournapeau wrote:
 Pearu Peterson wrote:
 2) It seems that scons does not interfere with numpy.distutils much.
 If this is true and numpy/scipy builds will not break when scons is
 not installed then I think you could continue the scons support
 development in trunk.
 It won't break if scons is not installed because scons sources are 
 copied into the branch. Scons developers explicitely support this:
 
 http://www.scons.org/faq.php#SS_3_3
 
 (AFAIK, it does not pose any problem license-wise, since scons is new 
 BSD license; it adds ~350 kb of compressed source code to numpy).

I think this is good.
Does scons require python-dev? If not then this will solve one of the
frequent issues that new users may experience: distutils not being installed.

 Now, concerning migrating all compiled extensions to scons, I would 
 prefer avoiding doing it in the trunk; but I heard horror stories about 
 subversion and merging, so maybe staying outside the trunk is too risky 
 ?

I think numpy is quite stable now that it's safe to develop in a branch
(if trunk is very actively developed then merging branches
can be a nightmare). However, IMHO using a branch makes other
developers to stay aside from branch development and in time it is
more and more difficult to merge.

 Also, when I said that I don't see big problems to replace distutils 
 for compiled extensions, I lied by over-simplification. If scons is used 
 for compiled extension, with the current design, distutils will call 
 scons for each package. Calling scons is expensive (importing many 
 modules, etc...: this easily takes around one second for each non 
 trivial Sconscript), and also, because each of them is independent, we 
 may check several times for the same thing (fortran compiler, etc...), 
 which would really add up to the build time.
 
 I see two approaches here:
 - do not care about it because numpy is unlikely to become really bigger
 - considering that numpy is really one big package (contrary to 
 scipy which, at least in my understanding, is gearing toward less 
 inter-package dependencies ?), we should only have one big sconscript 
 for configuration (checking blas/lapack, compilers, etc...), and use 
 scons recursively. In this case, it should not be much slower than the 
 current system.

Note that building a subpackage in subpackage directory must be 
supported. So a big sconscript may not be an option.

The third approach would be to cache those checks that are called
frequently to, say, $HOME/.numpy/scons.

numpy.distutils setup.py script gathers the information from
subpackage setup.py scripts recursively and then passes all the
information to one setup function call.
I think setupscons.py should do the same. If scons does not support
recursive reading of scons scripts then the corresponding feature
should be implemented, I guess it would not be difficult.

 4) though, we cannot remove numpy.distutils for backward compatibility
 with software using numpy.distutils. However, it should be possible
 to enhance Configuration class to generate the corresponding SConstruct
 scripts. It will save some work when converting configuration() 
 functions to SConstruct scripts.
 Could you elaborate on this point ? I am not sure what you mean by 
 generating Configuration class ?

I meant that Configuration class could have a method, say 
toscons(filename),  that will generate SConstruct script
from the information that Configuration instance holds. I thought that 
this would just ease creating SConstruct scripts from
existing setup.py files.

Pearu


Re: [Numpy-discussion] For review: first milestone of scons support in numpy

2007-10-11 Thread Pearu Peterson
David Cournapeau wrote:
 Pearu Peterson wrote:
 I think this is good.
 Does scons require python-dev? If not then this will solve one of the
 frequent issues that new users may experience: not installed distutils.
 Isn't distutils included in python library ?

Not always. For example, in debian one has to install python-dev 
separately from python.

 Anyway, scons does not 
 require anything else than a python interpreter. Actually, an explicit 
 requirement of scons is to support any python starting at version 1.5.2 
 (this is another important point which I consider important for a 
 replacement of numpy.distutils).

Though, it is irrelevant as numpy/scipy packages require python
versions starting at 2.3.

 I think numpy is quite stable now that it's safe to develop in a branch
 (if trunk is very actively developed then merging branches
 can be a nightmare). However, IMHO using a branch makes other
 developers to stay aside from branch development and in time it is
 more and more difficult to merge.
 I don't have strong experience in subversion, so I was afraid of that. 
 Do I understand correctly that you suggest opening a new dev branch, and 
 then do all subsequent dev (including non distutils/scons related ones) 
 there ?

No, my original suggestion was that I don't mind if you would develop
scons support in trunk as it does not affect the current state
of numpy/scipy builds. Don't know if other developers would have
objections in that.

 numpy.distutils setup.py script gathers the information from
 subpackage setup.py scripts recursively and then passes all the
 information to one setup function call.
 I think setupscons.py should do the same. If scons does not support
 recursive reading of scons scripts then the corresponding feature
 should be implemented, I guess it would not be difficult.
 Scons supports recursive calls. To be more clear about the possibilities 
 of scons wrt to this, let's take a simplified example:
 
 root/Sconstruct
 root/numpy/Sconscript
 root/numpy/core/Sconscript
 root/numpy/linalg/Sconscript
 
  If you are in root and call scons (> is a shell prompt):
 
  root > scons
 
 Then it will call recursively all the sconscript (as long as you request 
 it in the sconscript files). The other supported method is
 
  root/numpy/linalg > scons -u
 
 this will look every parent directory to find the root Sconstruct.

My point was that

   root/numpy/linalg > scons

should work (without the -u option). A subpackage may not require all
the stuff that other subpackages require and therefore scons should
not configure everything - it's a waste of time and effort - especially
if something is broken in upper packages but not in the subpackage.

 I meant that Configuration class could have a method, say 
 toscons(filename),  that will generate SConstruct script
 from the information that Configuration instance holds. I thought that 
 this would just ease creating SConstruct scripts from
 existing setup.py files.
 I don't think it would worth the effort for numpy (the main work really 
 is to implement and test all the checkers: blas/lapack, fortran). Now, 
 as a general migration tool, this may be useful. But since we would 
 still use distutils, it would only be useful if it is easy to develop 
 such as tool.

Yes, that's a low priority feature.

 P.S: would it be easy for you to make a list of requirements for fortran 
 ? By requirement, I mean things like name mangling and so on ? Something 
 like the autoconf macros: 
 http://www.gnu.org/software/autoconf/manual/autoconf-2.57/html_node/autoconf_66.html

To use f2py successfully, a fortran compiler must support flags that make
fortran symbol names lowercase and with exactly one underscore at the 
end of a name. This is required when using numpy.distutils.

f2py generated modules make use of the following CPP macros:
-DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN -DUNDERSCORE_G77
and therefore the above requirement would not be needed if
scons could figure out how some particular compiler mangles
the names of fortran symbols. This would be especially useful
since some fortran compiler vendors change the compiler flags
between compiler versions and one has to update numpy.distutils
files accordingly.
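As a hedged illustration (plain Python, not f2py's actual header logic), the four CPP macros listed above roughly correspond to these transformations of a Fortran symbol name:

```python
def mangle(name, prepend=False, no_append=False, uppercase=False,
           underscore_g77=False):
    # Mirrors the intent of -DPREPEND_FORTRAN, -DNO_APPEND_FORTRAN,
    # -DUPPERCASE_FORTRAN and -DUNDERSCORE_G77 (illustrative only).
    sym = name.upper() if uppercase else name.lower()
    if prepend:
        sym = '_' + sym
    if not no_append:
        sym += '_'
    if underscore_g77 and '_' in name:
        sym += '_'  # g77 appends a second underscore to such names
    return sym

print(mangle('DGEMM'))                        # -> dgemm_ (f2py's expectation)
print(mangle('my_sub', underscore_g77=True))  # -> my_sub__
```

The default (lowercase plus exactly one trailing underscore) is the convention the preceding paragraph says numpy.distutils relies on.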

Note that using hardcoded name mangling flags may still be required
for certain Fortran 90 compilers (which ones exactly, I don't remember
now) that by default produce symbol names with special characters like $
or . for F90 modules, making these names inaccessible to C programs.

Pearu


Re: [Numpy-discussion] adopting Python Style Guide for classes

2007-10-02 Thread Pearu Peterson


Jarrod Millman wrote:
 Hello,
 
..
 Please let me know if you have any major objections to adopting the
 Python class naming convention.

I don't object.

 Once we have agreed to using CapWords for classes, we will need to
 decide what to do about our existing class names.  Obviously, it is
 important that we are careful to not break a lot of code just to bring
 our class names up to standards.  So at the very least, we probably
 won't be able to just change every class name until NumPy 1.1.0 is
 released.
 
 Here is what I propose for NumPy:
 1.  Change the names of the TestCase class names to use CapWords.  I
 just checked in a change to allow TestCase classes to be prefixed with
 either 'test' or 'Test':
 http://projects.scipy.org/scipy/numpy/changeset/4144
 If no one objects, I plan to go through all the NumPy unit tests and
 change their names to CapWords.  Ideally, I would like to get at least
 this change in before NumPy 1.0.4.

It is safe to change all classes in tests to CamelCase.

 2.  Any one adding a new class to NumPy would use CapWords.
 3.  When we release NumPy 1.1, we will convert all (or almost all)
 class names to CapWords.  There is no reason to worry about the exact
 details of this conversion at this time.  I just would like to get a
 sense whether, in general, this seems like a good direction to move
 in.  If so, then after we get steps 1 and 2 completed we can start
 discussing how to handle step 3.

After fixing the class names in tests, how many classes use
camelcase style in numpy/distutils? How many of them are implementation
specific and how many of them are exposed to users? I think having these
statistics would be essential to making any decisions. E.g. would we
need to introduce warnings for the next few releases of numpy/scipy
when a camelcase class is used by user code, or not?

Pearu


  1   2   >