[Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Åsmund Hjulstad
Hi all,

Are there any success stories in building f2py extensions on AMD64 with
latest versions? Building the same extension on 32 bit works like a charm.

I am having trouble finding documentation or examples, is it supposed to be
working?

Compiling (with distutils) works like a charm, but that does not help when
I'm stuck figuring out link dependencies. It seems to me that the gfortran
library depends on a mingw that is in conflict with some CRT library.

I have to admit, I'm probably in way too deep waters, and should really be
satisfied with 32 bit, but still, it would be fun to get it working.

GFortran is from  MinGW-w64 project on sourceforge, version 4.5.1
prerelease.

Any pointers or other experiences?

Regards,
Åsmund Hjulstad
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: NumPy 1.5.0 beta 1

2010-08-16 Thread Sandro Tosi
Hi all,
sorry for the delay

On Sun, Aug 1, 2010 at 18:38, Ralf Gommers ralf.gomm...@googlemail.com wrote:
 I am pleased to announce the availability of the first beta of NumPy 1.5.0.
 This will be the first NumPy release to include support for Python 3, as
 well as for Python 2.7. Please try this beta and report any problems on the
 NumPy mailing list.

 Binaries, sources and release notes can be found at
 https://sourceforge.net/projects/numpy/files/
 Please note that binaries for Python 3.1 are not yet up, they will follow as
 soon as a minor issue with building them is resolved. Building from source
 with Python 3.1 should work without problems.

I gave it a run on the Debian packaging system and these are the results:

- python 3.1 can't compile it:

$ python3 setup.py build
Traceback (most recent call last):
  File "setup.py", line 210, in <module>
    setup_package()
  File "setup.py", line 174, in setup_package
    import py3tool
ImportError: No module named py3tool

but this is already known, py3tool is missing from the tarball.

- python2.6 build, installation and numpy.test() works fine

- I have a problem building documentation:

# build doc only for default python version
(export MPLCONFIGDIR=. ; make -C doc html
PYTHONPATH=../build/lib.linux-x86_64-2.6)
make[1]: Entering directory `/tmp/buildd/python-numpy-1.5.0~b1/doc'
mkdir -p build
python \
./sphinxext/autosummary_generate.py source/reference/*.rst \
-p dump.xml -o source/reference/generated
./sphinxext/autosummary_generate.py:18: DeprecationWarning: The
numpydoc.autosummary extension can also be found as
sphinx.ext.autosummary in Sphinx >= 0.6, and the version in Sphinx >=
0.7 is superior to the one in numpydoc. This numpydoc version of
autosummary is no longer maintained.
  from autosummary import import_by_name
Failed to import 'numpy.__array_priority__':
Failed to import 'numpy.core.defchararray.len':
Failed to import 'numpy.generic.__squeeze__':
touch build/generate-stamp
mkdir -p build/html build/doctrees
LANG=C sphinx-build -b html -d build/doctrees   source build/html
Running Sphinx v0.6.6

Extension error:
Could not import extension numpydoc (exception: No module named domains.c)
1.5b1 1.5.0b1

I don't know exactly the reason for the Failed to import X but they
are there also for 1.4.1 so they should not be a problem. Anyhow,
there is no file named 'domains.c' in the tarball.

Regards,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi


[Numpy-discussion] f2py performance question from a rookie

2010-08-16 Thread Vasileios Gkinis

Hi all,

This is a question on f2py.
I am using a Crank-Nicolson scheme to model a diffusion process,
and in the quest for some extra speed I started playing with f2py.
However, I do not seem to be able to get any significant boost in
the performance of the code.

In the following simple example I am building a tridiagonal
array with plain python for loops and by calling a simple
fortran subroutine that does the same thing.

Here goes the python script:

import numpy as np
import time
import learnf90_05
import sys

run = sys.argv[1]

try:
    dim = int(sys.argv[2])
except (IndexError, ValueError):
    dim = 500
elem = np.ones(dim)

### Python routine

if run == "Python":
    t_python_i = time.time()
    array_python = np.zeros((dim, dim))
    for row in np.arange(1, dim-1):
        array_python[row, row-1] = elem[row]
        array_python[row, row] = elem[row]
        array_python[row, row+1] = elem[row]

    t_python_f = time.time()
    python_time = t_python_f - t_python_i
    print("Python time: %0.5e" % (python_time))

### fortran routine

elif run == "Fortran":
    t_fortran_i = time.time()
    fortran_array = learnf90_05.test(j=dim, element=elem)
    t_fortran_f = time.time()
    fortran_time = t_fortran_f - t_fortran_i
    print("Fortran time: %0.5e" % (fortran_time))

And the fortran subroutine called test is here: 

subroutine test(j, element, a)
    integer, intent(in) :: j
    integer :: row, col
    real(kind=8), intent(in) :: element(j)
    real(kind=8), intent(out) :: a(j,j)

    do row = 2, j-1
        a(row, row-1) = element(row)
        a(row, row)   = element(row)
        a(row, row+1) = element(row)
    enddo

end

The subroutine is compiled with gfortran 4.2 on OS X 10.5.8.

Running the python script with array sizes 300x300 and 3000x3000 gives, for
the plain python for loops:

dhcp103:learnf vasilis$ python fill_array.py Python 3000
Python time: 1.56063e-01
dhcp103:learnf vasilis$ python fill_array.py Python 300
Python time: 4.82297e-03

and for the fortran subroutine:

dhcp103:learnf vasilis$ python fill_array.py Fortran 3000
Fortran time: 1.16298e-01
dhcp103:learnf vasilis$ python fill_array.py Fortran 300
Fortran time: 8.83102e-04
It looks like the gain in performance is rather low compared to tests I have
seen elsewhere.

Am I missing something here..?

  Cheers...Vasilis
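For reference, the same interior-row fill can also be written with NumPy slice assignment alone; this is a sketch (the function name is made up for illustration), not the code I benchmarked:

```python
# Pure-NumPy version of the tridiagonal fill, replacing the
# Python-level loop with vectorized slice assignment.
import numpy as np

def fill_tridiagonal(elem):
    dim = len(elem)
    a = np.zeros((dim, dim))
    rows = np.arange(1, dim - 1)          # interior rows only, as in the loop
    a[rows, rows - 1] = elem[1:dim - 1]   # sub-diagonal
    a[rows, rows]     = elem[1:dim - 1]   # main diagonal
    a[rows, rows + 1] = elem[1:dim - 1]   # super-diagonal
    return a
```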
  
  
  
-- 
Vasileios Gkinis, PhD student
Center for Ice and Climate
Niels Bohr Institute
Juliane Maries Vej 30
2100 Copenhagen, Denmark
email: v.gki...@nbi.ku.dk | skype: vasgkin | tel: +45 353 25913


Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Sturla Molden
Two critical import libraries for mingw-w64 are missing (libpython26.a
and msvcr90.a). We cannot build C extensions for Python with mingw-w64. If
you can build with distutils, you are not using mingw-w64 but another C
compiler.

There are rumors of incompatibility issues between libgfortran and the CRT
or SciPy on Windows 64. David Cournapeau might know the details.

It seems f2py on Windows 64 requires a commercial Fortran compiler (the
ones worth looking at are Absoft, Intel and PGI) and e.g. Microsoft's free
C/C++ compiler (download Windows 7 SDK for .NET 3.5 -- not any later or
earlier version). You need to have the environment variables
DISTUTILS_USE_SDK and MSSdk set to build with the free SDK compiler. f2py
will enforce the former, the latter comes from running setenv from the
SDK. The latter is important as you will need to set up the environments
for the Fortran and the C compiler in the same command window before
running f2py.
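As a sketch, a build session in the SDK command window might look like this (all paths, the Fortran vendor, and the --fcompiler name are illustrative, not taken from the post above):

```
:: Hypothetical Windows 7 SDK (.NET 3.5) build session for x64.
:: SetEnv.cmd provides the MSSdk environment; distutils checks DISTUTILS_USE_SDK.
"C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\SetEnv.cmd" /x64 /Release
set DISTUTILS_USE_SDK=1

:: Set up the Fortran compiler in the *same* window, e.g. for Intel:
"C:\Program Files (x86)\Intel\Compiler\bin\ifortvars.bat" intel64

:: Then build the extension:
f2py -c -m mymodule mymodule.f90 --fcompiler=intelvem
```

The point is that both compiler environments must be live in the one command window where f2py runs.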

I also had to remove support for Compaq Visual Fortran from the f2py
source to  make anything build, as it crashes f2py on Windows 64 (this
does not happen on 32-bit Windows).

Building f2py extensions on Windows 64 is a bit more tricky. And I've had
no luck with gfortran so far.

Sturla



 [snip]




Re: [Numpy-discussion] f2py performance question from a rookie

2010-08-16 Thread Robin
On Mon, Aug 16, 2010 at 10:12 AM, Vasileios Gkinis v.gki...@nbi.ku.dk wrote:

  Hi all,

 This is a question on f2py.
 I am using a Crank Nicholson scheme to model a diffusion process and in the
 quest of some extra speed I started playing with f2py.
 However I do not seem to be able to get any significant boost in the
 performance of the code.


Try adding
 -DF2PY_REPORT_ON_ARRAY_COPY
to the f2py command line.

This will cause f2py to report any array copies. If any of the
types/ordering of the arrays don't match f2py will silently make a copy -
this can really affect performance.
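A rough sketch of what to check on the Python side (variable names here are illustrative; the rule is that the Fortran side wants a matching dtype and, for 2-D arrays, Fortran memory order):

```python
# f2py copies any argument whose dtype or memory layout doesn't match
# the wrapped Fortran signature; matching arrays are passed through.
import numpy as np

c_order = np.ones((3, 3))              # C-contiguous float64 (numpy default)
f_order = np.asfortranarray(c_order)   # Fortran-contiguous copy

assert not c_order.flags["F_CONTIGUOUS"]
assert f_order.flags["F_CONTIGUOUS"]

# real(kind=8) corresponds to np.float64; a float32 argument would be
# copied/upcast on every call, so convert once up front instead:
x32 = np.ones(4, dtype=np.float32)
x64 = x32.astype(np.float64)
```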

Cheers

Robin



 [snip]


[Numpy-discussion] Building numpy on Mac OS X 10.6, i386 (no ppc) 32/64bit: Error in Fortran tests due to ppc64

2010-08-16 Thread Samuel John
Hello!

At first, I'd like to say thanks to the numpy/scipy team and all contributors. 
Great software!

On Snow Leopard, aka Mac OS X 10.6.4 (server) I managed to build numpy 
2.0.0.dev8636 (and scipy 0.9.0.dev6646) for arch i386 in combined 32/64bit 
against MacPorts python27 (No ppc here!).

All tests pass (yeha!), except for the Fortran-related ones. I think there is
an issue with detecting the right arch. My numpy and python are both i386 32/64
bit, but not ppc.

Only these tests fail, all others pass:
test_callback.TestF77Callback.test_all ... ERROR
test_mixed.TestMixed.test_all ... ERROR
test_return_character.TestF77ReturnCharacter.test_all ... ERROR
test_return_character.TestF90ReturnCharacter.test_all ... ERROR
test_return_complex.TestF77ReturnComplex.test_all ... ERROR
test_return_complex.TestF90ReturnComplex.test_all ... ERROR
test_return_integer.TestF77ReturnInteger.test_all ... ERROR
test_return_integer.TestF90ReturnInteger.test_all ... ERROR
test_return_logical.TestF77ReturnLogical.test_all ... ERROR
test_return_logical.TestF90ReturnLogical.test_all ... ERROR
test_return_real.TestCReturnReal.test_all ... ok
test_return_real.TestF77ReturnReal.test_all ... ERROR
test_return_real.TestF90ReturnReal.test_all ... ERROR
[...]
--
Ran 2989 tests in 47.008s
FAILED (KNOWNFAIL=4, SKIP=1, errors=12)


Some more information (perhaps I made a known mistake in these steps? Details
at the end of this mail):
o  Mac OS X 10.6.4 (intel Core 2 duo)
o  Python 2.7 (r27:82500, Aug 15 2010, 12:19:40) 
 [GCC 4.2.1 (Apple Inc. build 5659) + GF 4.2.4] on darwin
o  gcc --version
 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
o  gfortran --version
GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5659) + GF 4.2.4
from gfortran from http://r.research.att.com/tools/
o  I used the BLAS/LAPACK that is provided by Apple's Accelerate framework. 
 
o  environment:
export CFLAGS="-arch i386 -arch x86_64"
export FFLAGS="-m32 -m64"
export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch i386 -arch x86_64 -framework Accelerate"
o  build:
python setup.py build --fcompiler=gnu95
 

I have not found a matching ticket in trac. Should I open one, or did I do
something very stupid during the build process? Thanks!

Samuel


PS: I did not succeed on the first try with python.org's official fat
precompiled .dmg release (ppc/i386 32/64 bit), so I used MacPorts. Later
today I'll try again to compile against python.org's build, because I think
numpy/scipy recommend that version.


For completeness, here are my build steps:

o   Building numpy/scipy from source:
http://scipy.org/Installing_SciPy/Mac_OS_X:
- Make sure XCode is installed with the Development target 10.4 SDK
- Download and install gfortran from http://r.research.att.com/tools/
- svn co http://svn.scipy.org/svn/numpy/trunk numpy
- svn co http://svn.scipy.org/svn/scipy/trunk scipy
- sudo port install fftw-3
- sudo port install suitesparse
- sudo port install swig-python
- mkdir scipy_numpy; cd scipy_numpy
- cd numpy
- cp site.cfg.example site.cfg
- You may want to copy the site.cfg to ~/.numpy-site.cfg
- Edit site.cfg to contain only the following:
 [DEFAULT]
 library_dirs = /opt/local/lib
 include_dirs = /opt/local/include
 [amd]
 amd_libs = amd
 [umfpack]
 umfpack_libs = umfpack
 [fftw]
 libraries = fftw3
- export MACOSX_DEPLOYMENT_TARGET=10.6
- export CFLAGS="-arch i386 -arch x86_64"
- export FFLAGS="-m32 -m64"
- export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch i386 -arch x86_64 -framework Accelerate"
- export PYTHONPATH=/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
- python setup.py build --fcompiler=gnu95
- sudo python setup.py install
- cd ..
- cd scipy
- sed 's|include <\(umfpack[^\.]*\.h\)>|include </opt/local/include/ufsparse/\1>|g' \
  scipy/sparse/linalg/dsolve/umfpack/umfpack.i > scipy/sparse/linalg/dsolve/umfpack/___tmp.i
- mv scipy/sparse/linalg/dsolve/umfpack/umfpack.i scipy/sparse/linalg/dsolve/umfpack/umfpack.old
- mv scipy/sparse/linalg/dsolve/umfpack/___tmp.i scipy/sparse/linalg/dsolve/umfpack/umfpack.i
- python setup.py build --fcompiler=gnu95
- cd
- python
  import numpy; numpy.test()
  import scipy; scipy.test()




A short excerpt of  numpy.test()'s output:


==
ERROR: test_return_real.TestF90ReturnReal.test_all
--
Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose-0.11.4-py2.7.egg/nose/case.py", line 367, in setUp

Re: [Numpy-discussion] Building numpy on Mac OS X 10.6, i386 (no ppc) 32/64bit: Error in Fortran tests due to ppc64

2010-08-16 Thread Samuel John
Perhaps related tickets, but no perfect match (as far as I can judge):

-   http://projects.scipy.org/numpy/ticket/1399 distutils fails to build ppc64 
support on Mac OS X when requested
This revision is older than the one I used, ergo should already be applied.

-   http://projects.scipy.org/numpy/ticket/ Fix endianness-detection on 
ppc64 builds
closed. Already applied.

-   http://projects.scipy.org/numpy/ticket/527 fortran linking flag option...
Perhaps that linking flag could help to tell numpy (distutils) the right 
arch?

-   http://projects.scipy.org/numpy/ticket/1170 Possible Bug in F2PY Fortran 
Compiler Detection
Hmm, I don't know...

Samuel



Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Åsmund Hjulstad
2010/8/16 Sturla Molden stu...@molden.no

 Two criticial import libraries for mingw-w64 are missing (libpython26.a
 and msvcr90.a). We cannot build C extensions for Python with mingw-w64. If
 you can build with disutils, your are not using mingw-w64 but another C
 compiler.


You are correct. The 32-bit setup is on a different computer, and there I'm
using gcc and gfortran built by Equation Solution (v4.5.0). (No MSVC, and it
works well.)

I was hoping to continue with gcc for 64-bit, but understood that MSVC was
required. My attempted 64-bit setup is MSVC 2008 Pro for C, and mingw-w64
for the Fortran part.

Thank you for the information. I will have to look into options for a
different Fortran compiler, for now.


[Numpy-discussion] save data to csv with column names

2010-08-16 Thread Guillaume Chérel
Hello,

I'd like to know if there is an easy way to save a list of 1D arrays to a
csv file, with the first line of the file being the column names.

I found the following, but I can't get to save the column names:

data = rec.array([X1, X2, X3, X4], names=["n1", "n2", "n3", "n4"])
savetxt(filename, data, delimiter=",", fmt=["%i", "%d", "%f", "%f"])

Thank you,
Guillaume


Re: [Numpy-discussion] f2py performance question from a rookie

2010-08-16 Thread Sturla Molden

 It looks like the gain in performance is rather low compared to tests i
 have seen elsewhere.

 Am I missing something here..?

 Cheers...Vasilis

Turn HTML off please.

Use time.clock(), not time.time().

Try some tasks that actually take a while. Tasks that take 10**-4 or
10**-3 seconds cannot be reliably timed on Windows or Linux.
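For example, a rough stdlib-only sketch with the timeit module, which sidesteps the timer-granularity problem by repeating the call many times (the workload function is a stand-in, not the original code):

```python
# timeit runs the statement `number` times per trial and we take the best
# trial, which smooths out timer granularity for sub-millisecond work.
import timeit

def workload():
    # stand-in for the real computation being timed
    return [[0.0] * 300 for _ in range(300)]

n = 100
best_per_call = min(timeit.repeat(workload, number=n, repeat=3)) / n
print("best per-call time: %.3e s" % best_per_call)
```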










Re: [Numpy-discussion] save data to csv with column names

2010-08-16 Thread John Hunter
2010/8/16 Guillaume Chérel guillaume.c.che...@gmail.com:
 Hello,

 I'd like to know if there is an easy way to save a list of 1D arrays to a
 csv file, with the first line of the file being the column names.

 I found the following, but I can't get to save the column names:

 data = rec.array([X1, X2, X3, X4], names=["n1", "n2", "n3", "n4"])
 savetxt(filename, data, delimiter=",", fmt=["%i", "%d", "%f", "%f"])

import matplotlib.mlab as mlab
mlab.rec2csv(data, 'myfile.csv')


Re: [Numpy-discussion] save data to csv with column names

2010-08-16 Thread Skipper Seabold
2010/8/16 Guillaume Chérel guillaume.c.che...@gmail.com:
 Hello,

 I'd like to know if there is an easy way to save a list of 1D arrays to a
 csv file, with the first line of the file being the column names.

 I found the following, but I can't get to save the column names:

 data = rec.array([X1, X2, X3, X4], names=["n1", "n2", "n3", "n4"])
 savetxt(filename, data, delimiter=",", fmt=["%i", "%d", "%f", "%f"])


There is a patch to add this capability to savetxt.  It was proposed
to be included in Numpy 1.5, but I don't see that the patch was
applied yet.  You can use it for your stuff in the meantime.  I've
been using a version that I wrote since I need to do this quite often.

http://projects.scipy.org/numpy/ticket/1079

I don't remember without looking if this patch also accepts recarrays
and scrapes the names automatically.  It should.
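In the meantime, a stdlib-only sketch that writes the header row by hand (the column names and data here are made up for illustration):

```python
# Write four equal-length columns to CSV, header line first,
# without relying on the savetxt patch.
import csv
import io

X1, X2, X3, X4 = [1, 2], [3, 4], [5.0, 6.0], [7.0, 8.0]
names = ["n1", "n2", "n3", "n4"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(names)                   # column names on the first line
writer.writerows(zip(X1, X2, X3, X4))    # then one row per record
csv_text = buf.getvalue()
print(csv_text)
```

Replace the StringIO buffer with an open file to write to disk.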

Skipper


Re: [Numpy-discussion] ANN: NumPy 1.5.0 beta 1

2010-08-16 Thread Ralf Gommers
On Mon, Aug 16, 2010 at 4:50 PM, Sandro Tosi mo...@debian.org wrote:

 [snip]

  Extension error:
  Could not import extension numpydoc (exception: No module named domains.c)

That's because of the version of Sphinx: domains were introduced in version
1.0. With that version you should be able to build the docs.

Cheers,
Ralf






Re: [Numpy-discussion] ANN: NumPy 1.5.0 beta 1

2010-08-16 Thread Skipper Seabold
On Mon, Aug 16, 2010 at 11:36 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:


 [snip]

 That's because of the version of Sphinx, domains were introduced in version
 1.0. With that you should be able to build the docs.


Ah, glad that the directives bug got fixed.  I get the same error as
above with Sphinx 1.0, and a different error with Sphinx 1.0.2.

In [2]: np.__version__
Out[2]: '2.0.0.dev8645'

$ make html
mkdir -p build
touch build/generate-stamp
mkdir -p build/html build/doctrees
LANG=C sphinx-build -b html -d build/doctrees   source build/html
Running Sphinx v1.0.2
2.0.dev8645 2.0.0.dev8645

snip

Exception occurred:
  File "/home/skipper/numpy/doc/sphinxext/docscrape.py", line 208, in parse_item_name
    raise ValueError("%s is not a item name" % text)
ValueError:   is not a item name
The full traceback has been saved in /tmp/sphinx-err-eHqxHV.log, if
you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error
message can be provided next time.
Either send bugs to the mailing list at
http://groups.google.com/group/sphinx-dev/,
or report them in the tracker at
http://bitbucket.org/birkenfeld/sphinx/issues/. Thanks!
make: *** [html] Error 1


Full output: http://pastebin.com/3yGHa1wu
Log output: http://pastebin.com/JLbVHjLY

Also upgraded to docutils 0.7 to be sure, and the error persists.  Any ideas?

Skipper


Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Eric Firing
On 08/15/2010 11:14 PM, Sturla Molden wrote:
 Two criticial import libraries for mingw-w64 are missing (libpython26.a
 and msvcr90.a). We cannot build C extensions for Python with mingw-w64. If
 you can build with disutils, your are not using mingw-w64 but another C
 compiler.

Sturla,

Have you tried the Equation Solution versions of the gcc tools?
http://www.equation.com/servlet/equation.cmd?fa=fortran

Eric





Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread David Cournapeau
On Tue, Aug 17, 2010 at 1:53 AM, Eric Firing efir...@hawaii.edu wrote:
 On 08/15/2010 11:14 PM, Sturla Molden wrote:
 Two criticial import libraries for mingw-w64 are missing (libpython26.a
 and msvcr90.a). We cannot build C extensions for Python with mingw-w64. If
 you can build with distutils, you are not using mingw-w64 but another C
 compiler.

 Sturla,

 Have you tried the Equation Solution versions of the gcc tools?
 http://www.equation.com/servlet/equation.cmd?fa=fortran

It has the same issues last time I checked. The problem is that
libgfortran relies a lot on the mingw runtime, which has
incompatibilities with the MSVC C runtime - much more, it seems, than
g77 ever did. The fact that it worked for scipy was pure luck,
though.

I think the right solution is to build our own libgfortran, but
against the MSVC C runtime instead of the mingw runtime. This is not
as crazy as it sounds because scipy does not rely so much on the
fortran runtime. Just faking the library (by putting functions which
do nothing), I could run half of scipy last time I tried. I also
started working on removing most of the fortran IO code in scipy
(which would cause issues anyway independently of this issue), but
then I started my new job :)

It is unlikely I will work on this again in the near future, but it
should not be so hard for anyone who has incentive to make this work,

cheers,

David
 Eric



 There are rumors of incompatibility issues between libgfortran and the CRT
 or SciPy on Windows 64. David Cournapeau might know the details.

 It seems f2py on Windows 64 requires a commercial Fortran compiler (the
 ones worth looking at are Absoft, Intel and PGI) and e.g. Microsoft's free
 C/C++ compiler (download Windows 7 SDK for .NET 3.5 -- not any later or
 earlier version). You need to have the environment variables
 DISTUTILS_USE_SDK and MSSdk set to build with the free SDK compiler. f2py
 will enforce the former, the latter comes from running setenv from the
 SDK. The latter is important as you will need to set up the environments
 for the Fortran and the C compiler in the same command window before
 running f2py.

 I also had to remove support for Compaq Visual Fortran from the f2py
 source to  make anything build, as it crashes f2py on Windows 64 (this
 does not happen on 32-bit Windows).

 Building f2py extensions on Windows 64 is a bit more tricky. And I've had
 no luck with gfortran so far.

 Sturla



 Hi all,

 Are there any success stories in building f2py extensions on AMD64 with
 latest versions? Building the same extension on 32 bit works like a charm.

 I am having trouble finding documentation or examples, is it supposed to
 be
 working?

 Compiling (with distutils) works like a charm, but that does not help when
 I'm stuck figuring out link dependencies. It seems to me that the gfortran
 library depends on a mingw that is in conflict with some CRT library.

 I have to admit, I'm probably in way too deep waters, and should really be
 satisfied with 32 bit, but still, it would be fun to get it working.

 GFortran is from  MinGW-w64 project on sourceforge, version 4.5.1
 prerelease.

 Any pointers or other experiences?

 Regards,
 Åsmund Hjulstad


Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Sturla Molden

 Sturla,

 Have you tried the Equation Solution versions of the gcc tools?
 http://www.equation.com/servlet/equation.cmd?fa=fortran

 Eric


It does not matter which mingw-w64 build we use. The import libraries will
still be missing, and the issues with libgfortran are still there. Also,
AVG and McAfee warn about a trojan in the installers from equation.com. I
am not taking any chances with a compromised C compiler. Imagine a
compiler that infects anything you build with malware. That is about as
bad as it gets. I'd rather play safe and use a mingw build from a trusted
source; one that antivirus software doesn't warn me about. Building
mingw-w64 from gcc sources is not rocket science either.

Personally I use Absoft Pro Fortran and the MSVC compiler.

Sturla









[Numpy-discussion] basic question about numpy arrays

2010-08-16 Thread gerardob

I have a numpy array A , such that when i print A it appears:

[[ 10.]
 [ 20.]]

I would like to have a one dimensional array B (obtained from A) such that
B[0] = 10 and B[1]=20. It could be seen as the transpose of A.

How can i obtain B = [10,20]  from A? I tried transpose(1,0) but it doesn't
seem to be useful.

Thanks.



-- 
View this message in context: 
http://old.nabble.com/basic-question-about-numpy-arrays-tp29449184p29449184.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] basic question about numpy arrays

2010-08-16 Thread Skipper Seabold
On Mon, Aug 16, 2010 at 2:25 PM, gerardob gberbeg...@gmail.com wrote:

 I have a numpy array A , such that when i print A it appears:

 [[ 10.]
  [ 20.]]

 I would like to have a one dimensional array B (obtained from A) such that
 B[0] = 10 and B[1]=20. It could be seen as the transpose of A.

 How can i obtain B = [10,20]  from A? I tried transpose(1,0) but it doesn't
 seem to be useful.


B = A.squeeze()

or

B = A.flatten()

would be two ways.

Skipper


Re: [Numpy-discussion] basic question about numpy arrays

2010-08-16 Thread John Salvatier
You can also do:

y = x[:,0]

On Mon, Aug 16, 2010 at 11:28 AM, Skipper Seabold jsseab...@gmail.com wrote:

 On Mon, Aug 16, 2010 at 2:25 PM, gerardob gberbeg...@gmail.com wrote:
 
  I have a numpy array A , such that when i print A it appears:
 
  [[ 10.]
   [ 20.]]
 
  I would like to have a one dimensional array B (obtained from A) such
 that
  B[0] = 10 and B[1]=20. It could be seen as the transpose of A.
 
  How can i obtain B = [10,20]  from A? I tried transpose(1,0) but it
 doesn't
  seem to be useful.
 

 B = A.squeeze()

 or

 B = A.flatten()

 would be two ways.

 Skipper


Re: [Numpy-discussion] basic question about numpy arrays

2010-08-16 Thread Rob Speer
To explain:

A has shape (2,1), meaning it's a 2-D array with 2 rows and 1 column.
The transpose of A has shape (1,2): it's a 2-D array with 1 row and 2
columns. That's not the same as what you want, which is an array with
shape (2,): a 1-D array with 2 entries.

When you take A[:,0], you're pulling out the 1-D array that
constitutes column 0 of your array, which is exactly what you want.
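To make the shapes concrete, the answers in this thread can be compared side by side (a small sketch, using the array from the original post):

```python
import numpy as np

A = np.array([[10.], [20.]])   # shape (2, 1), as in the original post

b1 = A[:, 0]        # view of column 0 -> shape (2,)
b2 = A.squeeze()    # drops all length-1 axes -> shape (2,)
b3 = A.flatten()    # a flattened copy, in C order -> shape (2,)

# The transpose, by contrast, stays 2-D:
At = A.T            # shape (1, 2), not (2,)
```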

-- Rob


Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Eric Firing
On 08/16/2010 07:36 AM, Sturla Molden wrote:

 Sturla,

 Have you tried the Equation Solution versions of the gcc tools?
 http://www.equation.com/servlet/equation.cmd?fa=fortran

 Eric


 It does not matter which mingw-w64 build we use. The import libraries will
 still be missing, and the issues with libgfortran are still there. Also,
 AVG and McAfee warn about a trojan in the installers from equation.com. I

Here is the rebuttal from Equation Solution:

http://www.equation.com/servlet/equation.cmd?fa=falsevirus

By the way, thank you for working on python-dev toward a solution to the 
problem of building extensions for Win using free compilers.
(http://www.gossamer-threads.com/lists/python/dev/854392).

Eric


 am not taking any chances with a compromised C compiler. Imagine a
 compiler that infects anything you build with malware. That is about as
 bad as it gets. I'd rather play safe and use a mingw build from a trusted
 source; one that antivirus software doesn't warn me about. Building
 mingw-w64 from gcc sources is not rocket science either.

 Personally I use Absoft Pro Fortran and the MSVC compiler.

 Sturla


Re: [Numpy-discussion] f2py performance question from a rookie

2010-08-16 Thread Vasileios Gkinis
Sturla Molden sturla at molden.no writes:

 
 
 It looks like the gain in performance is rather low compared to tests I
 have seen elsewhere.

 Am I missing something here?

 Cheers... Vasilis
 
 Turn HTML off please.
 
 Use time.clock(), not time.time().
 
 Try some tasks that actually takes a while. Tasks that take 10**-4 or
 10**-3  seconds cannot be reliably timed on Windows or Linux.
 


Hi again,

After using time.clock, running f2py with the REPORT_ON_ARRAY_COPY enabled and
passing arrays as np.asfortranarray(array) to the fortran routines I still get a
slow performance on f2py. No copied arrays are reported.

Running on f2py with a 6499x6499 array takes about 1.2sec while the python for
loop does the job slightly faster at 0.9 sec.

Comparisons like this:
http://www.scipy.org/PerformancePython
indicate a 100x-1000x boost in performance with f2py when compared to
conventional python for loops.

Still quite puzzled...any help will be very much appreciated

Regards
Vasileios 

The actual fortran subroutine is here:

subroutine fill_B(j, beta, lamda, mu, b)
    integer, intent(in) :: j
    integer :: row, col
    real(kind=8), intent(in) :: beta(j), lamda(j), mu(j)
    real(kind=8), intent(out) :: b(j,j)

    do row = 2, j-1
        do col = 1, j
            if (col == row-1) then
                b(row,col) = beta(row) - lamda(row) + mu(row)
            elseif (col == row) then
                b(row,col) = 1 - 2*beta(row)
            elseif (col == row+1) then
                b(row,col) = beta(row) + lamda(row) - mu(row)
            else
                b(row,col) = 0
            endif
        enddo
    enddo
    b(1,1) = 1
    b(j,j) = 1

end

and the python code that calls it together with the alternative implementation
with conventional python for loops is here:


def crank_heat(self, profile_i, go, g1, beta, lamda, mu, usef2py=False):

    ## Crank Nicolson AX = BD
    ## Make matrix A (coefs of n+1 step)
    N = np.size(profile_i)
    print N
    t1 = time.clock()

    if usef2py == True:
        matrix_A = fill_A.fill_a(j=N, beta=beta, lamda=lamda, mu=mu)
        matrix_B = fill_B.fill_b(j=N, beta=beta, lamda=lamda, mu=mu)
    else:
        matrix_A = np.zeros((N, N))
        matrix_A[0, 0] = 1
        matrix_A[-1, -1] = 1
        for row in np.arange(1, N-1):
            matrix_A[row, row-1] = -(beta[row] - lamda[row] + mu[row])
            matrix_A[row, row] = 1 + 2*beta[row]
            matrix_A[row, row+1] = -(beta[row] + lamda[row] - mu[row])

        # make matrix B
        matrix_B = np.zeros((N, N))
        matrix_B[0, 0] = 1
        matrix_B[-1, -1] = 1
        for row in np.arange(1, N-1):
            matrix_B[row, row-1] = beta[row] - lamda[row] + mu[row]
            matrix_B[row, row] = 1 - 2*beta[row]
            matrix_B[row, row+1] = beta[row] + lamda[row] - mu[row]

    print("CN function time: %0.5e" % (time.clock() - t1))

    matrix_C = np.dot(matrix_B, profile_i)
    t1 = time.clock()
    matrix_X = np.linalg.solve(matrix_A, matrix_C)
    print("CN solve time: %0.5e" % (time.clock() - t1))
    matrix_X[0] = go
    matrix_X[-1] = g1

    return matrix_X
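As an aside, the loop bodies above vectorize directly in NumPy, which often removes the need for Fortran in the matrix-assembly step entirely. A sketch, assuming beta, lamda and mu are 1-D arrays of length N (fill_B_vectorized is a hypothetical helper, not part of the original code):

```python
import numpy as np

def fill_B_vectorized(beta, lamda, mu):
    """Build the Crank-Nicolson B matrix without an explicit Python loop."""
    N = beta.size
    B = np.zeros((N, N))
    rows = np.arange(1, N - 1)
    B[rows, rows - 1] = beta[rows] - lamda[rows] + mu[rows]  # sub-diagonal
    B[rows, rows] = 1 - 2 * beta[rows]                       # main diagonal
    B[rows, rows + 1] = beta[rows] + lamda[rows] - mu[rows]  # super-diagonal
    B[0, 0] = 1
    B[-1, -1] = 1
    return B
```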







Re: [Numpy-discussion] f2py performance question from a rookie

2010-08-16 Thread Sturla Molden

 After using time.clock, running f2py with the REPORT_ON_ARRAY_COPY enabled
 and
 passing arrays as np.asfortranarray(array) to the fortran routines I still
 get a
 slow performance on f2py. No copied arrays are reported.

That is not any better, as np.asfortranarray will make a copy instead. Pass
the transpose to Fortran, and transpose the return value. Make sure you
make a copy neither before nor after calling Fortran. A transpose does not
make a copy in NumPy.

You want C ordering in NumPy and Fortran ordering in Fortran. Which means
you probably want to call Fortran like this:

foobar(x.T, y.T)

If you let f2py return y as a result, make sure it is C ordered after
transpose:

y = foobar(x.T).T
assert(y.flags['C_CONTIGUOUS'])

Even if you don't get a copy and transpose immediately (i.e. f2py does
not report a copy), it might happen silently later on if you get Fortran
ordered arrays returned into Python.
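The view-versus-copy behaviour described here can be checked directly from Python (a small sketch):

```python
import numpy as np

x = np.zeros((1000, 500))        # C-ordered, NumPy's default

xt = x.T                         # transpose: a view, no data copied
assert xt.base is x
assert xt.flags['F_CONTIGUOUS']  # looks Fortran-ordered to f2py

xf = np.asfortranarray(x)        # this one *does* copy the data
assert not np.shares_memory(x, xf)
```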


 Running on f2py with a 6499x6499 array takes about 1.2sec while the python
 for
 loop does the job slightly faster at 0.9 sec.

I suppose you are including the solve time here? In which case the
dominating factors are np.dot and np.linalg.solve. Fortran will not help a
lot then.

Well if you have access to performance libraries (ACML or MKL) you might
save a little by using Fortran DGEMM instead of np.dot and a carefully
selected Fortran LAPACK solver (that is, np.linalg.solve uses Gaussian
elimination, i.e. LU, but there might be structure in your matrices to
exploit; I haven't looked too carefully.)
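One such structure: the Crank-Nicolson A matrix in this thread is tridiagonal, so a banded solve runs in O(N) instead of the O(N^3) of a general LU factorization. A pure-NumPy sketch of the Thomas algorithm (thomas_solve is a hypothetical helper; it assumes the system is well-conditioned, e.g. diagonally dominant, as these matrices typically are):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(N).

    a: sub-diagonal (length N, a[0] unused)
    b: main diagonal (length N)
    c: super-diagonal (length N, c[-1] unused)
    d: right-hand side (length N)
    """
    N = b.size
    cp = np.empty(N)
    dp = np.empty(N)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, N):               # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < N - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(N)
    x[-1] = dp[-1]
    for i in range(N - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```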


 subroutine fill_B(j, beta, lamda, mu, b)
 integer, intent(in) ::  j
 integer :: row, col
 real(kind = 8),  intent(in) :: beta(j), lamda(j), mu(j)
 real(kind = 8), intent(out) :: b(j,j)

 do row=2,j-1
 do col=1,j
 if (col == row-1) then
 b(row,col) = beta(row) - lamda(row) + mu(row)
 elseif (col == row) then
 b(row,col) = 1 - 2*beta(row)
 elseif (col == row+1) then
 b(row, col) = beta(row) + lamda(row) - mu(row)
 else
 b(row, col) = 0
 endif
 enddo
 enddo
 b(1,1) = 1
 b(j,j) = 1

 end

That iterates over strided memory in Fortran. Memory access is slow,
computation is fast.

So there are two problems with your Fortran code:

1. You want b to be transposed in Fortran compared to Python.
2. Iterate in column major order in Fortran, not row major order.


Sturla






Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread David Cournapeau
On Tue, Aug 17, 2010 at 2:24 AM, Sturla Molden stu...@molden.no wrote:

 Thank for the information, David.

 I think the right solution is to build our own libgfortran, but
 against the MSVC C runtime instead of the mingw runtime. THis is not
 as crazy as it sounds because scipy does not rely so much on the
 fortran runtime. Just faking the library (by putting functions which
 do nothing),

 That might work for building SciPy, but not for arbitrary Fortran 95 code.

Actually, it does, you just have to build the whole libgfortran to
support arbitrary code.

 So the options are to use another compiler or run gfortran code in a
 spawned subprocess.

I don't really see how to make that works (that is running gfortran
code in a spawned process), but if you know how to, please go ahead.

David


Re: [Numpy-discussion] Building f2py extensions on AMD64 with Python 2.7 Numpy 1.5, MSVC 2008 Pro and gfortran

2010-08-16 Thread Sturla Molden

 I don't really see how to make that works (that is running gfortran
 code in a spawned process), but if you know how to, please go ahead.


I didn't debug this code example, so there might be typos; it is just to
illustrate the general idea:


Say we have a Fortran function with this interface:

subroutine foobar(m, n, x, y)
   integer :: m, n
   real*8 :: x(m,n), y(m,n)
end subroutine

We could then proceed like this:

We can create a chunk of uniquely named shared memory like this:

    tag = uuid.uuid4().hex
    size = 1 << 20 # 1 MiB from the paging file
    shm = mmap.mmap(0, size, tagname="Local\\%s" % (tag,))
    shm = np.frombuffer(shm, dtype=np.uint8)

Then we use views of shm to set up x and y, e.g.

    m = 100
    n = 100
    nb = 8 * n * m
    x = shm[:nb].view(dtype=float).reshape((n,m))
    y = shm[nb:2*nb].view(dtype=float).reshape((n,m))
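The aliasing this relies on can be checked portably with an anonymous mapping (a sketch: mmap.mmap(-1, size) stands in for the Windows tagname form, so there is no cross-process sharing here, only a demonstration that the NumPy views write straight into the mapped buffer):

```python
import mmap
import numpy as np

m, n = 100, 100
nbytes = 8 * m * n
buf = mmap.mmap(-1, 2 * nbytes)          # anonymous mapping
shm = np.frombuffer(buf, dtype=np.uint8)

x = shm[:nbytes].view(np.float64).reshape((n, m))
y = shm[nbytes:2 * nbytes].view(np.float64).reshape((n, m))

x[:] = 1.0   # writes go straight through to the mapping
```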

Then we can call a fortran program using subprocess module:

   returnval = subprocess.call(['foobar.exe', '%d' % m, '%d' % n, tag])

Then we need some boilerplate C to grab the shared memory and call Fortran:

#include <windows.h>
#include <stdlib.h>
#include <stdio.h>

/* fortran: pass pointers, append underscore */
extern void foobar_(int *m, int *n, double *x, double *y);

static int openshm(const char *tag, size_t size,
                   HANDLE *phandle, void **pbuf)
{
    *phandle = OpenFileMapping(FILE_MAP_ALL_ACCESS, TRUE, tag);
    if (!*phandle) return -1;
    *pbuf = MapViewOfFile(*phandle, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (*pbuf == NULL) {
        CloseHandle(*phandle);
        return -1;
    }
    return 0;
}

int main(int argc, char *argv[])
{
    char tag[256];
    HANDLE shmem;
    int m, n;
    double *x, *y;

    m = atoi(argv[1]);
    n = atoi(argv[2]);
    ZeroMemory(tag, sizeof(tag));
    sprintf(tag, "Local\\%s", argv[3]);

    /* get arrays from shared memory */
    if (openshm(tag, 2*m*n*sizeof(double), &shmem, (void**)&x) < 0)
        return -1;
    y = x + m*n;

    /* call fortran */
    foobar_(&m, &n, x, y);

    /* done */
    CloseHandle(shmem);
    return 0;
}

Blah... ugly.

On Linux we could use os.fork instead of subprocess, and only y would need
to be placed in (anonymous) shared memory (due to copy-on-write). But on
the other hand, we don't have run-time issues with gfortran on Linux
either. Just to illustrate why Windows sucks sometimes.

:-)


Sturla










[Numpy-discussion] Seeking advice on crowded namespace.

2010-08-16 Thread Charles R Harris
Hi All,

I just added support for Legendre polynomials to numpy and I think the
numpy.polynomial name space is getting a bit crowded. Since most of the
current functions in that namespace are just used to implement the
Polynomial, Chebyshev, and Legendre classes I'm thinking of only importing
those classes by default and leaving the other functions to explicit
imports. Of course I will have to fix the examples and maybe some other
users will be inconvenienced by the change. But with 2.0.0 in the works this
might be a good time to do this. Thoughts?

Chuck


[Numpy-discussion] array manipulation

2010-08-16 Thread Alex Ter-Sarkissov
hi, this is probably a very silly question, but I can't get my head
around it, unfortunately :(

I have an array (say, mat=rand(3,5)) from which I 'pull out' a row
(say, s1=mat[1,]). The problem is, the shape of this row s1 is not
[1,5], as I would expect, but rather [5,], which means that I can't,
for example, concatenate mat and s1 row-wise.

thanks for the help


Re: [Numpy-discussion] array manipulation

2010-08-16 Thread josef . pktd
On Tue, Aug 17, 2010 at 12:13 AM, Alex Ter-Sarkissov ater1...@gmail.com wrote:
 hi, this is probably a very silly question, but I can't get my head
 around it, unfortunately :(

 I have an array (say, mat=rand(3,5)) from which I 'pull out' a row
 (say, s1=mat[1,]). The problem is, the shape of this row s1 is not
 [1,5], as I would expect, but rather [5,], which means that I can't,
 for example, concatenate mat and s1 row-wise.

try
np.vstack((mat[0], mat))

np.arrays are not matrices,
a slice with a single number reduces the dimension; mat[0] is
identical to mat[0,:] for a 2d array
mat[0][np.newaxis,:] adds a new (row) dimension

>>> mat[0][np.newaxis,:].shape
(1, 5)

it is described somewhere in the help on slicing and indexing
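Spelled out, the variants above look like this (a small sketch):

```python
import numpy as np

mat = np.random.rand(3, 5)

row = mat[1]                   # shape (5,): a single index drops the dimension
row2d = mat[1][np.newaxis, :]  # shape (1, 5): dimension restored
also2d = mat[1:2, :]           # slicing keeps the dimension, also (1, 5)

stacked = np.vstack((mat, row))  # vstack promotes the 1-D row itself -> (4, 5)
```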

Josef


 thanks for the help




[Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-16 Thread ashford
I'm having problems getting the GotoBLAS library (Nehalem optimized BLAS -
http://www.tacc.utexas.edu/tacc-projects/gotoblas2/) working properly under
the Python NumPy package (http://numpy.scipy.org/) on a quad-core Nehalem
under FC10.

The command used to build the library is:
make BINARY=64 USE_THREAD=1 MAX_CPU_NUMBER=4

I'm limiting this to four cores, as I believe HyperThreading will slow it down
(I've seen this happen with other scientific code).  I'll benchmark later to
see whether or not HyperThreading helps.

I built the library (it uses -fPIC), then installed it under /usr/local/lib64,
and created the appropriate links:
# cp libgoto2_nehalemp-r1.13.a /usr/local/lib64
# cp libgoto2_nehalemp-r1.13.so /usr/local/lib64
# cd /usr/local/lib64
# ln -s libgoto2_nehalemp-r1.13.a libgoto2.a
# ln -s libgoto2_nehalemp-r1.13.so libgoto2.so
# ln -s libgoto2_nehalemp-r1.13.a libblas.a
# ln -s libgoto2_nehalemp-r1.13.so libblas.so

Without the libblas links, the NumPy configuration used the system default
BLAS library (single-threaded NetLib under FC10); it's set up for the NetLib
and Atlas BLAS libraries.

I used the latest release of NumPy, with no site.cfg file, and no NumPy
directory (rm -rf /usr/local/python2.6/lib/python2.6/site-packages/numpy). 
The configuration step (python setup.py config) appears to run OK, as do the
build (python setup.py build) and install (python setup.py install) steps.
 The problem is that the benchmark takes 8.5 seconds, which is what it took
before I changed the library.

python -c "import numpy as N; a=N.random.randn(1000, 1000); N.dot(a, a)"

I expect I'm missing something really simple here, but I've spent 10 hours on
it, and I have no idea as to what it could be.  I've tried various
permutations on the site.cfg file, all to no avail.  I've also tried different
names on the library, and different locations.  I've even tried a set of
symbolic links in /usr/local/lib64 for liblapack that point to libgoto.
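For reference, NumPy can report which BLAS it was built against, which is a quick way to tell whether the goto library was actually picked up at build time (a sketch; the exact sections printed vary by NumPy version):

```python
import time
import numpy as np

np.show_config()   # prints the blas/lapack info recorded at build time

# Rough re-run of the benchmark above:
a = np.random.randn(1000, 1000)
c = np.dot(a, a)
t0 = time.time()
c = np.dot(a, a)
print("dot time: %.3f s" % (time.time() - t0))
```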

Could someone offer some suggestions?

Thank you for your time.

Peter Ashford



Re: [Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-16 Thread David
On 08/17/2010 01:58 PM, ashf...@whisperpc.com wrote:
 I'm having problems getting the GotoBLAS library (Nehalem optimized BLAS -
 http://www.tacc.utexas.edu/tacc-projects/gotoblas2/) working properly under
 the Python NumPy package (http://numpy.scipy.org/) on a quad-core Nehalem
 under FC10.

 The command used to build the library is:
  make BINARY=64 USE_THREAD=1 MAX_CPU_NUMBER=4

 I'm limiting this to four cores, as I believe HyperThreading will slow it down
 (I've seen this happen with other scientific code).  I'll benchmark later to
 see whether or not HyperThreading helps.

 I built the library (it uses -fPIC), then installed it under /usr/local/lib64,
 and created the appropriate links:
  # cp libgoto2_nehalemp-r1.13.a /usr/local/lib64
  # cp libgoto2_nehalemp-r1.13.so /usr/local/lib64
  # cd /usr/local/lib64
  # ln -s libgoto2_nehalemp-r1.13.a libgoto2.a
  # ln -s libgoto2_nehalemp-r1.13.so libgoto2.so
  # ln -s libgoto2_nehalemp-r1.13.a libblas.a
  # ln -s libgoto2_nehalemp-r1.13.so libblas.so

The .so files are only used when linking, and are generally not the ones 
used at runtime (the full version, e.g. .so.1.2.3, is). Which version 
exactly depends on your installation, but I actually advise you against 
creating those softlinks. You should instead specifically link the GOTO 
library to numpy, by customizing the site.cfg,
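An untested site.cfg along those lines might look like the following (the section names follow numpy's site.cfg.example of this era; the library name goto2 and the path are assumptions matching the links created above):

```ini
[blas_opt]
libraries = goto2
library_dirs = /usr/local/lib64

[lapack_opt]
libraries = goto2
library_dirs = /usr/local/lib64
```

Run python setup.py config afterwards and check that the goto library shows up in the reported blas_opt_info.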

cheers,

David


Re: [Numpy-discussion] Problems building NumPy with GotoBLAS

2010-08-16 Thread ashford
David,

 You should instead specificaly link the GOTO library to
 numpy, by customizing the site.cfg,

That was one of the many things I tried before I came to the list.

Peter Ashford
