[Numpy-discussion] Sun Studio Compilers on Linux / atan2 regression

2010-02-17 Thread Christian Marquardt
Hi,

when compiling numpy-1.4.0 with the Sun Studio Compilers (v12 Update 1) on Linux 
(an OpenSUSE 11.1 in my case), about 30 tests in numpy.test() fail; all 
failures are related to the arctan2 function. 

I've found that in r7732 a patch was applied to 
trunk/numpy/core/src/private/npy_config.h in response to #1201, #1202, and 
#1203, #undef'ing the HAVE_ATAN2 macro in order to fix a broken atan2() 
implementation on Solaris. It seems that this does no good with the most recent 
Sun compiler on Linux...

The attached patch ensures that the original patch is only applied on Solaris 
platforms; with this applied, all tests are completed successfully under Linux. 
BTW, I did not observe #1204 or #1205... As I have no access to a Solaris 
machine, I also don't know if the original patch is required with Sun Studio 
12.1 at all.
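Since all the failures come from arctan2, a quick way to spot-check whether a
platform's atan2 handles the IEEE 754 special cases correctly (the class of
problem behind #1201-#1203) is to probe it from Python; this sketch uses
math.atan2, which wraps the same libm call. It is only an illustration, not
part of the patch:

```python
import math

# Probe the IEEE 754 special cases that a broken atan2 typically gets
# wrong: signed zeros and infinite arguments.
cases = [
    ((0.0, -0.0), math.pi),      # atan2(+0, -0) -> +pi
    ((-0.0, -0.0), -math.pi),    # atan2(-0, -0) -> -pi
    ((0.0, 0.0), 0.0),           # atan2(+0, +0) -> +0
    ((1.0, float("inf")), 0.0),  # finite y with x = +inf -> 0
]
for (y, x), expected in cases:
    got = math.atan2(y, x)
    assert got == expected, ((y, x), got, expected)
print("atan2 special cases OK")
```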


Something different - I would've loved to enter this in the numpy-Trac, but 
registration didn't work (I was asked for another username/password at 
scipy.org during the registration process) :-((

Thanks,

  Christian.
diff -r -C5 numpy-1.4.0.orig/numpy/core/src/private/npy_config.h numpy-1.4.0/numpy/core/src/private/npy_config.h
*** numpy-1.4.0.orig/numpy/core/src/private/npy_config.h	2009-12-22 11:59:52.0 +
--- numpy-1.4.0/numpy/core/src/private/npy_config.h	2010-02-17 17:47:12.080101518 +
***
*** 7,18 
  #if defined(_MSC_VER) || defined(__MINGW32_VERSION)
  #undef HAVE_ATAN2
  #undef HAVE_HYPOT
  #endif
  
! /* Disable broken Sun Workshop Pro math functions */
! #ifdef __SUNPRO_C
  #undef HAVE_ATAN2
  #endif
  
  /* 
   * On Mac OS X, because there is only one configuration stage for all the archs
--- 7,18 
  #if defined(_MSC_VER) || defined(__MINGW32_VERSION)
  #undef HAVE_ATAN2
  #undef HAVE_HYPOT
  #endif
  
! /* Disable broken Sun Workshop Pro math functions on Solaris */
! #if defined(__SUNPRO_C) && defined(__sun)
  #undef HAVE_ATAN2
  #endif
  
  /* 
   * On Mac OS X, because there is only one configuration stage for all the archs
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Command line args for the Intel compilers (32 bit)

2009-03-29 Thread Christian Marquardt
Hi,

I've been carrying these modifications of the built-in compiler
command line arguments for the 32-bit Intel compilers for quite 
a while now; maybe they are interesting for other people as 
well... I've been using this with ifort (IFORT) 10.1 20080801 
on a Suse Linux 10.3.

Rationale for individual changes:

 - numpy-1.3.0rc1/numpy/distutils/fcompiler/intel.py:

   - For pentiumM's, options are changed from '-tpp7 -xB' to '-xN': 
The compiler documentation says that -tpp and -xB are 
deprecated and will be removed in future versions.
   - 'linker_so' gets an additional "-xN": If code is compiled 
with -x, additional "vector math" libraries (libvml*) need
to be linked in, or loading the shared objects may fail at
runtime; so I added this to the link command. If other '-x'
options are used by the numpy distutils, the linker command 
should be modified accordingly - so this patch probably is 
not generic.


 - numpy-1.3.0rc1/numpy/distutils/intelccompiler.py:

   - A dedicated C++ compiler (icpc) is introduced and used for 
 compiling C++ code, *and* for linking: I have found that C++
 extensions require additional runtime libraries that are not 
 linked in with the normal icc command, causing the loading 
 of C++ extensions to fail at runtime. This used to be a problem
 with scipy in earlier versions, but I think currently, there's
 no C++ any more in scipy (but I could be wrong).
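The intelccompiler.py part of the diff is truncated in this archive, so purely
as an illustration of the idea (class and attribute names here are mine, not
the actual numpy.distutils code), the change amounts to something like:

```python
# Illustrative sketch only (not the real numpy.distutils.intelccompiler):
# compile C with icc, but compile C++ *and* link with icpc, so that the
# extra C++ runtime libraries are pulled in automatically at link time.
class IntelCCompilerSketch:
    cc_exe = "icc"    # C compiler
    cxx_exe = "icpc"  # dedicated C++ compiler, also used as the linker driver

    def executables(self):
        return {
            "compiler":     [self.cc_exe],
            "compiler_so":  [self.cc_exe],
            "compiler_cxx": [self.cxx_exe],
            "linker_exe":   [self.cxx_exe],
            "linker_so":    [self.cxx_exe, "-shared"],
        }

print(IntelCCompilerSketch().executables()["linker_so"])
```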

Hope this is useful,

  Christian.

diff -r -C3 -N numpy-1.3.0rc1.orig/numpy/distutils/fcompiler/intel.py numpy-1.3.0rc1/numpy/distutils/fcompiler/intel.py
*** numpy-1.3.0rc1.orig/numpy/distutils/fcompiler/intel.py	2009-03-24 06:54:52.0 +0100
--- numpy-1.3.0rc1/numpy/distutils/fcompiler/intel.py	2009-03-29 12:05:32.0 +0200
***
*** 37,43 
  'compiler_f77' : [None, "-72", "-w90", "-w95"],
  'compiler_f90' : [None],
  'compiler_fix' : [None, "-FI"],
! 'linker_so': ["", "-shared"],
  'archiver' : ["ar", "-cr"],
  'ranlib'   : ["ranlib"]
  }
--- 37,43 
  'compiler_f77' : [None, "-72", "-w90", "-w95"],
  'compiler_f90' : [None],
  'compiler_fix' : [None, "-FI"],
! 'linker_so': ["", "-shared", "-xN"],
  'archiver' : ["ar", "-cr"],
  'ranlib'   : ["ranlib"]
  }
***
*** 72,78 
  if cpu.is_PentiumPro() or cpu.is_PentiumII() or cpu.is_PentiumIII():
  opt.extend(['-tpp6'])
  elif cpu.is_PentiumM():
! opt.extend(['-tpp7','-xB'])
  elif cpu.is_Pentium():
  opt.append('-tpp5')
  elif cpu.is_PentiumIV() or cpu.is_Xeon():
--- 72,78 
  if cpu.is_PentiumPro() or cpu.is_PentiumII() or cpu.is_PentiumIII():
  opt.extend(['-tpp6'])
  elif cpu.is_PentiumM():
! opt.extend(['-xN'])
  elif cpu.is_Pentium():
  opt.append('-tpp5')
  elif cpu.is_PentiumIV() or cpu.is_Xeon():
***
*** 90,96 
  if cpu.has_sse2():
  opt.append('-xN')
  elif cpu.is_PentiumM():
! opt.extend(['-xB'])
  if (cpu.is_Xeon() or cpu.is_Core2() or cpu.is_Core2Extreme()) and cpu.getNCPUs()==2:
  opt.extend(['-xT'])
  if cpu.has_sse3() and (cpu.is_PentiumIV() or cpu.is_CoreDuo() or cpu.is_CoreSolo()):
--- 90,96 
  if cpu.has_sse2():
  opt.append('-xN')
  elif cpu.is_PentiumM():
! opt.extend(['-xN'])
  if (cpu.is_Xeon() or cpu.is_Core2() or cpu.is_Core2Extreme()) and cpu.getNCPUs()==2:
  opt.extend(['-xT'])
  if cpu.has_sse3() and (cpu.is_PentiumIV() or cpu.is_CoreDuo() or cpu.is_CoreSolo()):
diff -r -C3 -N numpy-1.3.0rc1.orig/numpy/distutils/intelccompiler.py numpy-1.3.0rc1/numpy/distutils/intelccompiler.py
*** numpy-1.3.0rc1.orig/numpy/distutils/intelccompiler.py	2009-03-24 06:54:54.0 +0100
--- numpy-1.3.0rc1/numpy/distutils/intelccompiler.py	2009-03-29 12:05:32.0 +0200
***
*** 8,23 
  """
  
  compiler_type = 'intel'
! cc_exe = 'icc'
  
  def __init__ (self, verbose=0, dry_run=0, force=0):
  UnixCCompiler.__init__ (self, verbose,dry_run, force)
! compiler = self.cc_exe
  self.set_executables(compiler=compiler,
   compiler_so=compiler,
!  compiler_cxx=compiler,
   linker_exe=compiler,
!  linker_so=compiler + ' -shared')
  
  class IntelItaniumCCompiler(IntelCCompiler):
  compiler_type = 'intele'
--- 8,25 
  """
  
  compiler_type = 'intel'
! cc_exe  = 'icc'
! cxx_exe = 'icpc'
  
  def __init__ (self, verbose=0, dry_run=0, force=0):
  Un

Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-29 Thread Christian Marquardt
They are, also in v1.3.0rc1 

Many thanks! 

Christian 

- "Charles R Harris"  wrote: 
> 
> 
> 
> 2009/3/27 Christian Marquardt < christ...@marquardt.sc > 
> 



> Error messages? Sure;-) 
> 
> python -c 'import numpy; numpy.test()' 
> Running unit tests for numpy 
> NumPy version 1.3.0b1 
> NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy 
> Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ 
> gcc 4.2 mode] 
> nose version 0.10.4 
> ...K.FF..FF..
>  

> OK, the tests should be fixed in r6773. 
> 
> Chuck 
> 
> 
> 

-- 
Dr. Christian Marquardt Email: christ...@marquardt.sc 
Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 
64293 Darmstadt Mobile: +49 (0) 179 290 84 74 
Germany Fax: +49 (0) 6151 95 13 885 


Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-27 Thread Christian Marquardt
Hi David,

> I *guess* that the compiler command line does not work with your
> changes, and that distutils got confused, and fails somewhere later
> (or sooner, who knows). Without actually seeing the errors you got, it
> is difficult to know more - but I would make sure the command line
> arguments are ok instead of focusing on the .src error,
> 
> cheers,
> 
> David

I'm not sure if I understand... The compiler options I have changed seem 
to work (and installation without the "build_clib --compiler=intel" option
to setup.py works fine with them).

To be sure I've compiled numpy from the distribution tar file without any 
patches. With

   python setup.py config --compiler=intel \
   config_fc --fcompiler=intel \
   build_ext --compiler=intel build

everything compiles fine (and builds the internal lapack, as I haven't given 
the MKL paths, and have no other lapack / blas installed). With

   python setup.py config --compiler=intel \
   config_fc --fcompiler=intel \
   build_clib --compiler=intel \
   build_ext --compiler=intel build

the attempt to build fails (complete output is below). The python installation 
I use is also build with the Intel icc compiler; so it does pick up that
compiler by default. Maybe something is going wrong in the implementation
of build_clib in the numpy distutils? Where would I search for that in the 
code?

Many thanks,

  Chris.




MEDEA /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1>python setup.py 
config --compiler=intel config_fc --fcompiler=intel build_clib --compiler=intel 
build_ext --compiler=intel build
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /opt/apps/lib
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1383:
 UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /opt/apps/lib
  libraries blas not found in /usr/local/lib
  libraries blas not found in /usr/lib
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1392:
 UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1395:
 UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib
  libraries lapack_atlas not found in /opt/apps/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /opt/apps/lib
  libraries lapack_atlas not found in /opt/apps/lib
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1290:
 UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/d

Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-27 Thread Christian Marquardt
55124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], 
dtype=complex64) 

== 
FAIL: test_cdouble (test_linalg.TestEigvalsh) 
-- 
Traceback (most recent call last): 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 221, in test_cdouble 
self.do(a) 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 249, in do 
assert_almost_equal(ev, evalues) 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 23, in assert_almost_equal 
old_assert_almost_equal(a, b, decimal=decimal, **kw) 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, 
in assert_almost_equal 
return assert_array_almost_equal(actual, desired, decimal, err_msg) 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, 
in assert_array_almost_equal 
header='Arrays are not almost equal') 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, 
in assert_array_compare 
raise AssertionError(msg) 
AssertionError: 
Arrays are not almost equal 

(mismatch 100.0%) 
x: array([ 4.60555128+0.j, -2.60555128+0.j]) 
y: array([-2.60555128 +1.11022302e-16j, 4.60555128 -1.11022302e-16j]) 

== 
FAIL: test_csingle (test_linalg.TestEigvalsh) 
-- 
Traceback (most recent call last): 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 217, in test_csingle 
self.do(a) 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 249, in do 
assert_almost_equal(ev, evalues) 
File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 23, in assert_almost_equal 
old_assert_almost_equal(a, b, decimal=decimal, **kw) 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, 
in assert_almost_equal 
return assert_array_almost_equal(actual, desired, decimal, err_msg) 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, 
in assert_array_almost_equal 
header='Arrays are not almost equal') 
File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, 
in assert_array_compare 
raise AssertionError(msg) 
AssertionError: 
Arrays are not almost equal 

(mismatch 100.0%) 
x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) 
y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], 
dtype=complex64) 

------ 
Ran 2029 tests in 19.729s 

FAILED (KNOWNFAIL=1, failures=4) 
MEDEA /home/marq> 



- "Charles R Harris"  wrote: 
> 
> 
> 
> On Thu, Mar 26, 2009 at 9:06 PM, Charles R Harris < charlesr.har...@gmail.com 
> > wrote: 
> 


> 
> 
> 
> 2009/3/26 Christian Marquardt < christ...@marquardt.sc > 
> 



> Oh sorry - you are right (too late in the night here in Europe). 
> 
> 
> The output is similar in all four cases - it looks like 
> 
> AssertionError: 
> Arrays are not almost equal 
> 
> (mismatch 100.0%) 
> x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) 
> y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], 
> dtype=complex64) 
> 
> Are x and y the expected and actual results? That would just show that there 
> are small rounding errors in the imaginary part, and that MKL returns the 
> results 
> in another order, no? 


> Looks like a sorting error, the eigen values should be sorted. So it looks 
> like a buggy test from here. Having an imaginary part to the eigenvalues 
> returned by a routine that is supposed to process Hermitean matrices doesn't 
> look right, but the errors are in the double precision range, which is pretty 
> good for float32. 
> 
> I think we need a ticket to fix those tests. 
> 

> Can you post the actual error messages? It will make it easier to find where 
> the failure is. 
> 
> Chuck 
> 
> 
> 

-- 
Dr. Christian Marquardt Email: christ...@marquardt.sc 
Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 
64293 Darmstadt Mobile: +49 (0) 179 290 84 74 
Germany Fax: +49 (0) 6151 95 13 885 


Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-26 Thread Christian Marquardt
Oh sorry - you are right (too late in the night here in Europe). 

The output is similar in all four cases - it looks like 

AssertionError: 
Arrays are not almost equal 

(mismatch 100.0%) 
x: array([ 4.60555124+0.j, -2.60555124+0.j], dtype=complex64) 
y: array([-2.60555124 +1.11022302e-16j, 4.60555124 -1.11022302e-16j], 
dtype=complex64) 

Are x and y the expected and actual results? That would just show that there 
are small rounding errors in the imaginary part, and that MKL returns the 
results 
in another order, no? 
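To check that it really is only an ordering issue, one can sort both arrays
before comparing; a small sketch with the values quoted above (illustrative
only, with tolerances chosen for float32):

```python
import numpy as np

# The x and y reported above hold the same eigenvalues in a different
# order, plus imaginary parts at the float32 rounding level.
x = np.array([4.60555124 + 0.0j, -2.60555124 + 0.0j], dtype=np.complex64)
y = np.array([-2.60555124 + 1.11022302e-16j,
               4.60555124 - 1.11022302e-16j], dtype=np.complex64)

xs, ys = np.sort_complex(x), np.sort_complex(y)
np.testing.assert_allclose(xs.real, ys.real, rtol=1e-6)
assert np.all(np.abs(ys.imag) < 1e-7)  # imaginary parts are mere noise
print("same eigenvalues after sorting")
```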


- "Charles R Harris"  wrote: 
> 
> 
> 
> 2009/3/26 Christian Marquardt < christ...@marquardt.sc > 
> 



> 
> Hmm. 
> 
> I downloaded the beta tar file and started from the untarred contents plus a 
> patch for the Intel compilers 
> (some changes of the command line arguments for the compiler and an added 
> setup.cfg file specifying the 
> paths to the Intel MKL libraries) which applied cleanly. I then ran 
> 
> python setup.py config --compiler=intel config_fc --fcompiler=intel 
> build_clib --compiler=intel build_ext --compiler=intel install 
> 
> which failed. 
> 
> After playing around a bit, I found that it seems to be the build_clib 
> --compiler=intel subcommand which 
> causes the trouble; after disabling it, that is with 
> 
> python setup.py config --compiler=intel config_fc --fcompiler=intel build_ext 
> --compiler=intel install 
> 
> things compile fine - and all but four of the unit tests fail 
> (test_linalg.TestEigh and test_linalg.TestEigvalsh 
> in both test_csingle and test_cdouble - should I be worried?) 
> 

> Four unit tests fail, or all fail except four? I assume you meant the former. 
> I'm not sure what the failures mean, can you check if they are really bad or 
> just some numbers a little bit off. I'm guessing these routines are calling 
> into MKL. 
> 
> 



> 
> 
> How are the .src files converted? 
> 

> The *.xxx.src files are templates that are processed by 
> numpy/distutils/conv_template.py to produce *.xxx files. When you have to 
> repeat basically the same code with umpteen different types a bit of 
> automation helps. The actual conversion is controlled by the setup/scons 
> files, I don't remember exactly where. 
> 
> Chuck 
> 
> 
> 
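For anyone curious what those templates do: here is a toy re-implementation of
the repeat/substitute idea (greatly simplified; the real
numpy/distutils/conv_template.py syntax is richer, so the marker names below
are only illustrative):

```python
import re

# Toy version of numpy's *.src preprocessing: the block between
# /**begin repeat**/ and /**end repeat**/ is emitted once per type,
# with @t@ substituted each time. (Simplified illustration only.)
def expand(template, name, values):
    body = re.search(r"/\*\*begin repeat\*\*/(.*?)/\*\*end repeat\*\*/",
                     template, re.S).group(1)
    return "".join(body.replace("@%s@" % name, v) for v in values)

src = """/**begin repeat**/
@t@ npy_@t@_add(@t@ a, @t@ b) { return a + b; }
/**end repeat**/"""

print(expand(src, "t", ["float", "double"]))
```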

-- 
Dr. Christian Marquardt Email: christ...@marquardt.sc 
Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 
64293 Darmstadt Mobile: +49 (0) 179 290 84 74 
Germany Fax: +49 (0) 6151 95 13 885 


Re: [Numpy-discussion] numpy v1.2.0 vs 1.2.1 and setup_tools

2009-03-26 Thread Christian Marquardt
v1.3.0b1 has the same problem - setup_tools doesn't seem to recognize
that numpy is already installed in the system's site-packages directory.

Maybe I should add that I'm using virtualenv to generate a test environment
which includes the system's site-packages; setup_tools does seem to recognize 
other packages which are only available there (e.g., scipy or netCDF4) if 
specified as a requirement for the install... strange. It also doesn't seem 
to work for tables (2.1, so not the newest version).

Any ideas on what might be going on?
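One thing that might be worth checking (an assumption on my part, not a
confirmed diagnosis): setuptools discovers installed distributions through
.egg-info entries in site-packages, so if a numpy install lacks one,
setuptools cannot "see" it even though the package imports fine. A quick
stdlib sketch for listing what metadata is actually there:

```python
import os

def egg_info_entries(site_packages):
    """Return the .egg-info files/directories in a site-packages
    directory -- the metadata setuptools uses to detect installs."""
    return sorted(e for e in os.listdir(site_packages)
                  if e.endswith(".egg-info"))

# Example with a hypothetical path:
# print(egg_info_entries("/usr/lib/python2.5/site-packages"))
# If numpy's package directory is present but no numpy-*.egg-info entry
# shows up here, setuptools will try to fetch numpy from PyPI again.
```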

Thanks a lot,

  Christian.

- "Christian Marquardt"  wrote:

> Hello!
> 
> I ran into the following problem: 
> 
> I have easy_installable packages which list numpy as a dependency.
> numpy itself is installed in the system's site-packages directory and
> works fine.
> 
> When running a python setup.py install of the package with numpy
> v1.2.0 installed, everything works fine. When running the same command
> with numpy 1.2.1 installed, it tries to download a numpy tar file from
> Pypi and to compile and install it again. It looks as if v1.2.1 isn't
> providing the relevant information to the setup tools, but 1.2.0 did.
> 
> I don't know about v1.3.0b1 yet - I have difficulties to compile that
> currently (another email). I'd be more than willing to track this
> down, but is there anybody who could give me a starting point where I
> should start to look?
> 
> Many thanks,
> 
>   Christian.

-- 
Dr. Christian Marquardt Email:  christ...@marquardt.sc
Wilhelm-Leuschner-Str. 27   Tel.:   +49 (0) 6151 95 13 776
64293 Darmstadt Mobile: +49 (0) 179 290 84 74
Germany Fax:+49 (0) 6151 95 13 885


Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-26 Thread Christian Marquardt

Hmm. 

I downloaded the beta tar file and started from the untarred contents plus a 
patch for the Intel compilers 
(some changes of the command line arguments for the compiler and an added 
setup.cfg file specifying the 
paths to the Intel MKL libraries) which applied cleanly. I then ran 

python setup.py config --compiler=intel config_fc --fcompiler=intel build_clib 
--compiler=intel build_ext --compiler=intel install 

which failed. 

After playing around a bit, I found that it seems to be the build_clib 
--compiler=intel subcommand which 
causes the trouble; after disabling it, that is with 

python setup.py config --compiler=intel config_fc --fcompiler=intel build_ext 
--compiler=intel install 

things compile fine - and all but four of the unit tests fail 
(test_linalg.TestEigh and test_linalg.TestEigvalsh 
in both test_csingle and test_cdouble - should I be worried?) 

How are the .src files converted? 

Many thanks, 

Christian. 


- "Charles R Harris"  wrote: 
> 
> 
> 
> On Thu, Mar 26, 2009 at 6:25 PM, Christian Marquardt < christ...@marquardt.sc 
> > wrote: 
> 

Hello, 
> 
> I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with Python 
> 2.5.x and an Intel C and Fortran compilers 10.1 as well as the MKL 10.0. The 
> distutils do find the compilers and the MKL (when using similar settings as I 
> used successfully for all previous numpy versions since 1.0.4 or so), but 
> then 
> bail out with the following error: 
> 
> ...>python setup.py install 
> 
> [...] 
> 
> running config 
> running config_fc 
> unifing config_fc, config, build_clib, build_ext, build commands --fcompiler 
> options 
> running build_clib 
> Found executable /opt/intel/cc/10.1.018/bin/icc 
> Could not locate executable ecc 
> customize IntelCCompiler 
> customize IntelCCompiler using build_clib 
> building 'npymath' library 
> compiling C sources 
> C compiler: icc 
> 
> error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src') 
> 
> I think the error message does not even come from the compiler... 
> 
> I'm lost... What does it mean, and why are there source files named ...c.src? 
> 

> That file should be preprocessed to produce npy_math.c which ends up in the 
> build directory. I don't know what is going on here, but you might first try 
> deleting the build directory just to see what happens. There might be some 
> setup file that is screwy/outdated also. Did you download the beta and do a 
> clean extract? 
> 
> Chuck 
> 
> 
> 

-- 
Dr. Christian Marquardt Email: christ...@marquardt.sc 
Wilhelm-Leuschner-Str. 27 Tel.: +49 (0) 6151 95 13 776 
64293 Darmstadt Mobile: +49 (0) 179 290 84 74 
Germany Fax: +49 (0) 6151 95 13 885 


[Numpy-discussion] Numpy 1.3.0b1 with Intel compiler

2009-03-26 Thread Christian Marquardt
Hello,

I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with Python 
2.5.x and an Intel C and Fortran compilers 10.1 as well as the MKL 10.0. The 
distutils do find the compilers and the MKL (when using similar settings as I 
used successfully for all previous numpy versions since 1.0.4 or so), but then 
bail out with the following error:

   ...>python setup.py install

   [...]

   running config
   running config_fc
   unifing config_fc, config, build_clib, build_ext, build commands --fcompiler 
options
   running build_clib
   Found executable /opt/intel/cc/10.1.018/bin/icc
   Could not locate executable ecc
   customize IntelCCompiler
   customize IntelCCompiler using build_clib
   building 'npymath' library
   compiling C sources
   C compiler: icc

   error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src')

I think the error message does not even come from the compiler...

I'm lost... What does it mean, and why are there source files named ...c.src?

Many thanks,

  Christian


[Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type

2009-03-26 Thread Christian Marquardt
Hello,

I tried to compile and install numpy 1.3.0b1 on a Suse Linux 10.3 with Python
2.5.x and an Intel C and Fortran compilers 10.1 as well as the MKL 10.0. The 
distutils do find the compilers and the MKL (when using similar settings as I 
used successfully for all previous numpy versions since 1.0.4 or so), but then 
bail out with the following error:

   ...>python setup.py install

   [...]

   running config
   running config_fc
   unifing config_fc, config, build_clib, build_ext, build commands --fcompiler 
options
   running build_clib
   Found executable /opt/intel/cc/10.1.018/bin/icc
   Could not locate executable ecc
   customize IntelCCompiler
   customize IntelCCompiler using build_clib
   building 'npymath' library
   compiling C sources
   C compiler: icc

   error: unknown file type '.src' (from 'numpy/core/src/npy_math.c.src')

I think the error message does not even come from the compiler...

I'm lost... What does it mean, and why are there source files named ...c.src?

Many thanks,

  Christian




[Numpy-discussion] numpy v1.2.0 vs 1.2.1 and setup_tools

2009-03-26 Thread Christian Marquardt
Hello!

I ran into the following problem: 

I have easy_installable packages which list numpy as a dependency. numpy itself 
is installed in the system's site-packages directory and works fine.

When running a python setup.py install of the package with numpy v1.2.0 
installed, everything works fine. When running the same command with numpy 
1.2.1 installed, it tries to download a numpy tar file from Pypi and to compile 
and install it again. It looks as if v1.2.1 isn't providing the relevant 
information to the setup tools, but 1.2.0 did.

I don't know about v1.3.0b1 yet - I have difficulties to compile that currently 
(another email). I'd be more than willing to track this down, but is there 
anybody who could give me a starting point where I should start to look?

Many thanks,

  Christian.


Re: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core

2007-06-15 Thread Christian Marquardt
On Fri, June 15, 2007 06:01, David Cournapeau wrote:

> I think it is important to separate different issues: object code
> compatibility, runtime compatibility, etc... Those are different issues.
> First, mixing ICC compiled code and gcc code *has* to be possible (I
> have never tried), otherwise, I don't see much use for it under linux.
> Then you have the problem of runtime services: I really doubt that ICC
> runtime is not compatible with gcc, and more globally with the GNU
> runtime (glibc, etc...); actually, ICC used to use the "standard" linux
> runtime, and I would be surprised if that changed.

Yes, this is possible - icc does use the standard system libraries. But
depending on the compiler options, icc will require additional libraries
from its own set of libs. For example, with the -x[...] and -ax[...]
options which exploit the floating point pipelines of the Intel cpus, it's
using its very own libsvml (vector math lib or something) which replaces
some of the math versions in the system lib. If the linker - runtime or
static - doesn't know about these, linking will fail.

Therefore, if an icc object with certain optimisation is linked with gcc
without specifying the required optimised libraries, linking fails. I
remember that this also happened for me when building an optimised version
of numpy and trying to load it from a gcc-compiled and linked version of
Python. Actually, if I remember correctly, this is even a problem for the
icc itself; try to link a program from optimised objects with icc without
giving the same -x[...] options to the linker...

It might be possible that the shared libraries can be told where
additional required shared libraries are located (--rpath?), but I was
never brave enough to try that out... I simply rebuilt python, with the
additional benefit that everything in python gets faster. Or so one
hopes...

It should be straightforward to link gcc objects and shared libs with icc
being the linker, though. Has anyone ever tried to build the core python
and numpy with icc, but continue to use the standard gcc-built extensions?
Just a thought... maybe a bit over the top :-((

  Chris.





Re: [Numpy-discussion] Specifying compiler command line options for numpy.disutils.core

2007-06-14 Thread Christian Marquardt
Hi,

I think the default for the standard python distutils is to use the
compiler and the compiler settings for the C compiler that were used to
build Python itself. There might be ways to specify other compilers; but
if you have a shared python library build with one compiler and modules
build with another you might run into trouble if the two compilers use
different system libraries which are not resolved by standard python
build.

I believe that numpy is similar in the sense that you can always build
additional modules with the compilers that were used to build the numpy
core; then, using two fortran based modules (say) will work well because
both require the same shared system libraries of the compiler. Probably,
the compiler options used to build numpy will also work for your
additional modules (with respect to paths to linear algebra libraries and
so on).

Again, I think there could be ways to build with different compilers, but
you do run the risk of incompatibilities with the shared libraries.
Therefore, I have become used to building python with the C compiler I'd
like to use, even if that means a lot of work.

Hope this helps,

  Chris.


On Thu, June 14, 2007 11:10, Matthieu Brucher wrote:
> Hi,
>
> I've been trying to use the Intel C Compiler for some extensions, and as a
> matter of fact, only numpy.distutils seems to support it... (hope that the
> next version of setuptools will...)
> Is it possible to change the compiler command line options in the
> commandline or in a .cfg file ? For the moment, I have only -shared, I'd
> like to add -xP for instance.
>
> This seems to be related to rex's last mail, but I did not find anywhere a
> way to specify these options in (numpy.)distutils or setuptools, even
> though
> it is available for every other non-Python "library builder".
>
> Matthieu
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




Re: [Numpy-discussion] building pynetcdf-0.7 on OS X 10.4 intel

2007-05-03 Thread Christian Marquardt
On Wed, May 2, 2007 23:36, James Boyle wrote:
> OS X 10.4.9,   python 2.5,  numpy 1.0.3.dev3726,  netcdf 3.6.2
>
> /usr/bin/ld: warning /Users/boyle5/netcdf-3.6.2/lib/libnetcdf.a
> archive's cputype (7, architecture i386) does not match cputype (18)
> for specified -arch flag: ppc (can't load from it)

NOT having any experience on Macs, but doesn't the above error message
suggest that your netCDF library has been built for i386 instead of
ppc? Could that be the problem? Can you run the ncdump and ncgen
executables from the same netCDF distribution?

Just a thought,

  Christian.



Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-24 Thread Christian Marquardt
On Tue, April 24, 2007 23:31, Christian Marquardt wrote:
> On Tue, April 24, 2007 23:08, Robert Kern wrote:
>> Christian Marquardt wrote:
>>> Restore the invariant, and follow python.
>>>
>>> This
>>>
>>>>>> -5 // 6
>>>-1
>>>
>>> and
>>>
>>>>>> array([-5])[0] // 6
>>>0
>>>
>>> simply doesn't make sense - in any language, you would expect that
>>> all basic operators provide you with the same same answer when
>>> applied to the same number, no?
>>
>> Not if they are different types, you don't, e.g. -5.0 / 6 .
>
> But I would regard an integer and an array of integers as the same type.

By the way:

   >>> -5.0 // 6
   -1.0
   >>> array([-5.0]) // 6
   array([-1.])

  Christian.





Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-24 Thread Christian Marquardt
On Tue, April 24, 2007 23:08, Robert Kern wrote:
> Christian Marquardt wrote:
>> Restore the invariant, and follow python.
>>
>> This
>>
>>>>> -5 // 6
>>-1
>>
>> and
>>
>>>>> array([-5])[0] // 6
>>0
>>
>> simply doesn't make sense - in any language, you would expect that
>> all basic operators provide you with the same same answer when
>> applied to the same number, no?
>
> Not if they are different types, you don't, e.g. -5.0 / 6 .

But I would regard an integer and an array of integers as the same type.

  Christian.




>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma
>  that is made terrible by our own mad attempt to interpret it as though it
> had
>  an underlying truth."
>   -- Umberto Eco
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-24 Thread Christian Marquardt
Restore the invariant, and follow python.

This

   >>> -5 // 6
   -1

and

   >>> array([-5])[0] // 6
   0

simply doesn't make sense - in any language, you would expect that
all basic operators provide you with the same answer when
applied to the same number, no?

  Christian.


On Tue, April 24, 2007 22:38, Alan G Isaac wrote:
> Do restore the invariant.
> Behave completely like Python if not too costly,
> otherwise follow C89.
>
> A user's view,
> Alan Isaac




Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-23 Thread Christian Marquardt
Hmmm,

On Mon, April 23, 2007 22:29, Christian Marquardt wrote:
> Actually,
>
> it happens for normal integers as well:
>
>>>> n = np.array([-5, -100, -150])
>>>> n // 100
>array([ 0, -1, -1])
>>>> -5//100, -100//100, -150//100
>(-1, -1, -2)

and finally:

   >>> n % 100
   array([95,  0, 50])
   >>> -5 % 100, -100 % 100, -150 % 100
   (95, 0, 50)

So plain Python, using long, provides consistent results across //
and %, but numpy doesn't...
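For what it's worth, the invariant under discussion can be stated and checked directly. This is only a sketch against current Python/numpy, where the array behaviour has since been brought in line with plain integers:

```python
import numpy as np

# Python defines floor division and modulo so that
#   (a // b) * b + (a % b) == a
# always holds, with the remainder taking the sign of the divisor.
for a in (-5, -100, -150):
    assert (a // 100) * 100 + (a % 100) == a

# Current numpy releases follow the same convention, so // and %
# agree with the plain-Python results quoted above:
n = np.array([-5, -100, -150])
print(n // 100)  # floor division, rounding toward -inf: [-1 -1 -2]
print(n % 100)   # remainder with the divisor's sign:    [95  0 50]
```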

  Christian.

> On Mon, April 23, 2007 22:20, Christian Marquardt wrote:
>> Dear all,
>>
>> this is odd:
>>
>>>>> import numpy as np
>>>>> fact = 2825L * 86400L
>>>>> nn = np.array([-20905000L])
>>>>> nn
>>array([-20905000], dtype=int64)
>>>>> nn[0] // fact
>>0
>>
>> But:
>>
>>>>> long(nn[0]) // fact
>>-1L
>>
>> Is this a bug in numpy, or in python's implementation of longs? I would
>> think both should give the same, really... (Python 2.5, numpy
>> 1.0.3dev3725,
>> Linux, Intel compilers...)
>>
>> Many thanks for any ideas / advice,
>>
>>   Christian
>>
>> ___
>> Numpy-discussion mailing list
>> Numpy-discussion@scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>
>
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




Re: [Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-23 Thread Christian Marquardt
Actually,

it happens for normal integers as well:

   >>> n = np.array([-5, -100, -150])
   >>> n // 100
   array([ 0, -1, -1])
   >>> -5//100, -100//100, -150//100
   (-1, -1, -2)

On Mon, April 23, 2007 22:20, Christian Marquardt wrote:
> Dear all,
>
> this is odd:
>
>>>> import numpy as np
>>>> fact = 2825L * 86400L
>>>> nn = np.array([-20905000L])
>>>> nn
>array([-20905000], dtype=int64)
>>>> nn[0] // fact
>0
>
> But:
>
>>>> long(nn[0]) // fact
>-1L
>
> Is this a bug in numpy, or in python's implementation of longs? I would
> think both should give the same, really... (Python 2.5, numpy
> 1.0.3dev3725,
> Linux, Intel compilers...)
>
> Many thanks for any ideas / advice,
>
>   Christian
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




[Numpy-discussion] Oddity with numpy.int64 integer division

2007-04-23 Thread Christian Marquardt
Dear all,

this is odd:

   >>> import numpy as np
   >>> fact = 2825L * 86400L
   >>> nn = np.array([-20905000L])
   >>> nn
   array([-20905000], dtype=int64)
   >>> nn[0] // fact
   0

But:

   >>> long(nn[0]) // fact
   -1L

Is this a bug in numpy, or in python's implementation of longs? I would
think both should give the same, really... (Python 2.5, numpy 1.0.3dev3725,
Linux, Intel compilers...)

Many thanks for any ideas / advice,

  Christian



Re: [Numpy-discussion] uint64 typecasting with scalars broken (?)

2007-04-23 Thread Christian Marquardt
On Mon, April 23, 2007 01:28, Charles R Harris wrote:
> Looks like a bug to me:
>
> In [5]: x = array([1],dtype=uint64)
>
> In [6]: type(x[0])
> Out[6]: 
>
> In [7]: type(x[0]+1)
> Out[7]: 
>
> Chuck

Yeah. Especially as it works apparently fine for uint32 and int64.

  Christian.





[Numpy-discussion] uint64 typecasting with scalars broken (?)

2007-04-22 Thread Christian Marquardt
Hello,

The following is what I expected...

   >>> y = 1234
   >>> x = array([1], dtype = "uint64")
   >>> print x + y, (x + y).dtype.type
   [1235] 

but is this the way it should be? (numpy 1.0.2, Linux, Intel compilers)

   >>> print x[0] + y, type(x[0] + y)
   1235.0 
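A hedged sketch of the promotion rules involved: only the array case is asserted here, because the scalar behaviour reported above has changed across numpy versions.

```python
import numpy as np

x = np.array([1], dtype=np.uint64)
y = 1234

# Adding a plain Python int to a uint64 *array* keeps the array's
# dtype -- this part has been stable across numpy versions:
print((x + y).dtype)  # uint64

# Adding a plain Python int to a uint64 *scalar* is the contested
# case: old releases (as reported above) promoted to float64, since
# no integer dtype could hold all values of both operand types; the
# scalar promotion rules were later revised (NEP 50).  Inspect rather
# than assume:
print(type(x[0] + y))
```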

Thanks,

  Christian.



Re: [Numpy-discussion] Problems building numpy and scipy on AIX

2007-04-21 Thread Christian Marquardt
Yes,

that worked - many thanks!

  Christian.


On Thu, April 19, 2007 22:38, David M. Cooke wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Christian Marquardt wrote:
>> Dear David,
>>
>> the svn version of numpy does indeed build cleanly on AIX. Many thanks!
>>
>> However, the wrapper problem still exists for the C++ compiler, and
>> shows
>> up when compiling scipy. Now, I *assume* that SciPy is using the
>> distutils
>> as installed by numpy. Do you know where the linker settings for the C++
>> compiler might be overwritten? There are two or three C compiler related
>> python modules in numpy/distutils... Or would you think that this
>> problem
>> is entirely unrelated to the distutils in numpy?
>
> I'm working on a better solution, but the quick fix to your problem is
> to look in numpy/distutils/command/build_ext.py. There are two lines
> that reference self.compiler.linker_so[0]; change those 0s to a 1s, so
> it keeps the ld_so_aix script there when switching the linker.
>
> - --
> |>|\/|<
> /--\
> |David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
> |[EMAIL PROTECTED]
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.7 (Darwin)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iD8DBQFGJ9MzN9ixZKFWjRQRArNwAKC029wYORk9sm+FShcYKNd0UEcMdgCghHGC
> rjYqtaESdt8zRgZHCDxYbDk=
> =PS30
> -END PGP SIGNATURE-
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




Re: [Numpy-discussion] building numpy with atlas on ubuntu edgy

2007-04-18 Thread Christian Marquardt
Sun has recently released their compilers under an open-source license for
Linux as well (Sun Studio Express or something), including their perflib -
which includes BLAS and LAPACK. Has anybody tried that combination to see
how it performs compared to the Intel MKL or ATLAS? I think they are free
even for commercial use.

Just a thought...

  Christian.


On Wed, April 18, 2007 22:27, rex wrote:
> Andrew Straw <[EMAIL PROTECTED]> [2007-04-18 13:22]:
>> rex wrote:
>> > If your use is entirely non-commercial you can use Intel's MKL with
>> > built-in optimized BLAS and LAPACK and avoid the need for ATLAS.
>> >
>> Just to clarify, my understanding is that if you buy a developer's
>> license, you can also use it for commercial use, including distributing
>> binaries. (Otherwise it would seem kind of silly for Intel to invest so
>> much in their performance libraries and compilers.)
>
> Yes, I should have included that. icc and MKL licenses for commercial
> use are $399 each.
>
> -rex
> --
> "It's a Singer, Captain Picard."   "Make it sew."
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




Re: [Numpy-discussion] Problems building numpy and scipy on AIX

2007-04-18 Thread Christian Marquardt
Dear David,

the svn version of numpy does indeed build cleanly on AIX. Many thanks!

However, the wrapper problem still exists for the C++ compiler, and shows
up when compiling scipy. Now, I *assume* that SciPy is using the distutils
as installed by numpy. Do you know where the linker settings for the C++
compiler might be overwritten? There are two or three C compiler related
python modules in numpy/distutils... Or would you think that this problem
is entirely unrelated to the distutils in numpy?

Many thanks,

  Christian.



On Wed, April 18, 2007 17:20, David M. Cooke wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Christian Marquardt wrote:
>> Hello,
>>
>> I've run into a problem building numpy-1.0.2 on AIX (using gcc and
>> native
>> Fortran compilers). The problem on that platform in general is that the
>> process for building shared libraries is different from what's normally
>> done (and it's a pain...)
>
> Already fixed in svn :)
>
> - --
> |>|\/|<
> /--\
> |David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
> |[EMAIL PROTECTED]
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.7 (Darwin)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iD8DBQFGJjctN9ixZKFWjRQRAoLZAJ4uz6L/dO1j47nz4o5BEFiFLlc6bwCfayha
> tWZCkDzXjNR7lrJK7AVMyTc=
> =9it9
> -END PGP SIGNATURE-
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




[Numpy-discussion] Problems building numpy and scipy on AIX

2007-04-18 Thread Christian Marquardt
Hello,

I've run into a problem building numpy-1.0.2 on AIX (using gcc and native
Fortran compilers). The problem on that platform in general is that the
process for building shared libraries is different from what's normally
done (and it's a pain...)

Anyway. Core python has a dedicated script called ld_so_aix which is used
to create the shared objects (it can be found in the
.../lib/python2.5/config directory of any installed python version).
Everything compiled with the compiler that was used to build python
itself, and which is compiled through the standard distutils, is also
using this construction, so building C extensions usually works fine.

I ran into problems during the installation of numpy 1.0.2 using

   python setup.py config_fc --fcompiler=ibm install

because the numpy distutils try to use the xlf95 compiler to create a
shared image - what needs to be done instead is to wrap the call to the
compiler with the ld_so_aix script. I have attached a patch which solves
the problem for me (and python 2.5), but I don't know if it is the right
place and way to do that - and the python version is hardwired as well, so
it's not really a fix.

What I also find a bit suspicious is that the Fortran linker is used in
that particular case at all - the problem occurs when building
_dotblas.so, and some targets after that one. However, the corresponding
code is written in C and has actually been compiled with the C compiler.
The problem did not appear with numpy 1.0.1, where the C compiler is
(correctly) used for linking. So maybe there's another hidden problem...

But anyway, I think that the missing wrapping with ld_so_aix is a bug on
its own.

When moving on to building scipy, I ran into similar problems with C++
compiled code - instead of wrapping the c++ compiler/linker with
ld_so_aix, the following command is executed

   g++ gcc -pthread
-bI:/homespace/grasppf/aix/lib/python2.5/config/python.exp
build/temp.aix-5.1-2.5/Lib/cluster/src/vq_wrap.o
-Lbuild/temp.aix-5.1-2.5 -o build/lib.aix-5.1-2.5/scipy/cluster/_vq.so


and predictably causes an error. What's happening is (I think) that the
numpy distutils partially overwrite the linker modifications from the core
python. (the -pthread -bI:/...python.exp is an argument to the ld_so_aix
script). My problem is that I do not know where in the numpy distutils
code this modification happens, so I've no idea where to try to fix it -
does anyone on this list know?

Many thanks,

  Christian.
diff -r -C3 numpy-1.0.2.orig/numpy/distutils/fcompiler/ibm.py numpy-1.0.2/numpy/distutils/fcompiler/ibm.py
*** numpy-1.0.2.orig/numpy/distutils/fcompiler/ibm.py	Fri Mar  2 20:52:50 2007
--- numpy-1.0.2/numpy/distutils/fcompiler/ibm.py	Wed Apr 18 07:23:26 2007
***
*** 58,63 
--- 58,65 
  opt = []
  if sys.platform=='darwin':
  opt.append('-Wl,-bundle,-flat_namespace,-undefined,suppress')
+ elif sys.platform.startswith('aix'):
+ self.executables['linker_so'] = [sys.prefix + "/lib/python2.5/config/ld_so_aix xlf95 -pthread -bI:" + sys.prefix + "/lib/python2.5/config/python.exp"]
  else:
  opt.append('-bshared')
  version = self.get_version(ok_status=[0,40])


Re: [Numpy-discussion] pyhdf / was: Request for porting pycdf to NumPy

2007-02-09 Thread Christian Marquardt

Oops!

> a) Don't know; the last releases of pycdf and pyhdf are from February 2001

pycdf is from 2006, of course. Sorry!

  Chris.




Re: [Numpy-discussion] pyhdf / was: Request for porting pycdf to NumPy

2007-02-09 Thread Christian Marquardt
On Fri, February 9, 2007 22:28, Christopher Barker wrote:
>> Andre Gosselin (the guy who wrote pycdf) also wrote an interface to HDF4
>> (not 5) named pyhdf.
>
> Is he still maintaining these packages? Have you submitted the patches
> to him?

a) Don't know; the last releases of pycdf and pyhdf are from February 2001
and July 2005, respectively.

b) Yes, but just half an hour ago, after I had seen the request for pycdf
here.

I would actually prefer if Andre would apply the patches in some way to
his distribution. The C wrappers for both pycdf and pyhdf are created with
swig, but the original interface description files are not included in the
distribution. So I patched the generated wrapper code instead of the
original files - or rather, I let Travis' alter_codeN do the job ;-))

  Chris.


>
> -Chris
>
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R(206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
>
> [EMAIL PROTECTED]
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>




[Numpy-discussion] pyhdf / was: Request for porting pycdf to NumPy

2007-02-09 Thread Christian Marquardt
While we are at it,

Andre Gosselin (the guy who wrote pycdf) also wrote an interface to HDF4
(not 5) named pyhdf. I'm using that with numpy as well (patch attached),
but I haven't tested it much - little more than just running the examples,
really (which appear to be ok). Maybe it's useful...

In pyhdf, the author didn't support different array interfaces, so the
attached patch just modifies the existing source code and moves it over
to numpy. I've also attached an "install script" which makes use of the
particular location of the HDF4 libraries (in $PREFIX/lib) and header
files (in $PREFIX/include/hdf), so it almost certainly needs to be
adapted to the location of the actual HDF libraries and headers.

Regards,

  Christian.


On Fri, February 9, 2007 22:00, Christian Marquardt wrote:
> Dear list,
>
> attached is a patch for the original pycdf-0.6.2-rc1 distribution as
> available through sourceforge - and a little shell script illustrating
> how to install it. After applyng the patch, it should be configured for
> numpy. Note that the "installation" script uses an environment variable
> PREFIX to define where the pacjkage is installed; it also assumes that
> the netcdf libraries are installed in $PREFIX/lib.
>
> The orginal author already supported both numeric and numarray, so I just
> added a new subdirectory for numpy - which is simply the numeric version
> slightly changed. The patch is only that large because it replicates much
> of already existing code...
>
> I have been using this "port" for many weeks now without any problems or
> difficulties. I hope it's useful for others as well;-)
>
>   Christian.
>
>
>
> On Fri, February 9, 2007 15:31, Daran L. Rife wrote:
>> Hi Travis,
>>
>> If you're still offering NumPy "patches" to third party
>> packages that rely upon Numeric, I would really like for
>> pycdf to be ported to NumPy. This would allow me to
>> completely transition to NumPy.
>>
>> Thanks very much for considering my request.
>>
>>
>> Daran Rife
>>
>> ___
>> Numpy-discussion mailing list
>> Numpy-discussion@scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
diff -r -N -C 3 pyhdf-0.7-3/examples/hdfstruct/hdfstruct.py pyhdf-0.7-3-numpy/examples/hdfstruct/hdfstruct.py
*** pyhdf-0.7-3/examples/hdfstruct/hdfstruct.py	2004-11-03 15:41:12.0 +0100
--- pyhdf-0.7-3-numpy/examples/hdfstruct/hdfstruct.py	2007-01-20 21:08:53.0 +0100
***
*** 2,8 
  
  import sys
  from pyhdf.SD import *
! from Numeric import *
  
  # Dictionnary used to convert from a numeric data type to its symbolic
  # representation
--- 2,8 
  
  import sys
  from pyhdf.SD import *
! from numpy import *
  
  # Dictionnary used to convert from a numeric data type to its symbolic
  # representation
diff -r -N -C 3 pyhdf-0.7-3/examples/szip/szip16.py pyhdf-0.7-3-numpy/examples/szip/szip16.py
*** pyhdf-0.7-3/examples/szip/szip16.py	2005-07-13 05:19:19.0 +0200
--- pyhdf-0.7-3-numpy/examples/szip/szip16.py	2007-01-20 21:04:14.0 +0100
***
*** 1,7 
  #!/usr/bin/env python
  
  from pyhdf.SD import *
! import Numeric
  
  SZIP = True
  
--- 1,7 
  #!/usr/bin/env python
  
  from pyhdf.SD import *
! import numpy
  
  SZIP = True
  
***
*** 12,18 
  WIDTH   = 100
  fill_value  = 0
  
! #data = Numeric.array(((100,100,200,200,300,400),
  #  (100,100,200,200,300,400),
  #  (100,100,200,200,300,400),
  #  (300,300,  0,400,300,400),
--- 12,18 
  WIDTH   = 100
  fill_value  = 0
  
! #data = numpy.array(((100,100,200,200,300,400),
  #  (100,100,200,200,300,400),
  #  (100,100,200,200,300,400),
  #  (300,300,  0,400,300,400),
***
*** 20,28 
  #  (300,300,  0,400,300,400),
  #  (0,  0,600,600,300,400),
  #  (500,500,600,600,300,400),
! #  (0,  0,600,600,300,400)), Numeric.Int16)
  
! data = Numeric.zeros((LENGTH, WIDTH), Numeric.Int16)
  for i in range(LENGTH):
  for j in range(WIDTH):
  data[i,j] = i+j
--- 20,28 
  #  (300,300,  0,400,300,400),
  #  (0,  0,600,600,300,400),
  #  (500,500,600,600,300,400),
! #  (0,  0,600,600,300,400)), numpy.int16)
  
! data = numpy.zeros((LENGTH, WIDTH), numpy.int16)
  for i in range(LENGTH):
  for j in range(WIDTH

Re: [Numpy-discussion] Request for porting pycdf to NumPy

2007-02-09 Thread Christian Marquardt
Dear list,

attached is a patch for the original pycdf-0.6.2-rc1 distribution as
available through SourceForge - and a little shell script illustrating
how to install it. After applying the patch, it should be configured for
numpy. Note that the "installation" script uses an environment variable
PREFIX to define where the package is installed; it also assumes that
the netcdf libraries are installed in $PREFIX/lib.

The original author already supported both Numeric and numarray, so I just
added a new subdirectory for numpy - which is simply the Numeric version
slightly changed. The patch is only that large because it replicates much
of the already existing code...

I have been using this "port" for many weeks now without any problems or
difficulties. I hope it's useful for others as well;-)

  Christian.



On Fri, February 9, 2007 15:31, Daran L. Rife wrote:
> Hi Travis,
>
> If you're still offering NumPy "patches" to third party
> packages that rely upon Numeric, I would really like for
> pycdf to be ported to NumPy. This would allow me to
> completely transition to NumPy.
>
> Thanks very much for considering my request.
>
>
> Daran Rife
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>


pycdf-0.6.2-rc1-numpy.patch.gz
Description: GNU Zip compressed data


build_pycdf-0.6-2-rc1-linux.sh
Description: application/shellscript


Re: [Numpy-discussion] building NumPy with Intel CC & MKL

2007-01-24 Thread Christian Marquardt
Dear rex,

I'll try to explain... I hope it's not too basic.

Python searches for its modules along the PYTHONPATH, i.e. a list
of directories where it expects to find whatever it needs. This is the
same way the Unix shell (or the DOS command.com) looks in the PATH in
order to find programs or shell/batch scripts, and the dynamic loader
uses LD_LIBRARY_PATH to find shared libraries.

>>  import numpy
>>  print numpy
>> > <module 'numpy' from '/usr/lib/python2.5/site-packages/numpy/__init__.pyc'>
>> >
>> > What am I to make of this? Is it the rpm numpy or is it the numpy I
>> > built using the Intel compiler and MKL?

This tells from which directory your Python installation actually loaded
numpy from: It used the numpy installed in the directory

   /usr/lib/python2.5/site-packages/numpy

By *convention* (as someone already pointed out before),
/usr/lib/python2.5/site-packages is the directory where the original
system versions of Python packages are installed. In particular, the
rpm version will very likely install its stuff there.

When installing additional python modules or packages via a command like

   python setup.py install

the new packages will also be installed in that system directory. So if
you have installed your Intel version of numpy with the above command, you
might have overwritten the rpm stuff. There is a way to install in a
different place; more on that below.

You now probably want to find out if the numpy version in /usr/lib/... is
the Intel one or the original rpm one. To do this, you can check if the
MKL and Intel libraries are actually loaded by the shared libraries within
the numpy installation. You can use the command ldd which shows which
shared libraries are loaded by executables or other shared libraries. For
example, in my installation, the command

   ldd /python2.5/site-packages/numpy/linalg/lapack_lite.so

gives the following output:

   MEDEA /opt/apps/lib/python2.5/site-packages/numpy>ldd
./linalg/lapack_lite.so
linux-gate.so.1 =>  (0xe000)
libmkl_lapack32.so => /opt/intel/mkl/8.1/lib/32/libmkl_lapack32.so
(0x40124000)
libmkl_lapack64.so => /opt/intel/mkl/8.1/lib/32/libmkl_lapack64.so
(0x403c8000)
libmkl.so => /opt/intel/mkl/8.1/lib/32/libmkl.so (0x40692000)
libvml.so => /opt/intel/mkl/8.1/lib/32/libvml.so (0x406f3000)
libguide.so => /opt/intel/mkl/8.1/lib/32/libguide.so (0x4072c000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x40785000)
libimf.so => /opt/intel/fc/9.1/lib/libimf.so (0x40797000)
libm.so.6 => /lib/tls/libm.so.6 (0x409d5000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x409f8000)
libirc.so => /opt/intel/fc/9.1/lib/libirc.so (0x40a0)
libc.so.6 => /lib/tls/libc.so.6 (0x40a41000)
libdl.so.2 => /lib/libdl.so.2 (0x40b5b000)
/lib/ld-linux.so.2 (0x8000)

Note that the MKL libraries are referenced at the beginning - just look at
the path names! If the output for your lapack_lite.so also contains
references to the MKL libs, you've got the Intel version in
/usr/lib/python2.5/ (and have probably overwritten the rpm version).
If you do not get any reference to the MKL stuff, it's still the rpm
version which does not use the MKL.

Now, let's assume that you have the rpm version in /usr/lib/python2.5/
Maybe you'll want to reinstall the rpm to be sure that this is the case.

You now want to a) install your Intel version in some well-defined place,
and b) make sure that your Python picks that version up when importing
numpy.

To achieve a) one way is to reinstall numpy from the source as before, BUT
with

   python setup.py install --prefix=<dir>

where <dir> is the path to some directory, e.g.

   python setup.py install --prefix=$HOME

The latter would install numpy into the directory

   $HOME/lib/python2.5/site-packages/numpy

Do an ls afterwards to check if numpy really arrived there. Instead of
using the environment variable HOME, you can of course also any other
directory you like. I'll stick to HOME in the following.

For b), we have to tell python that modules are waiting for it to be
picked up in $HOME/lib/python2.5/site-packages. You do that by setting the
environment variable PYTHONPATH, as was also mentioned in this thread. In
our example, you would do (for a bash or ksh)

   export PYTHONPATH=$HOME/lib/python2.5/site-packages

As long as this variable is set and exported (i.e., visible in the
environment of every program you start), the next instance of Python
you'll start will now begin searching for modules in PYTHONPATH whenever
you do an import, and only fall back to the ones in the system wide
installation if it doesn't find the required module in PYTHONPATH.
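The search-path mechanism described above can be illustrated from within Python itself. This is only a sketch; the directory name below is an example, not a real installation:

```python
import os
import sys

# At startup, every directory listed in PYTHONPATH is placed near the
# front of sys.path, so packages found there shadow the system-wide
# copies under .../site-packages.
example_dir = os.path.join(os.path.expanduser("~"), "lib", "site-packages")
sys.path.insert(0, example_dir)  # this is, in effect, what PYTHONPATH does
assert sys.path[0] == example_dir

# After an import, the module's __file__ attribute shows which copy was
# actually picked up -- the same check as 'print numpy' above:
import json  # stand-in for numpy in this sketch
print(json.__file__)
```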

So, after having set PYTHONPATH in your environment, start up python and
import numpy. Do the 'print numpy' within python again and look at the
output. Does it point to the installation directory of your Intel version?
Great; you're done.

Re: [Numpy-discussion] Docstring standards for NumPy and SciPy

2007-01-10 Thread Christian Marquardt
On Wed, 2007-01-10 at 17:53 -0600, Robert Kern wrote:

> How are you viewing the docstrings that wouldn't associate the docstring with
> the function?

   print <function>.__doc__

Like so:

  Python 2.4 (#1, Mar 22 2005, 21:42:42)
  [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> def squared(x):
  ...     """My docstring without function signature."""
  ...     return x*x
  ...
  >>> print squared.__doc__
  My docstring without function signature.


Sorry, I wasn't aware that help() prints the signature; I just don't use
it very often... I guess you have a point there.

  Chris.



Re: [Numpy-discussion] Docstring standards for NumPy and SciPy

2007-01-10 Thread Christian Marquardt
Very nice! 

> """
> one-line summary not using variable names or the function name

I would personally prefer that the first line contains the function name
(but not its arguments) - that way, when you are scrolling back in your
terminal, or have a version printed out, you know what function/method the
docstring belongs to without having to refer to examples lower down.

> A few sentences giving an extended description.

After the general description, but before giving the inputs and outputs,
wouldn't it make sense to give the function signature as well? Something
like

  named, list, of, outputs = my_function(var1, variable2 [,kwdarg1=...])

This would again reduce the need to have an extra look into the example
section. Using the brackets, optional arguments and default settings
could also be communicated easily.
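As an illustration of this suggestion - with hypothetical function and argument names, and only a sketch of the proposed layout:

```python
def my_function(var1, variable2, kwdarg1=None):
    """my_function(var1, variable2[, kwdarg1]) -> (var1, variable2, kwdarg1)

    One-line summary not using variable names or the function name.

    A few sentences giving an extended description.

    Inputs:
      var1      -- explanation
      variable2 -- explanation
      kwdarg1   -- optional; behaviour when omitted
    """
    return var1, variable2, kwdarg1

# When scrolling past the docstring in a terminal, the first line now
# identifies the function and shows its optional arguments in brackets:
print(my_function.__doc__.splitlines()[0])
```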

> Inputs:
>  var1  -- Explanation
>  variable2 -- Explanation

 [...skipped a bit...]

> Notes:
>  Additional notes if needed

> Authors:
>  name (date):  notes about what was done
>  name (date):  major documentation updates can be included here also.

I'm all for mentioning the authors (certainly in the main docstrings of
a module, say). *But* I would put that information at the very end of a
docstring - when reading documentation, I'm primarily interested in the
usage information, examples, the 'See also' section, and further notes or
comments. Having to skip author names and even a (possibly long) list of
modifications somewhere in the middle of the docstring, i.e. before
getting to the information I'm looking for, each time seems like an
annoyance (to me) that is not really necessary.

Hope this helps / is useful,

  Christian.






[Numpy-discussion] Indices returned by where()

2006-12-06 Thread Christian Marquardt
Dear list,

apologies if the answer to my question is obvious...

Is the following intentional?

  $>python

  Python 2.4 (#1, Mar 22 2005, 21:42:42)
  [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.

  >>> import numpy as np
  >>> print np.__version__
  1.0

  >>> x = np.array([1., 2., 3., 4., 5.])

  >>> idx = np.where(x > 6.)
  >>> print len(idx)
  1

The reason is of course that where() returns a tuple of index arrays
instead of simply an index array:

  >>> print idx
  (array([], dtype=int32),)

Does that mean that one always has to explicitly request the first
element of the returned tuple in order to check how many matches were
found, even for 1d arrays? What's the reason for designing it that way?
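For completeness, a sketch of why the tuple is returned, and of the usual ways to count matches in the 1-d case:

```python
import numpy as np

x = np.array([1., 2., 3., 4., 5.])

# where() with a single condition returns one index array PER
# DIMENSION, packaged as a tuple -- that way the result can be used
# directly for fancy indexing of n-d arrays.  For a 1-d array the
# tuple therefore always has exactly one element.
idx = np.where(x > 6.)
assert isinstance(idx, tuple) and len(idx) == 1
assert idx[0].size == 0       # number of matches for a 1-d array

# Convenience alternatives that skip the tuple in the 1-d case:
assert np.flatnonzero(x > 6.).size == 0
assert np.count_nonzero(x > 6.) == 0
```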

Many thanks,

  Christian
