Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson


Pearu Peterson wrote:

 Hmm, regarding `intent(in, out) j`, this should work. I'll check what
 is going on..

The `intent(in, out) j` works when pycalc is called as a subroutine:

   call pycalc(i, j)

instead of

   pyreturn = pycalc(i, j)

Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-25 Thread David Cournapeau
Olivia Cheronet wrote:
 compile options: '-Inumpy/core/src -Inumpy/core/include 
 -I/usr/include/python2.5 -c'
 gcc: _configtest.c
 gcc _configtest.o -llibm.a -o _configtest.exe
 /usr/lib/gcc/i686-pc-cygwin/3.4.4/../../../../i686-pc-cygwin/bin/ld: crt0.o: 
 No such file: No such file or directory
 collect2: ld returned 1 exit status
 /usr/lib/gcc/i686-pc-cygwin/3.4.4/../../../../i686-pc-cygwin/bin/ld: crt0.o: 
 No such file: No such file or directory
 collect2: ld returned 1 exit status

This is your problem: crt0.o is a file part of the C runtime, necessary
to create any executable. This should have been installed at the same
time as gcc, I would suggest reinstalling gcc on your cygwin. If this
works, you should be able to compile a hello world from the command line,

cheers,

David


Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-25 Thread Olivia Cheronet
The crt0.o file was indeed missing. I have reinstalled cygwin from the cygwin 
setup.exe (as it seemed to be included therein), and it seems to have solved 
that.

However, I now get the error below.

Thanks,

Olivia

_

Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  NOT AVAILABLE

/cygdrive/c/cygwin/home/Global/numpy-1.3.0/numpy/distutils/system_info.py:1383: 
UserWarning: 
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /usr/local/lib
  FOUND:
libraries = ['blas']
library_dirs = ['/usr/lib']
language = f77

  FOUND:
libraries = ['blas']
library_dirs = ['/usr/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /usr/local/lib
  libraries mkl,vml,guide not found in /usr/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

/cygdrive/c/cygwin/home/Global/numpy-1.3.0/numpy/distutils/system_info.py:1290: 
UserWarning: 
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
lapack_info:
  libraries lapack not found in /usr/local/lib
  FOUND:
libraries = ['lapack']
library_dirs = ['/usr/lib']
language = f77

  FOUND:
libraries = ['lapack', 'blas']
library_dirs = ['/usr/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler opti
ons
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler opt
ions
running build_src
building py_modules sources
building library npymath sources
building extension numpy.core._sort sources
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/config.h' to
 sources.
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/numpyconfig.
h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/__multiarray
_api.h' to sources.
numpy.core - nothing done with h_files = ['build/src.cygwin-1.5.25-i686-2.5/nump
y/core/include/numpy/config.h', 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/inc
lude/numpy/numpyconfig.h', 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/
numpy/__multiarray_api.h']
building extension numpy.core.multiarray sources
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/config.h' to
 sources.
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/numpyconfig.
h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/__multiarray
_api.h' to sources.
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/src' to include_dirs.
numpy.core - nothing done with h_files = ['build/src.cygwin-1.5.25-i686-2.5/nump
y/core/src/scalartypes.inc', 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/ar
raytypes.inc', 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/config
.h', 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/numpyconfig.h', 
'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/__multiarray_api.h']
building extension numpy.core.umath sources
  adding 'build/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy/config.h' to
 sources.
  adding 

Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-25 Thread David Cournapeau
On Wed, Nov 25, 2009 at 6:42 PM, Olivia Cheronet
cheronetoli...@yahoo.com wrote:
 The crt0.o file was indeed missing. I have reinstalled cygwin from the cygwin 
 setup.exe (as it seemed to be included therein), and it seems to have solved 
 that.

 compile options: '-Inumpy/core/include 
 -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/
 core/include/numpy -Inumpy/core/src -Inumpy/core/include 
 -I/usr/include/python2.
 5 -c'
 gcc: build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c
 numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
 numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
 error: Command gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall 
 -Wstrict-
 prototypes -Inumpy/core/include 
 -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/in
 clude/numpy -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 -c 
 bu
 ild/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c -o 
 build/temp.cygwin-1.
 5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.o 
 failed
  with exit status 1

Which version of the trunk are you using ? From the error, it looks
like a C99-style // comment (which should not be there), but I don't
see it in the last revision.

Could you put the content of the file
build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c as well ?

thanks,

David


Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-25 Thread Olivia Cheronet
The npy_math.c is attached here.

Cheers,

Olivia


- Original Message 
 On Wed, Nov 25, 2009 at 6:42 PM, Olivia Cheronet
 wrote:
  The crt0.o file was indeed missing. I have reinstalled cygwin from the 
  cygwin 
 setup.exe (as it seemed to be included therein), and it seems to have solved 
 that.
 
  compile options: '-Inumpy/core/include 
 -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/
  core/include/numpy -Inumpy/core/src -Inumpy/core/include 
 -I/usr/include/python2.
  5 -c'
  gcc: build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c
  numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
  numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
  error: Command gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall 
 -Wstrict-
  prototypes -Inumpy/core/include 
 -Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/in
  clude/numpy -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.5 
  -c 
 bu
  ild/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c -o 
 build/temp.cygwin-1.
  5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.o 
 failed
   with exit status 1
 
 Which version of the trunk are you using ? From the error, it looks
 like a C99-style // comment (which should not be there), but I don't
 see it in the last revision.
 
 Could you put the content of the file
 build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c as well ?
 
 thanks,
 
 David



  #line 1 "numpy/core/src/npy_math.c.src"

/*
 *
 **   This file was autogenerated from a template  DO NOT EDIT  **
 **   Changes should be made to the original source (.src) file **
 *
 */

#line 1
/*
 * vim:syntax=c
 * A small module to implement missing C99 math capabilities required by numpy
 *
 * Please keep this independant of python ! Only basic types (npy_longdouble)
 * can be used, otherwise, pure C, without any use of Python facilities
 *
 * How to add a function to this section
 * -------------------------------------
 *
 * Say you want to add `foo`, these are the steps and the reasons for them.
 *
 * 1) Add foo to the appropriate list in the configuration system. The
 *lists can be found in numpy/core/setup.py lines 63-105. Read the
 *comments that come with them, they are very helpful.
 *
 * 2) The configuration system will define a macro HAVE_FOO if your function
 *can be linked from the math library. The result can depend on the
 *optimization flags as well as the compiler, so can't be known ahead of
 *time. If the function can't be linked, then either it is absent, defined
 *as a macro, or is an intrinsic (hardware) function.
 *
 *i) Undefine any possible macros:
 *
 *#ifdef foo
 *#undef foo
 *#endif
 *
 *ii) Avoid as much as possible to declare any function here. Declaring
 *functions is not portable: some platforms define some function inline
 *with a non standard identifier, for example, or may put another
 *idendifier which changes the calling convention of the function. If you
 *really have to, ALWAYS declare it for the one platform you are dealing
 *with:
 *
 *Not ok:
 *double exp(double a);
 *
 *Ok:
 *#ifdef SYMBOL_DEFINED_WEIRD_PLATFORM
 *double exp(double);
 *#endif
 */

#include <Python.h>
#include <math.h>

#include "config.h"
#include "numpy/npy_math.h"

/*
 *
 ** BASIC MATH FUNCTIONS**
 *
 */

/* Original code by Konrad Hinsen.  */
#ifndef HAVE_EXPM1
static double expm1(double x)
{
double u = exp(x);
if (u == 1.0) {
return x;
} else if (u-1.0 == -1.0) {
return -1;
} else {
return (u-1.0) * x/log(u);
}
}
#endif

#ifndef HAVE_LOG1P
static double log1p(double x)
{
double u = 1. + x;
if (u == 1.0) {
return x;
} else {
return log(u) * x / (u - 1);
}
}
#endif
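For readers following along in Python, the two fallbacks above can be transcribed directly and checked against the C99-equivalent functions in the `math` module. This is just a sketch to illustrate the numerical trick (recovering the small-`x` accuracy from `exp`/`log`); in the C file the real guards are the `HAVE_*` macros.

```python
import math

# Python transcriptions of the C fallbacks above (Konrad Hinsen's trick)
def expm1_fallback(x):
    u = math.exp(x)
    if u == 1.0:
        return x
    elif u - 1.0 == -1.0:
        return -1.0
    # (u-1) carries the rounding of exp, log(u) cancels it out
    return (u - 1.0) * x / math.log(u)

def log1p_fallback(x):
    u = 1.0 + x
    if u == 1.0:
        return x
    return math.log(u) * x / (u - 1.0)

for x in (1e-12, 0.5, -0.3):
    assert abs(expm1_fallback(x) - math.expm1(x)) < 1e-13
    assert abs(log1p_fallback(x) - math.log1p(x)) < 1e-13
```

The division by `log(u)` (respectively `u - 1`) compensates for the rounding error committed when forming `u`, which is why these remain accurate for tiny `x` even though `exp(x)` alone loses all the digits.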

#ifndef HAVE_HYPOT
static double hypot(double x, double y)
{
double yx;

x = fabs(x);
y = fabs(y);
if (x < y) {
double temp = x;
x = y;
y = temp;
}
if (x == 0.)
return 0.;
else {
yx = y/x;
return x*sqrt(1.+yx*yx);
}
}
#endif
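The scaling step in the hypot fallback (divide by the larger magnitude before squaring) is what keeps the intermediate squares from overflowing. A quick Python illustration of the same idea, using nothing beyond the stdlib:

```python
import math

def hypot_fallback(x, y):
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x                 # ensure x holds the larger magnitude
    if x == 0.0:
        return 0.0
    r = y / x                       # r <= 1, so 1 + r*r cannot overflow
    return x * math.sqrt(1.0 + r * r)

big = 1e200                         # naively, big*big would overflow a double
assert abs(hypot_fallback(big, big) - math.hypot(big, big)) / math.hypot(big, big) < 1e-12
```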

#ifndef HAVE_ACOSH
static double acosh(double x)
{
return 2*log(sqrt((x+1.0)/2)+sqrt((x-1.0)/2));
}
#endif

#ifndef HAVE_ASINH
static double asinh(double xx)
{
double x, d;
int sign;
if (xx < 0.0) {
sign = -1;
x = -xx;
}
else {
sign = 1;
x = xx;
}
if (x > 1e8) {
d = x;
} else {
d = sqrt(x*x + 1);
}
return sign*log1p(x*(1.0 + x/(d+1)));
}
#endif

#ifndef HAVE_ATANH
static 

Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-25 Thread David Cournapeau
On Wed, Nov 25, 2009 at 7:07 PM, Olivia Cheronet
cheronetoli...@yahoo.com wrote:
 The npy_math.c is attached here.


I have just tested a fresh svn checkout, and could built numpy
correctly on cygwin. I would suggest you update your sources, and
build from scratch (i.e. remove the entire build directory and start
from scratch).

David


[Numpy-discussion] Bug in rec.fromarrays ; plus one other possible bug

2009-11-25 Thread Dan Yamins
Hi, I'm writing to report what look like two bugs in the handling of
strings of length 0.  (I'm using 1.4.0.dev7746, on Mac OS X 10.5.8. The
problems below occur both for python 2.5 compiled 32-bit as well as
python2.6 compiled 64-bit).


Bug #1:
A problem arises when you try to create a record array passing a type of
'|S0'.

 Cols = [['test']*10,['']*10]

When not passing any dtype, this is created into a recarray with no problem:

 np.rec.fromarrays(Cols)
rec.array([('test', ''), ('test', ''), ('test', ''), ('test', ''),
   ('test', ''), ('test', ''), ('test', ''), ('test', ''),
   ('test', ''), ('test', '')],
  dtype=[('f0', '|S4'), ('f1', '|S1')])

However, trouble arises when I try to pass a length-0 dtype explicitly.

 d = np.dtype([('A', '|S4'), ('B', '|S')])
 np.rec.fromarrays(Cols,dtype=d)
rec.array([('test', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est', ''),
   ('\x00est', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est', ''),
   ('\x00est', ''), ('\x00est', '')],
  dtype=[('A', '|S4'), ('B', '|S0')])

The same thing occurs if I cast to np arrays before passing to
np.rec.fromarrays:

 _arrays = [np.array(Cols[0],'|S4'),np.array(Cols[1],'|S')]
[array(['test', 'test', 'test', 'test', 'test', 'test', 'test', 'test',
   'test', 'test'],
  dtype='|S4'),
 array(['', '', '', '', '', '', '', '', '', ''],
  dtype='|S1')]

 np.rec.fromarrays(_arrays,dtype=d)
rec.array([('test', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est', ''),
   ('\x00est', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est', ''),
   ('\x00est', ''), ('\x00est', '')],
  dtype=[('A', '|S4'), ('B', '|S0')])

(Btw, why does np.array(['',''],'|S') return an array with dtype '|S1'?
Why not '|S0'?  Are length-0 arrays being avoided explicitly? If so, why?)


Bug #2:  I'm not sure this is a bug, but it is annoying: np.dtype won't
accept '|S0' as a type argument.

 np.dtype('|S0')
TypeError: data type not understood

I have to do something like this:

 d = np.dtype('|S')
 d
dtype('|S0')

to get what I want.   Is this intended?  Regardless, this inconsistency also
means that things like:

 np.dtype(d.descr)

can fail even when d is a properly constructed dtype object with a '|S0'
type, which seems a little perverse.
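For what it's worth, the workaround from the thread can be checked programmatically. This is a sketch of the '|S' spelling only; whether '|S0' itself is accepted varies by NumPy version, so it isn't asserted here:

```python
import numpy as np

# the workaround: spell the zero-length string type as '|S'
d0 = np.dtype('|S')
assert d0.itemsize == 0

# the same spelling works inside a structured dtype
d = np.dtype([('A', '|S4'), ('B', '|S')])
assert d['B'].itemsize == 0
```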


Am I just not supposed to be working with length-0 string columns, period?



Thanks,
Dan


Re: [Numpy-discussion] Bug in rec.fromarrays ; plus one other possible bug

2009-11-25 Thread John Hunter
On Wed, Nov 25, 2009 at 8:48 AM, Dan Yamins dyam...@gmail.com wrote:

 Am I just not supposed to be working with length-0 string columns, period?

But why would you want to?  Array dtypes are immutable, so you are
saying: "I want this field to be only empty strings now and forever."
So you can't initialize them to be empty and then fill them later.  If
by construction it is always empty, why have it at all?


Then again, numpy allows me to create empty arrays x = np.array([])



JDH


Re: [Numpy-discussion] Bug in rec.fromarrays ; plus one other possible bug

2009-11-25 Thread Pauli Virtanen
Wed, 2009-11-25 at 09:48 -0500, Dan Yamins wrote:
 Hi, I'm writing to report what looks like a two bugs in the handling
 of strings of length 0.  (I'm using 1.4.0.dev7746, on Mac OSX 10.5.8.
 The problems below occur both for python 2.5 compiled 32-bit as well
 as python2.6 compiled 64-bit). 
[clip]
  Cols = [['test']*10,['']*10]
 
 When not passing any dtype, this is created into a recarray with no
 problem:
 
  np.rec.fromarrays(Cols)
 rec.array([('test', ''), ('test', ''), ('test', ''), ('test', ''),
('test', ''), ('test', ''), ('test', ''), ('test', ''),
('test', ''), ('test', '')], 
   dtype=[('f0', '|S4'), ('f1', '|S1')])
 
 However, trouble arises when I try to pass a length-0 dtype
 explicitly.
 
  d = np.dtype([('A', '|S4'), ('B', '|S')])
  np.rec.fromarrays(Cols,dtype=d) 
 recarray([('test', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est',
 ''),
('\x00est', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est',
 ''),
('\x00est', ''), ('\x00est', '')], 
   dtype=[('A', '|S4'), ('B', '|S0')])

That certainly looks like a bug -- where does the \0 appear in front of
all but the first string?

File a ticket in Trac, http://projects.scipy.org/numpy/ -- small bugs
like this tend to get lost and forgotten in mailing list traffic.

-- 
Pauli Virtanen




Re: [Numpy-discussion] Bug in rec.fromarrays ; plus one other possible bug

2009-11-25 Thread Dan Yamins
('\x00est', ''), ('\x00est', ''), ('\x00est', ''), ('\x00est',
  ''),
 ('\x00est', ''), ('\x00est', '')],
dtype=[('A', '|S4'), ('B', '|S0')])

 That certainly looks like a bug -- where does the \0 appear in front of
 all but the first string?


Sorry, I'm not sure what you mean precisely ...


 File a ticket in Trac, http://projects.scipy.org/numpy/ -- small bugs
 like this tend to get lost and forgotten in mailing list traffic.


Will do.




 --
 Pauli Virtanen


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Python 2.6, NumPy on CentOS 5.3

2009-11-25 Thread Robert DeLisle
David,

It does indeed work now.  I also was able to find a repo package with the
atlas libraries, so I installed them as well.  It appears that everything
went well.

Thank you again for your assistance.

-Kirk




On Tue, Nov 24, 2009 at 8:02 PM, David Cournapeau courn...@gmail.com wrote:

 On Wed, Nov 25, 2009 at 2:11 AM, Robert DeLisle rkdeli...@gmail.com
 wrote:
  David Cournapeau wrote:
 
  You need the *.so and not the *.so.3.1.1. The latter are enough to *run*
  applications linked against the library, the former is necessary to link
  against them. IOW, you need the devel packages for blas, lapack (and
  python). If you want to do it without admin rights, there is no other
  solution than building them by yourself.
 
 
 
  David,
 
  Thank you so much!  That helps a lot and I don't know why I didn't think
 of
  it myself.  As root I'm able to install blas-devel and lapack-devel which
  gives me new libraries in /usr/lib64.  Specifically, libblas.a,
 liblapack.a,
  and liblapack_pic.a.  Now when I run setup.py from the numpy directory,
 it
  says that it finds the libraries in the appropriate locations.
 
  A few more issues:
 
  The configuration complains that there are no mkl, vml, guide libraries.
  Also, atlas libraries are not found.  I cannot seem to find these at this
  point.

 That's ok. Our build process is too verbose, and is a bit confusing.
 MKL and Atlas are just optimized BLAS/LAPACK, but they are not
 necessary to build scipy. They can significantly improve scipy speed
 for linear algebra stuff, but building Atlas or linking against the
 Intel MKL is complicated.

 cheers,

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



[Numpy-discussion] Correlation function about a factor of 100 slower than matlab/mathcad ... but with fft even worse ?

2009-11-25 Thread qubax
The correlation of a large data set (about 250k points) v can be computed
via

  correlate(v,v,mode='full')

and ought to give the same result as the matlab function

  xcorr(v)

FFT might speed up the evaluation ...

In my specific case:

  xcorr takes about 0.2 seconds.
  correlate takes about 70 seconds.
  fftconvolve takes about 400 seconds.

on the irc channel a second person happened to run into the same
problem using mathcad and data consisting of about 300k points:

  correl takes about 1 second
  correlate takes 127 seconds
  fftconvolve was aborted because it took too long

These tests were checked and confirmed by two other persons on the
irc channel. The computers involved were 32bit as well as 64bit
machines. All 4 persons are sure that lapack/atlas libraries are
properly installed.

Could someone please investigate why correlate and especially
fftconvolve are orders of magnitude slower?

Should more details / sample data be required, please let me know.

Thanks,
q


--

executed code:

tic=time.time()
cor_c1=correlate(c1data[:,1],c1data[:,1],mode='full')
toc=time.time()

tic=time.time()
cor_c1=fftconvolve(c1data[:,1],c1data[:,1],mode='full')
toc=time.time()

xcorr(data)


-- 
The king who needs to remind his people of his rank, is no king.

A beggar's mistake harms no one but the beggar. A king's mistake,
however, harms everyone but the king. Too often, the measure of
power lies not in the number who obey your will, but in the number
who suffer your stupidity.


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread James McEnerney


Pearu,
Thanks; a follow-up question.

Using fortran

      subroutine calc(j)
Cf2py intent(callback) pycalc
      external pycalc
Cf2py integer dimension(1), intent(in,out):: j
      integer j(1)
      print *, 'in fortran before pycalc ', 'j=', j(1)
      call pycalc(j)
      print *, 'in fortran after pycalc ', 'j=', j(1)
      end

in python

import foo, numpy

def pycalc(j):
    print ' in pycalc ', 'j=', j
    j[:] = 20*j
    return

print foo.calc.__doc__
j = 1 + numpy.array([2])
print foo.calc(j, pycalc)
print j

the output is

calc - Function signature:
  j = calc(j,pycalc,[pycalc_extra_args])
Required arguments:
  j : input rank-1 array('i') with bounds (1)
  pycalc : call-back function
Optional arguments:
  pycalc_extra_args := () input tuple
Return objects:
  j : rank-1 array('i') with bounds (1)
Call-back functions:
  def pycalc(j): return j
  Required arguments:
    j : input rank-1 array('i') with bounds (1)
  Return objects:
    j : rank-1 array('i') with bounds (1)

 in fortran before pycalc  j= 3
 in pycalc  j= [3]
 in fortran after pycalc  j= 60
[60]
[3]
Why is the return from foo.calc different from j?
How do I make them the same?
return j in pycalc doesn't change things.
Thanks again!
At 12:06 AM 11/25/2009, you wrote:

Pearu Peterson wrote:
 Hmm, regarding `intent(in, out) j`, this should work. I'll check
what
 is going on..
The `intent(in, out) j` works when pycalc is defined as
subroutine:
 call pycalc(i, j)
instead of
 pyreturn = pycalc(i, j)
Pearu
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org

http://mail.scipy.org/mailman/listinfo/numpy-discussion

Jim McEnerney
Lawrence Livermore National Laboratory
7000 East Ave.
Livermore, Ca. 94550-9234
USA
925-422-1963




Re: [Numpy-discussion] Correlation function about a factor of 100 slower than matlab/mathcad ... but with fft even worse ?

2009-11-25 Thread Pauli Virtanen
Wed, 2009-11-25 at 19:23 +0100, qu...@gmx.at wrote:
[clip]
 Could someone please investigate why correlate and especially
 fftconvolve are orders of magnitude slower?

Read http://projects.scipy.org/numpy/ticket/1260



[Numpy-discussion] Type Inference

2009-11-25 Thread Dan Yamins
Sometimes I need to convert object-type arrays to their natural, real
type, without a priori knowing what that type is, e.g.  the equivalent of:

 Y = np.array(X.tolist())

where X is the object array.  If X is naturally an array of ints, Y will be
an int array, if X is naturally strings, then Y will be '|Sn' where n is the
right string length, the right casting will be done if there are mixed
types, etc...

My question is: is there a faster way to do this, given that X is already an
object array.

One possible (though maybe not the only) way this might work is if there is
a faster lower-level numpy type-inference function that doesn't actually do
the conversion, but just reports the right type, the type that np.array()
*would* convert to. Does such a thing exist?  And if so, (let's call it
'np.typeinferer', hypothetically), would it then be faster to do something
like:

Y = np.array(X,np.typeinferer(X))

by-passing the step of converting the list-ified version of X to the new
type?   Or is the bulk of the work being done by np.array() the type
inference anyway, so that it just makes sense to use np.array() in the first
place?

Sorry if this question is unclear -- and thanks for all the help recently.

best,
Dan


Re: [Numpy-discussion] f2py callback bug?

2009-11-25 Thread Pearu Peterson

Hi James,

To answer the second question, use:

   j = 1+numpy.array([2], numpy.int32)

The answer to the first question is that
the type of 1+numpy.array([2]) is
numpy.int64 but Fortran function expects
an array of type numpy.int32 and hence
the wrapper makes a copy of the input
array (which is also returned by the wrapper)
before passing it to Fortran.

Regards,
Pearu
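The copy behavior Pearu describes can be seen from pure NumPy, with no Fortran involved. The sketch below assumes a 64-bit platform where the default integer dtype is int64 (on such platforms the first `asarray` must copy, the second need not):

```python
import numpy as np

j = 1 + np.array([2])               # default integer dtype (int64 on most 64-bit builds)
j32 = np.asarray(j, dtype=np.int32) # dtype mismatch: this makes a copy
j32[:] = 20 * j32
assert j[0] == 3 and j32[0] == 60   # the original never sees the update

k = 1 + np.array([2], np.int32)     # match the Fortran integer kind up front
k32 = np.asarray(k, dtype=np.int32) # same dtype: no copy, same buffer
k32[:] = 20 * k32
assert k[0] == 60                   # the update is visible through k
```

This is exactly why passing `numpy.array([2], numpy.int32)` makes the callback's in-place edits visible to the caller.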

James McEnerney wrote:
 Pearu,
 Thanks. a follow question.
 Using fortran
 
   subroutine calc(j)
 Cf2py intent(callback) pycalc
   external pycalc
 Cf2py integer dimension(1), intent(in,out):: j
 
   integer j(1)
   print *, 'in fortran before pycalc ', 'j=', j(1)
   call pycalc(j)
   print *, 'in fortran after pycalc ', ' j=', j(1)
   end
 
 in python
 
 import foo,numpy
 def pycalc(j):

 print ' in pycalc ', 'j=', j
 j[:] = 20*j
 return
 
 print foo.calc.__doc__
 j = 1+numpy.array([2])
 print foo.calc(j, pycalc)
 print j
 
 the output is
 
 calc - Function signature:
   j = calc(j,pycalc,[pycalc_extra_args])
 Required arguments:
   j : input rank-1 array('i') with bounds (1)
   pycalc : call-back function
 Optional arguments:
   pycalc_extra_args := () input tuple
 Return objects:
   j : rank-1 array('i') with bounds (1)
 Call-back functions:
   def pycalc(j): return j
   Required arguments:
 j : input rank-1 array('i') with bounds (1)
   Return objects:
 j : rank-1 array('i') with bounds (1)
 
  in fortran before pycalc j=   3
  in pycalc  j= [3]
  in fortran after pycalc  j=  60
 [60]
 [3]
 
 Why is the return from foo.calc different from j?
 How do I make them the same?
 return j in pycalc doesn't change things.
 
 Thanks again!
 
 At 12:06 AM 11/25/2009, you wrote:
 
 
 Pearu Peterson wrote:

  Hmm, regarding `intent(in, out) j`, this should work. I'll check what
  is going on..

 The `intent(in, out) j` works when pycalc is defined as subroutine:

call pycalc(i, j)

 instead of

pyreturn = pycalc(i, j)

 Pearu
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 Jim McEnerney
 Lawrence Livermore National Laboratory
 7000 East Ave.
 Livermore, Ca. 94550-9234
 
 USA
 
 925-422-1963
 
 
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Help making better use of numpy array functions

2009-11-25 Thread mdekauwe

I tried redoing the internal logic, for example by using the where function,
but I can't seem to work out how to match up the logic. For example (note:
slightly different from above), if I change the main loop to

lst = np.where((data > -900.0) & (lst < -900.0), data, lst)
lst = np.where((data > -900.0) & (lst > -900.0), 5.0, lst)

If I then had a data array value of 10.0 and an lst array value of -999.0,
the first statement would set the LST value to 10.0. However, when the
second statement runs, data is still bigger than the undefined value
(-999.0), but now so is the LST, so the lst gets set to 5.0.

This doesn't match with the logic of

if data[i,j] > -900.:
    if lst[i,j] < -900.:
        lst[i,j] = data[i,j]
    elif lst[i,j] > -900.:
        lst[i,j] = 5.0

as it would never get to the second branch under this logic.

Does anyone have any suggestions on how to match up the logic?
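One way to reproduce that if/elif in array form (a sketch with made-up sample values): evaluate both masks against the *original* lst before either update touches it, so the second statement can't see the first one's changes:

```python
import numpy as np

data = np.array([10.0, 10.0, -999.0])
lst = np.array([-999.0, 3.0, 4.0])

valid = data > -900.0
undef = lst < -900.0            # freeze the test on lst before any update
lst = np.where(valid & undef, data, lst)
lst = np.where(valid & ~undef, 5.0, lst)
assert np.allclose(lst, [10.0, 5.0, 4.0])
```

The first element takes the data value (lst was undefined), the second becomes 5.0 (both were valid), and the third is untouched (data was undefined), matching the scalar branches.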



mdekauwe wrote:
 
 Hi, I have written some code and I would appreciate any suggestions to make
 better use of the numpy array functions, to make it a bit more efficient
 and less of a port from C. Any tricks or thoughts would be much
 appreciated.
 
 The code reads in a series of images, collects a running total if the
 value is valid and averages everything every 8 images.
 
 Code below...
 
 Thanks in advance...
 
 #!/usr/bin/env python

 """
 Average the daily LST values to make an 8 day product...

 Martin De Kauwe
 18th November 2009
 """
 import sys, os, glob
 import numpy as np
 import matplotlib.pyplot as plt


 def averageEightDays(files, lst, count, numrows, numcols):

     day_count = 0
     # starting day - tag for output file
     doy = 122

     # loop over all the images and average all the information
     # every 8 days...
     for fname in glob.glob(os.path.join(path, files)):
         current_file = fname.split('/')[8]
         year = current_file.split('_')[2][0:4]

         try:
             f = open(fname, 'rb')
         except IOError:
             print "Cannot open outfile for read", fname
             sys.exit(1)

         data = np.fromfile(f, dtype=np.float32).reshape(numrows, numcols)
         f.close()

         # if it is day 1 then we need to fill up the holding array...
         # copy the first file into the array...
         if day_count == 0:
             lst = data
             for i in xrange(numrows):
                 for j in xrange(numcols):
                     # increase the pixel count if we got valid data
                     # (undefined = -999.0)
                     if lst[i,j] > -900.:
                         count[i,j] = 1
             day_count += 1

         # keep a running total of valid lst values in an 8 day sequence
         elif day_count > 0 and day_count <= 7:
             for i in xrange(numrows):
                 for j in xrange(numcols):
                     # lst valid pixel?
                     if data[i,j] > -900.:
                         # was the existing pixel valid?
                         if lst[i,j] < -900.:
                             lst[i,j] = data[i,j]
                             count[i,j] = 1
                         else:
                             lst[i,j] += data[i,j]
                             count[i,j] += 1
             day_count += 1

         # need to average previous 8 days and write to a file...
         if day_count == 8:
             for i in xrange(numrows):
                 for j in xrange(numcols):
                     if count[i,j] > 0:
                         lst[i,j] = lst[i,j] / count[i,j]
                     else:
                         lst[i,j] = -999.0

             day_count = 0
             doy += 8

             lst, count = write_outputs(lst, count, year, doy)

     # it is possible we didn't have enough slices to do the last 8day avg...
     # but if we have more than one day we shall use it
     # need to average previous 8 days and write to a file...
     if day_count > 1:
         for i in xrange(numrows):
             for j in xrange(numcols):
                 if count[i,j] > 0:
                     lst[i,j] = lst[i,j] / count[i,j]
                 else:
                     lst[i,j] = -999.0

         day_count = 0
         doy += 8

         lst, count = write_outputs(lst, count, year, doy)

 def write_outputs(lst, count, year, doy):
     path = "/users/eow/mgdk/research/HOFF_plots/LST/8dayLST"
     outfile = "lst_8day1030am_" + str(year) + str(doy) + ".gra"

     try:
         of = open(os.path.join(path, outfile), 'wb')
     except IOError:
         print "Cannot open outfile for write", outfile
         sys.exit(1)

     outfile2 = "pixelcount_8day1030am_" + str(year) + str(doy) + ".gra"
     try:
         of2 = open(os.path.join(path, 

Re: [Numpy-discussion] Help making better use of numpy array functions

2009-11-25 Thread josef . pktd
On Wed, Nov 25, 2009 at 4:13 PM, mdekauwe mdeka...@gmail.com wrote:

 I tried redoing the internal logic for example by using the where function
 but I can't seem to work out how to match up the logic. For example (note
 slightly different from above). If I change the main loop to

 lst = np.where((data > -900.0) & (lst < -900.0), data, lst)
 lst = np.where((data > -900.0) & (lst > -900.0), 5.0, lst)

does an intermediate array help?

lst2 = np.where((data > -900.0) & (lst < -900.0), data, lst)
lst = np.where((data > -900.0) & (lst > -900.0), 5.0, lst2)

I didn't read through all the loops to see what would be a better way to do it.

Josef
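For illustration, a small sketch (with made-up pixel values; the array names are mine) of why the intermediate array restores the sequential if/elif semantics:

```python
import numpy as np

data = np.array([10.0, 10.0, -999.0])   # valid-over-invalid, valid-over-valid, invalid
lst = np.array([-999.0, 3.0, 7.0])

# the second condition tests the ORIGINAL lst, so a pixel just filled by
# the first rule is not immediately overwritten by the second
lst2 = np.where((data > -900.0) & (lst < -900.0), data, lst)
out = np.where((data > -900.0) & (lst > -900.0), 5.0, lst2)
print(out)   # [10.  5.  7.]
```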


 If I then had a data array value of 10.0 and lst array value of -999.0. In
 this scenario the first statement would set the LST value to 10.0. However
 when you run the second statement, data is still bigger than the undefined
 (-999.0) but now so is the LST, hence the lst is set to 5.0

 This doesn't match with the logic of

 if data[i,j] > -900.:
    if lst[i,j] < -900.:
        lst[i,j] = data[i,j]
    elif lst[i,j] > -900.:
        lst[i,j] = 5.0

 as it would never get to the second branch under this logic.

 Does anyone have any suggestions on how to match up the logic?



 mdekauwe wrote:

 Hi I have written some code and I would appreciate any suggestions to make
 better use of the numpy array functions to make it a bit more efficient
 and less of a port from C. Any tricks or thoughts would be much
 appreciated.

 The code reads in a series of images, collects a running total if the
 value is valid and averages everything every 8 images.

 Code below...

 Thanks in advance...

 #!/usr/bin/env python

 
  """
  Average the daily LST values to make an 8 day product...

  Martin De Kauwe
  18th November 2009
  """
 import sys, os, glob
 import numpy as np
 import matplotlib.pyplot as plt


 def averageEightDays(files, lst, count, numrows, numcols):

     day_count = 0
     # starting day - tag for output file
     doy = 122

     # loop over all the images and average all the information
     # every 8 days...
     for fname in glob.glob(os.path.join(path, files)):
         current_file = fname.split('/')[8]
         year = current_file.split('_')[2][0:4]

         try:
             f = open(fname, 'rb')
         except IOError:
              print "Cannot open outfile for read", fname
             sys.exit(1)

         data = np.fromfile(f, dtype=np.float32).reshape(numrows, numcols)
         f.close()

         # if it is day 1 then we need to fill up the holding array...
         # copy the first file into the array...
          if day_count == 0:
              lst = data
              for i in xrange(numrows):
                  for j in xrange(numcols):
                      # increase the pixel count if we got valid data
                      # (undefined = -999.0)
                      if lst[i,j] > -900.:
                          count[i,j] = 1
              day_count += 1

         # keep a running total of valid lst values in an 8 day sequence
          elif day_count > 0 and day_count <= 7:
             for i in xrange(numrows):
                 for j in xrange(numcols):
                     # lst valid pixel?
                      if data[i,j] > -900.:
                         # was the existing pixel valid?
                          if lst[i,j] < -900.:
                             lst[i,j] = data[i,j]
                             count[i,j] = 1
                         else:
                             lst[i,j] += data[i,j]
                             count[i,j] += 1
              day_count += 1

         # need to average previous 8 days and write to a file...
         if day_count == 8:
             for i in xrange(numrows):
                 for j in xrange(numcols):
                      if count[i,j] > 0:
                         lst[i,j] = lst[i,j] / count[i,j]
                     else:
                         lst[i,j] = -999.0

             day_count = 0
             doy += 8

             lst, count = write_outputs(lst, count, year, doy)

      # it is possible we didn't have enough slices to do the last 8-day avg...
      # but if we have more than one day we shall use it
      # need to average previous 8 days and write to a file...
      if day_count > 1:
         for i in xrange(numrows):
             for j in xrange(numcols):
                  if count[i,j] > 0:
                     lst[i,j] = lst[i,j] / count[i,j]
                 else:
                     lst[i,j] = -999.0

         day_count = 0
         doy += 8

         lst, count = write_outputs(lst, count, year, doy)
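 As a sketch of the kind of vectorization this thread is after (the function and variable names here are mine, not from the post): the per-pixel loops in averageEightDays can be replaced by boolean masks, assuming the same -999.0 undefined sentinel:

```python
import numpy as np

def accumulate_valid(lst, count, data):
    """One day's update without loops: where data is valid, either start
    a new running total (previous lst invalid) or add to the existing one."""
    valid = data > -900.0
    fresh = valid & (lst < -900.0)   # first valid observation for this pixel
    keep = valid & (lst > -900.0)    # pixel already holds a running total
    lst = np.where(fresh, data, np.where(keep, lst + data, lst))
    count = np.where(fresh, 1, np.where(keep, count + 1, count))
    return lst, count

def average_valid(lst, count):
    """8-day average: mean where any valid pixels were seen, else -999.0."""
    return np.where(count > 0, lst / np.maximum(count, 1), -999.0)

lst = np.array([-999.0, 3.0])
count = np.array([0, 2])
lst, count = accumulate_valid(lst, count, np.array([10.0, 4.0]))
print(lst, count)   # [10.  7.] [1 3]
```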

 def write_outputs(lst, count, year, doy):
      path = "/users/eow/mgdk/research/HOFF_plots/LST/8dayLST"
      outfile = "lst_8day1030am_" + str(year) + str(doy) + ".gra"

     try:
         of = open(os.path.join(path, outfile), 'wb')
     except IOError:
          print "Cannot open outfile for write", outfile
         sys.exit(1)

      outfile2 = "pixelcount_8day1030am_" + str(year) + str(doy) + ".gra"
     try:
         of2 = 

Re: [Numpy-discussion] Help making better use of numpy array functions

2009-11-25 Thread Pierre GM
On Nov 25, 2009, at 4:13 PM, mdekauwe wrote:
 I tried redoing the internal logic for example by using the where function
 but I can't seem to work out how to match up the logic. For example (note
 slightly different from above). If I change the main loop to
 
 lst = np.where((data > -900.0) & (lst < -900.0), data, lst)
 lst = np.where((data > -900.0) & (lst > -900.0), 5.0, lst)
 
 If I then had a data array value of 10.0 and lst array value of -999.0. In
 this scenario the first statement would set the LST value to 10.0. However
 when you run the second statement, data is still bigger than the undefined
 (-999.0) but now so is the LST, hence the lst is set to 5.0
 
 This doesn't match with the logic of
 
 if data[i,j] > -900.:
    if lst[i,j] < -900.:
        lst[i,j] = data[i,j]
    elif lst[i,j] > -900.:
        lst[i,j] = 5.0
 
 as it would never get to the second branch under this logic.
 
 Does anyone have any suggestions on how to match up the logic?
 


np.where(data > -900, np.where(lst < -900, data, 5.), lst)

Note that this should fail if lst == -900 exactly: the if/elif chain leaves such a pixel unchanged, while the nested where sets it to 5.
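A quick check (with made-up pixel values) that the nested where matches the loop logic, including the lst == -900 edge case:

```python
import numpy as np

data = np.array([10.0, 10.0, -999.0, 10.0])
lst = np.array([-999.0, 3.0, 7.0, -900.0])

# outer where: only touch pixels with valid data;
# inner where: replace an invalid lst with data, flag a valid one with 5.
res = np.where(data > -900, np.where(lst < -900, data, 5.0), lst)
print(res)   # [10.  5.  7.  5.]  -- the if/elif would leave the last pixel at -900.
```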
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Correlation function about a factor of 100 slower than matlab/mathcad ... but with fft even worse ?

2009-11-25 Thread qubax
My own solution (i just heard that a very similar fix is (about to
be) included in the new svn version) - for those who need a quickfix:

*) This quick and dirty solution is about a factor of 300 faster
for an input of 10^5 random numbers. Probably a lot more for larger
vectors.

*) The deviation from correlate(v,v,mode='full') is max. about 10^-10 and
similar to the difference between conv() and xcorr() in matlab.


The trick: please notice the fft(x, 2**__nextPowOf2(...)) part.


--

# imports assumed by the snippet (the original post does not show them):
from numpy import real, append, ceil, log2
from numpy.fft import fft, ifft

def myXcorrfun(x):
  len_x = len(x)   # was len(a): the function should use its own argument
  maxlag = len_x - 1
  c = ifft(abs(fft(x, 2**__nextPowOf2(2*len_x - 1)))**2)
  # force it to be real
  c = real(c)
  c_final = append(c[-maxlag:], c[0:maxlag+1])
  return c_final

def __nextPowOf2(x):
  # round up what remains after log2 ... that should be
  # the next highest power of 2 for the given number
  # (cast to int so it can be used as a power for the fft length)
  return int(ceil(log2(x)))
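As a sanity check, the same Wiener-Khinchin construction in self-contained form, verified against the direct correlation (a sketch with names of my own choosing, not the poster's code):

```python
import numpy as np

def xcorr_fft(x):
    """Autocorrelation via FFT, zero-padded to the next power of two."""
    n = len(x)
    nfft = 1 << int(np.ceil(np.log2(2 * n - 1)))   # next power of 2 >= 2n-1
    c = np.fft.ifft(np.abs(np.fft.fft(x, nfft)) ** 2).real
    # reorder the circular result into lags -(n-1) ... (n-1)
    return np.concatenate([c[-(n - 1):], c[:n]])

x = np.random.rand(256)
assert np.allclose(xcorr_fft(x), np.correlate(x, x, mode='full'))
```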



you can give it a quick try via (this also assumes import time,
from numpy import random, and a correlate() from numpy or scipy --
the post does not show these imports):


expon = range(5)

for k in expon:
  a = random.rand(10**k)
  print 'testing with 10^',k,' random numbers'

  tic=time.time()
  myc = myXcorrfun(a)
  mytime = time.time()-tic
  print mytime,'seconds for myXcorr'

  tic=time.time()
  pyc = correlate(a,a,mode='full')
  pytime = time.time()-tic
  print pytime,'seconds for scipy correlate'
  print
  print 'estimated speedup:', int(pytime/mytime)
  print max(abs(myc-pyc)),' max deviation between the two'
  print

I hope it helps one or the other out there.

With best regards,
q

On Wed, Nov 25, 2009 at 09:09:50PM +0200, Pauli Virtanen wrote:
 On Wed, 2009-11-25 at 19:23 +0100, qu...@gmx.at wrote:
 [clip]
  Could someone please investigate why correlate and especially
  fftconvolve are orders of magnitude slower?
 
 Read http://projects.scipy.org/numpy/ticket/1260
 

-- 
The king who needs to remind his people of his rank, is no king.

A beggar's mistake harms no one but the beggar. A king's mistake,
however, harms everyone but the king. Too often, the measure of
power lies not in the number who obey your will, but in the number
who suffer your stupidity.


[Numpy-discussion] The trunk has been branched for 1.4.0

2009-11-25 Thread David Cournapeau
Hi,

I have finally branched the trunk into the 1.4.x branch. I have
disabled the C API for datetime, and fixed the C API/ABI numbers. At
this point, you should avoid committing anything which is not a high
priority bug in the branch. I will prepare a first rc1  based on the
branch,

cheers,

David


Re: [Numpy-discussion] Type Inference

2009-11-25 Thread Charles R Harris
On Wed, Nov 25, 2009 at 12:34 PM, Dan Yamins dyam...@gmail.com wrote:

 Sometimes I need to convert object-type arrays to their natural, real
 type, without a priori knowing what that type is, e.g.  the equivalent of:

  Y = np.array(X.tolist())

 where X is the object array.  If X is naturally an array of ints, Y will be
 an int array, if X is naturally strings, then Y will be '|Sn' where n is the
 right string length, the right casting will be done if there are mixed
 types, etc...

 My question is: is there a faster way to do this, given that X is already
 an object array.

 One possible (though maybe not the only) way this might work is if there is
 a faster lower-level numpy type-inference function that doesn't actually do
 the conversion, but just reports the right type, the type that np.array()
 *would* convert to. Does such a thing exist?  And if so, (let's call it
 'np.typeinferer', hypothetically), would it then be faster to do something
 like:

 Y = np.array(X,np.typeinferer(X))

 by-passing the step of converting the list-ified version of X to the new
 type?   Or is the bulk of the work being done by np.array() the type
 inference anyway, so that it just makes sense to use np.array() in the first
 place?

 Sorry if this question is unclear -- and thanks for all the help recently.


This seems to work:

In [5]: x = ones(2, dtype=object)

In [6]: x
Out[6]: array([1, 1], dtype=object)

In [7]: array(x, dtype=int)
Out[7]: array([1, 1])
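The tolist() round-trip described in the question also infers a natural string dtype for object arrays of strings; a small sketch with illustrative values:

```python
import numpy as np

x = np.array(['ab', 'cdef'], dtype=object)
y = np.array(x.tolist())
# dtype is inferred from the contents, e.g. '<U4' on Python 3
# ('|S4' on Python 2): width taken from the longest string
print(y.dtype)
```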

Chuck