[Numpy-discussion] Numpify this?

2008-05-18 Thread Matt Crane
Hey,

I'm new to numpy but not new to python or programming in general. I
was wondering if there's a way of using numpy to do the following or
whether I've got what I've got and that's as good as it's going to
get.

I have two 2d arrays and I want to create another 2d array that
contains the values from the 2nd column of the first two arrays where
the values in the 1st column match. To elaborate with an example - if
I had an array a:

array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])
and an array b:

array([[2834, 3], [3282, 5], [, 5], [9458, 3], [, 4], [1,
5], [12345, 1]])

then I'd want the result to be

array([[1, 3],    # from 2834
       [3, 5],    # from 3282
       [2, 3]])   # from 9458

This is what I have at the moment:

results = []
while aind < amax and bind < bmax:
    if a[aind, 0] < b[bind, 0]:
        aind += 1
    elif a[aind, 0] > b[bind, 0]:
        bind += 1
    else:
        results.append([a[aind, 1], b[bind, 1]])
        aind += 1
        bind += 1
results = array(results)

Where aind = bind = 0, amax = a.shape[0] and bmax = b.shape[0].
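
The loop above is a standard merge join of two key-sorted tables. A runnable, self-contained version with that setup filled in (the example b is trimmed to its legible rows plus one made-up key, since some values were lost above):

```python
from numpy import array

a = array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])
b = array([[2834, 3], [3282, 5], [9458, 3], [12345, 1]])  # trimmed example data

aind = bind = 0
amax, bmax = a.shape[0], b.shape[0]
results = []
# Advance whichever cursor points at the smaller key; on a match, record
# both second-column values and advance both cursors.
while aind < amax and bind < bmax:
    if a[aind, 0] < b[bind, 0]:
        aind += 1
    elif a[aind, 0] > b[bind, 0]:
        bind += 1
    else:
        results.append([a[aind, 1], b[bind, 1]])
        aind += 1
        bind += 1
results = array(results)
print(results)
# [[1 3]
#  [3 5]
#  [2 3]]
```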

Any tips/pointers/speedups?

Cheers,
Matt
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Robert Kern
On Sun, May 18, 2008 at 2:04 AM, Matt Crane <[EMAIL PROTECTED]> wrote:
> Hey,
>
> I'm new to numpy but not new to python or programming in general. I
> was wondering if there's a way of using numpy to do the following or
> whether I've got what I've got and that's as good as it's going to
> get.
>
> I have two 2d arrays and I want to create another 2d array that
> contains the values from the 2nd column of the first two arrays where
> the values in the 1st column match. To elaborate with an example - if
> I had an array a:
>
> array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])
> and an array b:
>
> array([[2834, 3], [3282, 5], [, 5], [9458, 3], [, 4], [1,
> 5], [12345, 1]])
>
> then I'd want the result to be
>
> array([[1, 3],   # from 2834
>   [3, 5],# from 3282
>   [2, 3]])   # from 9458

Are the matching rows always going to be the same row in each? I.e.
you want rows i such that a[i,0]==b[i,0] rather than trying to find
all i,j such that a[i,0]==b[j,0]?

If so, then I would do the following:


In [1]: from numpy import *

In [2]: a = array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])

In [3]: b = array([[2834, 3], [3282, 5], [, 5], [9458, 3], [,
4], [1,
   ...: 5], [12345, 1]])

In [4]: minlength = min(a.shape[0], b.shape[0])

In [5]: matching = nonzero(a[:minlength,0] == b[:minlength,0])[0]

In [6]: matching
Out[6]: array([0, 1, 3])

In [7]: column_stack([a[matching,1], b[matching,1]])
Out[7]:
array([[1, 3],
       [3, 5],
       [2, 3]])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Matt Crane
On Sun, May 18, 2008 at 7:19 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> Are the matching rows always going to be the same row in each? I.e.
> you want rows i such that a[i,0]==b[i,0] rather than trying to find
> all i,j such that a[i,0]==b[j,0]?
>
> If so, then I would do the following:
>
>
> In [1]: from numpy import *
>
> In [2]: a = array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])
>
> In [3]: b = array([[2834, 3], [3282, 5], [, 5], [9458, 3], [,
> 4], [1,
>   ...: 5], [12345, 1]])
>
> In [4]: minlength = min(a.shape[0], b.shape[0])
>
> In [5]: matching = nonzero(a[:minlength,0] == b[:minlength,0])[0]
>
> In [6]: matching
> Out[6]: array([0, 1, 3])
>
> In [7]: column_stack([a[matching,1], b[matching,1]])
> Out[7]:
> array([[1, 3],
>   [3, 5],
>   [2, 3]])
>

Sorry, I should have mentioned that no, the matching rows won't always
be in the same position.


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Robert Kern
On Sun, May 18, 2008 at 2:59 AM, Matt Crane <[EMAIL PROTECTED]> wrote:
> Sorry, I should have mentioned that no, the matching rows won't always
> be in the same position.

Okay, then it's just a little bit more complicated.


In [18]: from numpy import *
In [19]: a = array([[1, 10], [2, 20], [3, 30], [3, 40], [4, 50]])

In [20]: b = array([[2, 60], [1, 70], [5, 80], [6, 90], [7, 100], [3, 110]])

In [21]: m = a[:,0] == b[:,0][:,newaxis]

In [22]: m
Out[22]:
array([[False,  True, False, False, False],
       [ True, False, False, False, False],
       [False, False, False, False, False],
       [False, False, False, False, False],
       [False, False, False, False, False],
       [False, False,  True,  True, False]], dtype=bool)

In [23]: i, j = nonzero(m)

In [24]: i
Out[24]: array([0, 1, 5, 5])

In [25]: j
Out[25]: array([1, 0, 2, 3])

In [26]: column_stack([a[j,1], b[i,1]])
Out[26]:
array([[ 20,  60],
       [ 10,  70],
       [ 30, 110],
       [ 40, 110]])




Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Matt Crane
On Sun, May 18, 2008 at 8:08 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> Okay, then it's just a little bit more complicated.

Thanks, but will that be faster? The method that I posted is linear
in the lengths of the two lists. Given that the values in the first
column are monotonically increasing (again, something I should have
mentioned -- I blame a lack of caffeine), could we make it faster?

Thanks for everything up to this point, though.
Matt


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Robert Kern
On Sun, May 18, 2008 at 3:29 AM, Matt Crane <[EMAIL PROTECTED]> wrote:
> On Sun, May 18, 2008 at 8:08 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>> Okay, then it's just a little bit more complicated.
>
> Thanks, and that's going to be faster - the method that I posted is
> linear in terms of the length of the two lists?

It depends on the sizes.

> Given that the values
> in the first column are monotonically increasing (again something I
> should have mentioned -- I blame a lack of caffeine) - could we make
> it faster?

Are there repeats?



Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Matt Crane
On Sun, May 18, 2008 at 8:52 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> It depends on the sizes.
The sizes could range from 3 to 24 with an average of around 5500.

> Are there repeats?
No, no repeats in the first column.

I'm going to go get a cup of coffee before I forget and leave out any
potentially vital information again. It's going to be a long day.

Thanks,
Matt


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Anne Archibald
2008/5/18 Matt Crane <[EMAIL PROTECTED]>:
> On Sun, May 18, 2008 at 8:52 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>> Are there repeats?
> No, no repeats in the first column.
>
> I'm going to go get a cup of coffee before I forget to leave out any
> potentially vital information again. It's going to be a long day.

It can be done, though I had to be kind of devious. My solution may
even beat O(n log n), depending on how mergesort handles presorted input:

import numpy as np

def find_matches(A, B):
    """Find positions A_idx and B_idx so that A[A_idx] == B[B_idx].

    A and B should be sorted arrays with no repeats; A_idx and
    B_idx will be all the places they agree.

    >>> import numpy as np
    >>> A = np.array([1, 2, 4, 5, 7])
    >>> B = np.array([1, 3, 5, 9])
    >>> find_matches(A, B)
    (array([0, 3]), array([0, 2]))
    """
    AB = np.concatenate((A, B))
    # A stable sort keeps each A element ahead of an equal B element.
    idx = np.argsort(AB, kind="mergesort")
    sorted_AB = AB[idx]
    pairs = np.where(np.diff(sorted_AB) == 0)[0]
    A_pairs = idx[pairs]
    B_pairs = idx[pairs + 1] - len(A)

    if np.any(A_pairs >= len(A)):
        raise ValueError("first array contains a repeated element")
    if np.any(B_pairs < 0):
        raise ValueError("second array contains a repeated element")

    return A_pairs, B_pairs

The idea is that diff is not a bad way to find repeats in a sequence;
so concatenate the two sequences and sort them (if you're lucky
mergesort will be fast because they're individually sorted, but in any
case we need stability). Of course, you have to take the pairs in the
sorted sequence and figure out where they came from, but fortunately
you don't even need to invert the permutation returned by argsort (do
we have a tool to invert permutations, or should one just use argsort
again?).
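
On the closing question: applying argsort to the permutation returned by argsort does invert it, so no separate tool is needed. A small sketch:

```python
import numpy as np

x = np.array([30, 10, 50, 20, 40])
perm = np.argsort(x)    # positions that sort x
inv = np.argsort(perm)  # argsort of a permutation is its inverse

# inv[i] says where element i of x lands in the sorted order, so
# composing the two permutations gives back the identity / original x.
assert np.array_equal(perm[inv], np.arange(len(x)))
assert np.array_equal(x[perm][inv], x)
```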

Anne


Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Robert Kern
On Sun, May 18, 2008 at 4:02 AM, Matt Crane <[EMAIL PROTECTED]> wrote:
> On Sun, May 18, 2008 at 8:52 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>> It depends on the sizes.
> The sizes could range from 3 to 24 with an average of around 5500.

A 24x24 boolean matrix will probably be too slow.

>> Are there repeats?
> No, no repeats in the first column.

Great! So let's use searchsorted() to find potential indices where the
two first columns are equal. We pull out the values at those indices
and actually do the comparison to get a boolean mask where there is an
equality. Do both a.searchsorted(b) and b.searchsorted(a) to get the
appropriate masks on b and a respectively. The number of True elements
will be the same for both. Now just apply the masks to the second
columns.


In [20]: a = array([[2, 10], [4, 20], [6, 30], [8, 40], [10, 50]])

In [21]: b = array([[2, 60], [3, 70], [4, 80], [5, 90], [8, 100], [10, 110]])

In [22]: a[b[b[:,0].searchsorted(a[:,0]),0] == a[:,0], 1]
Out[22]: array([10, 20, 40, 50])

In [23]: b[a[a[:,0].searchsorted(b[:,0]),0] == b[:,0], 1]
Out[23]: array([ 60,  80, 100, 110])

In [24]: column_stack([Out[22], Out[23]])
Out[24]:
array([[ 10,  60],
       [ 20,  80],
       [ 40, 100],
       [ 50, 110]])
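
The same recipe can be wrapped as a function. This sketch (my own naming, not from the thread) adds an np.clip guard so that a key larger than everything in the other array does not produce an out-of-range index, which the bare one-liners above can do:

```python
import numpy as np

def match_second_columns(a, b):
    """Pair a[:, 1] with b[:, 1] wherever the first columns match.

    Assumes the first columns of both arrays are sorted with no repeats.
    """
    # searchsorted can return len(other); clip so indexing stays valid
    # (a clipped miss simply fails the equality test below).
    ia = np.clip(b[:, 0].searchsorted(a[:, 0]), 0, len(b) - 1)
    ib = np.clip(a[:, 0].searchsorted(b[:, 0]), 0, len(a) - 1)
    mask_a = b[ia, 0] == a[:, 0]  # rows of a whose key occurs in b
    mask_b = a[ib, 0] == b[:, 0]  # rows of b whose key occurs in a
    return np.column_stack([a[mask_a, 1], b[mask_b, 1]])

a = np.array([[2, 10], [4, 20], [6, 30], [8, 40], [10, 50]])
b = np.array([[2, 60], [3, 70], [4, 80], [5, 90], [8, 100], [10, 110]])
print(match_second_columns(a, b))
# [[ 10  60]
#  [ 20  80]
#  [ 40 100]
#  [ 50 110]]
```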



[Numpy-discussion] numpy with icc

2008-05-18 Thread rex

> I am trying to build numpy with intel icc and mkl.  I don't understand
> a lot of what I am doing.  

Me, too. I have built it with icc & MKL several times in the past,
but cannot build the numpy svn with MKL now. I can build it with
icc and no MKL, and it passes all the tests with no errors.

I've interleaved my setup with yours to make it easy to compare them. 


> --
> site.cfg:

> [DEFAULT]
> library_dirs = 
> /opt/intel/mkl/10.0.3.020/lib/em64t,/usr/lib64,/usr/local/lib64,/usr/local/python2.5.2-intel/lib,/usr/lib,/usr/local/lib
> include_dirs = 
> /opt/intel/mkl/10.0.3.020/include,/usr/include,/usr/local/include,/usr/local/python2.5.2-intel/include

> [mkl]
> include_dirs = /opt/intel/mkl/10.0.3.020/include
> library_dirs = /opt/intel/mkl/10.0.3.020/lib/em64t
> lapack_libs = mkl_lapack

> [lapack_src]
> libraries=mkl_lapack,mkl,guide

> [lapack_info]
> libraries=mkl_lapack,mkl,guide
> ---

[DEFAULT]
library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
include_dirs = /opt/intel/mkl/10.0.3.020/10.0.3.020/include

[mkl]
library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
lapack_libs = mkl, mkl_lapack, guide

# My reading of the Intel MKL docs suggests that vml does not need
# to be explicitly added, but that mkl_lapack does.


> intelccompiler.py:

> from distutils.unixccompiler import UnixCCompiler
> from numpy.distutils.exec_command import find_executable

> class IntelCCompiler(UnixCCompiler):
>
>     """A modified Intel compiler compatible with a gcc-built Python."""
>
>     compiler_type = 'intel'
>     cc_exe = 'icc -g -O3 -w -fPIC -parallel -ipo -xT -axT'

cc_exe = 'icc -msse3 -fast'  # adjust to suit your CPU

I think some of those flags can be omitted: drop -g, and -fast
implies several other flags, e.g., -xT, IIRC. (I'm building on a
Core 2 Duo)


>     def __init__(self, verbose=0, dry_run=0, force=0):
>         UnixCCompiler.__init__(self, verbose, dry_run, force)
>         compiler = self.cc_exe
>         self.set_executables(compiler=compiler,
>                              compiler_so=compiler,
>                              compiler_cxx=compiler,
>                              linker_exe=compiler,
>                              linker_so=compiler + ' -shared')

> class IntelItaniumCCompiler(IntelCCompiler):
>     compiler_type = 'intele'
>
>     # On Itanium, the Intel compiler used to be called ecc; search for
>     # it (now it's also icc, so ecc is last in the search).
>     for cc_exe in map(find_executable, ['icc', 'ecc']):
>         if cc_exe:
>             break
> 

> system_info.py:

> 
> class lapack_mkl_info(mkl_info):
>
>     def calc_info(self):
>         mkl = get_info('mkl')
>         if not mkl:
>             return
>         if sys.platform == 'win32':
>             lapack_libs = self.get_libs('lapack_libs', ['mkl_lapack'])
>         else:
>             lapack_libs = self.get_libs('lapack_libs', ['mkl_lapack'])
>
>         info = {'libraries': lapack_libs}
>         dict_append(info, **mkl)
>         self.set_info(**info)
> ...
> 

I made the same change in the above. It was originally:

if sys.platform == 'win32':
    lapack_libs = self.get_libs('lapack_libs', ['mkl_lapack'])
else:
    lapack_libs = self.get_libs('lapack_libs', ['mkl_lapack32', 'mkl_lapack64'])

Which does not work because all 3 libs are now named mkl_lapack.

I made another change in system_info.py:

class mkl_info(system_info):
    section = 'mkl'
    dir_env_var = 'MKL'
    _lib_mkl = ['mkl', 'mkl_lapack', 'guide']

It originally had 'vml' instead of 'mkl_lapack'.


> i then use this command to compile:

> /usr/local/python2.5.2-intel/bin/python setup.py config
> --compiler=intel config_fc --fcompiler=intel \
> --opt='-fPIC -O3 -w -axT -xT' install > build.out

I used:

numpy# python setup.py config --compiler=intel build_clib \
 --compiler=intel build_ext --compiler=intel install \
 --prefix=/usr/local> build51

> build.out has this in it:

> F2PY Version 2_4422
> blas_opt_info:
> blas_mkl_info:
>  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.3.020/lib/em64t
>  NOT AVAILABLE

> atlas_blas_threads_info:
> Setting PTATLAS=ATLAS
>  NOT AVAILABLE

> atlas_blas_info:
>  NOT AVAILABLE

> blas_info:
>  NOT AVAILABLE

> blas_src_info:
>  NOT AVAILABLE

>  NOT AVAILABLE

> lapack_opt_info:
> lapack_mkl_info:
> mkl_info:
>  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.3.020/lib/em64t
>  NOT AVAILABLE

>  NOT AVAILABLE

> atlas_threads_info:
> Setting PTATLAS=ATLAS
> numpy.distutils.system_info.atlas_threads_info
>  NOT AVAILABLE

> atlas_info:
> numpy.distutils.system_info.atlas_info
>  NOT AVAILABLE

> lapack_info:
>  NOT AVAILABLE

> lapack_src_info:
>  NOT AVAILABLE

>  NOT AVAILABLE


build53 (I u

[Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread David Cournapeau
Hi,

I would like to be able to build a f2py extension in a subdir with 
distutils, that is:

config.add_extension('foo/bar', source = ['foo/bar.pyf'])

But it does not work right now because of the way numpy.distutils finds 
the name of the extension. Replacing:

ext_name = extension.name.split('.')[-1]

by

ext_name = os.path.basename(extension.name.split('.')[-1])

Seems to make it work. Could that break anything in numpy.distutils? I
don't see how, but I don't want to touch distutils without being sure it
won't.

cheers,

David



Re: [Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread Robert Kern
On Sun, May 18, 2008 at 5:14 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
> Hi,
>
>I would like to be able to build a f2py extension in a subdir with
> distutils, that is:
>
> config.add_extension('foo/bar', source = ['foo/bar.pyf'])
>
> But it does not work right now because of the way numpy.distutils finds
> the name of the extension.

Is foo a subpackage and the extension is supposed to be imported as
parent.foo.bar (assuming the setup.py is for the "parent" package)? If
so, you want this:

config.add_extension('foo.bar', source=['foo/bar.pyf'])

If the source is just in a subdirectory, but bar.so should be imported
as "parent.bar", then you want this:

config.add_extension('bar', source=['foo/bar.pyf'])



Re: [Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread David Cournapeau
Robert Kern wrote:
>
> config.add_extension('foo.bar', source=['foo/bar.pyf'])
>   

Duh, should have thought about that.

thanks,

David


Re: [Numpy-discussion] numpy.distutils: building a f2py in a subdir

2008-05-18 Thread Pearu Peterson
On Sun, May 18, 2008 1:14 pm, David Cournapeau wrote:
> Hi,
>
> I would like to be able to build a f2py extension in a subdir with
> distutils, that is:
>
> config.add_extension('foo/bar', source = ['foo/bar.pyf'])

A safe approach would be to create a foo/setup.py that contains
  config.add_extension('bar', source = ['bar.pyf'])
and in the parent setup.py add
  config.add_subpackage('foo')
(you might also need creating foo/__init__.py).

> But it does not work right now because of the way numpy.distutils finds
> the name of the extension. Replacing:
>
> ext_name = extension.name.split('.')[-1]
>
> by
>
> ext_name = os.path.basename(extension.name.split('.')[-1])
>
> Seems to make it work. Could that break anything in numpy.distutils ? I
> don't see how, but I don't want to touch distutils without being sure it
> won't,

The change should not break anything that already works, because in
distutils an extension name is assumed to be a sequence of names joined
with dots. If distutils happens to work with '/' in an extension name, I
think that is by accident. I'd also recommend checking this on a Windows
system before changing numpy.distutils; I'm not sure whether it works there.

Pearu



[Numpy-discussion] 1.1.0rc1 tagged

2008-05-18 Thread rex
Jarrod Millman <[EMAIL PROTECTED]> wrote:

>Please test the release candidate:
>svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1

With icc & MKL it fails to find the MKL libraries.

site.cfg:
--
[DEFAULT]
library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
include_dirs = /opt/intel/mkl/10.0.3.020/10.0.3.020/include

[mkl]
library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
lapack_libs = mkl, mkl_lapack, guide
--
1.1.0rc1# python setup.py config --compiler=intel build_clib \
 --compiler=intel build_ext --compiler=intel install \
 --prefix=/usr/local> build0
 
cat build0

F2PY Version 2_5188
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
/opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

blas_info:
  libraries blas not found in /opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

blas_src_info:
  NOT AVAILABLE

  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in 
/opt/intel/mkl/10.0.3.020/lib/32
  libraries lapack_atlas not found in /opt/intel/mkl/10.0.3.020/lib/32
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /opt/intel/mkl/10.0.3.020/lib/32
  libraries lapack_atlas not found in /opt/intel/mkl/10.0.3.020/lib/32
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

lapack_info:
  libraries lapack not found in /opt/intel/mkl/10.0.3.020/lib/32
  NOT AVAILABLE

lapack_src_info:
  NOT AVAILABLE

  NOT AVAILABLE

running config
[...]

--

Without MKL linked it runs the tests perfectly after being
compiled with icc:
---
 python
Python 2.5 (release25-maint, Dec  9 2006, 14:35:53)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Numpy is installed in /usr/local/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5 (release25-maint, Dec  9 2006, 14:35:53) [GCC 4.1.2 20061115 
(prerelease) (Debian 4.1.1-20)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar 
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
.

Re: [Numpy-discussion] svd in numpy

2008-05-18 Thread Zachary Pincus
On May 17, 2008, at 9:34 AM, David Cournapeau wrote:

> Nripun Sredar wrote:
>> I have a sparse matrix 416x52. I tried to factorize this matrix using
>> svd from numpy. But it didn't produce a result and looked like it is
>> in an infinite loop.
>> I tried a similar operation using random numbers in the matrix. Even
>> this is in an infinite loop.
>> Did anyone else face a similar problem?
>> Can anyone please give some suggestions?
>
> Are you on windows ? What is the CPU on your machine ? I suspect  
> this is
> caused by windows binaries which shipped blas/lapack without support  
> for
> "old" CPU.

I have seen this issue as well, on Windows XP running on a Core2 Duo.  
(But... it was a virtualized environment with VirtualBox, so I don't  
know if that disables the SSE features.)

Anyhow, this was with the latest windows binaries, I think  
(numpy-1.0.4.win32-py2.5.msi), and had the same issue: infinite loop  
with 100% processor doing a SVD. (Non-sparse array, though.)

Zach


Re: [Numpy-discussion] NumPtr vs NumPy.i to access C

2008-05-18 Thread Jose Martin

Thanks everyone for all the comments! They helped me better understand the
advantages and disadvantages of the various options for interacting with C.

Jose.


 --- On Sat 05/17, Bill Spotz < [EMAIL PROTECTED] > wrote:

Just to make sure the original question gets answered, yes, numpy.i 
avoids copies as much as possible.

A special case is when your C code provides you with a view of its 
internal data and does not require any memory to be allocated by the 
(python) user. This can be dangerous, but if it is your use case, be 
sure to use the ARGOUTVIEW_* typemaps.

Oh, and Brian's description of SWIG is an eminently fair one.

AND, if NumPtr is only for Numeric, you should know that Numeric is no 
longer developed or supported.


___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!




Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-18 Thread Steven H. Rogers
Joe Harrington wrote:
>  NUMPY/SCIPY DOCUMENTATION MARATHON 2008
> ...
> 5. Write a new help function that optionally produces ASCII or points
> the user's PDF or HTML reader to the right page (either local or
> global).
>   
I can work on this.  Fernando suggested this at the IPython sprint in 
Boulder last year, so I've given it some thought and started a wiki page:
http://ipython.scipy.org/moin/Developer_Zone/SearchDocs

Regards,
Steve



Re: [Numpy-discussion] numpy with icc

2008-05-18 Thread David Cournapeau
On Sun, May 18, 2008 at 7:13 PM, rex <[EMAIL PROTECTED]> wrote:
>
>> I am trying to build numpy with intel icc and mkl.  I don't understand
>> a lot of what I am doing.
>
> Me, too. I have built it with icc & MKL several times in the past,
> but cannot build the numpy svn with MKL now. I can build it with
> icc and no MKL, and it passes all the tests with no errors.
>

I have not tried with icc, but the following works for me with the
last mkl (I have only tried numpy).

[mkl]
library_dirs = /home/david/intel/mkl/10.0.1.014/lib/32
lapack_libs = mkl_lapack
mkl_libs = mkl, guide

(of course, adapt the library_dirs accordingly). All test pass. I have
updated the site.cfg.example in numpy. But really, if Intel keeps
changing its library names, there is not much we can do.

cheers,

David


Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008

2008-05-18 Thread Pauli Virtanen
Hi,

su, 2008-05-18 kello 07:16 -0600, Steven H. Rogers kirjoitti:
> Joe Harrington wrote:
> >NUMPY/SCIPY DOCUMENTATION MARATHON 2008
> > ...
> > 5. Write a new help function that optionally produces ASCII or points
> > the user's PDF or HTML reader to the right page (either local or
> > global).
> >   
> I can work on this.  Fernando suggested this at the IPython sprint in 
> Boulder last year, so I've given it some thought and started a wiki page:
> http://ipython.scipy.org/moin/Developer_Zone/SearchDocs

In Numpy SVN/1.1 there is a function "lookfor" that searches the
docstrings for a substring (no stemming etc. is done). A similar
"%lookfor" magic command was accepted into IPython as the extension
ipy_lookfor.py. Improvements to these would surely be appreciated.
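
For illustration, here is a tiny self-contained sketch of the kind of substring search lookfor performs. This is not the actual numpy implementation, and the function name is made up:

```python
import inspect

def lookfor_sketch(substring, module):
    """Return names of public members of `module` whose docstring mentions `substring`."""
    hits = []
    for name in dir(module):
        if name.startswith('_'):
            continue
        # getdoc returns None when a member has no docstring.
        doc = inspect.getdoc(getattr(module, name)) or ''
        if substring.lower() in doc.lower():
            hits.append(name)
    return hits

import math
print(lookfor_sketch('cosine', math))
# e.g. ['acos', 'acosh', 'cos', 'cosh'] (exact list depends on the Python version)
```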

I think Sphinx also supports searching, so the generated HTML docs [1]
are searchable, as is the generated PDF output.

Pauli


.. [1] http://mentat.za.net/numpy/refguide/
   So far, this preview contains only docs for ndarray, though.




Re: [Numpy-discussion] checking element types in array

2008-05-18 Thread Zoho Vignochi
On Sat, 17 May 2008 14:58:20 -0400, Anne Archibald wrote:

> numpy arrays are efficient, among other reasons, because they have
> homogeneous types. So all the elements in an array are the same type.
> (Yes, this means if you have an array of numbers only one of which
> happens to be complex, you have to represent them all as complex numbers
> whose imaginary part happens to be zero.) So if A is an array A.dtype is
> the type of its elements.
> 
> numpy provides two convenience functions for checking whether an array
> is complex, depending on what you want:
> 
> iscomplex checks whether each element has a nonzero imaginary part and
> returns an array representing the element-by-element answer; so
> any(iscomplex(A)) will be true if any element of A has a nonzero
> imaginary part.
> 
> iscomplexobj checks whether the array has a complex data type. This is
> much much faster, but of course it may happen that all the imaginary
> parts happen to be zero; if you want to treat this array as real, you
> must use iscomplex.
> 
> Anne

Thank you for the explanation. I knew there would be a speed penalty, but
the current numpy dot doesn't work as expected with mpf or mpc just yet,
so I had to write my own. Your explanation helped: I decided to treat all
numbers as complex and implemented just the complex version.
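
A short sketch of the distinction Anne describes, for a complex-dtype array whose values all happen to be real:

```python
import numpy as np

A = np.array([1 + 0j, 2 + 0j, 3 + 0j])  # complex dtype, all-real values

# iscomplexobj looks only at the dtype: fast, and says "complex" here.
print(np.iscomplexobj(A))        # True

# iscomplex inspects each element's imaginary part.
print(np.iscomplex(A))           # [False False False]
print(np.any(np.iscomplex(A)))   # False: safe to treat A as real
```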

Thanks,  Zoho



[Numpy-discussion] arbitrary precision arrays in numpy?

2008-05-18 Thread mark
Hello list -

I could not find an option for arbitrary precision arrays in numpy.
Did anybody implement this?

I would like to use something like 80 digits precision.

Thanks,

Mark


Re: [Numpy-discussion] 1.1.0rc1 tagged

2008-05-18 Thread Andrew Straw
Jarrod Millman wrote:
> Please test the release candidate:
> svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1
>   
Thanks, Jarrod.

I have packaged SVN trunk from r5189 and made a Debian source package
(based on a slightly old version of the Debian Python Modules Team's numpy
package with all patches removed) and Ubuntu Hardy (8.04) binary
packages. These are available at:

http://debs.astraw.com/hardy/

In particular, you can just grab the .deb for your architecture for
Ubuntu Hardy:

 * i386:
http://debs.astraw.com/hardy/python-numpy_1.1.0~dev5189-0ads1_i386.deb
 * amd64:
http://debs.astraw.com/hardy/python-numpy_1.1.0~dev5189-0ads1_amd64.deb

All numpy tests pass on both architectures in my hands, and I shall
begin testing my various codes with this release. Other Ubunteers who
don't want to bother compiling from source are also welcome to try these
packages.

(I chose the trunk rather than the RC tag because my understanding is
that fixes to the final 1.1.0 are going to the trunk, and David
Cournapeau has made a couple of commits. Also, I realized after I packaged
this up that I forgot to touch the date in debian/changelog -- apologies!)

-Andrew


Re: [Numpy-discussion] arbitrary precision arrays in numpy?

2008-05-18 Thread Charles R Harris
Hi Mark,

On Sun, May 18, 2008 at 9:37 AM, mark <[EMAIL PROTECTED]> wrote:

> Hello list -
>
> I could not find an option for arbitrary precision arrays in numpy.
> Did anybody implement this?
>
> I would like to use something like 80 digits precision.
>

No, we don't have this. What do you need it for?

Chuck


Re: [Numpy-discussion] numpy with icc

2008-05-18 Thread rex
David wrote:
> I have not tried with icc, but the following works for me with the
> last mkl (I have only tried numpy).
>
> [mkl]
> library_dirs = /home/david/intel/mkl/10.0.1.014/lib/32
> lapack_libs = mkl_lapack
> mkl_libs = mkl, guide
>
> (of course, adapt the library_dirs accordingly). All test pass. I have
> updated the site.cfg.example in numpy. But really, if Intel keeps
> changing its library names, there is not much we can do.

The last relevant MKL library name change I'm aware of occurred
when MKL 9.X was released in 2006:

"mkl_lapackXX.so" was changed to "mkl_lapack.so" in all 3
cases. And, since the material below shows mkl_lapack need not be
in site.cfg, the name change should not matter.


I tried your (locally adapted) site.cfg with no changes to
distutils.


/usr/local/src/1.1.0rc1/site.cfg:
-
[mkl]
library_dirs = /opt/intel/mkl/mkl/10.0.3.020/lib/32
lapack_libs = mkl_lapack
mkl_libs = mkl, guide
-

Build command:

python setup.py config --compiler=intel build_clib \
 --compiler=intel build_ext --compiler=intel install \
 --prefix=/usr/local> build3

#less build3

F2PY Version 2_5188
blas_opt_info:
blas_mkl_info:
  FOUND:
libraries = ['mkl', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/usr/local/include', '/usr/include']

  FOUND:
libraries = ['mkl', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/usr/local/include', '/usr/include']

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  FOUND:
libraries = ['mkl', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/usr/local/include', '/usr/include']

  FOUND:
libraries = ['mkl_lapack', 'mkl', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/usr/local/include', '/usr/include']

  FOUND:
libraries = ['mkl_lapack', 'mkl', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]

[...]

copying site.cfg -> /usr/local/lib/python2.5/site-packages/numpy/distutils/
copying build/src.linux-i686-2.5/numpy/core/numpyconfig.h -> 
/usr/local/lib/python2.5/site-packages/numpy/core/include/numpy
copying build/src.linux-i686-2.5/numpy/core/__multiarray_api.h -> 
/usr/local/lib/python2.5/site-packages/numpy/core/include/numpy
copying build/src.linux-i686-2.5/numpy/core/multiarray_api.txt -> 
/usr/local/lib/python2.5/site-packages/numpy/core/include/numpy
copying build/src.linux-i686-2.5/numpy/core/__ufunc_api.h -> 
/usr/local/lib/python2.5/site-packages/numpy/core/include/numpy
copying build/src.linux-i686-2.5/numpy/core/ufunc_api.txt -> 
/usr/local/lib/python2.5/site-packages/numpy/core/include/numpy
running install_egg_info
-

ldd shows MKL was linked:

# ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
linux-gate.so.1 =>  (0xe000)
libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so 
(0xb7ae1000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb79a3000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb77a6000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb7742000)
libguide.so => /opt/intel/cc/10.1.015/lib/libguide.so (0xb76e)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb76b4000)
libimf.so => /opt/intel/cc/10.1.015/lib/libimf.so (0xb7484000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb745f000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7454000)
libintlc.so.5 => /opt/intel/cc/10.1.015/lib/libintlc.so.5 (0xb7411000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb72e)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb72db000)
/lib/ld-linux.so.2 (0x8000)

Interestingly, the above shows libmkl_lapack.so is being used even
though it is not in the site.cfg. Apparently, mkl and guide are
sufficient in site.cfg.  


$ python
Python 2.5 (release25-maint, Dec  9 2006, 14:35:53)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Numpy is installed in /usr/local/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5 (release25-maint, Dec  9 2006, 14:35:53) [GCC 4.1.2 20061115 
(prerelease) (Debian 4.1.1-20)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.n

[Numpy-discussion] Which to test: 1.1.x or 1.1.0rc1?

2008-05-18 Thread James Snyder
Hi -

I've been running out of trunk recently, and I've noted that an rc release
has appeared and the 1.1.x branch has been regenerated.

Which would be most helpful to provide feedback from?

From the branch (1.1.x) - test results on Mac OS X 10.5.2, built for
universal, using Apple Python, look all clean.

In the past I've needed to build universal binaries, not sure if that is
still the case, but things behave more happily if I do.

CFLAGS="-O -g -isysroot /Developer/SDKs/MacOSX10.5.sdk -arch ppc -arch ppc64
-arch i386 -arch x86_64"
LDFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64"

Thanks for the fabulous work guys!  It's great to have an open and
python-based alternative for scientific computing.

In [2]: numpy.test()
Numpy is installed in /Library/Python/2.5/site-packages/numpy
Numpy version 1.1.0.dev5142
Python version 2.5.1 (r251:54863, Feb  4 2008, 21:48:13) [GCC 4.0.1 (Apple
Inc. build 5465)]
  Found 15/15 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 281/281 tests for numpy.core.multiarray
  Found 69/69 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 93/93 tests for numpy.ma.core
  Found 14/14 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
..
 ..
--
Ran 996 tests in 1.784s

OK

Out[2]: 

-- 
James Snyder
Biomedical Engineering
Northwestern University
[EMAIL PROTECTED]


Re: [Numpy-discussion] arbitrary precision arrays in numpy?

2008-05-18 Thread mark
I need it for a numerical back transformation from Laplace space.
I found mpmath, which I think will do the trick.
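
A minimal sketch of that combination (this assumes mpmath's `mp.dps` working-precision setting, and that values are kept in an object-dtype numpy array, where arithmetic dispatches to mpmath rather than to vectorized C):

```python
from mpmath import mp, mpf
import numpy as np

mp.dps = 80                  # work with ~80 significant decimal digits
x = mpf(2) ** mpf("0.5")     # sqrt(2) at the full working precision
print(x)

# mpmath numbers can be stored in an object-dtype numpy array; numpy
# dispatches the arithmetic element-by-element to mpmath, so nothing
# here is vectorized in C.
arr = np.array([x, mpf(1) / 3], dtype=object)
print(arr.sum())
```

The extra digits show up immediately when printing; the trade-off is that every operation goes through Python-level dispatch, so this is far slower than native float64 arrays.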
Mark

On May 18, 6:06 pm, "Charles R Harris" <[EMAIL PROTECTED]>
wrote:
> Hi Mark,
>
> On Sun, May 18, 2008 at 9:37 AM, mark <[EMAIL PROTECTED]> wrote:
> > Hello list -
>
> > I could not find an option for arbitrary precision arrays in numpy.
> > Did anybody implement this?
>
> > I would like to use something like 80 digits precision.
>
> No, we don't have this. What do you need it for?
>
> Chuck
>


Re: [Numpy-discussion] numpy with icc

2008-05-18 Thread rex linuxuser
David, how do these environment variables compare with yours? Are you sure
MKL is being used?

Adjusted for your local path, what does the ldd command below show?
ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
linux-gate.so.1 =>  (0xe000)
libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so (0xb7ae4000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb79a6000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb77a9000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb7745000)
libguide.so => /opt/intel/mkl/10.0.3.020/lib/32/libguide.so (0xb76e3000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb76b7000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7586000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb7582000)
/lib/ld-linux.so.2 (0x8000)


Relevant lines from .bashrc:

source /opt/intel/cc/10.1.015/bin/iccvars.sh
source /opt/intel/fc/10.1.015/bin/ifortvars.sh
source /opt/intel/mkl/10.0.3.020/tools/environment/mklvars32.sh
export PYTHONPATH=/usr/local/lib/python2.5/site-packages:/usr/lib/python2.5


The above statements set relevant environment variables:

CPATH=/opt/intel/mkl/10.0.3.020/include
FPATH=/opt/intel/mkl/10.0.3.020/include
DYLD_LIBRARY_PATH=/opt/intel/fc/10.1.015/lib:/opt/intel/cc/10.1.015/lib
INCLUDE=/opt/intel/mkl/10.0.3.020/include
LD_LIBRARY_PATH=/opt/intel/mkl/10.0.3.020/lib/32:/opt/intel/fc/10.1.015/lib:/opt/intel/cc/10.1.015/lib
LIBRARY_PATH=/opt/intel/mkl/10.0.3.020/lib/32
MANPATH=/opt/intel/mkl/10.0.3.020/man:/opt/intel/fc/10.1.015/man:/opt/intel/cc/10.1.015/man:/opt/intel/cc/10.1.015/man:/usr/local/man:/usr/local/share/man:/usr/share/man
MKLROOT=/opt/intel/mkl/10.0.3.020
PYTHONPATH=/usr/local/lib/python2.5/site-packages:/usr/lib/python2.5


Are we having fun yet? :(


Re: [Numpy-discussion] numpy with icc

2008-05-18 Thread David Cournapeau
On Sun, 2008-05-18 at 12:14 -0700, rex wrote:
> 
> The last relevant MKL library name change I'm aware of occurred
> when MKL 9.X was released in 2006:
> 

No, they heavily changed how to link against mkl in 10. There is a whole
chapter about it in the release notes.

> ldd shows MKL was linked:
> 
> # ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
> linux-gate.so.1 =>  (0xe000)
> libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so 
> (0xb7ae1000)
> /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb79a3000)
> /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb77a6000)
> /opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb7742000)
> libguide.so => /opt/intel/cc/10.1.015/lib/libguide.so (0xb76e)
> libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb76b4000)
> libimf.so => /opt/intel/cc/10.1.015/lib/libimf.so (0xb7484000)
> libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb745f000)
> libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7454000)
> libintlc.so.5 => /opt/intel/cc/10.1.015/lib/libintlc.so.5 (0xb7411000)
> libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb72e)
> libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb72db000)
> /lib/ld-linux.so.2 (0x8000)
> 

I don't have the same thing at all. You have many more libraries than I
have, and that's because you were using the intel compiler, I think
(libguide is from intel cc, imf is also used by icc and ifort).

Remove the installed numpy AND the build directory, and retry building
numpy with the mkl.

> Interestingly, the above shows libmkl_lapack.so is being used even
> though it is not in the site.cfg. Apparently, mkl and guide are
> sufficient in site.cfg.  

I am not sure I understand: there is an mkl_lapack entry in the
site.cfg, and you need it.


> So why do Erik and I get failures (with both gcc & icc) when MKL
> is used and you don't?

I don't know. I would ask Intel about this error if the above does not
work, maybe you did not install it correctly, or there was a bug in your
version (my version is a bit more recent, I downloaded it a few days
ago).

cheers,

David



Re: [Numpy-discussion] Numpify this?

2008-05-18 Thread Matt Crane
On Sun, May 18, 2008 at 9:14 PM, Anne Archibald
<[EMAIL PROTECTED]> wrote:
> 2008/5/18 Matt Crane <[EMAIL PROTECTED]>:
>> On Sun, May 18, 2008 at 8:52 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>> Are there repeats?
>> No, no repeats in the first column.
>>
>> I'm going to go get a cup of coffee before I forget to leave out any
>> potentially vital information again. It's going to be a long day.
>
> It can be done, though I had to be kind of devious. My solution might
> not even be O(n log n), depending on how mergesort is implemented:
>
Although this O(n log n) solution is written in numpy - and let's
assume that the mergesort is implemented to give that bound.

That's O(n log n) where n is the combined size of the arrays -
although given that they are already sorted it might be straight
linear, because it only has to merge them? I know for sure that the
solution I originally posted was linear in terms of the combined
sizes. While it might be slow because it's written in straight Python,
it could perhaps be quicker using scipy.weave blitz/inline?
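
For the record, here is one vectorized sketch of the matching step (not necessarily what Anne posted, whose code was snipped in the quote above; this assumes, as established in the thread, that the first columns are sorted with no repeats, and it uses searchsorted, so it is O((n+m) log m) rather than the linear merge):

```python
import numpy as np

a = np.array([[2834, 1], [3282, 3], [6850, 2], [9458, 2]])
b = np.array([[2834, 3], [3282, 5], [9458, 3], [12345, 1]])

# For each key in a's first column, find where it would land in b's
# sorted, repeat-free first column.
idx = np.searchsorted(b[:, 0], a[:, 0])
idx = np.clip(idx, 0, len(b) - 1)   # guard keys beyond b's last key
match = b[idx, 0] == a[:, 0]        # True where the key exists in both
result = np.column_stack((a[match, 1], b[idx[match], 1]))
print(result)
```

On these example arrays this yields [[1 3], [3 5], [2 3]], the same pairs as the hand-rolled merge loop.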

I think I might leave the discussion here until I can get some
benchmarking on the different alternatives - thanks all.
Matt


Re: [Numpy-discussion] Which to test: 1.1.x or 1.1.0rc1?

2008-05-18 Thread Jarrod Millman
On Sun, May 18, 2008 at 12:56 PM, James Snyder <[EMAIL PROTECTED]> wrote:
> I've been running out of trunk recently, and I've noted that an rc release
> has appeared and the 1.1.x branch has been regenerated.
>
> Which would be most helpful to provide feedback from?

Hmmm.  I deleted the 1.1.x branch and it doesn't appear to exist
anymore.  How did you get it?

Please test the 1.1.0rc1:
http://projects.scipy.org/scipy/numpy/browser/tags/1.1.0rc1


-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/