Re: [Numpy-discussion] Problem Building Numpy with Python 2.7.1 and OS X 10.7.3

2012-02-26 Thread Samuel John
Hi

The plain (non-LLVM) gcc is no longer there if you install Lion and then go directly 
to Xcode 4.3.
Only if you still have the old Xcode 4.2 or lower do you have a non-LLVM gcc.

For Xcode 4.3, I recommend installing the "Command Line Tools for Xcode" from 
Xcode's preferences. Then you'll have the Unix tools and compilers needed for 
building software.

The solution is to compile numpy and scipy with clang. I have had no problems so far, 
but I think few people have actually compiled them with clang.
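
For the record, the kind of build I mean is roughly this (a sketch, assuming the 
clang/clang++ shipped with Xcode 4.3 are on the PATH; numpy.distutils should pick 
up the CC/CXX environment variables):

export CC=clang
export CXX=clang++
python setup.py build
python setup.py install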

The issue #1500 (scipy) may help here. 
http://projects.scipy.org/scipy/ticket/1500


On 25.02.2012, at 14:14, Ralf Gommers wrote:
> Since you're using pip, I assume that gcc-4.2 is llvm-gcc. As a first step, I 
> suggest using plain gcc and not using pip (so just "python setup.py 
> install"). Also make sure you follow the recommendations in "version specific 
> notes" at http://scipy.org/Installing_SciPy/Mac_OS_X.

This website should be updated.

cheers,
 Samuel


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Samuel John

On 17.02.2012, at 21:46, Ralf Gommers wrote:
> [...]
> So far no one has managed to build the numpy/scipy combo with the LLVM-based 
> compilers, so if you were willing to have a go at fixing that it would be 
> hugely appreciated. See http://projects.scipy.org/scipy/ticket/1500 for 
> details.
> 
> Once that's fixed, numpy can switch to using it for releases.

Well, I had great success using clang and clang++ (which are LLVM-based) to 
compile both numpy and scipy on OS X 10.7.3.

Samuel



Re: [Numpy-discussion] repeat array along new axis without making a copy

2012-02-15 Thread Samuel John
Wow, I wasn't aware of that, even though I have worked with numpy for years now.
NumPy is amazing.
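
For the archive, one way to do this kind of zero-copy repeat (an untested sketch, 
not necessarily the exact solution given in the thread) is a zero-stride view:

import numpy as np

a = np.arange(4)
# A (3, 4) view that repeats `a` three times along a new first axis.
# The zero stride along that axis means no data is copied.
b = np.lib.stride_tricks.as_strided(a, shape=(3,) + a.shape,
                                    strides=(0,) + a.strides)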

Samuel


Re: [Numpy-discussion] I must be wrong? -- endian detection failure on Mac OSX 10.5

2012-02-01 Thread Samuel John
Hi!

Your machine should be able to handle at least Mac OS X 10.6 and even 10.7,
if there is no strong reason to remain on 10.5...

10.5 is so long ago, I can barely remember it.

cheers,
 Samuel

On 01.02.2012, at 18:03, Dustin Lang wrote:

> 
> Hi,
> 
> I don't really believe this is a numpy bug that hasn't been detected, so 
> it must be something weird about my setup, but I can't figure it out. 
> Here goes.
> 
> The symptom is that while numpy-1.4.1 builds fine, numpy-1.5.0 and later 
> releases fail with:
> 
> In file included from numpy/core/src/npymath/npy_math.c.src:56:
> numpy/core/src/npymath/npy_math_private.h:78: error: conflicting types for 
> ieee_double_shape_type
> numpy/core/src/npymath/npy_math_private.h:64: note: previous declaration of 
> ieee_double_shape_type was here
> error: Command "gcc -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes 
> -Inumpy/core/include 
> -Ibuild/src.macosx-10.5-i386-2.7/numpy/core/include/numpy 
> -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core 
> -Inumpy/core/src/npymath -Inumpy/core/src/multiarray 
> -Inumpy/core/src/umath -Inumpy/core/include 
> -I/usr/local/python-2.7.2/include/python2.7 
> -Ibuild/src.macosx-10.5-i386-2.7/numpy/core/src/multiarray 
> -Ibuild/src.macosx-10.5-i386-2.7/numpy/core/src/umath -c 
> build/src.macosx-10.5-i386-2.7/numpy/core/src/npymath/npy_math.c -o 
> build/temp.macosx-10.5-i386-2.7/build/src.macosx-10.5-i386-2.7/numpy/core/src/npymath/npy_math.o"
>  
> failed with exit status 1
> 
> 
> The relevant code looks like,
> 
> #define IEEE_WORD_ORDER NPY_BYTE_ORDER
> 
> #if IEEE_WORD_ORDER == NPY_BIG_ENDIAN
> // declare ieee_double_shape_type;
> #endif
> 
> #if IEEE_WORD_ORDER == NPY_LITTLE_ENDIAN
> // declare ieee_double_shape_type;
> #endif
> 
> 
> so it looks like both word-order blocks are getting compiled.
> 
> For the record, including the same header files as the failing code and 
> compiling with the same command-line args I get:
> 
> LITTLE_ENDIAN is defined: 1234
> __LITTLE_ENDIAN is not defined
> __LITTLE_ENDIAN__ is defined: 1   (by gcc)
> BIG_ENDIAN is defined: 4321
> __BIG_ENDIAN is not defined
> __BIG_ENDIAN__ is not defined
> BYTE_ORDER is defined: 1234
> __BYTE_ORDER is not defined
> __BYTE_ORDER__ is not defined
> NPY_BYTE_ORDER is defined
>   => __BYTE_ORDER
> NPY_BIG_ENDIAN is defined
>   => __BIG_ENDIAN
> NPY_LITTLE_ENDIAN is defined
>   => __LITTLE_ENDIAN
> 
> and NPY_BYTE_ORDER etc are set in npy_endian.h, in this block of code:
> 
> #ifdef NPY_HAVE_ENDIAN_H
> /* Use endian.h if available */
> #include 
> 
> #define NPY_BYTE_ORDER __BYTE_ORDER
> #define NPY_LITTLE_ENDIAN __LITTLE_ENDIAN
> #define NPY_BIG_ENDIAN __BIG_ENDIAN
> #else
> 
> (setup.py detected that I do have endian.h:
> build/src.macosx-10.5-i386-2.7/numpy/core/include/numpy/_numpyconfig.h:#define
>  NPY_HAVE_ENDIAN_H 1
> )
> 
> So my guess is that npy_endian.h is expecting glibc-style endian.h with 
> __BYTE_ORDER but getting Apple's endian.h with BYTE_ORDER.  Then 
> NPY_BYTE_ORDER gets defined to __BYTE_ORDER which is itself not defined. 
> Same with NPY_{BIG,LITTLE}_ENDIAN, and then apparently the two undefined 
> things compare equal in wacky preprocessor land?
> 
> 
> For what it's worth, in my own codebase I see that I do this:
> 
> #if \
>   (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
>   (defined( _BYTE_ORDER) && ( _BYTE_ORDER ==  _BIG_ENDIAN)) || \
>   (defined(  BYTE_ORDER) && (  BYTE_ORDER ==   BIG_ENDIAN))
> // yup, big-endian
> #endif
> 
> 
> This is a Mac OSX 10.5.8 machine, MacBook5,1, Intel Core2 Duo CPU P8600 @ 
> 2.40GHz, gcc 4.4.6 and python 2.7.2
> 
> The weirdness on this system is that I installed a gcc with only x86_64 
> support, while the kernel and uname insist that it's i386, but I don't 
> *think* that's implicated here.
> 
> 
> cheers,
> dustin
> 
> 


Re: [Numpy-discussion] histogram help

2012-01-30 Thread Samuel John
Hi Ruby,

I still do not fully understand your question, but what I do in such cases is to 
construct a very simple array and test the functions on it.
The help of numpy.histogram2d or numpy.histogramdd (for more than two dims) 
might also be useful here.

So I guess you basically want to ignore the x,y positions and just look at the 
combined distribution of the Z values?
In that case, you would just need numpy.histogram (the 1D version).

Note that numpy.histogram returns the counts and the bin borders (edges).
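
A minimal sketch of what I mean (untested, with a made-up array A of shape (x, y, z)):

import numpy as np

A = np.random.rand(4, 5, 100)                         # toy data: Z values at each (x, y)
counts, bin_edges = np.histogram(A.ravel(), bins=10)  # pool all Z values, ignore x and y
# counts has 10 entries, bin_edges has 11 (the bin borders)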

bests
 Samuel


On 30.01.2012, at 20:27, Ruby Stevenson wrote:

> Sorry, I realize I didn't describe the problem completely clear or correct.
> 
> the (x,y) in this case is just many co-ordinates, and  each coordinate
> has a list of values (Z value) associated with it.  The bins are
> allocated for the Z.
> 
> I hope this clarify things a little. Thanks again.
> 
> Ruby
> 
> 
> 
> 
> On Mon, Jan 30, 2012 at 2:21 PM, Ruby Stevenson  wrote:
>> hi, all
>> 
>> I am trying to figure out how to do histogram with numpy
>> 
>> I have a three-dimension array A[x,y,z],  another array (bins) has
>> been allocated along Z dimension, z'
>> 
>> how can I get the histogram of H[ x, y, z' ]?
>> 
>> thanks for your help.
>> 
>> Ruby


Re: [Numpy-discussion] Problem installing NumPy with Python 3.2.2/MacOS X 10.7.2

2012-01-26 Thread Samuel John
Hi Hans-Martin!

You could try my instructions recently posted to this list 
http://thread.gmane.org/gmane.comp.python.scientific.devel/15956/
Basically, when built with llvm-gcc, scipy segfaults in scipy.test() (on my system at 
least).

Therefore, I created the homebrew install formulae. 
They work for whatever "which python" you have, but I have only tested this with 
Python 2.7.2 on Mac OS X 10.7.2.

Samuel


On 11.01.2012, at 16:12, Hans-Martin v. Gaudecker wrote:

> I recently upgraded to Lion and just faced the same problem with both Python 
> 2.7.2 and Python 3.2.2 installed via the python.org installers. My hunch is 
> that the errors are related to the fact that Apple dropped gcc-4.2 from XCode 
> 4.2. I got gcc-4.2 via [1] then, still the same error -- who knows what else 
> got lost in that upgrade... Previous successful builds with gcc-4.2 might 
> have been with XCode 4.1 (or 4.2 installed on top of it).
> 
> In the end I decided to re-install both Python versions via homebrew, nicely 
> described here [2] and everything seems to work fine using LLVM. Test outputs 
> for NumPy master under 2.7.2 and 3.2.2 are below in case they are of interest.
> 
> Best,
> Hans-Martin
> 
> [1] https://github.com/kennethreitz/osx-gcc-installer
> [2] 
> http://www.thisisthegreenroom.com/2011/installing-python-numpy-scipy-matplotlib-and-ipython-on-lion/#numpy

The instructions at [2] lead to a segfault in scipy.test() for me, because they 
use llvm-gcc (which is the default on Lion).


Re: [Numpy-discussion] OT: MS C++ AMP library

2012-01-26 Thread Samuel John
Yes, I agree 100%.

On 26.01.2012, at 10:19, Sturla Molden wrote:
> When we have nice libraries like OpenCL, OpenGL and OpenMP, I am so glad 
> we have Microsoft to screw it up.
> 
> Congratulations to Redmond: Another C++ API I cannot read, and a 
> scientific compute library I hopefully never have to use.
> 
> http://msdn.microsoft.com/en-us/library/hh265136(v=vs.110).aspx
> 
> The annoying part is, with this crap there will never be a standard 
> OpenCL DLL in Windows.
> 
> Sturla Molden



Re: [Numpy-discussion] installing matplotlib in MacOs 10.6.8.

2012-01-24 Thread Samuel John
Sorry for the late answer. But at least for the record:

If you are using Eclipse, I assume you have also installed the Eclipse plugin 
[pydev](http://pydev.org/). I use it myself; it's good. 
Then you have to go to Preferences -> PyDev -> Python Interpreter and select the 
Python version you want to use by pointing it at the "Python" executable. 

I am not familiar with the pre-built versions of matplotlib. Perhaps they lack 
the 64-bit Intel builds? 
Perhaps you can find a lib (.so file) inside matplotlib and use the "file" command 
to see the architectures it was built for.
You should also be able to install matplotlib with `pip install matplotlib` 
(if you have pip).

Samuel

On 26.12.2011, at 06:40, Alex Ter-Sarkissov wrote:

> hi everyone, I run python 2.7.2. in Eclipse (recently upgraded from 2.6). I 
> have a problem with installing matplotlib (I found the version for python 
> 2.7. MacOs 10.3, no later versions). If I run python in terminal using arch 
> -i386 python, and then 
> 
> from matplotlib.pylab import *
> 
> and similar stuff, everything works fine. If I run python in eclipse or just 
> without arch -i386, I can import matplotlib as 
> 
> from matplotlib import  *
> 
> but actually nothing gets imported. If I do it in the same way as above, I 
> get the message
> 
> no matching architecture in universal wrapper
> 
> which means there's conflict of versions or something like that. I tried 
> reinstalling the interpreter and adding matplotlib to forced built-ins, but 
> nothing helped. For some reason I didn't have this problem with numpy and 
> tkinter. 
> 
> Any suggestions are appreciated. 


Re: [Numpy-discussion] 'Advanced' save and restore operation

2012-01-24 Thread Samuel John
I know you wrote that you want "TEXT" files, but nevertheless I'd like to 
point to http://code.google.com/p/h5py/ .
There are viewers for HDF5 files, and it is stable and widely used.
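
A tiny sketch of how that would look (assuming h5py is installed; the file, dataset 
and attribute names are made up). The dtype of a structured array is stored with 
the dataset, and scalar parameters can go into HDF5 attributes:

import numpy as np
import h5py

a = np.zeros(3, dtype=[('t', 'f8'), ('n', 'i4')])    # structured array
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('measurements', data=a)         # dtype travels with the data
    f.attrs['gain'] = 1.5                            # simple parameters as attributes

with h5py.File('data.h5', 'r') as f:
    b = f['measurements'][...]                       # comes back with the original dtype
    gain = f.attrs['gain']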

 Samuel


On 24.01.2012, at 00:26, Emmanuel Mayssat wrote:

> After having saved data, I need to know/remember the data dtype to
> restore it correctly.
> Is there a way to save the dtype with the data?
> (I guess the header parameter of savedata could help, but they are
> only available in v2.0+ )
> 
> I would like to save several related structured array and a dictionary
> of parameters into a TEXT file.
> Is there an easy way to do that?
> (maybe xml file, or maybe archive zip file of other files, or . )
> 
> Any recommendation is helpful.
> 
> Regards,
> --
> Emmanuel


Re: [Numpy-discussion] Unexpected behavior with np.min_scalar_type

2012-01-24 Thread Samuel John
I get the same results as you, Kathy.
*surprised*

(On OS X Lion, 64-bit, numpy 2.0.0.dev-55472ca, Python 2.7.2.)


On 24.01.2012, at 16:29, Kathleen M Tacina wrote:

> I was experimenting with np.min_scalar_type to make sure it worked as 
> expected, and found some unexpected results for integers between 2**63 and 
> 2**64-1.  I would have expected np.min_scalar_type(2**64-1) to return uint64. 
>  Instead, I get object.  Further experimenting showed that the largest 
> integer for which np.min_scalar_type will return uint64 is 2**63-1.  Is this 
> expected behavior?
> 
> On python 2.7.2 on a 64-bit linux machine:
> >>> import numpy as np
> >>> np.version.full_version
> '2.0.0.dev-55472ca'
> >>> np.min_scalar_type(2**8-1)
> dtype('uint8')
> >>> np.min_scalar_type(2**16-1)
> dtype('uint16')
> >>> np.min_scalar_type(2**32-1)
> dtype('uint32')
> >>> np.min_scalar_type(2**64-1)
> dtype('O')
> >>> np.min_scalar_type(2**63-1)
> dtype('uint64')
> >>> np.min_scalar_type(2**63)
> dtype('O')
> 
> I get the same results on a Windows XP  machine running python 2.7.2 and 
> numpy 1.6.1. 
> 
> Kathy 


Re: [Numpy-discussion] advanced indexing bug with huge arrays?

2012-01-24 Thread Samuel John

On 23.01.2012, at 11:23, David Warde-Farley wrote:
>> a = numpy.array(numpy.random.randint(256,size=(500,972)),dtype='uint8')
>> b = numpy.random.randint(500,size=(4993210,))
>> c = a[b]
>> In [14]: c[100:].sum()
>> Out[14]: 0

Same here.

Python 2.7.2, 64bit, Mac OS X (Lion), 8GB RAM, numpy.__version__ = 
2.0.0.dev-55472ca
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)]
Numpy built without llvm.


Re: [Numpy-discussion] The NumPy Mandelbrot code 16x slower than Fortran

2012-01-23 Thread Samuel John
I'd like to add 
http://git.tiker.net/pyopencl.git/blob/HEAD:/examples/demo_mandelbrot.py to the 
discussion, since I use pyopencl  (http://mathema.tician.de/software/pyopencl) 
with great success in my daily scientific computing. Install with pip.

PyOpenCL understands numpy arrays. You write a kernel (a small C program) 
directly into a Python triple-quoted string and get a pythonic way to program 
the GPU and Core i5/i7 CPUs, with a Python exception raised if something goes wrong. 
Whenever I hit a speed bottleneck that I cannot solve with pure numpy, I code a 
little part of the computation for the GPU. The compilation is done just in time 
when you run the Python code.
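
A minimal sketch of that workflow (not one of my real kernels; it just doubles an 
array and assumes pyopencl and a working OpenCL device are available):

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(16, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=a)

# The kernel is plain OpenCL C inside a Python triple-quoted string.
prg = cl.Program(ctx, """
__kernel void twice(__global float *a) {
    int gid = get_global_id(0);
    a[gid] = 2.0f * a[gid];
}
""").build()                      # compiled just in time, when the script runs

prg.twice(queue, a.shape, None, a_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, a_buf)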

Especially for the Mandelbrot set this may be a _huge_ gain in speed, since it is 
embarrassingly parallel.

Samuel


On 23.01.2012, at 14:02, Robert Cimrman wrote:

> On 01/23/12 13:51, Sturla Molden wrote:
>> Den 23.01.2012 13:09, skrev Sebastian Haase:
>>> 
>>> I would think that interactive zooming would be quite nice
>>> ("illuminating")   and for that 13 secs would not be tolerable
>>> Well... it's not at the top of my priority list ... ;-)
>>> 
>> 
>> Sure, that comes under the 'fast enough' issue. But even Fortran might
>> be too slow here?
>> 
>> For zooming Mandelbrot I'd use PyOpenGL and a GLSL fragment shader
>> (which would be a text string in Python):
>> 
>> madelbrot_fragment_shader = """
>> 
>> uniform sampler1D tex;
>> uniform vec2 center;
>> uniform float scale;
>> uniform int iter;
>> void main() {
>>  vec2 z, c;
>>  c.x = 1. * (gl_TexCoord[0].x - 0.5) * scale - center.x;
>>  c.y = (gl_TexCoord[0].y - 0.5) * scale - center.y;
>>  int i;
>>  z = c;
>>  for(i=0; i<iter; i++) {
>>  float x = (z.x * z.x - z.y * z.y) + c.x;
>>  float y = (z.y * z.x + z.x * z.y) + c.y;
>>  if((x * x + y * y) > 4.0) break;
>>  z.x = x;
>>  z.y = y;
>>  }
>>  gl_FragColor = texture1D(tex, (i == iter ? 0.0 : float(i)) / 100.0);
>> }
>> 
>> """
>> 
>> The rest is just boiler-plate OpenGL...
>> 
>> Sources:
>> 
>> http://nuclear.mutantstargoat.com/articles/sdr_fract/
>> 
>> http://pyopengl.sourceforge.net/context/tutorials/shader_1.xhtml
> 
> Off-topic comment: Or use some algorithmic cleverness, see [1]. I recall Xaos 
> had interactive, extremely fast and fluid fractal zooming more than 10 (or 15?) 
> years ago (-> on laughably modest hardware by today's standards).
> 
> r.
> 
> [1] http://wmi.math.u-szeged.hu/xaos/doku.php


Re: [Numpy-discussion] Ufuncs and flexible types, CAPI

2012-01-10 Thread Samuel John
[sorry for duplicate - I used the wrong mail address]

I am afraid I didn't quite get the question.
What is the scenario? What is the benefit that would outweigh the performance 
hit of checking whether there is a callback or not? That check would have to be 
evaluated very often.

Oh well ... and 1.3.0 is pretty old :-)

cheers,
Samuel

On 31.12.2011, at 07:48, Val Kalatsky wrote:

> 
> Hi folks, 
> 
> First post, may not follow the standards, please bear with me. 
> 
> Need to define a ufunc that takes care of various type. 
> Fixed - no problem, userdef - no problem, flexible - problem. 
> It appears that the standard ufunc loop does not provide means to 
> deliver the size of variable size items. 
> Questions and suggestions:
> 
> 1) Please no laughing: I have to code for NumPy 1.3.0. 
> Perhaps this issue has been resolved, then the discussion becomes moot. 
> If so please direct me to the right link. 
> 
> 2) A reasonable approach here would be to use callbacks and to give the user 
> (read programmer) 
> a chance to intervene at least twice: OnInit and OnFail (OnFinish may not be 
> unreasonable as well). 
> 
> OnInit: before starting the type resolution the user is given a chance to do 
> something (e.g. check for 
> that pesky type and take control then return a flag indicating a stop) before 
> the resolution starts
> OnFail: the resolution took place and did not succeed, the user is given a 
> chance to fix it. 
> In most of the case these callbacks are NULLs. 
> 
> I could patch numpy with a generic method that does it, but it's a shame not 
> to use the good ufunc machine. 
> 
> Thanks for tips and suggestions.
> 
> Val Kalatsky
> 
> 


Re: [Numpy-discussion] SParse feature vector generation

2012-01-10 Thread Samuel John
I would just use a lookup dict that maps each name to its column index:

names = [ "uc_berkeley", "stanford", "uiuc", "google", "intel",
          "texas_instruments", "bool" ]
lookup = dict( zip( names, range(len(names)) ) )


Now, given you have n entries:

S = numpy.zeros( (n, len(names)), dtype=numpy.int32 )

for k in ["uc_berkeley", "google", "bool"]:
    S[0, lookup[k]] += 1

for k in ["stanford", "intel", "bool"]:
    S[1, lookup[k]] += 1

... and so forth, so lookup[k] returns the column index to use.


Hope this helps. I am not aware of an automatic way to do this. I may be wrong.
cheers, 
 Samuel


On 04.01.2012, at 07:25, Dhruvkaran Mehta wrote:

> Hi numpy users,
> 
> Is there a convenient way in numpy to go from "string" features like:
> 
> "uc_berkeley", "google", 1
> "stanford", "intel", 1
> .
> .
> .
> "uiuc", "texas_instruments", 0
> 
> to a numpy matrix like:
> 
>  "uc_berkeley", "stanford", ..., "uiuc", "google", "intel", 
> "texas_instruments", "bool"
>   10 ... 0   1   0
> 0   1
>   01 ... 0   0   1
> 0   1 
>   :
>   00 ... 1   0   0
> 1   0
> 
> I really appreciate you taking the time to help!
> Thanks!
> --Dhruv
> 


Re: [Numpy-discussion] simple vector->matrix question

2011-10-06 Thread Samuel John
I just learned two things:

1. np.newaxis
2. Array dimension broadcasting rocks more than you think.


The x[:, np.newaxis] might not be the most intuitive solution, but it's great 
and powerful.
More intuitive would be if x.T transformed [0,1,2,4] into [[0],[1],[2],[4]], but 
transposing a 1D array is a no-op.
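
A tiny illustration (both forms give a view, no copy):

import numpy as np

x = np.array([0, 1, 2, 4])
x[:, np.newaxis]        # shape (4, 1): [[0], [1], [2], [4]]
x.reshape(-1, 1)        # the same column shape, spelled differently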

Thanks Warren :-)
Samuel

On 06.10.2011, at 14:18, Warren Weckesser wrote:

> 
> 
> On Thu, Oct 6, 2011 at 7:08 AM, Neal Becker  wrote:
> Given a vector y, I want a matrix H whose rows are
> 
> y - x0
> y - x1
> y - x2
> ...
> 
> 
> where x_i are scalars
> 
> Suggestion?
> 
> 
> 
> In [15]: import numpy as np
> 
> In [16]: y = np.array([10.0, 20.0, 30.0])
> 
> In [17]: x = np.array([0, 1, 2, 4])
> 
> In [18]: H = y - x[:, np.newaxis]
> 
> In [19]: H
> Out[19]: 
> array([[ 10.,  20.,  30.],
>[  9.,  19.,  29.],
>[  8.,  18.,  28.],
>[  6.,  16.,  26.]])
> 
> 
> Warren
> 
>  


Re: [Numpy-discussion] simple vector->matrix question

2011-10-06 Thread Samuel John

import numpy
# Say y is
y = numpy.array([1,2,3])
Y = numpy.vstack([y,y,y,y]) 
# Y is array([[1, 2, 3],
#  [1, 2, 3],
#  [1, 2, 3],
#  [1, 2, 3]])

x = numpy.array([[0],[2],[4],[6]]) # a column-vector of your scalars x0, x1...
Y - x

Hope this is what you meant.

cheers,
 Samuel


On 06.10.2011, at 14:08, Neal Becker wrote:

> Given a vector y, I want a matrix H whose rows are
> 
> y - x0
> y - x1
> y - x2
> ...
> 
> 
> where x_i are scalars



Re: [Numpy-discussion] OS X Lion: llvm: numpy and scipy

2011-09-20 Thread Samuel John
Hi!

On 20.09.2011, at 14:41, David Cournapeau wrote:
> On Tue, Sep 20, 2011 at 5:13 AM, Samuel John  wrote:
>> Ralf, thanks for your answer.
>> 
>> However, in short:
>> 
>>  I want `pip install numpy; pip install scipy` to work on OS X Lion without 
>> extra effort :-)
[...]
> I will try to look at this problem next week, when I will receive a
> new laptop with Lion on it. If I forget about it, please ping me at
> the end of next week, we need to fix this,

Congratulation to your new Mac :-)

When I download scipy 0.10.0b2 and get gfortran via homebrew:
brew install gfortran (which is not the issue here)
cd scipy-0.10.0b2
python setup.py build
python setup.py install

Then, scipy.test() causes segfaults or malloc errors:

> samuel@ubi:~/Downloads/scipy-0.10.0b2 $ cd ..
> samuel@ubi:~/Downloads $ ipython
> Python 2.7.2 (default, Sep 16 2011, 11:18:55) 
> Type "copyright", "credits" or "license" for more information.
> 
> IPython 0.11 -- An enhanced Interactive Python.
> ? -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help  -> Python's own help system.
> object?   -> Details about 'object', use 'object??' for extra details.
> 
> In [1]: import scipy
> 
> In [2]: scipy.test()
> Running unit tests for scipy
> NumPy version 1.6.1
> NumPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy
> SciPy version 0.10.0b2
> SciPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy
> Python version 2.7.2 (default, Sep 16 2011, 11:18:55) [GCC 4.2.1 (Based on 
> Apple Inc. build 5658) (LLVM build 2335.15.00)]
> nose version 1.1.2
> ...F.FSegmentation
>  fault: 11
> samuel@ubi:~/Downloads $ ipython
> Python 2.7.2 (default, Sep 16 2011, 11:18:55) 
> Type "copyright", "credits" or "license" for more information.
> 
> IPython 0.11 -- An enhanced Interactive Python.
> ? -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help  -> Python's own help system.
> object?   -> Details about 'object', use 'object??' for extra details.
> 
> In [1]: import scipy
> 
> In [2]: scipy.test()
> Running unit tests for scipy
> NumPy version 1.6.1
> NumPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy
> SciPy version 0.10.0b2
> SciPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy
> Python version 2.7.2 (default, Sep 16 2011, 11:18:55) [GCC 4.2.1 (Based on 
> Apple Inc. build 5658) (LLVM build 2335.15.00)]
> nose version 1.1.2
> ...F.FFPython(93907,0x7fff7201b960)
>  malloc: *** error for object 0x105ce5630: pointer being freed was not 
> allocated
> *** set a breakpoint in malloc_error_break to debug
> Abort trap: 6


However, when setting the CC, CXX and FFLAGS explicitly to avoid llvm:

export CC=gcc-4.2 
export CXX=g++-4.2 
export FFLAGS=-ff2c
python setup.py build
python setup.py install

Then scipy.test() works fine:

> In [2]: scipy.test()
> Running unit tests for scipy
> NumPy version 1.6.1
> NumPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy
> SciPy version 0.10.0b2
> SciPy is installed in 
> /usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy
> Python version 2.7.2 (default, Sep 16 2011, 11:18:55) [GCC 4.2.1 (Based on 
> Apple Inc. build 5658) (LLVM build 2335.15.00)]
> nose version 1.1.2
> K/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py:674:
>  UserWarning: 
> The coefficients of the spline returned have been computed as the
> minimal norm least-squares s

Re: [Numpy-discussion] OS X Lion: llvm: numpy and scipy

2011-09-20 Thread Samuel John
Ralf, thanks for your answer.

However, in short:

 I want `pip install numpy; pip install scipy` to work on OS X Lion without 
extra effort :-)


On 19.09.2011, at 19:05, Ralf Gommers wrote:
>> Do you think it's possible to teach numpy to use different CC, CXX?
>> 
> This is possible, but numpy probably shouldn't mess with these variables. As 
> a user you can set them permanently by adding them to your bash_profile for 
> example.

The problem is that most things do work fine with the default gcc, which has the 
LLVM backend on OS X Lion (10.7): /usr/bin/gcc -> llvm-gcc-4.2. But somehow 
scipy has problems with that backend.
I do not want to set CC/CXX permanently.
I made a homebrew formula for numpy which takes care of these things, but the 
policy of the homebrew team is to avoid duplicates that can be installed via 
pip. Therefore I am asking for support: if someone could point me to the place 
where the compiler is chosen, I'd propose adding an OS X 10.7 switch there in 
order to avoid the LLVM backend.


>> And the FFLAGS
> 
> This needs to be solved. Perhaps the solution involves more wrappers for 
> broken vecLib/Accelerate functions in scipy? Does anyone know which routines 
> are broken on 10.7? For 10.6 I found this discussion helpful: 
> http://www.macresearch.org/lapackblas-fortran-106. It is claimed there that 
> while '-ff2c' fixes complex routines, it breaks SDOT when used with '-m64'. 
> SDOT is used in linalg.

I know nothing of such things. :-(
If there is really something broken in vecLib/Accelerate, a ticket should be opened 
on Apple's bug tracker (radar).


>> and the switch --fcompiler=gnu95 arg?
> 
> This shouldn't be necessary if you only have gfortran installed.
> 

Ah ok. Thanks!


cheers,
 Samuel




> 
> Building scipy on OS X Lion 10.7.x currently fails because of some llvm 
> incompatibilies and the gfortran.
> While it's easy to get gfortran e.g. via http://mxcl.github.com/homebrew/ 
> it's hard to `pip install scipy` or manual install because you have to:
> 
> > export CC=gcc-4.2
> > export CXX=g++-4.2
> > export FFLAGS=-ff2c
> > python setup.py build --fcompiler=gnu95
> 
> This way, numpy and then scipy builds successfully.
> Scipy uses the distutil settings from numpy -- as far as I get it -- and 
> therefore scipy cannot add these variables. Right?
> 
> It would be great if numpy and scipy would build right out-of-the-box on OS 
> X, again.
> 
> I'd love to provide a patch but I am lost in the depth of distutils...



[Numpy-discussion] OS X Lion: llvm: numpy and scipy

2011-09-19 Thread Samuel John
Ahoy numpy gurus :-)

Would it be possible to adapt the setup.py and/or numpy/distutils to set the 
right variables on Mac OS X 10.7? (see below).
I have looked a bit into the setup.py and the distutils package of numpy but I 
am a bit lost.

Do you think it's possible to teach numpy to use different CC, CXX? And the 
FFLAGS and the switch --fcompiler=gnu95 arg?

Building scipy on OS X Lion 10.7.x currently fails because of some LLVM 
incompatibilities and the gfortran setup.
While it's easy to get gfortran, e.g. via http://mxcl.github.com/homebrew/, it's 
hard to `pip install scipy` or install manually, because you have to:

> export CC=gcc-4.2
> export CXX=g++-4.2
> export FFLAGS=-ff2c
> python setup.py build --fcompiler=gnu95

This way, numpy and then scipy builds successfully.
Scipy uses the distutils settings from numpy -- as far as I understand it -- and 
therefore scipy cannot add these variables. Right?

It would be great if numpy and scipy would build right out-of-the-box on OS X, 
again. 

I'd love to provide a patch but I am lost in the depth of distutils...


Samuel





Re: [Numpy-discussion] [ANN] glumpy 0.2.0

2011-09-16 Thread Samuel John
Hi Nicolas,

that looks great. 
Could you make this available such that `pip install glumpy` would work?

cheers,
 Samuel



Re: [Numpy-discussion] Functions for finding the relative extrema of numeric data

2011-09-15 Thread Samuel John
Hi all,

I am not sure if this is of help for anyone. I wrote some code to find the 
relative maxima in a 1D array for my own purpose.
Maybe someone is interested or even finds a bug *g*.
I post the code here and appreciate any feedback. Even "stop spamming your 
buggy code" :-)


> from numpy import diff, sign, convolve, array, where, around, int32, alen
> 
> def localmaxima_at(x):
>     '''Returns the indices of local maxima in the 1D array x.
> 
>     If several elements in x have the same value, then the
>     index of the element in the middle is returned.
> 
>     If there are two adjacent elements with the same value,
>     one of them is returned.
> 
>     x[0] and x[-1] are never returned as an index for the
>     local maximum.
> 
>     @Author: Samuel John
>     @copyright: http://creativecommons.org/licenses/by-nc-sa/3.0/
>     @todo: unittests
>     '''
>     assert len(x) > 2, "Length of x should be greater than two in order to define a meaningful local maximum."
>     assert x.ndim == 1, "Expected 1D array."
>     #print 'x=\n',x
>     filled = sign(diff(x)).astype(int32)
>     # fill zeros:
>     has_zeros = (filled == 0).any()
>     last = 0
>     if has_zeros:
>         for i in xrange(alen(filled)):
>             if filled[i] == 0:
>                 filled[i] = last
>             else:
>                 last = filled[i]
>     #print 'filled\n',filled
>     left = where( convolve(filled,
>                            array([-1, 1]), mode='full') - 2 == 0 )[0]
> 
>     if has_zeros:
>         filled = sign(diff(x)).astype(int32)
>         last = 0
>         for i in reversed(xrange(len(filled))):
>             if filled[i] == 0:
>                 filled[i] = last
>             else:
>                 last = filled[i]
> 
>     right = where( convolve(filled,
>                             array([-1, 1]), mode='full') - 2 == 0 )[0]
>     #print 'left\n',left
>     #print 'right\n',right
>     return around( (left + right) / 2.0 ).astype(int32)
> 
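
A small usage sketch (untested, so take the expected indices with a grain of salt; 
it assumes the function above has been defined):

from numpy import array

x = array([0, 1, 3, 3, 2, 5, 4])
print localmaxima_at(x)   # should point at the plateau (index 2 or 3) and at index 5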



bests,
 Samuel


Re: [Numpy-discussion] numpy.test() failure

2011-09-14 Thread Samuel John
Hi Nils,

Which version of numpy, and which OS?
I can infer that you use Python 2.6 in 64-bit, right?

Right at the beginning of the numpy.test() output there is some crucial information.
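
For example, pasting the output of something like this would already help 
(hypothetical snippet):

import sys, platform, numpy
print numpy.version.version    # e.g. '1.6.1'
print sys.version              # Python version and build
print platform.platform()      # OS / architecture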

bests
 Samuel

On 14.09.2011, at 22:09, Nils Wagner wrote:

> ERROR: test_polyfit (test_polynomial.TestDocs)
> --
> Traceback (most recent call last):
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/test_polynomial.py",
>  
> line 106, in test_polyfit
> weights = arange(8,1,-1)**2/7.0
> NameError: global name 'arange' is not defined
> 
> ==
> FAIL: Tests polyfit
> --
> Traceback (most recent call last):
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ma/tests/test_extras.py",
>  
> line 622, in test_polyfit
> assert_almost_equal(a, a_)
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ma/testutils.py", 
> line 155, in assert_almost_equal
> err_msg=err_msg, verbose=verbose)
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ma/testutils.py", 
> line 221, in assert_array_almost_equal
> header='Arrays are not almost equal')
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/ma/testutils.py", 
> line 186, in assert_array_compare
> verbose=verbose, header=header)
>   File 
> "/home/nwagner/local/lib64/python2.6/site-packages/numpy/testing/utils.py", 
> line 677, in assert_array_compare
> raise AssertionError(msg)
> AssertionError:
> Arrays are not almost equal
> 
> (mismatch 100.0%)
>  x: array([ 4.25134878,  1.14131297,  0.20519666, 
> 0.01701   ])
>  y: array([ 1.9345248 ,  0.49711011,  0.10202554, 
> 0.00928034])


Re: [Numpy-discussion] numpy blas running slow: how to check that it is properly linked

2011-09-07 Thread Samuel John

On 06.09.2011, at 22:13, David Cottrell wrote:

> Thanks, I didn't realize dot was not just calling dgemm or some
> variant which I assume would be reasonably fast. I see dgemm appears
> in the numpy code in various places such as the lapack_lite module.
> 
> I ran the svd test on the solaris setup and will check the OSX run
> when back at my laptop. 8.4 seconds is slightly slower than matlab but
> still seems reasonable.
> 
> $ ./test_03.py
> No ATLAS:
> (1000, 1000) (1000,) (1000, 1000)
> 8.17235898972

I just ran your benchmark code on OS X 10.7.1 on a 2011 MacBook Pro (Core i7) 
with numpy.version.version '2.0.0.dev-900d82e':
   Using ATLAS:
   ((1000, 1000), (1000,), (1000, 1000))
   0.908223152161
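
(The benchmark itself is not quoted here; presumably it is something along these 
lines -- a guess, not the original test_03.py:)

import time
import numpy as np

a = np.random.rand(1000, 1000)
t0 = time.time()
u, s, vt = np.linalg.svd(a)
print u.shape, s.shape, vt.shape
print time.time() - t0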

cheers, 
 Samuel


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-02-19 Thread Samuel John
Thanks Robin,

that makes sense and explains why I could not find any reference.

Perhaps the scipy.org wiki and install instructions should be updated.
I wonder how many people try to compile AMD and UMFPACK because they
think it's good for numpy to have them, just because the site.cfg contains
those entries!

To conclude: numpy does *NOT* use umfpack or libamd at all. Those
sections in the site.cfg are outdated.


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-02-18 Thread Samuel John
Ping.

How can I tell if numpy was successfully built against libamd.a and libumfpack.a?
How do I know whether they were successfully linked (statically)?
Is it possible to check from within numpy, e.g. via show_config()?
I think show_config() has no information about these in it :-(

Anybody?

Thanks,
 Samuel


Re: [Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-01-27 Thread Samuel John
Hi Paul,

thanks for your answer! I was not aware of numpy.show_config().

However, it does not say anything about libamd.a and libumfpack.a, right?
How do I know if they were successfully linked (statically)?
Does anybody have a clue?

greetings
 Samuel


[Numpy-discussion] How to tell if I succeeded to build numpy with amd, umfpack and lapack

2011-01-26 Thread Samuel John
Hi there!

I have successfully built numpy 1.5 on Ubuntu Lucid (32-bit for now).
I think I got ATLAS/LAPACK/BLAS support, and if I run
>  ldd linalg/lapack_lite.so
I see that my libptf77blas.so etc. are successfully linked. :-)

However, how do I find out if (and where) libamd.a and libumfpack.a
have been found and (statically) linked?
As far as I understand, if they are not present, a fallback in pure
Python is used, right?

Is there a recommended way to query which libs numpy has
been built against?
That way I could be sure numpy uses my own compiled versions of libamd, lapack
and so forth.
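
Two things that can be checked from Python (a sketch; note that get_info only 
reports what the build machinery would detect right now, not what an existing 
build actually linked against):

import numpy as np
np.show_config()                     # the BLAS/LAPACK sections found at build time

from numpy.distutils.system_info import get_info
print get_info('umfpack')            # what numpy's build machinery detects for [umfpack]
print get_info('amd')                # ... and for [amd]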

And fftw3 is no longer supported, I guess (even though it is still
mentioned in site.cfg.example).

Bests,
 Samuel


-- 
Dipl.-Inform. Samuel John
- - - - - - - - - - - - - - - - - - - - - - - - -
PhD student, CoR-Lab(.de) and
Neuroinformatics Group, Faculty
of Technology, D33594 Bielefeld
in cooperation with the HONDA
Research Institute Europe GmbH


jabber: samuelj...@jabber.org
- - - - - - - - - - - - - - - - - - - - - - - - -


Re: [Numpy-discussion] Building numpy on Mac OS X 10.6, i386 (no ppc) 32/64bit: Error in Fortran tests due to ppc64

2010-08-16 Thread Samuel John
Perhaps related tickets, but no perfect match (as far as I can judge):

-   http://projects.scipy.org/numpy/ticket/1399 "distutils fails to build ppc64 
support on Mac OS X when requested"
This revision is older than the one I used, ergo should already be applied.

-   http://projects.scipy.org/numpy/ticket/ "Fix endianness-detection on 
ppc64 builds"
closed. Already applied.

-   http://projects.scipy.org/numpy/ticket/527 "fortran linking flag option..."
Perhaps that linking flag could help to tell numpy (distutils) the right 
arch?

-   http://projects.scipy.org/numpy/ticket/1170 "Possible Bug in F2PY Fortran 
Compiler Detection"
Hmm, I don't know...

Samuel



[Numpy-discussion] Building numpy on Mac OS X 10.6, i386 (no ppc) 32/64bit: Error in Fortran tests due to ppc64

2010-08-16 Thread Samuel John
Hello!

At first, I'd like to say thanks to the numpy/scipy team and all contributors. 
Great software!

On Snow Leopard, aka Mac OS X 10.6.4 (server) I managed to build numpy 
2.0.0.dev8636 (and scipy 0.9.0.dev6646) for arch i386 in combined 32/64bit 
against MacPorts python27 (No ppc here!).

All tests pass (yeah!), except for the Fortran-related ones. I think there is 
an issue with detecting the right arch. My numpy and Python are both i386 32/64 
bit but not ppc.

Only these tests fail, all others pass:
test_callback.TestF77Callback.test_all ... ERROR
test_mixed.TestMixed.test_all ... ERROR
test_return_character.TestF77ReturnCharacter.test_all ... ERROR
test_return_character.TestF90ReturnCharacter.test_all ... ERROR
test_return_complex.TestF77ReturnComplex.test_all ... ERROR
test_return_complex.TestF90ReturnComplex.test_all ... ERROR
test_return_integer.TestF77ReturnInteger.test_all ... ERROR
test_return_integer.TestF90ReturnInteger.test_all ... ERROR
test_return_logical.TestF77ReturnLogical.test_all ... ERROR
test_return_logical.TestF90ReturnLogical.test_all ... ERROR
test_return_real.TestCReturnReal.test_all ... ok
test_return_real.TestF77ReturnReal.test_all ... ERROR
test_return_real.TestF90ReturnReal.test_all ... ERROR
[...]
--
Ran 2989 tests in 47.008s
FAILED (KNOWNFAIL=4, SKIP=1, errors=12)


Some more information (perhaps I made some known mistake in these steps? Details 
at the end of this mail):
o  Mac OS X 10.6.4 (intel Core 2 duo)
o  Python 2.7 (r27:82500, Aug 15 2010, 12:19:40) 
 [GCC 4.2.1 (Apple Inc. build 5659) + GF 4.2.4] on darwin
o  gcc --version
 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
o  gfortran --version
GNU Fortran (GCC) 4.2.1 (Apple Inc. build 5659) + GF 4.2.4
from gfortran from http://r.research.att.com/tools/
o  I used the BLAS/LAPACK that is provided by Apple's Accelerate framework. 
 
o  environment:
export CFLAGS="-arch i386 -arch x86_64"
export FFLAGS="-m32 -m64"
export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch i386 
-arch x86_64 -framework Accelerate"
o  bulid:
python setup.py build --fcompiler=gnu95
 

I have not found a matching ticket in trac. Should I open one or did I 
something very stupid during the build process? Thanks!

Samuel


PS: I did not succeed on the first attempt with python.org's official fat 
precompiled .dmg-file release (ppc/i386 32/64 bit), so I used MacPorts. Later 
today, I'll try again to compile against the python.org build, because I think 
numpy/scipy recommends that version.


For completeness, here are my build steps:

o   Building numpy/scipy from source:
http://scipy.org/Installing_SciPy/Mac_OS_X:
- Make sure XCode is installed with the Development target 10.4 SDK
- Download and install gfortran from http://r.research.att.com/tools/
- svn co http://svn.scipy.org/svn/numpy/trunk numpy
- svn co http://svn.scipy.org/svn/scipy/trunk scipy
- sudo port install fftw-3
- sudo port install suitesparse
- sudo port install swig-python
- mkdir scipy_numpy; cd scipy_numpy
- cd numpy
- cp site.cfg.example site.cfg
- You may want to copy the site.cfg to ~/.numpy-site.cfg
- Edit site.cfg to contain only the following:
 [DEFAULT]
 library_dirs = /opt/local/lib
 include_dirs = /opt/local/include
 [amd]
 amd_libs = amd
 [umfpack]
 umfpack_libs = umfpack
 [fftw]
 libraries = fftw3
- export MACOSX_DEPLOYMENT_TARGET=10.6
- export CFLAGS="-arch i386 -arch x86_64"
- export FFLAGS="-m32 -m64"
- export LDFLAGS="-Wall -undefined dynamic_lookup -bundle -arch i386 -arch 
x86_64 -framework Accelerate"
- export 
PYTHONPATH="/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/"
- python setup.py build --fcompiler=gnu95
- sudo python setup.py install
- cd ..
- cd scipy
- sed  's|include <\(umfpack[^\.]*\.h\)>|include 
|g' 
scipy/sparse/linalg/dsolve/umfpack/___tmp.i
- mv scipy/sparse/linalg/dsolve/umfpack/umfpack.i 
scipy/sparse/linalg/dsolve/umfpack/umfpack.old
- mv scipy/sparse/linalg/dsolve/umfpack/___tmp.i 
scipy/sparse/linalg/dsolve/umfpack/umfpack.i
- python setup.py build --fcompiler=gnu95
- cd
- python
  import numpy; numpy.test()
  import scipy; scipy.test()




A short excerpt of  numpy.test()'s output:


==
ERROR: test_return_real.TestF90ReturnReal.test_all
--
Traceback (most recent call last):
  File 
"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose-0.11.4-py2.7.egg/nose/case.py",
 line 367, in setUp
try_run(self.inst, ('setup', 'setUp'))
  File 
"/opt/local/Library/Frame