Re: [Numpy-discussion] Different results from repeated calculation

2007-02-01 Thread Keith Goodman

On 2/1/07, Robert Kern <[EMAIL PROTECTED]> wrote:

Keith Goodman wrote:
> A port to Octave of the test script works fine on the same system.

Are you sure that your Octave port uses ATLAS to do the matrix product? Could
you post your port?


Here's the port. Yes, Octave uses ATLAS for matrix multiplication.
Maybe the problem is a race condition, and by a timing accident the
outcome is always the same in Octave...

function repeat()
  nsim = 100;
  [x, y] = load();
  z0 = calc(x, y);
  for i = 1:nsim
    disp(i)
    [x, y] = load();
    z = calc(x, y);
    if any(z ~= z0)
      printf("%d Max difference = %f\n", i, max(abs((z - z0) / z0)))
    end
  end

function z = calc(x, y)
  z = x * (x' * y);

function [x, y] = load()

  # x data
  x = zeros(3,3);
  x(1,1) =  0.00301404794991108;
  x(1,2) =  0.0026474226678;
  x(1,3) = -0.00112705028731085;
  x(2,1) =  0.0228605377994491;
  x(2,2) =  0.00337153112741583;
  x(2,3) = -0.00823674912992519;
  x(3,1) =  0.00447839875836716;
  x(3,2) =  0.00274880280576514;
  x(3,3) = -0.00161133933606597;

  # y data
  y = zeros(3,1);
  y(1,1) = 0.000885398;
  y(2,1) = 0.00667193;
  y(3,1) = 0.000324727;
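
For reference, a rough NumPy sketch of the same test (just my quick
translation; load_xy is a renaming of load):

import numpy as np

def load_xy():
    # Same data as in the Octave script above.
    x = np.array([[ 0.00301404794991108,  0.0026474226678,     -0.00112705028731085],
                  [ 0.0228605377994491,   0.00337153112741583, -0.00823674912992519],
                  [ 0.00447839875836716,  0.00274880280576514, -0.00161133933606597]])
    y = np.array([[0.000885398], [0.00667193], [0.000324727]])
    return x, y

def calc(x, y):
    # z = x * (x' * y) in the Octave notation
    return np.dot(x, np.dot(x.T, y))

x, y = load_xy()
z0 = calc(x, y)
for i in range(100):
    x, y = load_xy()
    z = calc(x, y)
    if (z != z0).any():
        print i, 'Max difference =', abs((z - z0) / z0).max()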
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-01 Thread Charles R Harris

On 2/1/07, Keith Goodman <[EMAIL PROTECTED]> wrote:


On 2/1/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
> This problem may be related to this bug:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294

It says it is fixed in libc6 2.3.5. I'm on 2.3.6. But do you think it
is something similar?



I do; I am suspicious that the roundoff mode flag is changing state. But
these sorts of bugs are notoriously hard to track down. You did good work
isolating it to ATLAS and SSE.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-01 Thread Robert Kern
Keith Goodman wrote:
> A port to Octave of the test script works fine on the same system.

Are you sure that your Octave port uses ATLAS to do the matrix product? Could
you post your port?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-01 Thread Keith Goodman
On 2/1/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
> This problem may be related to this bug:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294

It says it is fixed in libc6 2.3.5. I'm on 2.3.6. But do you think it
is something similar?

A port to Octave of the test script works fine on the same system.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Different results from repeated calculation

2007-02-01 Thread Charles R Harris

On 1/28/07, Keith Goodman <[EMAIL PROTECTED]> wrote:


On 1/27/07, Keith Goodman <[EMAIL PROTECTED]> wrote:
> On 1/27/07, Fernando Perez <[EMAIL PROTECTED]> wrote:
> > It's definitely looking like something SMP related: on my laptop, with
> > everything other than the hardware being identical (Linux distro,
> > kernel, numpy build, etc), I can't make it fail no matter how I muck
> > with it.  I always get '0 differences'.
> >
> > The desktop is a dual-core AMD Athlon as indicated before, the laptop
> > is an oldie Pentium III.  They both run the same SMP-aware Ubuntu i686
> > kernel, since Ubuntu now ships a unified kernel, though obviously on
> > the laptop the SMP code isn't active.
>
> After installing a kernel that is not SMP-aware, I still have the same
> problem.

The problem goes away if I remove atlas (atlas3-sse2 for me). But that
just introduces another problem: slowness.



This problem may be related to this bug:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=279294

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab

2007-02-01 Thread David Cournapeau
Alan G Isaac wrote:
> On Wed, 31 Jan 2007, David Cournapeau apparently wrote: 
>> With pyaudiolab, you should be able to read and write most 
>> common audio files from and to numpy arrays. The 
>> underlying IO operations are done using libsndfile from 
>> Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/) 
>
> I think it is worth mentioning (on this list) that 
> pyaudiolab uses the SciPy license and libsndfile is LGPL.
Indeed, I forgot to mention this fact in the announcement. It is 
mentioned somewhere in the source, but it should be done better. It is 
the only reason that pyaudiolab is not part of scipy.

Your post made me realize that I actually didn't look at how to apply 
the license correctly, which is not good at all (that's the first 
project I started from scratch). I will change that.

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab

2007-02-01 Thread Alan G Isaac
On Wed, 31 Jan 2007, David Cournapeau apparently wrote: 
> With pyaudiolab, you should be able to read and write most 
> common audio files from and to numpy arrays. The 
> underlying IO operations are done using libsndfile from 
> Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/) 

I think it is worth mentioning (on this list) that 
pyaudiolab uses the SciPy license and libsndfile is LGPL.

Cheers,
Alan Isaac




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Complex arange

2007-02-01 Thread Robert Kern
Russel Howe wrote:
>> For example, what should arange(1j, 5j, 1) do? Numeric raises an
>> exception here, and I think numpy should, too.
> 
> The same as arange(1, 5, 1j) - an empty array since it takes 0 of the  
> step to cross the distance. 

I'm not sure that's really the answer. I think it's simply not defined. No
number of steps (which is different than 0 steps) along the imaginary axis will
take 1+0j to 5+0j.

> But something like
> arange(1j, 5j, 1j) seems fine.  As does arange(1j, 3+5j, 2+1j) which  
> should give [ 1j, 2+2j ].  The idea is to walk by step up to the edge  
> of the box.  I seem to recall a discussion of why this was a bad idea  
> a while ago on this list, but I can't find it...

Box? *Aaah*! You're looking at z1 as placing distinct upper (or lower) bounds
on the real and imaginary parts rather than specifying a point target. That's
a... unique perspective.  ;-)

But then, I'm of the opinion that arange() should be reserved for integers, and
the other use cases are better served by linspace() instead.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Complex arange

2007-02-01 Thread Russel Howe

> For example, what should arange(1j, 5j, 1) do? Numeric raises an
> exception here, and I think numpy should, too.
>

The same as arange(1, 5, 1j) - an empty array since it takes 0 of the  
step to cross the distance.  But something like
arange(1j, 5j, 1j) seems fine.  As does arange(1j, 3+5j, 2+1j) which  
should give [ 1j, 2+2j ].  The idea is to walk by step up to the edge  
of the box.  I seem to recall a discussion of why this was a bad idea  
a while ago on this list, but I can't find it...

The exception is a good answer too, but it should probably happen for  
all complex arguments, since most seem to return an empty array now.

Russel
they're all fine hovses.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Complex arange

2007-02-01 Thread Timothy Hochberg

On 2/1/07, Robert Kern <[EMAIL PROTECTED]> wrote:

> Russel Howe wrote:
>
> (It's good to see so many Rudds seeing sense and using Python and
> numpy.  ;-))

rudds! Here? Dear me.


--

//=][=\\

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Robert Kern
Christopher Barker wrote:
> Sebastian Haase wrote:
> 
>> Could you explain what a possible downside of this would be !?
>> It seems that if you don't need to refer to a specific "self" object
>> that a class-method is what it should - is this not always right !?
> 
> Well, what these really are are alternate constructors. I don't think 
> I've seen class methods used that way, but then I haven't seen them used 
> much at all.

Alternate constructors is probably the primary use case for class methods that
I've seen. It's certainly the most frequent reason I've made them.

> Sometimes I have wished for an overloaded constructor, i.e.:
> 
> array(SomeBuffer)
> 
> results in the same thing as
> 
> frombuffer(SomeBuffer)
> 
> 
> but Python doesn't really "do" overloaded methods, and there are some 
> times when there wouldn't be only one way the input could be interpreted.

Well, array() is already very, very overloaded. That's why it's difficult to use
sometimes. BTW, you might want to check out the module simplegeneric for a good
way to implement certain kinds of overloading. Just not numpy.array(), please 
;-).

  http://cheeseshop.python.org/pypi/simplegeneric/
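
(For the curious, a minimal sketch of the simplegeneric style of
overloading, assuming the @generic/when_type API described on that page:

from simplegeneric import generic

@generic
def describe(obj):
    # Default implementation, used when no more specific type matches.
    return 'object: %r' % (obj,)

@describe.when_type(list)
def describe_list(obj):
    # Overload chosen when obj is a list.
    return 'list of %d items' % len(obj)

print describe([1, 2, 3])   # -> list of 3 items
print describe(42)          # -> object: 42

Each overload is registered against a type, rather than packed into one
constructor that has to guess what its argument means.)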

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] New may_share_memory function

2007-02-01 Thread Christopher Barker
thanks Travis,

Now I just need to remember it's there when I need it!

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Complex arange

2007-02-01 Thread Robert Kern
Russel Howe wrote:

(It's good to see so many Rudds seeing sense and using Python and numpy.  ;-))

> Should this work?
> 
> Python 2.4.3 (#1, Dec 27 2006, 21:18:13)
> [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
>  >>> import numpy as N
>  >>> N.__version__
> '1.0.2.dev3531'
>  >>> N.arange(1j, 5j)
> array([], dtype=complex128)
>  >>> N.arange(1j, 5j, 1j)
> array([], dtype=complex128)
> 
> Currently, the real direction is determined to be of zero length  
> (multiarraymodule.c _calc_length), and the length of the array is the  
> minimal length.  I can understand the first one not working (default  
> step is 1, it takes 0 of those to cover this range), but the second  
> seems like a bug.

arange() is pretty much only defined for real numbers. For general z0, z1, and
dz, there is no guarantee that (z1 - z0)/dz is a real number as it needs to be.
dz may point in a different direction than (z1 - z0). For example, what should
arange(1j, 5j, 1) do? Numeric raises an exception here, and I think numpy
should, too.
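
To make that constraint concrete, a hypothetical complex-aware arange
(call it carange; just a sketch, not a proposal for the numpy API) would
have to verify that (z1 - z0)/dz is real and positive:

import numpy

def carange(start, stop, step):
    # Only meaningful when step lies along the direction of (stop - start),
    # i.e. when (stop - start)/step is a positive real number.
    ratio = (stop - start) / step
    if abs(ratio.imag) > 1e-12 * abs(ratio) or ratio.real <= 0:
        raise ValueError("step does not lie along (stop - start)")
    n = int(numpy.ceil(ratio.real))
    return start + step * numpy.arange(n)

Under that rule carange(1j, 5j, 1j) gives [1j, 2j, 3j, 4j], while
carange(1j, 3+5j, 2+1j) raises, since (3+4j)/(2+1j) = 2+1j is not real.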

Of course, linspace() is generally preferred for floating point types, whether
complex or real: with arange() it is difficult to predict whether the stop
value will be included or not. And since you give linspace() a count instead of
a step size, we can guarantee that the step does lie along the vector (z1-z0).

In [8]: linspace(1j, 5j, 5)
Out[8]: array([ 0.+1.j,  0.+2.j,  0.+3.j,  0.+4.j,  0.+5.j])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] New may_share_memory function

2007-02-01 Thread Travis Oliphant

In SVN there is a new function, may_share_memory(a, b), which will return 
True if the memory footprints of the two arrays overlap.

 >>> may_share_memory(a, flipud(a))
True

This is based on another utility function, byte_bounds, that returns the 
byte boundaries of any object exporting the Python side of the array 
interface.

Perhaps these utilities will help (I know they can be used to make the 
who function a bit more intelligent about how many bytes are being used).
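
A quick sketch of the intended use (assuming both functions end up in the
top-level numpy namespace):

import numpy
from numpy import may_share_memory, byte_bounds

a = numpy.arange(10)
b = a[2:5]       # a view onto a's buffer
c = a.copy()     # freshly allocated memory

print may_share_memory(a, b)   # True  -- the byte ranges overlap
print may_share_memory(a, c)   # False -- disjoint allocations
print byte_bounds(b)           # (low, high) addresses of b's data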

-Travis



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Christopher Barker
Sebastian Haase wrote:

> Could you explain what a possible downside of this would be !?
> It seems that if you don't need to refer to a specific "self" object
> that a class-method is what it should be - is this not always right !?

Well, what these really are are alternate constructors. I don't think 
I've seen class methods used that way, but then I haven't seen them used 
much at all.

Sometimes I have wished for an overloaded constructor, i.e.:

array(SomeBuffer)

results in the same thing as

frombuffer(SomeBuffer)


but Python doesn't really "do" overloaded methods, and there are some 
times when there wouldn't be only one way the input could be interpreted.

That all being the case, it seems to make some sense to put these in as 
class methods, but:

a = numpy.ndarray.fromfile(MyFile)

does feel a bit awkward.

wxPython handles this by having a few constructors:

wx.EmptyBitmap()
wx.BitmapFromImage()
wx.BitmapFromBuffer()
etc...

but that's kind of clunky too.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Travis Oliphant
Christopher Barker wrote:

>Travis Oliphant wrote:
>  
>
>>I'm thinking that we should have several.  For example all the fromXXX 
>>functions should probably be classmethods
>>
>>ndarray.frombuffer
>>ndarray.fromfile
>>
>>
>
>would they still be accessible in their functional form in the numpy 
>namespace?
>
>  
>
Yes, until a major revision at which point they could (if deemed useful) 
be removed after a deprecation warning period.

-Travis


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Travis Oliphant
Sebastian Haase wrote:

>Travis,
>Could you explain what a possible downside of this would be !?
>It seems that if you don't need to refer to a specific "self" object
>that a class-method is what it should be - is this not always right !?
>
>  
>
I don't understand the last point.   Classmethods would get inherited by 
sub-classes by default, as far as I know.

I can't think of any downsides.  I would have to understand how 
class methods are actually implemented, though, before I could comment 
on the speed implications of class methods.

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Complex arange

2007-02-01 Thread Russel Howe
Should this work?

Python 2.4.3 (#1, Dec 27 2006, 21:18:13)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy as N
 >>> N.__version__
'1.0.2.dev3531'
 >>> N.arange(1j, 5j)
array([], dtype=complex128)
 >>> N.arange(1j, 5j, 1j)
array([], dtype=complex128)

Currently, the real direction is determined to be of zero length  
(multiarraymodule.c _calc_length), and the length of the array is the  
minimal length.  I can understand the first one not working (default  
step is 1, it takes 0 of those to cover this range), but the second  
seems like a bug.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Sebastian Haase
Travis,
Could you explain what a possible downside of this would be !?
It seems that if you don't need to refer to a specific "self" object
that a class-method is what it should be - is this not always right !?

-Sebastian



On 2/1/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Travis Oliphant wrote:
> > What is the attitude of this group about the ndarray growing some class
> > methods?
>
> Works for me.
>
> --
> Robert Kern
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Christopher Barker
Travis Oliphant wrote:
> I'm thinking that we should have several.  For example all the fromXXX 
> functions should probably be classmethods
> 
> ndarray.frombuffer
> ndarray.fromfile

would they still be accessible in their functional form in the numpy 
namespace?

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Robert Kern
Travis Oliphant wrote:
> What is the attitude of this group about the ndarray growing some class 
> methods?

Works for me.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Pierre GM
On Thursday 01 February 2007 18:48:56 Travis Oliphant wrote:
> What is the attitude of this group about the ndarray growing some class
> methods?

> ndarray.frombuffer
> ndarray.fromfile

Sounds great. But what would really make my semester is to have 
ndarray.__new__ accept optional keywords (as **whatever) on top of the 
shape/buffer/order... That'd be a big help for subclassing.
On a side note, I came to accept the fact that Santa Claus doesn't really 
live at the North Pole, so I would understand if it wasn't feasible...
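
For the record, a bare-bones sketch of what I mean (InfoArray is a
made-up name); today the extra keyword has to be peeled off by hand
before it reaches ndarray.__new__:

import numpy

class InfoArray(numpy.ndarray):
    def __new__(cls, shape, info=None, **kwds):
        # ndarray.__new__ doesn't accept the extra keyword, so strip it
        # out here instead of passing everything straight through.
        obj = numpy.ndarray.__new__(cls, shape, **kwds)
        obj.info = info
        return obj

a = InfoArray((3,), info='metadata')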
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] classmethods for ndarray

2007-02-01 Thread Travis Oliphant
What is the attitude of this group about the ndarray growing some class 
methods?

I'm thinking that we should have several.  For example all the fromXXX 
functions should probably be classmethods

ndarray.frombuffer
ndarray.fromfile

etc.
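
For anyone unfamiliar with the pattern, a toy sketch (plain Python, not
the proposed numpy code) of an alternate constructor as a classmethod:

class Vector(object):
    def __init__(self, data):
        self.data = list(data)

    @classmethod
    def fromfile(cls, f):
        # Alternate constructor: build an instance from an open file.
        # Because it receives cls, subclasses inherit it for free.
        return cls(float(line) for line in f)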



-Travis
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] SciPy '07 ???

2007-02-01 Thread Christopher Barker
Hi,

Does anyone know if there will be a SciPy '07 conference, and if so, when?

I'd really like to try to get there this year, but need to start 
planning my summer schedule.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Robert:

thanks - I appreciate the advice, and hopefully a) Leopard will get  
here in a few months, and b) that will fix this.


cheers!

Lou Wicker

On Feb 1, 2007, at 3:11 PM, Robert Kern wrote:

> Louis Wicker wrote:
>> Sebastian:
>>
>> that code helps a lot.  A standard gcc (no flags) of that code breaks,
>> but if you compile it with gcc -m64, you can address large memory
>> spaces.
>>
>> So I will try and compile numpy with -m64
>
> It won't work. Your Python is not compiled as a 64-bit program. The
> whole stack down to the runtime libraries needs to be built as a 64-bit
> program. That's easy enough to do with a single-file C program, but in
> Tiger the 64-bit runtime provides very few services, not enough to
> build Python.
>
> --
> Robert Kern


 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Robert Kern
Louis Wicker wrote:
> Sebastian:
> 
> that code helps a lot.  A standard gcc (no flags) of that code breaks,
> but if you compile it with gcc -m64, you can address large memory spaces.
> 
> So I will try and compile numpy with -m64

It won't work. Your Python is not compiled as a 64-bit program. The whole stack
down to the runtime libraries needs to be built as a 64-bit program. That's easy
enough to do with a single-file C program, but in Tiger the 64-bit runtime
provides very few services, not enough to build Python.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Sebastian:

that code helps a lot.  A standard gcc (no flags) compile of that code
breaks, but if you compile it with gcc -m64, you can address large
memory spaces.


So I will try and compile numpy with -m64

Lou

On Feb 1, 2007, at 2:01 PM, Sebastian Haase wrote:


#include <stdio.h>
#include <stdlib.h>

int main() {
  size_t n;
  void *p;
  double gb;
  for (gb = 10; gb > .3; gb -= .5) {
    n = 1024L * 1024L * 1024L * gb;
    p = malloc(n);
    printf("%12lu %4.1lfGb %p\n", n, n/1024./1024./1024., p);
    free(p);
  }
  return 0;
}


 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Timothy Hochberg

On 2/1/07, Christopher Barker <[EMAIL PROTECTED]> wrote:


> Zachary Pincus wrote:
> > Say a function that (despite Tim's pretty
> > reasonable 'don't do that' warning) will return true when two arrays
> > have overlapping memory?
>
> I think it would be useful, even if it's not robust. I'd still like to
> know if a given two arrays COULD share data.
>
> I suppose to really be robust, what I'd really want to know is if a
> given array shares data with ANY other array, i.e. could changing this
> mess something up? -- but I'm pretty sure that is next to impossible



It's not totally impossible in theory -- languages like Haskell and Clean
(which I'm playing with now) manage to use arrays that get updated without
copying, while still maintaining the illusion that everything is constant
and thus you can't mess up any other arrays. While it's fun to play with, and
Clean is allegedly pretty fast, it takes quite a bit of work to wrap one's
head around.

In a language like Python I expect that it would be pretty hard to come up
with something useful. Most of the checks would probably be too conservative
and thus not useful.

-tim

--

//=][=\\

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Travis Oliphant
Christopher Barker wrote:

>Zachary Pincus wrote:
>  
>
>>Say a function that (despite Tim's pretty  
>>reasonable 'don't do that' warning) will return true when two arrays  
>>have overlapping memory?
>>
>>
>
>I think it would be useful, even if it's not robust. I'd still like to 
>know if a given two arrays COULD share data.
>
>I suppose to really be robust, what I'd really want to know is if a 
>given array shares data with ANY other array, i.e. could changing this 
>mess something up? -- but I'm pretty sure that is next to impossible
>
>  
>
Yeah, we don't keep track of who has a reference to a particular array.  
The only way to get that information would be to walk through all the 
objects defined and see if any of them share memory with me.

You can sometimes get away with it by looking at the reference count of 
the object.  But, the reference count is used in more ways than that and 
so it's a very conservative check.   In the array interface I'm 
proposing for inclusion into Python, an object that shares memory could 
define a "call-back" function that (if defined) would be called when the 
view to the memory was released.  That way objects could store 
information regarding how many "views" they have extant.

-Travis



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Christopher Barker
Zachary Pincus wrote:
> Say a function that (despite Tim's pretty  
> reasonable 'don't do that' warning) will return true when two arrays  
> have overlapping memory?

I think it would be useful, even if it's not robust. I'd still like to 
know if a given two arrays COULD share data.

I suppose to really be robust, what I'd really want to know is if a 
given array shares data with ANY other array, i.e. could changing this 
mess something up? -- but I'm pretty sure that is next to impossible

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Travis Oliphant
Zachary Pincus wrote:

>>>A question, then: Does this represent a bug? Or perhaps there is a
>>>better idiom for modifying an array in-place than 'a[:] = ...'? Or is
>>>incumbent on the user to ensure that any time an array is directly
>>>modified, that the modifying array is not a view of the original  
>>>array?
>>>
>>>
>>>  
>>>
>>Yes, it is and has always been incumbent on the user to ensure that  
>>any
>>time an array is directly modified in-place that the modifying  
>>array is
>>not a "view" of the original array.
>>
>>
>
>Fair enough. Now, how does a user ensure this -- say someone like me,  
>who has been using numpy (et alia) for a couple of years, but clearly  
>not long enough to have an 'intuitive' feel for every time something  
>might be a view (a feeling that must seem quite natural to long-time  
>numpy users, who may have forgotten precisely how long it takes to  
>develop that level of intuition)?
>  
>
Basically, red flags go off when you do in-place modification of any 
kind, and you make sure you don't have an inappropriate view.  That 
pretty much describes my "intuition."  Views arise from "slicing" 
notation.  That flipud returns a view is a bit obscure and should be 
documented better.  

>Documentation of what returns views helps, for sure. Would any other  
>'training' mechanisms help? Say a function that (despite Tim's pretty  
>reasonable 'don't do that' warning) will return true when two arrays  
>have overlapping memory? Or an 'inplace_modify' function that takes  
>the time to make that check?
>  
>
I thought I had written a function that would see if two input arrays 
have overlapping memory, but maybe not.  It's not hard for a contiguous 
chunk of memory, but for two views it's a harder function to write.  

It's probably a good idea to have such a thing, however.

-Travis


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Christopher Barker
Zachary Pincus wrote:
>>> I recently was trying to write code to modify an array in-place (so
>>> as not to invalidate any references to that array)
>> I'm not sure what this means exactly.
> 
> Say one wants to keep two different variables referencing a single in- 
> memory list, as so:
> a = [1,2,3]
> b = a
> Now, if 'b' and 'a' go to live in different places (different class  
> instances or whatever) but we want 'b' and 'a' to always refer to the  
> same in-memory object, so that 'id(a) == id(b)', we need to make sure  
> to not assign a brand new list to either one.

OK, got it, but numpy arrays are not quite the same as lists, there is 
the additional complication that two different array objects can share 
the same data:

 >>> a = N.ones((5,))
 >>> b = a[:]
 >>> a is b
False
 >>> a[2] = 5
 >>> a
array([ 1.,  1.,  5.,  1.,  1.])
 >>> b
array([ 1.,  1.,  5.,  1.,  1.])

This is very useful, but can be tricky. In a way, it's like a nested list:
 >>> a = [[1,2,3,4]]
 >>> b = [a[0]]
 >>> a is b
False
 >>> a[0][2] = 5
 >>> a
[[1, 2, 5, 4]]
 >>> b
[[1, 2, 5, 4]]

hey! changing a changed b too!

So key is that in your case, it probably doesn't matter if a and b are 
the same object, as long as they share the same data, and having 
multiple arrays sharing the same data is a common idiom in numpy.

> That is, if we do something like 'a = [i + 1 for i in a]' then
> 'id(a) != id(b)'. However, we can do 'a[:] = [i + 1 for i in a]' to
> modify a in-place.

Ah, but as Travis pointed out, the difference is not in assignment or 
anything like that, but in the fact that a list comprehension produces a 
copy, which is analogous to:

flipud(a).copy()

In numpy, you DO need to be aware of when you are getting copies, and 
when you are getting views, and what the consequences are.

So really, the only "bug" here is in the docs -- they should make it 
clear whether a function returns a copy or a view.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Sebastian Haase
Here is a small c program that we used more than a year ago to confirm
that tiger is really doing a 64-bit malloc (on G5).


#include <stdio.h>
#include <stdlib.h>

int main() {
  size_t n;
  void *p;
  double gb;
  for (gb = 10; gb > .3; gb -= .5) {
    n = 1024L * 1024L * 1024L * gb;
    p = malloc(n);
    printf("%12lu %4.1lfGb %p\n", n, n/1024./1024./1024., p);
    free(p);
  }
  return 0;
}


Hope this helps anyone.
Sebastian


On 2/1/07, Travis Oliphant <[EMAIL PROTECTED]> wrote:
> Louis Wicker wrote:
>
> > Travis:
> >
> > yes it does.  It's the Woodcrest server chip, which
> > supports 32 and 64 bit operations.  For example the new Intel Fortran
> > compiler can grab more than 2 GB of memory (it's a beta10 version).  I
> > think gcc 4.x can as well.
> >
> Nice.  I didn't know this.
>
> > However, Tiger (OS X 10.4.x) is not completely 64 bit compliant -
> > Leopard is supposed to be pretty darn close.
> >
> > Is there a numpy flag I could try for compilation
>
> It's entirely compiler and system dependent.  NumPy just uses the system
> malloc.  If you can compile it so that the system malloc supports 64-bit
> then O.K. (but you will probably run into trouble unless Python is also
> compiled as a 64-bit application).   From Robert's answer, I guess it is
> impossible under Tiger to compile with 64-bit support.
>
> -Travis
>
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
>
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Zachary Pincus
>> A question, then: Does this represent a bug? Or perhaps there is a
>> better idiom for modifying an array in-place than 'a[:] = ...'? Or is
>> incumbent on the user to ensure that any time an array is directly
>> modified, that the modifying array is not a view of the original  
>> array?
>>
>>
> Yes, it is and has always been incumbent on the user to ensure that  
> any
> time an array is directly modified in-place that the modifying  
> array is
> not a "view" of the original array.

Fair enough. Now, how does a user ensure this -- say someone like me,  
who has been using numpy (et alia) for a couple of years, but clearly  
not long enough to have an 'intuitive' feel for every time something  
might be a view (a feeling that must seem quite natural to long-time  
numpy users, who may have forgotten precisely how long it takes to  
develop that level of intuition)?

Documentation of what returns views helps, for sure. Would any other  
'training' mechanisms help? Say a function that (despite Tim's pretty  
reasonable 'don't do that' warning) will return true when two arrays  
have overlapping memory? Or an 'inplace_modify' function that takes  
the time to make that check?

Perhaps I'm the first to have views bite me in this precise way.  
However, if there are common failure-modes with views, I hope it's  
not too unreasonable to ask about ways that those common problems  
might be addressed. (Other than just saying "train for ten years, and  
you too will have numpy-fu, my son.") Giving newbies tools to deal  
with common problems with admittedly "dangerous" constructs might be  
useful.

Zach
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Travis Oliphant
Louis Wicker wrote:

> Travis:
>
> yes it does.  It's the Woodcrest server chip, which 
> supports 32 and 64 bit operations.  For example the new Intel Fortran 
> compiler can grab more than 2 GB of memory (it's a beta10 version).  I 
> think gcc 4.x can as well.
>
Nice.  I didn't know this.

> However, Tiger (OS X 10.4.x) is not completely 64 bit compliant - 
> Leopard is supposed to be pretty darn close.  
>
> Is there a numpy flag I could try for compilation

It's entirely compiler and system dependent.  NumPy just uses the system 
malloc.  If you can compile it so that the system malloc supports 64-bit 
then O.K. (but you will probably run into trouble unless Python is also 
compiled as a 64-bit application).   From Robert's answer, I guess it is 
impossible under Tiger to compile with 64-bit support.

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Thanks, Robert --

that's kinda what I thought.  Since Leopard is not far off, by
summer things will be fine, I hope...


L

On Feb 1, 2007, at 1:48 PM, Robert Kern wrote:


> Travis Oliphant wrote:
>> Louis Wicker wrote:
>>>
>>> Dear list:
>>>
>>> I cannot seem to figure how to create arrays > 2 GB on a Mac Pro
>>> (using Intel chip and Tiger, 4.8).  I have hand compiled both Python
>>> 2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also
>>> run out of space if I try to create 3-6 arrays of 1000 MB or so (the
>>> mem-alloc failure does not seem consistent; it depends on whether I am
>>> creating them with a "numpy.ones()" call, or creating them on the fly
>>> by doing math with the other arrays, e.g., c = 4.3*a + 3.1*b).
>>>
>>> Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have
>>> tried this on the SGI Altix, and this works fine.
>>
>> It must be a malloc issue.  NumPy uses the system malloc to construct
>> arrays.  It just reports errors back to you if it can't.
>>
>> I don't think the Mac Pro uses a 64-bit chip, does it?
>
> Intel Core 2 Duo's are 64-bit, but the OS and runtime libraries are not.
> None of the Python distributions for OS X are compiled for 64-bit
> support, either.
>
> When Leopard comes out, there will be better 64-bit support in the OS,
> and Python will follow. Until then, Python on OS X is 32-bit.
>
> --
> Robert Kern


 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Travis:

quick follow up:  Mac Pros currently have the dual-core 5100 Xeon
(two processors, two cores each); the 5300 Xeons (quad-core) are
coming in a few weeks, we think.


Lou

On Feb 1, 2007, at 1:41 PM, Travis Oliphant wrote:

> Louis Wicker wrote:
>>
>> Dear list:
>>
>> I cannot seem to figure how to create arrays > 2 GB on a Mac Pro
>> (using Intel chip and Tiger, 4.8).  I have hand compiled both Python
>> 2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also
>> run out of space if I try to create 3-6 arrays of 1000 MB or so (the
>> mem-alloc failure does not seem consistent; it depends on whether I am
>> creating them with a "numpy.ones()" call, or creating them on the fly
>> by doing math with the other arrays, e.g., c = 4.3*a + 3.1*b).
>>
>> Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have
>> tried this on the SGI Altix, and this works fine.
>
> It must be a malloc issue.  NumPy uses the system malloc to construct
> arrays.  It just reports errors back to you if it can't.
>
> I don't think the Mac Pro uses a 64-bit chip, does it?
>
> -Travis


 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Travis:

yes it does.  It's the Woodcrest server chip, which supports 32- and
64-bit operations.  For example, the new Intel Fortran compiler can
grab more than 2 GB of memory (it's a beta10 version).  I think gcc 4.x
can as well.

However, Tiger (OS X 10.4.x) is not completely 64-bit compliant -
Leopard is supposed to be pretty darn close.

Is there a numpy flag I could try for compilation?

Lou


On Feb 1, 2007, at 1:41 PM, Travis Oliphant wrote:

> Louis Wicker wrote:
>>
>> Dear list:
>>
>> I cannot seem to figure how to create arrays > 2 GB on a Mac Pro
>> (using Intel chip and Tiger, 4.8).  I have hand compiled both Python
>> 2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also
>> run out of space if I try to create 3-6 arrays of 1000 MB or so (the
>> mem-alloc failure does not seem consistent; it depends on whether I am
>> creating them with a "numpy.ones()" call, or creating them on the fly
>> by doing math with the other arrays, e.g., c = 4.3*a + 3.1*b).
>>
>> Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have
>> tried this on the SGI Altix, and this works fine.
>
> It must be a malloc issue.  NumPy uses the system malloc to construct
> arrays.  It just reports errors back to you if it can't.
>
> I don't think the Mac Pro uses a 64-bit chip, does it?
>
> -Travis

 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Robert Kern
Travis Oliphant wrote:
> Louis Wicker wrote:
> 
>> Dear list:
>>
>> I cannot seem to figure how to create arrays > 2 GB on a Mac Pro 
>> (using Intel chip and Tiger, 4.8).  I have hand compiled both Python 
>> 2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also 
>> run out of space if I try to create 3-6 arrays of 1000 MB or so (the 
>> mem-alloc failure does not seem consistent, depends on whether I am 
>> creating them with a "numpy.ones()" call, or creating them on the fly 
>> by doing math with the other arrays "e.g., c  = 4.3*a + 3.1*b").
>>
>> Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have 
>> tried this on the SGI Altix, and this works fine.
> 
> It must be a malloc issue.  NumPy uses the system malloc to construct 
> arrays.  It just reports errors back to you if it can't.
> 
> I don't think the Mac Pro uses a 64-bit chip, does it?

Intel Core 2 Duo's are 64-bit, but the OS and runtime libraries are not. None of
the Python distributions for OS X are compiled for 64-bit support, either.

When Leopard comes out, there will be better 64-bit support in the OS, and
Python will follow. Until then, Python on OS X is 32-bit.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Travis Oliphant
Louis Wicker wrote:

> Dear list:
>
> I cannot seem to figure how to create arrays > 2 GB on a Mac Pro 
> (using Intel chip and Tiger, 4.8).  I have hand compiled both Python 
> 2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also 
> run out of space if I try to create 3-6 arrays of 1000 MB or so (the 
> mem-alloc failure does not seem consistent, depends on whether I am 
> creating them with a "numpy.ones()" call, or creating them on the fly 
> by doing math with the other arrays "e.g., c  = 4.3*a + 3.1*b").
>
> Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have 
> tried this on the SGI Altix, and this works fine.

It must be a malloc issue.  NumPy uses the system malloc to construct 
arrays.  It just reports errors back to you if it can't.

I don't think the Mac Pro uses a 64-bit chip, does it?

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Cutting 1.0.2 release

2007-02-01 Thread Sven Schreiber
Travis Oliphant schrieb:
> I think it's time for the 1.0.2 release of NumPy.
> 
> What outstanding issues need to be resolved before we do it?  
> 

Hi,
I just used real_if_close for the first time, and promptly discovered
that it turns matrix input into array output:

>>> import numpy as n
>>> n.__version__
'1.0.1'
>>> n.real_if_close(n.mat(1))
array([[1]])

Maybe it's something for 1.0.2, or should I file a ticket?

I could also do another round of systematic searches for other functions
that still behave like this, but I'm afraid not before 1.0.2 (if that
happens this weekend).
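
In the meantime, a user-side workaround sketch (assuming asmatrix keeps
behaving as it does now) is simply to re-wrap the result:

>>> n.asmatrix(n.real_if_close(n.mat(1)))
matrix([[1]])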

Thanks,
Sven
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] large memory address space on Mac OS X (intel)

2007-02-01 Thread Louis Wicker

Dear list:

I cannot seem to figure how to create arrays > 2 GB on a Mac Pro  
(using Intel chip and Tiger, 4.8).  I have hand compiled both Python  
2.5 and numpy 1.0.1, and cannot make arrays bigger than 2 GB.  I also  
run out of space if I try to create 3-6 arrays of 1000 MB or so
(the mem-alloc failure does not seem consistent; it depends on whether I
am creating them with a "numpy.ones()" call, or creating them on the
fly by doing math with the other arrays, e.g., c = 4.3*a + 3.1*b).


Is this a numpy issue, or a Python 2.5 issue for the Mac?  I have  
tried this on the SGI Altix, and this works fine.


If there is a compile flag to turn on 64 bit support in the Mac  
compile, I would be glad to find out about it.  Or do I have to wait  
for Leopard?


Thanks.

Lou Wicker

 


| Dr. Louis J. Wicker
| NSSL/WRDD
| National Weather Center
| 120 David L. Boren Boulevard, Norman, OK 73072-7323
|
| E-mail:   [EMAIL PROTECTED]
| HTTP:  www.nssl.noaa.gov/~lwicker
| Phone:(405) 325-6340
| Fax:(405) 325-6780
|
| "Programming is not just creating strings of instructions
| for a computer to execute.  It's also 'literary' in that you
| are trying to communicate a program structure to
| other humans reading the code." - Paul Rubin
|
|"Real efficiency comes from elegant solutions, not optimized programs.
| Optimization is always just a few correctness-preserving  
transformations

| away." - Jonathan Sobel
 


|
| "The contents  of this message are mine personally and
| do not reflect any position of  the Government or NOAA."
|
 






___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Travis Oliphant
Zachary Pincus wrote:

>Hello folks,
>
>I recently was trying to write code to modify an array in-place (so  
>as not to invalidate any references to that array) via the standard  
>python idiom for lists, e.g.:
>
>a[:] = numpy.flipud(a)
>
>Now, flipud returns a view on 'a', so assigning that to 'a[:]'  
>provides pretty strange results as the buffer that is being read (the  
>view) is simultaneously modified. Here is an example:
>  
>

This is a known feature of the "view" concept.   It has been present in 
Numeric from the beginning. Performing operations in-place using a view 
always gives "hard-to-predict" results.  It depends completely on how 
the algorithms are implemented.  

Knowing that numpy.flipud(a) is just a different way to write 
a[::-1,...] (which works for any nested sequence) helps you realize that 
if a is already an array, flipud returns a reversed view; when that view 
is copied back into the array itself, it creates the results you 
obtained but might not have been expecting.
 
You can understand the essence of what is happening with a simpler example:

a = arange(10)
a[:] = a[::-1]

What is a?

It is easy to see the answer when you realize that the code is doing the 
equivalent of

a[0] = a[9]
a[1] = a[8] 
a[2] = a[7] 
a[3] = a[6] 
a[4] = a[5]
a[5] = a[4]
a[6] = a[3]
a[7] = a[2]
a[8] = a[1]
a[9] = a[0]

Notice that the final 5 lines are completely redundant, so really all 
that is happening is

a[:5] = a[5:][::-1]
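
Concretely, assuming the copy loop runs front to back as in the 
expansion above:

 >>> from numpy import arange
 >>> a = arange(10)
 >>> a[:] = a[::-1]
 >>> a
array([9, 8, 7, 6, 5, 5, 6, 7, 8, 9])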

There was an explicit warning of the oddities of this construct in the 
original Numeric documentation.

Better documentation of the flipud function to indicate that it returns 
a view is definitely desirable.  In fact, all functions that return 
views should be clear about this in the docstring.

In addition, all users of "in-place" functionality of NumPy must be 
aware of the view concept and realize that you could be modifying the 
array you are using.

This came up before when somebody asked how to perform a "diff" in place 
and I was careful to make sure and not change the input array before it 
was used.
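
For the record, the safe way to get the in-place effect is to force a 
copy of the view before assigning (a small sketch):

import numpy

a = numpy.arange(10.0)
b = a                           # second reference that must stay valid
a[:] = numpy.flipud(a).copy()   # copy() breaks the overlap with a
assert b[0] == 9.0              # b sees the flipped data; a is b still holds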

>A question, then: Does this represent a bug? Or perhaps there is a  
>better idiom for modifying an array in-place than 'a[:] = ...'? Or is  
>incumbent on the user to ensure that any time an array is directly  
>modified, that the modifying array is not a view of the original array?
>  
>
Yes, it is and has always been incumbent on the user to ensure that any 
time an array is directly modified in-place that the modifying array is 
not a "view" of the original array.

Good example...

-Travis


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Timothy Hochberg

On 2/1/07, Zachary Pincus <[EMAIL PROTECTED]> wrote:

[CHOP]

> I think that this is unquestionably a bug


It's not a bug. It's a design decision. It has certain consequences. Many
good, some bad and some that just take some getting used to.


> -- isn't the point of views that the user shouldn't need to care if a
> particular array object is a view or not?



As you state elsewhere, the issue isn't whether a given object is a view per
se, it's whether the objects that you are operating on refer to the same
block of memory. They could both be views, even of the same object, and as
long as they're disjoint, it's not a problem.

> Given the lack of methods to query whether an array is a view, or what
> it might be a view on, this seems like a reasonable perspective... I
> mean, if certain operations produce completely different results when
> one of the operands is a view, that *seems* like a bug. It might not be
> worth fixing, but I can't see how that behavior would be considered a
> feature.



View semantics are a feature. A powerful and sometime dangerous feature.
Sometimes the consequences of these semantics can bite people, but that
doesn't make them a bug.

[CHOP]




> Good question. As I mentioned above, I assume that this information
> is tracked internally to prevent the 'original' array data from being
> deleted before any views have; however I really don't know how it is
> exposed.



I believe that a reference is held to the original array, so the array
itself won't be deleted even if all of the references to it go away. The
details may be different, but that's the gist of it. Even if you could access
this, it wouldn't really tell you anything useful since two slices could
refer to pieces of the original chunk of data, yet still be disjoint.

If you wanted to be able to figure this out, probably the thing to do is
just to actually look at the block of data occupied by each array and see if
they overlap. I think you could even do this without resorting to C by using
the array interface.
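
Here is one hedged sketch of such a check, built only on the array
interface, strides, and itemsize; it is conservative in that interleaved
but disjoint views still test True. (Later NumPy releases expose
numpy.may_share_memory, which performs essentially this bounds
comparison.)

import numpy

def byte_extent(a):
    # Lowest and highest byte addresses touched by the array's elements.
    lo = hi = a.__array_interface__['data'][0]
    for stride, dim in zip(a.strides, a.shape):
        if dim > 1:
            if stride > 0:
                hi += stride * (dim - 1)
            else:
                lo += stride * (dim - 1)
    return lo, hi + a.itemsize

def may_share_data(a, b):
    # True if the two memory extents overlap.
    lo_a, hi_a = byte_extent(a)
    lo_b, hi_b = byte_extent(b)
    return lo_a < hi_b and lo_b < hi_a

a = numpy.arange(10)
print(may_share_data(a[:5], a[5:]))   # False: disjoint halves
print(may_share_data(a, a[::-1]))     # True: same buffer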

However, I'd like to repeat what my doctor said as a kid when I complained
that "it hurts when I do this":
   "Don't do that!" -- Some Radom Doctor
In other words, I think you'd be better off restructuring your code so that
this isn't an issue. I've been using Numeric/numarray/numpy for over ten
years now, and this has never been a significant issue for me.


--

//=][=\\

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Zachary Pincus
> Zachary Pincus wrote:
>> Hello folks,
>>
>> I recently was trying to write code to modify an array in-place (so
>> as not to invalidate any references to that array)
>
> I'm not sure what this means exactly.

Say one wants to keep two different variables referencing a single
in-memory list, like so:
a = [1,2,3]
b = a
Now, if 'b' and 'a' go to live in different places (different class  
instances or whatever) but we want 'b' and 'a' to always refer to the  
same in-memory object, so that 'id(a) == id(b)', we need to make sure  
to not assign a brand new list to either one.

That is, if we do something like 'a = [i + 1 for i in a]' then
'id(a) != id(b)'. However, we can do 'a[:] = [i + 1 for i in a]' to
modify a in-place. This is not super-common, but it's also not an  
uncommon python idiom.
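
A tiny plain-Python demonstration of the idiom:

a = [1, 2, 3]
b = a
a[:] = [i + 1 for i in a]   # slice assignment mutates the list in place
print(b)                    # [2, 3, 4] -- b sees the change
print(id(a) == id(b))       # True: still the same object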

I was in my email simply pointing out that naïvely translating that  
idiom to the numpy case can cause unexpected behavior in the case of  
views.

I think that this is unquestionably a bug -- isn't the point of  
views that the user shouldn't need to care if a particular array  
object is a view or not? Given the lack of methods to query whether  
an array is a view, or what it might be a view on, this seems like a  
reasonable perspective... I mean, if certain operations produce  
completely different results when one of the operands is a view, that  
*seems* like a bug. It might not be worth fixing, but I can't see how  
that behavior would be considered a feature.

However, I do think there's a legitimate question about whether it  
would be worth fixing -- there could be a lot of complicated checks  
to catch these kind of corner cases.

>> via the standard
>> python idiom for lists, e.g.:
>>
>> a[:] = numpy.flipud(a)
>>
>> Now, flipud returns a view on 'a', so assigning that to 'a[:]'
>> provides pretty strange results as the buffer that is being read (the
>> view) is simultaneously modified.
>
> yes, weird. So why not just:
>
> a = numpy.flipud(a)
>
> Since flipud returns a view, the new "a" will still be using the same
> data array. Does this satisfy your need above?

Nope -- though 'a' and 'numpy.flipud(a)' share the same data, the  
actual ndarray instances are different. This means that any other  
references to the 'a' array (made via 'b = a' or whatever) now refer  
to the old 'a', not the flipped one.
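
Concretely (a sketch assuming only flipud's documented view behavior):

import numpy

a = numpy.arange(10).reshape((5, 2))
b = a                  # a second reference to the same ndarray object
a = numpy.flipud(a)    # rebinds 'a' to a brand-new view object
print(a is b)          # False: b still refers to the unflipped array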

The only other option for sharing arrays is to encapsulate them as  
attributes of *another* object, which itself won't change. That seems  
a bit clumsy.

> It's too bad that to do this you need to know that flipud created a
> view, rather than a copy of the data, as if it were a copy, you would
> need to do the a[:] trick to make sure a kept the same data, but  
> that's
> the price we pay for the flexibility and power of numpy -- the
> alternative is to have EVERYTHING create a copy, but there would be a
> substantial performance hit for that.

Well, Anne's email suggests another alternative -- each time a view  
is created, keep track of the original array from whence it came, and  
then only make a copy when collisions like the above would take place.

And actually, I suspect that views already need to keep a reference  
to their original array in order to keep that array from being  
deleted before the view is. But I don't know the guts of numpy well  
enough to say for sure.

> NOTE: the docstring doesn't make it clear that a view is created:
>
>  >>> help(numpy.flipud)
> Help on function flipud in module numpy.lib.twodim_base:
>
> flipud(m)
>  returns an array with the columns preserved and rows flipped in
>  the up/down direction.  Works on the first dimension of m.
>
> NOTE2: Maybe these kinds of functions should have an optional flag  
> that
> specified whether you want a view or a copy -- I'd have expected a  
> copy
> in this case!

Well, it seems like in most cases one does not need to care whether  
one is looking at a view or an array. The only time that comes to  
mind is when you're attempting to modify the array in-place, e.g.
a[<slice>] = <expression>

Even if the maybe-bug above were easily fixable (again, not sure  
about that), you might *still* want to be able to figure out if a  
were a view before such a modification. Whether this needs a runtime  
'is_view' method, or just consistent documentation about what returns  
a view, isn't clear to me. Certainly the latter couldn't hurt.

> QUESTION:
> How do you tell if two arrays are views on the same data: is  
> checking if
> they have the same .base reliable?
>
>  >>> a = numpy.array((1,2,3,4))
>  >>> b = a.view()
>  >>> a.base is b.base
> False
>
> No, I guess not. Maybe .base should return self if it's the originator
> of the data.
>
> Is there a reliable way? I usually just test by changing a value in  
> one
> to see if it changes in the other, but that's one heck of a kludge!
>
>  >>> a.__array_interface__['data'][0] == b.__array_interface__['data'][0]
> True
>
> seems to work, but that's pretty ugly!

Good question. As I mentioned above, I assume that this information
is tracked internally to prevent the 'original' array data from being
deleted before any views have; however I really don't know how it is
exposed.

Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Christopher Barker
Zachary Pincus wrote:
> Hello folks,
> 
> I recently was trying to write code to modify an array in-place (so  
> as not to invalidate any references to that array)

I'm not sure what this means exactly.

> via the standard  
> python idiom for lists, e.g.:
> 
> a[:] = numpy.flipud(a)
> 
> Now, flipud returns a view on 'a', so assigning that to 'a[:]'  
> provides pretty strange results as the buffer that is being read (the  
> view) is simultaneously modified.

yes, weird. So why not just:

a = numpy.flipud(a)

Since flipud returns a view, the new "a" will still be using the same 
data array. Does this satisfy your need above?

You've created a new numpy array object, but that was created by flipud 
anyway, so there is no performance loss.
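
A quick check of the shared-data claim (writes through the view are
visible through the original name):

import numpy

a = numpy.arange(10).reshape((5, 2))
flipped = numpy.flipud(a)
flipped[0, 0] = 99     # write through the view...
print(a[4, 0])         # ...and 'a' sees 99: same underlying buffer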

It's too bad that to do this you need to know that flipud created a 
view, rather than a copy of the data, as if it were a copy, you would 
need to do the a[:] trick to make sure a kept the same data, but that's 
the price we pay for the flexibility and power of numpy -- the 
alternative is to have EVERYTHING create a copy, but there would be a 
substantial performance hit for that.

NOTE: the docstring doesn't make it clear that a view is created:

 >>> help(numpy.flipud)
Help on function flipud in module numpy.lib.twodim_base:

flipud(m)
 returns an array with the columns preserved and rows flipped in
 the up/down direction.  Works on the first dimension of m.

NOTE2: Maybe these kinds of functions should have an optional flag that 
specified whether you want a view or a copy -- I'd have expected a copy 
in this case!

QUESTION:
How do you tell if two arrays are views on the same data: is checking if 
they have the same .base reliable?

 >>> a = numpy.array((1,2,3,4))
 >>> b = a.view()
 >>> a.base is b.base
False

No, I guess not. Maybe .base should return self if it's the originator 
of the data.
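
For what it's worth, a short interactive sketch (behavior as in later
NumPy releases, where a view's .base points back to its parent while the
owning array's .base is None -- which is exactly why the comparison above
is False):

 >>> a = numpy.array((1,2,3,4))
 >>> b = a.view()
 >>> b.base is a
 True
 >>> a.base is None
 True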

Is there a reliable way? I usually just test by changing a value in one 
to see if it changes in the other, but that's one heck of a kludge!

 >>> a.__array_interface__['data'][0] == b.__array_interface__['data'][0]
True

seems to work, but that's pretty ugly!

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R   (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with installation of numpy: undefined symbols

2007-02-01 Thread Robert Kern
Eric Emsellem wrote:

> - I finally get the svn version of numpy, and do the "python setup.py
> install" (no site.cfg ! but environment variables defined as mentioned
> above)

Show us the output of

  $ cd ~/src/numpy  # or whereever
  $ python setup.py config

Most likely, you are having the same problem you had with scipy. You will
probably need to make a site.cfg with the correct information about ATLAS.
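
A minimal site.cfg sketch for this situation; the section and key names
follow numpy's site.cfg.example, and the paths are assumptions based on
the layout Eric describes, so adjust them to match your install:

[atlas]
library_dirs = /usr/local/lib/atlas
include_dirs = /usr/local/lib/atlas
atlas_libs = lapack, f77blas, cblas, atlas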

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] problem with installation of numpy: undefined symbols

2007-02-01 Thread Eric Emsellem
Hi,

after trying to solve an installation problem with scipy, I had to
reinstall everything from scratch, so I have now turned back to numpy,
whose installation does not work for me (which may in fact explain the
problem I had with scipy).

To be clear on what I do:

- I install blas first, and create a libfblas.a which I copy under
/usr/local/lib
(using a g77 -fno-second-underscore -O2 -c *.f, ar r libfblas.a *.o,
ranlib libfblas.a)

- I then define BLAS environment variable accordingly
(/usr/local/lib/libfblas.a)
- I compile lapack-3.1.0, using "make lapacklib"
- I then use a precompiled ATLAS Linux version P4SSE2 (I tried compiling
it myself, but that gives the same result)
- I copy all ATLAS .a and .h in /usr/local/lib/atlas, and define the
ATLAS variable accordingly
- I then merge the  *.o to have an extended liblapack.a by doing the usual:
 cd /usr/local/lib/atlas
 cp liblapack.a liblapack.a_ATLAS
 mkdir tmp
 cd tmp
 ar x ../liblapack.a_ATLAS
 cp /soft/python/tar/Science_Packages/lapack-3.1.0/lapack_LINUX.a ../liblapack.a
 ar r ../liblapack.a *.o

- I finally get the svn version of numpy, and do the "python setup.py
install" (no site.cfg ! but environment variables defined as mentioned
above)

- I go into ipython, and try:


import numpy

and I get:

exceptions.ImportError   Traceback (most recent call last)

---> 40 import linalg

ImportError:
/usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so:
undefined symbol: ATL_cGetNB

I have tried many ways to avoid this problem but have not managed to
solve it.

Any help is welcome there. (and sorry for those who already saw my
numerous posts on the scipy forum: I am trying to get to the heart of the problem...)

thanks in advance!
Eric
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Anne Archibald
On 01/02/07, Zachary Pincus <[EMAIL PROTECTED]> wrote:

> I recently was trying to write code to modify an array in-place (so
> as not to invalidate any references to that array) via the standard
> python idiom for lists, e.g.:

You can do this, but if your concern is invalidating references, you
might want to think about rearranging your application so you can just
return "new" arrays (that may share elements), if that is possible.

> a[:] = numpy.flipud(a)
>
> Now, flipud returns a view on 'a', so assigning that to 'a[:]'
> provides pretty strange results as the buffer that is being read (the
> view) is simultaneously modified. Here is an example:

> A question, then: Does this represent a bug? Or perhaps there is a
> better idiom for modifying an array in-place than 'a[:] = ...'? Or is it
> incumbent on the user to ensure that any time an array is directly
> modified, that the modifying array is not a view of the original array?

It's the user's job to keep them separate. Sorry. If you're worried -
say if it's an array you don't have much control over (so it might
share elements without you knowing), you can either return a new
array, or if you must modify it in place, copy the right-hand side
before using it (a[:]=flipud(a).copy()).
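
That is, a sketch of the safe version of the original idiom:

import numpy

a = numpy.arange(10).reshape((5, 2))
a[:] = numpy.flipud(a).copy()   # the copy() breaks the overlap
print(a)                        # every row flipped, as intended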

It would in principle be possible for numpy to provide a function that
tells you if two arrays might share data (simply compare the pointer
to the malloc()ed storage and ignore strides and offset; a bit
conservative but probably Good Enough, though a bit more cleverness
should let one get the Right Answer efficiently).
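
A rough pure-Python sketch of that idea, chasing .base links back to the
objects that own the memory (deliberately conservative: two disjoint
slices of one array still test True):

import numpy

def owns_same_storage(a, b):
    # Walk back to the arrays that actually own their data, then
    # compare identity; strides and offsets are ignored.
    while getattr(a, 'base', None) is not None:
        a = a.base
    while getattr(b, 'base', None) is not None:
        b = b.base
    return a is b

a = numpy.arange(10)
print(owns_same_storage(a[:5], a[5:]))          # True (conservative)
print(owns_same_storage(a, numpy.arange(10)))   # False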

Anne M. Archibald
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] array copy-to-self and views

2007-02-01 Thread Zachary Pincus
Hello folks,

I recently was trying to write code to modify an array in-place (so  
as not to invalidate any references to that array) via the standard  
python idiom for lists, e.g.:

a[:] = numpy.flipud(a)

Now, flipud returns a view on 'a', so assigning that to 'a[:]'  
provides pretty strange results as the buffer that is being read (the  
view) is simultaneously modified. Here is an example:

In [2]: a = numpy.arange(10).reshape((5,2))
In [3]: a
Out[3]:
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])
In [4]: numpy.flipud(a)
Out[4]:
array([[8, 9],
       [6, 7],
       [4, 5],
       [2, 3],
       [0, 1]])
In [5]: a[:] = numpy.flipud(a)
In [6]: a
Out[6]:
array([[8, 9],
       [6, 7],
       [4, 5],
       [6, 7],
       [8, 9]])

A question, then: Does this represent a bug? Or perhaps there is a  
better idiom for modifying an array in-place than 'a[:] = ...'? Or is it
incumbent on the user to ensure that any time an array is directly
modified, that the modifying array is not a view of the original array?

Thanks for any thoughts,

Zach Pincus

Program in Biomedical Informatics and Department of Biochemistry
Stanford University School of Medicine

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion