Re: [Numpy-discussion] possibly ctypes related segfault

2007-08-29 Thread Martin Wiechert
Hmpf. Anyway, thanks again, Stefan!

Cheers, Martin


On Tuesday 28 August 2007 18:45, Stefan van der Walt wrote:
 On Tue, Aug 28, 2007 at 02:03:52PM +0200, Martin Wiechert wrote:
  Here's the test script. I'm using it via execfile from an interactive
  session, so I can inspect (and crash with readline) afterwards.
 
  Here's how I compiled:
  gcc
  solver.c -fPIC -ggdb -shared -llapack -lf77blas -lcblas -latlas
  -lgfortran -o librectify.so

 It works perfectly on the two Linux machines I tried (32-bit and
 64-bit).  Maybe your lapack isn't healthy?

 Cheers
 Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Citing Numeric and numpy

2007-08-29 Thread Alan G Isaac
On Wed, 29 Aug 2007, Peter apparently wrote:
 I would like to know if there is a preferred form for 
 citing the old
 Numeric library

I'll attach text from the first two pages of *Numerical 
Python* below.

Cheers,
Alan Isaac

-

An Open Source Project

Numerical Python

David Ascher
Paul F. Dubois
Konrad Hinsen
Jim Hugunin
Travis Oliphant
with contributions from the Numerical Python community.
September 7, 2001
Lawrence Livermore National Laboratory, Livermore, CA 94566
UCRL-MA-128569

Legal Notice
Please see file Legal.html in the source distribution.

This open source project has been contributed to by many people, including
personnel of the Lawrence Livermore National Laboratory. The following notice
covers those contributions including this manual.
Copyright (c) 1999, 2000, 2001. The Regents of the University of California.
All rights reserved.
Permission to use, copy, modify, and distribute this software for any purpose
without fee is hereby granted, provided that this entire notice is included
in all copies of any software which is or includes a copy or modification of
this software and in all copies of the supporting documentation for such
software.
This work was produced at the University of California, Lawrence Livermore
National Laboratory under contract no. W-7405-ENG-48 between the U.S.
Department of Energy and The Regents of the University of California for the
operation of UC LLNL.




[Numpy-discussion] Trac ticket

2007-08-29 Thread Joris De Ridder
Hi,

Perhaps a stupid question, but I don't seem to find any info about it  
on the web.
I would like to take up a (simple) Numpy Trac ticket and fix it in
the Numpy trunk. How can I assign the ticket to myself? After logging
in, I don't see any obvious way of doing this. Secondly, committing a
fix back to the SVN repository seems to require a specific login/pw;
how do I get one (assuming my fix is welcome)?

Cheers,
Joris


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm



Re: [Numpy-discussion] Trac ticket

2007-08-29 Thread Steve Lianoglou
 Perhaps a stupid question, but I don't seem to find any info about it
 on the web.
 I would like to take up a (simple) Numpy Trac ticket, and fix it in
 the Numpy trunk. How can I assign the ticket to myself?

I'm not sure how the trac system is set up at numpy, but you may not
have the perms to do that yourself.

Perhaps you can add a comment to the ticket saying that you are
working on it (an expected completion date may be helpful).

 After logging
 in, I don't see any obvious way of doing this. Secondly, committing a
 fix back to the SVN repository seems to require a specific login/pw,
 how to get one (assuming my fix is welcome)?

You should most likely just attach a patch against the latest trunk  
to the ticket itself for review.

-steve


[Numpy-discussion] Bug in resize method?

2007-08-29 Thread Charles R Harris
Hi all,

This looks like a bug to me.

>>> a = arange(6).reshape(2,3)
>>> a.resize((3,3))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot resize this array: it does not own its data

Is there any reason resize should fail in this case? Resize should be
returning a new array, no? There are several other things that
look like bugs in this method, for instance:

>>> a = arange(6).resize((2,3))
>>> a

`a` has no value and no error is raised.

The resize function works as expected:

>>> resize(a,(3,3))
array([[0, 1, 2],
       [3, 4, 5],
       [0, 1, 2]])

Chuck


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Charles R Harris
On 8/29/07, Stefan van der Walt [EMAIL PROTECTED] wrote:

 Hi Charles

 On Wed, Aug 29, 2007 at 09:42:50AM -0600, Charles R Harris wrote:
  Hi all,
 
  This looks like a bug to me.
 
   >>> a = arange(6).reshape(2,3)
   >>> a.resize((3,3))
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: cannot resize this array: it does not own its data

 From the docstring of a.resize:

 Change size and shape of self inplace.  Array must own its own memory
 and not be referenced by other arrays.  Returns None.


The documentation is bogus:

>>> a = arange(6).reshape(2,3)
>>> a
array([[0, 1, 2],
       [3, 4, 5]])
>>> a.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
>>> a.resize((3,2))
>>> a
array([[0, 1],
       [2, 3],
       [4, 5]])


The reshaped array is a view on the original data, hence it doesn't
 own it:

 In [15]: a = N.arange(6).reshape(2,3)

 In [16]: a.flags
 Out[16]:
   C_CONTIGUOUS : True
   F_CONTIGUOUS : False
   OWNDATA : False
   WRITEABLE : True
   ALIGNED : True
   UPDATEIFCOPY : False

   >>> a = arange(6).resize((2,3))
   >>> a
 
  `a` has no value and no error is raised.

 It is because `a` is now None.


This behaviour doesn't match documentation elsewhere, which is why I am
raising the question. What *should* the resize method do? It looks like it
is equivalent to assigning a shape tuple to a.shape, so why do we need it?
Apart from that, the reshape method looks like it would serve for most
cases.
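To make the comparison concrete, here is a small sketch (not from the original thread) showing that assigning to `.shape` rearranges the same buffer in place, while `reshape` hands back a view that does not own its data — which is exactly the condition the resize method trips over:

```python
import numpy as np

a = np.arange(6)

# Assigning to .shape rearranges the existing buffer in place.
a.shape = (2, 3)

# reshape instead returns a view onto the same buffer; the view
# does not own its data, so view.resize() would refuse to work.
b = a.reshape(3, 2)
```

Checking `b.base is a` and `b.flags['OWNDATA']` confirms the view relationship.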

Chuck


Re: [Numpy-discussion] linear algebra error?

2007-08-29 Thread Matthieu Brucher


  >>> from numpy import *
  >>> from numpy.linalg import *


linalg is in the numpy module


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Christopher Barker
Charles R Harris wrote:
 What *should* the resize method do? It looks like 
 it is equivalent to assigning a shape tuple to a.shape,

No, that's what reshape does.

  so why do we need it?

resize() will change the SIZE of the array (number of elements), where 
reshape() will only change the shape, but not the number of elements. 
The fact that the size is changing is why it won't work if it doesn't 
own the data.

>>> a = N.array((1,2,3))
>>> a.reshape((6,))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged

You can't reshape to a shape with a different size.

>>> b = a.resize((6,))
>>> repr(b)
'None'

resize changes the array in place, so it returns None, but a has been 
changed:

>>> a
array([1, 2, 3, 0, 0, 0])

Perhaps you want the function, rather than the method:

>>> b = N.resize(a, (12,))
>>> b
array([1, 2, 3, 0, 0, 0, 1, 2, 3, 0, 0, 0])

>>> a
array([1, 2, 3, 0, 0, 0])

a hasn't been changed; b is a brand new array.

-CHB



  Apart from that, the reshape method looks like it would serve
 for most cases.
 
 Chuck
 
 
 
 

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]


[Numpy-discussion] Trace returns int32 type for int8 array.

2007-08-29 Thread Charles R Harris
Hi all,

The documentation of trace says it returns the same type as the array. Yet:

>>> trace(eye(2, dtype=int8)).dtype
dtype('int32')

For float types this promotion does not occur:

>>> trace(eye(2, dtype=float32)).dtype
dtype('float32')


Trace operates the same way as sum. What should be the case here? And if
type promotion is the default, shouldn't float32 be promoted to double?
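The promotion Chuck describes is easy to verify with a quick check (a sketch, not from the original thread; note that on modern 64-bit builds the default integer is int64 rather than the int32 shown above, but the promotion pattern is the same):

```python
import numpy as np

# trace, like sum, accumulates small integer types in the platform
# default integer, while float32 is left unpromoted.
int_trace = np.trace(np.eye(2, dtype=np.int8))      # dtype widens
flt_trace = np.trace(np.eye(2, dtype=np.float32))   # dtype preserved
```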

Chuck


Re: [Numpy-discussion] linear algebra error?

2007-08-29 Thread Robert Kern
F Bitonti wrote:

 However, I was told that the reason I am getting this error is because
 the linear algebra module is already installed

That's not the reason why you are getting the error; it's just that you don't
need to, and shouldn't try to, execute that setup.py since it's already
installed. The more proximate reason that you are getting that particular
traceback is that you don't have a compiler installed. However, that's not
relevant here since numpy.linalg is already installed.

 yet it doesn't seem to be,
 because when I execute the following commands I get these error
 messages. Am I doing something wrong? I have only been using Python for two days.
 
 >>> from numpy import *
 >>> from linalg import *

>>> from numpy.linalg import *
>>> inv(...)

Or preferably:

>>> from numpy import linalg
>>> linalg.inv(...)

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Charles R Harris
On 8/29/07, Christopher Barker [EMAIL PROTECTED] wrote:

 Charles R Harris wrote:
  What *should* the resize method do? It looks like
  it is equivalent to assigning a shape tuple to a.shape,

 No, that's what reshape does.


No, reshape returns a view and the view doesn't own its data. Totally
different behavior in this context.

 so why do we need it?

 resize() will change the SIZE of the array (number of elements), where
 reshape() will only change the shape, but not the number of elements.
 The fact that the size is changing is why it won't work if it doesn't
 own the data.


According to the documentation, the resize method changes the array inplace.
How can it be inplace if the number of elements changes? Admittedly, it
*will* change the size, but that is not consistent with the documentation. I
suspect it reallocates memory and (hopefully) frees the old, but then that
is what the documentation should say because it explains why the data must
be owned -- a condition violated in some cases as demonstrated above. I am
working on documentation and that is why I am raising these questions. There
seem to be some inconsistencies that need clarification and/or fixing.
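The reallocate-and-zero-fill behaviour suspected here is straightforward to observe (a sketch, not from the original thread; `refcheck=False` is passed only so the demonstration works regardless of outstanding references):

```python
import numpy as np

a = np.arange(6)
a.resize((12,), refcheck=False)  # may realloc the buffer under the covers

# The original data is preserved and the new tail is zero-filled:
# a is now array([0, 1, 2, 3, 4, 5, 0, 0, 0, 0, 0, 0])
```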

Chuck


[Numpy-discussion] svn down

2007-08-29 Thread Charles R Harris
Hi all,

The svn server seems to be down, I am getting error messages from the
buildbots:

svn: PROPFIND request failed on '/svn/numpy/trunk'
svn: PROPFIND of '/svn/numpy/trunk': could not connect to server
(http://scipy.org)
program finished with exit code 1

It might be reasonable to check this case before sending posts.

Chuck


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Charles R Harris
On 8/29/07, Timothy Hochberg [EMAIL PROTECTED] wrote:



 On 8/29/07, Charles R Harris [EMAIL PROTECTED] wrote:
 
 
 
  On 8/29/07, Christopher Barker  [EMAIL PROTECTED] wrote:
  
   Charles R Harris wrote:
What *should* the resize method do? It looks like
it is equivalent to assigning a shape tuple to a.shape,
  
   No, that's what reshape does.
 
 
  No, reshape returns a view and the view doesn't own its data. Totally
  different behavior in this context.
 
   so why do we need it?
  
   resize() will change the SIZE of the array (number of elements), where
   reshape() will only change the shape, but not the number of elements.
   The fact that the size is changing is why it won't work if it doesn't
   own the data.
 
 
  According to the documentation, the resize method changes the array
  inplace. How can it be inplace if the number of elements changes?
 

 It sounds like you and Chris are talking past each other on a matter of
 terminology. At a C-level, it's obviously not (necessarily) in place, since
 the array may get realloced as you surmise below. However, at the Python
 level, the change is in fact in place, in the same sense that appending to a
 Python list operates in-place, even though under the covers memory may get
 realloced there as well.
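The list analogy can be made concrete (a small sketch, not from the original thread): the Python-level object identity survives appends even though CPython may reallocate the underlying buffer many times.

```python
lst = []
ident = id(lst)
for i in range(100000):
    lst.append(i)   # the backing buffer may be realloc'ed repeatedly

# At the Python level it is still "the same list" throughout.
same_object = (id(lst) == ident)
```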


  Admittedly, it *will* change the size, but that is not consistent with
  the documentation. I suspect it reallocates memory and (hopefully) frees the
  old, but then that is what the documentation should say because it explains
  why the data must be owned -- a condition violated in some cases as
  demonstrated above. I am working on documentation and that is why I am
  raising these questions. There seem to be some inconsistencies that need
  clarification and/or fixing.
 

 The main inconsistency I see above is that resize appears to only require
 ownership of the data if in fact the number of items changes. I don't think
 that's actually a bug, but I don't like it much; I would prefer that resize
 be strict and always require ownership. However, I'm fairly certain that
 there are people that prefer friendliness over consistency, so I wouldn't
 be surprised to get some pushback on changing that.


I still don't see why the method is needed at all. Given the conditions on
the array, the only thing it buys you over the resize function or a reshape
is the automatic deletion of the old memory if new memory is allocated. And
the latter is easily done as a = reshape(a, new_shape). I know there was a
push to make most things methods, but it is possible to overdo it. Is this a
Numarray compatibility issue?

Chuck


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Timothy Hochberg
On 8/29/07, Anne Archibald [EMAIL PROTECTED] wrote:

 On 29/08/2007, Timothy Hochberg [EMAIL PROTECTED] wrote:

  The main inconsistency I see above is that resize appears to only
 require
  ownership of the data if in fact the number of items changes. I don't
 think
  that's actually a bug, but I don't like it much; I would prefer that
 resize
  be strict and always require ownership. However, I'm fairly certain that
  there are people that prefer friendliness over consistency, so I
 wouldn't
  be surprised to get some pushback on changing that.

 It seems to me like inplace resize is a problem, no matter how you
 implement it --- is there any way to verify that no view exists of a
 given array? (refcounts won't do it since there are other, non-view
 ways to increase the refcount of an array.)


I think that may be overstating the problem a bit; refcounts should work in
the sense that they would prevent segfaults. They'll just be too
conservative in many cases, preventing resizes in cases where they would
otherwise work.


 If there's a view of an
 array, you resize() it in place, and realloc() moves the data, the
 views now point to bogus memory: you can cause the python interpreter
 to segfault by addressing their contents. I really can't see any way
 around this; why not remove inplace resize() (or make it raise
 exceptions if the size has to change) and allow only the function
 resize()?


Probably because in a few cases, it's vastly more efficient to realloc the
data than to copy it.

FWIW, I don't use either the resize function or the resize method, but if I
was going to get rid of one, personally I'd axe the function. Resizing is a
confusing operation and the function doesn't have the possibility of better
efficiency to justify its existence.


-- 
.  __
.   |-\
.
.  [EMAIL PROTECTED]


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Robert Kern
Anne Archibald wrote:
 On 29/08/2007, Timothy Hochberg [EMAIL PROTECTED] wrote:
 
 The main inconsistency I see above is that resize appears to only require
 ownership of the data if in fact the number of items changes. I don't think
 that's actually a bug, but I don't like it much; I would prefer that resize
 be strict and always require ownership. However, I'm fairly certain that
 there are people that prefer friendliness over consistency, so I wouldn't
 be surprised to get some pushback on changing that.
 
 It seems to me like inplace resize is a problem, no matter how you
 implement it --- is there any way to verify that no view exists of a
 given array? (refcounts won't do it since there are other, non-view
 ways to increase the refcount of an array.)

Yes, as long as every view is created using the C API correctly. That's why
Chuck saw the exception he did: he tried to resize() an array that had a view
taken of it (or rather, he was trying to resize() the view, which didn't
have ownership of the data).


In [8]: from numpy import *

In [9]: a = zeros(10)

In [10]: a.resize(15)

In [11]: b = a[:]

In [12]: a.resize(20)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/Users/rkern/src/VTK-5.0.2/<ipython console> in <module>()

ValueError: cannot resize an array that has been referenced or is referencing
another array in this way.  Use the resize function


Of course, if you muck around with the raw data pointer using ctypes, you might
have problems, but that's ctypes for you.

-- 
Robert Kern



Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Gael Varoquaux
On Wed, Aug 29, 2007 at 11:31:12AM -0700, Timothy Hochberg wrote:
FWIW, I don't use either the resize function or the resize method, but if
I was going to get rid of one, personally I'd axe the function. Resizing
is a confusing operation and the function doesn't have the possibility of
better efficiency to justify it's existence.

My understanding of OOP is that I expect a method to modify an object in
place, and a function to return a new object (or a view).

Now this is not true with Python, as some objects are immutable and this
is not possible, but at least there seems to be some logic that a method
returns a new object only if the object is immutable.

With numpy I often fail to see the logic, but I'd love to see one.
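For what it's worth, core Python does follow the convention Gaël describes; a minimal illustration (not from the original thread):

```python
# In-place method: mutates the object and returns None.
lst = [3, 1, 2]
result = lst.sort()     # result is None; lst is now sorted

# Function: returns a new object and leaves the input untouched.
src = [3, 1, 2]
out = sorted(src)       # out is a new sorted list; src is unchanged
```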

Gaël


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Timothy Hochberg
On 8/29/07, Charles R Harris [EMAIL PROTECTED] wrote:


 I still don't see why the method is needed at all. Given the conditions on
 the array, the only thing it buys you over the resize function or a reshape
 is the automatic deletion of the old memory if new memory is allocated.


Can you explain this more? Both you and Anne seem to share the opinion that
the resize method is useless, while the resize function is useful. So, now
I'm worried I'm missing something since as far as I can tell the function is
useless and the method is only mostly useless.


 And the latter is easily done as a = reshape(a, new_shape). I know there
 was a push to make most things methods,


In general I think methods are easy to overdo, but I'm not on board for this
particular case.

but it is possible to overdo it. Is this a Numarray compatibility issue?


Dunno about that.




Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Charles R Harris
On 8/29/07, Timothy Hochberg [EMAIL PROTECTED] wrote:



 On 8/29/07, Charles R Harris [EMAIL PROTECTED] wrote:

 
  I still don't see why the method is needed at all. Given the conditions
  on the array, the only thing it buys you over the resize function or a
  reshape is the automatic deletion of the old memory if new memory is
  allocated.
 

 Can you explain this more? Both you and Anne seem to share the opinion
 that the resize method is useless, while the resize function is useful. So,
 now I'm worried I'm missing something since as far as I can tell the
 function is useless and the method is only mostly useless.


Heh. I might dump both. The resize function is a concatenation followed by a
reshape. It differs from the resize method in that it always returns a new
array and repeats the data instead of filling with zeros. The inconsistency
in the way the array is filled bothers me a bit; I would have just named
the method realloc. I really don't see the need for either except for
backward compatibility. Maybe someone can make a case.

Chuck


Re: [Numpy-discussion] Bug in resize method?

2007-08-29 Thread Eric Firing
Timothy Hochberg wrote:
 
 
 On 8/29/07, *Charles R Harris* [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 
 I still don't see why the method is needed at all. Given the
 conditions on the array, the only thing it buys you over the resize
 function or a reshape is the automatic deletion of the old memory if
 new memory is allocated. 
 
 
 Can you explain this more? Both you and Anne seem to share the opinion 
 that the resize method is useless, while the resize function is useful. 
 So, now I'm worried I'm missing something since as far as I can tell the 
 function is useless and the method is only mostly useless.

The resize function docstring makes the following distinction:

Definition: numpy.resize(a, new_shape)
Docstring:
 Return a new array with the specified shape.

 The original array's total size can be any size.  The new array is
 filled with repeated copies of a.

 Note that a.resize(new_shape) will fill the array with 0's beyond
 current definition of a.

So the method and the function are subtly different.  As far as I can 
see, the method is causing more trouble than it is worth.  Under what 
circumstances, in real code, can it provide enough benefit to override 
the penalty it is now exacting in confusion?

Eric


[Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
Hi
When I try
>>> import numpy
>>> id(numpy.dot) == id(numpy.core.multiarray.dot)

I get True. But I have liblapack.a installed in ~/lib and I put the lines
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include

in site.cfg
In fact, when I build and run a system trace I see that liblapack.a is 
being accessed.

Any ideas?
Mathew



Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Robert Kern
Mathew Yeates wrote:
 Hi
 When I try
 import numpy
 id(numpy.dot) == id(numpy.core.multiarray.dot)
 
 I get True. But I have liblapack.a installed in ~/lib and I put the lines
 [DEFAULT]
 library_dirs = /home/myeates/lib
 include_dirs = /home/myeates/include
 
 in site.cfg
 In fact, when I build and run a system trace I see that liblapack.a is 
 being accessed.
 
 Any ideas?

It is possible that you have a linking problem with _dotblas.so. On some
systems, such a problem will only manifest itself at run-time, not build-time.
At runtime, you will get an ImportError, which we catch because that's also the
error one gets if the _dotblas is legitimately absent.

Try importing _dotblas by itself to see the error message.


In [8]: from numpy.core import _dotblas


Most likely you are missing the appropriate libblas, too, since you don't
mention it.

-- 
Robert Kern



Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
yes, I get
from numpy.core import _dotblas
ImportError: No module named multiarray

Now what?
uname -a
Linux 2.6.9-55.0.2.EL #1 Tue Jun 12 17:47:10 EDT 2007 i686 athlon i386 
GNU/Linux


Robert Kern wrote:
 Mathew Yeates wrote:
   
 Hi
 When I try
 import numpy
 id(numpy.dot) == id(numpy.core.multiarray.dot)

 I get True. But I have liblapack.a installed in ~/lib and I put the lines
 [DEFAULT]
 library_dirs = /home/myeates/lib
 include_dirs = /home/myeates/include

 in site.cfg
 In fact, when I build and run a system trace I see that liblapack.a is 
 being accessed.

 Any ideas?
 

 It is possible that you have a linking problem with _dotblas.so. On some
 systems, such a problem will only manifest itself at run-time, not build-time.
 At runtime, you will get an ImportError, which we catch because that's also 
 the
 error one gets if the _dotblas is legitimately absent.

 Try importing _dotblas by itself to see the error message.


 In [8]: from numpy.core import _dotblas


 Most likely you are missing the appropriate libblas, too, since you don't
 mention it.

   




Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Robert Kern
Mathew Yeates wrote:
 yes, I get
 from numpy.core import _dotblas
 ImportError: No module named multiarray

That's just weird. Can you import numpy.core.multiarray by itself?

-- 
Robert Kern



Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
yes

Robert Kern wrote:
 Mathew Yeates wrote:
   
 yes, I get
 from numpy.core import _dotblas
 ImportError: No module named multiarray
 

 That's just weird. Can you import numpy.core.multiarray by itself?

   




Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
oops. sorry
from numpy.core import _dotblas
ImportError: 
/home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
undefined symbol: cblas_zaxpy


Robert Kern wrote:
 Mathew Yeates wrote:
   
 yes, I get
 from numpy.core import _dotblas
 ImportError: No module named multiarray
 

 That's just weird. Can you import numpy.core.multiarray by itself?

   




Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
my site.cfg is just
[DEFAULT]
library_dirs = /home/myeates/lib
include_dirs = /home/myeates/include

python setup.py config gives
F2PY Version 2_3979
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /home/myeates/lib
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /home/myeates/lib
  NOT AVAILABLE

blas_info:
  FOUND:
libraries = ['blas']
library_dirs = ['/home/myeates/lib']
language = f77

  FOUND:
libraries = ['blas']
library_dirs = ['/home/myeates/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /home/myeates/lib
  NOT AVAILABLE

  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
  libraries lapack_atlas not found in /home/myeates/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /home/myeates/lib
  libraries lapack_atlas not found in /home/myeates/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

lapack_info:
  FOUND:
libraries = ['lapack']
library_dirs = ['/home/myeates/lib']
language = f77

  FOUND:
libraries = ['lapack', 'blas']
library_dirs = ['/home/myeates/lib']
define_macros = [('NO_ATLAS_INFO', 1)]
language = f77

running config


Robert Kern wrote:
 Mathew Yeates wrote:
   
 oops. sorry
 from numpy.core import _dotblas
 ImportError: 
 /home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
 undefined symbol: cblas_zaxpy
 

 Okay, yes, that's the problem. liblapack depends on libblas. Make sure that 
 you
 specify one to use. Follow the directions in site.cfg.example. If you need 
 more
 help, please tell us what libraries you are using, your full site.cfg and the
 output of

   $ python setup.py config

   




Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
more info. My BLAS library has zaxpy defined but not cblas_zaxpy

Mathew Yeates wrote:
 my site.cfg is just
 [DEFAULT]
 library_dirs = /home/myeates/lib
 include_dirs = /home/myeates/include

 python setup.py config gives
 F2PY Version 2_3979
 blas_opt_info:
 blas_mkl_info:
   libraries mkl,vml,guide not found in /home/myeates/lib
   NOT AVAILABLE

 atlas_blas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
   NOT AVAILABLE

 atlas_blas_info:
   libraries f77blas,cblas,atlas not found in /home/myeates/lib
   NOT AVAILABLE

 blas_info:
   FOUND:
 libraries = ['blas']
 library_dirs = ['/home/myeates/lib']
 language = f77

   FOUND:
 libraries = ['blas']
 library_dirs = ['/home/myeates/lib']
 define_macros = [('NO_ATLAS_INFO', 1)]
 language = f77

 lapack_opt_info:
 lapack_mkl_info:
 mkl_info:
   libraries mkl,vml,guide not found in /home/myeates/lib
   NOT AVAILABLE

   NOT AVAILABLE

 atlas_threads_info:
 Setting PTATLAS=ATLAS
   libraries ptf77blas,ptcblas,atlas not found in /home/myeates/lib
   libraries lapack_atlas not found in /home/myeates/lib
 numpy.distutils.system_info.atlas_threads_info
   NOT AVAILABLE

 atlas_info:
   libraries f77blas,cblas,atlas not found in /home/myeates/lib
   libraries lapack_atlas not found in /home/myeates/lib
 numpy.distutils.system_info.atlas_info
   NOT AVAILABLE

 lapack_info:
   FOUND:
 libraries = ['lapack']
 library_dirs = ['/home/myeates/lib']
 language = f77

   FOUND:
 libraries = ['lapack', 'blas']
 library_dirs = ['/home/myeates/lib']
 define_macros = [('NO_ATLAS_INFO', 1)]
 language = f77

 running config


 Robert Kern wrote:
   
 Mathew Yeates wrote:
   
 
 oops. sorry
 from numpy.core import _dotblas
 ImportError: 
 /home/myeates/lib/python2.5/site-packages/numpy/core/_dotblas.so: 
 undefined symbol: cblas_zaxpy
 
   
 Okay, yes, that's the problem. liblapack depends on libblas. Make sure that 
 you
 specify one to use. Follow the directions in site.cfg.example. If you need 
 more
 help, please tell us what libraries you are using, your full site.cfg and the
 output of

   $ python setup.py config

   
 






Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Robert Kern
If your BLAS is just the reference BLAS, don't bother with _dotblas. It won't be
any faster than the default implementation in numpy. You only get a win if you
are using an accelerated BLAS with the CBLAS interface for C-style row-major
matrices. Your libblas does not seem to be such an accelerated BLAS.
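A quick way to check which BLAS numpy actually linked against (a generic sanity check, not specific to this build) is numpy's own build-info report; dot() gives the same answer either way, only the speed differs:

```python
import numpy as np

# Report which BLAS/LAPACK libraries numpy was built against; an
# accelerated BLAS (ATLAS, MKL, OpenBLAS, ...) shows up here, while a
# build against the plain reference BLAS typically does not.
np.show_config()

# The result of dot() is identical whichever BLAS is linked -- only
# the speed differs:
a = np.arange(6.0).reshape(2, 3)
b = np.arange(6.0).reshape(3, 2)
print(np.dot(a, b))   # [[10. 13.] [28. 40.]]
```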

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
I'm the one who created libblas.a so I must have done something wrong. 
This is lapack-3.1.1.


Robert Kern wrote:
 If your BLAS is just the reference BLAS, don't bother with _dotblas. It won't be
 any faster than the default implementation in numpy. You only get a win if you
 are using an accelerated BLAS with the CBLAS interface for C-style row-major
 matrices. Your libblas does not seem to be such an accelerated BLAS.

   




Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Robert Kern
Mathew Yeates wrote:
 I'm the one who created libblas.a so I must have done something wrong. 
 This is lapack-3.1.1.

No, you didn't do anything wrong, per se; you just built the reference F77 BLAS.
It's not an accelerated BLAS, so there's no point in using it with numpy.
There's no way you *can* build it to be an accelerated BLAS.

If you want an accelerated BLAS, try to use ATLAS:

  http://math-atlas.sourceforge.net/

It is possible that your Linux distribution, whatever it is, already has a build
of it for you.

-- 
Robert Kern



Re: [Numpy-discussion] help! not using lapack

2007-08-29 Thread Mathew Yeates
Thanks Robert
I have a deadline and don't have time to install ATLAS. Instead I'm 
installing clapack. Is this the correct thing to do?

Mathew

Robert Kern wrote:
 Mathew Yeates wrote:
   
 I'm the one who created libblas.a so I must have done something wrong. 
 This is lapack-3.1.1.
 

 No, you didn't do anything wrong, per se; you just built the reference F77 
 BLAS.
 It's not an accelerated BLAS, so there's no point in using it with numpy.
 There's no way you *can* build it to be an accelerated BLAS.

 If you want an accelerated BLAS, try to use ATLAS:

   http://math-atlas.sourceforge.net/

 It is possible that your Linux distribution, whatever it is, already has a 
 build
 of it for you.

   




[Numpy-discussion] gesdd hangs

2007-08-29 Thread Mathew Yeates
I guess I can't blame lapack. My system has atlas so I recompiled numpy 
pointing to atlas. Now

id(numpy.dot) == id(numpy.core.multiarray.dot) is False

However, when I run decomp.svd on a 25 by 25 identity matrix, it hangs when 
gesdd is called (line 501 of linalg/decomp.py).

Anybody else seeing this?

Mathew






[Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Anne Archibald
Hi,

numpy's Fourier transforms have the handy feature of being able to
upsample and downsample signals; for example the documentation cites
irfft(rfft(A),16*len(A)) as a way to get a Fourier interpolation of A.
However, there is a peculiarity with the way numpy handles the
highest-frequency coefficient.

First of all, the normalization:

In [65]: rfft(cos(2*pi*arange(8)/8.))
Out[65]:
array([ -3.44505240e-16 +0.e+00j,
 4.e+00 -1.34392280e-15j,
 1.22460635e-16 -0.e+00j,
-1.16443313e-16 -8.54080261e-16j,   9.95839695e-17 +0.e+00j])

In [66]: rfft(cos(2*4*pi*arange(8)/8.))
Out[66]: array([ 0.+0.j,  0.+0.j,  0.-0.j,  0.+0.j,  8.+0.j])

So a cosine signal gives 0.5*N if its frequency F satisfies 0 < F < N/2, but N
if its frequency is N/2 (or zero). This is fine; it's the way
Fourier transforms work.

Now, suppose we take a signal whose Fourier transform is [0,0,0,0,1]:

In [67]: n=8; irfft([0,0,0,0,1],n)[0]*n
Out[67]: 1.0

In [68]: n=16; irfft([0,0,0,0,1],n)[0]*n
Out[68]: 2.0

In [69]: n=32; irfft([0,0,0,0,1],n)[0]*n
Out[69]: 2.0

In [70]: n=64; irfft([0,0,0,0,1],n)[0]*n
Out[70]: 2.0

The value at zero - a non-interpolated point - changes when you interpolate!

Similarly, if we are reducing the number of harmonics:

In [71]: n=8; irfft([0,0,1,0,0],n)[0]*n
Out[71]: 2.0

In [72]: n=4; irfft([0,0,1,0,0],n)[0]*n
Out[72]: 1.0


The upshot is, if I correctly understand what is going on, that the
last coefficient needs to be treated somewhat differently from the
others; when one pads with zeros in order to upsample the signal, one
should multiply the last coefficient by  0.5. Should this be done in
numpy's upsampling code? Should it at least be documented?
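One way to implement the adjustment described above (a sketch, not numpy's own code; irfft_pad is a hypothetical helper) is to halve the last coefficient whenever the padded length turns the old Nyquist bin into an interior frequency:

```python
import numpy as np

def irfft_pad(spec, n):
    # Sketch of the proposed fix (irfft_pad is an invented helper, not
    # numpy API): when zero-padding an rfft spectrum of an even-length
    # signal, the old Nyquist bin becomes an interior frequency and
    # must be split between +/- frequencies, i.e. scaled by 0.5.
    spec = np.asarray(spec, dtype=complex).copy()
    old_n = 2 * (len(spec) - 1)   # original (even) signal length
    if n > old_n:
        spec[-1] *= 0.5
    return np.fft.irfft(spec, n)

# The value at t=0 (a non-interpolated point) now stays fixed, up to
# the 1/n normalization, as the interpolation length grows:
for n in (8, 16, 32, 64):
    print(n, irfft_pad([0, 0, 0, 0, 1], n)[0] * n)   # always 1.0
```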

Thanks,
Anne M. Archibald


Re: [Numpy-discussion] gesdd hangs

2007-08-29 Thread Charles R Harris
On 8/29/07, Mathew Yeates [EMAIL PROTECTED] wrote:

 I guess I can't blame lapack. My system has atlas so I recompiled numpy
 pointing to atlas. Now

 id(numpy.dot) == id(numpy.core.multiarray.dot) is False

 However when I run decomp.svd on a 25 by 25 identity matrix, it hangs when
 gesdd is called (line 501 of linalg/decomp.py)

 Anybody else seeing this?


What do you mean by hang?

Chuck


Re: [Numpy-discussion] gesdd hangs

2007-08-29 Thread Mathew Yeates
never returns


Charles R Harris wrote:


 On 8/29/07, *Mathew Yeates* [EMAIL PROTECTED] wrote:

 I guess I can't blame lapack. My system has atlas so I recompiled
 numpy
 pointing to atlas. Now

 id(numpy.dot) == id(numpy.core.multiarray.dot) is False

 However when I run decomp.svd on a 25 by 25 identity matrix, it
  hangs when gesdd is called (line 501 of linalg/decomp.py)

 Anybody else seeing this?


 What do you mean by hang?

 Chuck


 





Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Charles R Harris
Anne,

On 8/29/07, Anne Archibald [EMAIL PROTECTED] wrote:

 Hi,

 numpy's Fourier transforms have the handy feature of being able to
 upsample and downsample signals; for example the documentation cites
 irfft(rfft(A),16*len(A)) as a way to get a Fourier interpolation of A.
 However, there is a peculiarity with the way numpy handles the
 highest-frequency coefficient.


snip

The upshot is, if I correctly understand what is going on, that the
 last coefficient needs to be treated somewhat differently from the
 others; when one pads with zeros in order to upsample the signal, one
 should multiply the last coefficient by  0.5. Should this be done in
 numpy's upsampling code? Should it at least be documented?


What is going on is that the coefficient at the Nyquist  frequency appears
once in the unextended array, but twice when the array is extended with
zeros because of the Hermitean symmetry. That should probably be fixed in
the upsampling code.

Chuck


Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Charles R Harris
On 8/29/07, Charles R Harris [EMAIL PROTECTED] wrote:

 Anne,

 On 8/29/07, Anne Archibald [EMAIL PROTECTED] wrote:
 
  Hi,
 
  numpy's Fourier transforms have the handy feature of being able to
  upsample and downsample signals; for example the documentation cites
  irfft(rfft(A),16*len(A)) as a way to get a Fourier interpolation of A.
  However, there is a peculiarity with the way numpy handles the
  highest-frequency coefficient.


 snip

 The upshot is, if I correctly understand what is going on, that the
  last coefficient needs to be treated somewhat differently from the
  others; when one pads with zeros in order to upsample the signal, one
  should multiply the last coefficient by   0.5. Should this be done in
  numpy's upsampling code? Should it at least be documented?


 What is going on is that the coefficient at the Nyquist  frequency appears
 once in the unextended array, but twice when the array is extended with
 zeros because of the Hermitean symmetry. That should probably be fixed in
 the upsampling code.


The inverse irfft also scales by dividing by the new transform size instead
of the original size, so the result needs to be scaled up for the
interpolation to work. It is easy to go wrong with fft's when the correct
sampling/frequency scales aren't carried with the data. I always do that
myself so that the results are independent of transform size/interpolation
and expressed in some standard units.


In [9]: a = array([1, 0, 0, 0], dtype=double)

In [10]: b = rfft(a)

In [11]: b[2] *= .5

In [12]: irfft(b,8)
Out[12]:
array([ 0.5  ,  0.3017767,  0.   , -0.0517767,  0.   ,
   -0.0517767,  0.   ,  0.3017767])

In [13]: 2*irfft(b,8)
Out[13]:
array([ 1.,  0.60355339,  0., -0.10355339,  0.,
   -0.10355339,  0.,  0.60355339])

I don't know where that should be fixed.

Chuck


Re: [Numpy-discussion] gesdd hangs

2007-08-29 Thread Charles R Harris
On 8/29/07, Mathew Yeates [EMAIL PROTECTED] wrote:

 never returns


Where is decomp coming from? linalg.svd(eye(25)) works fine here.

Chuck
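For reference, the numpy-only equivalent of that check (avoiding scipy's decomp module entirely) is below; if this also hangs, the problem is in the linked LAPACK rather than in scipy:

```python
import numpy as np

# numpy's own SVD on the same input (dispatches to the *gesdd LAPACK
# driver); the singular values of the identity matrix are all 1.
u, s, vt = np.linalg.svd(np.eye(25))
print(np.allclose(s, 1.0))   # True
```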


Re: [Numpy-discussion] gesdd hangs

2007-08-29 Thread Robert Kern
Charles R Harris wrote:
 
 On 8/29/07, *Mathew Yeates* [EMAIL PROTECTED] wrote:
 
 never returns
 
 Where is decomp coming from? linalg.svd(eye(25)) works fine here.

scipy, most likely.

-- 
Robert Kern



Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Anne Archibald
On 29/08/2007, Charles R Harris [EMAIL PROTECTED] wrote:

  What is going on is that the coefficient at the Nyquist  frequency appears
 once in the unextended array, but twice when the array is extended with
 zeros because of the Hermitean symmetry. That should probably be fixed in
 the upsampling code.

Is this also appropriate for the other FFTs? (inverse real, complex,
hermitian, what have you) I have written a quick hack (attached) that
should do just that rescaling, but I don't know that it's a good idea,
as implemented. Really, for a complex IFFT it's extremely peculiar to
add the padding where we do (between frequency -1 and frequency zero);
it would make more sense to pad at the high frequencies (which are in
the middle of the array). Forward FFTs, though, can reasonably be
padded at the end, and it doesn't make much sense to rescale the last
data point.

 The inverse irfft also scales by dividing by the new transform size instead
 of the original size, so the result needs to be scaled up for the
 interpolation to work. It is easy to go wrong with fft's when the correct
 sampling/frequency scales aren't carried with the data. I always do that
 myself so that the results are independent of transform size/interpolation
 and expressed in some standard units.

The scaling of the FFT is a pain everywhere. I always just try it a
few times until I get the coefficients right. I sort of like FFTW's
convention of never normalizing anything - it means the transforms
have nice simple formulas, though unfortunately it also means that
ifft(fft(A))!=A. In any case the normalization of numpy's FFTs is not
something that can reasonably be changed, even in the special case of
the zero-padding inverse (and forward) FFTs.
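For the record, numpy's own convention can be checked in two lines (nothing assumed beyond the public np.fft module): fft is unnormalized and ifft divides by N, so the round trip is the identity, unlike FFTW's unnormalized pair.

```python
import numpy as np

# numpy normalization: ifft(fft(a)) recovers a exactly (up to rounding).
a = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(np.fft.ifft(np.fft.fft(a)), a))   # True

# The unnormalized forward transform of a unit impulse is flat:
print(np.fft.fft(np.array([1.0, 0.0, 0.0, 0.0])))   # [1.+0.j 1.+0.j 1.+0.j 1.+0.j]
```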

Anne


fftfix
Description: Binary data


Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Anne Archibald
On 29/08/2007, Charles R Harris [EMAIL PROTECTED] wrote:

  Is this also appropriate for the other FFTs? (inverse real, complex,
  hermitian, what have you) I have written a quick hack (attached) that
  should do just that rescaling, but I don't know that it's a good idea,
  as implemented. Really, for a complex IFFT it's extremely peculiar to
  add the padding where we do (between frequency -1 and frequency zero);
  it would make more sense to pad at the high frequencies (which are in
  the middle of the array). Forward FFTs, though, can reasonably be
  padded at the end, and it doesn't make much sense to rescale the last
  data point.

 It all depends on the data and what you intend. Much of my experience is
 with Michelson interferometers and in that case the interferogram is
 essentially an autocorrelation, so it is desirable to keep its center at
 sample zero and let the left side wrap around, so ideally you fill in the
 middle as you suggest. You can also pad at the end if you don't put the
 center at zero, but then you need to phase shift the spectrum in a way that
 corresponds to rotating the center to index zero and padding in the middle.
 I expect you would want to do the same thing for complex transforms if they
 are of real data and do the nyquist divided by two thingy. If the high
 frequencies in a complex transform are actually high frequencies and not
 aliases of negative frequencies, then you will want to just append zeros.
 That case also occurs,  I have designed decimating complex filters that
 produce output like that; they were like single sideband in the radio world.

So is it a fair summary to say that for irfft, it is fairly clear that
one should adjust the Nyquist coefficient, but for the other varieties
of FFT, the padding done by numpy is just one of many possible
choices?

Should numpy be modified so that irfft adjusts the Nyquist
coefficient? Should this happen only for irfft?

 I usually multiply the forward transform by the sample interval, in secs or
 cm, and the unscaled inverse transform by the frequency sample interval, in
 Hz or cm^-1. That treats both the forward and inverse fft like
 approximations to the integral transforms and makes the units those of
 spectral density. If you think trapezoidal rule, then you will also see
 factors of .5 at the ends, but that is a sort of apodization that is
 consistent with how Fourier series converge at discontinuities. In the
 normal case where no interpolation is done the product of the sample
 intervals is 1/N, so it reduces to the usual convention. Note that in your
 example the sampling interval decreases when you do the interpolation, so if
 you did another forward transform it would be scaled down to account for the
 extra points in the data.

That's a convenient normalization.

Do you know if there's a current package to associate units with numpy
arrays? For my purposes it would usually be sufficient to have arrays
of quantities with uniform units. Conversions need only be
multiplicative (I don't care about Celsius-to-Fahrenheit style
conversions) and need not even be automatic, though of course that
would be convenient. Right now I use Frink for that sort of thing, but
it would have saved me from making a number of minor mistakes in
several pieces of python code I've written.

Thanks,
Anne


Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

2007-08-29 Thread Charles R Harris
On 8/29/07, Anne Archibald [EMAIL PROTECTED] wrote:

 On 29/08/2007, Charles R Harris [EMAIL PROTECTED] wrote:

   Is this also appropriate for the other FFTs? (inverse real, complex,
   hermitian, what have you) I have written a quick hack (attached) that
   should do just that rescaling, but I don't know that it's a good idea,
   as implemented. Really, for a complex IFFT it's extremely peculiar to
   add the padding where we do (between frequency -1 and frequency zero);
   it would make more sense to pad at the high frequencies (which are in
   the middle of the array). Forward FFTs, though, can reasonably be
   padded at the end, and it doesn't make much sense to rescale the last
   data point.
 
  It all depends on the data and what you intend. Much of my experience is
  with Michelson interferometers and in that case the interferogram is
  essentially an autocorrelation, so it is desirable to keep its center at
  sample zero and let the left side wrap around, so ideally you fill in
 the
  middle as you suggest. You can also pad at the end if you don't put the
  center at zero, but then you need to phase shift the spectrum in a way
 that
  corresponds to rotating the center to index zero and padding in the
 middle.
  I expect you would want to do the same thing for complex transforms if
 they
  are of real data and do the nyquist divided by two thingy. If the high
  frequencies in a complex transform are actually high frequencies and not
  aliases of negative frequencies, then you will want to just append
 zeros.
  That case also occurs,  I have designed decimating complex filters that
  produce output like that; they were like single sideband in the radio world.

 So is it a fair summary to say that for irfft, it is fairly clear that
 one should adjust the Nyquist coefficient, but for the other varieties
 of FFT, the padding done by numpy is just one of many possible
 choices?

 Should numpy be modified so that irfft adjusts the Nyquist
 coefficient? Should this happen only for irfft?


Yes, I think that should be the case. If the complex transforms pad in the
middle, then they are treating the high frequencies as aliases, but unless
they explicitly duplicate the Nyquist coefficient scaling isn't needed. Hmm,
actually, I think that is wrong. The original data points will be
reproduced, but what happens in between points? In between there is a
difference between positive and negative frequences. So in a complex
transform of real data one would want to split the Nyquist coefficient
between high and low frequencies. I don't think it is possible to make a
general statement about the complex case. Just hope the middle frequency is
zero so you can ignore the problem ;)

What happens in the real case is that the irfft algorithm uses the Hermitean
symmetry of the spectrum, so the coefficient is implicitly duplicated.
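A sketch of that Nyquist-splitting idea for the complex transform (fft_interp is an invented helper, not numpy API, and it only handles even-length real input upsampled to a longer length):

```python
import numpy as np

def fft_interp(x, n):
    # Fourier-interpolate a real signal of even length m to length
    # n > m by padding in the middle of the complex spectrum and
    # splitting the Nyquist bin between the +m/2 and -m/2 frequencies.
    x = np.asarray(x, dtype=float)
    m = len(x)
    assert m % 2 == 0 and n > m
    X = np.fft.fft(x)
    pad = np.zeros(n, dtype=complex)
    pad[: m // 2] = X[: m // 2]             # DC and positive frequencies
    pad[m // 2] = 0.5 * X[m // 2]           # half the Nyquist bin ...
    pad[n - m // 2] = 0.5 * X[m // 2]       # ... and its negative twin
    pad[n - m // 2 + 1:] = X[m // 2 + 1:]   # negative frequencies
    return (n / m) * np.fft.ifft(pad).real

x = np.cos(2 * np.pi * 4 * np.arange(8) / 8.0)   # pure Nyquist-frequency signal
y = fft_interp(x, 16)
print(np.allclose(y[::2], x))   # original samples reproduced: True
```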

 I usually multiply the forward transform by the sample interval, in secs
 or
  cm, and the unscaled inverse transform by the frequency sample interval,
 in
  Hz or cm^-1. That treats both the forward and inverse fft like
  approximations to the integral transforms and makes the units those of
  spectral density. If you think trapezoidal rule, then you will also see
  factors of .5 at the ends, but that is a sort of apodization that is
  consistent with how Fourier series converge at discontinuities. In the
  normal case where no interpolation is done the product of the sample
  intervals is 1/N, so it reduces to the usual convention. Note that in
 your
  example the sampling interval decreases when you do the interpolation,
 so if
  you did another forward transform it would be scaled down to account for
 the
  extra points in the data.

 That's a convenient normalization.

 Do you know if there's a current package to associate units with numpy
 arrays? For my purposes it would usually be sufficient to have arrays
 of quantities with uniform units. Conversions need only be
 multiplicative (I don't care about Celsius-to-Fahrenheit style
 conversions) and need not even be automatic, though of course that
 would be convenient. Right now I use Frink for that sort of thing, but
 it would have saved me from making a number of minor mistakes in
 several pieces of python code I've written.


There was a presentation by some fellow from CalTech at SciPy 2005 (4?)
about such a system, but ISTR it looked pretty complex. C++ template
programming does it with traits and maybe the Enthought folks have something
useful along those lines. Otherwise, I don't know of any such system for
general use. Maybe ndarray could be subclassed? It can be convenient to
multiply and divide units, so maybe some sort of string with something to
gather the same units together with a power could be a useful way to track
them and wouldn't tie one down to any particular choice.
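A toy sketch of that subclassing idea (UnitArray and its to() method are invented names, not any existing package), covering only the multiplicative, non-automatic conversions asked for above:

```python
import numpy as np

class UnitArray(np.ndarray):
    # ndarray subclass carrying a unit string; the unit bookkeeping is
    # manual, and ufuncs do not combine units here.
    def __new__(cls, data, unit=""):
        obj = np.asarray(data, dtype=float).view(cls)
        obj.unit = unit
        return obj

    def __array_finalize__(self, obj):
        # Propagate the unit through views and slices.
        self.unit = getattr(obj, "unit", "")

    def to(self, unit, factor):
        # Explicit multiplicative conversion, e.g. seconds -> milliseconds.
        return UnitArray(np.asarray(self) * factor, unit)

t = UnitArray([1.0, 2.0, 0.5], unit="s")
t_ms = t.to("ms", 1e3)
print(list(np.asarray(t_ms)), t_ms.unit)   # [1000.0, 2000.0, 500.0] ms
```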

Chuck