[Numpy-discussion] Example of numpy cov() not correct?

2008-07-29 Thread Anthony Kong
Hi, all,
 
I am trying out the example here
(http://www.scipy.org/Numpy_Example_List_With_Doc#cov)
 
 
>>> from numpy import *
...
>>> T = array([1.3, 4.5, 2.8, 3.9])   
>>> P = array([2.7, 8.7, 4.7, 8.2])  
>>> cov(T,P)
 
The answer is supposed to be 3.95416667.

The result I got is instead the full covariance matrix:

array([[ 1.97583333,  3.95416667],
       [ 3.95416667,  8.22916667]])

So I just want to confirm that this particular example may no longer be
correct.

I am using Python 2.4.3 with NumPy 1.1.0 on MS Windows.
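
For reference, a minimal sketch confirming this behaviour: cov(T, P)
returns the full 2x2 covariance matrix, and the scalar answer the old
example quotes is its off-diagonal element.

import numpy as np

# cov(T, P) returns the 2x2 covariance matrix; the old example's
# scalar answer is the cross-covariance at [0, 1].
T = np.array([1.3, 4.5, 2.8, 3.9])
P = np.array([2.7, 8.7, 4.7, 8.2])

C = np.cov(T, P)
print(C[0, 1])                                # 3.95416667
print(np.allclose(C[0, 0], T.var(ddof=1)))    # True: diagonal holds the sample variances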
 
Cheers, Anthony



[Numpy-discussion] Why are all fortran compilers looked for when --fcompiler=something is given?

2008-07-29 Thread David Cournapeau
Hi,

While building numpy in wine, I got some errors in distutils when
initializing the fortran compilers. I build numpy with:

wine python setup.py build -c mingw32 --fcompiler=gnu95

And I got an error in load_all_fcompiler_classes when it tries to load
the Compaq visual compiler. I don't think it has anything to do with
wine, but rather that in python 2.6, running MSVCCompiler().initialize()
can raise an IOError (python 2.6 uses a new method to look for msvc
compilers, based on the existence of a bat file, which I don't have on
my wine installation; I can check this on Windows, but that would be
awkward since I would need to uninstall visual studio first...). I could
catch the exception in the CompaqVisualCompiler, but I don't understand
why this class is loaded at all. It also explains a warning I never
quite understood before about "one should fix me in fcompiler/compaq.py"
on Windows, even though I have never used this compiler.

Is this something we should "fix"? Or just leave it alone to avoid breaking
anything?

cheers,

David


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread David Cournapeau
Mathew Yeates wrote:
> My setup is similar. Same CPUs, except I am using ATLAS 3.9.1 and gcc
> 4.2.4.

ATLAS 3.9.1 is a development version and is not intended for production
use. Please use 3.8.2 if you want to build your own ATLAS.

cheers,

David


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread James Turner
> This smells like an ATLAS problem.  You should send a note to Clint
> Whaley (the ATLAS guy). IIRC, ATLAS has some hand-coded asm routines and
> it seems that support for these very new processors might be broken.

I believe the machine is a couple of years old, though it's a
fairly high-end workstation. Anyway, I have submitted an ATLAS
support request so they're aware of it:

https://sourceforge.net/tracker/index.php?func=detail&aid=2032011&group_id=23725&atid=379483

Cheers,

James.



Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread James Turner
> Rather than look for errors in the scaling factors or errors in your code, I
> think that you should try to expand your understanding of the (subtly) 
> different
> types of Fourier representations.

I'd strongly recommend "The Fourier Transform and its Applications"
by Bracewell, if that helps.

James.



Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Robert Kern
On Tue, Jul 29, 2008 at 18:00, Mathew Yeates <[EMAIL PROTECTED]> wrote:
> What got fixed?

>> (look at the second one, warnings wasn't imported?)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
What got fixed?

Robert Kern wrote:
> On Tue, Jul 29, 2008 at 17:41, Mathew Yeates <[EMAIL PROTECTED]> wrote:
>   
>> Charles R Harris wrote:
>> 
>>> This smells like an ATLAS problem.
>>>   
>> I don't think so. I crash in a call to dsyevd, which is part of LAPACK but
>> not ATLAS. Also, when I commented out the call to test_eigh_build I get
>> zillions of errors like (look at the second one, warnings wasn't imported?)
>> 
>
> Fixed in SVN.
>
>   




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Robert Kern
On Tue, Jul 29, 2008 at 17:41, Mathew Yeates <[EMAIL PROTECTED]> wrote:
> Charles R Harris wrote:
>>
>>
>> This smells like an ATLAS problem.
> I don't think so. I crash in a call to dsyevd, which is part of LAPACK but
> not ATLAS. Also, when I commented out the call to test_eigh_build I get
> zillions of errors like (look at the second one, warnings wasn't imported?)

Fixed in SVN.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
Oops. It is ATLAS. I was able to run with a non-optimized LAPACK.

Mathew Yeates wrote:
> Charles R Harris wrote:
>   
>> This smells like an ATLAS problem.
>> 
> I don't think so. I crash in a call to dsyevd, which is part of LAPACK but
> not ATLAS. Also, when I commented out the call to test_eigh_build I get
> zillions of errors like (look at the second one, warnings wasn't imported?)
> ==
> ERROR: check_single (numpy.linalg.tests.test_linalg.TestSVD)
> --
> Traceback (most recent call last):
>   File 
> "/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py",
>  
> line 30, in check_single
> self.do(a, b)
>   File 
> "/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py",
>  
> line 100, in do
> u, s, vt = linalg.svd(a, 0)
>   File 
> "/home/ossetest/lib/python2.5/site-packages/numpy/linalg/linalg.py", 
> line 980, in svd
> s = s.astype(_realType(result_t))
> ValueError: On entry to DLASD0 parameter number 9 had an illegal value
>
> ==
> ERROR: Tests polyfit
> --
> Traceback (most recent call last):
>   File 
> "/home/ossetest/lib/python2.5/site-packages/numpy/ma/tests/test_extras.py", 
> line 365, in test_polyfit
> assert_almost_equal(polyfit(x,y,3),numpy.polyfit(x,y,3))
>   File "/home/ossetest/lib/python2.5/site-packages/numpy/ma/extras.py", 
> line 882, in polyfit
> warnings.warn("Polyfit may be poorly conditioned", np.RankWarning)
> NameError: global name 'warnings' is not defined
>
>
>   
>> You should send a note to Clint Whaley (the ATLAS guy). IIRC, ATLAS
>> has some hand-coded asm routines and it seems that support for these
>> very new processors might be broken.
>>
>> Chuck
>>
>>
>> 
>>
>
>




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
Charles R Harris wrote:
>
>
> This smells like an ATLAS problem.
I don't think so. I crash in a call to dsyevd, which is part of LAPACK but
not ATLAS. Also, when I commented out the call to test_eigh_build I get
zillions of errors like (look at the second one, warnings wasn't imported?)
==
ERROR: check_single (numpy.linalg.tests.test_linalg.TestSVD)
--
Traceback (most recent call last):
  File 
"/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 30, in check_single
self.do(a, b)
  File 
"/home/ossetest/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", 
line 100, in do
u, s, vt = linalg.svd(a, 0)
  File 
"/home/ossetest/lib/python2.5/site-packages/numpy/linalg/linalg.py", 
line 980, in svd
s = s.astype(_realType(result_t))
ValueError: On entry to DLASD0 parameter number 9 had an illegal value

==
ERROR: Tests polyfit
--
Traceback (most recent call last):
  File 
"/home/ossetest/lib/python2.5/site-packages/numpy/ma/tests/test_extras.py", 
line 365, in test_polyfit
assert_almost_equal(polyfit(x,y,3),numpy.polyfit(x,y,3))
  File "/home/ossetest/lib/python2.5/site-packages/numpy/ma/extras.py", 
line 882, in polyfit
warnings.warn("Polyfit may be poorly conditioned", np.RankWarning)
NameError: global name 'warnings' is not defined


> You should send a note to Clint Whaley (the ATLAS guy). IIRC, ATLAS
> has some hand-coded asm routines and it seems that support for these
> very new processors might be broken.
>
> Chuck
>
>
> 
>




Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Neil Martinsen-Burrell
Felix Richter <...@physik3.uni-rostock.de> writes:

> 
> > Do your answers differ from the theory by a constant factor, or are
> > they completely unrelated?
> 
> No, it's more complicated. Below you'll find my most recent, more
> stripped-down code.
> 
> - I don't know how to scale in a way that works for any n.
> - I don't know how to get the oscillations to match. I suppose it's a
> problem with the frequency scale, but usage of fftfreq() is
> straightforward...
> - I don't know why the imaginary part of the FFT behaves so differently
> from the real part. It should just be a matter of sin vs. cos.
> 
> Is this voodoo? 
> 
> And I didn't find any example on the internet which tries just to
> reproduce an analytic FT with the FFT...
> 
> Thanks for your help!

This is not voodoo, this is signal processing, which is itself harmonic
analysis.  Just because the Fast Fourier Transform is fast doesn't mean that
this stuff is easy.

You seem to be looking for a simple relationship between the Fourier Transform
(an integral transform from L^2(R) -> L^2(R)) of a function f and the Discrete
Fourier Transform (a linear transformation from C^n to C^n) of the vector of f
sampled at regularly-spaced points.

Such a simple relationship does not exist.  That is why you found no such
examples on the internet.  The closest you might come is to study the
surprisingly cogent explanation at 

http://en.wikipedia.org/wiki/Fourier_analysis

of the differences between the various types of Fourier analysis.  Remember that
the DFT (as implemented by an FFT algorithm) is *not* an approximation to the
Fourier transform, but rather a streamlined way of computing the coefficients of
a Fourier series of a particular periodic function (that contains a finite
number of Fourier modes).
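
A minimal sketch of this last point, assuming a band-limited signal with a
single Fourier mode: the DFT recovers the series coefficient exactly rather
than approximating a continuous transform.

import numpy as np

# One period of cos(2*pi*3*t), sampled at n points: a periodic signal
# containing a single Fourier mode at k = 3.
n = 32
t = np.arange(n) / float(n)
x = np.cos(2 * np.pi * 3 * t)

X = np.fft.fft(x)
# All the energy sits in bins 3 and n-3, each with magnitude n/2.
print(np.allclose(abs(X[3]), n / 2.0))                 # True
print(np.allclose(np.delete(abs(X), [3, n - 3]), 0))   # True (within rounding)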

Rather than look for errors in the scaling factors or errors in your code, I
think that you should try to expand your understanding of the (subtly) different
types of Fourier representations.

-Neil




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 2:48 PM, James Turner <[EMAIL PROTECTED]> wrote:

> Thanks everyone. I think I might try using the Netlib BLAS, since
> it's a server installation... but please let me know if you'd like
> me to troubleshoot this some more (the sooner the easier).
>

This smells like an ATLAS problem.  You should send a note to Clint Whaley
(the ATLAS guy). IIRC, ATLAS has some hand-coded asm routines and it seems
that support for these very new processors might be broken.

Chuck


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
More info:
when /linalg.py(872)eigh() calls dsyevd, I crash.

James Turner wrote:
> Thanks everyone. I think I might try using the Netlib BLAS, since
> it's a server installation... but please let me know if you'd like
> me to troubleshoot this some more (the sooner the easier).
>
> James.
>




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread James Turner
Thanks everyone. I think I might try using the Netlib BLAS, since
it's a server installation... but please let me know if you'd like
me to troubleshoot this some more (the sooner the easier).

James.



Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
I am using an ATLAS 64-bit LAPACK (3.9.1).
My CPU (4 CPUs):

-
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 23
model name  : Intel(R) Xeon(R) CPU   X5460  @ 3.16GHz
stepping: 6
cpu MHz : 3158.790
cache size  : 6144 KB
physical id : 0
siblings: 4
core id : 0
cpu cores   : 4
fpu : yes
fpu_exception   : yes
cpuid level : 10
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall 
nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
bogomips: 6321.80
clflush size: 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:
---

A system trace ends with
futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  write(2, ".", 1)  = 1
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  futex(0xc600bb0, FUTEX_WAKE, 1)   = 0
2655  --- SIGSEGV (Segmentation fault) @ 0 (0) ---
2655  +++ killed by SIGSEGV +++

I get no core file.



Robert Kern wrote:
> On Tue, Jul 29, 2008 at 14:16, James Turner <[EMAIL PROTECTED]> wrote:
>   
>> I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
>> with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
>> numpy.test() I get a core dump, as follows. I haven't noticed
>> any special errors during the build. Should I post the entire
>> terminal output from "python setup.py install"? Maybe as an
>> attachment? Let me know if I can provide any more info.
>> 
>
> Can you do
>
>   numpy.test(verbosity=2)
>
> ? That will print out the name of the test before running it, so we
> will know exactly which test caused the core dump.
>
> A gdb backtrace would also help.
>
>   




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
My setup is similar. Same CPUs, except I am using ATLAS 3.9.1 and gcc
4.2.4.

James Turner wrote:
>> Are you using ATLAS? If so, where did you get it and what cpu do you have?
>> 
>
> Yes. I have Atlas 3.8.2. I think I got it from
> http://math-atlas.sourceforge.net. I also included Lapack 3.1.1
> from Netlib when building it from source. This worked on another
> machine.
>
> According to /proc/cpuinfo, I have a quad-processor (or core?)
> Intel Xeon. It is running the Linux 2.4 kernel (I needed to build
> a load of software including NumPy with an older glibc so it will
> run on older client machines). Maybe I shouldn't use ATLAS for a
> server installation, since it won't be tuned well? We're trying
> to keep things uniform across our sites though.
>
> Thanks!
>
> James.
>




Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread James Turner
> Are you using ATLAS? If so, where did you get it and what cpu do you have?

Yes. I have Atlas 3.8.2. I think I got it from
http://math-atlas.sourceforge.net. I also included Lapack 3.1.1
from Netlib when building it from source. This worked on another
machine.

According to /proc/cpuinfo, I have a quad-processor (or core?)
Intel Xeon. It is running the Linux 2.4 kernel (I needed to build
a load of software including NumPy with an older glibc so it will
run on older client machines). Maybe I shouldn't use ATLAS for a
server installation, since it won't be tuned well? We're trying
to keep things uniform across our sites though.

Thanks!

James.



Re: [Numpy-discussion] Numpy 1.1.0 (+ PIL 1.1.6) crashes on large datasets

2008-07-29 Thread Zachary Pincus
> I've managed to crash numpy+PIL when feeding it rather large images.
> Please see the URL for a test image, script, and gdb stack trace. This
> crashes on my box (Windows XP SP3) as well as on a linux box (the gdb
> trace I've been provided with) and a Mac. Windows reports the crash to
> happen in "multiarray.pyd"; the stack trace mentions the equivalent
> file.
> Unfortunately, I don't know how to fix this. Can I help somehow?
>
> -- Chris
>
> [1] http://cracki.ath.cx:10081/pub/numpy-pil-crash/

Hmm... I've opened this file with my homebrew image-IO tools, and I  
cannot provoke a segfault. (These tools are derived from PIL, but with  
many bugs related to the array interface fixed. I had submitted  
patches to the PIL mailing list, which, as usual, languished.)

I wonder if the issue is with how the PIL is providing the buffer  
interface to numpy? Can you get the crash if you get the array into  
numpy through the image's tostring (or whatever) method, and then use  
numpy.fromstring?
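
A minimal sketch of that route (the file name is hypothetical; tostring()
and getbands() are the PIL 1.1.6-era API):

import numpy as np
from PIL import Image

# Bypass PIL's buffer/array interface: serialize the image to a plain
# byte string, then rebuild the array from it.
im = Image.open('test-image.png')                 # hypothetical file name
a = np.fromstring(im.tostring(), dtype=np.uint8)
a = a.reshape((im.size[1], im.size[0], len(im.getbands())))
print(a.shape)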

Zach

PS. This is with a recent SVN version of numpy, on OS X 10.5.4.



[Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread James Turner
Thanks, Robert.

> Can you do
>
>  numpy.test(verbosity=2)

OK. Here is the line that fails:

check_matvec (numpy.core.tests.test_numeric.TestDot)Floating exception (core dumped)

> A gdb backtrace would also help.

OK. I'm pretty ignorant about using debuggers, but I did
"gdb python core.23696" and got the following. Does that help?

Thanks,

James.

---

GNU gdb Red Hat Linux (5.3.90-0.20030710.40rh)
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db
library "/lib/tls/libthread_db.so.1".

Core was generated by `python numpytest.py'.
Program terminated with signal 8, Arithmetic exception.
Reading symbols from /lib/tls/libpthread.so.0...done.
Loaded symbols for /lib/tls/libpthread.so.0
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libutil.so.1...done.
Loaded symbols for /lib/libutil.so.1
Reading symbols from /lib/tls/libm.so.6...done.
Loaded symbols for /lib/tls/libm.so.6
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/multiarray.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/multiarray.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/umath.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/umath.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/strop.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/strop.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/_sort.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/_sort.so
---Type <return> to continue, or q <return> to quit---
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/_dotblas.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/_dotblas.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/cPickle.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/cPickle.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/cStringIO.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/cStringIO.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/parser.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/parser.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/_struct.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/_struct.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/operator.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/operator.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/itertools.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/itertools.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/collections.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/collections.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/mmap.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/mmap.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/scalarmath.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/core/scalarmath.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/math.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/math.so
---Type <return> to continue, or q <return> to quit---
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/lib/_compiled_base.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy/lib/_compiled_base.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/time.so...done.
Loaded symbols for
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/time.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/binascii.so...done.
Loaded symbols for 
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/binascii.so
Reading symbols from
/astro/iraf/i686/gempylocal/lib/python2.5/lib-dynload/_rand

Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 1:16 PM, James Turner <[EMAIL PROTECTED]> wrote:

> I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
> with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
> numpy.test() I get a core dump, as follows. I haven't noticed
> any special errors during the build. Should I post the entire
> terminal output from "python setup.py install"? Maybe as an
> attachment? Let me know if I can provide any more info.
>
> Thanks a lot,
>
> James.
>
> ---
>
> [EMAIL PROTECTED] DRSetupScripts]$ python
> Python 2.5.1 (r251:54863, Jul 28 2008, 19:08:11)
> [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-20)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>  >>> import numpy
>  >>> numpy.test()
> Numpy is installed in
> /astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy
> Numpy version 1.1.0
> Python version 2.5.1 (r251:54863, Jul 28 2008, 19:08:11) [GCC 3.2.3
> 20030502
> (Red Hat Linux 3.2.3-20)]
>   Found 2/2 tests for numpy.core.tests.test_ufunc
>   Found 143/143 tests for numpy.core.tests.test_regression
>   Found 63/63 tests for numpy.core.tests.test_unicode
>   Found 7/7 tests for numpy.core.tests.test_scalarmath
>   Found 3/3 tests for numpy.core.tests.test_errstate
>   Found 16/16 tests for numpy.core.tests.test_umath
>   Found 12/12 tests for numpy.core.tests.test_records
>   Found 70/70 tests for numpy.core.tests.test_numeric
>   Found 18/18 tests for numpy.core.tests.test_defmatrix
>   Found 36/36 tests for numpy.core.tests.test_numerictypes
>   Found 286/286 tests for numpy.core.tests.test_multiarray
>   Found 3/3 tests for numpy.core.tests.test_memmap
>   Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
>   Found 5/5 tests for numpy.distutils.tests.test_misc_util
>   Found 2/2 tests for numpy.fft.tests.test_fftpack
>   Found 3/3 tests for numpy.fft.tests.test_helper
>   Found 15/15 tests for numpy.lib.tests.test_twodim_base
>   Found 1/1 tests for numpy.lib.tests.test_regression
>   Found 4/4 tests for numpy.lib.tests.test_polynomial
>   Found 43/43 tests for numpy.lib.tests.test_type_check
>   Found 1/1 tests for numpy.lib.tests.test_financial
>   Found 1/1 tests for numpy.lib.tests.test_machar
>   Found 53/53 tests for numpy.lib.tests.test_function_base
>   Found 6/6 tests for numpy.lib.tests.test_index_tricks
>   Found 15/15 tests for numpy.lib.tests.test_io
>   Found 10/10 tests for numpy.lib.tests.test_arraysetops
>   Found 1/1 tests for numpy.lib.tests.test_ufunclike
>   Found 5/5 tests for numpy.lib.tests.test_getlimits
>   Found 24/24 tests for numpy.lib.tests.test__datasource
>   Found 49/49 tests for numpy.lib.tests.test_shape_base
>   Found 3/3 tests for numpy.linalg.tests.test_regression
>   Found 89/89 tests for numpy.linalg.tests.test_linalg
>   Found 36/36 tests for numpy.ma.tests.test_old_ma
>   Found 94/94 tests for numpy.ma.tests.test_core
>   Found 15/15 tests for numpy.ma.tests.test_extras
>   Found 17/17 tests for numpy.ma.tests.test_mrecords
>   Found 4/4 tests for numpy.ma.tests.test_subclassing
>   Found 7/7 tests for numpy.tests.test_random
>   Found 16/16 tests for numpy.testing.tests.test_utils
>   Found 5/5 tests for numpy.tests.test_ctypeslib
>
> ..Floating exception (core dumped)
>

Are you using ATLAS? If so, where did you get it and what cpu do you have?

Chuck


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Mathew Yeates
I'm getting this too:
Ticket #652 ... ok
Ticket 662.Segmentation fault


Robert Kern wrote:
> On Tue, Jul 29, 2008 at 14:16, James Turner <[EMAIL PROTECTED]> wrote:
>   
>> I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
>> with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
>> numpy.test() I get a core dump, as follows. I haven't noticed
>> any special errors during the build. Should I post the entire
>> terminal output from "python setup.py install"? Maybe as an
>> attachment? Let me know if I can provide any more info.
>> 
>
> Can you do
>
>   numpy.test(verbosity=2)
>
> ? That will print out the name of the test before running it, so we
> will know exactly which test caused the core dump.
>
> A gdb backtrace would also help.
>
>   




Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Pierre GM
On Tuesday 29 July 2008 15:14:13 Ivan Vilata i Balaguer wrote:
> Pierre GM (el 2008-07-29 a les 12:38:19 -0400) va dir::
> > > Relative time versus relative time
> > > --
> > >
> > > This case would be the same than the previous one (absolute vs
> > > absolute).  Our proposal is to forbid this operation if the time units
> > > of the operands are different.
> >
> > Mmh, less sure on this one. Can't we use a hierarchy of time units, and
> > force to the lowest ?
> >
> > For example:
> > >>>numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[M]")
> > >>>array([15,15,15], dtype="t8['M']")
> >
> > I agree that adding ns to years makes no sense, but ns to s ? min to
> > hr or days ?  In short: systematically raising an exception looks a
> > bit too drastic. There are some simple unambiguous cases that should be
> > allowed (Y+M, Y+Q, M+Q, H+D...)
>
> Do you mean using the most precise unit for operations with "near
> enough", different units?  I see the point, but what makes me doubt
> about it is giving the user the false impression that the most precise
> unit is *always* expected.  I'd rather spare the user as many surprises
> as possible, by simplifying rules in favour of explicitness (but that
> may be debated).

Let me rephrase:
Adding different relative time units should be allowed when there's no
ambiguity about the output. For example, a relative year timedelta is always
12 month timedeltas, or 4 quarter timedeltas. In that case, I should be able
to do:

>>>numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[M]")
array([15,15,15], dtype="t8['M']")
>>>numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[Q]")
array([7,7,7], dtype="t8['Q']")

Similarly:
* an hour is always 3600s, so I could add relative s/ms/us/ns timedeltas to 
hour timedeltas, and get the result in s/ms/us/ns.
* A day is always 24h, so I could add relative hours and days timedeltas and 
get an hour timedelta
* A week is always 7d, so W+D -> D 

However:
* We can't tell beforehand how many days are in a given month, so adding
relative days and months would raise an exception.
* Same thing with weeks and months/quarters/years

There'll be only a limited number of time units, therefore a limited number of 
potential combinations between time units. It'd be just a matter of listing 
which ones are allowed and which ones will raise an exception.
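
A hypothetical sketch of such a table (the names and rules here are mine,
purely illustrative of the idea, not proposed API):

# Hypothetical compatibility table: factors are exact by definition;
# pairs absent from the table would raise an exception (e.g. M + D).
EXACT_FACTORS = {
    ('Y', 'M'): 12,    # 1 year == 12 months
    ('Y', 'Q'): 4,     # 1 year == 4 quarters
    ('Q', 'M'): 3,     # 1 quarter == 3 months
    ('W', 'D'): 7,     # 1 week == 7 days
    ('D', 'h'): 24,    # 1 day == 24 hours
    ('h', 's'): 3600,  # 1 hour == 3600 seconds
}

def add_timedeltas(a, a_unit, b, b_unit):
    """Add two relative times, expressing the result in the finer unit."""
    if a_unit == b_unit:
        return a + b, a_unit
    if (a_unit, b_unit) in EXACT_FACTORS:
        return a * EXACT_FACTORS[(a_unit, b_unit)] + b, b_unit
    if (b_unit, a_unit) in EXACT_FACTORS:
        return b * EXACT_FACTORS[(b_unit, a_unit)] + a, a_unit
    raise ValueError("ambiguous unit combination: %s + %s" % (a_unit, b_unit))

print(add_timedeltas(1, 'Y', 3, 'M'))   # (15, 'M'), matching the example above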


> > > Note: we refused to use the ``.astype()`` method because of the
> > > additional 'time_reference' parameter that will sound strange for other
> > > typical uses of ``.astype()``.
> >
> > A method would be really, really helpful, though...
> > [...]
>
> Yay, but what doesn't seem to fit for me is that the method would only
> make sense for time values.  

Well, what about a .tounit(new_unit, reference=None)?
By default, the reference would be None and default to the POSIX epoch.
We could also go for .totunit (for "to time unit").



> NumPy is pretty orthogonal in that every 
> method and attribute applies to every type.  However, if "units" were to
> be adopted by NumPy, the method would fit in well.  In fact, we are
> thinking of adding a ``unit`` attribute to dtypes to support time units
> (being ``None`` for normal NumPy types).  But full unit support in NumPy
> looks so far away that I'm not sure to adopt the method.
>
> Thanks for the insights.  Cheers,




Re: [Numpy-discussion] Operation over multiple axes? (Or: Partial flattening?)

2008-07-29 Thread Stéfan van der Walt
2008/7/29 Hans Meine <[EMAIL PROTECTED]>:
> On Dienstag 29 Juli 2008, Stéfan van der Walt wrote:
>> > One way to achieve this is partial flattening, which I did like this:
>> >
>> >  dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)
>> >
>> > Is there a more elegant way to do this?
>>
>> That looks like a good way to do it.  You can clean it up ever so slightly:
>>
>> x.reshape([-1, x.shape[-1]]).sum(axis=0)
>
> Thanks, that looks more elegant indeed.  I am not sure if I've read about -1
> in shapes before.  I assume it represents "the automatically determined rest"
> and may only appear once?  Should this be documented in the reshape
> docstring?

That's correct, and yes -- it should!  Would you like to document it
yourself?  If you register on

http://sd-2116.dedibox.fr/pydocweb

I'll give you editor's access.

Regards
Stéfan


Re: [Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread Robert Kern
On Tue, Jul 29, 2008 at 14:16, James Turner <[EMAIL PROTECTED]> wrote:
> I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
> with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
> numpy.test() I get a core dump, as follows. I haven't noticed
> any special errors during the build. Should I post the entire
> terminal output from "python setup.py install"? Maybe as an
> attachment? Let me know if I can provide any more info.

Can you do

  numpy.test(verbosity=2)

? That will print out the name of the test before running it, so we
will know exactly which test caused the core dump.

A gdb backtrace would also help.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


[Numpy-discussion] Numpy 1.1.0 (+ PIL 1.1.6) crashes on large datasets

2008-07-29 Thread Christoph Rackwitz
Hi,

I've managed to crash numpy+PIL when feeding it rather large images.
Please see the URL for a test image, script, and gdb stack trace. This
crashes on my box (Windows XP SP3) as well as on a linux box (the gdb
trace I've been provided with) and a Mac. Windows reports the crash to
happen in "multiarray.pyd"; the stack trace mentions the equivalent
file.
Unfortunately, I don't know how to fix this. Can I help somehow?

-- Chris

[1] http://cracki.ath.cx:10081/pub/numpy-pil-crash/


[Numpy-discussion] Core dump during numpy.test()

2008-07-29 Thread James Turner
I have built NumPy 1.1.0 on RedHat Enterprise 3 (Linux 2.4.21
with gcc 3.2.3 and glibc 2.3.2) and Python 2.5.1. When I run
numpy.test() I get a core dump, as follows. I haven't noticed
any special errors during the build. Should I post the entire
terminal output from "python setup.py install"? Maybe as an
attachment? Let me know if I can provide any more info.

Thanks a lot,

James.

---

[EMAIL PROTECTED] DRSetupScripts]$ python
Python 2.5.1 (r251:54863, Jul 28 2008, 19:08:11)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 >>> numpy.test()
Numpy is installed in 
/astro/iraf/i686/gempylocal/lib/python2.5/site-packages/numpy
Numpy version 1.1.0
Python version 2.5.1 (r251:54863, Jul 28 2008, 19:08:11) [GCC 3.2.3 20030502 
(Red Hat Linux 3.2.3-20)]
   Found 2/2 tests for numpy.core.tests.test_ufunc
   Found 143/143 tests for numpy.core.tests.test_regression
   Found 63/63 tests for numpy.core.tests.test_unicode
   Found 7/7 tests for numpy.core.tests.test_scalarmath
   Found 3/3 tests for numpy.core.tests.test_errstate
   Found 16/16 tests for numpy.core.tests.test_umath
   Found 12/12 tests for numpy.core.tests.test_records
   Found 70/70 tests for numpy.core.tests.test_numeric
   Found 18/18 tests for numpy.core.tests.test_defmatrix
   Found 36/36 tests for numpy.core.tests.test_numerictypes
   Found 286/286 tests for numpy.core.tests.test_multiarray
   Found 3/3 tests for numpy.core.tests.test_memmap
   Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
   Found 5/5 tests for numpy.distutils.tests.test_misc_util
   Found 2/2 tests for numpy.fft.tests.test_fftpack
   Found 3/3 tests for numpy.fft.tests.test_helper
   Found 15/15 tests for numpy.lib.tests.test_twodim_base
   Found 1/1 tests for numpy.lib.tests.test_regression
   Found 4/4 tests for numpy.lib.tests.test_polynomial
   Found 43/43 tests for numpy.lib.tests.test_type_check
   Found 1/1 tests for numpy.lib.tests.test_financial
   Found 1/1 tests for numpy.lib.tests.test_machar
   Found 53/53 tests for numpy.lib.tests.test_function_base
   Found 6/6 tests for numpy.lib.tests.test_index_tricks
   Found 15/15 tests for numpy.lib.tests.test_io
   Found 10/10 tests for numpy.lib.tests.test_arraysetops
   Found 1/1 tests for numpy.lib.tests.test_ufunclike
   Found 5/5 tests for numpy.lib.tests.test_getlimits
   Found 24/24 tests for numpy.lib.tests.test__datasource
   Found 49/49 tests for numpy.lib.tests.test_shape_base
   Found 3/3 tests for numpy.linalg.tests.test_regression
   Found 89/89 tests for numpy.linalg.tests.test_linalg
   Found 36/36 tests for numpy.ma.tests.test_old_ma
   Found 94/94 tests for numpy.ma.tests.test_core
   Found 15/15 tests for numpy.ma.tests.test_extras
   Found 17/17 tests for numpy.ma.tests.test_mrecords
   Found 4/4 tests for numpy.ma.tests.test_subclassing
   Found 7/7 tests for numpy.tests.test_random
   Found 16/16 tests for numpy.testing.tests.test_utils
   Found 5/5 tests for numpy.tests.test_ctypeslib
..Floating exception (core dumped)



Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Ivan Vilata i Balaguer
Pierre GM (el 2008-07-29 a les 12:38:19 -0400) va dir::

> > Relative time versus relative time
> > --
> >
> > This case would be the same than the previous one (absolute vs
> > absolute).  Our proposal is to forbid this operation if the time units
> > of the operands are different.  
> 
> Mmh, less sure on this one. Can't we use a hierarchy of time units, and force 
> to the lowest ? 
> For example:
> >>>numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[M]")
> >>>array([15,15,15], dtype="t8['M']")
> 
> I agree that adding ns to years makes no sense, but ns to s ? min to
> hr or days ?  In short: systematically raising an exception looks a
> bit too drastic. There are some simple unambiguous cases that should be
> allowed (Y+M, Y+Q, M+Q, H+D...)

Do you mean using the most precise unit for operations with "near
enough", different units?  I see the point, but what makes me doubt
about it is giving the user the false impression that the most precise
unit is *always* expected.  I'd rather spare the user as many surprises
as possible, by simplifying rules in favour of explicitness (but that
may be debated).

> > Introducing a time casting function
> > ---
> 
> > change_unit(time_object, new_unit, reference)
> >
> > where 'time_object' is the time object whose unit is to be
> > changed, 'new_unit' is the desired new time unit, and 'reference' is an
> > absolute date that will be used to allow the conversion of relative
> > times in case of using time units with an uncertain number of smaller
> > time units (relative years or months cannot be expressed in days).  
> 
> reference default to the POSIX epoch, right ?
> So this function could be a first step towards our problem of frequency 
> conversion...
> 
> > Note: we refused to use the ``.astype()`` method because of the
> > additional 'time_reference' parameter that will sound strange for other
> > typical uses of ``.astype()``.
> 
> A method would be really, really helpful, though...
> [...]

Yay, but what doesn't seem to fit for me is that the method would only
make sense for time values.  NumPy is pretty orthogonal in that every
method and attribute applies to every type.  However, if "units" were to
be adopted by NumPy, the method would fit in well.  In fact, we are
thinking of adding a ``unit`` attribute to dtypes to support time units
(being ``None`` for normal NumPy types).  But full unit support in NumPy
looks so far away that I'm not sure to adopt the method.

Thanks for the insights.  Cheers,

::

  Ivan Vilata i Balaguer   @ Intellectual Monopoly hinders Innovation! @
  http://www.selidor.net/  @ http://www.nosoftwarepatents.com/ @




Re: [Numpy-discussion] Operation over multiple axes? (Or: Partial flattening?)

2008-07-29 Thread Robert Kern
On Tue, Jul 29, 2008 at 09:24, Hans Meine
<[EMAIL PROTECTED]> wrote:
> On Dienstag 29 Juli 2008, Stéfan van der Walt wrote:
>> > One way to achieve this is partial flattening, which I did like this:
>> >
>> >  dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)
>> >
>> > Is there a more elegant way to do this?
>>
>> That looks like a good way to do it.  You can clean it up ever so slightly:
>>
>> x.reshape([-1, x.shape[-1]]).sum(axis=0)
>
> Thanks, that looks more elegant indeed.  I am not sure if I've read about -1
> in shapes before.  I assume it represents "the automatically determined rest"
> and may only appear once?  Should this be documented in the reshape
> docstring?

Yes, yes, and yes.
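
A quick sketch of that convention, for the record:

import numpy as np

# -1 marks the one axis whose length should be inferred from the total
# size; here it folds the first three axes into a single one.
dat = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

a = dat.reshape((np.prod(dat.shape[:3]), dat.shape[3])).sum(0)
b = dat.reshape([-1, dat.shape[-1]]).sum(axis=0)
print(np.array_equal(a, b))   # True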

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


Re: [Numpy-discussion] why isn't libfftw.a being accessed?

2008-07-29 Thread Robert Kern
On Tue, Jul 29, 2008 at 11:22, Mathew Yeates <[EMAIL PROTECTED]> wrote:
> Hi
> In my site.cfg I have
>
> [DEFAULT]
> library_dirs = /home/ossetest/lib64:/home/ossetest/lib
> include_dirs = /home/ossetest/include
>
> [fftw]
> libraries = fftw3
>
> but libfftw3.a isn't being accesed.
> ls -lu ~/lib/libfftw3.a
> -rw-r--r-- 1 ossetest ossetest 1572628 Jul 26 15:02
> /home/ossetest/lib/libfftw3.a
>
> anybody know why?

Can you show us the relevant part of the output from "python setup.py build"?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Ivan Vilata i Balaguer
Tom Denniston (el 2008-07-29 a les 12:21:39 -0500) va dir::

> [...]
> I think it would be ideal if things like the following worked:
> 
> >>> series = numpy.array(['1970-02-01','1970-09-01'], dtype = 'datetime64[D]')
> >>> series == '1970-02-01'
> [True, False]
> 
> I view this as similar to:
> 
> >>> series = numpy.array([1,2,3], dtype=float)
> >>> series == 2
> [False,True,False]
> 
> 1. However, numpy recognizes that an int is comparable with a
> float and does the float cast.  I think you want the same behavior
> between strings that parse into dates and date arrays.  Some might
> object that the relationship between string and date is more tenuous
> than float and int, which is true, but having used my own homespun
> date array numpy extension for over a year, I've found that the first
> thing I did was wrap it into an object that handles these string->date
> translations elegantly and that made it infinitely more usable from an
> ipython session.

That may be feasible as long as there is a very clear rule for what time
units you get given a string.  For instance, '1970' could yield years
and '1970-03-12T12:00' minutes, but then we don't have a way of creating
a time in business days...  However, it looks interesting.  Any more
people interested in this behaviour?

> 2. Even more important to me, however, is the issue of date parsing.
> The mx library does many things badly but it does do a great job of
> parsing dates of many formats.  When you parse '1/1/95' or '1995-01-01'
> it knows that you mean 19950101 which is really nice.  I believe the
> scipy timeseries code for parsing dates is based on it.  I would
> highly suggest starting with that level of functionality.  The one
> major issue with it is an uninterpretable date doesn't throw an error
> but becomes whatever date is right now.  That is obviously
> unfavorable.

Umm, that may get quite complex.  E.g. does '1/2/95' refer to February
1st or January 2nd?  There are sooo many date formats and
standards that maybe using an external parser code (like mx, TimeSeries
or even datetime/strptime) for them would be preferable.  I think the
ISO 8601 is enough for a basic, well defined time string support.  At
least to start with.

> 3. Finally my current implementation uses floats, with nan representing
> an invalid date.  When you assign an element of a date array to None
> it uses nan as the value.  When you assign a real date it puts in the
> equivalent floating point value.  I have found this to be hugely
> beneficial and just wanted to float the idea of reserving a value to
> indicate the floating point equivalent of nan.  People might prefer
> masked arrays as a solution, but I just wanted to float the idea.
> [...]

Good news!  Our next proposal includes a "Not a Time" value which came
around due to the impossibility of converting some times into business
days.  Stay tuned.

However I should point out that the NaT value isn't as powerful as the
floating-point NaN, since the former has no meaning at all to the
hardware, and patching that in all cases would make computations
quite slower.  Using floating point values doesn't look like an option
anymore, since they don't have a fixed precision given a time unit.

Cheers,

::

  Ivan Vilata i Balaguer   @ Intellectual Monopoly hinders Innovation! @
  http://www.selidor.net/  @ http://www.nosoftwarepatents.com/ @




Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Francesc Alted
A Tuesday 29 July 2008, Tom Denniston escrigué:
> Francesc,
>
> The datetime proposal is very impressive in its depth and thought.
> For me as well as many other people this would be a massive
> improvement to numpy and allow numpy to get a foothold in areas like
> econometrics where R/S is now dominant.
>
> I had one question regarding casting of strings:
>
> I think it would be ideal if things like the following worked:
> >>> series = numpy.array(['1970-02-01','1970-09-01'], dtype =
> >>> 'datetime64[D]') series == '1970-02-01'
>
> [True, False]
>
> I view this as similar to:
> >>> series = numpy.array([1,2,3], dtype=float)
> >>> series == 2
>
> [False,True,False]

Good point.  Well, I agree that adding the support for setting elements 
from strings, i.e.:

>>> t = numpy.ones(3, 'T8[D]')
>>> t[0] = '2001-01-01'

should be supported.  With this, and applying the broadcasting rules, 
the next:

>>> t == '2001-01-01'
[True, False, False]

should work without problems.  We will try to add this explicitly into 
the new proposal.

> 1. However, numpy recognizes that an int is comparable with a
> float and does the float cast.  I think you want the same behavior
> between strings that parse into dates and date arrays.  Some might
> object that the relationship between string and date is more tenuous
> than float and int, which is true, but having used my own homespun
> date array numpy extension for over a year, I've found that the first
> thing I did was wrap it into an object that handles these
> string->date translations elegantly and that made it infinitely more
> usable from an ipython session.

Well, you should not worry because of this.  Hopefully, in the

>>> t == '2001-01-01'

comparison, the scalar part of the expression can be cast into a date 
array, and then the proper comparison will be performed.  If this 
cannot be done for some reason that escapes me, one will always be able 
to do:

>>> t == N.datetime64('2001-01-01', 'Y')
[True, False, False]

which is a bit more verbose, but much more clear too.
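
A sketch of the intended semantics, written with the datetime64 spelling
from the proposal (assuming the dtype behaves as described; it does not
exist in NumPy 1.1):

import numpy as np

# A string scalar casts to the array's date dtype, so broadcasting
# turns the comparison into an elementwise test.
t = np.array(['2001-01-01', '2002-01-01', '2003-01-01'],
             dtype='datetime64[D]')
print(t == np.datetime64('2001-01-01'))   # [ True False False]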

> 2. Even more important to me, however, is the issue of date parsing.
> The mx library does many things badly but it does do a great job of
> parsing dates of many formats.  When you parse '1/1/95' or
> '1995-01-01' it knows that you mean 19950101 which is really nice.  I
> believe the scipy timeseries code for parsing dates is based on it. 
> I would highly suggest starting with that level of functionality. 
> The one major issue with it is an uninterpretable date doesn't throw
> an error but becomes whatever date is right now.  That is obviously
> unfavorable.

Hmmm.  We would not like to clutter the NumPy core with too much 
date string parsing code.  As it is said in the proposal, we only 
plan to support the parsing for the ISO 8601.  That should be enough 
for most of purposes.  However, I'm sure that parsing for other formats 
will be available in the ``Date`` class of the TimeSeries package.

> 3. Finally my current implementation uses floats, with nan
> representing an invalid date.  When you assign an element of a date
> array to None it uses nan as the value.  When you assign a real date
> it puts in the equivalent floating point value.  I have found this to
> be hugely beneficial and just wanted to float the idea of reserving a
> value to indicate the floating point equivalent of nan.  People might
> prefer masked arrays as a solution, but I just wanted to float the
> idea.

Hmm, that's another very valid point.  In fact, Ivan and I had already 
foreseen the existence of a NaT (Not A Time), as the maximum negative 
integer (-2**63).  However, as the underlying type of the proposed time 
type is an int64, the arithmetic operations with the time types will be 
done through integer arithmetic, and unfortunately, the majority of 
platforms out there perform this kind of arithmetic as two's-complement 
arithmetic. That means that there is no provision for handling NaT's 
in hardware:

In [58]: numpy.int64(-2**63)
Out[58]: -9223372036854775808  # this is a NaT

In [59]: numpy.int64(-2**63)+1
Out[59]: -9223372036854775807  # no longer a NaT

In [60]: numpy.int64(-2**63)-1
Out[60]: 9223372036854775807   # idem, and besides, positive!

So, well, due to this limitation, I'm afraid that we will have to live 
without a proper handling of NaT times.  Perhaps this would be the 
biggest limitation of choosing int64 as the base type of the date/time 
dtype (float64 is better in that regard, but has also its 
disadvantages, like the variable precision which is intrinsic to it).
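
A small sketch of that limitation, showing why NaT handling would have to
live in software (explicit masking) rather than hardware:

import numpy as np

NAT = np.int64(-2**63)                  # proposed "Not a Time" sentinel
t = np.array([0, 86400, NAT], dtype=np.int64)

# Plain two's-complement arithmetic silently turns the sentinel into a
# valid time:
print(t + 1)

# So every operation would need to mask NaT by hand:
mask = t == NAT
print(np.where(mask, NAT, t + 3600))    # valid times shift, NaT survives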

> Forgive me if any of this has already been covered.  There has been a
> lot of volume on this subject and I've tried to read it all
> diligently but may have missed a point or two.

Not at all.  You've touched important issues.  Thanks!

-- 
Francesc Alted

Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Ivan Vilata i Balaguer
David Huard (el 2008-07-29 a les 12:31:54 -0400) va dir::

> Silent casting is often a source of bugs and I appreciate the strict
> rules you want to enforce.  However, I think there should be a simpler
> mechanism for operations between different types than creating a copy
> of a variable with the correct type.
> 
> My suggestion is to have a dtype argument for methods such as add and subs:
> 
> >>> numpy.ones(3, dtype="t8[Y]").add(numpy.zeros(3, dtype="t8[fs]"),
> dtype="t8[fs]")
> 
> This way, `implicit` operations (+,-) enforce strict rules, and
> `explicit` operations (add, subs) let's you do want you want at your
> own risk.

Umm, that looks like a big change (or addition) to the NumPy interface.
I think similar "include a dtype argument for method X" issues have been
discussed before in the list.  However, given the big change of adding
the new explicit operation methods I think your proposal falls beyond
the scope of the project being discussed.

However, since yours isn't necessarily a time-related proposal, you may
ask what people think of it in a separate thread.

::

  Ivan Vilata i Balaguer   @ Intellectual Monopoly hinders Innovation! @
  http://www.selidor.net/  @ http://www.nosoftwarepatents.com/ @




Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Pierre GM
On Tuesday 29 July 2008 14:08:28 Francesc Alted wrote:
> A Tuesday 29 July 2008, David Huard escrigué:

> Hmm, the idea of the ``.add()`` and ``.subtract()`` methods is tempting,
> but I'm not sure it is a good idea to add new methods to the ndarray
> object that are meant to be used with just the date/time dtype.
>
> I'm afraid that I'm -1 here.

I fully agree with Francesc, .add and .subtract will be quite confusing.
About in-place conversions: the right end (other) is cast to the type of the 
left end (self) by default, following the basic rule of casting when there's 
no ambiguity and raising an exception otherwise?



Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Francesc Alted
A Tuesday 29 July 2008, David Huard escrigué:
> Hi,
>
> Silent casting is often a source of bugs and I appreciate the strict
> rules you want to enforce.
> However, I think there should be a simpler mechanism for operations
> between different types
> than creating a copy of a variable with the correct type.
>
> My suggestion is to have a dtype argument for methods such as add and
> subs:
> >>> numpy.ones(3, dtype="t8[Y]").add(numpy.zeros(3, dtype="t8[fs]"),
> ... dtype="t8[fs]")
>
> This way, `implicit` operations (+,-) enforce strict rules, and
> `explicit` operations (add, subs) let
> you do what you want at your own risk.

Hmm, the idea of the ``.add()`` and ``.subtract()`` methods is tempting, 
but I'm not sure it is a good idea to add new methods to the ndarray 
object that are meant to be used with just the date/time dtype.

I'm afraid that I'm -1 here.

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Tom Denniston
Francesc,

The datetime proposal is very impressive in its depth and thought.
For me as well as many other people this would be a massive
improvement to numpy and allow numpy to get a foothold in areas like
econometrics where R/S is now dominant.

I had one question regarding casting of strings:

I think it would be ideal if things like the following worked:

>>> series = numpy.array(['1970-02-01','1970-09-01'], dtype = 'datetime64[D]')
>>> series == '1970-02-01'
[True, False]

I view this as similar to:

>>> series = numpy.array([1,2,3], dtype=float)
>>> series == 2
[False,True,False]

1. However, numpy recognizes that an int is comparable with a
float and does the float cast.  I think you want the same behavior
between strings that parse into dates and date arrays.  Some might
object that the relationship between string and date is more tenuous
than float and int, which is true, but having used my own homespun
date array numpy extension for over a year, I've found that the first
thing I did was wrap it into an object that handles these string->date
translations elegantly and that made it infinately more usable from an
ipython session.

2. Even more important to me, however, is the issue of date parsing.
The mx library does many things badly but it does do a great job of
parsing dates of many formats.  When you parse '1/1/95' or '1995-01-01'
it knows that you mean 19950101, which is really nice.  I believe the
scipy timeseries code for parsing dates is based on it.  I would
highly suggest starting with that level of functionality.  The one
major issue with it is that an uninterpretable date doesn't throw an
error but becomes whatever date is right now.  That is obviously
undesirable.
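For illustration, dateutil's parser (assuming python-dateutil is available)
is similarly permissive, but raises on garbage instead of returning the
current date:

from dateutil import parser

print(parser.parse('1/1/95'))       # 1995-01-01 00:00:00
print(parser.parse('1995-01-01'))   # 1995-01-01 00:00:00
try:
    parser.parse('not a date')
except ValueError:
    print('rejected')               # an unparseable string raises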

3. Finally, my current implementation uses floats and uses nan to represent
an invalid date.  When you assign an element of a date array to None
it uses nan as the value.  When you assign a real date it puts in the
equivalent floating point value.  I have found this to be hugely
beneficial, and just wanted to float the idea of reserving a value to
serve as the date equivalent of nan.  People might prefer
masked arrays as a solution, but I just wanted to float the idea.
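A minimal sketch of the float-plus-nan scheme (plain days-since-epoch floats
here, purely illustrative):

import numpy

# days since the epoch stored as float64; nan marks a missing/invalid date
dates = numpy.array([31.0, numpy.nan, 243.0])
print(numpy.isnan(dates))   # [False  True False] -- the "invalid" mask
dates[1] = 59.0             # assigning a real value fills the slot in place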

Forgive me if any of this has already been covered.  There has been a
lot of volume on this subject and I've tried to read it all diligently
but may have missed a point or two.

--Tom
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 8:56 AM, Felix Richter <[EMAIL PROTECTED]> wrote:

> > I quickly copy-pasted and ran your code; it looks to me like the results
> > you calculated analytically oscillate too fast to be represented
> > discretely.  Did you try to transform different, simpler signals?  (e.g.
> a
> > Gaussian?)
> Yes, I ran into the same problem.
>
> Since the oscillation frequency is given by the point around which the
> function is centered, it would be good to have it centered around zero.
> The FFT assumes the x axis to be [0..n], so how should I do this?
> The functions I have to transform later won't be symmetrical, so the trick
> abs(fftdata) is not possible.
>

You can apply a linear phase shift to the transformed data, i.e., multiply
by something of the form exp(ixn), where x depends on where you want the
center and n is the index of the transformed data point. This effectively
rotates the original data. Or you can just rotate the data. If the data is
not symmetric you are always going to have complex components. What exactly
are you trying to do? I mean, what is the original problem that you are
trying to solve by this method?
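For example, here is a quick numerical check of that relationship (an
illustrative sketch using numpy.fft; the rotation amount m is arbitrary):

import numpy

n = 8
x = numpy.random.rand(n)
m = 3                                  # rotate the data by m samples
k = numpy.arange(n)
lhs = numpy.fft.fft(numpy.roll(x, m))  # transform of the rotated data
rhs = numpy.fft.fft(x) * numpy.exp(-2j * numpy.pi * k * m / n)
print(numpy.allclose(lhs, rhs))        # True: rotation == linear phase shift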

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Pierre GM
Francesc,

> Absolute time versus relative time
> --
>
> We think that in this case the absolute time should have priority for
> determining the time unit of the outcome.  

+1

> Absolute time versus absolute time
> --
>
> When operating (basically, only the substraction will be allowed) two
> absolute times with different unit times, we are proposing that the
> outcome would be to raise an exception. 

+1
(However, I don't think that np.zeros(3, dtype="T8[Y]") is the most useful 
example ;))

> Relative time versus relative time
> --
>
> This case would be the same than the previous one (absolute vs
> absolute).  Our proposal is to forbid this operation if the time units
> of the operands are different.  

Mmh, less sure on this one. Can't we use a hierarchy of time units, and force
to the lowest?
For example:
>>> numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[M]")
array([15, 15, 15], dtype="t8[M]")

I agree that adding ns to years makes no sense, but ns to s? min to hr or
days?
In short: systematically raising an exception looks a bit too drastic. There
are some simple unambiguous cases that should be allowed (Y+M, Y+Q, M+Q,
H+D...)
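A sketch of that coercion idea, restricted to the exact conversions (the
table and function below are illustrative, not a proposed API):

# exact conversion factors between a coarser and a finer unit
EXACT = {('Y', 'M'): 12, ('D', 'h'): 24, ('h', 'min'): 60, ('min', 's'): 60}

def add_relative(n1, u1, n2, u2):
    # add two relative times, coercing to the finer unit when the
    # conversion is exact; otherwise refuse
    if u1 == u2:
        return n1 + n2, u1
    if (u1, u2) in EXACT:
        return n1 * EXACT[(u1, u2)] + n2, u2
    if (u2, u1) in EXACT:
        return n2 * EXACT[(u2, u1)] + n1, u1
    raise TypeError('ambiguous combination, e.g. Y + D')

print(add_relative(1, 'Y', 3, 'M'))   # (15, 'M'), as in the example above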


> Introducing a time casting function
> ---

> change_unit(time_object, new_unit, reference)
>
> where 'time_object' is the time object whose unit is to be
> changed, 'new_unit' is the desired new time unit, and 'reference' is an
> absolute date that will be used to allow the conversion of relative
> times in case of using time units with an uncertain number of smaller
> time units (relative years or months cannot be expressed in days).  

reference defaults to the POSIX epoch, right?
So this function could be a first step towards our problem of frequency 
conversion...

> Note: we refused to use the ``.astype()`` method because of the
> additional 'time_reference' parameter that will sound strange for other
> typical uses of ``.astype()``.

A method would be really, really helpful, though...


Back to a previous email:
> >>> numpy.timedelta(20, unit='Y') + numpy.timedelta(365, unit='D')
> 20  # unit is Year

I would have expected days, or an exception (as there's an ambiguity in the 
length in days of a year)

> >>> numpy.timedelta(20, unit='Y') + numpy.timedelta(366, unit='D')
> 21  # unit is Year

> >>> numpy.timedelta(43, unit='M') + numpy.timedelta(30, unit='D')
> 43  # unit is Month
>
> >>> numpy.timedelta(43, unit='M') + numpy.timedelta(31, unit='D')
> 44  # unit is Month

> Would that be ok for you?

Gah, I dunno. Adding relative values is always tricky... I understand the last 
statement as 43 months and 31 days, which could be 44 months if we're 
speaking in months, or 3 years, 7 months, and 31 days...




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread David Huard
Hi,

Silent casting is often a source of bugs and I appreciate the strict rules
you want to enforce.
However, I think there should be a simpler mechanism for operations between
different types
than creating a copy of a variable with the correct type.

My suggestion is to have a dtype argument for methods such as add and subs:

>>> numpy.ones(3, dtype="t8[Y]").add(numpy.zeros(3, dtype="t8[fs]"),
dtype="t8[fs]")

This way, `implicit` operations (+,-) enforce strict rules, and `explicit`
operations (add, subs) lets
you do what you want at your own risk.

David


On Tue, Jul 29, 2008 at 9:12 AM, Francesc Alted <[EMAIL PROTECTED]> wrote:

> Hi,
>
> During the making of the date/time proposals and the subsequent
> discussions on this list, we have changed our point of view a couple of
> times about how the casting would work between different
> date/time types and the different time units (previously called
> resolutions).  So I'd like to expose this issue in detail here, and
> give yet another new proposal about this, so as to gather feedback from
> the community before consolidating it in the final date/time proposal.
>
> Casting proposal for date/time types
> 
>
> The operations among the proposed date/time types can be divided in
> three groups:
>
> * Absolute time versus relative time
>
> * Absolute time versus absolute time
>
> * Relative time versus relative time
>
> Now, here are our considerations for each case:
>
> Absolute time versus relative time
> --
>
> We think that in this case the absolute time should have priority for
> determining the time unit of the outcome.  That would represent what
> the people wants to do most of the times.  For example, this would
> allow to do:
>
> >>> series = numpy.array(['1970-01-01', '1970-02-01', '1970-09-01'],
> dtype='datetime64[D]')
> >>> series2 = series + numpy.timedelta(1, 'Y')  # Add 2 relative years
> >>> series2
> array(['1972-01-01', '1972-02-01', '1972-09-01'],
> dtype='datetime64[D]')  # the 'D'ay time unit has been chosen
>
> Absolute time versus absolute time
> --
>
> When operating (basically, only subtraction will be allowed) two
> absolute times with different time units, we are proposing that the
> outcome would be to raise an exception.  This is because the ranges and
> timespans of the different time units can be very different, and it is
> not clear at all which time unit will be preferred by the user.  For
> example, this should be allowed:
>
> >>> numpy.ones(3, dtype="T8[Y]") - numpy.zeros(3, dtype="T8[Y]")
> array([1, 1, 1], dtype="timedelta64[Y]")
>
> But the next should not:
>
> >>> numpy.ones(3, dtype="T8[Y]") - numpy.zeros(3, dtype="T8[ns]")
> raise numpy.IncompatibleUnitError  # what unit to choose?
>
> Relative time versus relative time
> --
>
> This case would be the same as the previous one (absolute vs
> absolute).  Our proposal is to forbid this operation if the time units
> of the operands are different.  For example, this should be allowed:
>
> >>> numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[Y]")
> array([4, 4, 4], dtype="timedelta64[Y]")
>
> But the next should not:
>
> >>> numpy.ones(3, dtype="t8[Y]") + numpy.zeros(3, dtype="t8[fs]")
> raise numpy.IncompatibleUnitError  # what unit to choose?
>
> Introducing a time casting function
> ---
>
> As forbidding operations among absolute/absolute and relative/relative
> types can be unacceptable in many situations, we are proposing an
> explicit casting mechanism so that the user can specify the
> desired time unit of the outcome.  For this, a new NumPy function,
> called, say, ``numpy.change_unit()`` (this name is for the purposes of
> the discussion and can be changed) will be provided.  The signature for
> the function will be:
>
> change_unit(time_object, new_unit, reference)
>
> where 'time_object' is the time object whose unit is to be
> changed, 'new_unit' is the desired new time unit, and 'reference' is an
> absolute date that will be used to allow the conversion of relative
> times in case of using time units with an uncertain number of smaller
> time units (relative years or months cannot be expressed in days).  For
> example, that would allow to do:
>
> >>> numpy.change_unit( numpy.array([1,2], 'T[Y]'), 'T[d]' )
> array([365, 731], dtype="datetime64[d]")
>
> or:
>
> >>> ref = numpy.datetime64('1971', 'T[Y]')
> >>> numpy.change_unit( numpy.array([1,2], 't[Y]'), 't[d]',  ref )
> array([366, 365], dtype="timedelta64[d]")
>
> Note: we refused to use the ``.astype()`` method because of the
> additional 'time_reference' parameter that will sound strange for other
> typical uses of ``.astype()``.
>
> Opinions?
>
> --
> Francesc Alted
> ___
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

[Numpy-discussion] why isn't libfftw.a being accessed?

2008-07-29 Thread Mathew Yeates
Hi
In my site.cfg I have

[DEFAULT]
library_dirs = /home/ossetest/lib64:/home/ossetest/lib
include_dirs = /home/ossetest/include

[fftw]
libraries = fftw3

but libfftw3.a isn't being accessed.
ls -lu ~/lib/libfftw3.a
-rw-r--r-- 1 ossetest ossetest 1572628 Jul 26 15:02 
/home/ossetest/lib/libfftw3.a

anybody know why?

Mathew




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Francesc Alted
Oops, after reviewing this document, I've discovered a couple of typos.

On Tuesday 29 July 2008, Francesc Alted wrote:
[snip]
> >>> series = numpy.array(['1970-01-01', '1970-02-01', '1970-09-01'], 
> dtype='datetime64[D]')
> >>> series2 = series + numpy.timedelta(1, 'Y')  # Add 2 years
^^^
the above line should read:

>>> series2 = series + numpy.timedelta(2, 'Y')  # Add 2 years

> >>> series2
>
> array(['1972-01-01', '1972-02-01', '1972-09-01'],
> dtype='datetime64[D]')  # the 'D'ay time unit has been chosen

[snip]

> >>> numpy.change_unit( numpy.array([1,2], 'T[Y]'), 'T[d]' )
>
> array([365, 731], dtype="datetime64[d]")
>
> or:
> >>> ref = numpy.datetime64('1971', 'T[Y]')
> >>> numpy.change_unit( numpy.array([1,2], 't[Y]'), 't[d]',  ref )
>
> array([366, 365], dtype="timedelta64[d]")
  ^^^
the above line should read:

array([366, 731], dtype="timedelta64[d]")


-- 
Francesc Alted
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] No Copy Reduce Operations

2008-07-29 Thread Luis Pedro Coelho
Travis E. Oliphant wrote:
> Your approach using C++ templates is interesting, and I'm very glad for 
> your explanation and your releasing of the code as open source.  I'm 
> not prepared to start using C++ in NumPy, however, so your code will 
> have to serve as an example only. 

I will keep this as a separate package. I will write back once I have put
it up somewhere.

If this is not going into numpy, then I will do things a little
differently, namely, I will have a very simple python layer which cleans
up the arguments before calling the C++ implementation.

bye,
Luis
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Felix Richter
> I quickly copy-pasted and ran your code; it looks to me like the results
> you calculated analytically oscillate too fast to be represented
> discretely.  Did you try to transform different, simpler signals?  (e.g. a
> Gaussian?)
Yes, I ran into the same problem.

Since the oscillation frequency is given by the point around which the 
function is centered, it would be good to have it centered around zero.
The FFT assumes the x axis to be [0..n], so how should I do this?
The functions I have to transform later won't be symmetrical, so the trick 
abs(fftdata) is not possible.

Felix
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 8:38 AM, Pauli Virtanen <[EMAIL PROTECTED]> wrote:

> Tue, 29 Jul 2008 08:13:23 -0600, Charles R Harris wrote:
> [clip]
> > We also need speed. I think we just say behaviour on the branch cuts is
> > undefined, which is numerically true in any case, and try to get the
> > nan's and infs sorted out. But only if the costs are reasonable.
>
> Well, the branch cut tests have succeeded on all platforms so far, which
> means the behavior is numerically well-defined. I doubt we can lose any
> speed by fixing sqrt in this respect.
>

Because of the discontinuity, roundoff error makes the behaviour undefined.
It's just a fact of computational life. I suspect we mean slightly different
things by "numerically" ;)

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread Pauli Virtanen
Tue, 29 Jul 2008 08:13:23 -0600, Charles R Harris wrote:
[clip]
> We also need speed. I think we just say behaviour on the branch cuts is
> undefined, which is numerically true in any case, and try to get the
> nan's and infs sorted out. But only if the costs are reasonable.

Well, the branch cut tests have succeeded on all platforms so far, which 
means the behavior is numerically well-defined. I doubt we can lose any 
speed by fixing sqrt in this respect.

The inf-nan business is the one causing problems. Lookup tables might 
solve the problem, but they add a few branches to the code even if the 
arguments are finite.

-- 
Pauli Virtanen

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Operation over multiple axes? (Or: Partial flattening?)

2008-07-29 Thread Hans Meine
On Tuesday 29 July 2008, Stéfan van der Walt wrote:
> > One way to achieve this is partial flattening, which I did like this:
> >
> >  dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)
> >
> > Is there a more elegant way to do this?
>
> That looks like a good way to do it.  You can clean it up ever so slightly:
>
> x.reshape([-1, x.shape[-1]]).sum(axis=0)

Thanks, that looks more elegant indeed.  I am not sure if I've read about -1 
in shapes before.  I assume it represents "the automatically determined rest" 
and may only appear once?  Should this be documented in the reshape 
docstring?
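A quick check seems to confirm both assumptions:

import numpy

x = numpy.zeros((2, 3, 4, 5))
print(x.reshape(-1, x.shape[-1]).shape)   # (24, 5): the 2*3*4 is inferred
# a second -1 raises ValueError: only one dimension may be left unknown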

Ciao, /  /.o.
 /--/ ..o
/  / ANS  ooo


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Hans Meine
Hi Felix,

I quickly copy-pasted and ran your code; it looks to me like the results you 
calculated analytically oscillate too fast to be represented discretely.  Did 
you try to transform different, simpler signals?  (e.g. a Gaussian?)

Ciao, /  /.o.
 /--/ ..o
/  / ANS  ooo


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] svn numpy selftests fail on Solaris

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 7:36 AM, Christopher Hanley <[EMAIL PROTECTED]> wrote:

> This has apparently been occurring for a few days.  My apologies, but I
> have been away on vacation.
>
> FAILED (failures=5)
> Running unit tests for numpy
> NumPy version 1.2.0.dev5565
> NumPy is installed in /usr/ra/pyssg/2.5.1/numpy
> Python version 2.5.1 (r251:54863, Jun  4 2008, 15:48:19) [C]
> nose version 0.10.0
> ctypes is not available on this python: skipping the test (import error
> was: ctypes is not available.)
> No distutils available, skipping test.
> errors:
> failures:
>(Test(test_umath.TestC99.test_cacos(, (1.0, NaN),
> (NaN,
> NaN), 'invalid-optional')), 'Traceback (most recent call last):\n  File
> "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n
> self.test(*self.arg)\n  File
> "/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in
> _check\nassert got == expected, (got, expected)\nAssertionError:
> (\'(-NaN, -NaN)\', \'(NaN, NaN)\')\n')
>(Test(test_umath.TestC99.test_cacos(, (NaN, 1.0),
> (NaN,
> NaN), 'invalid-optional')), 'Traceback (most recent call last):\n  File
> "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n
> self.test(*self.arg)\n  File
> "/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in
> _check\nassert got == expected, (got, expected)\nAssertionError:
> (\'(-NaN, -NaN)\', \'(NaN, NaN)\')\n')
>(Test(test_umath.TestC99.test_cacosh(, (1.0, NaN),
> (NaN, NaN), 'invalid-optional')), 'Traceback (most recent call last):\n
>  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n
>self.test(*self.arg)\n  File
> "/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in
> _check\nassert got == expected, (got, expected)\nAssertionError:
> (\'(NaN, -NaN)\', \'(NaN, NaN)\')\n')
>(Test(test_umath.TestC99.test_casinh(, (NaN, 1.0),
> (NaN, NaN), 'invalid-optional')), 'Traceback (most recent call last):\n
>  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n
>self.test(*self.arg)\n  File
> "/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in
> _check\nassert got == expected, (got, expected)\nAssertionError:
> (\'(NaN, -NaN)\', \'(NaN, NaN)\')\n')
>(Test(test_umath.TestC99.test_clog(, (-0.0, -0.0),
> (-Infinity, 3.1415926535897931), 'divide')), 'Traceback (most recent
> call last):\n  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202,
> in runTest\nself.test(*self.arg)\n  File
> "/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in
> _check\nassert got == expected, (got, expected)\nAssertionError:
> (\'(-Infinity, 0.0)\', \'(-Infinity, 3.1415926535897931)\')\n')
>

See the thread on recent work on branch cuts.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread Charles R Harris
On Tue, Jul 29, 2008 at 5:59 AM, Pauli Virtanen <[EMAIL PROTECTED]> wrote:

> Tue, 29 Jul 2008 20:16:36 +0900, David Cournapeau wrote:
> [clip]
> > Is there a clear explanation about C99 features related to complex math
> > somewhere ? The problem with C99 is that few compilers implement it
>
> The C99 standard (or more precisely its draft but this is probably mostly
> the same thing) can be found here:
>
>http://www.open-std.org/jtc1/sc22/wg14/www/standards
>
> > properly. None of the most used compilers implement it entirely, and
> > some of them don't even try, like MS compilers; the windows situation
> > is the most problematic because the mingw32 compilers are old, and thus
> > may not handle that many C99 features. There are also some shortcuts in
> > the way we detect the math functions, which are not 100% reliable
> > (because of some mingw problems in particular; I have already mentioned
> > this problem several times, and I really ought to solve it at some point
> > instead of just talking about it).
> >
> > So what matters IMHO is the practical implications with the compilers/C
> > runtime we use for numpy/scipy (gcc, visual studio and intel compilers
> > should cover most of developers/users).
>
> We implement the complex functions completely ourselves and use only the
> real C math functions. As we compose the complex operations from real
> ones, corner cases can and apparently do go wrong even if the underlying
> compiler is C99 compliant.
>
> The new Python cmath module actually implements the corner cases using a
> lookup table. I wonder if we should follow...


We also need speed. I think we just say behaviour on the branch cuts is
undefined, which is numerically true in any case, and try to get the nan's
and infs sorted out. But only if the costs are reasonable.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] svn numpy selftests fail on Solaris

2008-07-29 Thread Christopher Hanley
This has apparently been occurring for a few days.  My apologies, but I 
have been away on vacation.

FAILED (failures=5)
Running unit tests for numpy
NumPy version 1.2.0.dev5565
NumPy is installed in /usr/ra/pyssg/2.5.1/numpy
Python version 2.5.1 (r251:54863, Jun  4 2008, 15:48:19) [C]
nose version 0.10.0
ctypes is not available on this python: skipping the test (import error 
was: ctypes is not available.)
No distutils available, skipping test.
errors:
failures:
(Test(test_umath.TestC99.test_cacos(, (1.0, NaN), (NaN, 
NaN), 'invalid-optional')), 'Traceback (most recent call last):\n  File 
"/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n 
self.test(*self.arg)\n  File 
"/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in 
_check\nassert got == expected, (got, expected)\nAssertionError: 
(\'(-NaN, -NaN)\', \'(NaN, NaN)\')\n')
(Test(test_umath.TestC99.test_cacos(, (NaN, 1.0), (NaN, 
NaN), 'invalid-optional')), 'Traceback (most recent call last):\n  File 
"/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n 
self.test(*self.arg)\n  File 
"/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in 
_check\nassert got == expected, (got, expected)\nAssertionError: 
(\'(-NaN, -NaN)\', \'(NaN, NaN)\')\n')
(Test(test_umath.TestC99.test_cacosh(, (1.0, NaN), 
(NaN, NaN), 'invalid-optional')), 'Traceback (most recent call last):\n 
  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n 
self.test(*self.arg)\n  File 
"/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in 
_check\nassert got == expected, (got, expected)\nAssertionError: 
(\'(NaN, -NaN)\', \'(NaN, NaN)\')\n')
(Test(test_umath.TestC99.test_casinh(, (NaN, 1.0), 
(NaN, NaN), 'invalid-optional')), 'Traceback (most recent call last):\n 
  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, in runTest\n 
self.test(*self.arg)\n  File 
"/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in 
_check\nassert got == expected, (got, expected)\nAssertionError: 
(\'(NaN, -NaN)\', \'(NaN, NaN)\')\n')
(Test(test_umath.TestC99.test_clog(, (-0.0, -0.0), 
(-Infinity, 3.1415926535897931), 'divide')), 'Traceback (most recent 
call last):\n  File "/usr/stsci/pyssgdev/2.5.1/nose/case.py", line 202, 
in runTest\nself.test(*self.arg)\n  File 
"/usr/ra/pyssg/2.5.1/numpy/core/tests/test_umath.py", line 393, in 
_check\nassert got == expected, (got, expected)\nAssertionError: 
(\'(-Infinity, 0.0)\', \'(-Infinity, 3.1415926535897931)\')\n')

-- 
Christopher Hanley
Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Operation over multiple axes? (Or: Partial flattening?)

2008-07-29 Thread Stéfan van der Walt
2008/7/29 Hans Meine <[EMAIL PROTECTED]>:
> with a multidimensional array (say, 4-dimensional), I often want to project
> this onto one single dimension, i.e., let "dat" be a 4D array, I am
> interested in
>
>  dat.sum(0).sum(0).sum(0) # equals dat.sum(2).sum(1).sum(0)
>
> However, creating intermediate results looks more expensive than necessary; I
> would actually like to say
>
>  dat.sum((0,1,2))
>
> One way to achieve this is partial flattening, which I did like this:
>
>  dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)
>
> Is there a more elegant way to do this?

That looks like a good way to do it.  You can clean it up ever so slightly:

x.reshape([-1, x.shape[-1]]).sum(axis=0)

Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] The date/time dtype and the casting issue

2008-07-29 Thread Francesc Alted
Hi,

During the making of the date/time proposals and the subsequent 
discussions on this list, we have changed our point of view a couple of 
times about how the casting would work between different 
date/time types and the different time units (previously called 
resolutions).  So I'd like to expose this issue in detail here, and 
give yet another new proposal about this, so as to gather feedback from 
the community before consolidating it in the final date/time proposal.

Casting proposal for date/time types


The operations among the proposed date/time types can be divided in 
three groups:

* Absolute time versus relative time

* Absolute time versus absolute time

* Relative time versus relative time

Now, here are our considerations for each case:

Absolute time versus relative time
--

We think that in this case the absolute time should have priority for 
determining the time unit of the outcome.  That would represent what 
people want to do most of the time.  For example, this would 
allow to do:

>>> series = numpy.array(['1970-01-01', '1970-02-01', '1970-09-01'], 
dtype='datetime64[D]')
>>> series2 = series + numpy.timedelta(1, 'Y')  # Add 2 relative years
>>> series2
array(['1972-01-01', '1972-02-01', '1972-09-01'],
dtype='datetime64[D]')  # the 'D'ay time unit has been chosen

Absolute time versus absolute time
--

When operating (basically, only subtraction will be allowed) two 
absolute times with different time units, we are proposing that the 
outcome would be to raise an exception.  This is because the ranges and 
timespans of the different time units can be very different, and it is 
not clear at all which time unit will be preferred by the user.  For 
example, this should be allowed:

>>> numpy.ones(3, dtype="T8[Y]") - numpy.zeros(3, dtype="T8[Y]")
array([1, 1, 1], dtype="timedelta64[Y]")

But the next should not:

>>> numpy.ones(3, dtype="T8[Y]") - numpy.zeros(3, dtype="T8[ns]")
raise numpy.IncompatibleUnitError  # what unit to choose?

Relative time versus relative time
--

This case would be the same as the previous one (absolute vs 
absolute).  Our proposal is to forbid this operation if the time units 
of the operands are different.  For example, this should be allowed:

>>> numpy.ones(3, dtype="t8[Y]") + 3*numpy.ones(3, dtype="t8[Y]")
array([4, 4, 4], dtype="timedelta64[Y]")

But the next should not:

>>> numpy.ones(3, dtype="t8[Y]") + numpy.zeros(3, dtype="t8[fs]")
raise numpy.IncompatibleUnitError  # what unit to choose?

Introducing a time casting function
---

As forbidding operations among absolute/absolute and relative/relative 
types can be unacceptable in many situations, we are proposing an 
explicit casting mechanism so that the user can specify the 
desired time unit of the outcome.  For this, a new NumPy function, 
called, say, ``numpy.change_unit()`` (this name is for the purposes of 
the discussion and can be changed) will be provided.  The signature for 
the function will be:

change_unit(time_object, new_unit, reference)

where 'time_object' is the time object whose unit is to be 
changed, 'new_unit' is the desired new time unit, and 'reference' is an 
absolute date that will be used to allow the conversion of relative 
times in case of using time units with an uncertain number of smaller 
time units (relative years or months cannot be expressed in days).  For 
example, that would allow to do:

>>> numpy.change_unit( numpy.array([1,2], 'T[Y]'), 'T[d]' )
array([365, 731], dtype="datetime64[d]")

or:

>>> ref = numpy.datetime64('1971', 'T[Y]')
>>> numpy.change_unit( numpy.array([1,2], 't[Y]'), 't[d]',  ref )
array([366, 365], dtype="timedelta64[d]")

Note: we refused to use the ``.astype()`` method because of the 
additional 'time_reference' parameter that will sound strange for other 
typical uses of ``.astype()``.
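To make the role of the 'reference' parameter concrete, here is a rough
sketch of the conversion idea using the ``datetime`` module and dateutil's
relativedelta (just an illustration of why a reference is needed, not the
proposed implementation; the exact figures depend on the calendar convention
chosen):

from datetime import date
from dateutil.relativedelta import relativedelta

def years_to_days(n, reference=date(1970, 1, 1)):
    # express a relative time of n years in days, anchored at `reference`
    return ((reference + relativedelta(years=n)) - reference).days

print(years_to_days(1))                     # 365
print(years_to_days(1, date(1972, 1, 1)))   # 366 (the span crosses Feb 29)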

Opinions?

-- 
Francesc Alted
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Felix Richter
> Do your answers differ from the theory by a constant factor, or are
> they completely unrelated?

No, it's more complicated. Below you'll find my most recent, more
stripped-down code.

- I don't know how to scale in a way that works for any n.
- I don't know how to get the oscillations to match. I suppose it's a problem 
with the frequency scale, but usage of fftfreq() is straightforward...
- I don't know why the imaginary part of the FFT behaves so differently from 
the real part. It should just be a matter of sin vs. cos.

Is this voodoo? ;-)

And I didn't find any example on the internet that just tries to reproduce an 
analytic FT with the FFT...

Thanks for your help!




# coding: UTF-8
"""Test for FFT against analytic results"""

from scipy import *
from scipy import fftpack as fft
import pylab

def expdecay(t, dx, a):
return exp(-a*abs(t))*exp(1j*dx*t) * sqrt(pi/2.0)

def lorentz(x, dx, a):
return a/((x-dx)**2+a**2)

origfunc = lorentz
exactfft = expdecay

xvals, dx = linspace(0, 100, 2**12, retstep=True)  # avoid shadowing builtin xrange
n = len(xvals)

# calculate original function over positive half of x-axis
# this serves as input to fft, make sure datatype is complex
ftdata = zeros(xvals.shape, complex128)
ftdata += origfunc(xvals, 50, 1.0)

# do FFT
fftft  = fft.fft(ftdata)
# normalize
# but how exactly?
fftft /= sqrt(n)

# shift frequencies into human-readable order
fftfts= fft.fftshift(fftft)

# determine frequency axis
fftscale = fft.fftfreq(n, dx)
fftscale = fft.fftshift(fftscale)

# calculate exact result of FT for comparison
exactres = exactfft(fftscale, 50, 1.0)

pylab.subplot(211)
pylab.plot(xvals, ftdata.real, 'x', label='Re data')
pylab.legend()
pylab.subplot(212)
pylab.plot(fftscale, fftfts.real, 'x', label='Re FFT(data)')
pylab.plot(fftscale, fftfts.imag, '.', label='Im FFT(data)')
pylab.plot(fftscale, exactres.real, label='exact Re FT')
pylab.plot(fftscale, exactres.imag, label='exact Im FT')
pylab.legend()
pylab.show()
pylab.close()
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Operation over multiple axes? (Or: Partial flattening?)

2008-07-29 Thread Hans Meine
Hi,

with a multidimensional array (say, 4-dimensional), I often want to project 
this onto one single dimension, i.e., let "dat" be a 4D array, I am 
interested in

  dat.sum(0).sum(0).sum(0) # equals dat.sum(2).sum(1).sum(0)

However, creating intermediate results looks more expensive than necessary; I 
would actually like to say

  dat.sum((0,1,2))

One way to achieve this is partial flattening, which I did like this:

  dat.reshape((numpy.prod(dat.shape[:3]), dat.shape[3])).sum(0)

Is there a more elegant way to do this?

Ciao, /  /.o.
 /--/ ..o
/  / ANS  ooo


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Stéfan van der Walt
2008/7/29 Felix Richter <[EMAIL PROTECTED]>:
> I learned a few things in the meantime:
>
> In my installation, NumPy uses fftpack_lite while SciPy uses FFTW3. There are
> more test cases in SciPy, which all pass. So this confirms that my problem is
> purely a usage problem.
> One thing I was confused about is the fact that even if I calculate the
> function over a certain interval, I cannot tell FFT which interval this is,
> it will instead assume [0...n]. So actually I did not transform a Lorentz
> function centered at zero but rather centered at 500.
> Unfortunately, this solves only half of my problem, because I still cannot
> reproduce the exact FT. I'll ask for that on the SciPy list, this now seems
> more appropriate.

Felix,

Do your answers differ from the theory by a constant factor, or are
they completely unrelated?

Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] FFT usage / consistency

2008-07-29 Thread Felix Richter
I learned a few things in the meantime:

In my installation, NumPy uses fftpack_lite while SciPy uses FFTW3. There are 
more test cases in SciPy, which all pass. So this confirms that my problem is 
purely a usage problem.
One thing I was confused about is the fact that even if I calculate the 
function over a certain interval, I cannot tell FFT which interval this is, 
it will instead assume [0...n]. So actually I did not transform a Lorentz 
function centered at zero but rather centered at 500.
Unfortunately, this solves only half of my problem, because I still cannot 
reproduce the exact FT. I'll ask for that on the SciPy list, this now seems 
more appropriate.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread Pauli Virtanen
Tue, 29 Jul 2008 20:16:36 +0900, David Cournapeau wrote:
[clip]
> Is there a clear explanation about C99 features related to complex math
> somewhere ? The problem with C99 is that few compilers implement it

The C99 standard (or more precisely its draft but this is probably mostly 
the same thing) can be found here:

http://www.open-std.org/jtc1/sc22/wg14/www/standards

> properly. None of the most used compilers implement it entirely, and
> some of them don't even try, like MS compilers; the windows situation
> is the most problematic because the mingw32 compilers are old, and thus
> may not handle that many C99 features. There are also some shortcuts in
> the way we detect the math functions, which are not 100% reliable
> (because of some mingw problems in particular; I have already mentioned
> this problem several times, and I really ought to solve it at some point
> instead of just talking about it).
> 
> So what matters IMHO is the practical implications with the compilers/C
> runtime we use for numpy/scipy (gcc, visual studio and intel compilers
> should cover most of developers/users).

We implement the complex functions completely ourselves and use only the 
real C math functions. As we compose the complex operations from real 
ones, corner cases can and apparently do go wrong even if the underlying 
compiler is C99 compliant.

The new Python cmath module actually implements the corner cases using a 
lookup table. I wonder if we should follow...

-- 
Pauli Virtanen

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] RFC: A (second) proposal for implementing some date/time types in NumPy

2008-07-29 Thread Francesc Alted
On Tuesday 29 July 2008, Francesc Alted wrote:
[snip]
> > >   In [12]: t[0]
> > >   Out[12]: 24   # representation as an int64
> >
> > why not a "pretty" representation of timedelta64 too? I'd like that
> > better (at least for __str__, perhaps __repr__ should be the raw
> > numbers.
>
> That could be an interesting feature.  Here it is what the
> ``datetime``
>
> module does:
> >>> delta = datetime.datetime(1980,2,1)-datetime.datetime(1970,1,1)
> >>> delta.__str__()
>
> '3683 days, 0:00:00'
>
> >>> delta.__repr__()
>
> 'datetime.timedelta(3683)'
>
> For the NumPy ``timedelta64`` with a time unit of days, it could be
>
> something like:
> >>> delta_days.__str__()
>
> '3683 days'
>
> >>> delta_days.__repr__()
>
> 3683
>
> while for a ``timedelta64`` with a time unit of microseconds it could
>
> be:
> >>> delta_us.__str__()
>
> '3683 days, 3:04:05.64'
>
> >>> delta_us.__repr__()
>
> 3184564
>
> But I'm open to other suggestions, of course.

Sorry, but I've been a bit inconsistent here as this is documented in 
the proposal already.  Just to clarify things, here are the 
str/repr suggestions (just a bit more populated with examples) from the 
second version of the second proposal.

For absolute times:

In [5]: numpy.datetime64(42, 'us')
Out[5]: datetime64(42, 'us')

In [6]: print numpy.datetime64(42)
1970-01-01T00:00:00.42  # representation in ISO 8601 format

In [7]: print numpy.datetime64(367.7, 'D')  # decimal part is lost
1971-01-02  # still ISO 8601 format

In [8]: numpy.datetime64('2008-07-18T12:23:18', 'm')  # from ISO 8601
Out[8]: datetime64(20273063, 'm')

In [9]: print numpy.datetime64('2008-07-18T12:23:18', 'm')
2008-07-18T12:23

In [10]: t = numpy.zeros(5, dtype="datetime64[D]")

In [11]: print t
[1970-01-01  1970-01-01  1970-01-01  1970-01-01  1970-01-01]

In [12]: repr(t)
Out[12]: array([0, 0, 0, 0, 0], dtype="datetime64[D]")

In [13]: print t[0]
1970-01-01

In [14]: t[0]
Out[14]: datetime64(0, unit='D')

In [15]: t[0].item() # getter in action
Out[15]: datetime.datetime(1970, 1, 1, 0, 0)


For relative times:

In [5]: numpy.timedelta64(10, 'us')
Out[5]: timedelta64(10, 'us')

In [6]: print numpy.timedelta64(10, 'ms')
0:00:00.010

In [7]: print numpy.timedelta64(3600.2, 'm')  # decimal part is lost
2 days, 12:00

In [8]: t0 = numpy.zeros(5, dtype="datetime64[ms]")

In [9]: t1 = numpy.ones(5, dtype="datetime64[ms]")

In [10]: t = t1 - t0

In [11]: t[0] = datetime.timedelta(0, 24)  # setter in action

In [12]: print t
[0:00:24.000  0:00:00.001  0:00:00.001  0:00:00.001  0:00:00.001]

In [13]: repr(t)
Out[13]: array([24000, 1, 1, 1, 1], dtype="timedelta64[ms]")

In [14]: print t[0]
0:00:24.000

In [15]: t[0]
Out[15]: timedelta64(24000, unit='ms')

In [16]: t[0].item() # getter in action
Out[16]: datetime.timedelta(0, 24)


Cheers,

-- 
Francesc Alted
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread David Cournapeau
Pauli Virtanen wrote:
>
> I'm not sure whether it makes sense to keep the C99 tests in SVN, even if 
> marked as skipped, before the C code is fixed. Right now, it seems that 
> we are far from C99 compliance with regard to corner-case value inf-nan 
> behavior. (The branch cuts are mostly OK, though, and I suspect that what 
> is currently non-C99 could be fixed by making nc_sqrt to handle negative 
> zeros properly.)
>   

Is there a clear explanation about C99 features related to complex math
somewhere ? The problem with C99 is that few compilers implement it
properly. None of the most used compilers implement it entirely, and
some of them don't even try, like MS compilers; the windows situation
is the most problematic because the mingw32 compilers are old, and thus
may not handle that many C99 features. There are also some shortcuts in
the way we detect the math functions, which are not 100% reliable
(because of some mingw problems in particular; I have already mentioned
this problem several times, and I really ought to solve it at some point
instead of just talking about it).

So what matters IMHO is the practical implications with the compilers/C
runtime we use for numpy/scipy (gcc, visual studio and intel compilers
should cover most of developers/users).

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Recent work for branch cuts / C99 complex maths: problems on mingw

2008-07-29 Thread Pauli Virtanen
Tue, 29 Jul 2008 15:17:01 +0900, David Cournapeau wrote:

> Charles R Harris wrote:
>>
>>
>> On Mon, Jul 28, 2008 at 10:29 PM, David Cournapeau
>> <[EMAIL PROTECTED] >
>> wrote:
>>
>> Hi,
>>
>>I was away during the discussion on the updated complex
>>functions
>> using C99, and I've noticed it breaks some tests on windows (with
>> mingw;
>> I have not tried with Visual Studio, but it is likely to make
>> things even worse given C support from MS compilers):
>>
>> http://scipy.org/scipy/numpy/ticket/865
>>
>> Were those changes backported to 1.1.x ? If so, I would consider
>> this as
>> a release blocker,
>>
>>
>> The only changes to the computations were in acosh and asinh, which I
>> think should work fine. The tests check branch cuts and corner cases
>> among other things and are only in the trunk, so we aren't any worse
>> off than we were, we just have more failing tests to track down.
> 
> Ok. I thought there was more than just tests, but also C code
> modification. If not, it is certainly much less of a problem.

I'm not sure whether it makes sense to keep the C99 tests in SVN, even if 
marked as skipped, before the C code is fixed. Right now, it seems that 
we are far from C99 compliance with regard to corner-case value inf-nan 
behavior. (The branch cuts are mostly OK, though, and I suspect that what 
is currently non-C99 could be fixed by making nc_sqrt to handle negative 
zeros properly.)

Also, it appears that signaling and quiet NaNs (#IND, #QNAN) are printed 
differently on mingw32, so that the comparisons should be reworked to 
treat all nans the same, or the functions should be consistent in which 
flavor they return. I'm not sure whether IEEE 754 or C99 says something 
about what kind of NaNs functions should return. But I guess in practice 
this is not so important, I doubt anyone uses these for anything.

-- 
Pauli Virtanen

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] RFC: A (second) proposal for implementing some date/time types in NumPy

2008-07-29 Thread Francesc Alted
On Monday 28 July 2008, Pierre GM wrote:
> On Monday 28 July 2008 12:17:41 Francesc Alted wrote:
> > So, for allowing this to happen, we have concluded that a
> > conceptual change in our second proposal is needed: instead of
> > a 'resolution', we can introduce the 'time unit' concept.
>
> I'm all for that, thanks !
>
> > One thing that will not be possible though, is
> > to change the time unit of a relative time expressed in say, years,
> > to another time unit expressed in say, days.  This is because the
> > impossibility to know how many days has a year that is relative
> > (i.e. not bound to a given year).
>
> OK, that makes sense for timedeltas. But would I still be able to add
> a timedelta['Y'] (in years) to a datetime['D'] (in days) and get the
> proper result ?

Hmmm, good point.  Well, provided that we plan to set the casting rules 
so that the time unit of the outcome will be the largest of the time 
units of the operands, and assuming approximate values for the number of 
days in a year (365.2425, i.e. the average year length of the Gregorian 
calendar) and in a month (30.436875 = 365.2425/12), I think the following 
operations would be feasible:

>>> numpy.timedelta(20, unit='Y') + numpy.timedelta(365, unit='D')
20  # unit is Year
>>> numpy.timedelta(20, unit='Y') + numpy.timedelta(366, unit='D')
21  # unit is Year

>>> numpy.timedelta(43, unit='M') + numpy.timedelta(30, unit='D')
43  # unit is Month
>>> numpy.timedelta(43, unit='M') + numpy.timedelta(31, unit='D')
44  # unit is Month
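That is, the converted days would be truncated.  A toy sketch of the
arithmetic behind the examples above (using the average lengths given
earlier; the function name is illustrative):

DAYS_PER_YEAR = 365.2425              # average Gregorian year
DAYS_PER_MONTH = DAYS_PER_YEAR / 12   # 30.436875

def add_days(n, unit, days):
    # add a relative number of days to n years/months, truncating the
    # converted amount
    per = DAYS_PER_YEAR if unit == 'Y' else DAYS_PER_MONTH
    return n + int(days / per)

print(add_days(20, 'Y', 365))   # 20
print(add_days(20, 'Y', 366))   # 21
print(add_days(43, 'M', 30))    # 43
print(add_days(43, 'M', 31))    # 44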

Would that be ok for you?

> > More in general, it will not be possible
> > to perform 'time unit' conversions between units above and below a
> > relative week (because it is the maximum time unit that has a
> > definite number of seconds).
>
> Could you rephrase that ? You're still talking about conversion for
> timedelta, not datetime, right ?

Yes.  I was talking about the relative timedelta in that case.  The 
initial idea was to forbid conversions among relative timedeltas with 
different units that imply assumptions about the number of days.  But 
after pondering the example above at length, I now think it 
would be sensible to allow conversions from time units shorter than a 
week to units longer than a week (but not the inverse), truncating 
the outcome.  For example, the following would be allowed:

>>> numpy.timedelta(43, unit='D').astype("t8[M]")
1  # One complete month
>>> numpy.timedelta(365, unit='D').astype("t8[Y]")
0  # Not a complete year

But this would not:

>>> numpy.timedelta(2, unit='M').astype("t8[d]")
raise ``IncompatibleUnitError`` # How many days do 2 months have?
>>> numpy.timedelta(1, unit='Y').astype("t8[d]")
raise ``IncompatibleUnitError`` # How many days does 1 year have?

This will add more complexity to the code, but the functionality looks 
sensible to my eyes.  What do you think?

>
> > > >>>series.asfreq('A-MAR')
> >
> > Well, as we don't like an 'origin' to have part of our proposal,
> > you won't be able to do exactly that with the proposed plain dtype.
>
> That's what I was afraid of. Oh well, I'm sure we'll come with a
> way...
>
> Looking forward to reading the third version !

Well, as we are still discussing and changing things, we would like to 
wait a bit more until all the dust has settled.  But we are looking 
forward to producing the third version of the proposal before the end of 
this week.

Cheers,

-- 
Francesc Alted
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] RFC: A (second) proposal for implementing some date/time types in NumPy

2008-07-29 Thread Francesc Alted
On Monday 28 July 2008, Christopher Barker wrote:
> Hi,
>
> Sorry for the very long delay in commenting on this.

Don't worry, we are still in time to receive more comments (but if there 
are people willing to contribute more comments, hurry up, please!).

> In short, it 
> looks great, and thanks for your efforts.
>
> A couple small comments:
>  >  In [11]: t[0] = datetime.datetime.now()  # setter in action
>  >
>  >  In [12]: t[0]
>  >  Out[12]: '2008-07-16T13:39:25.315'   # representation in ISO 8601
>
> format
>
> I like that, but what about:
> >  In [8]: t1 = numpy.zeros(5, dtype="datetime64[s]")
> >  In [9]: t2 = numpy.ones(5, dtype="datetime64[s]")
> >
> >   In [10]: t = t2 - t1
> >
> >   In [11]: t[0] = 24  # setter in action (setting to 24 seconds)
>
> Is there a way to set in any other units? (hours, days, etc.)

Yes.  You will be able to use a scalar ``timedelta64``.  For example, if 
t is an array with dtype = 'timedelta64[s]' (i.e. with a time unit of 
seconds), you will be able to do the next:

>>> t[0] = numpy.timedelta64(2, unit="[D]") 

where you are setting the 0-element of t to 2 days.  However, you won't 
be able to do the next:

>>> t[0] = numpy.timedelta64(2, unit="[M]")

because a month does not have a definite number of seconds.  This will 
typically raise a ``TypeError`` exception, or perhaps a 
``numpy.IncompatibleUnitError``, which would be more self-explanatory.
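A toy sketch of the unit check behind that setter (the conversion table and
the error choice are just illustrative):

SECONDS_PER = {'s': 1, 'm': 60, 'h': 3600, 'D': 86400, 'W': 604800}

def to_seconds(value, unit):
    # convert a timedelta value to seconds; only units with a fixed
    # length in seconds are allowed (months and years are not)
    if unit not in SECONDS_PER:
        raise TypeError('unit %r has no fixed length in seconds' % unit)
    return value * SECONDS_PER[unit]

print(to_seconds(2, 'D'))   # 172800 seconds, i.e. 2 days
# to_seconds(2, 'M') would raise TypeError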

>
> >   In [12]: t[0]
> >   Out[12]: 24   # representation as an int64
>
> why not a "pretty" representation of timedelta64 too? I'd like that
> better (at least for __str__, perhaps __repr__ should be the raw
> numbers.

That could be an interesting feature.  Here it is what the ``datetime`` 
module does:

>>> delta = datetime.datetime(1980,2,1)-datetime.datetime(1970,1,1)
>>> delta.__str__()
'3683 days, 0:00:00'
>>> delta.__repr__()
'datetime.timedelta(3683)'

For the NumPy ``timedelta64`` with a time unit of days, it could be 
something like:

>>> delta_days.__str__()
'3683 days'
>>> delta_days.__repr__()
3683

while for a ``timedelta64`` with a time unit of microseconds it could 
be:

>>> delta_us.__str__()
'3683 days, 3:04:05.64'
>>> delta_us.__repr__()
3184564

But I'm open to other suggestions, of course.

> how will operations between different types work?
>
>  > t1 = numpy.ones(5, dtype="timedelta64[s]")
>  > t2 = numpy.ones(5, dtype="timedelta64[ms]")
>
> t1 + t2
>
>  >> ??

Yeah.  While the proposal stated that these operations should be 
possible, it is true that the casting rules were not established yet. 
After thinking a bit about this, we find that we should prioritize 
avoiding overflows rather than trying to keep the maximum precision.  
With this rule in mind, the outcome will always have the larger of the 
units in the operands.  In your example, t1 + t2 will have '[s]' units.

Would that make sense for most people?
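For what it's worth, here is a back-of-the-envelope check of the overflow
argument (the span an int64 covers at each time unit):

SECONDS_PER_YEAR = 365.2425 * 24 * 3600

for unit, per_second in [('s', 1), ('ms', 1e3), ('us', 1e6), ('ns', 1e9)]:
    span = 2**63 / per_second / SECONDS_PER_YEAR
    print('timedelta64[%s] spans about +/- %.3g years' % (unit, span))
# [ns] covers only ~292 years, while [s] covers ~2.9e11, so keeping the
# larger unit avoids overflow at the cost of precision.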

Cheers,

-- 
Francesc Alted
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion