Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-15 Thread David Cournapeau
On Tue, Nov 15, 2011 at 6:22 AM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 snip

 I think we might be talking past each other a bit.

 It seems to me that, if float96 must use float64 math, then it should
 be removed from the numpy namespace, because

 If we were to do so, it would break too much code.

 David - please - obviously I'm not suggesting removing it without
 deprecating it.

Let's say I find it debatable that removing it (with all the
deprecations) would be a good use of effort, especially given that
there is no obviously better choice to be made.


 a) It implies higher precision than float64 but does not provide it
 b) It uses more memory to no obvious advantage

 There is an obvious advantage: to handle memory blocks which use long
 double, created outside numpy (or even python).

 Right - but that's a bit arcane, and I would have thought
 np.longdouble would be a good enough name for that.  Of course, the
 users may be surprised, as I was, that memory allocated for higher
 precision is using float64, and that may take them some time to work
 out.  I'll say again that 'longdouble' says to me 'something specific
 to the compiler' and 'float96' says 'something standard in numpy'.

Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-15 Thread Matthew Brett
Hi,

On Tue, Nov 15, 2011 at 12:51 AM, David Cournapeau courn...@gmail.com wrote:
 snip

 Let's say I find it debatable that removing it (with all the
 deprecations) would be a good use of effort, especially given that
 there is no obviously better choice to be made.

 There is an obvious advantage: to handle memory blocks which use long
 double, created outside numpy (or even python).

Right - but that's a bit arcane, and I would have thought
np.longdouble would be a good enough name for that.  Of course, the
users may be surprised, as I was, that memory allocated for higher
precision is using float64, and that may take them some time to work
out.  I'll say again that 'longdouble' says to me 'something specific
to the compiler' and 'float96' says 'something standard in numpy'.

Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-14 Thread Matthew Brett
Hi,

On Sun, Nov 13, 2011 at 5:03 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 snip

 Right - but I think my researches are showing that the longdouble
 numbers are being _stored_ as 80 bit, but the math on those numbers is
 64 bit.

 Is there a reason that numpy can't do 80-bit math on these guys?  If
 there is, is there any point in having a float96 on windows?

 It's a compiler/architecture thing and depends on how the compiler
 interprets the long double c type. The gcc compiler does do 80 bit math on
 Intel/AMD hardware. MSVC doesn't, and probably never will. MSVC shouldn't
 produce float96 numbers, if it does, it is a bug. Mingw uses the gcc
 compiler, so the numbers are there, but if it uses the MS library it will
 have to convert them to double to do computations like sin(x) since there
 are no microsoft routines for extended precision. I suspect that gcc/ms
 combo is what is producing the odd results you are seeing.

I think we might be talking past each other a bit.

It seems to me that, if float96 must use float64 math, then it should
be removed from the numpy namespace, because

a) It implies higher precision than float64 but does not provide it
b) It uses more memory to no obvious advantage

On the other hand, it seems to me that raw gcc does use higher
precision for basic math on long double, as expected.  For example,
this guy passes:

#include <math.h>
#include <assert.h>

int main(void) {
    double d;
    long double ld;
    d = pow(2, 53);
    ld = d;
    assert(d == ld);
    d += 1;
    ld += 1;
    /* double rounds down because it doesn't have enough precision */
    assert(d != ld);
    assert(d == ld - 1);
    return 0;
}

whereas numpy does not use the higher precision:

In [10]: a = np.float96(2**53)

In [11]: a
Out[11]: 9007199254740992.0

In [12]: b = np.float64(2**53)

In [13]: b
Out[13]: 9007199254740992.0

In [14]: a == b
Out[14]: True

In [15]: (a + 1) == (b + 1)
Out[15]: True

So maybe there is a way of picking up the gcc math in numpy?
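
[Editorial note: the same check can be run from numpy at runtime - a
sketch, which only reports what a given build actually does; `extended`
comes out False on builds where longdouble math falls back to double:]

```python
import numpy as np

# Mirror the C test above: 2**53 + 1 is exactly representable in 80-bit
# extended precision but not in IEEE double.
d = np.float64(2) ** 53
ld = np.longdouble(2) ** 53

double_only = (d + 1) == d   # float64 always rounds 2**53 + 1 back down
extended = (ld + 1) != ld    # True only if longdouble math is really wider

print("longdouble math is extended:", bool(extended))
```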

Best,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-14 Thread David Cournapeau
On Mon, Nov 14, 2011 at 9:01 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 snip

 I think we might be talking past each other a bit.

 It seems to me that, if float96 must use float64 math, then it should
 be removed from the numpy namespace, because

If we were to do so, it would break too much code.


 a) It implies higher precision than float64 but does not provide it
 b) It uses more memory to no obvious advantage

There is an obvious advantage: to handle memory blocks which use long
double, created outside numpy (or even python).

Otherwise, while gcc indeed supports long double, the fact that the C
runtime doesn't really means it is hopeless to reach any kind of
consistency. And I will reiterate what I said before about long
double: if you care about your code behaving consistently across
platforms, just forget about long double.
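
[Editorial note: David's point about external memory blocks can be made
concrete - a sketch in which a byte string produced by numpy itself
stands in for a buffer allocated by C code:]

```python
import numpy as np

# Wrap a raw buffer of long doubles without any conversion to float64.
# In real use the bytes would come from C code or a C-written file; here
# numpy generates them as a stand-in for external memory.
raw = np.array([1.0, 2.5], dtype=np.longdouble).tobytes()
arr = np.frombuffer(raw, dtype=np.longdouble)

print(arr.tolist())  # prints [1.0, 2.5]
```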

cheers,

David


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-14 Thread Matthew Brett
Hi,

On Mon, Nov 14, 2011 at 10:08 PM, David Cournapeau courn...@gmail.com wrote:
 snip

 I think we might be talking past each other a bit.

 It seems to me that, if float96 must use float64 math, then it should
 be removed from the numpy namespace, because

 If we were to do so, it would break too much code.

David - please - obviously I'm not suggesting removing it without
deprecating it.

 a) It implies higher precision than float64 but does not provide it
 b) It uses more memory to no obvious advantage

 There is an obvious advantage: to handle memory blocks which use long
 double, created outside numpy (or even python).

Right - but that's a bit arcane, and I would have thought
np.longdouble would be a good enough name for that.   Of course, the
users may be surprised, as I was, that memory allocated for higher
precision is using float64, and that may take them some time to work
out.  I'll say again that 'longdouble' says to me 'something specific
to the compiler' and 'float96' says 'something standard in numpy', and
that I - was surprised - when I found out what it was.

 Otherwise, while gcc indeed supports long double, the fact that the C
 runtime doesn't really means it is hopeless to reach any kind of
 consistency.

I'm sorry for my ignorance, but 

Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-13 Thread Charles R Harris
On Sun, Nov 13, 2011 at 12:57 AM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Sat, Nov 12, 2011 at 11:35 PM, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  Sorry for my continued confusion here.  This is numpy 1.6.1 on windows
  XP 32 bit.
 
  In [2]: np.finfo(np.float96).nmant
  Out[2]: 52
 
  In [3]: np.finfo(np.float96).nexp
  Out[3]: 15
 
  In [4]: np.finfo(np.float64).nmant
  Out[4]: 52
 
  In [5]: np.finfo(np.float64).nexp
  Out[5]: 11
 
  If there are 52 bits of precision, 2**53+1 should not be
  representable, and sure enough:
 
  In [6]: np.float96(2**53)+1
  Out[6]: 9007199254740992.0
 
  In [7]: np.float64(2**53)+1
  Out[7]: 9007199254740992.0
 
  If the nexp is right, the max should be higher for the float96 type:
 
  In [9]: np.finfo(np.float64).max
  Out[9]: 1.7976931348623157e+308
 
  In [10]: np.finfo(np.float96).max
  Out[10]: 1.#INF
 
  I see that long double in C is 12 bytes wide, and double is the usual 8
 bytes.

 Sorry - sizeof(long double) is 12 using mingw.  I see that long double
 is the same as double in MS Visual C++.

 http://en.wikipedia.org/wiki/Long_double

 but, as expected from the name:

 In [11]: np.dtype(np.float96).itemsize
 Out[11]: 12


Hmm, good point. There should not be a float96 on Windows using the MSVC
compiler, and the longdouble types 'gG' should return float64 and
complex128 respectively. OTOH, I believe the mingw compiler has real
float96 types but I wonder about library support. This is really a build
issue and it would be good to have some feedback on what different
platforms are doing so that we know if we are doing things right.

Chuck


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-13 Thread Matthew Brett
Hi,

On Sun, Nov 13, 2011 at 8:21 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 snip

 Hmm, good point. There should not be a float96 on Windows using the MSVC
 compiler, and the longdouble types 'gG' should return float64 and complex128
 respectively. OTOH, I believe the mingw compiler has real float96 types but
 I wonder about library support. This is really a build issue and it would be
 good to have some feedback on what different platforms are doing so that we
 know if we are doing things right.

Is it possible that numpy is getting confused by being compiled with
mingw on top of a visual studio python?

Some further forensics seem to suggest that, despite the fact that the
math suggests float96 is float64, the storage format is in fact 80-bit
extended precision:

On OSX 32-bit where float128 is definitely 80 bit precision we see the
sign bit being flipped to show us the beginning of the number:

In [33]: bigbin(np.float128(2**53)-1)
Out[33]: 
'1011011100111000'

In [34]: bigbin(-np.float128(2**53)+1)
Out[34]: 
'111100111000'

I think that's 48 bits of padding followed by the number (bit 49 is
being flipped with the sign).

On windows (well, wine, but I think it's the same):

bigbin(np.float96(2**53)-1)
Out[14]: 
'011100111000'
bigbin(np.float96(-2**53)+1)
Out[15]: 
'111100111000'

Thanks,

Matthew

bigbin definition:
import sys
LE = sys.byteorder == 'little'

import numpy as np

def bigbin(val):
    val = np.asarray(val)
    nbytes = val.dtype.itemsize
    dt = [('f', np.uint8, nbytes)]
    out = [np.binary_repr(el, 8) for el in val.view(dt)['f']]
    if LE:
        out = out[::-1]
    return ''.join(out)
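
[Editorial note: the finfo oddity above can be cross-checked by comparing
the advertised nmant against observed rounding - a sketch; on a
consistent build the two agree:]

```python
import numpy as np

# finfo.nmant is the number of fraction bits, so 2**nmant + 1 should be
# exactly representable while 2**(nmant + 1) + 1 should round away.
info = np.finfo(np.longdouble)
x = np.longdouble(2) ** info.nmant

assert x + 1 != x            # still representable at this precision
assert (2 * x + 1) == 2 * x  # one bit too many; rounds back down
```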


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-13 Thread Charles R Harris
On Sun, Nov 13, 2011 at 2:25 PM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 snip

 Is it possible that numpy is getting confused by being compiled with
 mingw on top of a visual studio python?

  Some further forensics seem to suggest that, despite the fact that the
  math suggests float96 is float64, the storage format is in fact 80-bit
  extended precision:


Yes, extended precision is the type on Intel hardware with gcc; the 96/128
bits come from alignment on 4 or 8 byte boundaries. With MSVC, double and
long double are both ieee double, and on SPARC, long double is ieee quad
precision.
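
[Editorial note: the padding Chuck describes is directly visible from
numpy - a sketch; exact numbers depend on the platform:]

```python
import numpy as np

# Storage width (including alignment padding) vs. actual significand width.
storage_bits = np.dtype(np.longdouble).itemsize * 8  # e.g. 96 or 128 with gcc/x86
precision_bits = np.finfo(np.longdouble).nmant + 1   # e.g. 64 for x87 extended

print(storage_bits, precision_bits)
```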

snip

Chuck


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-13 Thread Matthew Brett
Hi,

On Sun, Nov 13, 2011 at 1:34 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


snip

 Yes, extended precision is the type on Intel hardware with gcc; the 96/128
 bits come from alignment on 4 or 8 byte boundaries. With MSVC, double and
 long double are both IEEE double, and on SPARC, long double is IEEE quad
 precision.

Right - but I think my research shows that the longdouble
numbers are being _stored_ as 80 bit, but the math on those numbers is
64 bit.

Is there a reason that numpy can't do 80-bit math on these guys?  If
there is, is there any point in having a float96 on windows?
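One way to check the storage claim directly is to dump the raw bytes; this is a sketch, and the byte values in the comments assume a little-endian machine:

```python
import numpy as np

# The raw storage shows what format is in memory, independent of how the
# arithmetic behaves.  In 80-bit x87 extended precision, 1.0 is held
# (little-endian) as significand bytes 00..00 80 followed by exponent
# bytes ff 3f, with any remaining bytes being alignment padding.
print(np.float64(1.0).tobytes().hex())     # 000000000000f03f on little-endian
print(np.longdouble(1.0).tobytes().hex())  # layout depends on the platform
```

If the longdouble dump shows the ff 3f exponent pattern, the storage really is 80-bit extended, whatever finfo says about the math.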

See you,

Matthew


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-13 Thread Charles R Harris
On Sun, Nov 13, 2011 at 3:56 PM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Sun, Nov 13, 2011 at 1:34 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 
 
snip
 
 Is it possible that numpy is getting confused by being compiled with
 mingw on top of a Visual Studio python?

 Some further forensics seem to suggest that, despite the fact that the math
 suggests float96 is float64, the storage format is in fact 80-bit
 extended precision:
 
 
  Yes, extended precision is the type on Intel hardware with gcc; the 96/128
  bits come from alignment on 4 or 8 byte boundaries. With MSVC, double and
  long double are both IEEE double, and on SPARC, long double is IEEE quad
  precision.

 Right - but I think my research shows that the longdouble
 numbers are being _stored_ as 80 bit, but the math on those numbers is
 64 bit.

 Is there a reason that numpy can't do 80-bit math on these guys?  If
 there is, is there any point in having a float96 on windows?


It's a compiler/architecture thing and depends on how the compiler
interprets the long double C type. The gcc compiler does do 80-bit math on
Intel/AMD hardware. MSVC doesn't, and probably never will. MSVC shouldn't
produce float96 numbers; if it does, it is a bug. Mingw uses the gcc
compiler, so the numbers are there, but if it uses the MS library it will
have to convert them to double to do computations like sin(x), since there
are no Microsoft routines for extended precision. I suspect that the gcc/MS
combo is what is producing the odd results you are seeing.
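A sketch that tests which of these regimes a given build is in, by measuring the epsilon the arithmetic actually delivers rather than the one finfo reports (the `computed_eps` helper is ours, purely for illustration):

```python
import numpy as np

# Halve eps until 1 + eps/2 rounds back to 1; the last eps is the
# machine epsilon of the precision the arithmetic really uses.
def computed_eps(ftype):
    one, eps = ftype(1), ftype(1)
    while one + eps / ftype(2) != one:
        eps = eps / ftype(2)
    return eps

print(computed_eps(np.float64))     # 2.220446049250313e-16, i.e. 2**-52
# For longdouble: 2**-63 if the math is really 80-bit extended,
# 2**-52 if it silently falls back to double (as with MSVC).
print(computed_eps(np.longdouble))
```

On the build Matthew describes, this measured epsilon would disagree with np.finfo(np.float96).nmant, confirming that storage and arithmetic use different precisions.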

Chuck


[Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-12 Thread Matthew Brett
Hi,

Sorry for my continued confusion here.  This is numpy 1.6.1 on windows
XP 32 bit.

In [2]: np.finfo(np.float96).nmant
Out[2]: 52

In [3]: np.finfo(np.float96).nexp
Out[3]: 15

In [4]: np.finfo(np.float64).nmant
Out[4]: 52

In [5]: np.finfo(np.float64).nexp
Out[5]: 11

If there are 52 bits of precision, 2**53+1 should not be
representable, and sure enough:

In [6]: np.float96(2**53)+1
Out[6]: 9007199254740992.0

In [7]: np.float64(2**53)+1
Out[7]: 9007199254740992.0

If the nexp is right, the max should be higher for the float96 type:

In [9]: np.finfo(np.float64).max
Out[9]: 1.7976931348623157e+308

In [10]: np.finfo(np.float96).max
Out[10]: 1.#INF

I see that long double in C is 12 bytes wide, and double is the usual 8 bytes.

So - now I am not sure what this float96 is.  I was expecting 80 bit
extended precision, but it doesn't look right for that...

Does anyone know what representation this is?
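One way to answer this by inspection is to decode the raw bytes; this is a sketch (the `decode_longdouble` helper is ours), and it assumes a little-endian machine:

```python
import numpy as np

# 80-bit x87 extended precision has a 15-bit exponent and a 64-bit
# significand with an explicit integer bit; trailing bytes up to the
# dtype itemsize (12 or 16) are alignment padding.
def decode_longdouble(x):
    raw = np.longdouble(x).tobytes()
    if len(raw) == 8:                        # plain IEEE double (e.g. MSVC)
        return 'IEEE double'
    sig = int.from_bytes(raw[:8], 'little')              # 64-bit significand
    exp = int.from_bytes(raw[8:10], 'little') & 0x7FFF   # 15-bit biased exponent
    if sig >> 63:                            # explicit integer bit is set
        return 'x87 80-bit extended, exponent %d' % (exp - 16383)
    return 'unrecognized'

print(decode_longdouble(1.0))
```

On a gcc/x86 build this reports the x87 extended format even when the arithmetic only carries double precision, which would explain the odd finfo numbers.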

Thanks a lot,

Matthew


Re: [Numpy-discussion] Odd-looking long double on windows 32 bit

2011-11-12 Thread Matthew Brett
Hi,

On Sat, Nov 12, 2011 at 11:35 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 Sorry for my continued confusion here.  This is numpy 1.6.1 on windows
 XP 32 bit.

snip
 I see that long double in C is 12 bytes wide, and double is the usual 8 bytes.

Sorry - sizeof(long double) is 12 using mingw.  I see that long double
is the same as double in MS Visual C++.

http://en.wikipedia.org/wiki/Long_double

but, as expected from the name:

In [11]: np.dtype(np.float96).itemsize
Out[11]: 12

Cheers,

Matthew