Hi,

On Tue, Nov 15, 2011 at 12:51 AM, David Cournapeau <courn...@gmail.com> wrote:
> On Tue, Nov 15, 2011 at 6:22 AM, Matthew Brett <matthew.br...@gmail.com> 
> wrote:
>> Hi,
>>
>> On Mon, Nov 14, 2011 at 10:08 PM, David Cournapeau <courn...@gmail.com> 
>> wrote:
>>> On Mon, Nov 14, 2011 at 9:01 PM, Matthew Brett <matthew.br...@gmail.com> 
>>> wrote:
>>>> Hi,
>>>>
>>>> On Sun, Nov 13, 2011 at 5:03 PM, Charles R Harris
>>>> <charlesr.har...@gmail.com> wrote:
>>>>>
>>>>>
>>>>> On Sun, Nov 13, 2011 at 3:56 PM, Matthew Brett <matthew.br...@gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On Sun, Nov 13, 2011 at 1:34 PM, Charles R Harris
>>>>>> <charlesr.har...@gmail.com> wrote:
>>>>>> >
>>>>>> >
>>>>>> > On Sun, Nov 13, 2011 at 2:25 PM, Matthew Brett 
>>>>>> > <matthew.br...@gmail.com>
>>>>>> > wrote:
>>>>>> >>
>>>>>> >> Hi,
>>>>>> >>
>>>>>> >> On Sun, Nov 13, 2011 at 8:21 AM, Charles R Harris
>>>>>> >> <charlesr.har...@gmail.com> wrote:
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Sun, Nov 13, 2011 at 12:57 AM, Matthew Brett
>>>>>> >> > <matthew.br...@gmail.com>
>>>>>> >> > wrote:
>>>>>> >> >>
>>>>>> >> >> Hi,
>>>>>> >> >>
>>>>>> >> >> On Sat, Nov 12, 2011 at 11:35 PM, Matthew Brett
>>>>>> >> >> <matthew.br...@gmail.com>
>>>>>> >> >> wrote:
>>>>>> >> >> > Hi,
>>>>>> >> >> >
>>>>>> >> >> > Sorry for my continued confusion here.  This is numpy 1.6.1 on
>>>>>> >> >> > Windows XP, 32-bit.
>>>>>> >> >> >
>>>>>> >> >> > In [2]: np.finfo(np.float96).nmant
>>>>>> >> >> > Out[2]: 52
>>>>>> >> >> >
>>>>>> >> >> > In [3]: np.finfo(np.float96).nexp
>>>>>> >> >> > Out[3]: 15
>>>>>> >> >> >
>>>>>> >> >> > In [4]: np.finfo(np.float64).nmant
>>>>>> >> >> > Out[4]: 52
>>>>>> >> >> >
>>>>>> >> >> > In [5]: np.finfo(np.float64).nexp
>>>>>> >> >> > Out[5]: 11
>>>>>> >> >> >
>>>>>> >> >> > If there are 52 mantissa bits (so 53 bits of precision with the
>>>>>> >> >> > implicit leading bit), 2**53+1 should not be representable, and
>>>>>> >> >> > sure enough:
>>>>>> >> >> >
>>>>>> >> >> > In [6]: np.float96(2**53)+1
>>>>>> >> >> > Out[6]: 9007199254740992.0
>>>>>> >> >> >
>>>>>> >> >> > In [7]: np.float64(2**53)+1
>>>>>> >> >> > Out[7]: 9007199254740992.0
>>>>>> >> >> >
>>>>>> >> >> > If the nexp is right, the max should be higher for the float96
>>>>>> >> >> > type:
>>>>>> >> >> >
>>>>>> >> >> > In [9]: np.finfo(np.float64).max
>>>>>> >> >> > Out[9]: 1.7976931348623157e+308
>>>>>> >> >> >
>>>>>> >> >> > In [10]: np.finfo(np.float96).max
>>>>>> >> >> > Out[10]: 1.#INF
>>>>>> >> >> >
>>>>>> >> >> > I see that long double in C is 12 bytes wide, and double is the
>>>>>> >> >> > usual 8 bytes.
>>>>>> >> >>
>>>>>> >> >> Sorry - sizeof(long double) is 12 using mingw.  I see that long
>>>>>> >> >> double is the same as double in MS Visual C++.
>>>>>> >> >>
>>>>>> >> >> http://en.wikipedia.org/wiki/Long_double
>>>>>> >> >>
>>>>>> >> >> but, as expected from the name:
>>>>>> >> >>
>>>>>> >> >> In [11]: np.dtype(np.float96).itemsize
>>>>>> >> >> Out[11]: 12
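
For anyone trying to reproduce this, here's a minimal probe - assuming
only numpy scalar arithmetic, and using np.longdouble so it runs on any
platform - that counts the significand bits the arithmetic actually
honors (the helper name is mine, not numpy API):

import numpy as np

def arithmetic_sig_bits(ftype):
    # The smallest n for which 2**n + 1 rounds back to 2**n is the
    # significand width in bits (nmant + 1): 53 for IEEE double, 64
    # for x87 80-bit extended precision.
    n = 1
    while ftype(2) ** n + ftype(1) != ftype(2) ** n:
        n += 1
    return n

print(arithmetic_sig_bits(np.float64))     # 53
print(arithmetic_sig_bits(np.longdouble))  # 53 on this build; 64 where
                                           # the math is really extended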
>>>>>> >> >>
>>>>>> >> >
>>>>>> >> > Hmm, good point. There should not be a float96 on Windows using the
>>>>>> >> > MSVC compiler, and the longdouble types 'gG' should return float64
>>>>>> >> > and complex128 respectively. OTOH, I believe the mingw compiler has
>>>>>> >> > real float96 types, but I wonder about library support. This is
>>>>>> >> > really a build issue, and it would be good to have some feedback on
>>>>>> >> > what different platforms are doing so that we know if we are doing
>>>>>> >> > things right.
>>>>>> >>
>>>>>> >> Is it possible that numpy is getting confused by being compiled with
>>>>>> >> mingw on top of a Visual Studio Python?
>>>>>> >>
>>>>>> >> Some further forensics seem to suggest that, despite the math
>>>>>> >> suggesting float96 is float64, the storage format is in fact 80-bit
>>>>>> >> extended precision:
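
The kind of forensics I mean is along these lines - a sketch, with the
byte-level comments giving what I would expect for little-endian x87
storage rather than values verified on every platform:

import numpy as np

x = np.longdouble(1) / np.longdouble(3)
print(np.dtype(np.longdouble).itemsize)  # 12 on this 32-bit mingw build
print(x.tobytes().hex())
# x87 extended precision stores an explicit 64-bit significand in the
# first 8 bytes.  If the division really ran at 80 bits, that
# significand is the 0xaa...ab pattern of 1/3 all the way down; if it
# ran at 64 bits and was merely widened for storage, the low 11 bits
# of the significand come out as zeros.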
>>>>>> >>
>>>>>> >
>>>>>> > Yes, extended precision is the long double type on Intel hardware with
>>>>>> > gcc; the 96/128 bit widths come from alignment on 4- or 8-byte
>>>>>> > boundaries. With MSVC, double and long double are both IEEE double,
>>>>>> > and on SPARC, long double is IEEE quad precision.
>>>>>>
>>>>>> Right - but I think my researches are showing that the longdouble
>>>>>> numbers are being _stored_ as 80-bit, but the math on those numbers
>>>>>> is 64-bit.
>>>>>>
>>>>>> Is there a reason that numpy can't do 80-bit math on these guys?  If
>>>>>> there is, is there any point in having a float96 on Windows?
>>>>>
>>>>> It's a compiler/architecture thing and depends on how the compiler
>>>>> interprets the long double C type. The gcc compiler does do 80-bit math
>>>>> on Intel/AMD hardware. MSVC doesn't, and probably never will. MSVC
>>>>> shouldn't produce float96 numbers; if it does, it is a bug. Mingw uses
>>>>> the gcc compiler, so the numbers are there, but if it uses the MS
>>>>> library it will have to convert them to double to do computations like
>>>>> sin(x), since there are no Microsoft routines for extended precision. I
>>>>> suspect that the gcc/MS combo is what is producing the odd results you
>>>>> are seeing.
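
A quick probe can separate the two effects Charles describes - basic
ops in the FPU versus library calls that may round through double.
This is a sketch; the commented outputs are my expectations for a
gcc/x86 build, not something I've verified everywhere:

import numpy as np

ld = np.longdouble
# 1 + 2**-60 is representable with a 64-bit significand but rounds to
# exactly 1.0 in IEEE double, so it separates the two precisions.
x = ld(1) + ld(2) ** -60
print(x - ld(1))    # 2**-60 if addition runs at 80 bits, 0.0 if at 64
print(np.exp(x) == np.exp(ld(1)))   # True if exp() rounds through
                                    # double (or if x collapsed above)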
>>>>
>>>> I think we might be talking past each other a bit.
>>>>
>>>> It seems to me that, if float96 must use float64 math, then it should
>>>> be removed from the numpy namespace, because
>>>
>>> If we were to do so, it would break too much code.
>>
>> David - please - obviously I'm not suggesting removing it without
>> deprecating it.
>
> Let's say I find it debatable that removing it (with all the
> deprecations) would be a good use of effort, especially given that
> there is no obviously better choice to be made.
>
>>
>>>> a) It implies higher precision than float64 but does not provide it
>>>> b) It uses more memory to no obvious advantage
>>>
>>> There is an obvious advantage: to handle memory blocks which use long
>>> double, created outside numpy (or even Python).
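
For concreteness, the use case is something like this - the byte string
here is a stand-in for data actually produced by C code or another
library:

import numpy as np

# Pretend these bytes arrived from a C extension or a file written
# with the platform's native long double.
external_bytes = np.arange(3, dtype=np.longdouble).tobytes()

# np.longdouble lets numpy view them in place, whatever width and
# layout the local compiler gives long double.
arr = np.frombuffer(external_bytes, dtype=np.longdouble)
print(arr)  # [ 0.  1.  2.]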
>>
>> Right - but that's a bit arcane, and I would have thought
>> np.longdouble would be a good enough name for that.   Of course, the
>> users may be surprised, as I was, that memory allocated as if for
>> higher precision is doing float64 math, and that may take them some
>> time to work out.  I'll say again that 'longdouble' says to me
>> 'something specific to the compiler' and 'float96' says 'something
>> standard in numpy', and that I was surprised when I found out what it
>> was.
>
> I think the expectation is wrong, rather than the implementation :) If
> you use float96 on Windows, each item will take 12 bytes (on 32-bit
> systems at least), so that part makes sense if you understand the
> number 96 as referring to its size in memory.
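
Reading it that way, the name is just the slot width, which is easy to
check directly - nmant being the part my expectation tripped over:

import numpy as np

dt = np.dtype(np.longdouble)
print(dt.itemsize * 8)                # 96 here (12 bytes); 128 on
                                      # typical 64-bit gcc builds
print(np.finfo(np.longdouble).nmant)  # significand bits the math is
                                      # actually reported to carry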

I recognize this from our previous float80 / float96 discussion.

I think the user you have in mind does not think as I do.

I wish there were a good way of knowing which user is more common, your
user or my user.

But anyway - there's nowhere much further to go in this discussion - so
I expect to learn to make friends with float96.

See you,

Matthew