Robert Kern wrote:
>
> numpy.signbit() should work like C99 signbit() (where possible), IMO.
> It can only return (integer) 0 or 1, and it does differentiate between
> NAN and -NAN. I don't think we should invent new semantics if we can
> avoid it.

Agreed, but in the signbit case I can't find what the semantics should be
for NANs; do you know of any reference? For example, building on
your example:

#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
        printf("signbit(NAN) = %d\n", signbit(NAN));
        printf("signbit(-NAN) = %d\n", signbit(-NAN));
        printf("signbit((-1) * NAN) = %d\n", signbit((-1) * NAN));
        printf("signbit(- NAN + NAN) = %d\n", signbit(-NAN + NAN));
        printf("signbit(NAN - NAN) = %d\n", signbit(NAN - NAN));
        return 0;
}


when run tells me:

signbit(NAN) = 0
signbit(-NAN) = -2147483648
signbit((-1) * NAN) = 0
signbit(- NAN + NAN) = -2147483648
signbit(NAN - NAN) = 0

Doesn't this indicate that signbit(NAN) is undefined? I guess I am
afraid that the glibc NAN is just one kind of NAN, and that its
behavior is not representative of every NAN.

cheers,

David
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
