Hi Vincent,

On Tue, Oct 19, 2010 at 2:28 AM, Vincent Favre-Nicolin
<vincent.favre-nico...@cea.fr> wrote:
>  I'm not sure what is happening exactly, but there is no indication that the
> random numbers are *repeating themselves*
>
>  However, you seem to hit a floating-point issue *when computing the sum* => if
> you replace numpy.sum(a)/size with numpy.sum(a.astype(numpy.float64))/size, you
> get correct results for float32 (always around 0.5).
>
>  It is actually not surprising: when you sum 2**25 numbers which are on
> average 0.5, you get ~2**24. And in single precision floating point, the
> mantissa is indeed stored using only 24 bits => when you try adding the next
> numbers, they are simply too small to register in the mantissa, and are not
> added. e.g. try: float32(2**24) + float32(0.5) == float32(2**24) !
>
>  Funny example of the limits of single precision floating point...
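
Checking that example with numpy (just a quick sketch, assuming the
standard IEEE round-to-nearest behaviour of float32):

    import numpy as np

    x = np.float32(2**24)                    # 16777216.0; the 24-bit mantissa is fully used
    print(x + np.float32(0.5) == x)          # True: the 0.5 is rounded away entirely
    print(np.finfo(np.float32).eps * 2**24)  # 2.0: the gap between consecutive float32 values at 2**24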

Yes, it seems that you are right. When the sum reaches 2**24, every
subsequent term is too small to change it. It never occurred to me that I am
limited not only by the exponent, but by the mantissa too...
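
For completeness, a rough sketch of the whole check (I use 2**26 values
here, twice the size above, so the saturation shows up clearly in the mean,
and np.cumsum to mimic the naive left-to-right accumulation; exact numbers
will vary from run to run):

    import numpy as np

    n = 2**26
    a = np.random.rand(n).astype(np.float32)   # uniform values, ~0.5 on average

    # Naive left-to-right float32 accumulation: once the running total reaches
    # 2**24, every remaining value (< 1.0) is below half the spacing (2.0) and
    # gets rounded away, so the total freezes there.
    running = np.cumsum(a, dtype=np.float32)
    print(running[-1] / n)                      # ~0.25, as if the generator were broken

    # Accumulating in float64 gives the expected mean.
    print(np.sum(a, dtype=np.float64) / n)      # ~0.5

numpy.sum(a, dtype=numpy.float64) should behave like the
astype(numpy.float64) version, without first making a full float64 copy of
the array.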

Best regards,
Bogdan

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
http://lists.tiker.net/listinfo/pycuda
