On Tue, 8 Dec 2009, Richard O'Keefe wrote:

On Dec 8, 2009, at 12:28 PM, Henning Thielemann wrote:

It is the responsibility of the programmer to choose number types that are appropriate for the application. If I address pixels on today's screens I have to choose at least Word16; on 8-bit computers a byte was enough. Thus, this sounds like an error.

That kind of attitude might have done very well in the 1960s.

I don't quite understand. If it is not the responsibility of the programmer to choose numbers of the right size, then whose is it?

If the operating system uses Int32 for describing file sizes and Int16 for screen coordinates, I am safe to do so as well. The interface to the operating system could use type synonyms FileSize and ScreenCoordinate that scale with future sizes, but the programmer remains responsible for actually using ScreenCoordinate for coordinates and not for file sizes.
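
A rough sketch of what I have in mind (the synonym and newtype names are only illustrations, not part of any real interface):

    import Data.Int (Int16, Int32)

    type FileSize         = Int32   -- could grow to Int64 on a future system
    type ScreenCoordinate = Int16   -- could grow to Int32 on a future system

    -- The synonyms track whatever the platform provides, but only the
    -- programmer keeps a FileSize out of a coordinate slot; wrapping the
    -- synonym in a newtype would let the compiler check that instead.
    newtype Coordinate = Coordinate ScreenCoordinate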

In an age when Intel have demonstrated 48 full x86 cores on a single
chip, when it's possible to get a single-chip "DSP" with >240 cores
that's fast enough to *calculate* MHz radio signals in real time,
typical machine-oriented integer sizes run out _really_ fast.
For example, a simple counting loop exhausts the range of a 32-bit
integer in well under a second.
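
A tiny illustration, assuming GHC's Data.Int, where fixed-size
arithmetic wraps around silently:

    import Data.Int (Int32)

    main :: IO ()
    main = do
      print (maxBound :: Int32)        -- 2147483647
      print ((maxBound :: Int32) + 1)  -- silently wraps to -2147483648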

The programmer doesn't always have the information necessary to
choose machine-oriented integer sizes.  Or there might be no choice on offer.
Or the choice the programmer needs might not be available:  if I want
to compute sums of products of 64-bit integers, where are the 128-bit
integers I need?
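
In Haskell the escape hatch is Integer rather than a wider machine
type; a minimal sketch of the sum-of-products case:

    import Data.Int (Int64)

    -- Sum of pairwise products, accumulated in Integer because the
    -- standard libraries offer no fixed 128-bit type.
    sumOfProducts :: [Int64] -> [Int64] -> Integer
    sumOfProducts xs ys =
      sum [ toInteger x * toInteger y | (x, y) <- zip xs ys ]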

And the consequence is to ship a program that raises an exception about problems with the size of integers? I'm afraid I don't understand what you are arguing for.