While I agree with that from a philosophical point of view,
what are the real technical implications of that C design decision?

Every 20 years or so, when there is a major architecture extension
(for example, 32-bit to 64-bit), some work has to be done to keep
applications running, for example when long changes its meaning from
32 bits to 64 bits. Nowadays it is a mistake to use long if you want
a 32-bit value, because its size differs between platforms, whereas
int is 32 bits on almost every relevant platform.

This was different 20 years ago, when int was 16 bits (like short)
on certain relevant platforms, so you had to use long to get 32 bits
on every relevant platform.
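
To make this concrete, here is a tiny illustration of my own (not
taken from any of the code discussed below); the same source reports
different sizes depending on the data model, which is exactly what
bites you when long is your 32-bit type:

  #include <stdio.h>

  int main(void)
  {
      /* ILP32 (e.g. Win32) prints 4 4 4; LP64 (e.g. Linux on x86-64)
         prints 4 8 8 - only int keeps its size. */
      printf("int=%lu long=%lu void*=%lu\n",
             (unsigned long)sizeof(int),
             (unsigned long)sizeof(long),
             (unsigned long)sizeof(void *));
      return 0;
  }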

You have two choices:

a) use the standard type (like int or long) that matches the size you
need on all the platforms that are relevant to you; this will normally
work for a long period of time, after which you will have some work to
do to examine your sources and keep your application running (see the
example below), or

b) use your own types via typedef and define them per platform with
the preprocessor (a minimal sketch follows below)
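
To illustrate approach b), here is a minimal sketch; the type names
and the platform test are made up for the example, and since C99 you
can get the same effect by including <stdint.h> and using int32_t and
int64_t directly:

  /* Project-owned fixed-size types, selected per platform with the
     preprocessor; only two data models are sketched here. */
  #if defined(__LP64__)              /* e.g. 64-bit Linux           */
  typedef int        my_int32;       /* int is 32 bits              */
  typedef long       my_int64;       /* long is 64 bits             */
  #else                              /* ILP32, e.g. Win32           */
  typedef int        my_int32;       /* int is 32 bits              */
  typedef long long  my_int64;       /* long is only 32 bits here   */
  #endif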

Here is what I had to do when I converted a large insurance math
package to run on a 64-bit Linux platform:

- change all long declarations to int (I did this using a program that
changed all the sources)

- change all "%ld" tags in printf and sprintf etc to "%d" etc (this was also
done using the same program, in fact the program was built from scratch
to do all that and it was refined during the process)

- because all pointers changed size from 4 to 8 bytes, there were
some dependencies inside the code where offsets of fields inside
structures were calculated; these calculations had to be eliminated
or corrected (see the sketch after this list)
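
To illustrate the last two points, here is a sketch of the kind of
change involved; the struct and field names are invented for the
example:

  #include <stddef.h>   /* offsetof */
  #include <stdio.h>

  struct record {
      char *name;       /* 4 bytes on Win32, 8 bytes on 64-bit Linux */
      int   amount;     /* was 'long' before the conversion          */
  };

  void print_amount(const struct record *r)
  {
      /* Was printf("%ld\n", ...) while amount was a long; the format
         tag has to follow the declaration, otherwise printf reads the
         wrong width once int and long no longer have the same size. */
      printf("%d\n", r->amount);
  }

  /* Hand-written offset arithmetic that assumed a 4-byte pointer is
     wrong once pointers are 8 bytes ... */
  #define AMOUNT_OFFSET_BAD  4
  /* ... whereas offsetof() is computed by the compiler and stays
     correct on every platform. */
  #define AMOUNT_OFFSET      offsetof(struct record, amount)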

Although the number of source lines was in the range of 1 million, I
got it done within 3 weeks. I had several thousand regression test
cases and a test driver which allowed me to compare the output of the
math package before and after the change on the different platforms
(Win32, 64-bit Linux). Without that, the whole conversion would not
have been possible.

So my conclusion is:

such a migration is a large effort in any case. The architecture
change will force work on you whether or not your "longs" change size.
It is a project in its own right. PL/I, for example, doesn't even have
a 64-bit variant (on z/OS, for instance). While I agree that the C
language has its flaws, it allows many things that other languages
don't.

Kind regards

Bernd



On 20.02.2015 at 00:17, Shmuel Metz (Seymour J.) wrote:
In <1346088131080403.wa.zatlas1yahoo....@listserv.ua.edu>, on
02/19/2015
    at 08:18 AM, "Ze'ev Atlas"
<0000004b34e7c98a-dmarc-requ...@listserv.ua.edu> said:

I still think that the decision, many decades ago, to leave the
actual definition and implementation of short, int, long, etc. to
the implementation rather than enforce rules (16, 32, 64 bits) was
wrong and shortsighted.
It was appalling; a reversion to FORTRAN-speak after PL/I showed how
it should[1] be done. But C is still the best PDP-7 specific language
ever designed.

[1] In general; I won't argue the Ada approach versus the PL/I
     approach, as both let the compiler figure out what storage unit
     is needed to hold the declared variable.

