[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> Date: Mon, 22 Apr 2013 17:52:24 -0700
> From: Mark Janssen <dreamingforw...@gmail.com>
> To: Uday S Reddy <u.s.re...@cs.bham.ac.uk>
> Cc: types-list@lists.seas.upenn.edu
> Subject: Re: [TYPES] Declarative vs imperative
>
>>> A C program is not a specification in any sense (if it's even
>>> appropriate to call C source text a "program", prior to compilation
>>> into an executable). Only the theorist who has never gotten "close to
>>> the machine" could ever say that. <other ranting elided>
>>
>> Nikhil's point was that a C program is a "specification" in the sense
>> that it has details left out which are filled in by the compiler and
>> the run-time system automatically. He is pointing out the *relativity*
>> of the notions of specification and implementation.
>
> But there are no details left out. Neither the computer nor compiler
> "fills in the gaps". What computing devices are you talking about?

Mark, do you really feel that there are no details left out of a C
program? Let's take a toy program as an example:

#include <stdio.h>

int main() {
    int a = 3;
    int b = 0;
    printf("%d\n", a + b);
}

Here are some of the details I left out, and that the compiler will have
to fill in for me:

* Whether, how, and when to optimize out the addition of 0.
* What registers `a` and `b` are stored in.
* Which instructions should carry out the addition.
* ...

Different compilers are going to fill in these details in different ways
-- perhaps not very different ways in the case of this toy program. But
for a more sophisticated program, given five different C compilers all
compiling to the same architecture, I doubt that the resulting five
compiled programs will be byte-identical! This is as it should be, and
it doesn't necessarily mean that any of them are "wrong"; rather, it
means that there are various correct ways of implementing the
"specification" -- I use the term loosely -- that the C program
provides. You could argue that they would be identical in the ways that
"matter".
And that's one reason why we study programming language semantics: to
allow us to precisely characterize what aspects of a program "matter"
(in a given setting), which gives us tools to *prove* that those five
programs are in fact identical (or not!) with respect to those aspects
that "matter".

On the other hand, here are some things I *did* include in the program
above, but that I may or may not actually care about. These include:

* The order in which `a` and `b` are declared.
* The order in which `a` and `b` are initialized.
* The order of arguments to `+`.
* ...

In the first chapter of _Optimizing Compilers for Modern Architectures_,
Allen and Kennedy write, "Sequential languages introduce constraints
that are not critical to preserving the meaning of a computation." The
goal of an optimizing compiler, then, might be to determine which
constraints *are* critical in a source program so that it can throw away
the rest. (This is not such a big deal for my toy program, of course,
but it gets a lot more interesting when we are dealing with programs
that could benefit considerably from automatic parallelization, which
they go on to discuss in the book.) Here, again, we can turn to PL
semantics to help us state and prove properties of program
transformations.

> Now you have pointed out an important issue. Source code has no
> meaning. Computers are inert and not capable of determining meaning.
> Its meaning is determined by the *programmer* who is following the
> intentions and specification of the *language designer*. I don't
> believe the word "meaning" needs to be ambiguous because there should
> be no disagreement that machines are not conscious.

Although you could say that a piece of code on its own doesn't have a
meaning until we ascribe it one, it does have a structure that can be
analyzed.
The idea of the field of programming language semantics is that programs
and programming languages are mathematical objects, making it possible
to study them using mathematical techniques. There are lots of ways to
give a semantics to a program or a language.

This is not to say that the intent of the programmer or language
designer doesn't matter -- those things do matter, a lot. Rather, I mean
to say that programmers and language designers can use the tools of
programming language semantics to make their intent clear. Not using
those tools leaves both language designers and programmers with a heavy
burden -- language designers would have to try to think about every
program that a programmer might write in advance (and many programs
aren't written by humans anyway), and programmers would have to try to
guess the intent of some designer. It doesn't have to be that way.

Lindsey