But the thing that goes wrong on the Web is not reflection. If a network protocol is liberal in what packets it accepts, I can send it packets that it really ought to reject but that it instead interprets and turns into behavior. By pushing this envelope, I can eventually get it into some ill-defined state, from which I end up smashing the stack & friends. If it robustly rejected everything that was even slightly ill-formed, there wouldn't be room for such attacks.
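To make that concrete, here is a toy sketch in Racket; the one-length-byte record format and the names are invented purely for illustration, not taken from any real protocol:

#lang racket

;; Toy record format -- [one length byte][payload] -- made up for this example.

;; Strict reader: the declared length must match what actually arrived,
;; otherwise the record is rejected outright.
(define (read-record/strict buf)
  (unless (and (positive? (bytes-length buf))
               (= (bytes-length buf) (add1 (bytes-ref buf 0))))
    (error 'read-record/strict "malformed record"))
  (subbytes buf 1))

;; Liberal reader: takes whatever is there, silently truncating or
;; zero-padding to the declared length.  Every byte it invents or
;; reinterprets is behavior the author never reasoned about.
(define (read-record/liberal buf)
  (cond
    [(zero? (bytes-length buf)) #""]
    [else
     (define len (bytes-ref buf 0))
     (define payload (subbytes buf 1 (min (bytes-length buf) (add1 len))))
     (bytes-append payload (make-bytes (- len (bytes-length payload)) 0))]))

;; A packet whose declared length (200) disagrees with its contents:
(define evil (bytes-append (bytes 200) #"hi"))

(bytes-length (read-record/liberal evil))  ; => 200, 198 of them invented
;; (read-record/strict evil)               ; => error: malformed record

The strict reader leaves the attacker nothing to work with; the liberal one happily manufactures 198 bytes of "payload" that never arrived on the wire, which is exactly the kind of behavior the attacker is fishing for.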
I like to describe such security problems in this way. There is a "logical" space of the system, which is what we can understand by starting with a well-defined kernel and applying well-defined rules of inference (i.e., what typical semanticists do). Then there is the actual "physical" space of the system, which is the world in which it executes. If the implementation does not actively guard against the things that live in the difference between the two, those things become points of potential vulnerability. That is, they need white blood cells.

For instance, consider URLs. There is the "logical" space of URLs against which the programmer actually wrote their program. They might have decided to embed authentication tokens in the URL:

  http://mydomain.com/cgi-bin/access?username=acowley

Their reasoning might have been, "The only time I create such URLs is after the user has authenticated, so the mere presence of the `username' field is sufficient to indicate that authentication has succeeded, so I need not re-authenticate AND I can provide the privileges of the user so named." This is actually perfectly valid reasoning *provided* nobody else can manufacture such a URL. As we know from the history of the Web until about 2005 (and on a few sites probably even today), the flaw was in this last -- entirely implicit -- step. (A sketch of such a handler appears below.)

But usually nothing in the semantics indicates all the ways other entities can masquerade as ones covered by the semantics.
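Here is a minimal sketch, using Racket's web-server, of the kind of handler that reasoning produces; the route and field names are made up for illustration, not anyone's actual code:

#lang racket
(require web-server/servlet
         web-server/servlet-env)

;; The handler behind the /cgi-bin/access URL.  It treats the mere
;; presence of a `username' binding as proof that authentication has
;; already happened -- sound in the "logical" space of URLs the program
;; itself mints, unsound in the "physical" space where anyone can type
;;   http://mydomain.com/cgi-bin/access?username=acowley
;; and collect acowley's privileges.
(define (access req)
  (define binds (request-bindings req))
  (if (exists-binding? 'username binds)
      (response/xexpr
       `(html (body (p ,(format "Welcome back, ~a; here are your private files."
                                (extract-binding/single 'username binds))))))
      (response/xexpr
       '(html (body (p "Please log in first."))))))

(serve/servlet access
               #:servlet-path "/cgi-bin/access"
               #:launch-browser? #f)

Nothing here distinguishes a URL the application minted after a real login from one an attacker typed by hand; the check has to live somewhere other than the shape of the URL (a server-side session, a signed token), which is precisely the implicit step that was missing.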
Shriram
