Re: What is Expressiveness in a Computer Language
Andreas Rossberg schrieb: AFAICT, ADT describes a type whose values can only be accessed by a certain fixed set of operations. Classes qualify for that, as long as they provide proper encapsulation. The first sentence is true if you associate a semantics (i.e. axioms) with the operations. Most OO languages don't have a place for expressing axioms (except via comments and handwaving), so they still don't fully qualify. Regards, jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Matthias Blume schrieb: Erlang relies on a combination of purity, concurrency, and message passing, where messages can carry higher-order values. Data structures are immutable, and each computational agent is a thread. Most threads consist of a loop that explicitly passes state around. It dispatches on some input event, applies a state transformer (which is a pure function), produces some output event (if necessary), and goes back to the beginning of the loop (by tail-calling itself) with the new state. Actually any Erlang process, when seen from the outside, is impure: it has observable state. However, from what I hear, such state is kept to a minimum. I.e. the state involved is just the state that's mandated by the purpose of the process, not by computational bookkeeping - you won't send file descriptors in a message, but maybe information about the state of some hardware, or about a permanent log. So to me, the approach of Erlang seems to amount to making pure programming so easy and efficient that you aren't tempted to introduce state that isn't already there. Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
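The state-passing loop described above can be sketched in Python. This is only a loose analogue (a queue and a while loop stand in for Erlang's mailbox and self-tail-call, and the function names are illustrative, not Erlang API):

```python
from collections import deque

def run_process(handle, state, mailbox):
    """Drive a process: dispatch each input event through a pure
    state transformer, collecting any output events."""
    outputs = []
    while mailbox:
        event = mailbox.popleft()
        # handle is a pure function: (event, state) -> (output, new_state)
        output, state = handle(event, state)
        if output is not None:
            outputs.append(output)
        # looping with the new state stands in for the tail call
    return state, outputs

# A tiny counter process: the state is just an integer.
def counter(event, n):
    if event == "inc":
        return None, n + 1
    if event == "get":
        return n, n
    return None, n

state, outs = run_process(counter, 0, deque(["inc", "inc", "get"]))
```

The essential point survives the translation: all state lives in the loop variable, and the transformer itself stays pure.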
Re: What is Expressiveness in a Computer Language
Darren New [EMAIL PROTECTED] writes: Andreas Rossberg wrote: AFAICT, ADT describes a type whose values can only be accessed by a certain fixed set of operations. No. AFAIU, an ADT defines the type based on the operations. The stack holding the integers 1 and 2 is the value push(2, push(1, empty())). There's no internal representation. The values and operations are defined by preconditions and postconditions. As a user of the ADT you get a specification consisting of a signature (which names the operations and defines the arity and argument types of each) and axioms which define the behaviour of the operations. The implementer has the task of producing an implementation (or model, to use algebraic specification terminology) that satisfies the specification. One model that comes for free is the term model, where the operations are treated as terms and the representation of a stack containing 2 and 1 would just be push(2, push(1, empty())). However, while that model comes for free, it isn't necessarily the most efficient one. Thus the non-free aspect of choosing a different implementation is that technically it requires an accompanying proof that the implementation satisfies the specification. -- http://mail.python.org/mailman/listinfo/python-list
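The term model described above can be sketched in Python: a stack is represented literally by the term that built it, and the observers are defined directly by the stack axioms top(push(x, s)) = x and pop(push(x, s)) = s. (An illustrative sketch, not tied to any particular specification language.)

```python
def empty():
    return ("empty",)

def push(x, s):
    # in the term model, a stack *is* the term that constructed it
    return ("push", x, s)

def top(s):
    # axiom: top(push(x, s)) = x
    if s[0] == "push":
        return s[1]
    raise ValueError("top of empty stack")

def pop(s):
    # axiom: pop(push(x, s)) = s
    if s[0] == "push":
        return s[2]
    raise ValueError("pop of empty stack")

s = push(2, push(1, empty()))
```

Note that the representation is exactly the syntax of the constructor applications, which is why this model "comes for free" from the specification.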
Re: What is Expressiveness in a Computer Language [correction]
Joe Marshall wrote: Andreas Rossberg wrote: Which is why this actually is a very bad example to chose for dynamic typing advocacy... ;-) Actually, this seems a *good* example. The problem seems to be that you end up throwing the baby out with the bathwater: your static type system stops catching the errors you want once you make it powerful enough to express certain programs. That is not the case: you can still express these programs, you just need to do an indirection through a datatype. - Andreas -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Paul Rubin schrieb: It starts to look like sufficiently powerful static type systems are confusing enough, that programming with them is at least as bug-prone as imperative programming in dynamically typed languages. The static type checker can spot type mismatches at compile time, but the types themselves are easier and easier to get wrong. That's where type inference comes into play. Usually you don't write the types, the compiler infers them for you, so you don't get them wrong. Occasionally, you still write down the types, if only for documentation purposes (the general advice is: do the type annotations for external interfaces, but not internally). BTW if you get a type wrong, you'll be told by the compiler, so this is still less evil than bugs in the code that pop up during testing (and *far* less evil than bugs that pop up after roll-out). And the general consensus among FPL programmers is that you get the hang of it fairly quickly (one poster mentioned two to three months - this doesn't seem to be slower than learning to interpret syntax error messages, so it's OK considering it's an entirely new kind of diagnostics). Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language [correction]
David Hopwood wrote: (defun blackhole (argument) (declare (ignore argument)) #'blackhole) I believe this example requires recursive types. It can also be expressed in a gradual typing system, but possibly only using an unknown ('?') type. ISTR that O'Caml at one point (before version 1.06) supported general recursive types, although I don't know whether it would have supported this particular example. No problem at all. It still is possible today if you really want: ~/> ocaml -rectypes Objective Caml version 3.08.3 # let rec blackhole x = blackhole;; val blackhole : 'b -> 'a as 'a = <fun> The problem is, though, that almost everything can be typed once you have unrestricted recursive types (e.g. missing arguments etc), and consequently many actual errors remain unflagged (which clearly shows that typing is not only about potential value class mismatches). Moreover, there are very few practical uses of such a feature, and they can always be coded easily with recursive datatypes. It is a pragmatic decision born from experience that you simply do *not want* to have this, even though you easily could. E.g. for OCaml, unrestricted recursive typing was removed as default because of frequent user complaints. Which is why this actually is a very bad example to choose for dynamic typing advocacy... ;-) - Andreas -- http://mail.python.org/mailman/listinfo/python-list
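For comparison, the dynamically typed version of the same function (given above in Lisp) is unremarkable; a direct Python rendering:

```python
def blackhole(argument):
    # ignores its argument and returns itself, so it can be
    # applied indefinitely: blackhole(1)(2)(3)...
    return blackhole

result = blackhole(1)(2)(3)
```

No type annotation is needed because nothing is ever checked; the recursive type only becomes visible when a static checker tries to assign the function a type.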
Re: What is Expressiveness in a Computer Language
Paul Rubin wrote: David Hopwood [EMAIL PROTECTED] writes: Note that I'm not claiming that you can check any desirable property of a program (that would contradict Rice's Theorem), only that you can express any dynamically typed program in a statically typed language -- with static checks where possible and dynamic checks where necessary. It starts to look like sufficiently powerful static type systems are confusing enough, that programming with them is at least as bug-prone as imperative programming in dynamically typed languages. The static type checker can spot type mismatches at compile time, but the types themselves are easier and easier to get wrong. My assertion above does not depend on having a sufficiently powerful static type system to express a given dynamically typed program. It is true for static type systems that are not particularly complicated, and suffice to express all dynamically typed programs. I disagree with your implication that in general, static type systems for practical languages are getting too confusing, but that's a separate issue. (Obviously one can find type systems in research proposals that are too confusing as they stand.) -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Pascal Costanza wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. What about programs where the types change at runtime? Staged compilation is perfectly compatible with static typing. Static typing only requires that it be possible to prove absence of some category of type errors when the types are known; it does not require that all types are known before the first-stage program is run. Can you change the types of the program that is already running, or are the levels strictly separated? There are, however, staged compilation systems that guarantee that the generated program will be typeable if the first-stage program is. ...and I guess that this reduces again the kinds of things you can express. (It's clear that to compare the expressiveness of statically and dynamically typed languages, the languages must be comparable in other respects than their type system. Staged compilation is the equivalent feature to 'eval'.) If it is equivalent to eval (i.e., executed at runtime), and the static type checker that is part of eval yields a type error, then you still get a type error at runtime! Pascal -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Pascal Costanza wrote: David Hopwood wrote: Pascal Costanza wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. What about programs where the types change at runtime? Staged compilation is perfectly compatible with static typing. Static typing only requires that it be possible to prove absence of some category of type errors when the types are known; it does not require that all types are known before the first-stage program is run. Can you change the types of the program that is already running, or are the levels strictly separated? In most staged compilation systems this is intentionally not permitted. But this is not a type system issue. You can't change the types in a running program because you can't change the program, period. And you can't do that because most designers of these systems consider directly self-modifying code to be a bad idea (I agree with them). Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. There are, however, staged compilation systems that guarantee that the generated program will be typeable if the first-stage program is. ...and I guess that this reduces again the kinds of things you can express. Yes. If you don't want that, use a system that does not impose this restriction/guarantee. (It's clear that to compare the expressiveness of statically and dynamically typed languages, the languages must be comparable in other respects than their type system. Staged compilation is the equivalent feature to 'eval'.) If it is equivalent to eval (i.e., executed at runtime), and the static type checker that is part of eval yields a type error, then you still get a type error at runtime! 
You can choose to use a system in which this is impossible because only typeable programs can be generated, or one in which non-typeable programs can be generated. In the latter case, type errors are detected at the earliest possible opportunity, as soon as the program code is known and before any of that code has been executed. (In the case of eval, OTOH, the erroneous code may cause visible side effects before any run-time error occurs.) I don't know what else you could ask for. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Pascal Costanza wrote: David Hopwood wrote: Pascal Costanza wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. What about programs where the types change at runtime? Staged compilation is perfectly compatible with static typing. Static typing only requires that it be possible to prove absence of some category of type errors when the types are known; it does not require that all types are known before the first-stage program is run. Can you change the types of the program that is already running, or are the levels strictly separated? In most staged compilation systems this is intentionally not permitted. But this is not a type system issue. You can't change the types in a running program because you can't change the program, period. And you can't do that because most designers of these systems consider directly self-modifying code to be a bad idea (I agree with them). Whether you consider something you cannot do with statically typed languages a bad idea or not is irrelevant. You were asking for things that you cannot do with statically typed languages. There are at least systems that a lot of people rely on (the JVM, .NET) that achieve runtime efficiency primarily by executing what is essentially self-modifying code. These runtime optimization systems have been researched primarily for the language Self, and also implemented in Strongtalk, CLOS, etc., to various degrees. Beyond that, I am convinced that the ability to update a running system without the need to shut it down can be an important asset. Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. 
...and this creates problems with moving data from one version of a program to the next. Pascal -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Pascal Costanza [EMAIL PROTECTED] writes: Whether you consider something you cannot do with statically typed languages a bad idea or not is irrelevant. You were asking for things that you cannot do with statically typed languages. The whole point of static type systems is to make sure that there are things that one cannot do. So the fact that there are such things is not an argument per se against static types. [ ... ] Beyond that, I am convinced that the ability to update a running system without the need to shut it down can be an important asset. And I am convinced that updating a running system in the style of, e.g., Erlang, can be statically typed. Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. ...and this creates problems with moving data from one version of a program to the next. How does this create such a problem? The problem is there in either approach. In fact, I believe that the best chance we have of addressing the problem is by adopting the "replace the code" model along with "translate the data where necessary at the time of replacement". Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: (In the case of eval, OTOH, the erroneous code may cause visible side effects before any run-time error occurs.) Not necessarily. You can replace the primitive eval by compile, which delivers a function encapsulating the program, so you can check the type of the function before actually running it. Eval itself can easily be expressed on top of this as a polymorphic function, which does not run the program if it does not have the desired type: eval ['a] s = typecase compile s of f : (() -> 'a) -> f () | _ -> raise TypeError - Andreas -- http://mail.python.org/mailman/listinfo/python-list
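The compile-then-check idea can be loosely sketched in Python. Since Python has no static types, a declared return annotation stands in for the real type of the compiled program, and the convention that the source defines a function named `main` is an assumption of this sketch, not anything from the post above:

```python
def compile_program(source):
    """Compile a program text into a zero-argument function.
    Convention (hypothetical): the source defines a function
    'main' carrying a return annotation."""
    env = {}
    exec(source, env)
    return env["main"]

def typed_eval(source, expected):
    """Like the typecase above: compile first, inspect the
    declared type, and refuse to run the program at all if it
    does not have the desired type."""
    f = compile_program(source)
    declared = f.__annotations__.get("return")
    if declared is not expected:
        raise TypeError("program does not have the desired type")
    return f()

result = typed_eval("def main() -> int:\n    return 6 * 7\n", int)
```

The point being illustrated is ordering: the check happens on the compiled function, before any of its side effects can occur.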
Re: What is Expressiveness in a Computer Language
Matthias Blume wrote: Pascal Costanza [EMAIL PROTECTED] writes: Whether you consider something you cannot do with statically typed languages a bad idea or not is irrelevant. You were asking for things that you cannot do with statically typed languages. The whole point of static type systems is to make sure that there are things that one cannot do. So the fact that there are such things is not an argument per se against static types. I am not arguing against static type systems. I am just responding to the question of what the things are that you cannot do in statically typed languages. [ ... ] Beyond that, I am convinced that the ability to update a running system without the need to shut it down can be an important asset. And I am convinced that updating a running system in the style of, e.g., Erlang, can be statically typed. Maybe. The interesting question then is whether you can express the kinds of dynamic updates that are relevant in practice. Because a static type system always restricts what kinds of runtime behavior you can express in your language. I am still skeptical, because changing the types at runtime is basically changing the assumptions that the static type checker has used to check the program's types in the first place. For example, all the approaches that I have seen in statically typed languages deal with adding to a running program (say, class fields and methods, etc.), but not with changing or removing parts of it. Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. ...and this creates problems with moving data from one version of a program to the next. How does this create such a problem? The problem is there in either approach. In fact, I believe that the best chance we have of addressing the problem is by adopting the "replace the code" model along with "translate the data where necessary at the time of replacement".
Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. ...and the "translate the data where necessary" approach is essentially triggered by a dynamic type test (if value x is of an old version of type T, update it to reflect the new version of type T [1]). QED. Pascal [1] BTW, that's also the approach taken in CLOS. -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Andreas Rossberg wrote: David Hopwood wrote: (In the case of eval, OTOH, the erroneous code may cause visible side effects before any run-time error occurs.) Not necessarily. You can replace the primitive eval by compile, which delivers a function encapsulating the program, so you can check the type of the function before actually running it. Eval itself can easily be expressed on top of this as a polymorphic function, which does not run the program if it does not have the desired type: eval ['a] s = typecase compile s of f : (() -> 'a) -> f () | _ -> raise TypeError What I meant was, in the case of eval in an untyped (dynamically typed) language. The approach you've just outlined is an implementation of staged compilation in a typed language. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Matthias Blume wrote: How does this create such a problem? The problem is there in either approach. In fact, I believe that the best chance we have of addressing the problem is by adopting the replace the code model along with a translate the data where necessary at the time of replacement. Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. Pardon my ignorance, but what is the Erlang solution to this problem? Marshall -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Pascal Costanza [EMAIL PROTECTED] writes: And I am convinced that updating a running system in the style of, e.g., Erlang, can be statically typed. Maybe. The interesting question then is whether you can express the kinds of dynamic updates that are relevant in practice. Because a static type system always restricts what kinds of runtime behavior you can express in your language. I am still skeptical, because changing the types at runtime is basically changing the assumptions that the static type checker has used to check the program's types in the first place. That's why I find the Erlang model to be more promising. I am extremely skeptical of code mutation at runtime which would change types, because to me types are approximations of program invariants. So if you do a modification that changes the types, it is rather likely that you did something that also changed the invariants, and existing code relying on those invariants will now break. For example, all the approaches that I have seen in statically typed languages deal with adding to a running program (say, class fields and methods, etc.), but not with changing to, or removing from it. Of course, there are good reasons for that: removing fields or changing their invariants requires that all /deployed/ code which relied on their existence or their invariants must be made aware of this change. This is a semantic problem which happens to reveal itself as a typing problem. By making types dynamic, the problem does not go away but is merely swept under the rug. Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. ...and this creates problems with moving data from one version of a program to the next. How does this create such a problem? The problem is there in either approach. 
In fact, I believe that the best chance we have of addressing the problem is by adopting the "replace the code" model along with "translate the data where necessary at the time of replacement". Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. ...and the "translate the data where necessary" approach is essentially triggered by a dynamic type test (if value x is of an old version of type T, update it to reflect the new version of type T [1]). QED. But this test would have to already exist in code that was deployed /before/ the change! How did this code know what to test for, and how did it know how to translate the data? Plus, how do you detect that some piece of data is of an old version of type T? If v has type T and T changes (whatever that might mean), how can you tell that v's type is the old T rather than the new T? Are there two different Ts around now? If so, how can you say that T has changed? The bottom line is that even the concept of changing types at runtime makes little sense. Until someone shows me a /careful/ formalization that actually works, I just can't take it very seriously. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Joe Marshall wrote: The point is that there exists (by construction) programs that can never be statically checked. I don't think you've shown that. I would like to see a more explicit construction of a dynamically typed [*] program with a given specification, where it is not possible to write a statically typed program that meets the same specification using the same algorithms. I think we're at cross-purposes here. It is trivial to create a static type system that has a `universal type' and defers all the required type narrowing to runtime. In effect, we've `turned off' the static typing (because everything trivially type checks at compile time). However, a sufficiently powerful type system must be incomplete or inconsistent (by Godel). A type system proves theorems about a program. If the type system is rich enough, then there are unprovable, yet true theorems. That is, programs that are type-safe but do not type check. A type system that could reflect on itself is easily powerful enough to qualify. AFAIK, it is *always* possible to construct such a statically typed program in a type system that supports typerec or gradual typing. The informal construction you give above doesn't contradict that, because it depends on reflecting on a [static] type system, and in a dynamically typed program there is no type system to reflect on. A static type system is usually used for universal proofs: For all runtime values of such and such... Dynamic checks usually provide existential refutations: This particular value isn't a string. It may be the case that there is no universal proof, yet no existential refutation. Note that I'm not claiming that you can check any desirable property of a program (that would contradict Rice's Theorem), only that you can express any dynamically typed program in a statically typed language -- with static checks where possible and dynamic checks where necessary. Sure. 
And I'm not saying that you cannot express these programs in a static language, but rather that there exist programs that have desirable properties (that run without error and produce the right answer) but that the desirable properties cannot be statically checked. The real question is how common these programs are, and what sort of desirable properties are we trying to check. -- http://mail.python.org/mailman/listinfo/python-list
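The `universal type' idea from the exchange above can be sketched in Python (the class and method names are illustrative): every value carries a tag, everything trivially "type checks" at the universal type, and all narrowing is deferred to runtime, where a failed check is exactly the "existential refutation" described:

```python
class Univ:
    """A universal type: every value is wrapped with a tag, and all
    narrowing is deferred to runtime checks."""
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

    def narrow(self, tag):
        # runtime existential refutation: "this value isn't a <tag>"
        if self.tag != tag:
            raise TypeError("expected %s, got %s" % (tag, self.tag))
        return self.value

def add(a, b):
    # statically, everything here is Univ -> Univ -> Univ, so a
    # checker has nothing left to reject; the real narrowing
    # happens at runtime inside narrow()
    return Univ("int", a.narrow("int") + b.narrow("int"))

three = add(Univ("int", 1), Univ("int", 2))
```

This is essentially what a dynamically typed language does implicitly for every value, which is why "turning off" static typing this way loses the universal guarantees while keeping the per-value refutations.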
Re: What is Expressiveness in a Computer Language
Marshall [EMAIL PROTECTED] writes: Matthias Blume wrote: How does this create such a problem? The problem is there in either approach. In fact, I believe that the best chance we have of addressing the problem is by adopting the "replace the code" model along with "translate the data where necessary at the time of replacement". Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. Pardon my ignorance, but what is the Erlang solution to this problem? Erlang relies on a combination of purity, concurrency, and message passing, where messages can carry higher-order values. Data structures are immutable, and each computational agent is a thread. Most threads consist of a loop that explicitly passes state around. It dispatches on some input event, applies a state transformer (which is a pure function), produces some output event (if necessary), and goes back to the beginning of the loop (by tail-calling itself) with the new state. When a code update is necessary, a special event is sent to the thread, passing along the new code to be executed. The old code, upon receiving such an event, ceases to go back to its own loop entry point (by simply not performing the self-tail-call). Instead it tail-calls the new code with the current state. If the new code can live with the same data structures that the old code had, then it can directly proceed to perform real work. If new invariants are to be established, it can first run a translation function on the current state before resuming operation. From what I hear from the Erlang folks that I have talked to, this approach works extremely well in practice. -- http://mail.python.org/mailman/listinfo/python-list
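The replacement protocol just described can be sketched in Python. The shape of the upgrade message and the function names are illustrative assumptions; a while loop again stands in for the self-tail-call:

```python
from collections import deque

def serve(handle, state, mailbox):
    """Event loop supporting Erlang-style code replacement: an
    ('upgrade', new_handle, translate) message swaps in new code,
    first translating the current state to the new representation."""
    log = []
    while mailbox:
        msg = mailbox.popleft()
        if isinstance(msg, tuple) and msg[0] == "upgrade":
            _, new_handle, translate = msg
            # stop looping on the old code; hand the (translated)
            # state over to the new code and continue there
            handle, state = new_handle, translate(state)
            continue
        state = handle(msg, state, log)
    return state, log

# version 1 keeps the state as a bare counter
def v1(msg, n, log):
    n = n + msg
    log.append(("v1", n))
    return n

# version 2 establishes a new invariant: state wrapped in a dict
def v2(msg, s, log):
    s = {"count": s["count"] + msg}
    log.append(("v2", s["count"]))
    return s

mailbox = deque([1, 2, ("upgrade", v2, lambda n: {"count": n}), 3])
state, log = serve(v1, 0, mailbox)
```

The translation function runs exactly once, at the moment of replacement, which is the point Matthias emphasizes: the new code's invariants are re-established before any new-code logic executes.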
Re: What is Expressiveness in a Computer Language
Matthias Blume wrote: Pascal Costanza [EMAIL PROTECTED] writes: And I am convinced that updating a running system in the style of, e.g., Erlang, can be statically typed. Maybe. The interesting question then is whether you can express the kinds of dynamic updates that are relevant in practice. Because a static type system always restricts what kinds of runtime behavior you can express in your language. I am still skeptical, because changing the types at runtime is basically changing the assumptions that the static type checker has used to check the program's types in the first place. That's why I find the Erlang model to be more promising. I am extremely skeptical of code mutation at runtime which would change types, because to me types are approximations of program invariants. So if you do a modification that changes the types, it is rather likely that you did something that also changed the invariants, and existing code relying on those invariants will now break. ...no, it will throw exceptions that you can catch and handle, maybe interactively. For example, all the approaches that I have seen in statically typed languages deal with adding to a running program (say, class fields and methods, etc.), but not with changing to, or removing from it. Of course, there are good reasons for that: removing fields or changing their invariants requires that all /deployed/ code which relied on their existence or their invariants must be made aware of this change. This is a semantic problem which happens to reveal itself as a typing problem. By making types dynamic, the problem does not go away but is merely swept under the rug. ...and yet this requirement sometimes comes up. Note that prohibiting directly self-modifying code does not prevent a program from specifying another program to *replace* it. ...and this creates problems with moving data from one version of a program to the next. How does this create such a problem? The problem is there in either approach. 
In fact, I believe that the best chance we have of addressing the problem is by adopting the "replace the code" model along with "translate the data where necessary at the time of replacement". Translating the data, i.e., re-establishing the invariants expected by the updated/replaced code, seems much harder (to me) in the case of self-modifying code. Erlang got this one right. ...and the "translate the data where necessary" approach is essentially triggered by a dynamic type test (if value x is of an old version of type T, update it to reflect the new version of type T [1]). QED. But this test would have to already exist in code that was deployed /before/ the change! How did this code know what to test for, and how did it know how to translate the data? In CLOS, the test is indeed already there from the beginning. It's part of the runtime semantics. The translation of the data is handled by 'update-instance-for-redefined-class, which is a generic function for which you can define your own methods. So this is user-extensible and can, for example, be provided as part of the program update. Furthermore, slot accesses typically go through generic functions (aka setters and getters in other languages) for which you can provide methods that can perform useful behavior in case the corresponding slots have gone. These methods can also be provided as part of the program update. Note that I am not claiming that CLOS provides the perfect solution, but it seems to work well enough that people use it in practice. Plus, how do you detect that some piece of data is of an old version of type T? If v has type T and T changes (whatever that might mean), how can you tell that v's type is the old T rather than the new T! Are there two different Ts around now? If so, how can you say that T has changed? Presumably, objects have references to their class metaobjects which contain version information and references to more current versions of themselves. This is not rocket science. 
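The dynamic version test Pascal describes can be sketched in Python. This is only a loose analogue of what CLOS does with update-instance-for-redefined-class; the version field, the registry, and the function names are illustrative assumptions:

```python
# user-extensible registry of translation functions, one per old
# version, playing the role of methods on
# update-instance-for-redefined-class (loose analogue)
UPGRADES = {
    1: lambda d: {"version": 2, "name": d["n"]},  # v1 field "n" renamed
}

CURRENT_VERSION = 2

def update_if_redefined(obj):
    """The dynamic type test: if obj belongs to an old version of
    its type, translate it forward until it is current."""
    while obj["version"] < CURRENT_VERSION:
        obj = UPGRADES[obj["version"]](obj)
    return obj

old = {"version": 1, "n": "stack"}   # instance created before the change
new = update_if_redefined(old)
```

This makes Pascal's point concrete: the *test* is generic and installed from the beginning, while the *translation* for any particular change is supplied later, as part of the program update.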
The bottom line is that even the concept of changing types at runtime makes little sense. Until someone shows me a /careful/ formalization that actually works, I just can't take it very seriously. Maybe this feature just isn't made for you. Pascal -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language [correction]
Andreas Rossberg wrote:

  ~/ ocaml -rectypes
        Objective Caml version 3.08.3
  # let rec blackhole x = blackhole;;
  val blackhole : 'b -> 'a as 'a = <fun>

The problem is, though, that almost everything can be typed once you have unrestricted recursive types (e.g. missing arguments etc), and consequently many actual errors remain unflagged (which clearly shows that typing is not only about potential value class mismatches). Moreover, there are very few practical uses of such a feature, and they can always be coded easily with recursive datatypes. It is a pragmatic decision born from experience that you simply do *not want* to have this, even though you easily could. E.g. for OCaml, unrestricted recursive typing was removed as default because of frequent user complaints. Which is why this actually is a very bad example to choose for dynamic typing advocacy... ;-) Actually, this seems a *good* example. The problem seems to be that you end up throwing the baby out with the bathwater: your static type system stops catching the errors you want once you make it powerful enough to express certain programs. So now it seems to come down to a matter of taste: use a restricted language and catch certain errors early or use an unrestricted language and miss certain errors. It is interesting that the PLT people have made this tradeoff as well. In the DrScheme system, there are different `language levels' that provide a restricted subset of Scheme. At the beginner level, first-class procedures are not allowed. This is obviously restrictive, but it has the advantage that extraneous parentheses (a common beginner mistake) cannot be misinterpreted as the intent to invoke a first-class procedure. -- http://mail.python.org/mailman/listinfo/python-list
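The claim that such functions "can always be coded easily with recursive datatypes" can be made concrete. A minimal Haskell sketch (the names are illustrative, not from the thread): the recursion in the type is named explicitly in a newtype instead of being inferred, which is exactly the annotation that -rectypes makes unnecessary in OCaml.

```haskell
-- The recursive type is written out by name: a Blackhole absorbs one
-- argument and yields another Blackhole, forever.
newtype Blackhole a = Blackhole (a -> Blackhole a)

blackhole :: Blackhole a
blackhole = Blackhole (\_ -> blackhole)

-- Feeding it any number of arguments is well-typed.
feed :: Blackhole a -> a -> Blackhole a
feed (Blackhole f) x = f x
```

The wrap/unwrap through the constructor is the only cost; the type checker otherwise accepts arbitrarily long chains of applications.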
Re: What is Expressiveness in a Computer Language
Marshall [EMAIL PROTECTED] writes: Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. Well, given Turing Machine equivalence... I'd mention retrospective software. But you can always implement the wanted retrospective features as a layer above the statically typed language. So the question is how much work the programmer needs to do to implement a given program with static typing or with dynamic typing. The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? I think there might be, but I've never been able to find a solid example of one. More than *never* typable, you want to identify some kind of software that is not *economically* statically typable. Was it costlier to develop the software developed in non statically typed programming languages than in a statically typed programming language? -- __Pascal Bourguignon__ http://www.informatimago.com/ I have challenged the entire quality assurance team to a Bat-Leth contest. They will not concern us again. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall [EMAIL PROTECTED] wrote: Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. You'd need to clarify what you mean by use static typing. Clearly, if you use forms of static typing that currently exist for Java, there's considerable limitation in expressiveness. If you mean a hypothetical perfect type system, then there would be no interesting programs you couldn't write, but such a system may not be possible to implement, and certainly isn't feasible. So it seems to me that we have this ideal point at which it is possible to write all correct or interesting programs, and impossible to write buggy programs. Pure static type systems try to approach that point from the conservative side. Pure dynamic systems (and I actually think that design-by-contract languages and such apply here just as well as type tagging) try to approach that point via runtime verification from the liberal side. (No, this has nothing to do with George Bush or Noam Chomsky... :) Either side finds that the closer they get, the more creative they need to be to avoid creating development work that's no longer commensurate with the gains involved (whether in safety or expressiveness). I think I am now convinced that the answer to the question of whether there is a class of type errors in dynamically checked languages is the same as the answer for static type systems: no. In both cases, it's just a question of what systems may be provided for the purposes of verifying (more or less conservatively or selectively) certain program properties. Type tags or other features that look like types are only relevant in that they encapsulate common kinds of constraints on program correctness without requiring the programmer to express those constraints in any more general purpose language. 
As for what languages to use right now, we are interestingly enough back where Xah Lee started this whole thread. All languages (except a few of the more extreme statically typed languages) are Turing complete, so in order to compare expressiveness, we'd need some other measure that considers some factor or factors beyond which operations are expressible. There are also what I call packaging issues, such as being able to run partly-wrong programs on purpose so that one would have the opportunity to do runtime analysis without having to, say, implement parts of some interface that one isn't interested in testing yet. These could also be solved in a statically typed language. (Although historically it hasn't been done that way.) I'll note very briefly that the built-in compiler for the Eclipse IDE has the very nice feature that when a particular method fails to compile, it can still produce a result by replacing the implementation of that method with one that just throws an Error. I've taken advantage of that on occasion, though it doesn't allow the same range of options as a language that will just go ahead and try the buggy operations. The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? I think there might be, but I've never been able to find a solid example of one. This seems to depend on the specific concept of equivalence of programs that you have in mind. -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall [EMAIL PROTECTED] writes: There are also what I call packaging issues, such as being able to run partly-wrong programs on purpose so that one would have the opportunity to do runtime analysis without having to, say, implement parts of some interface that one isn't interested in testing yet. These could also be solved in a statically typed language. (Although historically it hasn't been done that way.) I keep hearing this, but I guess I fail to understand it. How partly-wrong do you require the program to be? During development, I frequently sprinkle my (Haskell) program with 'undefined' and 'error implement later' - which then lets me run the implemented parts until execution hits one of the 'undefined's. The cost of static typing for running an incomplete program is thus simply that I have to name entities I refer to. I'd argue that this is good practice anyway, since it's easy to see what remains to be done. (Python doesn't seem to, but presumably a serious dynamically typed system would warn about references to entities not in scope?) -k -- If I haven't seen further, it is by standing in the footprints of giants -- http://mail.python.org/mailman/listinfo/python-list
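The workflow described above can be sketched concretely. A minimal Haskell example (function names are illustrative): one half of the program is still an 'error' stub, but the implemented half type-checks and runs, and the program only fails if something actually forces the stub.

```haskell
parse :: String -> [Int]
parse = map read . words

total :: [Int] -> Int
total = sum

-- Stubbed out: named and typed, but deliberately unimplemented.
report :: [Int] -> String
report = error "implement later"

-- The implemented pipeline runs fine; 'report' is never forced.
partialRun :: String -> Int
partialRun = total . parse
```

The cost is just what the post says: every entity referred to must at least be named and given a type.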
Re: What is Expressiveness in a Computer Language
Chris Smith schreef: So it seems to me that we have this ideal point at which it is possible to write all correct or interesting programs, and impossible to write buggy programs. I think that is a misconception. Even at the idealest point it will be possible (and easy) to write buggy programs. Gödel! -- Affijn, Ruud Gewoon is een tijger. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. In a statically typed, Turing-complete language that does not have a dynamic type, it is always *possible* to express a dynamically typed program, but this may require a global transformation of the program or use of an interpreter -- i.e. the Felleisen expressiveness of the language is reduced. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Ketil Malde wrote: Marshall [EMAIL PROTECTED] writes: There are also what I call packaging issues, such as being able to run partly-wrong programs on purpose so that one would have the opportunity to do runtime analysis without having to, say, implement parts of some interface that one isn't interested in testing yet. These could also be solved in a statically typed language. (Although historically it hasn't been done that way.) I keep hearing this, but I guess I fail to understand it. How partly-wrong do you require the program to be? During development, I frequently sprinkle my (Haskell) program with 'undefined' and 'error "implement later"' - which then lets me run the implemented parts until execution hits one of the 'undefined's. The cost of static typing for running an incomplete program is thus simply that I have to name entities I refer to. I'd argue that this is good practice anyway, since it's easy to see what remains to be done. (Python doesn't seem to, but presumably a serious dynamically typed system would warn about references to entities not in scope?) I'll give you my understanding of it, but bear in mind that I only program in statically typed languages, and I in fact do exactly what you describe above: stub out unimplemented methods. The issue is that, while stubbing out things may not be expensive, it is not free either. There is some mental switching cost to being in a state where one writes some code, wants to execute it, but can't, because of type system constraints that are globally applicable but not locally applicable to the task at hand, or to the specific subprogram one is working on right at that moment. Since I'm very used to doing it, it doesn't seem like an issue to me, but programmers in dynamic languages complain bitterly about this. It is my feeling that programming languages should try to be inclusive, and since this feature is easily enough accommodated (as a 'lenient' switch to execution), it ought to be. 
Marshall -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. So, how does this dynamic type work? It can't simply be the any type, because that type has no/few functions defined on it. It strikes me that *when* exactly binding happens will matter. In a statically typed language, it may be that all binding occurs at compile time, and in a dynamic language, it may be that all binding occurs at runtime. So you might have to change the binding mechanism as well. Does the dynamic type allow this? Marshall -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
On Mon, 26 Jun 2006 13:02:33 -0600, Chris Smith [EMAIL PROTECTED] wrote: George Neuner gneuner2/@comcast.net wrote: I worked in signal and image processing for many years and those are places where narrowing conversions are used all the time - in the form of floating point calculations reduced to integer to value samples or pixels, or to value or index lookup tables. Often the same calculation needs to be done for several different purposes. These are value conversions, not type conversions. Basically, when you convert a floating point number to an integer, you are not simply promising the compiler something about the type; you are actually asking the compiler to convert one value to another -- i.e., see to it that whatever this is now, it /becomes/ an integer by the time we're done. This also results in a type conversion, but you've just converted the value to the appropriate form. There is a narrowing value conversion, but the type conversion is perfectly safe. We're talking at cross purposes. I'm questioning whether a strong type system can be completely static as some people here seem to think. I maintain that it is simply not possible to make compile time guarantees about *all* runtime behavior and that, in particular, narrowing conversions will _always_ require runtime checking. I can know that my conversion of floating point to integer is going to produce a value within a certain range ... but, in general, the compiler can't determine what that range will be. If you mean my compiler can't, then this is probably the case. If you mean no possible compiler could, then I'm not sure this is really very likely at all. Again, the discussion is about narrowing the result. It doesn't matter how much the compiler knows about the ranges. When the computation mixes types, the range of the result can only widen as the compiler determines the types involved. George -- for email reply remove / from address -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Ketil Malde [EMAIL PROTECTED] writes: Marshall [EMAIL PROTECTED] writes: There are also what I call packaging issues, such as being able to run partly-wrong programs on purpose so that one would have the opportunity to do runtime analysis without having to, say, implement parts of some interface that one isn't interested in testing yet. These could also be solved in a statically typed language. (Although historically it hasn't been done that way.) I keep hearing this, but I guess I fail to understand it. How partly-wrong do you require the program to be? This conclusion is false. To be wrong, whether partly or fully, a program needs to specifically not conform to the requirements that the programmer gives it. You are assuming that only a completed program can be right. Let me give a counterexample. Consider this part of a CL program:

CL-USER(1): (defun foo (x)
              (declare (optimize speed (safety 0) (debug 0))
                       (fixnum x))
              (bar (the fixnum (1+ x))))
FOO
CL-USER(2):

This is of course not a complete program, because the function BAR is not yet defined. If I try to run it, it will of course get an error. But does that require then that I call this a partly wrong program? No, of course not; it is not a program; it is a function, and for my purposes it is completely correct (I've debugged it already :-) So what can we do with an incomplete program? Nothing? Well, no. Of course, we can't run it (or can we?). But even before talking about running the program, we can at least do some reasoning about it:

1. It seems to have some static type declarations (in a dynamic language? oh my :-).
2. The 1+ operation limits the kinds of operations that would be acceptable, even in the absence of static type declarations.
3. The declarations can be ignored by the CL implementation, so #2 might indeed come into play. I ensured that the declarations would be trusted in Allegro CL, by declaring high speed and low safety.
4. I can compile this function. 
Note that I get a warning about the incompleteness of the program:

CL-USER(2): (compile 'foo)
Warning: While compiling these undefined functions were referenced: BAR.
FOO
NIL
NIL
CL-USER(3):

5. I can disassemble the function. Note that it is a complete function, despite the fact that BAR is undefined:

CL-USER(3): (disassemble 'foo)
;; disassembly of #<Function FOO>
;; formals: X
;; constant vector:
;; 0: BAR
;; code start: #x406f07ec:
   0: 83 c0 04    addl  eax,$4
   3: 8b 5e 12    movl  ebx,[esi+18]  ; BAR
   6: b1 01       movb  cl,$1
   8: ff e7       jmp   *edi
CL-USER(4):

6. I can _even_ run the program! Yes, I get an error:

CL-USER(4): (foo 10)
Error: attempt to call `BAR' which is an undefined function. [condition type: UNDEFINED-FUNCTION]
Restart actions (select using :continue):
 0: Try calling BAR again.
 1: Return a value instead of calling BAR.
 2: Try calling a function other than BAR.
 3: Setf the symbol-function of BAR and call it again.
 4: Return to Top Level (an abort restart).
 5: Abort entirely from this (lisp) process.
[1] CL-USER(5):

7. But note first that I have many options, including retrying the call to BAR. What was this call to BAR? I can reason about that as well, by getting a backtrace:

[1] CL-USER(5): :zoom
Evaluation stack:
 (ERROR #<UNDEFINED-FUNCTION @ #x406f3dfa>)
 ->(BAR 11)
 [... EXCL::%EVAL ]
 (EVAL (FOO 10))
 (TPL:TOP-LEVEL-READ-EVAL-PRINT-LOOP)
 (TPL:START-INTERACTIVE-TOP-LEVEL #<TERMINAL-SIMPLE-STREAM [initial terminal io] fd 0/1 @ #x40120e5a> #<Function TOP-LEVEL-READ-EVAL-PRINT-LOOP> ...)
[1] CL-USER(6):

8. If I then define BAR, with or without compiling it, and then take that option to retry the call:

[1] CL-USER(6): (defun bar (x) (1- x))
BAR
[1] CL-USER(7): :cont
10
CL-USER(8):

Hmm, I was able to complete the program dynamically? Who ever heard of doing that? :-) During development, I frequently sprinkle my (Haskell) program with 'undefined' and 'error "implement later"' - which then lets me run the implemented parts until execution hits one of the 'undefined's. 
Why do it manually? And what do you do when you've hit the undefined? Start the program over? The cost of static typing for running an incomplete program is thus simply that I have to name entities I refer to. I'd argue that this is good practice anyway, since it's easy to see what remains to be done. Good practice, yes, but why not have the language help you to practice it? -- Duane Rettig[EMAIL PROTECTED]Franz Inc. http://www.franz.com/ 555 12th St., Suite 1450 http://www.555citycenter.com/ Oakland, Ca. 94607Phone: (510) 452-2000; Fax: (510) 452-0182 -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark wrote: Very impressive. It looks right to me and simple enough to understand. I must find the time to learn a modern FP language. Can you write a fold for this that prints the data as a binary tree of triples? I have to believe it isn't that hard

{- Refactoring this as a fold is left as an exercise to the reader -}

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark = (Cons 42 (Cons 3.14 (Cons "abc"
        (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
        (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4))
        (Cons (("w","x"),("y","z")) Nil)))))))))

main = print (toTree clark)

data Tree a = Empty | Node a (Tree a) (Tree a) deriving Show

toTree :: Clark a b c -> Tree (a,b,c)
toTree (Cons x (Cons y (Cons z rest))) =
    Node (x,y,z) (toTree (fst (lift_c rest))) (toTree (snd (lift_c rest)))
toTree _ = Empty

lift_c :: Clark (a,a) (b,b) (c,c) -> (Clark a b c, Clark a b c)
lift_c Nil = (Nil,Nil)
lift_c (Cons (x,y) rest) = (Cons x (fst (lift_c rest)), Cons y (snd (lift_c rest)))

-- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. It would depend on the type system, naturally. It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in response to the same question from you! The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. Perhaps, there is no such beast. Or, perhaps I just can't formulate it. Or, perhaps we have static type checkers which can do computations of unbounded complexity. However, I thought that one of the characteristics of type systems was that they did not allow unbounded complexity and weren't Turing Complete. The C++ type system is Turing complete, although in practical terms it limits how much processing power it will spend on types at compile time. I think templates only have to expand to seven levels, so you are severely limited here. If you allow Turing Complete type systems, then I would say no--every bug can be reformulated as a type error. If you require your type system to be less powerful, then some bugs must escape it. I don't think so. Even with a Turing complete type system, a program's runtime behavior is still something different from its static behavior. 
(This is the other side of the types are not tags issue--not only is it the case that there are things static types can do that tags can't, it is also the case that there are things tags can do that static types can't.) I agree. The point of static checking is to *not* run the program. If the type system gets too complicated, it may be a de-facto interpreter. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
The C++ type system is Turing complete, although in practical terms it limits how much processing power it will spend on types at compile time. I think templates only have to expand to seven levels, so you are severely limited here. You can adjust the iteration depth. However, as Turing-completeness implies the potential to create infinite loops, it enforces _some_ boundary. Otherwise the user hitting C-c will do :) Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote: Marshall wrote: Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. It would depend on the type system, naturally. It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in response to the same question from you! The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. Sorry, but can you elaborate on this last point a little? I think I missed something. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. So, how does this dynamic type work? http://citeseer.ist.psu.edu/abadi89dynamic.html It can't simply be the any type, because that type has no/few functions defined on it. It isn't. From the abstract of the above paper: [...] even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time. To handle such situations safely, we propose to add a type Dynamic whose values are pairs of a value v and a type tag T where v has the type denoted by T. Instances of Dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. Gradual typing as described in http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf is another alternative. The difference between gradual typing and a dynamic type is one of convenience rather than expressiveness -- gradual typing does not require explicit tagging and typecase constructs. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
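Haskell's base library happens to ship such a type as Data.Dynamic, following the value-plus-type-tag design in the quoted abstract: toDyn is the explicit tagging construct, and fromDynamic plays the role of the type safe typecase. A minimal sketch:

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.Maybe (mapMaybe)

-- A heterogeneous list, as one would build freely in a dynamically
-- typed language; each element carries its runtime type tag.
mixed :: [Dynamic]
mixed = [toDyn (1 :: Int), toDyn "two", toDyn (3 :: Int)]

-- The "typecase": fromDynamic succeeds only when the stored tag
-- matches the requested type, so non-Ints are safely skipped.
sumInts :: [Dynamic] -> Int
sumInts = sum . mapMaybe (fromDynamic :: Dynamic -> Maybe Int)
```

Here sumInts mixed yields 4, extracting the two Int elements and ignoring the String without any unsafe cast.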
Re: What is Expressiveness in a Computer Language
George Neuner gneuner2/@comcast.net wrote: We're talking at cross purposes. I'm questioning whether a strong type system can be completely static as some people here seem to think. I maintain that it is simply not possible to make compile time guarantees about *all* runtime behavior and that, in particular, narrowing conversions will _always_ require runtime checking. Nah, we're not at cross-purposes. I'm discussing the same thing you are. My answer is that while narrowing conversions are not (by definition) type-safe, a sufficiently strong type system can remove these type-unsafe narrowing conversions from a program. Again, the discussion is about narrowing the result. It doesn't matter how much the compiler knows about the ranges. When the computation mixes types, the range of the result can only widen as the compiler determines the types involved. Well, there are certain operations that can reduce the range. For example, dividing a number that's known to be in the range 6-10 by the exact constant 2 yields a result that's guaranteed to be in the range 3-5. That's a smaller range. That said, though, there is a more fundamental point here. When you perform a narrowing type conversion, one of two things must be true: (a) you know more about the types than the compiler, and thus know that the conversion is safe, or (b) your code is broken. Excluding the possibility that you've written buggy code, you must possess some knowledge that convinces you the conversion is safe. In that case, we need only allow the type system to have that same knowledge, and the problem is solved. (One thing worth pointing out here is that it's quite possible that you want to do something different depending on the kind of data. In that case, the sufficiently powerful type system would need to have rules so that an if statement creates a modified type environment to take that into account. 
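Case (a) above can also be reified in the types without a sufficiently smart compiler: make the narrowing conversion return evidence of success, so the programmer's extra range knowledge is discharged explicitly instead of assumed. A minimal Haskell sketch using toIntegralSized from base:

```haskell
import Data.Bits (toIntegralSized)
import Data.Int (Int8)

-- Checked narrowing: Nothing when the value is outside the target
-- type's range, so the unsafe case is visible in the type rather
-- than a silent truncation.
narrowToInt8 :: Int -> Maybe Int8
narrowToInt8 = toIntegralSized
```

A caller who "knows more than the compiler" pattern-matches the Just case; a caller who was wrong gets Nothing instead of a corrupted value. This is the runtime check George describes, but surfaced in the type system.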
This is different from a runtime type check, in that you are writing explicit code that provides correct program behavior in either case.) -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Dr.Ruud [EMAIL PROTECTED] wrote: So it seems to me that we have this ideal point at which it is possible to write all correct or interesting programs, and impossible to write buggy programs. I think that is a misconception. Even at the idealest point it will be possible (and easy) to write buggy programs. Gödel! I agree. I never said that the ideal point is achievable... only that there is a convergence toward it. (In some cases, that convergence may have stalled at some equilibrium point because of the costs of being near that point.) -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Marshall wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. So, how does this dynamic type work? http://citeseer.ist.psu.edu/abadi89dynamic.html It can't simply be the any type, because that type has no/few functions defined on it. It isn't. From the abstract of the above paper: [...] even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time. To handle such situations safely, we propose to add a type Dynamic whose values are pairs of a value v and a type tag T where v has the type denoted by T. Instances of Dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. Well, all this says is that the type dynamic is a way to explicitly indicate the inclusion of rtti. But that doesn't address my objection; if a typesafe typecase construct is required, it's not like using a dynamic language. They don't require typecase to inspect values before one can, say, invoke a function. Gradual typing as described in http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf is another alternative. The difference between gradual typing and a dynamic type is one of convenience rather than expressiveness -- gradual typing does not require explicit tagging and typecase constructs. Perhaps this is the one I should read; it sounds closer to what I'm talking about. Marshall -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote: Marshall wrote: Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. It would depend on the type system, naturally. Naturally! It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). C and Java, certainly, but I'm wary these days about making any statement about limitations on C++'s type system, for it is subtle and quick to anger. Looking back in comp.lang.lisp, I see these examples:

(defun noisy-apply (f arglist)
  (format t "I am now about to apply ~s to ~s" f arglist)
  (apply f arglist))

(defun blackhole (argument)
  (declare (ignore argument))
  #'blackhole)

But wait a sec. It seems that these were examples I invented in response to the same question from you! Ah, how well I remember that thread, and how little I got from it. My memories of that thread are 1) the troll who claimed that Java was unable to read two numbers from standard input and add them together. 2) The fact that all the smart people agreed the blackhole function indicated something profound. 3) The fact that I didn't understand the black hole function. The noisy-apply function I think I understand; it's generic on the entire arglist. In fact, if I read it correctly, it's even generic on the *arity* of the function, which is actually pretty impressive. True? This is an issue I've been wrestling with in my own type system investigations: how to address genericity across arity. Does noisy-apply get invoked the same way as other functions? That would be cool. As to the black hole function, could you explain it a bit? I apologize for my lisp-ignorance. I am sure there is a world of significance in the #' on the third line, but I have no idea what it is. 
If you allow Turing Complete type systems, then I would say no--every bug can be reformulated as a type error. If you require your type system to be less powerful, then some bugs must escape it. I don't think so. Even with a Turing complete type system, a program's runtime behavior is still something different from its static behavior. (This is the other side of the "types are not tags" issue--not only is it the case that there are things static types can do that tags can't, it is also the case that there are things tags can do that static types can't.) I agree. The point of static checking is to *not* run the program. If the type system gets too complicated, it may be a de-facto interpreter. Aha! A lightbulb just went off. Consider that a program is a function parameterized on its input. Static analysis is exactly the field of what we can say about a program without knowing the values of its parameters. Marshall -- http://mail.python.org/mailman/listinfo/python-list
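As a hedged aside on the "genericity across arity" question raised above (this sketch is mine, not from the thread): a modern statically typed language such as TypeScript can type an analogue of noisy-apply with variadic tuple types. The function accepts a function of any arity together with an argument tuple of matching length and types, and is invoked like any other function.

```typescript
// Sketch: a statically typed analogue of Lisp's noisy-apply.
// The tuple type A captures the arity and argument types of f,
// so a mismatched argument list is rejected at compile time.
function noisyApply<A extends unknown[], R>(f: (...args: A) => R, arglist: A): R {
  console.log(`I am now about to apply ${f.name} to ${JSON.stringify(arglist)}`);
  return f(...arglist);
}

const add = (x: number, y: number) => x + y;
const five = noisyApply(add, [2, 3]);   // arity and types checked statically
// noisyApply(add, [2]);                // rejected: wrong arity
```

The checker rejects a mismatched tuple before the program runs, which is one answer to whether such genericity must be given up under static typing.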
Re: What is Expressiveness in a Computer Language
Marshall wrote: David Hopwood wrote: Marshall wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. So, how does this dynamic type work? http://citeseer.ist.psu.edu/abadi89dynamic.html It can't simply be the any type, because that type has no/few functions defined on it. It isn't. From the abstract of the above paper: [...] even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time. To handle such situations safely, we propose to add a type Dynamic whose values are pairs of a value v and a type tag T where v has the type denoted by T. Instances of Dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. Well, all this says is that the type dynamic is a way to explicitly indicate the inclusion of rtti. But that doesn't address my objection; if a typesafe typecase construct is required, it's not like using a dynamic language. They don't require typecase to inspect values before one can, say, invoke a function. I was answering the question posed above: are there some programs that we can't write *at all* in a statically typed language...? Gradual typing as described in http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf is another alternative. The difference between gradual typing and a dynamic type is one of convenience rather than expressiveness -- gradual typing does not require explicit tagging and typecase constructs. Perhaps this is the one I should read; it sounds closer to what I'm talking about. Right; convenience is obviously important, as well as expressiveness. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
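A hedged sketch of the Dynamic idea quoted above (the names and encoding here are mine, not the paper's notation): a Dynamic is a pair of a value and a type tag, built with explicit tagging constructors and inspected with a type-safe typecase, in which each branch sees the value at its precise type.

```typescript
// A Dynamic value carries its type tag alongside the value.
type Dynamic =
  | { tag: "number"; value: number }
  | { tag: "string"; value: string }
  | { tag: "boolean"; value: boolean };

// Explicit tagging constructs.
function dynNumber(n: number): Dynamic { return { tag: "number", value: n }; }
function dynString(s: string): Dynamic { return { tag: "string", value: s }; }
function dynBoolean(b: boolean): Dynamic { return { tag: "boolean", value: b }; }

// Type-safe typecase: the discriminated union narrows each branch.
function typecase<R>(d: Dynamic, cases: {
  number: (n: number) => R;
  string: (s: string) => R;
  boolean: (b: boolean) => R;
}): R {
  switch (d.tag) {
    case "number":  return cases.number(d.value);
    case "string":  return cases.string(d.value);
    case "boolean": return cases.boolean(d.value);
  }
}

const len = typecase(dynString("hello"), {
  number: (n) => n,
  string: (s) => s.length,
  boolean: (b) => (b ? 1 : 0),
});
```

This also illustrates the convenience point made above: the tagging and typecase are explicit, which is exactly what gradual typing removes.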
Re: What is Expressiveness in a Computer Language [correction]
David Hopwood wrote: Marshall wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. Correction: as the paper on gradual typing referenced below points out in section 5, something like the typerec construct of http://citeseer.ist.psu.edu/harper95compiling.html would be needed, in order for a system with a dynamic type to be equivalent in expressiveness to dynamic typing or gradual typing. (typerec can be used to implement dynamic; it is more general.) So, how does this dynamic type work? http://citeseer.ist.psu.edu/abadi89dynamic.html It can't simply be the any type, because that type has no/few functions defined on it. It isn't. From the abstract of the above paper: [...] even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time. To handle such situations safely, we propose to add a type Dynamic whose values are pairs of a value v and a type tag T where v has the type denoted by T. Instances of Dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. Gradual typing as described in http://www.cs.rice.edu/~jgs3847/pubs/pubs/2006/siek06:_gradual.pdf is another alternative. The difference between gradual typing and a dynamic type This should be: the difference between gradual typing and a system with typerec... is one of convenience rather than expressiveness -- gradual typing does not require explicit tagging and typecase constructs. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood ...nospamuk wrote: A good debugger is invaluable regardless of your attitude to type systems. I found that certain language features help greatly in pinning down errors, when programming in my own impure fp language (PILS). Originally, I implemented a single-stepping debugger for the language, but I trashed it for three reasons, of which two are rather uninteresting here: it messed up the GUI system, and the evaluation order of PILS made single-stepping a confusing experience. The interesting reason is: I didn't need single-stepping, partly because the programming system is unit-test friendly, partly because of language features that seem to mimic the concept of taking responsibility. Basically, the PILS execution mechanism is pessimistic. When a function is called, the assumption is it won't work, and the only way the function can return a result is by throwing an exception. So, if a function call doesn't throw an exception, it has failed, and execution will stop unless the function call is explicitly marked as fault tolerant, as in f (x) else undefined. This has been merged with tail call flattening - rather than throwing the result, an (expression, binding) thing is thrown (I guess .NET'ers would call it an anonymous parameterless delegate), and the expression is then evaluated after the throw. Mimically speaking, when a function body performs this throw, it takes responsibility for getting the job done, and if it fails, it will suffer the peril of having its guts exposed in public by the error message window - much like certain medieval penal systems. I will spare you the gory details; just let me add that, to honor the demands of modern bureaucracy, I added a "try the other office" feature so that there are ways of getting things done without ever taking responsibility. Strangely, this all started 20 years ago as I struggled implementing an early PILS version on an 8-bit Z80 processor, at a blazing 4 MHz. 
I needed a fast way of escaping from a failing pattern match, and it turned out the conditional return operation of this particular processor was the fastest I could get. Somehow, the concept of "if it returns, it's bad" crept into the language design. I was puzzled by its expressiveness and the ease of using it; only years later did I realize that its ease of use came from the mimicking of responsibility. I'm still puzzled that the notion of mimicking a cultural/social concept came from a microprocessor instruction. It seems like I - and perhaps computer language designers in general - have a blind spot. We dig physical objects and mathematical theories, and don't take hints from the soft sciences... Perhaps I'm over-generalizing; it just strikes me that the dispute always seems to stand between math versus object orientation, with seemingly no systematic attempts at utilizing insights from the study of languages and cultural concepts. It's as if COBOL and VB scared us from ever considering the prospect of making programming languages readable to the uninitiated. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. What about programs where the types change at runtime? Pascal -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: Joe Marshall wrote: Marshall wrote: It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). C and Java, certainly, but I'm wary these days about making any statement about limitations on C++'s type system, for it is subtle and quick to anger. Looking back in comp.lang.lisp, I see these examples: (defun noisy-apply (f arglist) (format t "I am now about to apply ~s to ~s" f arglist) (apply f arglist)) (defun blackhole (argument) (declare (ignore argument)) #'blackhole) The noisy-apply function I think I understand; it's generic on the entire arglist. In fact, if I read it correctly, it's even generic on the *arity* of the function, which is actually pretty impressive. True? This is an issue I've been wrestling with in my own type system investigations: how to address genericity across arity. Does noisy-apply get invoked the same way as other functions? That would be cool. As to the blackhole function, could you explain it a bit? I apologize for my lisp-ignorance. I am sure there is a world of significance in the #' on the third line, but I have no idea what it is. You can ignore the #'. In Scheme this is as follows: (define (blackhole argument) blackhole) It just means that the function blackhole returns the function blackhole. Pascal -- 3rd European Lisp Workshop July 3 - Nantes, France - co-located with ECOOP 2006 http://lisp-ecoop06.bknr.net/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote: It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). Looking back in comp.lang.lisp, I see these examples: (defun noisy-apply (f arglist) (format t "I am now about to apply ~s to ~s" f arglist) (apply f arglist)) 'noisy-apply' is typeable using either of these systems: http://www.cs.jhu.edu/~pari/papers/fool2004/first-class_FOOL2004.pdf http://www.coli.uni-saarland.de/publikationen/softcopies/Muller:1998:TIF.pdf (it should be easy to see the similarity to a message forwarder). (defun blackhole (argument) (declare (ignore argument)) #'blackhole) This is typeable in any system with universally quantified types (including most practical systems with parametric polymorphism); it has type forall a . a -> #'blackhole. The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. A program that causes the type checker not to terminate (which is the effect of trying to construct such a thing in the type-reflective systems I know about) is hardly useful. In any case, I think the question implied the condition: "and that you can write in a language with no static type checking". [...] If you allow Turing Complete type systems, then I would say no--every bug can be reformulated as a type error. If you require your type system to be less powerful, then some bugs must escape it. I don't think so. Even with a Turing complete type system, a program's runtime behavior is still something different from its static behavior. 
(This is the other side of the types are not tags issue--not only is it the case that there are things static types can do that tags can't, it is also the case that there are things tags can do that static types can't.) I agree. The point of static checking is to *not* run the program. If the type system gets too complicated, it may be a de-facto interpreter. The following LtU post: http://lambda-the-ultimate.org/node/1085 argues that types should be expressed in the same language that is being typed. I am not altogether convinced, but I can see the writer's point. There do exist practical languages in which types can be defined as arbitrary predicates or coercion functions, for example E (www.erights.org). E implementations currently perform type checking only at runtime, however, although it was intended to allow static analysis for types that could be expressed in a conventional static type system. I think the delay in implementing this has more to do with the E developers' focus on other (security and concurrency) issues, rather than an inherent difficulty. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: Joe Marshall wrote: Looking back in comp.lang.lisp, I see these examples: (defun noisy-apply (f arglist) (format t "I am now about to apply ~s to ~s" f arglist) (apply f arglist)) (defun blackhole (argument) (declare (ignore argument)) #'blackhole) But wait a sec. It seems that these were examples I invented in response to the same question from you! Ah, how well I remember that thread, and how little I got from it. My memories of that thread are: 1) the troll who claimed that Java was unable to read two numbers from standard input and add them together; 2) the fact that all the smart people agreed the blackhole function indicated something profound; 3) the fact that I didn't understand the blackhole function. The noisy-apply function I think I understand; it's generic on the entire arglist. In fact, if I read it correctly, it's even generic on the *arity* of the function, which is actually pretty impressive. True? Yes. This is an issue I've been wrestling with in my own type system investigations: how to address genericity across arity. Does noisy-apply get invoked the same way as other functions? That would be cool. Yes, but in testing I found an incompatibility. Here's a better definition: (defun noisy-apply (f &rest args) (format t "~&Applying ~s to ~s." f (apply #'list* args)) (apply f (apply #'list* args))) CL-USER> (apply #'+ '(2 3)) 5 CL-USER> (noisy-apply #'+ '(2 3)) Applying #<function + 20113532> to (2 3). 5 The internal application of list* flattens the argument list. The Common Lisp APPLY function has variable arity and this change makes my NOISY-APPLY be identical. As to the blackhole function, could you explain it a bit? It is a function that takes any argument and returns the function itself. I apologize for my lisp-ignorance. I am sure there is a world of significance in the #' on the third line, but I have no idea what it is. The #' is a namespace operator. 
Since functions and variables have different namespaces, I'm indicating that I wish to return the function named blackhole rather than any variable of the same name that might be around. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Pascal Costanza wrote: David Hopwood wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? In a statically typed language that has a dynamic type, all dynamically typed programs are straightforwardly expressible. What about programs where the types change at runtime? Staged compilation is perfectly compatible with static typing. Static typing only requires that it be possible to prove absence of some category of type errors when the types are known; it does not require that all types are known before the first-stage program is run. There are, however, staged compilation systems that guarantee that the generated program will be typeable if the first-stage program is. (It's clear that to compare the expressiveness of statically and dynamically typed languages, the languages must be comparable in other respects than their type system. Staged compilation is the equivalent feature to 'eval'.) -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
QCD Apprentice wrote: Joe Marshall wrote: Marshall wrote: The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. Sorry, but can you elaborate on this last point a little? I think I missed something. This is just the halting problem lifted up into the type domain. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
George Neuner wrote: That was interesting, but the authors' method still involves runtime checking of the array bounds. IMO, all they really succeeded in doing was turning the original recursion into CPS and making the code a little bit clearer. Hmm. There is a comparison in both the DML and Haskell code, but that's just there to make the binary search terminate correctly. The point of the exercise is the line in the DML code val m = (hi + lo) div 2, or the bmiddle function in the Haskell code. If you change that line to something like val m = (hi + lo) you'll get a compiler error complaining about an unsolved constraint. The point of the Haskell code is to use the type system to ensure that array indices have a known provenance. They're derived from and statically associated with each individual array, so you can't use an index from one array with a different array. You'll probably also enjoy the paper... Eliminating Array Bound Checking Through Dependent Types http://citeseer.ist.psu.edu/xi98eliminating.html ...and DML itself... http://www.cs.bu.edu/~hwxi/DML/DML.html Greg Buchholz -- http://mail.python.org/mailman/listinfo/python-list
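As a hedged sketch (the names and encoding are mine): mainstream languages lack DML's dependent types, but the "known provenance" half of the technique described above -- an index statically tied to one particular array -- can be approximated with a branded index type. The bounds check runs once, when the index is created; lookups through a valid index need no further check (assuming the array is not resized afterwards).

```typescript
// An Idx<Brand> is a number statically associated with one array.
type Idx<Brand extends string> = number & { readonly __provenance: Brand };

class CheckedArray<Brand extends string, T> {
  constructor(private items: T[]) {}

  // The only place a runtime bounds check happens.
  index(i: number): Idx<Brand> | undefined {
    return i >= 0 && i < this.items.length ? (i as Idx<Brand>) : undefined;
  }

  // No check needed: an Idx<Brand> is in bounds by construction.
  get(i: Idx<Brand>): T {
    return this.items[i];
  }
}

const xs = new CheckedArray<"xs", number>([10, 20, 30]);
const ys = new CheckedArray<"ys", string>(["a"]);

const i = xs.index(1)!;   // Idx<"xs">
const v = xs.get(i);      // fine: index and array share provenance
// ys.get(i);             // rejected statically: Idx<"xs"> is not Idx<"ys">
```

Unlike DML this cannot prove that (hi + lo) div 2 stays in range, so it trades the unsolved-constraint error for a single runtime check at index creation.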
Re: What is Expressiveness in a Computer Language
Pascal Costanza [EMAIL PROTECTED] wrote: You can ignore the #'. In Scheme this is as follows: (define (blackhole argument) blackhole) It just means that the function blackhole returns the function blackhole. So, in other words, it's equivalent to (Y (\fa.f)) in lambda calculus, where \ is lambda, and Y is the fixed point combinator? -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote: It isn't clear to me which programs we would have to give up, either. I don't have much experience in sophisticated typed languages. It is rather easy to find programs that baffle an unsophisticated typed language (C, C++, Java, etc.). Looking back in comp.lang.lisp, I see these examples: (defun noisy-apply (f arglist) (format t "I am now about to apply ~s to ~s" f arglist) (apply f arglist)) Just for fun, here is how you might do apply in Haskell...

{-# OPTIONS -fglasgow-exts #-}

class Apply x y z | x y -> z where
    apply :: x -> y -> z
instance Apply (a->b) a b where
    apply f x = f x
instance Apply b as c => Apply (a->b) (a,as) c where
    apply f (x,xs) = apply (f x) xs

f :: Int -> Double -> String -> Bool -> Int
f x y z True  = x + floor y * length z
f x y z False = x * floor y + length z

args = (1::Int,(3.1415::Double,("flub",True)))

main = print $ apply f args

...which depends on the fact that functions are automatically curried in Haskell. You could do something similar for fully curried function objects in C++. Also of possible interest... Functions with the variable number of (variously typed) arguments http://okmij.org/ftp/Haskell/types.html#polyvar-fn ...But now I'm curious about how to create the apply function in a language like Scheme. I suppose you could do something like... (define (apply fun args) (eval (cons fun args))) ...but eval seems a little like overkill. Is there a better way? Greg Buchholz -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Joe Marshall wrote: (defun blackhole (argument) (declare (ignore argument)) #'blackhole) This is typeable in any system with universally quantified types (including most practical systems with parametric polymorphism); it has type forall a . a -> #'blackhole. The #' is just a namespace operator. The function accepts a single argument and returns the function itself. You need a type that is a fixed point (in addition to the universally quantified argument). The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. A program that causes the type checker not to terminate (which is the effect of trying to construct such a thing in the type-reflective systems I know about) is hardly useful. In any case, I think the question implied the condition: "and that you can write in a language with no static type checking". Presumably the remainder of the program does something interesting! The point is that there exist (by construction) programs that can never be statically checked. The next question is whether such programs are `interesting'. Certainly a program that is simply designed to contradict the type checker is of limited entertainment value, but there should be programs that are independently non-checkable against a sufficiently powerful type system. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language [correction]
David Hopwood wrote: Joe Marshall wrote: (defun blackhole (argument) (declare (ignore argument)) #'blackhole) This is typeable in any system with universally quantified types (including most practical systems with parametric polymorphism); it has type forall a . a -> #'blackhole. Ignore this answer; I misinterpreted the code. I believe this example requires recursive types. It can also be expressed in a gradual typing system, but possibly only using an unknown ('?') type. ISTR that O'Caml at one point (before version 1.06) supported general recursive types, although I don't know whether it would have supported this particular example. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
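As a hedged sketch of the recursive type mentioned above (written in TypeScript, whose interfaces permit exactly this kind of self-reference): blackhole consumes any argument and returns a value of its own type, so every chain of applications is still the same function.

```typescript
// The recursive type: a Blackhole takes anything and returns a Blackhole.
interface Blackhole {
  (argument: unknown): Blackhole;
}

// The function simply returns itself, ignoring its argument.
const blackhole: Blackhole = (_argument: unknown) => blackhole;

// Arbitrarily long application chains remain well-typed.
const still: Blackhole = blackhole(1)("two")(true);
```

This is the statically typed rendering of the Lisp original; the universal quantification over the argument shows up here as the `unknown` parameter type.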
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote: David Hopwood wrote: Joe Marshall wrote: (defun blackhole (argument) (declare (ignore argument)) #'blackhole) This is typeable in any system with universally quantified types (including most practical systems with parametric polymorphism); it has type forall a . a -> #'blackhole. The #' is just a namespace operator. The function accepts a single argument and returns the function itself. You need a type that is a fixed point (in addition to the universally quantified argument). Yes, see the correction in my follow-up. The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? Certainly! As soon as you can reflect on the type system you can construct programs that type-check iff they fail to type-check. A program that causes the type checker not to terminate (which is the effect of trying to construct such a thing in the type-reflective systems I know about) is hardly useful. In any case, I think the question implied the condition: "and that you can write in a language with no static type checking". Presumably the remainder of the program does something interesting! The point is that there exist (by construction) programs that can never be statically checked. I don't think you've shown that. I would like to see a more explicit construction of a dynamically typed [*] program with a given specification, where it is not possible to write a statically typed program that meets the same specification using the same algorithms. AFAIK, it is *always* possible to construct such a statically typed program in a type system that supports typerec or gradual typing. The informal construction you give above doesn't contradict that, because it depends on reflecting on a [static] type system, and in a dynamically typed program there is no type system to reflect on. 
[*] Let's put aside disagreements about terminology for the time being, although I still don't like the term dynamically typed. The next question is whether such programs are `interesting'. Certainly a program that is simply designed to contradict the type checker is of limited entertainment value, I don't know, it would probably have more entertainment value than most executive toys :-) but there should be programs that are independently non checkable against a sufficiently powerful type system. Why? Note that I'm not claiming that you can check any desirable property of a program (that would contradict Rice's Theorem), only that you can express any dynamically typed program in a statically typed language -- with static checks where possible and dynamic checks where necessary. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
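A hedged illustration (mine, not Hopwood's) of "static checks where possible and dynamic checks where necessary": in TypeScript, a user-defined type guard performs the dynamic tag check, and inside the guarded branch the checker then treats the value statically at the narrowed type, so no further checks are needed there.

```typescript
// Dynamic check where necessary: a runtime tag test...
function isNumber(x: unknown): x is number {
  return typeof x === "number";
}

// ...static checks where possible: inside the branch, x is a number.
function describe(x: unknown): string {
  if (isNumber(x)) {
    return `number: ${x.toFixed(2)}`;   // statically a number here
  }
  return "something else";              // number has been excluded
}
```

The single dynamic test at the boundary buys static typing for everything downstream of it, which is the division of labor the post describes.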
Re: What is Expressiveness in a Computer Language
David Hopwood [EMAIL PROTECTED] writes: Note that I'm not claiming that you can check any desirable property of a program (that would contradict Rice's Theorem), only that you can express any dynamically typed program in a statically typed language -- with static checks where possible and dynamic checks where necessary. It starts to look like sufficiently powerful static type systems are confusing enough that programming with them is at least as bug-prone as imperative programming in dynamically typed languages. The static type checker can spot type mismatches at compile time, but the types themselves are easier and easier to get wrong. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
On Sun, 25 Jun 2006 20:11:22 +0200, Anton van Straaten [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: In this context, the term "latently-typed language" refers to the language that a programmer experiences, not to the subset of that language which is all that we're typically able to formally define. That language is not a subset; if at all, it's the other way round, but I'd say they are rather incomparable. That is, they are different languages. The subset characterization is not important for what I'm saying. The fact that they are different languages is what's important. If you agree about that, then you can at least understand which language I'm referring to when I say "latently-typed language". Besides, many dynamically-typed languages have no formal models, in which case the untyped formal model I've referred to is just a speculative construct. The language I'm referring to with "latently-typed language" is the language that programmers are familiar with, and work with. That is starting to get a bit too mystical for my tastes. I have to agree. \sarcasm One step further, and somebody starts calling C a latently memory-safe language, because a real programmer knows that his code is in a safe subset... And where he is wrong, dynamic memory page protection checks will guide him. That's a pretty apt comparison, and it probably explains how it is that the software we all use, which relies so heavily on C, works as well as it does. But the comparison critiques the practice of operating without static guarantees; it's not a critique of the terminology. Anton Actually I have never developed a C/C++ program without a bounds checker the last 15 years. It checks all memory references and on program shutdown checks for memory leaks. What is it about you guys that makes you blind to these facts? Allocation problems haven't really bugged me at all since forever. Now debugging flaky templates, on the other hand... But then debugging Lisp macros isn't much better. 
-- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/ -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris Smith wrote: What makes static type systems interesting is not the fact that these logical processes of reasoning exist; it is the fact that they are formalized with definite axioms and rules of inference, performed entirely on the program before execution, and designed to be entirely evaluatable in finite time bounds according to some procedure or set of rules, so that a type system may be checked and the same conclusion reached regardless of who (or what) is doing the checking. All that, and they still reach interesting conclusions about programs. There's only one issue mentioned there that needs to be negotiated w.r.t. latent types: the requirement that they are performed entirely on the program before execution. More below. If informal reasoning about types doesn't meet these criteria (and it doesn't), then it simply doesn't make sense to call it a type system (I'm using the static terminology here). It may make sense to discuss some of the ways that programmer reasoning resembles types, if indeed there are resemblances beyond just that they use the same basic rules of logic. It may even make sense to try to identify a subset of programmer reasoning that even more resembles... or perhaps even is... a type system. It still doesn't make sense to call programmer reasoning in general a type system. I'm not trying to call programmer reasoning in general a type system. I'd certainly agree that a programmer muddling his way through the development of a program, perhaps using a debugger to find all his problems empirically, may not be reasoning about anything that's sufficiently close to types to call them that. But latent means what it implies: someone who is more aware of types can do better at developing the latent types. 
As a starting point, let's assume we're dealing with a disciplined programmer who (following the instructions found in books such as the one at htdp.org) reasons about types, writes them in his comments, and perhaps includes assertions (or contracts in the sense of Contracts for Higher Order Functions[1]), to help check his types at runtime (and I'm talking about real type-checking, not just tag checking). When such a programmer writes a type signature as a comment associated with a function definition, in many cases he's making a claim that's provable in principle about the inputs and outputs of that function. For example, if the type in question is int -> int, then the provable claim is that given an int, the function cannot return anything other than an int. Voila, assuming we can actually prove our claim, then we've satisfied the requirement in Pierce's definition of type system, i.e. "proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute". The fact that the proof in question may not be automatically verified is no more relevant than the fact that so many proofs in mathematics have not been automatically verified. Besides, if the proof turns out to be wrong, we at least have an automated mechanism for finding witnesses to the error: runtime tag checks will generate an error. Such detection is not guaranteed, but again, proof does not imply automation. What I've just described walks like a type and talks like a type, or at least it appears to do so according to Pierce's definition. We can classify many terms in a given dynamically-typed program on this basis (although some languages may be better at supporting this than others). So, on what grounds do you reject this as not being an example of a type, or at the very least, something which has clear and obvious connections to formal types, as opposed to simply being arbitrary programmer reasoning? Is it the lack of a formalized type system, perhaps? 
I assume you recognize that it's not difficult to define such a system. Regarding errors, we haven't proved that the function in question can never be called with something other than an int, but we haven't claimed to prove that, so there's no problem there. I've already described how errors in our proofs can be detected. Another possible objection is that I'm talking about local cases, and not making any claims about extending it to an entire program (other than to observe that it can be done). But let's take this a step at a time: considered in isolation, if we can assign types to terms at the local level in a program, do you agree that these are really types, in the type theory sense? If you were to agree, then we could move on to looking at what it means to have many local situations in which we have fairly well-defined types, which may all be defined within a common type system. As an initial next step, we could simply rely on the good old universal type everywhere else in the program - it ought to be possible to make the program well-typed in that case, it would just mean that the provable properties would be nowhere near as pervasive as with a traditional statically-typed program.
Re: What is Expressiveness in a Computer Language
John Thingstad wrote: On Sun, 25 Jun 2006 20:11:22 +0200, Anton van Straaten [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: ... \sarcasm One step further, and somebody starts calling C a latently memory-safe language, because a real programmer knows that his code is in a safe subset... And where he is wrong, dynamic memory page protection checks will guide him. That's a pretty apt comparison, and it probably explains how it is that the software we all use, which relies so heavily on C, works as well as it does. But the comparison critiques the practice of operating without static guarantees, it's not a critique of the terminology. Anton Actually I have never developed a C/C++ program without a bounds checker the last 15 years. It checks all memory references and on program shutdown checks for memory leaks. What is it about you guys that makes you blind to these facts? You misunderstand -- for the purposes of the above comparison, a bounds checker serves essentially the same purpose as dynamic memory page protection checks. The point is that it happens dynamically, i.e. at runtime, and that there's a lack of static guarantees about memory safety in C or C++. That's why, as I said, the comparison to latent vs. static typing is an apt one. Anton -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joachim Durchholz wrote: Anton van Straaten schrieb: Marshall wrote: Can you be more explicit about what latent types means? Sorry, that was a huge omission. (What I get for posting at 3:30am.) The short answer is that I'm most directly referring to the types in the programmer's head. Ah, finally that terminology starts to make sense to me. I have been wondering whether there's any useful difference between latent and run-time typing. (I tend to avoid the term dynamic typing because it's overloaded with too many vague ideas.) Right, that's the reason for using a different term. Runtime and dynamic types are both usually associated with the idea of tagged values, and don't address the types of program phrases. there are usually many possible static type schemes that can be assigned to a given program. This seems to apply to latent types as well. That's what I meant, yes. Actually the set of latent types seems to be the set of possible static type schemes. Um, well, a superset of these - static type schemes tend to be slightly less expressive than what the programmer has in his head. (Most type schemes cannot really express things like the range of this index variable is such-and-so, and the boundary to general assertions about the code is quite blurry anyway.) Yes, although this raises the type theory objection of how you know something is a type if you haven't formalized it. For the purposes of comparison to static types, I'm inclined to be conservative and stick to things that have close correlates in traditional static types. There's a close connection between latent types in the sense I've described, and the tagged values present at runtime. However, as type theorists will tell you, the tags used to tag values at runtime, as e.g. a number or a string or a FooBar object, are not the same thing as the sort of types which statically-typed languages have.
Would that be a profound difference, or is it just that annotating a value with a full type expression would cause just too much runtime overhead? It's a profound difference. The issue is that it's not just the values that need to be annotated with types, it's also other program terms. In addition, during a single run of a program, all it can ever normally do is record the types seen on the path followed during that run, which doesn't get you to static types of terms. To figure out the static types, you really need to do static analysis. Of course, static types give an approximation of the actual types of values that flow at runtime, and if you come at it from the other direction, recording the types of values flowing through terms on multiple runs, you can get an approximation to the static type. Some systems use that kind of thing for method lookup optimization. In your terminology: So, where do tagged values fit into this? Tags help to check types at runtime, but that doesn't mean that there's a 1:1 correspondence between tags and types. Would it be possible to establish such a correspondence, would it be common consensus that such a system should be called tags anymore, or are there other, even more profound differences? There's a non 1:1 correspondence which I gave an example of in the quoted message. For 1:1 correspondence, you'd need to associate types with terms, which would need some static analysis. It might result in an interesting program browsing and debugging tool... Anton -- http://mail.python.org/mailman/listinfo/python-list
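The non-1:1 correspondence between tags and types can be made concrete with a small sketch (Python; the example values are ours): distinct latent types share a single runtime tag, which is why recovering types of terms requires static analysis rather than tag inspection.

```python
def inc(n):        # latent type: int -> int
    return n + 1

def shout(s):      # latent type: str -> str
    return s.upper()

# One tag, two latent types: the runtime tag of a function records
# nothing about its argument or result types.
assert type(inc) is type(shout)

# Likewise [1, 2] and ["a", "b"] share the tag "list", though their
# latent types (list of int vs. list of str) differ.
assert type([1, 2]) is type(["a", "b"])
```

The tags help check types at runtime (calling `shout(3)` fails), but no amount of tag inspection yields the 1:1 term-to-type association that a static analysis would produce.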
Re: What is Expressiveness in a Computer Language
Rob Thorpe [EMAIL PROTECTED] writes: I think statements like this are confusing, because there are different interpretations of what a value is. But I mean the value as the semantics of the program itself sees it. Which mostly means the datum in memory. I don't agree with that. Generally, a language specifies a virtual machine and doesn't need to concern itself with things like memory at all. Although languages like C try to - look at all the undefined behavior you get when you make assumptions about memory layout etc. Memory representation is just an artifact of a particular implementation of the language for a particular architecture. -k -- If I haven't seen further, it is by standing in the footprints of giants -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Anton van Straaten [EMAIL PROTECTED] wrote: I'm not trying to call programmer reasoning in general a type system. I'd certainly agree that a programmer muddling his way through the development of a program, perhaps using a debugger to find all his problems empirically, may not be reasoning about anything that's sufficiently close to types to call them that. Let me be clear, then. I am far less agreeable than you that this couldn't be called a type system. One of my goals here is to try to understand, if we are going to work out what can and can't be called a type system in human thought, how we can possibly draw a boundary between kinds of logic and say that one of them is a type system while the other is not. I haven't seen much of a convincing way to do it. The only things that I can see rejecting out of hand are the empirical things; confidence gained through testing or observation of the program in a debugger. Everything else seems to qualify, and that's the problem. So let's go ahead and agree about everything up until... [...] As an initial next step, we could simply rely on the good old universal type everywhere else in the program - it ought to be possible to make the program well-typed in that case, it would just mean that the provable properties would be nowhere near as pervasive as with a traditional statically-typed program. It would, in fact, mean that interesting provable properties would only be provable if data can't flow outside of those individual small bits where the programmer reasoned with types. Of course that's not true, or the programmer wouldn't have written the rest of the program in the first place. If we define the universal type as a variant type (basically, tagged or something like it, so that the type can be tested at runtime) then certain statements become provable, of the form either X or some error condition is raised because a value is inappropriate.
That is indeed a very useful property; but wouldn't it be easier to simply prove it by appealing to the language semantics that say that if not X, the operation raises an exception, or to the existence of assertions within the function that verify X, or some other such thing (which must exist, or the statement wouldn't be true)? But the point is we would now have put our types into a formal context. Yes, but somewhat vacuously so... The point about inherent informality is just that if you fully formalize a static type system for a dynamic language, such that entire programs can be statically typed, you're likely to end up with a static type system with the same disadvantages that the dynamic language was trying to avoid. Okay, so you've forced the admission that you have a static type system that isn't checked and doesn't prove anything interesting. If you extended the static type system to cover the whole program, then you'd have a statically typed language that lacks an implementation of the type checker. I don't see what we lose in generality by saying that the language lacks a static type system, since it isn't checked, and doesn't prove anything anyway. The remaining statement is that the programmer may use static types to reason about the code. But, once again, the problem here is that I don't see any meaning to that. It would basically mean the same thing as that the programmer may use logic to reason about code. That isn't particularly surprising to me. I suppose this is why I'm searching for what is meant by saying that programmers reason about types, and am having that conversation in several places throughout this thread... because otherwise, it doesn't make sense to point out that programmers reason with types. Well, I'm trying to use the term latent type, as a way to avoid the ambiguity of type alone, to distinguish it from static type, and to get away from the obvious problems with dynamic type. 
I replied to your saying something to Marshall about the type systems (in the static sense) of dynamically-typed languages. That's what my comments are addressing. If you somehow meant something other than the static sense, then I suppose this was all a big misunderstanding. But I'm much less interested in the term itself than in the issues surrounding dealing with real types in dynamically-checked programs - if someone had a better term, I'd be all in favor of it. I will only repeat again that in static type systems, the type becomes a degenerate concept when it is removed from the type system. It is only the type system that makes something a type. Therefore, if you want real types, then my first intuition is that you should avoid looking in the static types of programming languages. At best, they happen to coincide frequently with real types, but that's only a pragmatic issue if it means anything at all. So when well-typed program fragments are considered as part of a larger untyped program, you're
Re: What is Expressiveness in a Computer Language
Chris Smith [EMAIL PROTECTED] writes: I've since abandoned any attempt to be picky about use of the word type. That was a mistake on my part. I still think it's legitimate to object to statements of the form statically typed languages X, but dynamically typed languages Y, in which it is implied that Y is distinct from X. When I used the word type above, I was adopting the working definition of a type from the dynamic sense. Umm...what *is* the definition of a dynamic type? In this thread it's been argued from tags and memory representation to (almost?) arbitrary predicates. That is, I'm considering whether statically typed languages may be considered to also have dynamic types, and it's pretty clear to me that they do. Well, if you equate tags with dynamic types: data Universal = Str String | Int Integer | Real Double | List [Universal] ... A pattern match failure can then be seen as a dynamic type error. Another question is how to treat a system (and I think C++'s RTTI qualifies, and probably related languages like Java) where static type information is available at runtime. Does this fit with the term dynamic typing with typing in the same sense as static typing? -k -- If I haven't seen further, it is by standing in the footprints of giants -- http://mail.python.org/mailman/listinfo/python-list
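The Universal datatype sketched above in Haskell can be rendered in Python as well (an editorial sketch; the class and tag names are ours): every value carries an explicit tag, and an unhandled tag at a "pattern match" plays the role of the dynamic type error.

```python
class Universal:
    """A tagged universal value, mimicking
    data Universal = Str String | Int Integer | Real Double | List [Universal]"""
    def __init__(self, tag, payload):
        self.tag = tag          # one of 'Str', 'Int', 'Real', 'List'
        self.payload = payload

def show(u):
    # Dispatch on the tag; a missing case is the pattern-match failure,
    # i.e. the dynamic type error.
    if u.tag == 'Int':
        return str(u.payload)
    if u.tag == 'Str':
        return repr(u.payload)
    if u.tag == 'List':
        return '[' + ', '.join(show(x) for x in u.payload) + ']'
    raise TypeError('pattern match failure: ' + u.tag)

print(show(Universal('List', [Universal('Int', 1), Universal('Str', 'a')])))
# prints [1, 'a']
```

Here `show(Universal('Real', 2.5))` raises TypeError, the analogue of the pattern-match failure in the Haskell version.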
Re: What is Expressiveness in a Computer Language
Chris Smith [EMAIL PROTECTED] writes: Joachim Durchholz [EMAIL PROTECTED] wrote: Assume a language that a) defines that a program is type-correct iff HM inference establishes that there are no type errors b) compiles a type-incorrect program anyway, with an established, rigorous semantics for such programs (e.g. by throwing exceptions as appropriate). So the compiler now attempts to prove theorems about the program, but once it has done so it uses the results merely to optimize its runtime behavior and then throws the results away. Hmm... here's my compiler front end (for Haskell or other languages that ..er.. define 'undefined'): while(type error) { replace an incorrectly typed function with 'undefined' } Wouldn't the resulting program still be statically typed? In practice I prefer to (and do) define troublesome functions as 'undefined' manually during development. -k -- If I haven't seen further, it is by standing in the footprints of giants -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Anton van Straaten [EMAIL PROTECTED] writes: But a program as seen by the programmer has types: the programmer performs (static) type inference when reasoning about the program, and debugs those inferences when debugging the program, finally ending up with a program which has a perfectly good type scheme. I'd be tempted to go further, and say that the programmer performs (informally, incompletely, and probably incorrectly) a proof of correctness, and that type correctness is just a part of general correctness. -k -- If I haven't seen further, it is by standing in the footprints of giants -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Gabriel Dos Reis wrote: [EMAIL PROTECTED] writes: | I don't think that it is too relevant for the discussion at hand. Moreover, | Harper talks about a relative concept of C-safety. Then, I believe you missed the entire point. I think the part of my reply you snipped addressed it well enough. Anyway, I can't believe that we actually need to argue about the fact that - for any *useful* and *practical* notion of safety - C is *not* a safe language. I refrain from continuing the discussion along this line, because *that* is *really* silly. - Andreas -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language [off-topic]
Matthias Blume wrote: I agree with Bob Harper about safety being language-specific and all that. But, with all due respect, I think his characterization of C is not accurate. [...] AFAIC, C is C-unsafe by Bob's reasoning. Agreed. Of course, C can be made safe quite easily: Define a state undefined that is considered safe and add a transition to undefined wherever necessary. I wouldn't say that was quite easy at all. C99 4 #2: # If a shall or shall not requirement that appears outside of a constraint # is violated, the behavior is undefined. Undefined behavior is otherwise # indicated in this International Standard by the words undefined behavior # *or by the omission of any explicit definition of behavior*. [...] In other words, to fix C to be a safe language (compatible with Standard C89 or C99), you first have to resolve all the ambiguities in the standard where the behaviour is *implicitly* undefined. There are a lot of them. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language [off-topic]
David Hopwood [EMAIL PROTECTED] writes: Matthias Blume wrote: I agree with Bob Harper about safety being language-specific and all that. But, with all due respect, I think his characterization of C is not accurate. [...] AFAIC, C is C-unsafe by Bob's reasoning. Agreed. Of course, C can be made safe quite easily: Define a state undefined that is considered safe and add a transition to undefined wherever necessary. I wouldn't say that was quite easy at all. C99 4 #2: # If a shall or shall not requirement that appears outside of a constraint # is violated, the behavior is undefined. Undefined behavior is otherwise # indicated in this International Standard by the words undefined behavior # *or by the omission of any explicit definition of behavior*. [...] In other words, to fix C to be a safe language (compatible with Standard C89 or C99), you first have to resolve all the ambiguities in the standard where the behaviour is *implicitly* undefined. There are a lot of them. Yes, if you want to make the transition system completely explict, it won't be easy. I was thinking of a catch-all rule that says: transition to undefined unless specified otherwise. (Note that I am not actually advocating this approach to making a language safe. For practical purposes, C is unsafe. (And so is C++.)) -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: I stand corrected: if one is using C and writing self-modifying code, then one *can* zip one's pants. Static proofs notwithstanding, I'd prefer a dynamic check just prior to this operation. I want my code to be the only self-modifying thing around here. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: Joe Marshall wrote: I do this quite often. Sometimes I'll develop `in the debugger'. I'll change some piece of code and run the program until it traps. Then, without exiting the debugger, I'll fix the immediate problem and restart the program at the point it trapped. This technique takes a bit of practice, but if you are working on something that's complex and has a lot of state, it saves a lot of time because you don't have to reconstruct the state every time you make a change. The problem with this is that from that point on, what you're running is neither the old nor the new program, since its state may have been influenced by the bug before you corrected it. Yes. The hope is that it is closer to the new program than to the old. I find it far simpler to just restart the program after correcting anything. If this is too difficult, I change the design to make it less difficult. In the most recent case where I was doing this, I was debugging transaction rollback that involved multiple database extents. It was somewhat painful to set up a clean database to the point where I wanted to try the rollback, and I wanted a pristine database for each trial so I could examine the raw bits left by a rollback. It was pretty easy to deal with simple errors in the debugger, so I chose to do that instead. Wow, interesting. (I say the following only to point out differing strategies of development, not to say one or the other is right or bad or whatever.) I occasionally find myself doing the above, and when I do, I take it as a sign that I've screwed up. I find that process excruciating, and I do everything I can to avoid it. Over the years, I've gotten more and more adept at trying to turn as many bugs as possible into type errors, so I don't have to run the debugger. Now, there might be an argument to be made that if I had been using a dynamic language, the process wouldn't be so bad, and I wouldn't dislike it so much.
But maybe: As a strawman: suppose there are two broad categories of mental strategies for thinking about bugs, and people fall naturally into one way or the other, the way there are morning people and night people. One group is drawn to the static analysis, one group hates it. One group hates working in the debugger, one group is able to use that tool very effectively and elegantly. Anyway, it's a thought. I don't buy this -- or at least, I am not in either group. A good debugger is invaluable regardless of your attitude to type systems. Recently I was debugging two programs written to do similar things in the same statically typed language (C), but where a debugger could not be used for one of the programs. It took maybe 5 times longer to find and fix each bug without the debugger, and I found it a much more frustrating experience. -- David Hopwood [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
On Sun, 25 Jun 2006 14:28:22 -0600, Chris Smith [EMAIL PROTECTED] wrote: George Neuner gneuner2/@comcast.net wrote: Undecidability can always be avoided by adding annotations, but of course that would be gross overkill in the case of index type widening. Just what sort of type annotation will convince a compiler that a narrowing conversion which could produce an illegal value will, in fact, never produce an illegal value? The annotation doesn't make the narrowing conversion safe; it prevents the narrowing conversion from happening. That was my point ... you get a program that won't compile. If, for example, I need to subtract two numbers and all I know is that they are both between 2 and 40, then I only know that the result is between -38 and 38, which may contain invalid array indices. However, if the numbers were part of a pair, and I knew that the type of the pair was pair of numbers, 2 through 40, where the first number is greater than the second, then I would know that the difference is between 0 and 38, and that may be a valid index. Of course, the restrictions on code that would allow me to retain knowledge of the form [pair of numbers, 2 through 40, a > b] may be prohibitive. That is an open question in type theory, as a matter of fact; whether types of this level of power may be inferred by any tractable procedure so that safe code like this may be written without giving the development process undue difficulty by requiring ten times as many type annotations as actual code. There are attempts that have been made, and they don't look too awfully bad. I worked in signal and image processing for many years and those are places where narrowing conversions are used all the time - in the form of floating point calculations reduced to integer to value samples or pixels, or to value or index lookup tables. Often the same calculation needs to be done for several different purposes.
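The interval bookkeeping in the example above can be checked by brute force (an editorial sketch in Python; we read "greater than" loosely as >=, matching the stated 0..38 bound): subtracting two values known only to lie in [2, 40] yields [-38, 38], while the pair invariant tightens the difference to [0, 38].

```python
lo, hi = 2, 40

# Without the pair invariant: the difference ranges over [-38, 38],
# which may contain invalid array indices.
diffs = [a - b for a in range(lo, hi + 1) for b in range(lo, hi + 1)]
assert min(diffs) == -38 and max(diffs) == 38

# With the invariant a >= b carried in the pair's type, the range
# tightens to [0, 38].
diffs2 = [a - b for a in range(lo, hi + 1)
                for b in range(lo, hi + 1) if a >= b]
assert min(diffs2) == 0 and max(diffs2) == 38
```

This just checks the arithmetic; the open question in the post is whether a type system can track such invariants tractably, not whether the bounds are right.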
I can know that my conversion of floating point to integer is going to produce a value within a certain range ... but, in general, the compiler can't determine what that range will be. All it knows is that a floating point value is being truncated and the stupid programmer wants to stick the result into some type too narrow to represent the range of possible values. Like I said to Ben, I haven't seen any _practical_ static type system that can deal with things like this. Writing a generic function is impossible under the circumstances, and writing a separate function for each narrow type is ridiculous and a maintenance nightmare even if they can share the bulk of the code. George -- for email reply remove / from address -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
George Neuner gneuner2/@comcast.net wrote: On Sun, 25 Jun 2006 14:28:22 -0600, Chris Smith [EMAIL PROTECTED] wrote: George Neuner gneuner2/@comcast.net wrote: Undecidability can always be avoided by adding annotations, but of course that would be gross overkill in the case of index type widening. Just what sort of type annotation will convince a compiler that a narrowing conversion which could produce an illegal value will, in fact, never produce an illegal value? The annotation doesn't make the narrowing conversion safe; it prevents the narrowing conversion from happening. That was my point ... you get a program that won't compile. That's not actually the case here. If we're talking only about type conversions and not value conversions (see below), then narrowing conversions are only necessary because you've forgotten some important bit of type information. By adding annotations, you can preserve that piece of information and thus avoid the conversion and get a program that runs fine. I worked in signal and image processing for many years and those are places where narrowing conversions are used all the time - in the form of floating point calculations reduced to integer to value samples or pixels, or to value or index lookup tables. Often the same calculation needs to be done for several different purposes. These are value conversions, not type conversions. Basically, when you convert a floating point number to an integer, you are not simply promising the compiler something about the type; you are actually asking the compiler to convert one value to another -- i.e., see to it that whatever this is now, it /becomes/ an integer by the time we're done. This also results in a type conversion, but you've just converted the value to the appropriate form. There is a narrowing value conversion, but the type conversion is perfectly safe. I can know that my conversion of floating point to integer is going to produce a value within a certain range ... 
but, in general, the compiler can't determine what that range will be. If you mean my compiler can't, then this is probably the case. If you mean no possible compiler could, then I'm not sure this is really very likely at all. Like I said to Ben, I haven't seen any _practical_ static type system that can deal with things like this. I agree. Such a thing doesn't currently exist for general-purpose programming languages, although it does exist in limited languages for some specific domains. -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
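Chris's value-vs-type conversion distinction can be sketched as follows (Python; `to_pixel` and the 0..255 range are our illustrative assumptions, not from the posts): truncating a float to an int is a value conversion that produces a new value, and the explicit range check is what a hypothetical range-aware type system would demand before narrowing.

```python
def to_pixel(x):
    """Truncate a float sample to an int pixel in 0..255.
    The int() call is the *value* conversion; the range check makes the
    subsequent narrowing *type* conversion safe."""
    n = int(x)                      # value conversion: a new value is made
    if not 0 <= n <= 255:
        raise ValueError("narrowing would lose information: %r" % n)
    return n

assert to_pixel(127.9) == 127
```

In a language whose compiler could track the float's range statically, the check would be discharged at compile time; lacking that, the dynamic check is the honest residue of the proof obligation.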
Re: What is Expressiveness in a Computer Language
Andrew McDonagh schrieb: Joachim Durchholz wrote: Chris Smith schrieb: Joachim Durchholz [EMAIL PROTECTED] wrote: Sorry, I have to insist that it's not me who's stretching terms here. All textbook definitions that I have seen define a type as the set/operations/axioms triple I mentioned above. No mention of immutability, at least not in the definitions. The immutability comes from the fact (perhaps implicit in these textbooks, or perhaps they are not really texts on formal type theory) that types are assigned to expressions, That doesn't *define* what's a type or what isn't! If it's impossible to assign types to all expressions of a program in a language, that does mean that there's no useful type theory for the program, but it most definitely does not mean that there are no types in the program. I can still sensibly talk about sets of values, sets of allowable operations over each value, and about relationships between inputs and outputs of these operations. So programs have types, even if they don't have a static type system. Q.E.D. Of course not. Otherwise programs using dynamically typed systems wouldn't exist. I don't understand. Do you mean dynamic typing (aka runtime types)? I haven't read all of this thread, I wonder, is the problem to do with Class being mistaken for Type? (which is usually the issue) No, not at all. I have seen quite a lot beyond OO ;-) Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Darren New schrieb: Marshall wrote: Also: has subtyping polymorphism or not, has parametric polymorphism or not. And covariant or contravariant. That's actually not a design choice - if you wish to have a sound type system, all input parameters *must* be contravariant, all output parameters *must* be covariant, and all in/out parameters must be novariant. (Eiffel got this one wrong in almost all cases.) Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
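The variance rule Jo states can be made concrete with a small sketch (Python; the classes and the `safe_substitute` helper are our illustration): a function substitutes safely for one of type Cat -> Animal only if it accepts at least Cats (contravariant input) and returns at most Animals (covariant output).

```python
class Animal: pass
class Cat(Animal): pass

def safe_substitute(dom_accepted, cod_returned, dom_required, cod_required):
    """f : dom_accepted -> cod_returned can stand in for
    g : dom_required -> cod_required iff
    dom_required <= dom_accepted (input contravariance) and
    cod_returned <= cod_required (output covariance)."""
    return (issubclass(dom_required, dom_accepted) and
            issubclass(cod_returned, cod_required))

# An Animal -> Cat function substitutes for a Cat -> Animal one ...
assert safe_substitute(Animal, Cat, Cat, Animal)
# ... but a Cat -> Animal function does not substitute for Animal -> Cat:
# it might be handed a Dog, and might return a non-Cat.
assert not safe_substitute(Cat, Animal, Animal, Cat)
```

This is exactly the check Eiffel's covariant method redefinition skips, which is why Jo calls it unsound: the substituted method can receive an argument its body is not prepared for.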
Re: What is Expressiveness in a Computer Language
Anton van Straaten schrieb: Joachim Durchholz wrote: Anton van Straaten schrieb: There's a close connection between latent types in the sense I've described, and the tagged values present at runtime. However, as type theorists will tell you, the tags used to tag values at runtime, as e.g. a number or a string or a FooBar object, are not the same thing as the sort of types which statically-typed languages have. Would that be a profound difference, or is it just that annotating a value with a full type expression would cause just too much runtime overhead? It's a profound difference. The issue is that it's not just the values that need to be annotated with types, it's also other program terms. Yes - but isn't that essentially just auxiliary data from and for the data-flow analysis that tracks what values with what types might reach which functions? In addition, during a single run of a program, all it can ever normally do is record the types seen on the path followed during that run, which doesn't get you to static types of terms. To figure out the static types, you really need to do static analysis. Agreed. Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
I wrote: These informal systems, which may not prove what they claim to prove are my concept of a type system. Chris Smith [EMAIL PROTECTED] replied: Okay, that works. I'm not sure where it gets us, though Ok, we'll get there. First, we need to step back in time, to when there was roughly algol, cobol, fortran, and lisp. Back then, at least as I understood things, there were only a few types that generally people understood: integer, real, and (perhaps) pointer. Now, with algol or fortran things were generally only of the first two types and variables were statically declared to be one or the other. With lisp any cell could hold any one of the 3, and some clever implementor added tag bits to distinguish which the cell held. As I understood it, the tag bits weren't originally for type correctness, so much as they facilitated garbage collection. The garbage collector didn't know what type of data you put in a cell and had to determine which cells contained pointers, so that the mark-and-sweep algorithms could sweep by following the pointers (and not following the integers that looked like pointers). Still, these tag bits did preserve the dynamic type, in the sense that we knew types from the other languages. As I remember it, sophistication with type really didn't occur as a phenomenon for the general programmer until the introduction of Pascal (and to a lesser extent PL/I). Moreover, as I recall it (perhaps because I didn't learn the appropriate dialect), lisp dialects kept to the more general notion of type (lists and tuples) and eschewed low-level bit-fiddling where we might want to talk about a 4 bit integer representing 0 .. 15 or -8 .. 7. The important thing is the dynamicism of lisp allowed one to write polymorphic programs, before most of us knew the term. However, we still had a notion of type: integers and floating point numbers were still different and one could not portably use integer operations on floating point values or vice versa.
However, one could check the tag and do the right thing, and many lisp primitives did just that, making them polymorphic. The way most of us conceived (or were taught to conceive) of the situation was that there still were types; they were just associated with values, and the type laws we all knew and loved still worked - they just worked dynamically, upon the types of the operand values that were presented at the time. Can this be made rigorous? Perhaps. Instead of viewing the text of the program statically, let us view it dynamically, that is, by introducing a time (or execution) dimension. This is easier if you take an imperative view of the dynamic program and imagine things having an operational semantics, which is why we stepped back in time in the first place, so that we are in a world where imperative programming is the default model for most programmers. Thus, as we traverse a list, the first element might be an integer, the second a floating point value, the third a sub-list, the fourth and fifth two more integers, and so on. If you look statically at the head of the list, we have a very wide union of types going by. However, perhaps there is a mapping going on that can be discerned. For example, perhaps the list has 3 distinct types of elements (integers, floating points, and sub-lists) and it cycles through the types in order, first having one of each type, then two of each type, then four of each type, then eight, and so on. The world is now patterned. However, I don't know how to write a static type annotation that describes exactly that type. That may just be my lack of experience in writing annotations. However, it's well within my grasp to imagine the appropriate type structure, even though **I** can't describe it formally. More importantly, I can easily write something which checks the tags and sees whether the data corresponds to my model.
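A sketch of such a check, in Python rather than lisp (the function name and details are invented for illustration; the three kinds and the doubling pattern are as described above):

```python
def check_patterned_list(xs):
    """Raise TypeError unless xs cycles through int, float, and sub-list
    elements with doubling run lengths: one of each, two of each, ..."""
    kinds = (int, float, list)     # the three element kinds, in order
    i, run = 0, 1
    while i < len(xs):
        for kind in kinds:
            for _ in range(run):
                if i == len(xs):   # pattern holds for this prefix
                    return True
                if not isinstance(xs[i], kind):
                    raise TypeError("element %d is %s, expected %s"
                                    % (i, type(xs[i]).__name__, kind.__name__))
                i += 1
        run *= 2                   # double the run length each cycle
    return True

check_patterned_list([1, 2.0, [3], 4, 5, 6.0, 7.0, [8], [9]])   # passes
```

The check inspects only the tags of the values it encounters, exactly the kind of validation that is easy to write dynamically but hard to state as a static annotation.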
And, this brings us to the point: if the data that my program encounters doesn't match the model I have formulated, then something is of the wrong type. Equally importantly, the dynamic tags have helped me discover that type error. Now, the example may have seemed arbitrary to you, and it was in some sense arbitrary. However, if one imagines a complete binary tree with three kinds of elements stored in memory as rows per depth, one can get exactly the structure I described. And, if one were rewriting that unusual representation to a more normal one, one might want exactly the type check I proposed to validate that the input binary tree was actually well formed. Does this help explain dynamic typing as a form of typing? -Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joachim Durchholz wrote: That's actually not a design choice. It's certainly a choice you can get wrong, as you say. ;-) I mean, if "without runtime safety" is a choice, I expect picking the wrong choice here can be. :-) -- Darren New / San Diego, CA, USA (PST) Native Americans used every part of the buffalo, including the wings. -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark [EMAIL PROTECTED] wrote: Ok, we'll get there. First, we need to step back in time, to when there was roughly algol, cobol, fortran, and lisp. Back then, at least as I understood things, there were only a few types that generally people understood integer, real, and (perhaps) pointer. In order to continue my tradition of trying to be as precise as possible, let me point out that of course there were more candidate types, as it were, and that people understood them just fine. They just didn't have a type system in place to make them into real types, so they didn't think to call them types. The important thing is the dynamicism of lisp allowed one to write polymorphic programs, before most of us knew the term. Sure. In exchange for giving up the proofs of the type checker, you could write all kinds of programs. To this day, we continue to write programs in languages with dynamic checking features because we don't have powerful enough type systems to express the corresponding type system. Thus, as we traverse a list, the first element might be an integer, the second a floating point value, the third a sub-list, the fourth and fifth, two more integers, and so on. If you look statically at the head of the list, we have a very wide union of types going by. However, perhaps there is a mapping going on that can be discerned. For example, perhaps the list has 3 distinct types of elements (integers, floating points, and sub-lists) and it circles through the types in the order, first having one of each type, then two of each type, then four of each type, then eight, and so on. The world is now patterned. However, I don't know how to write a static type annotation that describes exactly that type. I believe that, in fact, it would be trivial to imagine a type system which is capable of expressing that type. 
Okay, not trivial, since it appears to be necessary to conceive of an infinite family of integer types with only one value each, along with range types, and relationships between them; but it's probably not completely beyond the ability of a very bright 12-year-old who has someone to describe the problem thoroughly and help her with the notation. The more interesting problem is whether this type system could be made: (a) unobtrusive enough, probably via type inference or something along those lines, that it would be usable in practice; and (b) general enough that you could define a basic type operator that applies to a wide range of problems rather than revising your type operator the first time you need a complete 3-tree of similar form. That may just be my lack of experience in writing annotations. You would, of course, need a suitable type system first. For example, it appears to me that there is simply no possible way of expressing what you've described in Java, even with the new generics features. Perhaps it's possible in ML or Haskell (I don't know). My point is that if you were allowed to design a type system to meet your needs, I bet you could do it. And, this brings us to the point, if the data that my program encounters doesn't match the model I have formulated, then something is of the wrong type. Equally importantly, the dynamic tags, have helped me discover that type error. Sure. The important question, then, is whether there exists any program bug that can't be formulated as a type error. And if so, what good does it do us to talk about type errors when we already have the perfectly good word bug to describe the same concept? Now, the example may have seemed arbitrary to you, and it was in some sense arbitrary. Arbitrary is good. One of the pesky things that keeps getting in the way here is the intuition attached to the word type that makes everyone think string or integer.
-- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark wrote: Thus, as we traverse a list, the first element might be an integer, the second a floating point value, the third a sub-list, the fourth and fifth, two more integers, and so on. If you look statically at the head of the list, we have a very wide union of types going by. However, perhaps there is a mapping going on that can be discerned. For example, perhaps the list has 3 distinct types of elements (integers, floating points, and sub-lists) and it circles through the types in the order, first having one of each type, then two of each type, then four of each type, then eight, and so on. Sounds like an interesting problem. Although not the exact type specified above, here's something pretty similar that I could think of implementing in Haskell. (I don't know what a sub-list is, for example). Maybe some Haskell wizard could get rid of the tuples?

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark = (Cons 42 (Cons 3.14 (Cons "abc"
        (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
        (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4))
        Nil))))))))

main = print clark

-- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
George Neuner wrote: I can know that my conversion of floating point to integer is going to produce a value within a certain range ... but, in general, the compiler can't determine what that range will be. All it knows is that a floating point value is being truncated and the stupid programmer wants to stick the result into some type too narrow to represent the range of possible values. Like I said to Ben, I haven't seen any _practical_ static type system that can deal with things like this. Writing a generic function is impossible under the circumstances, and writing a separate function for each narrow type is ridiculous and a maintenance nightmare even if they can share the bulk of the code. This is the partial function question again, in a different guise. Marshall -- http://mail.python.org/mailman/listinfo/python-list
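As an illustrative sketch (mine, not from the post), here is the kind of runtime-checked narrowing conversion being discussed, in Python; the range bounds stand in for a "narrow type" that a static analysis could not determine in general:

```python
def narrow(x, lo, hi):
    """Truncate float x toward zero and check that it fits [lo, hi]."""
    n = int(x)                     # truncation toward zero
    if not lo <= n <= hi:
        raise ValueError("%r does not fit in %d..%d" % (x, lo, hi))
    return n

narrow(3.9, 0, 15)     # fits a 4-bit unsigned range
# narrow(14.7, -8, 7) would raise ValueError at runtime
```

One generic function with the range as a parameter avoids the "separate function for each narrow type" maintenance nightmare, at the cost of deferring the check to runtime.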
Re: What is Expressiveness in a Computer Language
I wrote: The important thing is the dynamicism of lisp allowed one to write polymorphic programs, before most of us knew the term. Chris Smith [EMAIL PROTECTED] writes: Sure. In exchange for giving up the proofs of the type checker, you could write all kinds of programs. To this day, we continue to write programs in languages with dynamic checking features because we don't have powerful enough type systems to express the corresponding type system. And to me the question is what kinds of types apply to these dynamic programs, where in fact you may have to solve the halting problem to know exactly when some statement is executed. I expect that some programs have type signatures that exceed the expressibility of any static system (that is Turing complete). Therefore, we need something that computes the appropriate type at run-time, because we need full Turing power to compute it. To me such a system is a dynamic type system. I think dynamic tags are a good approximation, because they only compute what type the expression has this time. I believe that, in fact, it would be trivial to imagine a type system which is capable of expressing that type. Okay, not trivial, since it appears to be necessary to conceive of an infinite family of integer types with only one value each, along with range types, and relationships between them; but it's probably not completely beyond the ability of a very bright 12-year-old who has someone to describe the problem thoroughly and help her with the notation. Well, it looks like you are right, in that I see in a following post a Haskell program that looks essentially correct. I wanted something that was simple enough that one could see that it could be computed, but which was complex enough that it had to be computed (and couldn't be statically expressed with a system that did no type computations). Perhaps, there is no such beast. Or, perhaps I just can't formulate it.
Or, perhaps we have static type checkers which can do computations of unbounded complexity. However, I thought that one of the characteristics of type systems was that they did not allow unbounded complexity and weren't Turing Complete. You would, of course, need a suitable type system first. For example, it appears to me that there is simply no possible way of expressing what you've described in Java, even with the new generics features. Perhaps it's possible in ML or Haskell (I don't know). My point is that if you were allowed to design a type system to meet your needs, I bet you could do it. Or, I could do as I think the dynamic programmers do, dispense with trying to formulate a sufficiently general type system and just check the tags at the appropriate points. Sure. The important question, then, is whether there exists any program bug that can't be formulated as a type error. If you allow Turing Complete type systems, then I would say no--every bug can be reformulated as a type error. If you require your type system to be less powerful, then some bugs must escape it. -Chris -- http://mail.python.org/mailman/listinfo/python-list
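The "just check the tags at the appropriate points" approach might look like the following in Python (a hedged sketch; the function name and the set of expected tags are invented for illustration):

```python
def describe(v):
    """Dispatch on the runtime tag of a value, rejecting unexpected tags."""
    if isinstance(v, bool):        # check bool before int: bool subclasses int
        return "boolean"
    if isinstance(v, int):
        return "integer"
    if isinstance(v, float):
        return "float"
    if isinstance(v, list):
        return "sub-list"
    raise TypeError("unexpected tag: %s" % type(v).__name__)

[describe(v) for v in [1, 2.5, [3]]]
```

No general type system is formulated here; the tag test at each point plays the role the static checker would otherwise play.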
Re: What is Expressiveness in a Computer Language
Greg Buchholz [EMAIL PROTECTED] writes: Chris F Clark wrote: Thus, as we traverse a list, the first element might be an integer, the second a floating point value, the third a sub-list, the fourth and fifth, two more integers, and so on. If you look statically at the head of the list, we have a very wide union of types going by. However, perhaps there is a mapping going on that can be discerned. For example, perhaps the list has 3 distinct types of elements (integers, floating points, and sub-lists) and it circles through the types in the order, first having one of each type, then two of each type, then four of each type, then eight, and so on. Sounds like an interesting problem. Although not the exact type specified above, here's something pretty similar that I could think of implementing in Haskell. (I don't know what a sub-list is, for example). Maybe some Haskell wizard could get rid of the tuples?

data Clark a b c = Nil | Cons a (Clark b c (a,a)) deriving Show

clark = (Cons 42 (Cons 3.14 (Cons "abc"
        (Cons (1,2) (Cons (1.2,3.4) (Cons ("foo","bar")
        (Cons ((9,8),(7,6)) (Cons ((0.1,0.2),(0.3,0.4))
        Nil))))))))

main = print clark

Very impressive. It looks right to me and simple enough to understand. I must find the time to learn a modern FP language. Can you write a fold for this that prints the data as a binary tree of triples? I have to believe it isn't that hard. -Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark wrote: And to me the question is what kinds of types apply to these dynamic programs, where in fact you may have to solve the halting problem to know exactly when some statement is executed. I expect that some programs have type signatures that exceed the expressibility of any static system (that is Turing complete). Therefore, we need something that computes the appropriate type at run-time, because we need full Turing power to compute it. To me such a system is a dynamic type system. I think dynamic tags are a good approximation, because they only compute what type the expression has this time. Yes, an important question (IMHO the *more* important question than the terminology) is what *programs* do we give up if we wish to use static typing? I have never been able to pin this one down at all. Frankly, if the *only* issue between static and dynamic was the static checking of the types, then static typing would unquestionably be superior. But that's not the only issue. There are also what I call packaging issues, such as being able to run partly-wrong programs on purpose so that one would have the opportunity to do runtime analysis without having to, say, implement parts of some interface that one isn't interested in testing yet. These could also be solved in a statically typed language. (Although historically it hasn't been done that way.) The real question is, are there some programs that we can't write *at all* in a statically typed language, because they'll *never* be typable? I think there might be, but I've never been able to find a solid example of one. Perhaps, there is no such beast. Or, perhaps I just can't formulate it. Or, perhaps we have static type checkers which can do computations of unbounded complexity. However, I thought that one of the characteristics of type systems was that they did not allow unbounded complexity and weren't Turing Complete.
The C++ type system is Turing complete, although in practical terms it limits how much processing power it will spend on types at compile time. If you allow Turing Complete type systems, then I would say no--every bug can be reformulated as a type error. If you require your type system to be less powerful, then some bugs must escape it. I don't think so. Even with a Turing complete type system, a program's runtime behavior is still something different from its static behavior. (This is the other side of the "types are not tags" issue--not only is it the case that there are things static types can do that tags can't, it is also the case that there are things tags can do that static types can't.) Marshall -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark [EMAIL PROTECTED] wrote: And to me the question is what kinds of types apply to these dynamic programs, where in fact you may have to solve the halting problem to know exactly when some statement is executed. Yes, I believe (static) type systems will always end up approximating (conservatively) the possible behavior of programs in order to perform their verification. Or, perhaps we have static type checkers which can do computations of unbounded complexity. However, I thought that one of the characteristics of type systems was that they did not allow unbounded complexity and weren't Turing Complete. Honestly, I suspect you'd get disagreement within the field of type theory about this. Certainly, there are plenty of researchers who have proposed type systems that potentially don't even terminate. The consensus appears to be that they are worth studying within the field of type theory, but I note that Pierce still hasn't dropped the word "tractable" from his definition in his introductory text, despite acknowledging only a couple pages later that languages exist whose type systems are undecidable, and apparently co-authoring a paper on one of them. The consensus seems to be that such systems are interesting if they terminate quickly for interesting cases (in which case, I suppose, you could hope to eventually be able to formalize what cases are interesting, and derive a truly tractable type system that checks that interesting subset). Interestingly, Pierce gives ML as an example of a language whose type checker does not necessarily run in polynomial time (thus failing some concepts of "tractable") but that does just fine in practice. I am just quoting here, so I don't know exactly how this is true. Marshall mentioned template meta-programming in C++, which is definitely Turing-complete. -- Chris Smith - Lead Software Developer / Technical Trainer MindIQ Corporation -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
On 22 Jun 2006 08:42:09 -0700, [EMAIL PROTECTED] wrote: Darren New schrieb: I'm pretty sure in Pascal you could say Type Apple = Integer; Orange = Integer; and then vars of type apple and orange were not interchangeable. No, the following compiles perfectly fine (using GNU Pascal):

program bla;
type
  apple = integer;
  orange = integer;
var
  a : apple;
  o : orange;
begin
  a := o
end.

You are both correct. The original Pascal specification failed to mention whether user defined types should be compatible by name or by structure. Though Wirth's own 1974 reference implementation used name compatibility, implementations were free to use structure compatibility instead and many did. There was a time when typing differences made Pascal code completely non-portable[1]. When Pascal was finally standardized in 1983, the committees followed C's (dubious) example and chose to use structure compatibility for simple types and name compatibility for records. [1] Wirth also failed to specify whether boolean expression evaluation should be short-circuit or complete. Again, implementations went in both directions. Some allowed either method by switch, but the type compatibility issue continued to plague Pascal until standard conforming compilers emerged in the mid 80's. George -- for email reply remove / from address -- http://mail.python.org/mailman/listinfo/python-list
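A rough Python analogue of the two interpretations discussed above (a sketch, not from the thread): a bare alias behaves like structure compatibility, while a distinct wrapper class behaves like name compatibility.

```python
# Structural reading of "type apple = integer": a bare alias.
Apple = int            # Apple and int are fully interchangeable

# Nominal reading: a distinct named type wrapping the same representation.
class Orange:
    def __init__(self, n):
        self.n = n

a = Apple(5)                      # structurally, a is just an int
assert isinstance(a, int)
assert not isinstance(a, Orange)  # the named wrapper is not interchangeable
```

Under the nominal reading, mixing `Apple` and `Orange` values is an error even though both are represented as integers, which is exactly the behavior the two Pascal camps disagreed about.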
Re: What is Expressiveness in a Computer Language
David Hopwood wrote: But since the relevant feature that the languages in question possess is dynamic tagging, it is more precise and accurate to use that term to describe them. So you're proposing to call them dynamically-tagged languages? Also, dynamic tagging is only a minor help in this respect, as evidenced by the fact that explicit tag tests are quite rarely used by most programs, if I'm not mistaken. It sounds as though you're not considering the language implementations themselves, where tag tests occur all the time - potentially on every operation. That's how type errors get detected. This is what I'm referring to when I say that dynamic tags support latent types. Tags are absolutely crucial for that purpose: without them, you have a language similar to untyped lambda calculus, where latent type errors can result in very difficult to debug errors, since execution can continue past errors and produce completely uninterpretable results. IMHO, the support does not go far enough for it to be considered a defining characteristic of these languages. Since tag checking is an implicit feature of language implementations and the language semantics, it certainly qualifies as a defining characteristic. When tag tests are used implicitly by other language features such as pattern matching and dynamic dispatch, they are used for purposes that are equally applicable to statically typed and non-(statically-typed) languages. A fully statically-typed language doesn't have to do tag checks to detect static type errors. Latently-typed languages do tag checks to detect latent type errors. You can take the preceding two sentences as a summary definition for latently-typed language, which will come in handy below. or that languages that use dynamic tagging are latently typed. This simply is not a property of the language (as you've already conceded). Right. 
I see at least two issues here: one is that as a matter of shorthand, compressing "language which supports latent typing" to "latently-typed language" ought to be fine, as long as the term's meaning is understood. If, for the sake of argument, "language which supports latent typing" is to be compressed to "latently-typed language", then statically typed languages must be considered also latently typed. See definition above. The phrase "language which supports latent typing" wasn't intended to be a complete definition. After all, statically typed languages support expression and verification of the types in the programmer's head at least as well as non-(statically-typed) languages do. In particular, most recent statically typed OO languages use dynamic tagging and are memory safe. And they support comments ;-) But they don't use tag checks to validate their static types. When statically-typed languages *do* use tags, in cases where the static type system isn't sufficient to avoid them, then indeed, those parts of the program use latent types, in the exact same sense as more fully latently-typed languages do. There's no conflict here; it's simply the case that most statically-typed languages aren't fully statically typed. This is not, quite obviously, what most people mean when they say that a particular *language* is latently typed. They almost always mean that the language is dynamically tagged, *not* statically typed, and memory safe. That is how this term is used in R5RS, for example. The R5RS definition is compatible with what I've just described, because the parts of a statically-typed language that would be considered latently-typed are precisely those which rely on dynamic tags. But beyond that, there's an issue here about the definition of "the language". When programming in a latently-typed language, a lot of action goes on outside the language - reasoning about static properties of programs that are not captured by the semantics of the language.
This is true of programming in any language. Right, but when you compare a statically-typed language to an untyped language at the formal level, a great deal more static reasoning goes on outside the language in the untyped case. What I'm saying is that it makes no sense, in most realistic contexts, to think of untyped languages as being just that: languages in which the type of every term is simply a tagged value, as though no static knowledge about that value exists. The formal model requires that you do this, but programmers can't function if that's all the static information they have. This isn't true in the case of a fully statically-typed language. This means that there's a sense in which the language that the programmer programs in is not the same language that has a formal semantic definition. As I mentioned in another post, programmers are essentially mentally programming in a richer language - a language which has informal (static) types - but the code they write down elides this type information, or else puts it in comments.
Re: What is Expressiveness in a Computer Language
Marshall wrote: Chris F Clark wrote: I'm particularly interested if something unsound (and perhaps ambiguous) could be called a type system. I definitely consider such things type systems. I don't understand. You are saying you prefer to investigate the unsound over the sound? The problem is that there are no useful sound definitions for the type systems (in the static sense) of dynamically-typed languages. Yet, we work with type-like static properties in those languages all the time, as I've been describing. If you want to talk about those properties as though they were types, one of the options is what Chris Clark described when he wrote "I reason about my program using types which I can (at least partially) formalize, but for which there is no sound axiomatic system." However, I like my definitions very general and vague. Your writing suggests the opposite preference. Again, I cannot understand this. In a technical realm, vagueness is the opposite of understanding. To me, it sounds like you are saying that you prefer not to understand the field you work in. The issue as I see it is simply that if we're going to work with dynamically-typed programs at all, our choices are limited at present, when it comes to formal models that capture our informal static reasoning about those programs. In statically-typed languages, this reasoning is mostly captured by the type system, but it has no formal description for dynamically-typed languages. To me, if something works in a way analogous to a known type system, I tend to consider it a type system. That probably isn't going to be at all satisfactory to someone wanting a more rigorous definition. Analogies are one thing; definitions are another. A formal definition is problematic, precisely because we're dealing with something that to a large extent is deliberately unformalized. But as Chris Clark pointed out, these types are locally sound, i.e. I can prove properties that hold for regions of my programs.
We don't have to rely entirely on analogies, and this isn't something that's entirely fuzzy. There are ways to relate it to formal type theory. Of course, to my mind, the rigorous definitions are just an attempt to capture something that is already known informally and put it on a more rational foundation. If something is informal and non-rational, it cannot be said to be known. As much as I love the view down into that abyss, we're nowhere near being in such bad shape. We know that we can easily take dynamically-typed program fragments and assign them type schemes, and we can make sure that the type schemes that we use in all our program fragments use the same type system. We know that we can assign type schemes to entire dynamically-typed programs, and we can even automate this process, although not without some practical disadvantages. So we actually have quite a bit of evidence about the presence of static types in dynamically-typed programs. Besides, we should be careful not to forget that our formal methods are incredibly weak compared to the power of our intelligence. If that were not the case, there would be no advantage to possessing intelligence, or implementing AIs. We are capable of reasoning outside of fully formal frameworks. The only way in which we are able to develop formal systems is by starting with the informal. We're currently discussing something that so far has only been captured fairly informally. If we restrict ourselves to only what we can say about it formally, then the conversation was over before it began. Anton -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote: This means that there's a sense in which the language that the programmer programs in is not the same language that has a formal semantic definition. As I mentioned in another post, programmers are essentially mentally programming in a richer language - a language which has informal (static) types - but the code they write down elides this type information, or else puts it in comments. [...] In this context, the term latently-typed language refers to the language that a programmer experiences, not to the subset of that language which is all that we're typically able to formally define. That language is not a subset, if at all, it's the other way round, but I'd say they are rather incomparable. That is, they are different languages. That is starting to get a bit too mystical for my tastes. I have to agree. \sarcasm One step further, and somebody starts calling C a latently memory-safe language, because a real programmer knows that his code is in a safe subset... And where he is wrong, dynamic memory page protection checks will guide him. - Andreas -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
George Neuner schrieb: The point is really that the checks that prevent these things must be performed at runtime and can't be prevented by any practical type analysis performed at compile time. I'm not a type theorist but my opinion is that a static type system that could, a priori, prevent the problem is impossible. No type theory is needed. Assume that the wide index type goes into a function and the result is assigned to a variable of the narrow type, and it's instantly clear that the problem is undecidable. Undecidability can always be avoided by adding annotations, but of course that would be gross overkill in the case of index type widening. Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Dimitri Maziuk schrieb: That is the basic argument in favour of compile time error checking, extended to runtime errors. I don't really care if it's the compiler or runtime that tells the luser your code is broken, as long as it makes it clear it's *his* code that's broken, not mine. You can make runtime errors point to the appropriate code. Just apply programming by contract: explicitly state what preconditions a function is requiring of the caller, have it check the preconditions on entry (and, ideally, throw the exception in a way that the error is reported in the caller's code - not a complicated thing, but it would require changes in the exception machinery of most languages). Any errors past that point are either a too liberal precondition (i.e. a bug in the precondition - but documenting wrong preconditions is still a massive bug, even if it's just a documentation bug), or a bug in the function's code. Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
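A minimal Python sketch of the contract style described above (the decorator name and error wording are invented; real contract libraries do more, such as reporting the caller's source location):

```python
def requires(pred, msg):
    """Check a precondition on entry and blame the caller when it fails."""
    def deco(f):
        def wrapper(*args, **kw):
            if not pred(*args, **kw):
                # the error names the violated contract, pointing at the caller
                raise ValueError("precondition violated by caller: " + msg)
            return f(*args, **kw)
        return wrapper
    return deco

@requires(lambda xs: len(xs) > 0, "list must be non-empty")
def head(xs):
    return xs[0]

head([1, 2, 3])      # fine
# head([]) would raise ValueError blaming the caller, not head itself
```

Any failure past the precondition check is then, by construction, either a bug in the stated precondition or a bug in the function body.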
Re: What is Expressiveness in a Computer Language
[EMAIL PROTECTED] schrieb: Joachim Durchholz wrote: A type is the encoding of these properties. A type varying over time is an inherent contradiction (or another abuse of the term type). No. It's just a matter of definition, essentially. E.g. in Smalltalk and Lisp, it does make sense to talk of the type of a name or a value, even if that type may change over time. OK, now we are simply back full circle to Chris Smith's original complaint that started this whole subthread, namely (roughly) that long-established terms like type or typing should *not* be stretched in ways like this, because that is technically inaccurate and prone to misinterpretation. Sorry, I have to insist that it's not me who's stretching terms here. All textbook definitions that I have seen define a type as the set/operations/axioms triple I mentioned above. No mention of immutability, at least not in the definitions. Plus, I don't see any necessity of insisting on immutability for the definition of type. Otherwise, you'd have to declare that Smalltalk objects truly don't have a type (not even an implied one), and that would simply make no sense: they do in fact have a type, even though it may occasionally change. Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
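In Python, as in Smalltalk or Lisp, the type associated with a name can change over time; a trivial illustration of the usage being debated above:

```python
x = 42
assert type(x) is int       # right now, the name x denotes an integer
x = "forty-two"
assert type(x) is str       # later, the same name denotes a string
```

Whether one says the "type of x" changed, or only that x was rebound to a value of a different type, is precisely the terminological question at issue.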
Re: What is Expressiveness in a Computer Language
Chris F Clark schrieb:

Chris F Clark schrieb: In that sense, a static type system is eliminating tags, because the information is pre-computed and not explicitly stored as a part of the computation. Now, you may not view the tag as being there, but in my mind if there exists a way of performing the computation that requires tags, the tag was there and that tag has been eliminated.

Joachim Durchholz replied: On a semantic level, the tag is always there - it's the type (and definitely part of an axiomatic definition of the language). Tag elimination is just an optimization.

I agree the tag is always there in the abstract.

Well - in the context of a discussion of dynamic and static typing, I'd think that the semantic (abstract) level is usually the best level of discourse. Of course, this being the Usenet, shifting the level is common (and can even be helpful).

In the end, I'm trying to fit things which are too big and too slow into much less space and time, using types to help me reliably make my program smaller and faster is just one trick. [...] However, I also know that my way of thinking about it is fringe.

Oh, I really agree that's an important application of static typing. Essentially, which aspect of static typing is more important depends on where your problems lie: if it's resource constraints, static typing tends to be seen as a tool to keep resource usage down; if it's bug counts, static typing tends to be seen as a tool to express static properties. These aspects are obviously not equally important to everybody.

Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
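A toy illustration of tag elimination (my own sketch, not from the thread): a dynamically checked operation must inspect tags at run time, while a static type checker has already proved which case applies and so the compiled code can drop the tags entirely.

```python
# Dynamically checked addition: every value carries a tag, and the
# operation dispatches on the tags at run time.
def add_tagged(a, b):
    (tag_a, val_a), (tag_b, val_b) = a, b
    if tag_a == tag_b == "int":
        return ("int", val_a + val_b)
    if tag_a == tag_b == "float":
        return ("float", val_a + val_b)
    raise TypeError("ADD: mismatched tags")

# Once static type checking has *proved* both operands are ints, the
# run-time tag check is redundant and an implementation can emit the
# bare operation -- the tag has been eliminated, not merely ignored:
def add_int(val_a, val_b):
    return val_a + val_b

assert add_tagged(("int", 1), ("int", 2)) == ("int", 3)
assert add_int(1, 2) == 3
```

This is the sense in which "the tag was there and has been eliminated": semantically the type information still exists, it just no longer needs to be stored or checked during the computation.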
Re: What is Expressiveness in a Computer Language
Darren New schrieb: John W. Kennedy wrote: 360-family assembler, yes. 8086-family assembler, not so much. And Burroughs B-series, not at all. There was one ADD instruction, and it looked at the data in the addresses to determine whether to add ints or floats. :-) I heard that the Telefunken TR series had a tagged architecture. This seems roughly equivalent to what the B-series did (does?). Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Anton van Straaten schrieb: Marshall wrote: Can you be more explicit about what latent types means? Sorry, that was a huge omission. (What I get for posting at 3:30am.) The short answer is that I'm most directly referring to the types in the programmer's head.

Ah, finally that terminology starts to make sense to me. I have been wondering whether there's any useful difference between latent and run-time typing. (I tend to avoid the term "dynamic typing" because it's overloaded with too many vague ideas.)

there are usually many possible static type schemes that can be assigned to a given program.

This seems to apply to latent types as well. Actually the set of latent types seems to be the set of possible static type schemes. Um, well, a superset of these - static type schemes tend to be slightly less expressive than what the programmer has in his head. (Most type schemes cannot really express things like "the range of this index variable is such-and-so", and the boundary to general assertions about the code is quite blurry anyway.)

There's a close connection between latent types in the sense I've described, and the tagged values present at runtime. However, as type theorists will tell you, the tags used to tag values at runtime, as e.g. a number or a string or a FooBar object, are not the same thing as the sort of types which statically-typed languages have.

Would that be a profound difference, or is it just that annotating a value with a full type expression would cause just too much runtime overhead?

In your terminology: So, where do tagged values fit into this? Tags help to check types at runtime, but that doesn't mean that there's a 1:1 correspondence between tags and types.

Would it be possible to establish such a correspondence? And would it be common consensus that such a system should still be called "tags", or are there other, even more profound differences?

Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
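One way to see the tag/type distinction concretely (a Python illustration of my own): the runtime tag records only the coarse kind of a value, while the latent or static type is far more precise.

```python
xs = [1, 2, 3]
ys = ["a", "b"]

# Both values carry the same runtime tag:
print(type(xs) is type(ys))   # True -- the tag is just "list"

# But their latent/static types differ: list[int] vs list[str].
# Recording the full parameterized type at run time would require
# inspecting (or tagging) every element of every structure -- one
# answer to the overhead question raised above.
```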
Re: What is Expressiveness in a Computer Language
Marshall schrieb: It seems we have languages:
- with or without static analysis
- with or without runtime type information (RTTI or tags)
- with or without (runtime) safety
- with or without explicit type annotations
- with or without type inference
Wow. And I don't think that's a complete list, either.

I would be happy to abandon strong/weak as terminology because I can't pin those terms down. (It's not clear what they would add anyway.)

Indeed.

Programmers infer and reason about these latent types while they're writing or reading programs. Latent types become manifest when a programmer reasons about them, or documents them e.g. in comments.

Uh, oh, a new term, "manifest". Should I worry about that?

I think that was OED usage of the term.

Well, darn.

It strikes me that that's just a decision the language designers made, *not* to record complete RTTI. (Is it going to be claimed that there is an *advantage* to having only incomplete RTTI? It is a serious question.)

In most cases, the answer is probably "we don't have to invent or look up efficient algorithms that we can't think of right now". One could consider this a sorry excuse or a wise decision to stick with available resources; both views have their merits IMHO ;-)

But "function" is not a useful type. Why not? Because if all you know is that timestwo is a function, then you have no idea what an expression like timestwo(foo) means. You couldn't write working programs, or read them, if all you knew about functions was that they were functions. As a type, "function" is incomplete.

Yes, "function" is a parameterized type, and they've left out the parameter values.

Well, in JavaScript, the explicit type system as laid down in the run-time type information is unparameterized. You can criticize this as unsound, or inadequate, or whatever you wish, and I'd like to agree with that ;-), but the type of timestwo is indeed just "function". *Your mental model* is far more detailed, of course.
Question: if a language *does* record complete RTTI, and the language does *not* have subtyping, could we then say that the runtime type information *is* the same as the static type? Only if the two actually use the same type system. In practice, I'd say that this is what most designers intend, but what implementors may not have gotten right ;-p

Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
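The same point can be made in Python (the thread's example is JavaScript; this is an analogous sketch of my own): the run-time type information for a function is unparameterized.

```python
def timestwo(x):
    return x * 2

# The runtime "tag" records only that this is a function:
print(type(timestwo).__name__)   # function

# It says nothing about argument or result types -- the mental model
# "int -> int" is not recorded. Optional annotations can make that
# latent type manifest, though Python does not enforce them:
def timestwo_annotated(x: int) -> int:
    return x * 2

print(timestwo_annotated.__annotations__)
# {'x': <class 'int'>, 'return': <class 'int'>}
```

The annotated variant is one example of "incomplete RTTI by designer's choice" being partially filled in after the fact, without the runtime ever checking the recorded types.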
Re: What is Expressiveness in a Computer Language
Anton van Straaten schrieb: It seems we have languages:
- with or without static analysis
- with or without runtime type information (RTTI or tags)
- with or without (runtime) safety
- with or without explicit type annotations
- with or without type inference
Wow. And I don't think that's a complete list, either.

Yup. What else? (Genuinely curious.)

Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Andreas Rossberg schrieb: Luca Cardelli has given the most convincing one in his seminal tutorial "Type Systems", where he identifies "typed" and "safe" as two orthogonal dimensions and gives the following matrix:

         | typed | untyped
  -------+-------+-----------
  safe   | ML    | Lisp
  unsafe | C     | Assembler

Now, jargon "dynamically typed" is simply "untyped safe", while "weakly typed" is "typed unsafe".

Here's a matrix of how most people that I know would fill in the terminology:

                 | Statically     | Not statically
                 | typed          | typed
  ---------------+----------------+----------------
  typesafe       | "strongly      | "Dynamically
                 |  typed"        |  typed"
                 | (ML, Pascal)   | (Lisp)
  ---------------+----------------+----------------
  not typesafe   | (no common     | "untyped"
                 |  terminology)  |
                 | (C)            | (Assembly)

(Terms in quotes are challenged on a regular basis, or rarely if ever applied.)

With the above terminology, it becomes clear that the opposite of "(statically) typed" isn't "statically untyped", but "not statically typed". "Statically typed" and "dynamically typed" aren't even opposites, they just don't overlap.

Another observation: type safeness is more of a spectrum than a clearcut distinction. Even ML and Pascal have ways to circumvent the type system, and even C is typesafe unless you use unsafe constructs. IOW from a type-theoretic point of view, there is no real difference between their "typesafe" and "not typesafe" languages in the statically typed column; the difference is in the amount of unsafe construct usage in practical programs.

Regards, Jo -- http://mail.python.org/mailman/listinfo/python-list
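In Cardelli's terms, Python sits in the "untyped safe" box: there is no static checking, but every runtime misuse of a value is trapped with a well-defined error rather than producing undefined behaviour. A minimal illustration (my own, not from the post):

```python
# "Untyped safe": no compile-time rejection, but the tag mismatch is
# detected and trapped at run time with a defined exception -- unlike
# an unsafe cast in C, which would silently reinterpret the bits.
try:
    result = "2" + 2
except TypeError as e:
    print("trapped:", e)
```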
Re: What is Expressiveness in a Computer Language
In comp.lang.functional Joachim Durchholz [EMAIL PROTECTED] wrote: [...] Even ML and Pascal have ways to circumvent the type system, [...] Show me a Standard ML program that circumvents the type system. -Vesa Karvonen -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joachim Durchholz wrote: Another observation: type safeness is more of a spectrum than a clearcut distinction. Even ML and Pascal have ways to circumvent the type system,

No. I'm not sure about Pascal, but (Standard) ML surely has none. Same with Haskell as defined by its spec. OCaml has a couple of clearly marked unsafe library functions, but no unsafe feature in the language semantics itself.

and even C is typesafe unless you use unsafe constructs.

Tautology. Every language is safe unless you use unsafe constructs. (Unfortunately, you can hardly write interesting programs in any safe subset of C.)

IOW from a type-theoretic point of view, there is no real difference between their "typesafe" and "not typesafe" languages in the statically typed column; the difference is in the amount of unsafe construct usage in practical programs.

Huh? There is a huge, fundamental difference: namely whether a type system is sound or not. A soundness proof is obligatory for any serious type theory, and failure to establish it simply is a bug in the theory.

- Andreas -- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joachim Durchholz wrote: [...] Another observation: type safeness is more of a spectrum than a clearcut distinction. Even ML and Pascal have ways to circumvent the type system, and even C is typesafe unless you use unsafe constructs. IOW from a type-theoretic point of view, there is no real difference between their "typesafe" and "not typesafe" languages in the statically typed column; the difference is in the amount of unsafe construct usage in practical programs.

It's also relevant how straightforward it is to distinguish between safe and unsafe code, how explicit you have to be when you use unsafe code, how likely it is that you accidentally use unsafe code without being aware of it, what the generally accepted conventions in a language community are, etc. pp.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joachim Durchholz wrote: [...] Here's a matrix of how most people that I know would fill in the terminology:

                 | Statically     | Not statically
                 | typed          | typed
  ---------------+----------------+----------------
  typesafe       | "strongly      | "Dynamically
                 |  typed"        |  typed"
                 | (ML, Pascal)   | (Lisp)

Pascal, in most of its implementations, isn't [memory] safe; it's in the same box as C.

  ---------------+----------------+----------------
  not typesafe   | (no common     | "untyped"
                 |  terminology)  |
                 | (C)            | (Assembly)

I have to say that I prefer the labels used by Cardelli.

[...] Another observation: type safeness is more of a spectrum than a clearcut distinction. Even ML and Pascal have ways to circumvent the type system,

Standard ML doesn't (with just the standard basis; no external libraries, and assuming that file I/O cannot be used to corrupt the language implementation).

and even C is typesafe unless you use unsafe constructs. IOW from a type-theoretic point of view, there is no real difference between their "typesafe" and "not typesafe" languages in the statically typed column; the difference is in the amount of unsafe construct usage in practical programs.

It is quite possible to design languages that have absolutely no unsafe constructs, and are still useful.

--
David Hopwood [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris F Clark wrote: Chris Smith [EMAIL PROTECTED] writes: Unfortunately, I have to again reject this idea. There is no such restriction on type theory. Rather, the word type is defined by type theorists to mean the things that they talk about.

Do you reject that there could be something more general than what a type theorist discusses? Or do you reject calling such things a type?

Later you write: because we could say that anything that checks types is a type system, and then worry about verifying that it's a sound type system without worrying about whether it's a subset of the perfect type system. I'm particularly interested in whether something unsound (and perhaps ambiguous) could be called a type system.

Yes, but not a useful one. The situation is the same as with unsound formal systems; they still satisfy the definition of a formal system.

--
David Hopwood [EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list