On 1/3/2012 8:04 PM, Devin Jeanpierre wrote:

>> Can you give an actual example of such a definition of 'strongly typed
>> object' that excludes Python objects?

> I'd love to, but I neglected to above because I don't really know
> where we share ideas, and I wouldn't want to jump ahead and say
> something that seems outlandish. I hope I've done a decent enough job
> below. Stop me if I say something you don't agree with.

> Here's a short version: The term often refers to a continuum, where
> some languages are "stronger" than others, but there is no objective
> definition of "strong" -- it's like asking how big is "big", or how
> smart is "smart".

Fine so far. cold/hot, good/bad, etc. are false dichotomies. Better to use these as comparatives -- better/worse than average, or than the middle 1/2. I don't use 'strong type' unless someone else does, and maybe I should stop being suckered into using it even then ;-).

> The actual choice of what continuum strength refers
> to varies between people -- some specify it as "reinterpretation of
> bits in memory", in which case every dynamic language in the world is
> strongly typed (so I'll move on from that),

Fine.

> or some as "implicit
> conversions between types", which is iffy because any function or
> overloaded operator can silently convert types (at least in Python),

Since Python does not 'silently convert types' as I understand those 3 words, you lose me here. Can you give a code example of what you mean?

> meaning the language is as weak as the user (?). But they're both kind
> of reasonable and held by a few people, and it's easy to see how maybe
> somebody used to significantly more strict rules (OCaml, for example)
> might accuse Python of not being all that strong.  So you see it isn't
> really a change of definition, it's a redrawing of the lines on a
> continuum. The OCaml people are reasonable for saying what they say,
> because their point of reference is their favorite language, just as
> ours is ours. To us, maybe Haskell and OCaml are insanely
> strong/"strict", to them, we're insanely weak.

I am aware that typed names lead to typed function signatures and that some languages in the ML family even type operators, so that different operators are required for 1+2 and 1.0+2.0 and mixed operations like 1+2.0 are prohibited. But we are talking about the typing of objects here.

> Here's another take, more in line with what I really meant: the
> continuums expressed above aren't universal either. IME they're chosen
> by dynamic typing advocates, which are worried about code readability
> and certain styles of immensely common errors that come from
> lower-level languages like C. Static typing advocates (at least in the
> functional programming language family) are instead concerned about
> correctness. For them, type systems are a way to reduce the number of
> errors in your code _in general_, not just the specific cases of type
> coercion or memory-safety. In this particular sense, Python is not
> very far from C

I see the object model of Python as being very different from the data model of C. The class of an object is an internal attribute of the object while the type of a block of bits is an external convention enforced by the compiler. I see Python's objects as more similar to the objects of many functional languages, except that Python has mutable objects.

...

> Type systems exist which are much "stronger" in what they can protect
> you against. As an extreme example, one might take a dependently-typed
> language. A statically dependently typed language can protect you
> against out-of-bounds array access with a proof at compile time. In

I am familiar with the length-parameterized array types of Pascal (which seem in practice to be a nuisance), but I see from
https://en.wikipedia.org/wiki/Dependent_type
that there is a lot more to the subject.

> fact, a statically-dependently-typed language can do things like prove

sometimes, though not always,

> that two different implementations of the same function are in fact
> implementations of the same function -- this is neat for implementing
> a non-recursive version of a recursive function, and verifying the
> change doesn't influence semantics.

Using induction, I can prove, for instance, that these two functions

def f_tr(n, val=base):  # tail recursive; `base` and `rec` assumed defined
    if n:
        return f_tr(n-1, rec(n, val))
    else:
        return val

def f_wi(n, val=base):  # while iterative
    while n:
        n, val = n-1, rec(n, val)
    return val

are equivalent, assuming enough stack and normal procedural Python semantics. (And assuming no typos ;-).

f_tr(0) == base == f_wi(0)
f_tr(n+1) == f_tr(n+1, base) == f_tr(n, rec(n+1, base))
== (by the inductive hypothesis)
f_wi(n, rec(n+1, base)) == f_wi(n+1, base) == f_wi(n+1)
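The equivalence can also be spot-checked mechanically. Here is a quick sketch with an arbitrary, made-up choice of `base` and `rec` (the proof above is, of course, generic in both):

```python
# Arbitrary instantiation of the assumed `base` and `rec`;
# any choice would do, since the inductive proof is generic.
base = 0
def rec(n, val):
    return val + n * n  # accumulate squares, say

def f_tr(n, val=base):  # tail recursive
    if n:
        return f_tr(n - 1, rec(n, val))
    else:
        return val

def f_wi(n, val=base):  # while iterative
    while n:
        n, val = n - 1, rec(n, val)
    return val

# Spot-check the equivalence for small n (well within stack limits).
assert all(f_tr(n) == f_wi(n) for n in range(50))
assert f_tr(5) == f_wi(5) == 1 + 4 + 9 + 16 + 25  # == 55
```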

So it is not clear to me what such proofs have to do with types as I understand the term.

> It's definitely true that there is something here that Python's types
> cannot do, at least not without extremely heavy mangling
> (technically __instancecheck__ means that type objects are Turing
> complete, but I request that we ignore that). These are the sorts of
> types that can, in a very real sense, replace unit tests. Just the
> fact that your program typechecks is proof that certain properties
> hold, and those properties can be very soothing, such as that example
> of proving two algorithms return the same output.
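For reference, a minimal sketch of the __instancecheck__ hook mentioned above -- a made-up 'Positive' pseudo-type whose isinstance() test is an arbitrary runtime predicate:

```python
class PositiveMeta(type):
    # isinstance(x, Positive) delegates here, so the "type"
    # can encode an arbitrary runtime predicate.
    def __instancecheck__(cls, obj):
        return isinstance(obj, (int, float)) and obj > 0

class Positive(metaclass=PositiveMeta):
    pass

assert isinstance(3, Positive)
assert not isinstance(-3, Positive)
assert not isinstance("3", Positive)
```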

I can imagine that if one overloads 'type' with enough extra information, the procedural reasoning involved in such proofs might reduce to a more calculational demonstration. The question is whether the months and years of intellectual investment required on the front end pay off in reduced intellectual effort across several applications.

I happen to be working on a book on inductive algorithms that will include several such equivalences like that above. I will stick with the sort of proof that I can write and that most can understand.

> One last note about static vs dynamic vs strong typing: I do believe
> that a strongly typed dynamic+statically typed language is possible

Cython, when its superset features are used, is an example.

> and a good thing. For statically typed code, run-time checks are
> disabled (speed),

I believe Cython disables the run-time checks that it can tell are unnecessary. For instance, "for i in range(len(array)): array[i]" does not need a runtime check that i is in range(len(array)), so Cython can convert array[i] to its unchecked C equivalent.

> for dynamically typed code they are enabled, and all
> the while you can make relatively sophisticated queries about the type
> of a thing using an interactive interpreter (somewhat similar to
> unifying things in the Prolog interpreter, in my view).


> So now that we've gotten this far, here's my proposition: It is
> reasonable to consider type systems in terms of how many errors they
> eliminate in your programs, rather than just readability/memory-safety

An interesting and, for me, different viewpoint. I tend to be interested in correctness through understanding, or, if you will, logical correctness. Textual correctness, especially with what I do with Python, is a secondary issue.

Memory-safety does not directly concern me, since I am not writing or running servers. But it is a major target of malware.

To really eliminate errors, we need a multiplicity of types. We need not just floats, but non-negative floats, positive floats, and 'complex' floats that get promoted to complex as necessary. But the checks have to be dynamic, since in general, only constants can be statically checked at compile time. Or you need relative 'x is bigger than y' pair types so that x-y can be determined to be positive.

Hmmm. I guess that one could say that such a statement in the course of a proof amounts to a type statement. And the same for loop invariants.
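A dynamically checked "non-negative float" of the sort described above can be sketched in a few lines of Python; the class name and the check are made up for illustration:

```python
class NonNegFloat(float):
    """A float constrained at construction time to be >= 0.

    The check is necessarily dynamic: in general, the value being
    wrapped is known only at run time, not at compile time.
    """
    def __new__(cls, value):
        value = float(value)
        if value < 0:
            raise ValueError(f"expected non-negative, got {value}")
        return super().__new__(cls, value)

x = NonNegFloat(2.5)       # fine
assert x + 1 == 3.5        # behaves like an ordinary float
try:
    NonNegFloat(-1.0)      # the dynamic check fires
except ValueError:
    pass
```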

> concerns. Also, it is reasonable to have relatively strict standards,
> if you're used to a very "strong" type system. Such a person might,
> very reasonably, decide that Python is quite weak.

Yes, only objects are (strongly ;-) typed. Names are untyped. Function parameters (signatures) are only directly typed as to whether the corresponding arguments are required or not and whether the matching of arguments and parameters is by position, name, or both. The 'type' of Python is something like "object-class based, procedural syntax, certain functional features". Strong/weak has nothing to do with Python itself.
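To illustrate: the class travels with the object, while a name can be rebound to objects of any class.

```python
x = 3
assert type(x) is int         # the object 3 carries its class with it
assert (3).__class__ is int   # the class is an attribute of the object

x = "three"                   # the name x has no type of its own;
assert type(x) is str         # it simply rebinds to a str object
```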

> I admit that a lot of what I said here is putting words in other
> people's mouths; if you object, pretend I said it instead.

No objections except as noted.

> I am, after all, a fan of ATS at the moment, and thus have some
> interest in ridiculously strongly typed languages ;)

I see that in the list in the Wikipedia article.

> *phew*. That took a lot of time. I hope I haven't lost your attention.

No. I might even have learned something.

> [ An example of a simple dependently typed program:
> http://codepad.org/eLr7lLJd ]

Just got it after a minute delay.

--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list
