On Mon, 12 Dec 2005, Steven D'Aprano wrote:
> On Sun, 11 Dec 2005 05:48:00 -0800, bonono wrote:
>> And I don't think Haskell makes the programmer do a lot of work (just
>> because of its static type checking at compile time).
> I could be wrong, but I think Haskell is *strongly* typed (just like
> Python), not *statically* typed.
Haskell is strongly and statically typed - very strongly and very
statically!
However, what it's not is manifestly typed - you don't have to put the
types in yourself; rather, the compiler works it out. For example, if i
wrote code like this (using python syntax):
def f(x):
    return 1 + x
The compiler would think "well, he takes some value x, and he adds it to 1
and 1 is an integer, and the only thing you can add to an integer is
another integer, so x must be an integer; he returns whatever 1 + x works
out to, and 1 and x are both integers, and adding two integers makes an
integer, so the return type must be integer", and concludes that you meant
(using Guido's notation):
def f(x: int) -> int:
    return 1 + x
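In actual Haskell syntax, a sketch of the same function might look like the following (a hedged illustration: because + is overloaded through the Num type class, GHC really infers the more general type `Num a => a -> a` unless you pin it down with an annotation, so the int-only story above is a simplification):

```haskell
-- The f above, written as real Haskell with an explicit annotation.
-- Delete the signature and GHC instead infers `Num a => a -> a`,
-- since (+) is overloaded via the Num type class.
f :: Int -> Int
f x = 1 + x
```

Loading this in GHCi and asking `:t f` reports the annotated type; without the signature, `:t f` shows the inferred, more general one.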
Note that this still buys you type safety:
def g(a, b):
    c = "{" + a + "}"
    d = 1 + b
    return c + d
The compiler works out that c must be a string and d must be an int, then,
when it gets to the last line, finds an expression that must be wrong, and
refuses to accept the code.
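Here's a sketch of g in real Haskell (the names and the final `show` are mine). As written below it compiles, because d is converted to a String explicitly; replacing `show d` with a bare `d` reproduces exactly the compile-time rejection described above:

```haskell
-- c is forced to be a String, d to be an Int.
g :: String -> Int -> String
g a b =
  let c = "{" ++ a ++ "}"  -- (++) joins strings, so c :: String
      d = 1 + b            -- (+) with an Int argument, so d :: Int
  in c ++ show d           -- `c ++ d` would be a type error, caught at compile time
```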
This sounds like it wouldn't work for complex code, but somehow, it does.
And somehow, it works for:

def f(x):
    return x + 1

too. I think this is due to the lack of polymorphic operator overloading.
A key thing is that Haskell supports, and makes enormous use of, a
powerful system of generic types; with:
def h(a):
    return a + a
There's no way to infer concrete types for h or a, so Haskell gets
generic; it says "okay, so i don't know what type a is, but it's got to be
something, so let's call it alpha; we're adding two alphas, and one thing
i know about adding is that adding two things of some type makes a new
thing of that type, so the type of some-alpha + some-alpha is alpha, so
this function returns an alpha". ISTR that alpha gets written 'a, so this
function is:
def h(a: 'a) -> 'a:
    return a + a
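In Haskell itself, the story is slightly richer than the plain ML-style generics sketched above: because + belongs to the Num type class, GHC infers a *constrained* generic type for h, roughly "any type a that supports arithmetic". A sketch:

```haskell
-- GHC infers this type for `h a = a + a` on its own; the signature
-- just makes it visible. `Num a =>` means: any type a with (+).
h :: Num a => a -> a
h a = a + a
```

The same h then works at Int, Double, and any other Num instance.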
Although that syntax might be from ML. This extends to more complex
cases, like:
def i(a, b):
    return [a, b]
In Haskell, you can only make lists of a homogeneous type, so the compiler
deduces that, although it doesn't know what type a and b are, they must be
the same type, and the return value is a list of that type:
def i(a: 'a, b: 'a) -> ['a]:
    return [a, b]
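The same example in Haskell's own syntax; GHC infers precisely the type described above, with the 'a written as a type variable a:

```haskell
-- Both arguments must share one type a, and the result is a list of a.
-- GHC infers this signature unaided.
i :: a -> a -> [a]
i a b = [a, b]
```

Note this type is fully generic, with no Num-style constraint, because making a list doesn't require anything of the element type.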
And so on. I don't know Haskell, but i've had long conversations with a
friend who does, which is where i've got this from. IANACS, and this could
all be entirely wrong!
At least the "What Is Haskell?" page at haskell.org describes the
language as strongly typed, non-strict, and allowing polymorphic typing.
When applied to functional languages, 'strict' (or 'eager') means that
expressions are evaluated as soon as they are formed; 'non-strict' (or
'lazy') means that expressions can hang around as expressions for a while,
or even not be evaluated all in one go. Laziness is really a property of
the implementation, not the language - in an idealised pure functional
language, i believe that a program can't actually tell whether the
implementation is eager or lazy. However, it matters in practice, since a
lazy language can do things like manipulate infinite lists.
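A small Haskell sketch of that last point: the infinite list below is perfectly fine to define, because laziness means only the prefix you actually demand ever gets evaluated:

```haskell
-- An infinite list of naturals. Defining it costs nothing;
-- evaluation happens only as elements are demanded.
naturals :: [Integer]
naturals = [0 ..]

-- take forces just the first five elements; the rest stays unevaluated.
firstFive :: [Integer]
firstFive = take 5 naturals
```

In an eager language, the definition of naturals would loop forever before `take` ever got a chance to run.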
tom
--
ø¤º°`°º¤ø,,,,ø¤º°`°º¤ø,,,,ø¤º°`°º¤ø,,,,ø¤º°`°º¤ø
--
http://mail.python.org/mailman/listinfo/python-list