Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-22 Thread Hendrik van Rooyen

 Steve Holden [EMAIL PROTECTED] wrote:

 The trouble there, though, is that although COBOL was comprehensible (to 
 a degree) relatively few people have the rigor of thought necessary to 
 construct, or even understand, an algorithm of any kind.

This is true - and in my experience the competent people come in two classes -
those that are good at mathematics, and those that are good at languages.

If you find someone good at both, employ her.

- Hendrik



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-21 Thread Paul Rubin
Diez B. Roggisch [EMAIL PROTECTED] writes:
  Come on, this is real-time embedded software,
 Since when did we restrict ourselves to such an environment? I was under the
 impression that this thread is about the merits and capabilities of static
 type-checking?

One branch of the discussion brought up Spark Ada, and I think the
cited article said they used the Ravenscar profile, which means
embedded applications.  

 No, I don't buy that. As I said before: for the most trivial of
 tasks that might be a good thing. But I'm not convinced that
 restricting oneself in such a manner for more complex tasks isn't
 going to be a better development path.

There are applications where you care first and foremost about the
reliability of the end result, and accept whatever difficulty that
might cause for the development path.  Not all programming is Visual
Basic.


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-21 Thread Steve Holden
Paul Rubin wrote:
 Steve Holden [EMAIL PROTECTED] writes:
 The issue I have with correctness proofs (at least as they were
 presented in the 1980s - for all I know the claims may have become
 more realistic now) is that the proof of correctness can only relate
 to some highly-formal specification so complex that it's only
 comprehensible to computer scientists.
 
 Just like software as it is now, is only comprehensible to
 programmers.  A long while back there was a language called COBOL
 designed to be comprehensible to non-programmers, and it actually did
 catch on, but it was cumbersome and limited even in its time and in
 its application era, not so readable to non-programmers after all, and
 is not used much in new projects nowadays.
 
The trouble there, though, is that although COBOL was comprehensible (to 
a degree) relatively few people have the rigor of thought necessary to 
construct, or even understand, an algorithm of any kind.

 In other words, we can never prove that a program does what the
 customer wants, because the semantics are communicated in different
 ways, involving a translation which, since it is necessarily performed
 by humans using natural language, will always be incomplete at best
 and inaccurate at worst.
 
 The types of things one tries to prove are similar to what is less
 formally expressed as localized test cases in traditional software.
 E.g. for a sorting routine you'd want to prove that output[n+1] >=
 output[n] for all n and for all inputs.  As the Intel FDIV bug
 incident reminds us, even billions of test inputs are not enough to
 show that the routine does the right thing for EVERY input.

Well I can buy that as a reasonable answer to my point. The early claims 
of the formal proof crowd were somewhat overblown and unrealistic, but 
that certainly doesn't negate the value of proofs such as the one you 
describe.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden
--- Asciimercial --
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
--- Thank You for Reading -



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-21 Thread Kay Schluehr
On Jul 20, 6:28 pm, Paul Rubin [EMAIL PROTECTED] wrote:

 As the Intel FDIV bug
 incident reminds us, even billions of test inputs are not enough to
 show that the routine does the right thing for EVERY input.

If I remember correctly, the FDIV bug was due to a wrongly filled
lookup table and occurred only for certain bit patterns in the divisor.
I'm not sure how a formal proof of the structure of the algorithm
could help here. Intel repaired the table, i.e. the data, not the
algorithm that acted upon it.



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-21 Thread Paul Rubin
Kay Schluehr [EMAIL PROTECTED] writes:
 If I remember correctly, the FDIV bug was due to a wrongly filled
 lookup table and occurred only for certain bit patterns in the divisor.
 I'm not sure how a formal proof of the structure of the algorithm
 could help here. Intel repaired the table, i.e. the data, not the
 algorithm that acted upon it.

I would expect the table contents to be part of the stuff being
proved.  I hear that they do use formal proofs for their floating
point stuff now, as does AMD.  AMD apparently uses ACL2 (a theorem
prover written in Common Lisp, info about AMD is from the ACL2 web
site) but I don't know what Intel uses.  I've figured that the FDIV
problem was one of the motivations for this but I'm not certain of it.


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-21 Thread John Nagle
Paul Rubin wrote:
 Kay Schluehr [EMAIL PROTECTED] writes:
 
If I remember correctly, the FDIV bug was due to a wrongly filled
lookup table and occurred only for certain bit patterns in the divisor.
I'm not sure how a formal proof of the structure of the algorithm
could help here. Intel repaired the table, i.e. the data, not the
algorithm that acted upon it.

No, Intel had assumed a symmetry that wasn't actually true.

 I would expect the table contents to be part of the stuff being
 proved.  I hear that they do use formal proofs for their floating
 point stuff now, as does AMD.  AMD apparently uses ACL2 (a theorem
 prover written in Common Lisp, info about AMD is from the ACL2 web
 site) but I don't know what Intel uses.  I've figured that the FDIV
 problem was one of the motivations for this but I'm not certain of it.

 Yes, here are two of the papers describing proofs of AMD
floating point unit correctness.

Division: http://citeseer.ist.psu.edu/53148.html

Square root: http://portal.acm.org/citation.cfm?id=607571

A longer paper:

http://www.computationallogic.com/news/amd.html

This stuff is hard to do, but notice that AMD hasn't had any recalls
for floating point problems.

John Nagle


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Kay Schluehr
On Jul 20, 6:45 am, John Nagle [EMAIL PROTECTED] wrote:
 Juergen Erhard wrote:
  On proving programs correct... from my CS study days I distinctly
  remember thinking "sure, you can prove it correct, but you cannot do
  actual useful stuff with it".  We might have come a long way since
  then (late 80s :P), but I don't hold out much hope (especially since
  the halting problem does exist, and forever will).

  The halting problem only applies to systems with infinite memory.

Sure. But knowing that memory is limited doesn't buy you much because
you achieve an existential proof at best: you can show that the
program must run out of memory but you have to run the program to know
where this happens for arbitrary input values. Moreover you always get
different values for different memory limits. So in the general case
you can't improve over just letting the program run and notice what
happens.



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Diez B. Roggisch
Paul Rubin schrieb:
 Diez B. Roggisch [EMAIL PROTECTED] writes:
 For example, SPARK does not support dynamic allocation of memory so
 things such as pointers and heap variables are not supported.
 
 Right, Spark uses a profile intended for embedded systems, so
 no unpredictable gc delays etc.
 
 Which is not to say that trivial code couldn't have errors, and if
 it's extremely crucial code, it's important that it has no errors. But
 all I can see is that you can create trustable, simple sub-systems in
 such a language. And by building upon them, you can ensure that at
 least these won't fail.
 
 Yes, that's the usual approach.
 
 But to stick with the example: if the path-planning of the UAV that
 involves tracking a not before-known number of enemy aircrafts steers
 the UAV into the ground, no proven-not-to-fail radius calculation will
 help you.
 
 Whether the program uses dynamic memory allocation or not, there
 either has to be some maximum number of targets that it can be
 required to track, or else it's subject to out-of-memory errors.
 If a maximum number is specified, then memory requirements can
 be computed from that.

In other words: you pre-allocate everything and can then prove what - 
that the program is going to stop working?

  What does that buy you - where is "I'm crashed because I ran out of 
memory trying to evade the seventh MiG" better than "sorry, you will be 
shot down because I'm not capable of processing more enemy fighters - 
but hey, at least I'm still here to tell you!"?

Sorry, but if that's supposed to make a point for static typechecking, I 
don't buy it. Crippling the execution model so that it fits a 
proving-scheme which essentially says "don't do anything complex with 
me" is no good.

Diez


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Paul Rubin
Kay Schluehr [EMAIL PROTECTED] writes:
 Sure. But knowing that memory is limited doesn't buy you much because
 you achieve an existential proof at best: you can show that the
 program must run out of memory but you have to run the program to know
 where this happens for arbitrary input values. Moreover you always get
 different values for different memory limits. So in the general case
 you can't improve over just letting the program run and notice what
 happens.

Program verification is nothing like the halting problem.  It's not
done by dropping the program into some tool and getting back a proof
of correctness or incorrectness.  The proof is developed, by the
programmer, at the same time the program is developed.  Verifiability
means that the programmer-supplied proof is precise enough to be
checked by the computer.

Think of when you first learned to program.  Most non-programmers have
an idea of what it means to follow a set of steps precisely.  For
example they could tell you an algorithm for sorting a shelf full of
books alphabetically by author.  Programming means expressing such
steps in much finer detail than humans usually require, to the point
where a machine can execute it; it takes a lot of patience and
knowledge of special techniques, more than non-programmers know how to
do, but we programmers are used to it and even enjoy it.

Now think of something like a code review.  There is a line of code in
your program (maybe even an assert statement), that depends on some
variable x being greater than 3.  You must have had a reason for
thinking x3 there, so if the reviewer asks why you thought that, you
can explain your argument until the reviewer is convinced.

Do you see where this is going?  Code verification means expressing
such an argument in much finer detail than humans usually require, to
the point where a machine can be convinced by it; it takes a lot of
patience and knowledge of special techniques, more than most of us
regular practical programmers currently know how to do, but the
methods are getting more accessible and are regularly used in
high-assurance software.  In any case there is nothing magic about
it--just like programming, it's a means of expressing precisely to a
machine what you already know informally how to express to a human.


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Paul Rubin
Diez B. Roggisch [EMAIL PROTECTED] writes:
   What does that buy you - where is "I'm crashed because I ran out of
 memory trying to evade the seventh MiG" better than "sorry, you will
 be shot down because I'm not capable of processing more enemy
 fighters - but hey, at least I'm still here to tell you!"?

Come on, this is real-time embedded software, not some Visual Basic
app running in Windows.  The internal table sizes etc. are chosen
based on the amount of physical memory available, which has tended to
be pretty small until maybe fairly recently.


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Steve Holden
John Nagle wrote:
 Juergen Erhard wrote:
 On proving programs correct... from my CS study days I distinctly
 remember thinking "sure, you can prove it correct, but you cannot do
 actual useful stuff with it".  We might have come a long way since
 then (late 80s :P), but I don't hold out much hope (especially since
 the halting problem does exist, and forever will).
 
 The halting problem only applies to systems with infinite memory.
 
 Proof of correctness is hard, requires extensive tool support, and
 increases software development costs.  But it does work.
 
 Check out the Spec# effort at Microsoft for current work.
 Work continues in Europe on Extended Static Checking for Java,
 which was developed at DEC before DEC disappeared.
 
The issue I have with correctness proofs (at least as they were 
presented in the 1980s - for all I know the claims may have become more 
realistic now) is that the proof of correctness can only relate to some 
highly-formal specification so complex that it's only comprehensible to 
computer scientists.

In other words, we can never prove that a program does what the 
customer wants, because the semantics are communicated in different 
ways, involving a translation which, since it is necessarily performed 
by humans using natural language, will always be incomplete at best and 
inaccurate at worst.

regards
  Steve


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Diez B. Roggisch
Paul Rubin wrote:

 Diez B. Roggisch [EMAIL PROTECTED] writes:
   What does that buy you - where is "I'm crashed because I ran out of
 memory trying to evade the seventh MiG" better than "sorry, you will
 be shot down because I'm not capable of processing more enemy
 fighters - but hey, at least I'm still here to tell you!"?
 
 Come on, this is real-time embedded software, not some Visual Basic
 app running in Windows.  The internal table sizes etc. are chosen
 based on the amount of physical memory available, which has tended to
 be pretty small until maybe fairly recently.

Since when did we restrict ourselves to such an environment? I was under the
impression that this thread is about the merits and capabilities of static
type-checking?

Besides, even if I accept these constraints - I still think the question is
valid: what good is a proof if it essentially proves that the code is
severely limited? After all, memory-allocation _might_ create a crash
depending on the actual circumstances - but it might not. Then, you trade a
mere possibility against a certain limitation.

No, I don't buy that. As I said before: for the most trivial of tasks that
might be a good thing. But I'm not convinced that restricting oneself in
such a manner for more complex tasks isn't going to be a better development
path.

Diez


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread Paul Rubin
Steve Holden [EMAIL PROTECTED] writes:
 The issue I have with correctness proofs (at least as they were
 presented in the 1980s - for all I know the claims may have become
 more realistic now) is that the proof of correctness can only relate
 to some highly-formal specification so complex that it's only
 comprehensible to computer scientists.

Just like software as it is now, is only comprehensible to
programmers.  A long while back there was a language called COBOL
designed to be comprehensible to non-programmers, and it actually did
catch on, but it was cumbersome and limited even in its time and in
its application era, not so readable to non-programmers after all, and
is not used much in new projects nowadays.

 In other words, we can never prove that a program does what the
 customer wants, because the semantics are communicated in different
 ways, involving a translation which, since it is necessarily performed
 by humans using natural language, will always be incomplete at best
 and inaccurate at worst.

The types of things one tries to prove are similar to what is less
formally expressed as localized test cases in traditional software.
E.g. for a sorting routine you'd want to prove that output[n+1] >=
output[n] for all n and for all inputs.  As the Intel FDIV bug
incident reminds us, even billions of test inputs are not enough to
show that the routine does the right thing for EVERY input.
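As a sketch of the gap being described (illustrative Python, not from the thread): a test can only sample finitely many inputs against the sortedness property, whereas a proof covers every input.

```python
import random

def is_sorted(output):
    # The property one would prove: output[n] <= output[n + 1] for all n.
    return all(output[n] <= output[n + 1] for n in range(len(output) - 1))

# Testing checks the property on a finite sample -- never on EVERY input.
for _ in range(1000):
    xs = [random.randrange(1000) for _ in range(50)]
    assert is_sorted(sorted(xs))
```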


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-20 Thread John Nagle
Paul Rubin wrote:
 Kay Schluehr [EMAIL PROTECTED] writes:
 
Sure. But knowing that memory is limited doesn't buy you much because
you achieve an existential proof at best: you can show that the
program must run out of memory but you have to run the program to know
where this happens for arbitrary input values. Moreover you always get
different values for different memory limits. So in the general case
you can't improve over just letting the program run and notice what
happens.
 
 
 Program verification is nothing like the halting problem.  It's not
 done by dropping the program into some tool and getting back a proof
 of correctness or incorrectness.  The proof is developed, by the
 programmer, at the same time the program is developed.  Verifiability
 means that the programmer-supplied proof is precise enough to be
 checked by the computer.
 
 Think of when you first learned to program.  Most non-programmers have
 an idea of what it means to follow a set of steps precisely.  For
 example they could tell you an algorithm for sorting a shelf full of
 books alphabetically by author.  Programming means expressing such
 steps in much finer detail than humans usually require, to the point
 where a machine can execute it; it takes a lot of patience and
 knowledge of special techniques, more than non-programmers know how to
 do, but we programmers are used to it and even enjoy it.
 
 Now think of something like a code review.  There is a line of code in
 your program (maybe even an assert statement), that depends on some
 variable x being greater than 3.  You must have had a reason for
 thinking x > 3 there, so if the reviewer asks why you thought that, you
 can explain your argument until the reviewer is convinced.
 
 Do you see where this is going?  Code verification means expressing
 such an argument in much finer detail than humans usually require, to
 the point where a machine can be convinced by it; it takes a lot of
 patience and knowledge of special techniques, more than most of us
 regular practical programmers currently know how to do, but the
 methods are getting more accessible and are regularly used in
 high-assurance software.  In any case there is nothing magic about
 it--just like programming, it's a means of expressing precisely to a
 machine what you already know informally how to express to a human.

 Ah. At last, an intelligent comment about program proving.

 One of the biggest problems with proof of correctness is that,
as a first step, you need a language where you don't have to lie to
the language.  That is, the language must have sufficient power to
express even the ugly stuff.

 Python isn't bad in that regard.  A good example is the struct
module, where you can explicitly convert a string of bytes into a
Python tuple.  That's an inherently risky operation, but Python has
a safe and unambiguous way to express it that doesn't depend on the
machine representation of Python types.  In C and C++, this is often
done using casts, and what's going on is neither explicit nor checked.
Note that it's quite possible to implement something like the struct
module for C, and it could even be compiled into hard code.  (GCC,
for example, knows about printf strings at compile time, so that's
not an unacceptable idea.)
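For example (an illustrative sketch using today's struct API), the byte layout is spelled out in a format string rather than hidden in a cast:

```python
import struct

# '>Ihh': big-endian, one unsigned 32-bit int, then two signed 16-bit ints.
raw = b'\x00\x00\x01\x00\xff\xff\x00\x2a'
fields = struct.unpack('>Ihh', raw)
print(fields)  # (256, -1, 42)
```

The format string is checked against the length of the input, so a wrong layout fails loudly instead of silently misreading memory.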

 The big problem with C and C++ in this area is the array=pointer
convention.  Again, that's lying to the compiler.  Programmers have to
write "char *p", which is "a pointer to a char", when they mean "an array
of characters".  That's inherently wrong.  It was a good idea in the 1970s
when it was developed, but we know better now.

 If you were doing proofs of correctness for Python, the biggest headache
would be checking for all the places where some object like a module or function
might be dynamically modified and making sure nobody was patching the code.

 Enough on this issue, though.

John Nagle


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-19 Thread Juergen Erhard
On Thu, Jul 12, 2007 at 11:26:22AM -0700, Paul Rubin wrote:
 
 Guy Steele used to describe functional programming -- the evaluation
 of lambda-calculus without side effects -- as "separation of Church
 and state", a highly desirable situation ;-).
 
 (For non-FP nerds, the above is a pun referring to Alonzo Church, who
 invented lambda calculus in the 1920's or so).

Wow, I didn't realize I was an FP nerd :)

On proving programs correct... from my CS study days I distinctly
remember thinking "sure, you can prove it correct, but you cannot do
actual useful stuff with it".  We might have come a long way since
then (late 80s :P), but I don't hold out much hope (especially since
the halting problem does exist, and forever will).

Bye, J


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-19 Thread John Nagle
Juergen Erhard wrote:
 On proving programs correct... from my CS study days I distinctly
 remember thinking "sure, you can prove it correct, but you cannot do
 actual useful stuff with it".  We might have come a long way since
 then (late 80s :P), but I don't hold out much hope (especially since
 the halting problem does exist, and forever will).

The halting problem only applies to systems with infinite memory.

Proof of correctness is hard, requires extensive tool support, and
increases software development costs.  But it does work.

Check out the Spec# effort at Microsoft for current work.
Work continues in Europe on Extended Static Checking for Java,
which was developed at DEC before DEC disappeared.

John Nagle


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-16 Thread Paul Rubin
Diez B. Roggisch [EMAIL PROTECTED] writes:
 For example, SPARK does not support dynamic allocation of memory so
 things such as pointers and heap variables are not supported.

Right, Spark uses a profile intended for embedded systems, so
no unpredictable gc delays etc.

 Which is not to say that trivial code couldn't have errors, and if
 it's extremely crucial code, it's important that it has no errors. But
 all I can see is that you can create trustable, simple sub-systems in
 such a language. And by building upon them, you can ensure that at
 least these won't fail.

Yes, that's the usual approach.

 But to stick with the example: if the path-planning of the UAV that
 involves tracking a not before-known number of enemy aircrafts steers
 the UAV into the ground, no proven-not-to-fail radius calculation will
 help you.

Whether the program uses dynamic memory allocation or not, there
either has to be some maximum number of targets that it can be
required to track, or else it's subject to out-of-memory errors.
If a maximum number is specified, then memory requirements can
be computed from that.
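A toy illustration (all numbers hypothetical) of the point: once a maximum is specified, the worst-case memory requirement is a constant known up front rather than a run-time gamble:

```python
MAX_TARGETS = 64        # hypothetical specified maximum
BYTES_PER_TRACK = 48    # hypothetical size of one tracking record

# No dynamic allocation: the table is pre-allocated at its maximum size,
# so the worst-case memory use is known before the program ever runs.
worst_case_bytes = MAX_TARGETS * BYTES_PER_TRACK
track_table = [None] * MAX_TARGETS

print(worst_case_bytes)  # 3072
```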


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-14 Thread Paul Rubin
Ben Finney [EMAIL PROTECTED] writes:
 This seems to make the dangerous assumption that the programmer has
 the correct program in mind, and needs only to transfer it correctly
 to the computer.

Well, I hope the programmer can at least state some clear
specifications that a correct program should implement, e.g. it should
not freeze or crash.

 I would warrant that anyone who understands exactly how a program
 should work before writing it, and makes no design mistakes before
 coming up with a program that works, is not designing a program of any
 interest.

I'm not sure what the relevance of this is.  Programmers generally
have an idea of what it is that they're trying to write.  They then
have to make a bunch of design and implementation decisions as they go
along.  Sometimes those decisions are subtly incorrect, and the more
errors the computer can identify automatically, the fewer errors are
likely to remain in the program.  

 Certainly. Which is why the trend continues toward developing programs
 such that mistakes of all kinds cause early, obvious failure -- where
 they have a much better chance of being caught and fixed *before*
 innocent hands get ahold of them.

Here is what Edward Lee wrote about developing a multi-threaded Java
app (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf):

A part of the Ptolemy Project experiment was to see whether
effective software engineering practices could be developed for an
academic research setting. We developed a process that included a
code maturity rating system (with four levels, red, yellow, green,
and blue), design reviews, code reviews, nightly builds,
regression tests, and automated code coverage metrics [43]. The
portion of the kernel that ensured a consistent view of the
program structure was written in early 2000, design reviewed to
yellow, and code reviewed to green. The reviewers included
concurrency experts, not just inexperienced graduate students
(Christopher Hylands (now Brooks), Bart Kienhuis, John Reekie, and
myself were all reviewers). We wrote regression tests that
achieved 100 percent code coverage. The nightly build and
regression tests ran on a two processor SMP machine, which
exhibited different thread behavior than the development machines,
which all had a single processor.  The Ptolemy II system itself
began to be widely used, and every use of the system exercised
this code. No problems were observed until the code deadlocked on
April 26, 2004, four years later.

It is certainly true that our relatively rigorous software
engineering practice identified and fixed many concurrency
bugs. But the fact that a problem as serious as a deadlock that
locked up the system could go undetected for four years despite
this practice is alarming. How many more such problems remain? How
long do we need to test before we can be sure to have discovered all
such problems? Regrettably, I have to conclude that testing may
never reveal all the problems in nontrivial multithreaded code.

Now consider that the code you test today on a 2-core processor may be
running on a 100-core processor in 4 years, making any concurrency
errors that much worse.  You can't test on the 100-core cpu today
because it doesn't exist yet, but your code still has to run on it
correctly once it's out there.  Heck, if you're writing in CPython (no
parallelism at all, due to the GIL) you might be in for a surprise as
soon as real parallelism arrives on even 2 cores.  It gets worse,
maybe you're writing operating system or language runtime code
(e.g. parallel garbage collector for Java).  That means there will be
hostile code on the same machine, written by diabolical maniacs who
know every detail of your implementation, and who deliberately do
stuff (e.g. write code to hit specific cache lines or cause page
faults at fiendishly precisely chosen moments) specifically to cause
weird timing conditions and trigger bugs.  Testing is not enough to
ensure that such attacks won't succeed.

I remember an old programming book (might even have been Knuth vol. 3)
recommending using the Shell sorting algorithm (a very simple exchange
sort that runs in about O(n**1.4) time) instead of an O(n log n)
algorithm because Shell's algorithm (being simpler) was less likely to
end up with implementation errors.  Today I think an experienced
programmer needing to code a sort routine wouldn't hesitate to code a
good algorithm, because our languages and coding/testing methods are
better than they were in the Fortran and assembly language era when
that book was written.  But there are still more complex problems
(like shared memory multiprocessing) that we're advised to stay away
from because they're so fraught with hazards.  We instead use
approaches that are simpler but take performance penalties.  (As a
non-concurrency example, we might implement geometric search through
some kind of 
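The Shell sort mentioned above fits in a few lines of Python (an illustrative sketch using Shell's original halving gap sequence):

```python
def shellsort(seq):
    # Shell's method: insertion sort over progressively smaller gaps,
    # ending with gap 1 (a plain insertion sort on nearly-sorted data).
    a = list(seq)
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            x, j = a[i], i
            while j >= gap and a[j - gap] > x:
                a[j] = a[j - gap]
                j -= gap
            a[j] = x
        gap //= 2
    return a

print(shellsort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```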

Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-14 Thread Hendrik van Rooyen
 Lenard Lindstrom [EMAIL PROTECTED] wrote:


 Pascal has no break, continue or return. Eiffel doesn't even have a 
 goto. In such imperative languages boolean variables are used a lot.

 Thanks did not know this.

 
 from StringIO import StringIO
 lines = StringIO("one\ntwo\nthree\nfour\n")
 line_number = 0
 eof = False
 found = False
 while not (eof or found):
     line = lines.readline()
     eof = line == ""
     found = line == "three\n"
     if not found:
         line_number += 1
 
 
 if found:
     print "Found at", line_number
 else:
     print "Not found"
 # prints Found at 2
 

All right, so it is possible, with some legerdemain,
forcing the main loop to stop when the condition
is met.
However, I find the above as clear as mud, compared to:

line_number = 0

for line in my_file:
  line_number += 1
  if "the Target" in line:
    break

print line_number, line
  
- Hendrik



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-14 Thread Lenard Lindstrom
Hendrik van Rooyen wrote:
  Lenard Lindstrom [EMAIL PROTECTED] wrote:
 
 
 Pascal has no break, continue or return. Eiffel doesn't even have a 
 goto. In such imperative languages boolean variables are used a lot.
 
  Thanks did not know this.
 
 from StringIO import StringIO
 lines = StringIO("one\ntwo\nthree\nfour\n")
 line_number = 0
 eof = False
 found = False
 while not (eof or found):
     line = lines.readline()
     eof = line == ""
     found = line == "three\n"
     if not found:
         line_number += 1


 if found:
     print "Found at", line_number
 else:
     print "Not found"
 # prints Found at 2

 
 All right, so it is possible, with some legerdemain,
 forcing the main loop to stop when the condition
 is met.
 However, I find the above as clear as mud, compared to:
 
 line_number = 0
 
 for line in my_file:
   line_number += 1
   if "the Target" in line:
     break
 
 print line_number, line
   
 - Hendrik
 

I am not defending Pascal or similar languages. Pascal was an 
educational language designed to teach structured programming by 
removing temptation. The same example in Pascal would be less 
convoluted, though, since Pascal has an explicit end-of-file test 
function. (No, this is not a Python feature request.) Personally I don't 
miss the continuous retesting of conditions made necessary in Pascal.

---
Lenard Lindstrom
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Hendrik van Rooyen
Donn Cave [EMAIL PROTECTED] wrote:

In its day, goto was of course very well loved.

Does anybody know for sure if it is in fact possible to
design a language completely free from conditional jumps?

At the lower level, I don't think you can get away with
conditional calls - hence the jumps with dark glasses,
continue and break.

I don't think you can get that functionality in another way.

Think of solving the problem of reading a text file to find
the first occurrence of some given string - can it be done 
without either break or continue?  (given that you have to 
stop when you have found it)

 I can't think of a way, even in assembler, to do this without
using a conditional jump - but maybe my experience has
poisoned my mind, as I see the humble if statement as a plethora
of local jumps...

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Paul Rubin
Ben Finney [EMAIL PROTECTED] writes:
 This is interesting. Do you have any references we can read about this
 assertion -- specifically, that GOTO was not well loved (I assume
 by the programming community at large) even by around 1966?

Dijkstra's famous 1968 GOTO considered harmful letter
(http://www.acm.org/classics/oct95/) quotes a 1966 article by N. Wirth
and C.A.R. Hoare:

The remark about the undesirability of the go to statement is far from
new. I remember having read the explicit recommendation to restrict
the use of the go to statement to alarm exits, but I have not been
able to trace it; presumably, it has been made by C. A. R. Hoare. In
[1, Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
direction in motivating the case construction: "Like the conditional,
it mirrors the dynamic structure of a program more clearly than go to
statements and switches, and it eliminates the need for introducing a
large number of labels in the program."

Reference: 1. Wirth, Niklaus, and Hoare C. A. R.  A contribution
to the development of ALGOL. Comm. ACM 9 (June 1966), 413-432.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Ben Finney
John Nagle [EMAIL PROTECTED] writes:

 Donn Cave wrote:
  In its day, goto was of course very well loved.

 No, it wasn't.  By 1966 or so, GOTO was starting to look like a
 bad idea.  It was a huge hassle for debugging.

This is interesting. Do you have any references we can read about this
assertion -- specifically, that GOTO was not well loved (I assume
by the programming community at large) even by around 1966?

-- 
 \ It is not enough to have a good mind. The main thing is to use |
  `\  it well.  -- Rene Descartes |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Diez B. Roggisch
John Nagle schrieb:
 Chris Mellon wrote:
 You can't prove a program to be correct, in the sense that it's proven
 to do what it's supposed to do and only what it's supposed to do.
 
 Actually, you can prove quite a bit about programs with the right 
 tools.
 For example, proving that a program cannot subscript out of range is
 quite feasible.  (Although not for C, where it's most needed; that language
 is just too ill-defined.) 

You can - when there is input involved?

 You can prove that certain assertions about
 an object always hold when control is outside the object.  You can
 prove that certain entry conditions for a function are satisfied by
 all its callers.
 
 Take a look at what the Spec# group at Microsoft is doing.
 There's also some interesting USAF work on avionics at
 
 http://www.stsc.hill.af.mil/crossTalk/2006/09/0609SwardGerkenCasey.html


For example, SPARK does not support dynamic allocation of memory so 
things such as pointers and heap variables are not supported.


Pardon me, but if that's the restrictions one has to impose to his 
programs to make them verifiable, I'm not convinced that this is a way 
to go for python - or any other language - that needs programs beyond 
the most trivial tasks.

Which is not to say that trivial code couldn't have errors, and if it's 
extremely cruical code, it's important that it hasn't errors. But all I 
can see is that you can create trustable, simple sub-systems in such a 
language. And by building upon them, you can ensure that at least these 
won't fail.

But to stick with the example: if the path-planning of the UAV, which 
involves tracking a number of enemy aircraft not known in advance, steers 
the UAV into the ground, no proven-not-to-fail radius calculation will 
help you.

Diez
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Antoon Pardon
On 2007-07-13, Hendrik van Rooyen [EMAIL PROTECTED] wrote:
 Donn Cave [EMAIL PROTECTED] wrote:

In its day, goto was of course very well loved.

 Does anybody know for sure if it is in fact possible to
 design a language completely free from conditional jumps?

I think you have to be more clear on what you mean. I would consider a
while loop as a conditional jump but I have the impression you don't.
Is that correct?

 At the lower level, I don't think you can get away with
 conditional calls - hence the jumps with dark glasses,
 continue and break.

Would you consider raise as belonging in this collection?

 I don't think you can get that functionality in another way.

 Think of solving the problem of reading a text file to find
 the first occurrence of some given string - can it be done 
 without either break or continue?  (given that you have to 
 stop when you have found it)

It depends on what the language offers. Should PEP 315 be implemented the
code would look something like:

  do:
      line = fl.readline()
  while st not in line:
      pass
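[Editorial note: in Python as it exists today, the same post-test loop is
usually spelled with `while True` and `break`.  A minimal sketch; the
file contents and the search string are invented stand-ins for fl and
st:]

```python
from io import StringIO

fl = StringIO("one\ntwo\nthree\nfour\n")  # stand-in input
st = "three"                              # stand-in search string

while True:
    line = fl.readline()
    if st in line or line == "":  # stop on a hit or at end of file
        break

print(repr(line))
```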

-- 
Antoon Pardon
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Marc 'BlackJack' Rintsch
On Fri, 13 Jul 2007 08:37:00 +0200, Hendrik van Rooyen wrote:

 Donn Cave [EMAIL PROTECTED] wrote:
 
In its day, goto was of course very well loved.
 
 Does anybody know for sure if it is in fact possible to
 design a language completely free from conditional jumps?

GOTO is unconditional.  I guess it depends on what you call a jump.

 At the lower level, I don't think you can get away with
 conditional calls - hence the jumps with dark glasses,
 continue and break.
 
 I don't think you can get that functionality in another way.

Don't think in terms of calls and jumps but conditionally evaluating
functions with no side effects.

 Think of solving the problem of reading a text file to find
 the first occurrence of some given string - can it be done 
 without either break or continue?  (given that you have to 
 stop when you have found it)

Finding an element in a list in Haskell:

findFirst needle haystack = f 0 haystack where
    f _ [] = -1
    f i (x:xs) | x == needle = i
               | otherwise   = f (i+1) xs

No ``break`` or ``continue`` but a recursive function.
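[Editorial note: for readers less used to Haskell, the function above
transcribes almost line for line into Python, with recursion replacing
the loop and no ``break`` needed.  A sketch only: Python has neither
Haskell's pattern matching nor tail-call optimization, so this is for
illustration, not for long lists.]

```python
def find_first(needle, haystack):
    """Return the index of needle in haystack, or -1 if absent."""
    def f(i, rest):
        if not rest:               # f _ [] = -1
            return -1
        if rest[0] == needle:      # x == needle -> i
            return i
        return f(i + 1, rest[1:])  # otherwise recurse on the tail
    return f(0, haystack)

print(find_first("three", ["one", "two", "three", "four"]))
```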

  I can't think of a way, even in assembler, to do this without
 using a conditional jump - but maybe my experience has
 poisoned my mind, as I see the humble if statement as a plethora
 of local jumps...

Technically yes, and with exceptions we have non local jumps all over the
place.

You are seeing the machine language behind it, and of course there are lots
of GOTOs, but not uncontrolled ones.  The language above hides them and
allows only a limited set of jumps, easing proofs that the program is
correct.  If you have a conditional call you can prove both alternative
calls separately and build on those proofs the proof of the function
that combines them.

Trying to guess what some source would look like in machine language
gets harder the higher the abstraction level of the programming
language used.  In Haskell for example you have to keep in mind
that it uses lazy evaluation and that the compiler knows very much about
the program structure and data types and flow to do optimizations.

Ciao,
Marc 'BlackJack' Rintsch
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Donn Cave
In article [EMAIL PROTECTED],
 Paul Rubin http://[EMAIL PROTECTED] wrote:

 Ben Finney [EMAIL PROTECTED] writes:
  This is interesting. Do you have any references we can read about this
  assertion -- specifically, that GOTO was not well loved (I assume
  by the programming community at large) even by around 1966?
 
 Dijkstra's famous 1968 GOTO considered harmful letter
 (http://www.acm.org/classics/oct95/) quotes a 1966 article by N. Wirth
 and C.A.R. Hoare:
 
 The remark about the undesirability of the go to statement is far from
 new. I remember having read the explicit recommendation to restrict
 the use of the go to statement to alarm exits, but I have not been
 able to trace it; presumably, it has been made by C. A. R. Hoare. In
 [1, Sec. 3.2.1.] Wirth and Hoare together make a remark in the same
 direction in motivating the case construction: "Like the conditional,
 it mirrors the dynamic structure of a program more clearly than go to
 statements and switches, and it eliminates the need for introducing a
 large number of labels in the program."
 
 Reference: 1. Wirth, Niklaus, and Hoare C. A. R.  A contribution
 to the development of ALGOL. Comm. ACM 9 (June 1966), 413-432.

So, all I need is comments from a computer scientist or two,
pointing out problems with the procedural/imperative programming
model, and we can establish that it was already hated and on its
way out by the late 90's?  How about OOP?  Already on its way out
by the time Python 1.0 hit the streets?

The thing that allows us to be so smug about the follies of the
past is that we can pretend everyone knew better.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-13 Thread Lenard Lindstrom
Hendrik van Rooyen wrote:
 Donn Cave [EMAIL PROTECTED] wrote:
 
 In its day, goto was of course very well loved.
 
 Does anybody know for sure if it is in fact possible to
 design a language completely free from conditional jumps?
 
 At the lower level, I don't think you can get away with
 conditional calls - hence the jumps with dark glasses,
 continue and break.
 
 I don't think you can get that functionality in another way.
 
 Think of solving the problem of reading a text file to find
 the first occurrence of some given string - can it be done 
 without either break or continue?  (given that you have to 
 stop when you have found it)
 

Pascal has no break, continue or return. Eiffel doesn't even have a 
goto. In such imperative languages boolean variables are used a lot.

from StringIO import StringIO
lines = StringIO("one\ntwo\nthree\nfour\n")
line_number = 0
eof = False
found = False
while not (eof or found):
    line = lines.readline()
    eof = line == ""
    found = line == "three\n"
    if not found:
        line_number += 1


if found:
    print "Found at", line_number
else:
    print "Not found"
# prints "Found at 2"

---
Lenard Lindstrom
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
 If what you wonder about, and the theory mentioned by Clemmer and
 detailed by the AQF, are both true, then this may help explain why some
 programmers are fiercely innovative while other, equally intelligent ones,
 prefer to stick with some plodding, innovation-killing process that only
 works well for repetitive tasks: the latter group may be the ones who
 dread errors, and therefore miss the making mistakes, experiencing
 failures, and learning from them that is how we improve.

The idea of designing languages with more and more support for
ensuring program correctness is to put the established, repetitive
processes into the computer where it belongs, freeing the programmer
to be innovative while still providing high assurance that the
program will be right the first time.  And some of the most innovative
work in software is going into this area today.  

Also, taking a learn-from-mistakes approach is fine and dandy if the
consequences of the mistakes stay contained to those who make them.
It's completely different if the consequences are imposed on other
people who weren't part of the process.  Vast amounts of software
today (and I mean the stuff that clpy denizens write for random web
servers or desktop apps, not just scary stuff like nuclear reactor
code) has the potential to screw people who had nothing to do with the
development process.  It's unreassuring to hear the developers say
"oh cool, we learned from the mistake" when that happens.  So, it's
irresponsible to deliberately choose development processes that
externalize risks onto outsiders that way.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Paul Rubin
Donn Cave [EMAIL PROTECTED] writes:
 I've wondered if programmers might differ a lot in how much they
 dread errors, or how they react to different kinds of errors.
 For example, do you feel a pang of remorse when your program
 dies with a traceback - I mean, what a horrible way to die?

I'm reminded of the time I found out that a program I had worked on
had crashed due to a coding bug.  It was the control software for an
ATM switch.  I had moved on from that job a year or so earlier, but I
found out about the crash because it took out vast swaths of data
communications for the whole US northeast corridor for 2+ days (almost
like the extended power outage of 2003) and it was on the front page
of the New York Times.  The first thing I thought of was that a
certain subroutine I had rewritten was the culprit.  I got on the
phone with a guy I had worked with to ask what the situation was, and
I was very relieved to find out that the error was in a part of the
code that I hadn't been anywhere near.

That program was a mess of spaghetti C code but even more carefully
written code keeps crashing the same way.  It was one of the incidents
that now has me interested in the quest for type-safe languages with
serious optimizing compilers, that will allow us to finally trash
every line of C code currently in existence ;-).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Ben Finney
Paul Rubin http://[EMAIL PROTECTED] writes:

 The idea of designing languages with more and more support for
 ensuring program correctness is to put the established, repetitive
 processes into the computer where it belongs, freeing the programmer
 to be innovative while still providing high assurance that the
 program will be right the first time.

This seems to make the dangerous assumption that the programmer has
the correct program in mind, and needs only to transfer it correctly
to the computer.

I would warrant that anyone who understands exactly how a program
should work before writing it, and makes no design mistakes before
coming up with a program that works, is not designing a program of any
interest.

 Also, taking a learn-from-mistakes approach is fine and dandy if the
 consequences of the mistakes stay contained to those who make them.
 It's completely different if the consequences are imposed on other
 people who weren't part of the process.

Certainly. Which is why the trend continues toward developing programs
such that mistakes of all kinds cause early, obvious failure -- where
they have a much better chance of being caught and fixed *before*
innocent hands get ahold of them.

-- 
 \We have to go forth and crush every world view that doesn't |
  `\ believe in tolerance and free speech.  -- David Brin |
_o__)  |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Hendrik van Rooyen
John Nagle [EMAIL PROTECTED] wrote:

 I've worked in big mainframe shops, where an operating system
 crash caused everything to suddenly freeze, red lights came on all
 over the building, and a klaxon sounded.  I've worked for aerospace
 companies, where one speaks of defects, not bugs, and there
 are people around whose job involves getting in high performance
 aircraft and flying with the software written there.  I've worked
 with car-sized robot vehicles, ones big enough to injure people.
 
 This gives one a stronger feeling about wanting to eliminate
 software defects early.
 

Does this actually cause you to make less goofy mistakes?

- I find that no matter how careful I try to be, I still f***k up.
not so much in the big things, its the little things that get me...

And the errors seem to come in runs - almost like a bad hair day.

- Hendrik

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Donn Cave
In article [EMAIL PROTECTED],
 Steve Holden [EMAIL PROTECTED] wrote:
 Donn Cave wrote:
  Someday we will look at variables like we look at goto.
  
 How very functional. I believe some people naturally think in terms of 
 state transformations and some in terms of functional evaluation.  I am 
 pretty firmly in the former camp myself, so variables are a natural 
 repository for state to me.

Don't worry - there will be a state transformation monad for you!

Nature or nurture?  It would be interesting to see some identical twin
studies on novice programmers.  Since few of us were exposed first
to strictly functional programming, though, you have to wonder.  In
its day, goto was of course very well loved.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Donn Cave
In article [EMAIL PROTECTED],
 Ben Finney [EMAIL PROTECTED] wrote:

 Paul Rubin http://[EMAIL PROTECTED] writes:
 
  The idea of designing languages with more and more support for
  ensuring program correctness is to put the established, repetitive
  processes into the computer where it belongs, freeing the programmer
  to be innovative while still providing high assurance that the
  program will be right the first time.
 
 This seems to make the dangerous assumption that the programmer has
 the correct program in mind, and needs only to transfer it correctly
 to the computer.
 
 I would warrant that anyone who understands exactly how a program
 should work before writing it, and makes no design mistakes before
 coming up with a program that works, is not designing a program of any
 interest.

I don't get it.  Either you think that the above mentioned support
for program correctness locks program development into a frozen
stasis at its original conception, in which case you don't seem to
have read or believed the whole paragraph and haven't been reading
much else in this thread.  Certainly up to you, but you wouldn't be
in a very good position to be drawing weird inferences as above.

Or you see original conception of the program as so inherently
suspect, that random errors introduced during implementation can
reasonably be seen as helpful, which would be an interesting but
unusual point of view.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Paul Rubin
Donn Cave [EMAIL PROTECTED] writes:
 Don't worry - there will be a state transformation monad for you!
 
 Nature or nurture?  it would be interesting to see some identical twin
 studies on novice programmers.  Since few of us were exposed first
 to strictly functional programming, though, you have to wonder.  In
 its day, goto was of course very well loved.

Guy Steele used to describe functional programming -- the evaluation
of lambda-calculus without side effects -- as separation of Church
and state, a highly desirable situation ;-).

(For non-FP nerds, the above is a pun referring to Alonzo Church, who
invented lambda calculus in the 1920's or so).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Chris Mellon
On 7/12/07, Donn Cave [EMAIL PROTECTED] wrote:
 In article [EMAIL PROTECTED],
  Ben Finney [EMAIL PROTECTED] wrote:

  Paul Rubin http://[EMAIL PROTECTED] writes:
 
   The idea of designing languages with more and more support for
   ensuring program correctness is to put the established, repetitive
   processes into the computer where it belongs, freeing the programmer
   to be innovative while still providing high assurance that the
   program will be right the first time.
 
  This seems to make the dangerous assumption that the programmer has
  the correct program in mind, and needs only to transfer it correctly
  to the computer.
 
  I would warrant that anyone who understands exactly how a program
  should work before writing it, and makes no design mistakes before
  coming up with a program that works, is not designing a program of any
  interest.

 I don't get it.  Either you think that the above mentioned support
 for program correctness locks program development into a frozen
 stasis at its original conception, in which case you don't seem to
 have read or believed the whole paragraph and haven't been reading
 much else in this thread.  Certainly up to you, but you wouldn't be
 in a very good position to be drawing weird inferences as above.

 Or you see original conception of the program as so inherently
 suspect, that random errors introduced during implementation can
 reasonably be seen as helpful, which would be an interesting but
 unusual point of view.


You can't prove a program to be correct, in the sense that it's proven
to do what it's supposed to do and only what it's supposed to do. You
can prove type-correctness, and the debate is really over the extent
that a type-correct program is also behavior-correct.

Personally, I remain unconvinced, but am open to evidence - I've only
heard anecdotes which are readily discounted by other anecdotes. I
absolutely do not believe that pure functional style programming,
where we shun variables, is the best model for computing, either now
or in the future.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread Paul Rubin
Chris Mellon [EMAIL PROTECTED] writes:
 You can't prove a program to be correct, in the sense that it's proven
 to do what it's supposed to do and only what it's supposed to do. You
 can prove type-correctness, and the debate is really over the extent
 that a type-correct program is also behavior-correct.

But every mathematical theorem corresponds to a type, so if you can
formalize an argument that your code has a certain behavior, then a
type checker can statically verify it.  There are starting to be
programming languages with embedded proof assistants, like Concoqtion
(google for it).

Of course you can only prove formal properties of programs, which says
nothing about whether the application is doing the right thing for
what the user needs.  However, you're still way ahead of the game if
you have a machine-checked mathematical proof that (say) your
multi-threaded program never deadlocks or clobbers data through race
conditions, instead of merely a bunch of test runs in which you didn't
observe deadlock or clobbered data.  Similarly for things like
floating-point arithmetic, Intel and AMD now use formal verification
on their designs, apparently unlike in the days of the notorious
original Pentium FDIV implementation, which passed billions of test
vectors and then shipped with an error.  Larger applications like the
Javacard bytecode interpreter have been certified for various
properties and pretty soon we may start seeing certified compilers and
OS kernels.  Things have come a long way since the 1970's.

 Personally, I remain unconvinced, but am open to evidence - I've only
 heard anecdotes which are readily discounted by other anecdotes. I
 absolutely do not believe that pure functional style programming,
 where we shun variables, is the best model for computing, either now
 or in the future.

I wonder if you looked at the Tim Sweeney presentation that I linked
to a few times upthread.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread John Nagle
Chris Mellon wrote:
 You can't prove a program to be correct, in the sense that it's proven
 to do what it's supposed to do and only what it's supposed to do.

 Actually, you can prove quite a bit about programs with the right tools.
For example, proving that a program cannot subscript out of range is
quite feasible.  (Although not for C, where it's most needed; that language
is just too ill-defined.)  You can prove that certain assertions about
an object always hold when control is outside the object.  You can
prove that certain entry conditions for a function are satisfied by
all its callers.

 Take a look at what the Spec# group at Microsoft is doing.
There's also some interesting USAF work on avionics at

http://www.stsc.hill.af.mil/crossTalk/2006/09/0609SwardGerkenCasey.html

But it's not a very active field right now.  The basic problem is that
C and C++ aren't well suited to proof of correctness, yet none of the
other hard-compiled languages have any significant market share left.

  This is irrelevant to Python issues, though.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-12 Thread John Nagle
Donn Cave wrote:
 In its day, goto was of course very well loved.

 No, it wasn't.  By 1966 or so, GOTO was starting to look
like a bad idea.  It was a huge hassle for debugging.

 It took another decade to get the iteration constructs
approximately right (FORTRAN was too restrictive, ALGOL
was too general), and to deal with problems like the
dangling ELSE, but it was clear quite early that GOTO
was on the way out.

 There was a detour through BASIC and its obsession with
line numbers (GOSUB 14000 - bad subroutine call construct)
that took a while to overcome.  Much of that was from trying
to cram interactive systems into computers with very modest
capacity per user.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-11 Thread Donn Cave
In article [EMAIL PROTECTED],
 Chris Mellon [EMAIL PROTECTED] wrote:

 I don't think it's the syntax that keeps people from checking for
 errors. It's more a difference of error handling philosophy - in
 Python, if you can't do something sensible with an error you  just
 pretend it can't happen and rely on your caller to do something
 sensible. In a lot of cases, this means that nobody does anything and
 errors propagate all the way up and thats fine.
 
 What you're describing with the type inference solution feels more
 like Javas checked exceptions to me, where the compiler ends up
 signaling an error if you don't pretend like you considered the error
 condition. In Java, at least, this tends to create actively harmful
 code where you stick in empty or otherwise useless exception handlers
 to shut up the compiler, and when "can't happen" errors actually do
 occur they are lost or worse. Maybe this doesn't happen in Haskell, but
 I'd be interested in how it doesn't.

As Paul mentioned parenthetically, Haskell's type system doesn't
support all this by itself.  You can write a function that is defined
for only some cases of a value, so indeed you can fail to check for
an error status, or match the first element of an empty list, and
in general it is quite possible to get a run time error in this way.
There is naturally some dissatisfaction with this state of affairs.

The second point, about syntax and failure handling is another matter.
In the example -

  g1 = re.match(pattern, string)
  a = g1.group(0)
  g2 = re.match(template % a, other_string)
  result = g2.group(1)


... if all these re functions have a monad type that supports it,
this would be a sequence of monad bindings with failure options.
The algebra of these things is such that a failure short circuits
the larger expression just like a false result short circuits
evaluation of an "and" sequence.  I wouldn't say this prevents
data errors in Haskell, though; it only simplifies handling them in a
sequential computation where such a monad type is convenient.
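[Editorial note: the short-circuiting described above can be imitated
in Python with a small helper that propagates failure (None) the way
Maybe's >>= propagates Nothing.  A purely illustrative sketch: the
helper, the pattern, and the sample strings are all invented.]

```python
import re

def bind(value, func):
    """Apply func unless value is None; None short-circuits the chain."""
    return None if value is None else func(value)

def chained_match(string, other_string, pattern=r"(ab)", template=r"(%s)c"):
    # Each step runs only if every previous step succeeded.
    a = bind(re.match(pattern, string), lambda m: m.group(0))
    return bind(a, lambda found:
           bind(re.match(template % found, other_string),
                lambda m: m.group(1)))

print(chained_match("abc", "abc"))  # whole chain succeeds
print(chained_match("xyz", "abc"))  # first match fails; None propagates
```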

I missed the type error - data error point that this evidently
was all supposed to be some response to, and I don't know what
that was supposed to mean, but Haskell's type system addresses
just what we all would expect, the structural consistency of the
program in terms of data types.  It doesn't generally prevent data
errors or correct your misunderstanding of an algorithm or in
general avoid every kind of error.  What it does, though, it does
rather well.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-11 Thread Donn Cave
In article [EMAIL PROTECTED],
 Steve Holden [EMAIL PROTECTED] wrote:

 Paul Rubin wrote:
  Steven D'Aprano [EMAIL PROTECTED] writes:
  As far as I can see, the only difference is that the list comp variable
  isn't explicitly created with a statement of the form name = value. Why
  is that a problem?
  
  I don't know that listcomp vars are worse problem than other vars;
  however there is an easy workaround for the listcomp vars so I use it.
  If there was a way to restrict the scope of other local vars (I gave
  examples from other languages of how this can be done), I'd use that too.

Someday we will look at variables like we look at goto.

 Maybe we just have different styles, and I naturally tend to write in 
 smaller scopes than you do.

I've wondered if programmers might differ a lot in how much they
dread errors, or how they react to different kinds of errors.
For example, do you feel a pang of remorse when your program
dies with a traceback - I mean, what a horrible way to die?
Do you resent the compiler's reprimands about your code errors,
or nagging about warnings?  Maybe the language implementation's
job has as much to do with working around our feelings as anything
else.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-11 Thread John Nagle
Donn Cave wrote:
 I've wondered if programmers might differ a lot in how much they
 dread errors, or how they react to different kinds of errors.
 For example, do you feel a pang of remorse when your program
 dies with a traceback - I mean, what a horrible way to die?
 Do you resent the compiler's reprimands about your code errors,
 or nagging about warnings?  Maybe the language implementation's
 job has as much to do with working around our feelings as anything
 else.

That's worth thinking about.

I've worked in big mainframe shops, where an operating system
crash caused everything to suddenly freeze, red lights came on all
over the building, and a klaxon sounded.  I've worked for aerospace
companies, where one speaks of defects, not bugs, and there
are people around whose job involves getting in high performance
aircraft and flying with the software written there.  I've worked
with car-sized robot vehicles, ones big enough to injure people.

This gives one a stronger feeling about wanting to eliminate
software defects early.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-11 Thread Steve Holden
Donn Cave wrote:
 In article [EMAIL PROTECTED],
  Steve Holden [EMAIL PROTECTED] wrote:
 
 Paul Rubin wrote:
 Steven D'Aprano [EMAIL PROTECTED] writes:
 As far as I can see, the only difference is that the list comp variable
 isn't explicitly created with a statement of the form name = value. Why
 is that a problem?
 I don't know that listcomp vars are worse problem than other vars;
 however there is an easy workaround for the listcomp vars so I use it.
 If there was a way to restrict the scope of other local vars (I gave
 examples from other languages of how this can be done), I'd use that too.
 
 Someday we will look at variables like we look at goto.
 
How very functional. I believe some people naturally think in terms of 
state transformations and some in terms of functional evaluation.  I am 
pretty firmly in the former camp myself, so variables are a natural 
repository for state to me.

 Maybe we just have different styles, and I naturally tend to write in 
 smaller scopes than you do.
 
 I've wondered if programmers might differ a lot in how much they
 dread errors, or how they react to different kinds of errors.
 For example, do you feel a pang of remorse when your program
 dies with a traceback - I mean, what a horrible way to die?
 Do you resent the compiler's reprimands about your code errors,
 or nagging about warnings?  Maybe the language implementation's
 job has as much to do with working around our feelings as anything
 else.
 
That's an interesting point of view. I certainly don't take the somewhat 
anthropomorphic approach you describe above. My first response to any 
error message is to ask myself what did I do wrong *now*? - the very 
fact that we talk about error messages implies a point of view that's 
discouraging to new users: you did something wrong, fix it and try again.

I do think that most language implementations could spend more time, and 
more sympathetic thought, on creating messages that were less pejorative 
and more indicative of the required corrective actions. But after forty 
years programming I know myself well enough to understand that I am the 
most likely cause of incorrect results.

It's always amusing to see a newbie appear on this list and suggest that 
there's a bug in some long-standing feature of the language. It's always 
a possibility, but the probability is pretty low.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden
--- Asciimercial --
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
--- Thank You for Reading -

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-11 Thread Alex Martelli
Donn Cave [EMAIL PROTECTED] wrote:

 I've wondered if programmers might differ a lot in how much they
 dread errors, or how they react to different kinds of errors.

That's quite possible.  I'm reminded of a by-now commonplace theory,
well summarized at
http://www.clemmer-group.com/articles/article_128.aspx:


The only place you should try doing it right the first time is with
established, repetitive processes. Beyond that, this quality improvement
cliché can kill innovation. A major study from the American Quality
Foundation concluded, We don't do things right the first time. Trial
and error -- making mistakes, experiencing failures, and learning from
them -- is how we improve. We need mistakes in order to learn; they are
an integral part of how we get better.


If what you wonder about, and the theory mentioned by Clemmer and
detailed by the AQF, are both true, then this may help explain why some
programmers are fiercely innovative while other, equally intelligent ones,
prefer to stick with some plodding, innovation-killing process that only
works well for repetitive tasks: the latter group may be the ones who
dread errors, and therefore miss the making mistakes, experiencing
failures, and learning from them that is how we improve.


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-10 Thread Diez B. Roggisch
Paul Rubin schrieb:
 Diez B. Roggisch [EMAIL PROTECTED] writes:
 Which implies that even in ADA, runtime type errors are in fact
 expected - else there would be no handler for such a case.
 Well, yes, runtime errors occur - in statically typed languages as
 well. That's essentially the halting-problem.
 
 Well, no, it's quite possible for a language to reject every program
 that has any possibility of throwing a runtime type error.  The
 halting problem implies that no algorithm can tell EXACTLY which
 programs throw errors and which do not.  So the language cannot accept
 all programs that are free of runtime type errors and reject all
 programs that definitely have runtime type errors, with no uncertain
 cases.  But it's fine for a language to reject uncertain cases and
 accept only those which it can rigorously demonstrate have no type
 errors.

Sure. But which class of programs is decidable? There's lots of 
research going on with model checking and the like. But AFAIK, the 
consensus is that the very moment you allow recursive types, the 
type-checking is either incomplete or possibly non-deterministic, which 
then makes the compiler hang.

Diez

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-10 Thread Paul Rubin
Diez B. Roggisch [EMAIL PROTECTED] writes:
 Sure. But which class of programs is decidable? There's lots of
 research going on with model checking and the like. But AFAIK, the
 consensus is that the very moment you allow recursive types, the
 type-checking is either incomplete or possibly
 non-deterministic, which then makes the compiler hang.

Type checking is already undecidable in many mainstream languages
including C++.  Nonetheless people manage to use those languages.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-10 Thread Bruno Desthuilliers
Paul Rubin a écrit :
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 
If the assertion is wrong, the compiler signals an error.  In
that sense it's like a unit test; it makes sure the function does what
the user expects.

It still boils down to the same problem: possibly valid types are
rejected based on a declaration.
 
 
 I'm not sure what you mean.  If the programmer intended for the
 function to compute an integer, then it's invalid for the function to
 return (say) a string, whether there is a static typecheck or not.

I was mostly thinking of inputs, but it's the same thing for outputs - 
in most cases the concrete type (= class) of the object returned by a 
function is no more significant than the concrete type(s) of its 
input(s). What is important is that these objects supports the 
(implicit) protocols expected by either the function itself (for inputs) 
or the caller (for output(s)). And FWIW, in Python, it's not unusual for 
a function or method to be able to return objects of any type - cf the 
dict.get(key, default=None) method.
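
A tiny sketch of that last point about dict.get: the type of the
returned object depends entirely on what is stored and on the default
the caller passes (variable names here are my own illustration):

```python
# dict.get can return an object of any type: the stored value,
# or whatever default the caller supplies, or None.
d = {'n': 42, 's': 'hello'}

n = d.get('n')                   # -> 42 (an int)
s = d.get('s')                   # -> 'hello' (a str)
missing = d.get('missing')       # -> None (the implicit default)
fallback = d.get('missing', [])  # -> [] (the caller's chosen default)
```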

(snip)

However, that may be a self-fulfilling prophecy since maybe I'm
cultivating a coding style that doesn't use the dynamism,

Perhaps this coding style is not making the best use of Python's
features? To me, it sounds like doing procedural programming in OCaml -
it's of course possible, but probably not the best way to use the language.
  
 This is in principle possible, though I feel like I'm coding more
 effectively since gravitating to this style. 

 Anton van Straaten (a
 dynamic typing proponent) wrote of static checks
 (http://lambda-the-ultimate.org/node/2216#comment-31954):
 
 In most cases, the correctness proof itself is not the most important
 advantage of static checking of a program. Rather, it's the type
 discipline that must be followed in order to achieve that correctness
 proof. The final proof is like the certificate you get when you
 graduate from college: it's not that important by itself, but to have
 obtained it you must have done a lot of work, at least some of which
 is important and necessary. You can do the work without getting the
 certificate, but many people don't, and in the programming case they
 may pay a price for that in terms of programs with poor type
 discipline, which can have consequences for reasoning and the
"ilities".
 
 I coded a fair amount of Lisp before starting with Python though, so
 even before, I tended to code Python in a Lisp-like style, ignoring
 Python's more dynamic features such as metaclasses.

While my knowledge of Lisp is somewhat limited - I played with Common 
Lisp and Scheme (and Emacs Lisp) but never seriously used any Lisp 
dialect - it also had some influence on my coding style. But Python's 
support for high-level programming - including the restricted support 
for a functional approach - mostly comes from its object model, and I 
prefer to take advantage of this object model instead of trying to write 
Lisp (or Haskell or whatever other FPL) in Python. A good example could 
be partial application - which in Python is better implemented as a 
class than using closures.
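
A minimal sketch of that point about partial application (the names
are my own, not from any framework): the class-based version keeps its
state visible as ordinary attributes, where the closure hides it:

```python
# Class-based partial application: the bound function and the
# pre-supplied arguments remain introspectable attributes.
class Partial(object):
    def __init__(self, func, *args):
        self.func = func
        self.args = args

    def __call__(self, *more):
        return self.func(*(self.args + more))

# Closure-based equivalent: same behaviour, but the captured state
# is hidden away inside the closure's cells.
def partial(func, *args):
    def call(*more):
        return func(*(args + more))
    return call

def add(a, b):
    return a + b

add3 = Partial(add, 3)  # add3(4) -> 7, and add3.args is (3,)
```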

 I'd also compare the situation with Forth, where functions can consume
 and insert arbitrary numbers of values to the stack, but programmers
 in practice maintain careful stack discipline (making sure to pop and
 push a constant number of values, and documenting the meaning of each
 one) in order to avoid going crazy.  Just because the language offers
 you rope, doesn't mean you can't decline to hang yourself with it.

Indeed. But this doesn't mean you should not use the rope at all - 
especially when it happens to be the RightThing(tm) to do. Compare an old 
framework like Zope 2.x with new ones (internally) making heavy use of 
metaclasses, descriptors etc., and you may find out that these features 
not only greatly help wrt/ both robustness (less boilerplate) and 
readability (less boilerplate), but also tend to encourage a more 
declarative style.

 Finally, this article about gradual typing (you can write your code
 dynamically and then later add static annotations resulting in type
 safety) might be of interest:

I have too few problems with type errors to feel a need for more type 
safety than what Python actually provides, and I don't see the point of 
adding arbitrary restrictions. What's appropriate for language X is not 
necessarily appropriate for language Y, and I know where to find smart 
statically typed languages if and when I do need them.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-09 Thread John Nagle
Paul Rubin wrote:
 Diez B. Roggisch [EMAIL PROTECTED] writes:
 
Which implies that even in ADA, runtime type errors are in fact
expected - else there would be no handler for such a case.

Well, yes, runtime errors occur - in statically typed languages as
well. That's essentially the halting-problem.
 
 
 Well, no, it's quite possible for a language to reject every program
 that has any possibility of throwing a runtime type error.  The
 halting problem implies that no algorithm can tell EXACTLY which
 programs throw errors and which do not.  So the language cannot accept
 all programs that are free of runtime type errors and reject all
 programs that definitely have runtime type errors, with no uncertain
 cases.  But it's fine for a language to reject uncertain cases and
 accept only those which it can rigorously demonstrate have no type
 errors.

That's correct.  Speaking as someone who once implemented an automatic
proof of correctness system (here's the manual:
http://www.animats.com/papers/verifier/verifiermanual.pdf), the halting
problem isn't an issue.  If you write a program for which halting is
undecidable, the program is basically broken.

The way you prove loop termination in the real world is to define
some integer value (you can also use tuples like (1,2,1,3), like
paragraph numbers) that gets smaller each time you go through the
loop, but never goes negative.
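
A small sketch of that termination argument in Python terms (my own
illustration, not taken from the verifier mentioned above): pick an
integer measure that strictly decreases on every pass and never goes
negative, and assert exactly that inside the loop:

```python
def gcd(a, b):
    # Termination measure: b. For b > 0 we have a % b < b, so the
    # measure strictly decreases each iteration and stays >= 0,
    # hence the loop must terminate.
    while b:
        measure_before = b
        a, b = b, a % b
        assert 0 <= b < measure_before  # the termination argument, checked
    return a
```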

Most type issues can be resolved at compile time if you can
see the whole program.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-09 Thread John Nagle
Paul Rubin wrote:
 Diez B. Roggisch [EMAIL PROTECTED] writes:
 
Which implies that even in ADA, runtime type errors are in fact
expected - else there would be no handler for such a case.

Well, yes, runtime errors occur - in statically typed languages as
well. That's essentially the halting-problem.
 
 
 Well, no, it's quite possible for a language to reject every program
 that has any possibility of throwing a runtime type error.  The
 halting problem implies that no algorithm can tell EXACTLY which
 programs throw errors and which do not.  So the language cannot accept
 all programs that are free of runtime type errors and reject all
 programs that definitely have runtime type errors, with no uncertain
 cases.  But it's fine for a language to reject uncertain cases and
 accept only those which it can rigorously demonstrate have no type
 errors.

Also, if you're really into this, read this DEC SRL report on
Extended Static Checking for Java, a system from the 1990s.

ftp://gatekeeper.research.compaq.com/pub/DEC/SRC/research-reports/SRC-159.pdf

They were doing great work until Compaq bought DEC and killed off research.

The follow up to that is Microsoft's Spec# effort, which is program
verification for C#.  See

http://research.microsoft.com/specsharp/

They have some of the same people, and some of the same code, as the
DEC effort.

Back when I was working on this, we only had a 1 MIPS VAX, and theorem
proving was slow.  That's much less of a problem now.

John Nagle

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-09 Thread Bruno Desthuilliers
Paul Rubin a écrit :
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 Some users in fact recommend writing an explicit type signature for
 every Haskell function, which functions sort of like a unit test.
 Stop here. explicit type signature == declarative static typing !=
 unit test.
 
 The user-written signature is not a declaration that informs the
 compiler of the type.  The compiler still figures out the type by
 inference, just as if the signature wasn't there.  The user-written
 signature is more like an assertion about what type the compiler will
 infer.  If the assertion is wrong, the compiler signals an error.  In
 that sense it's like a unit test; it makes sure the function does what
 the user expects.

It still boils down to the same problem: possibly valid types are 
rejected based on a declaration.

 I have few surprises with typing in Python. Very few. Compared to
 the flexibility and simplicity gained from a dynamism that couldn't
 work with static typing - even using type inference - I don't see it
 as such a wonderful gain. At least in my day-to-day work.
 
 I'm going to keep an eye out for it in my day-to-day coding but I'm
 not so convinced that I'm gaining much from Python's dynamism.

I sure do.

 However, that may be a self-fulfilling prophecy since maybe I'm
 cultivating a coding style that doesn't use the dynamism,

Perhaps this coding style is not making the best use of Python's 
features? To me, it sounds like doing procedural programming in OCaml - 
it's of course possible, but probably not the best way to use the language.

 and I could
 be doing things differently.  I do find since switching to Python 2.5
 and using iterators more extensively, I use the class/object features
 a lot less.  Data that I would have put into instance attributes on
 objects that get passed from one function to another, instead become
 local variables in functions that get run over sequences, etc.

Dynamism is more than simply adding cargo attributes to objects.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-09 Thread Chris Mellon
On 07 Jul 2007 23:27:08 -0700, Paul Rubin
http://phr.cx@nospam.invalid wrote:
 Hamilton, William  [EMAIL PROTECTED] writes:
   Why on earth would anyone prefer taking a failure in the field over
   having a static type check make that particular failure impossible?
 
  Because static typechecking won't make that particular failure
  impossible,  but instead just change it from a type error into a data
  error that may or may not be harder to identify.  If your program gets a
  piece of data that breaks it, you'll get a failure in the field.  Static
  typechecking won't prevent that.

 I'm not doing a good job explaining that regexp example.  Static
 checking a la Haskell and ML avoids these problems by:

   1) noticing through case analysis that you actually handle the
  error return (except it looks like you have to use a separate
  tool for this in Haskell, sigh); and

   2) (in Haskell) using monadic types to propagate match results from
  one operation to another, automatically taking care of turning
  a match failure in a chain of operations into a failure of the
  whole chain.

 In Python it's all too common to say

  g1 = re.match(pattern, string)
  a = g1.group(0)
  g2 = re.match(template % a, other_string)
  result = g2.group(1)

 or stuff like that, without bothering to check for match failures,
 just because of Python's inconvenient syntax.


I don't think it's the syntax that keeps people from checking for
errors. It's more a difference of error handling philosophy - in
Python, if you can't do something sensible with an error you just
pretend it can't happen and rely on your caller to do something
sensible. In a lot of cases, this means that nobody does anything and
errors propagate all the way up and that's fine.

What you're describing with the type inference solution feels more
like Java's checked exceptions to me, where the compiler ends up
signaling an error if you don't pretend like you considered the error
condition. In Java, at least, this tends to create actively harmful
code where you stick in empty or otherwise useless exception handlers
to shut up the compiler, and when "can't happen" errors actually do
occur they are lost or worse. Maybe this doesn't happen in Haskell, but
I'd be interested in how it doesn't.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-09 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
  If the assertion is wrong, the compiler signals an error.  In
  that sense it's like a unit test; it makes sure the function does what
  the user expects.
 
 It still boils down to the same problem: possibly valid types are
 rejected based on a declaration.

I'm not sure what you mean.  If the programmer intended for the
function to compute an integer, then it's invalid for the function to
return (say) a string, whether there is a static typecheck or not.
The type signature just makes sure the function works as intended, at
least as regards its return type.

If you mean that the programmer might have valid reasons to return a
mix of types dynamically and static languages get in the way of that,
well sure that's true, though I suspect that polymorphism and
algebraic types can handle most of these cases.
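
A rough Python rendering of that algebraic-types idea (the Ok/Err
names are hypothetical, my own illustration): rather than returning a
bare mix of types, wrap each alternative in a named case the caller
has to distinguish explicitly:

```python
class Ok(object):
    def __init__(self, value):
        self.value = value

class Err(object):
    def __init__(self, reason):
        self.reason = reason

def parse_int(s):
    # Returns one of two explicit cases instead of mixing int and None.
    try:
        return Ok(int(s))
    except ValueError:
        return Err('not an integer: %r' % s)
```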

  However, that may be a self-fulfilling prophecy since maybe I'm
  cultivating a coding style that doesn't use the dynamism,
 
 Perhaps this coding style is not making the best use of Python's
 features? To me, it sounds like doing procedural programming in OCaml -
 it's of course possible, but probably not the best way to use the language.

This is in principle possible, though I feel like I'm coding more
effectively since gravitating to this style.  Anton van Straaten (a
dynamic typing proponent) wrote of static checks
(http://lambda-the-ultimate.org/node/2216#comment-31954):

In most cases, the correctness proof itself is not the most important
advantage of static checking of a program. Rather, it's the type
discipline that must be followed in order to achieve that correctness
proof. The final proof is like the certificate you get when you
graduate from college: it's not that important by itself, but to have
obtained it you must have done a lot of work, at least some of which
is important and necessary. You can do the work without getting the
certificate, but many people don't, and in the programming case they
may pay a price for that in terms of programs with poor type
discipline, which can have consequences for reasoning and the
"ilities".

I coded a fair amount of Lisp before starting with Python though, so
even before, I tended to code Python in a Lisp-like style, ignoring
Python's more dynamic features such as metaclasses.

I'd also compare the situation with Forth, where functions can consume
and insert arbitrary numbers of values to the stack, but programmers
in practice maintain careful stack discipline (making sure to pop and
push a constant number of values, and documenting the meaning of each
one) in order to avoid going crazy.  Just because the language offers
you rope, doesn't mean you can't decline to hang yourself with it.

Finally, this article about gradual typing (you can write your code
dynamically and then later add static annotations resulting in type
safety) might be of interest:

  http://lambda-the-ultimate.org/node/1707
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Paul Rubin
Steve Holden [EMAIL PROTECTED] writes:
  Python even leaks the index variable of list comprehensions (I've
  mostly stopped using them because of this), though that's a
  recognized wart and is due to be fixed.
 
 Wow, you really take non-pollution of the namespace seriously. I agree
 it's a wart, but it's one I have happily worked around since day one.

Well, the obvious workaround is just say list(some genexp)
instead of [the same genexp] so that's what I do.
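
For the record, a sketch of that workaround (under the Python 2 of the
time, the bracketed form leaked its loop variable into the enclosing
scope; the list(genexp) form never did, and later Python 3 gave list
comprehensions their own scope too):

```python
# list(genexp) instead of [listcomp]: the loop variable lives only
# inside the generator expression's own scope.
squares = list(x * x for x in range(5))

try:
    x  # with the genexp form, x never leaks out
    leaked = True
except NameError:
    leaked = False
```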
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Paul Rubin
Hamilton, William  [EMAIL PROTECTED] writes:
  Why on earth would anyone prefer taking a failure in the field over
  having a static type check make that particular failure impossible?
 
 Because static typechecking won't make that particular failure
 impossible,  but instead just change it from a type error into a data
 error that may or may not be harder to identify.  If your program gets a
 piece of data that breaks it, you'll get a failure in the field.  Static
 typechecking won't prevent that.  

I'm not doing a good job explaining that regexp example.  Static
checking a la Haskell and ML avoids these problems by:

  1) noticing through case analysis that you actually handle the
 error return (except it looks like you have to use a separate
 tool for this in Haskell, sigh); and

  2) (in Haskell) using monadic types to propagate match results from
 one operation to another, automatically taking care of turning
 a match failure in a chain of operations into a failure of the
 whole chain.

In Python it's all too common to say

 g1 = re.match(pattern, string)
 a = g1.group(0)
 g2 = re.match(template % a, other_string)
 result = g2.group(1)

or stuff like that, without bothering to check for match failures,
just because of Python's inconvenient syntax.

This article explaining Haskell's Maybe typeclass (what you'd use
instead of returning None in a Python function) may be of interest:

  http://blogs.nubgames.com/code/?cat=2
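
A hedged Python sketch of that Maybe-style discipline (the helper
names are my own, not from any library): each step checks for a failed
match and propagates None explicitly, instead of blowing up later with
an AttributeError on a None match object:

```python
import re

def match_group(pattern, string, idx):
    # Return the requested group, or None if the match failed -
    # a poor man's Maybe.
    m = re.match(pattern, string)
    return m.group(idx) if m else None

def chained(pattern, template, string, other_string):
    a = match_group(pattern, string, 0)
    if a is None:  # explicit propagation of the failure case
        return None
    return match_group(template % a, other_string, 1)
```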
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Steve Holden
Paul Rubin wrote:
 Steve Holden [EMAIL PROTECTED] writes:
 Python even leaks the index variable of list comprehensions (I've
 mostly stopped using them because of this), though that's a
 recognized wart and is due to be fixed.

 Wow, you really take non-pollution of the namespace seriously. I agree
 it's a wart, but it's one I have happily worked around since day one.
 
 Well, the obvious workaround is just say list(some genexp)
 instead of [the same genexp] so that's what I do.

I think I'll start calling you Captain Sensible. But you are right, it 
does avoid that extra little potential for errors.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread sturlamolden
On Jun 20, 8:53 pm, Stephen R Laniel [EMAIL PROTECTED] wrote:

 Reading [1], I wonder: why isn't the compiler making better
 use of (purely optional) type labeling? Why not make a compiler
 directive so that

 a) it will check the types of all my arguments and return
values,

If that is what you want, you can get typechecking using a simple
function decorator:

def typechecked(func):
    types = func.func_defaults
    def decorator(*args):
        if len(args) != len(types):
            raise TypeError('Wrong number of arguments, expected %d got %d'
                            % (len(types), len(args)))
        for a, t in zip(args, types):
            if type(t) == type:
                if type(a) is not t:
                    raise TypeError('Expected %s got %s' % (t, type(a)))
            else:
                if type(a) is not type(t):
                    raise TypeError('Expected %s got %s'
                                    % (type(t), type(a)))
        return func(*args)
    return decorator


Now code like this:

@typechecked
def foobar(a=int, b=float, c=tuple):
    return None

or

@typechecked
def foobar(a=int, b=float, c=1.0):
    return None

Your calls to foobar will be typechecked (at runtime, not statically),
and a TypeError exception thrown if your calling types were
incorrect.
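
For anyone trying this on Python 3 (where func_defaults became
__defaults__ and the old raise syntax is gone), a hedged adaptation of
the same idea - a sketch, not a drop-in replacement:

```python
def typechecked3(func):
    types = func.__defaults__  # Python 3 spelling of func_defaults
    def decorator(*args):
        if len(args) != len(types):
            raise TypeError('Wrong number of arguments, expected %d got %d'
                            % (len(types), len(args)))
        for a, t in zip(args, types):
            # A default that is itself a type means "this exact type";
            # any other default means "the type of this value".
            expected = t if isinstance(t, type) else type(t)
            if type(a) is not expected:
                raise TypeError('Expected %s got %s' % (expected, type(a)))
        return func(*args)
    return decorator

@typechecked3
def foobar(a=int, b=float, c=tuple):
    return None
```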


Regards,

Sturla Molden

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Steven D'Aprano
On Sun, 08 Jul 2007 08:49:26 -0400, Steve Holden wrote:

 Paul Rubin wrote:
 Steve Holden [EMAIL PROTECTED] writes:
 Python even leaks the index variable of list comprehensions (I've
 mostly stopped using them because of this), though that's a
 recognized wart and is due to be fixed.

 Wow, you really take non-pollution of the namespace seriously. I agree
 it's a wart, but it's one I have happily worked around since day one.
 
 Well, the obvious workaround is just say list(some genexp)
 instead of [the same genexp] so that's what I do.
 
 I think I'll start calling you Captain Sensible. But you are right, it 
 does avoid that extra little potential for errors.

I'd be thinking it should be Captain Paranoid. Why is list comp leakage
a worse problem than, well, having local variables in the first place?

def foo(x):
y = something_or_other(x+1)
z = do_something_else(y) + another_thing(y)
# Oh noes!!! y still exists!!1!11!! What to do
return (foo, y)  # OMG I meant to say z!!!


If your function has got so many lines of code, and so many variables that
this becomes a source of errors, your function should be refactored into
smaller functions.

As far as I can see, the only difference is that the list comp variable
isn't explicitly created with a statement of the form name = value. Why
is that a problem?


-- 
Steven.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Paul Rubin
Steven D'Aprano [EMAIL PROTECTED] writes:
 As far as I can see, the only difference is that the list comp variable
 isn't explicitly created with a statement of the form name = value. Why
 is that a problem?

I don't know that listcomp vars are worse problem than other vars;
however there is an easy workaround for the listcomp vars so I use it.
If there was a way to restrict the scope of other local vars (I gave
examples from other languages of how this can be done), I'd use that too.
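
A small sketch of what such a restricted scope can look like when
emulated in Python (my own illustration, since the language has no
block scope): push the short-lived names into a throwaway helper so
they cannot escape into the enclosing function:

```python
def process(data):
    def normalized(item):
        # 'stripped' and 'lowered' exist only inside this helper.
        stripped = item.strip()
        lowered = stripped.lower()
        return lowered
    return [normalized(item) for item in data]
```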
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Steve Holden
Paul Rubin wrote:
 Steven D'Aprano [EMAIL PROTECTED] writes:
 As far as I can see, the only difference is that the list comp variable
 isn't explicitly created with a statement of the form name = value. Why
 is that a problem?
 
 I don't know that listcomp vars are worse problem than other vars;
 however there is an easy workaround for the listcomp vars so I use it.
 If there was a way to restrict the scope of other local vars (I gave
 examples from other languages of how this can be done), I'd use that too.

Maybe we just have different styles, and I naturally tend to write in 
smaller scopes than you do.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Paul Rubin
Steve Holden [EMAIL PROTECTED] writes:
 Maybe we just have different styles, and I naturally tend to write in
 smaller scopes than you do.

It's easy to make errors even in very small scopes.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-08 Thread Paul Rubin
Diez B. Roggisch [EMAIL PROTECTED] writes:
  Which implies that even in ADA, runtime type errors are in fact
  expected - else there would be no handler for such a case.
 
 Well, yes, runtime errors occur - in statically typed languages as
 well. That's essentially the halting-problem.

Well, no, it's quite possible for a language to reject every program
that has any possibility of throwing a runtime type error.  The
halting problem implies that no algorithm can tell EXACTLY which
programs throw errors and which do not.  So the language cannot accept
all programs that are free of runtime type errors and reject all
programs that definitely have runtime type errors, with no uncertain
cases.  But it's fine for a language to reject uncertain cases and
accept only those which it can rigorously demonstrate have no type
errors.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-07 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
  Some users in fact recommend writing an explicit type signature for
  every Haskell function, which functions sort of like a unit test.
 
 Stop here. explicit type signature == declarative static typing !=
 unit test.

The user-written signature is not a declaration that informs the
compiler of the type.  The compiler still figures out the type by
inference, just as if the signature wasn't there.  The user-written
signature is more like an assertion about what type the compiler will
infer.  If the assertion is wrong, the compiler signals an error.  In
that sense it's like a unit test; it makes sure the function does what
the user expects.

 I have few surprises with typing in Python. Very few. Compared to
 the flexibility and simplicity gained from a dynamism that couldn't
 work with static typing - even using type inference -, I don't see it
 as such a wonderful gain. At least in my day-to-day work.

I'm going to keep an eye out for it in my day-to-day coding but I'm
not so convinced that I'm gaining much from Python's dynamism.
However, that may be a self-fulfilling prophecy since maybe I'm
cultivating a coding style that doesn't use the dynamism, and I could
be doing things differently.  I do find since switching to Python 2.5
and using iterators more extensively, I use the class/object features
a lot less.  Data that I would have put into instance attributes on
objects that get passed from one function to another, instead become
local variables in functions that get run over sequences, etc.
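A minimal sketch of that shift in style (hypothetical example): state that might once have lived in instance attributes stays in local variables inside a generator run over a sequence.

```python
def running_totals(numbers):
    # The accumulator is a plain local variable, not an instance
    # attribute threaded through several method calls.
    total = 0
    for n in numbers:
        total += n
        yield total

print(list(running_totals([1, 2, 3])))  # [1, 3, 6]
```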
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-05 Thread Nis Jørgensen
Bruno Desthuilliers wrote:
 Paul Rubin wrote:
 Bruno Desthuilliers [EMAIL PROTECTED] writes:

 Haskell - as other languages using type-inference like OCaml - are in
 a different category. Yes, I know, don't say it, they are statically
 typed - but it's mostly structural typing, not declarative
 typing. Which makes them much more usable IMHO. 


 Some users in fact recommend writing an explicit type signature for
 every Haskell function, which functions sort of like a unit test.
 
 Stop here. explicit type signature == declarative static typing != unit
 test.

Well, it quacks like a duck ...

Nis
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-05 Thread Steve Holden
Paul Rubin wrote:
 John Nagle [EMAIL PROTECTED] writes:
  This has been tried.  Original K&R C had non-enforced static typing.
 All struct pointers were equivalent.  It wasn't pretty.

  It takes strict programmer discipline to make non-enforced static
 typing work.  I've seen it work in an aerospace company, but the Python
 crowd probably doesn't want that level of engineering discipline.
 
 I think even enforced static types wouldn't cure what I see as the
 looseness in Python.  There is not enough composability of small
 snippets of code.  For example, the for statement clobbers its index
 variable and then leaks it to the outside of the loop.  That may be
 more of a culprit than dynamic types.  Perl and C++ both fix this with
 syntax like
 
 for (my $i in ...) ...   (perl)  or
 for (int i = 0; i < n; i++) ...  (C++, Java)
 
 making a temporary scope for the index variable.  Python even leaks
 the index variable of list comprehensions (I've mostly stopped using
 them because of this), though that's a recognized wart and is due to
 be fixed.
 
Wow, you really take non-pollution of the namespace seriously. I agree 
it's a wart, but it's one I have happily worked around since day one.
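The leak is easy to demonstrate (Python 3 shown; the list-comprehension case mentioned above was indeed fixed in Python 3, but the for statement still behaves this way):

```python
for i in range(3):
    pass
print(i)  # the index variable survives the loop: prints 2

squares = [n * n for n in range(3)]
# In Python 2.5 `n` would also be visible here; Python 3 gave list
# comprehensions their own scope, so `n` is undefined after this line.
```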

 Python would be helped a lot by some way to introduce temporary
 scopes.  We had some discussion of this recently, concluding that
 there should be a compiler warning message if a variable in a
 temporary scope shadows one from a surrounding scope.

Yeah, well the approach to scoping could really be improved, but if you 
keep your namespaces small it's not difficult to keep things straight. I 
have always been rather guarded in my approach to accessing non-local 
scopes because the coupling is rather less than obvious, and is subject 
to variation due to non-local changes.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-05 Thread Hamilton, William
 From: Paul Rubin
 
 greg [EMAIL PROTECTED] writes:
   E.g. your program might pass its test and run properly for years
   before some weird piece of input data causes some regexp to not
   quite work.
 
  Then you get a bug report, you fix it, and you add a test
  for it so that particular bug can't happen again.
 
 Why on earth would anyone prefer taking a failure in the field over
 having a static type check make that particular failure impossible?
 

Because static typechecking won't make that particular failure
impossible,  but instead just change it from a type error into a data
error that may or may not be harder to identify.  If your program gets a
piece of data that breaks it, you'll get a failure in the field.  Static
typechecking won't prevent that.  
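A small hypothetical sketch of the point: give the field a static type and the weird input still fails in the field, just as a data error rather than a type error.

```python
def parse_age(field: str) -> int:
    # A typed contract guarantees an int comes back,
    # but it cannot make bad field contents impossible.
    return int(field)

print(parse_age("42"))   # 42
# parse_age("forty") still fails at runtime -- as a ValueError
# (a data error), not a type error.
```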


--
-Bill Hamilton
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-05 Thread Donn Cave
In article [EMAIL PROTECTED],
 Paul Boddie [EMAIL PROTECTED] wrote:

 However, it's interesting to consider the work that sometimes needs to
 go in to specify data structures in some languages - thinking of ML
 and friends, as opposed to Java and friends. The campaign for optional
 static typing in Python rapidly became bogged down in this matter,
 fearing that any resulting specification for type information might
 not be the right combination of flexible and powerful to fit in with
 the rest of the language, and that's how we really ended up with PEP
 3107: make the semantics vague and pretend it has nothing to do with
 types, thus avoiding the issue completely.

I missed the campaign for optional static typing, must have been
waged in the developer list.  Unless it was not much more than
some on-line musings from GvR a year or two ago.  I don't see
how it could ever get anywhere without offending a lot of the
Python crowd, however well designed, so I can see why someone
might try to sneak it past by pretending it has nothing to do
with types.  But he didn't -- look at the examples, I think he
rather overstates the potential for static typing applications.

   Donn Cave, [EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-05 Thread George Sakkis
On Jul 5, 3:24 pm, Donn Cave [EMAIL PROTECTED] wrote:
 In article [EMAIL PROTECTED],
  Paul Boddie [EMAIL PROTECTED] wrote:

  However, it's interesting to consider the work that sometimes needs to
  go in to specify data structures in some languages - thinking of ML
  and friends, as opposed to Java and friends. The campaign for optional
  static typing in Python rapidly became bogged down in this matter,
  fearing that any resulting specification for type information might
  not be the right combination of flexible and powerful to fit in with
  the rest of the language, and that's how we really ended up with PEP
  3107: make the semantics vague and pretend it has nothing to do with
  types, thus avoiding the issue completely.

 I missed the campaign for optional static typing, must have been
 waged in the developer list.  Unless it was not much more than
 some on-line musings from GvR a year or two ago.  I don't see
 how it could ever get anywhere without offending a lot of the
 Python crowd, however well designed, so I can see why someone
 might try to sneak it past by pretending it has nothing to do
 with types.  But he didn't -- look at the examples, I think he
 rather overstates the potential for static typing applications.

The key point is that this is left to 3rd party libraries; the
language won't know anything more about static typing than it does
now. FWIW, there is already a typechecking module [1] providing a
syntax as friendly as it gets without function annotations. If the
number of its downloads from the Cheeseshop is any indication of static
typing's popularity among Pythonistas, I doubt that PEP 3107 will give
significant momentum to any non-standard module anytime soon.

George


[1] http://oakwinter.com/code/typecheck/

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Paul Rubin
greg [EMAIL PROTECTED] writes:
  E.g. your program might pass its test and run properly for years
  before some weird piece of input data causes some regexp to not quite
  work.
 
 Then you get a bug report, you fix it, and you add a test
 for it so that particular bug can't happen again.

Why on earth would anyone prefer taking a failure in the field over
having a static type check make that particular failure impossible?

  Once I got the function to work, I deployed it without writing
  permanent tests for it.
 
 That suggests you had a temporary test at some point.

I ran the function on real, dynamic data and made sure the results
were right.

 So, keep it and make it a permanent test. Even if it's just a small
 manually-created directory, it's still better than nothing, and you
 can add to it over time to cover any bugs that turn up.

That doesn't obviously fit into the doctest framework; such a test
would be considerably more complex than the function itself, and the
types of situations that I could imagine causing failures wouldn't be
included in a test like that.  Basically I had to get on to the next
thing.  Given the somewhat throwaway nature of this particular code,
nothing else really made sense.  But I think I'm going to look into
using dejagnu for more heavyweight testing of some other parts of the
program, so I guess some good came out of discussing this issue.
Thanks.
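For comparison, when a function's behavior does fit the doctest framework, the test costs almost nothing. A hypothetical, much-simplified stand-in for the os.walk function described above:

```python
import doctest

def bundle_by_extension(names):
    """Group filenames by extension.

    >>> bundle_by_extension(["a.txt", "b.py", "c.txt"])
    {'txt': ['a.txt', 'c.txt'], 'py': ['b.py']}
    """
    groups = {}
    for name in names:
        groups.setdefault(name.rsplit(".", 1)[-1], []).append(name)
    return groups

doctest.testmod()  # barely more code than the function itself
```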
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Bruno Desthuilliers
Paul Rubin wrote:
 greg [EMAIL PROTECTED] writes:
 E.g. your program might pass its test and run properly for years
 before some weird piece of input data causes some regexp to not quite
 work.
 Then you get a bug report, you fix it, and you add a test
 for it so that particular bug can't happen again.
 
 Why on earth would anyone prefer taking a failure in the field over
 having a static type check make that particular failure impossible?

Because static type checks impose a lot of arbitrary restrictions, 
boilerplate code etc, which tends to make code more complicated than it 
needs to be, which is a good way of introducing bugs that wouldn't have 
existed without static type checks. Depending on the application domain 
and some technical and non-technical constraints and requirements, it 
(often) happens that it's better to have the application deployed now 
with an occasional error message than to have it next year...

And FWIW, when it comes to weird piece of input data, statically typed 
languages are not specially better than dynamic ones...

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Roy Smith
greg [EMAIL PROTECTED] wrote:

 Paul Rubin wrote:
  E.g. your program might pass its test and run properly for years
  before some weird piece of input data causes some regexp to not quite
  work.
 
 Then you get a bug report, you fix it, and you add a test
 for it so that particular bug can't happen again.

The TDD zealots would tell you you've got the order wrong.  Instead of 
"fix, then write a test", it should be "write a failing test, then fix it".
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Alex Martelli
Roy Smith [EMAIL PROTECTED] wrote:

 greg [EMAIL PROTECTED] wrote:
 
  Paul Rubin wrote:
   E.g. your program might pass its test and run properly for years
   before some weird piece of input data causes some regexp to not quite
   work.
  
  Then you get a bug report, you fix it, and you add a test
  for it so that particular bug can't happen again.
 
 The TDD zealots would tell you you've got the order wrong.  Instead of
 fix, then write a test, it should be write a failing test, then fix it.

Incidentally, I was pretty surprised recently (re-reading Weinberg's
Psychology of Computer Programming classic from _1971_) to find out
Weinberg advocating test-first coding (not the same thing as
test-driven design, but sharing the key insight that tests should be
written before the code they test) for psychological reasons. He's
discussing common practices of the '60s, with the same professional
writing both the code and the tests, and pointing out how often the
knowledge of the already-written code subconsciously influences the
programmer to write tests that don't really challenge the code enough
-- writing the tests in advance would avoid this problem.

Nihil sub sole novi...


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
 Because static type checks impose a lot of arbitrary restrictions,
 boilerplate code etc, which tends to make code more complicated than
 it needs to be, which is a good way of introducing bugs that wouldn't
 have existed without static type checks. 

Why do you say that?  By metrics and anecdotal evidence, Haskell code
appears to be at least as compact as Python code.

 Depending on the application domain and some technical and
 non-technical constraints and requirements, it (often) happens that
 it's better to have the application deployed now with an occasional
 error message than to have it next year...

I suppose that includes the thing I'm currently working on.  For
some other stuff I've done, such errors would have caused huge hassles,
lost customer money, etc.  

 And FWIW, when it comes to weird piece of input data, statically
 typed languages are not specially better than dynamic ones...

I know that ML gives compiler warning messages if you have a pattern
match (sort of a variant of a case statement, not a regexp match)
which is non-exhaustive.  And Haskell's Maybe monad is part of an
idiom that handles failing computations (like regexp matches) much
more gracefully than Python can.  Both of those would help this
situation.
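The Maybe idiom Paul mentions can be approximated in Python (a hypothetical sketch, not real library code): a failed step yields None and later steps pass the None along instead of crashing.

```python
import re

def maybe_bind(value, func):
    # Maybe-style sequencing: short-circuit on None instead of
    # letting an AttributeError escape.
    return None if value is None else func(value)

good = maybe_bind(re.match(r"(\d+)", "127 hits"), lambda m: m.group(1))
bad = maybe_bind(re.match(r"(\d+)", "no digits"), lambda m: m.group(1))
print(good, bad)  # 127 None
```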
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Paul Boddie
Paul Rubin wrote:
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
  Because static type checks impose a lot of arbitrary restrictions,
  boilerplate code etc, which tends to make code more complicated than
  it needs to be, which is a good way of introducing bugs that wouldn't
  have existed without static type checks.

 Why do you say that?  By metrics and anecdotal evidence, Haskell code
 appears to be at least as compact as Python code.

I think Bruno is referring to another class of languages here.
However, it's interesting to consider the work that sometimes needs to
go in to specify data structures in some languages - thinking of ML
and friends, as opposed to Java and friends. The campaign for optional
static typing in Python rapidly became bogged down in this matter,
fearing that any resulting specification for type information might
not be the right combination of flexible and powerful to fit in with
the rest of the language, and that's how we really ended up with PEP
3107: make the semantics vague and pretend it has nothing to do with
types, thus avoiding the issue completely.

Paul

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread John Nagle
Paul Boddie wrote:
 Paul Rubin wrote:
 
 The campaign for optional
 static typing in Python rapidly became bogged down in this matter,
 fearing that any resulting specification for type information might
 not be the right combination of flexible and powerful to fit in with
 the rest of the language, and that's how we really ended up with PEP
 3107: make the semantics vague and pretend it has nothing to do with
 types, thus avoiding the issue completely.

 Unfortunately, that may lead to the worst of both worlds.

 If you think enforced static typing is painful, try maintaining code with
non-enforced static typing.  You can't rely on the type information, and
inevitably some of it will be wrong.  You can't tell by looking which
is wrong, of course.

 This has been tried.  Original K&R C had non-enforced static typing.
All struct pointers were equivalent.  It wasn't pretty.

 It takes strict programmer discipline to make non-enforced static
typing work.  I've seen it work in an aerospace company, but the Python
crowd probably doesn't want that level of engineering discipline.
Non-enforced static typing requires a quality assurance group that
reads code and checks coding standards.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Paul Rubin
John Nagle [EMAIL PROTECTED] writes:
  This has been tried.  Original K&R C had non-enforced static typing.
 All struct pointers were equivalent.  It wasn't pretty.
 
  It takes strict programmer discipline to make non-enforced static
 typing work.  I've seen it work in an aerospace company, but the Python
 crowd probably doesn't want that level of engineering discipline.

I think even enforced static types wouldn't cure what I see as the
looseness in Python.  There is not enough composability of small
snippets of code.  For example, the for statement clobbers its index
variable and then leaks it to the outside of the loop.  That may be
more of a culprit than dynamic types.  Perl and C++ both fix this with
syntax like

for (my $i in ...) ...   (perl)  or
for (int i = 0; i < n; i++) ...  (C++, Java)

making a temporary scope for the index variable.  Python even leaks
the index variable of list comprehensions (I've mostly stopped using
them because of this), though that's a recognized wart and is due to
be fixed.

Python would be helped a lot by some way to introduce temporary
scopes.  We had some discussion of this recently, concluding that
there should be a compiler warning message if a variable in a
temporary scope shadows one from a surrounding scope.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Bruno Desthuilliers
Paul Rubin wrote:
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 
Because static type checks impose a lot of arbitrary restrictions,
boilerplate code etc, which tends to make code more complicated than
it needs to be, which is a good way of introducing bugs that wouldn't
have existed without static type checks. 
 
 
 Why do you say that?   By metrics and anecdotal evidence, Haskell code
 appears to be at least as compact as Python code.

Haskell - as other languages using type-inference like OCaml - are in a 
different category. Yes, I know, don't say it, they are statically typed 
- but it's mostly structural typing, not declarative typing. Which makes 
them much more usable IMHO. It's too bad they are not more widely adopted.

Depending on the application domain and some technical and
non-technical constraints and requirements, it (often) happens that
it's better to have the application deployed now with an occasional
error message than to have it next year...
 
 
 I suppose that includes the thing I'm currently working on.
 
 For
 some other stuff I've done, such errors would have caused huge hassles,
 lost customer money, etc.  

Still, static typechecking is not a guarantee against runtime errors. Nor 
against logical errors.

 
And FWIW, when it comes to weird piece of input data, statically
typed languages are not specially better than dynamic ones...
  
 I know that ML gives compiler warning messages if you have a pattern
 match (sort of a variant of a case statement, not a regexp match)

I know what pattern matching is, I did play a bit with OCaml and Haskell.

 which is non-exhaustive.  And Haskell's Maybe monad is part of an
 idiom that handles failing computations (like regexp matches) much
 more gracefully than Python can.  Both of those would help this
 situation.

I'd have to see a concrete use case. And I'd need much more real-world 
experience with some ML variant, but this is not something I can expect 
to happen in a near future - it's difficult enough to convince PHBs that 
Python is fine.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
 Haskell - as other languages using type-inference like OCaml - are in
 a different category. Yes, I know, don't say it, they are statically
 typed - but it's mostly structural typing, not declarative
 typing. Which makes them much more usable IMHO. 

Some users in fact recommend writing an explicit type signature for
every Haskell function, which functions sort of like a unit test.
That doesn't bloat the code up noticeably.  The conciseness of those
languages comes more from polymorphism and convenient ways of writing
and using higher-order functions, than from type inference.

 Still, static typechecking is not a guarantee against runtime
 errors. Nor against logical errors.

Right, however the reality is it does seem to prevent a lot of
surprises.  There's even an intermediate language (a language
generated by a compiler as an intermediate step towards emitting
machine code) called Henk, in which EVERY value is type-annotated (and
in a very fancy type system too).  The author reports that the
annotations have been very helpful for noticing compiler bugs.

 I'd have to see a concrete use case. And I'd need much more real-world
 experience with some ML variant, but this is not something I can
 expect to happen in a near future - it's difficult enough to convince
 PHBs that Python is fine.

Monad Reader #7 has an article about some Wall street company using ML:

  http://www.haskell.org/sitewiki/images/0/03/TMR-Issue7.pdf

see the article by Yaron Minsky.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Bruno Desthuilliers
Paul Rubin wrote:
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 
Haskell - as other languages using type-inference like OCaml - are in
a different category. Yes, I know, don't say it, they are statically
typed - but it's mostly structural typing, not declarative
typing. Which makes them much more usable IMHO. 
 
 
 Some users in fact recommend writing an explicit type signature for
 every Haskell function, which functions sort of like a unit test.

Stop here. explicit type signature == declarative static typing != unit 
test.

 That doesn't bloat the code up noticeably.  The conciseness of those
 languages comes more from polymorphism and convenient ways of writing
 and using higher-order functions, than from type inference.

Type inference is certainly helpful for genericity.

 
Still, static typechecking is not a guarantee against runtime
errors. Nor against logical errors.
 
 
 Right, however the reality is it does seem to prevent a lot of
 surprises.  

I have few surprises with typing in Python. Very few. Compared to the 
flexibility and simplicity gained from a dynamism that couldn't work 
with static typing - even using type inference -, I don't see it as such 
a wonderful gain. At least in my day-to-day work.

 
I'd have to see a concrete use case. And I'd need much more real-world
experience with some ML variant, but this is not something I can
expect to happen in a near future - it's difficult enough to convince
PHBs that Python is fine.
 
 
 Monad Reader #7 has an article about some Wall street company using ML:
 
   http://www.haskell.org/sitewiki/images/0/03/TMR-Issue7.pdf
 
 see the article by Yaron Minsky.

Sorry, I don't live near Wall Street !-)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread BJörn Lindqvist
On 6/22/07, Eduardo EdCrypt O. Padoan [EMAIL PROTECTED] wrote:
 Remember that pure CPython has no different compile time and
 runtime. But Psyco and ShedSkin could use the annotations the way
 they want.
.
 def compile(source: something compilable,
filename: where the compilable thing comes from,
mode: is this a single statement or a suite?):

I think the above would make an annotation-enhanced Psyco or ShedSkin
very confused.

-- 
mvh Björn
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Eduardo "EdCrypt" O. Padoan
On 7/4/07, BJörn Lindqvist [EMAIL PROTECTED] wrote:
 On 6/22/07, Eduardo EdCrypt O. Padoan [EMAIL PROTECTED] wrote:
  Remember that pure CPython has no different compile time and
  runtime. But Psyco and ShedSkin could use the annotations the way
  they want.
 .
  def compile(source: something compilable,
 filename: where the compilable thing comes from,
 mode: is this a single statement or a suite?):

 I think the above would make an annotation-enhanced Psyco or ShedSkin
 very confused.

This example was to show that annotations are for documentation too,
not only type checking or optimization. It is from the PEP.
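Concretely, the annotations in the PEP's example are simply stored on the function object, available to documentation tools and given no semantics by the language itself (renamed here to avoid shadowing the compile builtin):

```python
def compile_source(source: "something compilable",
                   filename: "where the compilable thing comes from",
                   mode: "is this a single statement or a suite?"):
    pass

# Python records the annotations but attaches no meaning to them.
print(compile_source.__annotations__["mode"])
```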



EduardoOPadoan (eopadoan-altavix::com)
Bookmarks: http://del.icio.us/edcrypt
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread greg
Roy Smith wrote:
 greg [EMAIL PROTECTED] wrote:

  Then you get a bug report, you fix it, and you add a test
  for it so that particular bug can't happen again.

 The TDD zealots would tell you you've got the order wrong.  Instead of 
 fix, then write a test, it should be write a failing test, then fix it.

Yes, I'm aware of that. I said "and", not "then". :-)

--
Greg
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Roy Smith
In article [EMAIL PROTECTED],
 John Nagle [EMAIL PROTECTED] wrote:

 Non-enforced static typing requires a quality assurance group that
 reads code and checks coding standards.

In other words, it's enforced, but it's enforced by QA people instead of 
the compiler.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-04 Thread Michael Hoffman
Eduardo EdCrypt O. Padoan wrote:
 On 6/30/07, Bruno Desthuilliers 
 [EMAIL PROTECTED] wrote:
 Eduardo EdCrypt O. Padoan wrote:
  Remember that pure CPython has no different compile time and
  runtime.

 Oh yes? So what's the compiler doing, and what are those .pyc files?
 (hint: read the doc)
 
 Sorry, I surely know that Python has a compile time, I wanted to say
 something like compile-time checks apart from syntax.

Well, if you try to reassign __debug__ or None you get a SyntaxError, 
but I don't think it is truly checking syntax.
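The check happens when the source is compiled, before any bytecode runs, which is why it surfaces as a SyntaxError even though it is really a semantic rule:

```python
# Assigning to __debug__ is rejected at compile time,
# not when the statement would execute.
try:
    compile("__debug__ = 0", "<example>", "exec")
    rejected = False
except SyntaxError:
    rejected = True
print(rejected)  # True
```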
-- 
Michael Hoffman
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-03 Thread John Nagle
Alex Martelli wrote:
 Donn Cave [EMAIL PROTECTED] wrote:
 
 
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Alex Martelli) wrote:


Dynamic typing is recommended, they conclude, when programs must be
as flexible as possible.  I recommend reading the Agile Manifesto to
understand why maximal flexibility is crucial in most real-world 
application programming -- and therefore why, in said real world rather
than in the more academic circles Dr. Van Roy and Dr. Haridi move in,
dynamic typing is generally preferable, and not such a tiny issue as
they make the difference to be.

I guess there may be more than one kind of flexibility.  The language
I fool around with sometimes that has strong static typing is Haskell.
Though a relatively elementary Haskell programmer (the learning curve
with this language is something you would have to experience to believe),
 
 
 I do have (some of:-) that experience, and am reasonably at ease in
 Haskell (except when it comes to coding monads, which I confess I still
 have trouble wrapping my head around).
 
 
I feel that the type checking actually helps me program faster.
The compiler's check for structural correctness is after all for my
benefit, and expedites development and maintenance of code, in my
limited experience.  The more extensive my changes, the more useful
the type checking is - I'm much more casual about significant code
revision when writing in Haskell, somewhat more casual when writing
in C, and worried when writing Python.
 
 
 Eckel's and Martin's well-known essays on why good testing can replace
 strict static type checking:
 http://www.mindview.net/WebLog/log-0025
 http://www.artima.com/weblogs/viewpost.jsp?thread=4639

 He's all over the place in that article.  He really isn't talking
about strong typing at all.  What he does make a case
for is dispatch by name without inheritance, or duck typing.
But that, in fact, is a form of strong type checking.  The effect is
as if object had an abstract function for every member function
in the whole program.  (And, in fact, in Python you pay exactly the
dispatch cost that approach would take.)

 The problem is that in both C++ and Java, the first round
of the type system was botched.  C++ originally had neither
generics nor checked downcasting (a la dynamic_cast).
Java originally lacked generics.  Both languages added those
as bolt-ons, and it shows.

 But, again, this really isn't a Python issue.

John Nagle
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-03 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
 I do have (some of:-) that experience, and am reasonably at ease in
 Haskell (except when it comes to coding monads, which I confess I still
 have trouble wrapping my head around).

I recently found the article

  http://en.wikibooks.org/wiki/Haskell/Category_theory

which had (for me anyway) a far more comprehensible explanation of
monads than anything I ever saw about Haskell programming.  Don't be
scared by the title--the article is very readable.  It's much clearer
than many Haskell tutorials I've seen, whose monad explanations simply
wouldn't fit in my mind, with weird analogies about robots handling
nuclear waste in containers moving around on conveyor belts, etc.  I
think right now I could write an explanation of monads understandable
by any well-versed Python programmer (it would be based on showing the
parallels between Python's list type and Haskell's List monad, then
moving onto sequencing with the Maybe monad, instead of trying to
start by explaining Haskell I/O), so maybe I'll try to do that
sometime.
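A sketch of that parallel (hypothetical illustration): for Python lists, Haskell's return and >>= are just wrapping and flat-mapping.

```python
def unit(x):
    # Haskell's `return` for the list monad: wrap a plain value
    return [x]

def bind(xs, f):
    # Haskell's (>>=) for lists is flatMap: apply f to each element
    # and concatenate the resulting lists
    return [y for x in xs for y in f(x)]

print(bind([1, 2, 3], lambda x: [x, -x]))  # [1, -1, 2, -2, 3, -3]
```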

 Eckel's and Martin's well-known essays on why good testing can replace
 strict static type checking:
 http://www.mindview.net/WebLog/log-0025
 http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Those are pretty unpersuasive since they're based on Java and C++.
Eckel's example would have been much easier in Haskell, I think.

Did you look at Tim Sweeney's presentation The Next Mainstream
Programming Language: A Game Developer's Perspective?  I wonder what
you thought of it.  It's here:

  Powerpoint version:
  http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt

  PDF version:
  http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

He advocates a mixed functional/procedural language with even fancier
static types than Haskell.

 Me, I'm always worried about significant code revision _unless I have
 good tests in place_.  Strict static typechecks catch only a small
 subset of frequent mistakes -- good tests catch a far higher quota.

It seems to me that the typecheck is sort of like a test that the
compiler provides for you automatically.  You still have to write
tests of your own, but not as many.  Also, Python-style unit tests
usually are written for fairly small pieces of code.  They usually
don't manage to cover every way that data can flow through a program.
E.g. your program might pass its test and run properly for years
before some weird piece of input data causes some regexp to not quite
work.  It then hands None to some function that expects a string, and
the program crashes with a runtime type error.  Static typechecking
makes sure that a function expecting a string can never ever receive
None.  
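A tiny, hypothetical Python illustration of that failure mode (the function names are mine). With an `Optional[str]` annotation, a static checker such as mypy would reject the unguarded hand-off at analysis time rather than years later.

```python
import re

def extract_user(line):
    m = re.search(r"user=(\w+)", line)
    # On unexpected input m is None, so the caller silently
    # receives None instead of a str.
    return m.group(1) if m else None

def shout(name):
    return name.upper()  # expects str; crashes on None

print(shout(extract_user("user=alice")))  # ALICE
# Years later, one weird log line does this instead:
#   shout(extract_user("malformed line"))
#   AttributeError: 'NoneType' object has no attribute 'upper'
```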

I also sometimes have trouble figuring out how to write useful tests,
though maybe there's some understanding that I haven't yet grokked.
However, a lot of my programs crawl the filesystem, retrieve things
over the web, etc.  It's hard to make any meaningful tests
self-contained.  Last week I needed to write a 20-line Python function
that used os.walk to find all filenames of a certain format in a
certain directory tree and bundle them up in a particular way.  The
main debugging hassle was properly understanding the output format of
os.walk.  A static type signature for os.walk, checked against my
program, would have taken care of that immediately.  Once I got the
function to work, I deployed it without writing permanent tests for
it.  An actual self-contained test would have required making some
subdirectories with the right layout before running the function.
Writing a test script that really did that would have taken 10x as
long as writing the function took, and such a complex test would have
needed its own tests (I'd have probably used something like dejagnu).

For functions that simply transform data, I've been enjoying using
doctest, but for functions like the above os.walk thing, I'm not sure
what to do.  I'd be happy to receive advice, of course.
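One possible middle ground for the os.walk case, sketched under my own assumptions (modern Python's `tempfile.TemporaryDirectory`; `find_logs` is an invented stand-in for the real 20-line function): build a throwaway tree, run the function, and let the context manager clean up -- no dejagnu required.

```python
import os
import tempfile

def find_logs(root):
    """Collect the paths of all *.log files under root."""
    found = []
    # os.walk yields (dirpath, dirnames, filenames) triples --
    # the output format in question.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".log"):
                found.append(os.path.join(dirpath, name))
    return sorted(found)

# Self-contained test: build a throwaway tree, run, auto-clean.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "a", "b"))
    for rel in ("x.log",
                os.path.join("a", "y.log"),
                os.path.join("a", "b", "z.txt")):
        open(os.path.join(root, rel), "w").close()
    result = [os.path.relpath(p, root) for p in find_logs(root)]
    assert result == [os.path.join("a", "y.log"), "x.log"]
```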

 Typechecks do catch some mistakes faster, but, in my experience on
 today's machines, that tiny difference is becoming insignificant,
 particularly when you consider that typechecks typically require
 whole-program analysis while, as Van Roy and Haridi point out, dynamic
 typing affords totally open coding.

I don't know if this holds in Haskell, but at least in C, the compiler
could usually find several errors in one pass, while dynamic types
often mean running the program, seeing some function crash from
getting None instead of a string, figuring out and fixing the error,
running again, getting a similar crash in a different place, and
repeating all the above several times.

Lately I've been looking at a somewhat highbrow book on programming
language theory:

   http://www.cs.cmu.edu/~rwh/plbook/book.pdf

I don't understand that much of it, but the parts I can make any sense
of are giving me a better picture of what PL designers these days
think about.  It's really 

Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-03 Thread Roy Smith
[EMAIL PROTECTED] (Alex Martelli) wrote:
 Eckel's and Martin's well-known essays on why good testing can replace
 strict static type checking:
 http://www.mindview.net/WebLog/log-0025
 http://www.artima.com/weblogs/viewpost.jsp?thread=4639

I've read the first before.  I just re-read it.  There seem to be three 
different concepts all being talked about at the same time.

1) Static vs. dynamic checking.
2) Type (is-a) checking vs. behavior (has-a) checking.
3) Automatic (i.e. compiler generated) vs. manually written tests.

They all allow you to write manual tests.  No sane programmer will rely 
exclusively on the automatic checks, no matter what flavor they are.  The 
interesting thing is that most people seem to conflate items 1 and 2 above 
into two composite camps: static type checking vs. dynamic behavior 
checking.  There's really no reason you can't have dynamic type checking 
(things that raise TypeError in Python, for example, or C++'s 
dynamic_cast).  There's also no reason you can't have static behavior 
checking (Java's interfaces).
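Both dynamic quadrants Roy distinguishes are directly visible in Python; a small illustrative snippet (my example, not his):

```python
# Dynamic *type* (is-a) checking: Python raises TypeError at runtime.
try:
    "abc" + 3
except TypeError as e:
    print("is-a check failed:", e)

class Quacker:
    def quack(self):
        return "quack"

def poke(obj):
    # Dynamic *behavior* (has-a) checking: just try the call.
    return obj.quack()

print(poke(Quacker()))   # quack
try:
    poke(42)             # int has no .quack()
except AttributeError as e:
    print("has-a check failed:", e)
```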


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-03 Thread greg
Paul Rubin wrote:
 E.g. your program might pass its test and run properly for years
 before some weird piece of input data causes some regexp to not quite
 work.

Then you get a bug report, you fix it, and you add a test
for it so that particular bug can't happen again.

 Once I got the
 function to work, I deployed it without writing permanent tests for
 it.

That suggests you had a temporary test at some point.
So, keep it and make it a permanent test. Even if it's
just a small manually-created directory, it's still
better than nothing, and you can add to it over time
to cover any bugs that turn up.
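That fix-then-pin-it-down workflow in miniature -- a hedged, invented example (`strip_comment` and its quoted-string bug are hypothetical): fix the reported bug, then keep a permanent regression test so that particular bug can't happen again.

```python
import unittest

def strip_comment(line):
    # Fixed version: the original naively used line.split("#")[0],
    # which mangled lines containing '#' inside quoted strings.
    out, in_quote = [], False
    for ch in line:
        if ch == '"':
            in_quote = not in_quote
        if ch == "#" and not in_quote:
            break
        out.append(ch)
    return "".join(out).rstrip()

class RegressionTests(unittest.TestCase):
    def test_plain_comment(self):
        self.assertEqual(strip_comment("x = 1  # note"), "x = 1")

    def test_hash_inside_string(self):
        # Pins down the reported bug: '#' in quotes is not a comment.
        self.assertEqual(strip_comment('msg = "a#b"'), 'msg = "a#b"')

# Run the suite explicitly (avoids unittest.main's argv handling).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```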

--
Greg


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-02 Thread Donn Cave
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Alex Martelli) wrote:

 Dynamic typing is recommended, they conclude, when programs must be 
 as flexible as possible.  I recommend reading the Agile Manifesto to 
 understand why maximal flexibility is crucial in most real-world 
 application programming -- and therefore why, in said real world rather
 than in the more academic circles Dr. Van Roy and Dr. Haridi move in, 
 dynamic typing is generally preferable, and not such a tiny issue as 
 they make the difference to be.

I guess there may be more than one kind of flexibility.  The language
I fool around with sometimes that has strong static typing is Haskell.
Though a relatively elementary Haskell programmer (the learning curve
with this language is something you would have to experience to believe),
I feel that the type checking actually helps me program faster.
The compiler's check for structural correctness is after all for my
benefit, and expedites development and maintenance of code, in my
limited experience.  The more extensive my changes, the more useful
the type checking is - I'm much more casual about significant code
revision when writing in Haskell, somewhat more casual when writing
in C, and worried when writing Python.

This is the kind of flexibility where you make significant changes
to something, but the result is as structurally consistent as it
would have been if written that way from the start.  I have also
seen the kind of flexibility where you use expedient hacks to make
changes with relatively small amounts of code, and I've seen it in
Python applications.  It's flexibility when you're doing it, but
it paradoxically causes rigidity as the hacks create more points
of articulation and more things that aren't obvious to the person
making the changes.  If that's flexibility, you can have it.

   Donn Cave, [EMAIL PROTECTED]


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-02 Thread Steve Holden
Donn Cave wrote:
 In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Alex Martelli) wrote:
 
 Dynamic typing is recommended, they conclude, when programs must be 
 as flexible as possible.  I recommend reading the Agile Manifesto to 
 understand why maximal flexibility is crucial in most real-world 
 application programming -- and therefore why, in said real world rather
 than in the more academic circles Dr. Van Roy and Dr. Haridi move in, 
 dynamic typing is generally preferable, and not such a tiny issue as 
 they make the difference to be.
 
 I guess there may be more than one kind of flexibility.  The language
 I fool around with sometimes that has strong static typing is Haskell.
 Though a relatively elementary Haskell programmer (the learning curve
 with this language is something you would have to experience to believe),
 I feel that the type checking actually helps me program faster.
 The compiler's check for structural correctness is after all for my
 benefit, and expedites development and maintenance of code, in my
 limited experience.  The more extensive my changes, the more useful
 the type checking is - I'm much more casual about significant code
 revision when writing in Haskell, somewhat more casual when writing
 in C, and worried when writing Python.
 
 This is the kind of flexibility where you make significant changes
 to something, but the result is as structurally consistent as it
 would have been if written that way from the start.  I have also
 seen the kind of flexibility where you use expedient hacks to make
 changes with relatively small amounts of code, and I've seen it in
 Python applications.  It's flexibility when you're doing it, but
 it paradoxically causes rigidity as the hacks create more points
 of articulation and more things that aren't obvious to the person
 making the changes.  If that's flexibility, you can have it.
 
Indeed you describe an arthritic design quite well, but I don't think 
it's fair to single Python out as a potential problem. As we have known 
for a long time you can write unstructured code in almost any language, 
and you can do structured programming in almost any language given 
discipline.

I certainly agree that if the programmer shows insufficient discipline 
the results will likely be bad, but in that light Haskell is really the 
ultimate bondage language. Some of us don't like to be tied that tightly.

I don't feel the same trepidation you appear to do when refactoring 
Python code. I suppose I might say that in my experience Python allows 
you to appreciate a better-engineered solution more easily, and so it 
tends to encourage better engineering.

In some ways C# is quite Python-like, and I am beginning to enjoy 
writing it from time to time, but when you are dealing with amorphous 
collections the casting and conversions can drive you almost insane. And 
since the collections are ultimately collections of Object, none of it 
actually gets you any nearer your programming goal.

regards
  Steve
-- 
Steve Holden+1 571 484 6266   +1 800 494 3119
Holden Web LLC/Ltd   http://www.holdenweb.com
Skype: holdenweb  http://del.icio.us/steve.holden
--- Asciimercial --
Get on the web: Blog, lens and tag the Internet
Many services currently offer free registration
--- Thank You for Reading -



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-02 Thread Alex Martelli
Donn Cave [EMAIL PROTECTED] wrote:

 In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Alex Martelli) wrote:
 
  Dynamic typing is recommended, they conclude, when programs must be
  as flexible as possible.  I recommend reading the Agile Manifesto to
  understand why maximal flexibility is crucial in most real-world 
  application programming -- and therefore why, in said real world rather
  than in the more academic circles Dr. Van Roy and Dr. Haridi move in,
  dynamic typing is generally preferable, and not such a tiny issue as
  they make the difference to be.
 
 I guess there may be more than one kind of flexibility.  The language
 I fool around with sometimes that has strong static typing is Haskell.
 Though a relatively elementary Haskell programmer (the learning curve
 with this language is something you would have to experience to believe),

I do have (some of:-) that experience, and am reasonably at ease in
Haskell (except when it comes to coding monads, which I confess I still
have trouble wrapping my head around).

 I feel that the type checking actually helps me program faster.
 The compiler's check for structural correctness is after all for my
 benefit, and expedites development and maintenance of code, in my
 limited experience.  The more extensive my changes, the more useful
 the type checking is - I'm much more casual about significant code
 revision when writing in Haskell, somewhat more casual when writing
 in C, and worried when writing Python.

Eckel's and Martin's well-known essays on why good testing can replace
strict static type checking:
http://www.mindview.net/WebLog/log-0025
http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Me, I'm always worried about significant code revision _unless I have
good tests in place_.  Strict static typechecks catch only a small
subset of frequent mistakes -- good tests catch a far higher quota.
Typechecks do catch some mistakes faster, but, in my experience on
today's machines, that tiny difference is becoming insignificant,
particularly when you consider that typechecks typically require
whole-program analysis while, as Van Roy and Haridi point out, dynamic
typing affords totally open coding.


Alex


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-07-01 Thread Diez B. Roggisch
 
 
 Because of the different flight path, a data conversion from a 64-bit 
 floating point to 16-bit signed integer value caused a hardware 
 exception (more specifically, an arithmetic overflow, as the floating 
 point number had a value too large to be represented by a 16-bit signed 
 integer).
 
 
 As far as I'm concerned, it surely qualifies as a runtime type error - 
 data conversion from a floating point to a 16-bit signed int should 
 not be allowed when unsafe, isn't it ?

I wouldn't say so. The conversion is a partial function. It _has_ to be 
a partial function, as you can't cram 64 bits into 16 without loss. If 
you could, infinite compression would be possible.

So - you _have_ to allow such a conversion. That has nothing to do with 
types, the types were proper. The values weren't.
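Diez's point -- proper types, improper values -- in a runnable Python sketch (illustrative only, not the Ariane code): a checked narrowing from a Python float (a 64-bit double) to the signed 16-bit range either converts or traps, precisely because most 64-bit values have no 16-bit representation.

```python
def to_int16(x):
    # A 64-bit float cannot in general fit into 16 bits, so the
    # conversion is necessarily partial: in-range values convert,
    # out-of-range values must either trap or be handled.
    n = int(x)
    if not -2**15 <= n < 2**15:
        raise OverflowError("%r does not fit in a signed 16-bit int" % x)
    return n

print(to_int16(1234.5))      # 1234 -- fine for Ariane-4-sized inputs
try:
    to_int16(1e6)            # an Ariane-5-sized input
except OverflowError as e:
    print("trap:", e)        # proper types, improper value
```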


 
 Efficiency considerations had led to the disabling of the software 
 handler (in Ada code) for this error trap.
 
 
 Which implies that even in ADA, runtime type errors are in fact expected 
 - else there would be no handler for such a case.

Well, yes, runtime errors occur - in statically typed languages as well. 
That's essentially the halting-problem.


  But what, how could an ADA module not be correct if it compiles - the
  ADA type system is here to prove the absence of certain bugs, isn't
  it ?-)


Yes, certain bugs. Not those though. In the same sense as the python 
compiler pukes statically on me if I write

def f():
    n = x * 2
    x = n


The whole point of the Ariane failure was that nobody trusted the 
code because the type analyzer had waved it through - but because the 
specs and a mathematical analysis of the code had shown it was to be 
trusted. For the old rocket, that was.

The question is not whether static typing that prevents certain errors is a 
bad thing. It's not. But it comes with a price tag - a loss of 
flexibility. And usually, I'm not willing to pay that price, because it 
makes my whole design much more cumbersome and thus often error-prone - 
as line numbers increase.

But in a constrained environment like the Ariane, where you have limited 
resources, you want as much certainty as you can get, together with 
operations that run on the native types of your processor in an 
efficient way.


Diez


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Paul Rubin
Bruno Desthuilliers [EMAIL PROTECTED] writes:
  [A type system is a] tractable syntactic method for proving the
  absence of certain program behaviors by classifying phrases
  according to the kinds of values they compute. (Pierce 2002).
 
 Is this supposed to contradict my assertion that *static* typing is
 for compilers ?

Yes, the main benefit these days is to prove the absence of certain
types of bugs in the program.

The article What To Know Before Debating Type Systems is pretty
informative (though it doesn't go as far as to call C/C++ untyped):

   http://cdsmith.twu.net/types.html

  C and C++ are basically untyped languages.

 Hem... This assertion is at least debatable. Care to post this on
 c.l.c or c.l.c++, so we get some feedback ?

I wouldn't consider the aficionados of those dinosaur languages to be
authorities on such a question.


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Bruno Desthuilliers
Paul Rubin a écrit :
 Bruno Desthuilliers [EMAIL PROTECTED] writes:
 
[A type system is a] tractable syntactic method for proving the
absence of certain program behaviors by classifying phrases
according to the kinds of values they compute. (Pierce 2002).

Is this supposed to contradict my assertion that *static* typing is
for compilers ?
 
 Yes, the main benefit these days is to prove the absence of certain
 types of bugs in the program.

As someone said, if declaring types thrice were enough to ensure 
correctness, I'd happily do so.

I still maintain that the primary *practical* reason behind static 
typing is to provide optimization clues for the compiler. You can (or at 
least could) have declarative static typing with very few type 
*checking* - I may be wrong here but I think one could even write a C 
compiler without *any* type checking. Heck, the programmer said it's a 
char*, so it must be one, right ?-) Of course, no one in their right mind 
would use such a compiler (if you have to suffer from the restrictions 
imposed by static typing, you also want to profit from its benefits - 
or at least avoid the possibly dramatic effects of a type error in these 
systems). Also, some (mostly) dynamically typed languages have grown 
type-annotations for the very same reason : hinting the compiler about 
possible optimizations.

wrt/ proofs of correctness, I'll just point to the spectacular failure 
of Ariane, which was caused by a *runtime* type error in a system 
programmed in ADA - one of the languages with the most rigid 
declarative static type systems. Not to say these proofs are nonexistent, 
just that they are IMHO a bit overvalued - at least wrt/ my daily 
concerns. Most of the errors I have to deal with are logical errors, not 
type errors, and given the kind of logic (err...) you can find in 
business applications (to make a long story short: it has very little to 
do with mathematics), I have some doubt about what a static type system 
- even a _very_ smart one - could prove here, at least if we hope to 
deliver the application within time and budget constraints (sorry for 
being soo practical here).

 The article What To Know Before Debating Type Systems is pretty
 informative (though it doesn't go as far as to call C/C++ untyped):
 
http://cdsmith.twu.net/types.html

Quote :

Therefore: I give the following general definitions for strong and weak 
typing, at least when used as absolutes:

 * Strong typing: A type system that I like and feel comfortable with
 * Weak typing: A type system that worries me, or makes me feel 
uncomfortable


According to this definition, I hereby declare that Python is strongly 
typed and Java weakly typed !-)

(sorry, couldn't resist).

More seriously, it's indeed an interesting article - even if I do not 
necessarily share all the author's views - for those who don't know 
anything about type systems. But it's clearly a bit biased and IMHO 
fails to present the pros of dynamic typing.

BTW, if I may insert another quote, that I think is relevant here as 
well as in another recent thread about 'typed' containers in Python:

I find it amusing when novice programmers believe their main job is 
preventing programs from crashing.  I imagine this spectacular failure 
argument wouldn't be so appealing to such a programmer.  More 
experienced programmers realize that correct code is great, code that 
crashes could use improvement, but incorrect code that doesn't crash is 
a horrible nightmare.


 
C and C++ are basically untyped languages.

Hem... This assertion is at least debatable. Care to post this on
c.l.c or c.l.c++, so we get some feedback ?
  
 I wouldn't consider the aficionados of those dinosaur languages

Hmmm... For a dinosaur, C seems very much alive. Can you remind me which 
language is used to implement CPython and most actual operating 
systems and low-level libraries ?

 to be
 authorities on such a question.

If you mean that most of them aren't necessarily great CS theorists, then 
you may be right. OTOH, they do have some practical experience with 
these languages, so they might have something to say...


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Eduardo \EdCrypt\ O. Padoan
On 6/30/07, Bruno Desthuilliers [EMAIL PROTECTED] wrote:
 Eduardo EdCrypt O. Padoan a écrit :
  Remember that pure CPython has no different compile time and
  runtiime.

 Oh yes ? So what's the compiler doing, and what are those .pyc files ?
 (hint: read the doc)

Sorry, I do know that Python has a compile time; I meant to say
something like "compile-time checks beyond syntax". My intention
was to show that a *huge* change in the Python interpretation model
would be needed to allow what he wanted, but I finished my email too
early :P
And yes, I have read the docs :)

-- 
EduardoOPadoan (eopadoan-altavix::com)
Bookmarks: http://del.icio.us/edcrypt

Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Alex Martelli
Bruno Desthuilliers [EMAIL PROTECTED] wrote:
   ...
 I still maintain that the primary *practical* reason behind static 
 typing is to provide optimization clues for the compiler. You can (or at

It's definitely a helpful aspect, yes -- given that compilers able to
infer types are still not very common for widely used languages; also,
complete type checking or inference requires analysis of the whole
program, which may be quite impractical -- and stops what Van Roy and
Haridi, in their masterpiece Concepts, Techniques and Models of
Computer Programming, call totally open programming.  Quoting a post
of mine from 2004:

I love the explanations of Van Roy and Haridi, p. 104-106 of their book,
though I may or may not agree with their conclusions (which are 
basically that the intrinsic difference is tiny -- they point to Oz and
Alice as interoperable languages without and with static typing, 
respectively), all the points they make are good.  Most importantly, I 
believe, the way dynamic typing allows real modularity (harder with 
static typing, since type discipline must be enforced across module 
boundaries), and exploratory computing in a computation model that 
integrates several programming paradigms. 

Dynamic typing is recommended, they conclude, when programs must be 
as flexible as possible.  I recommend reading the Agile Manifesto to 
understand why maximal flexibility is crucial in most real-world 
application programming -- and therefore why, in said real world rather
than in the more academic circles Dr. Van Roy and Dr. Haridi move in, 
dynamic typing is generally preferable, and not such a tiny issue as 
they make the difference to be.  Still, they at least show more 
awareness of the issues, in devoting 3 excellent pages of discussion 
about it, pros and cons, than almost any other book I've seen -- most 
books have clearly delineated and preformed preferences one way or the 
other, so the discussion is rarely as balanced as that;). 


 least could) have declarative static typing with very few type 
 *checking* - I may be wrong here but I think one could even write a C
 compiler without *any* type checking. Heck, the programmer said it's a
 char*, so it must be one, right ?-)

That compiler, I believe, would violate the ISO standard for C (so
calling it a C compiler would be about as correct as calling it a
banana, in my view:-).

 wrt/ proofs of correctness, I'll just point to the spectacular failure
 of Ariane, which was caused by a *runtime* type error in a system 

I like a quote by Knuth -- beware this program may have bugs as I have
only proven it and not tested it:-)

 Hmmm... For a dinosaur, C seems very much alive. Can you remind me which 

So do chickens.


Alex


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Bruno Desthuilliers
Alex Martelli a écrit :
 Bruno Desthuilliers [EMAIL PROTECTED] wrote:
...
 
I still maintain that the primary *practical* reason behind static 
typing is to provide optimization clues for the compiler. You can (or at
 
 
 It's definitely a helpful aspect, yes -- given that compilers able to
 infer types are still not very common for widely used languages;

or given that languages using a type-inference-based system are not 
very widely used...

(snip)

least could) have declarative static typing with very few type 
*checking* - I may be wrong here but I think one could even write a C
compiler without *any* type checking. Heck, the programmer said it's a
char*, so it must be one, right ?-)
 
 
 That compiler, I believe, would violate the ISO standard for C (so
 calling it a C compiler would be about as correct as calling it a
 banana, in my view:-).

Lol. That being said, there were bananas - oops, I meant C compilers - 
before the standard existed !-)

 
wrt/ proofs of correctness, I'll just point to the spectacular failure
of Ariane, which was caused by a *runtime* type error in a system 
 
 
 I like a quote by Knuth -- beware this program may have bugs as I have
 only proven it and not tested it:-)

Yeps.

 
Hmmm... For a dinosaur, C seems very much alive. Can you remind me which 
  
 So do chickens.
 
I'm afraid I didn't get the joke... Are you saying that C is a rather, 
well, primitive language ?


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Diez B. Roggisch
 
 wrt/ proofs of correctness, I'll just point to the spectacular failure 
 of Ariane, which was caused by a *runtime* type error in a system 
 programmed in ADA - one of the languages with the most rigid 
 declarative static type systems.

That's simply not true. The problem had nothing to do with typing - it was an 
overflow due to larger input quantities. It had been shown for the 
Ariane 4 that this overrun could never happen - by whatever technique, 
operational or denotational semantics, I don't know.

For the ariane 5, these input constraints didn't hold, which caused the 
unexpected runtime error to occur.

But there is _no_ type error here! The programmers might have chosen a 
different type with a larger range, sure. But to say that's a type error 
is like saying that using a string where a tuple of int and string would 
have been better is a type error. It's just a different implementation choice.

Or from a testing angle: if the unit tests did only produce input values 
to a certain quantity, the results were ok. Nobody thought about writing 
new tests with larger quantities - the same way nobody thought about 
proving the program with those same quantities.

Diez


Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Bruno Desthuilliers
Diez B. Roggisch a écrit :

 wrt/ proofs of correctness, I'll just point to the spectacular failure 
 of Ariane, which was caused by a *runtime* type error in a system 
 programmed in ADA - one of the languages with the most rigid 
 declarative static type systems.
 
 
  That's simply not true. The problem had nothing to do with typing - it was an 
  overflow due to larger input quantities.

cf below.

 It had been shown for the 
 Ariane 4 that this overrun could never happen - by whatever technique, 
 operational or denotational semantics, I don't know.

IIRC it was in the specs.

 For the ariane 5, these input constraints didn't hold, which caused the 
 unexpected runtime error to occur.


Because of the different flight path, a data conversion from a 64-bit 
floating point to 16-bit signed integer value caused a hardware 
exception (more specifically, an arithmetic overflow, as the floating 
point number had a value too large to be represented by a 16-bit signed 
integer).


As far as I'm concerned, it surely qualifies as a runtime type error - 
data conversion from a floating point to a 16-bit signed int should 
not be allowed when unsafe, isn't it ?


Efficiency considerations had led to the disabling of the software 
handler (in Ada code) for this error trap.


Which implies that even in ADA, runtime type errors are in fact expected 
- else there would be no handler for such a case.

The root of the problem is - of course - not technical but human. As you 
said, no one cared to write the appropriate unit tests to ensure this 
module - specified for Ariane 4 - could be reused 'as is' for Ariane 5. 
But what, how could an ADA module not be correct if it compiles - the 
ADA type system is here to prove the absence of certain bugs, isn't it ?-)



Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Paddy
On Jun 30, 8:30 pm, Bruno Desthuilliers
[EMAIL PROTECTED] wrote:
 Paul Rubin a écrit :

  Bruno Desthuilliers [EMAIL PROTECTED] writes:

 [A type system is a] tractable syntactic method for proving the
 absence of certain program behaviors by classifying phrases
 according to the kinds of values they compute. (Pierce 2002).

 Is this supposed to contradict my assertion that *static* typing is
 for compilers ?

  Yes, the main benefit these days is to prove the absence of certain
  types of bugs in the program.

  As someone said, if declaring types thrice were enough to ensure
 correctness, I'd happily do so.

 I still maintain that the primary *practical* reason behind static
 typing is to provide optimization clues for the compiler.

The primary *practical* reason for static typing is that it allows
large software companies to convince customers that they have *the*
method to allow them to apply divide & conquer techniques to solve
their problems with a large team of programmers and justify
large fees.

How many of those cancelled UK government software projects that
cost me hundreds of millions of pounds were programmed by people
espousing the greater safety of their static typing?

- Paddy.




Re: PEP 3107 and stronger typing (note: probably a newbie question)

2007-06-30 Thread Alex Martelli
Bruno Desthuilliers [EMAIL PROTECTED] wrote:
   ...
 Hmmm... For a dinosaur, C seems very much alive. Can you remind me which
   
  So do chickens.
  
 I'm afraid I didn't get the joke... Are you saying that C is a rather,
 well, primitive language ?

The joke's just based on the fact that (based on DNA analysis) birds are
said to be dinosaurs, and are definitely well alive:-).


Alex

