Re: is laziness a programer's virtue?

2007-04-16 Thread Torben Ægidius Mogensen
Dan Bensen [EMAIL PROTECTED] writes:

 Xah Lee wrote:
 Laziness, Perl, and Larry Wall
 When the sorcerer Larry Wall said “The three chief virtues of a
 programmer are: Laziness, Impatience and Hubris”, he used the word
 “laziness” to loosely imply “natural disposition that results in being
 economic”.

 Programming by definition is the process of automating repetitive
 actions to reduce the human effort required to perform them.  A good
 programmer faced with a hard problem always looks for ways to make
 his|her job easier by delegating work to a computer.  That's what
 Larry means.  Automation is MUCH more effective than repetition.

Indeed.  A programmer is someone who, after doing similar tasks by
hand a few times, writes a program to do them.  This extends to
programming tasks, so after writing similar programs a few times, a
(good) programmer will use programming to make writing future similar
programs easier.  This can be by abstracting the essence of the task
into library functions, so that new programs are just sequences of
parameterized calls to these; by writing a program generator (such as
a parser generator); or by designing a domain-specific language and
writing a compiler or interpreter for it.
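
For illustration, here is a minimal Haskell sketch (the names are
invented) of the first approach: the essence of a repeated task is
captured in one parameterized library function, so each new "program"
is just a call with different parameters.

    -- A tiny report generator: the repeated task (formatting a table
    -- of labelled numbers) is abstracted into one function.
    report :: String -> [(String, Double)] -> String
    report title rows = unlines (title : map line rows)
      where line (label, value) = label ++ ": " ++ show value

    -- New "programs" are now just parameterized calls:
    main :: IO ()
    main = do
      putStr (report "Sales"    [("Q1", 10.5), ("Q2", 12.0)])
      putStr (report "Expenses" [("Q1",  8.2), ("Q2",  9.1)])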

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: is laziness a programer's virtue?

2007-04-16 Thread Torben Ægidius Mogensen
[EMAIL PROTECTED] (Rob Warnock) writes:

 Daniel Gee [EMAIL PROTECTED] wrote:
 +---
 | You fail to understand the difference between passive laziness and
 | active laziness. Passive laziness is what most people have. It's
 | active laziness that is the virtue. It's the desire to go out and /
 | make sure/ that you can be lazy in the future by spending just a
 | little time writing a script now. It's thinking about time
 | economically and acting on it.
 +---

 Indeed. See Robert A. Heinlein's short story (well, actually just
 a short section of his novel "Time Enough For Love: The Lives of
 Lazarus Long") entitled "The Tale of the Man Who Was Too Lazy To
 Fail". It's about a man who hated work so much that he worked
 very, *very* hard so he wouldn't have to do any (and succeeded).

You can also argue that the essence of progress is someone saying
"Hey, there must be an easier way to do this!".

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-21 Thread Torben Ægidius Mogensen
Rob Thorpe [EMAIL PROTECTED] writes:

 Andreas Rossberg wrote:

  No, variables are insignificant in this context. You can consider a
  language without variables at all (such languages exist, and they can
  even be Turing-complete) and still have evaluation, values, and a
  non-trivial type system.
 
 Hmm.  You're right, ML is nowhere in my definition since it has no
 variables.

That's not true.  ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times.  What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).
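
A small Haskell analogue (a sketch, not SML) of that distinction:
bindings are immutable, and mutation goes through an explicit
reference type.

    import Data.IORef

    main :: IO ()
    main = do
      let x = 42          -- an immutable binding: x never changes
      r <- newIORef 0     -- a mutable memory location
      writeIORef r x      -- the location can be updated...
      modifyIORef r (+ 1)
      v <- readIORef r
      print (x, v)        -- ...but the binding x is still 42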

What Andreas was alluding to was presumably FP-style languages where
functions or relations are built by composing functions or relations
without ever naming values.

Torben
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-19 Thread Torben Ægidius Mogensen
Raffael Cavallaro [EMAIL PROTECTED] writes:

 This is purely a matter of programming style. For explorative
 programming it is easier to start with dynamic typing and add static
 guarantees later rather than having to make decisions about
 representation and have stubs for everything right from the start.

I think you are confusing static typing with having to write types
everywhere.  With type inference, you only have to write a minimum of
type information (such as datatype declarations), so you can easily do
explorative programming in such languages -- I don't see any advantage
of dynamic typing in this respect.
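
As a small Haskell sketch of what "a minimum of type information"
means in practice: one datatype declaration is written down, and the
function types are inferred.

    -- The only type information written by hand is the datatype.
    data Shape = Circle Double | Rect Double Double

    -- No signature needed: inferred as  area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    -- Inferred as  total :: [Shape] -> Double
    total shapes = sum (map area shapes)

    main :: IO ()
    main = print (total [Circle 1.0, Rect 2.0 3.0])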

 The
 lisp programming style is arguably all about using heterogenous lists
 and forward references in the repl for everything until you know what
 it is that you are doing, then choosing a more appropriate
 representation and filling in forward references once the program
 gels. Having to choose representation right from the start and needing
 working versions (even if only stubs) of *every* function you call may
 ensure type correctness, but many programmers find that it also
 ensures that you never find the right program to code in the first
 place.

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function.  So you can't really do much exploration until
you have some definitions.

I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.
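
For instance (a Haskell sketch, assuming GHC), a one-line stub is
enough to keep exploring: the whole program still type-checks, and
every call to the stub is checked against its declared type.

    data Expr = Num Int | Add Expr Expr

    parse :: String -> Expr
    parse = undefined        -- stub; running it would fail here

    eval :: Expr -> Int
    eval (Num n)   = n
    eval (Add a b) = eval a + eval b

    main :: IO ()
    main = print (eval (Add (Num 1) (Num 2)))  -- explore eval without parse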

 This is because you don't have the freedom to explore possible
 solutions without having to break your concentration to satisfy the
 nagging of a static type checker.

I tend to disagree.  I have programmed a great deal in Lisp, Scheme,
Prolog (all dynamically typed) and SML and Haskell (both statically
typed).  And I don't find that I need more stubs etc. in the latter.
In fact, I do a lot of explorative programming when writing programs
in ML and Haskell.  And I find type inference very helpful in this, as
it guides the direction of the exploration, so it is more like a
safari with a local guide than a blindfolded walk in the jungle.

Torben
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-19 Thread Torben Ægidius Mogensen
Rob Thorpe [EMAIL PROTECTED] writes:

 Torben Ægidius Mogensen wrote:
  Rob Thorpe [EMAIL PROTECTED] writes:
 
   Torben Ægidius Mogensen wrote:
 
  Indeed.  So use a language with type inference.
 
 Well, for most purposes that's the same as dynamic typing since the
 compiler doesn't require you to label the type of your variables.

That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile time that
guarantees against run-time type violations.  Dynamic typing means
that such violations are detected at run-time.  This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all.  The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value.  ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.

Anyway, type inference for statically typed languages doesn't make
them any more dynamically typed.  It just moves the burden of
assigning the types from the programmer to the compiler.  And (for HM
type systems) the compiler doesn't guess at a type -- it finds the
unique most general type from which all other legal types (within the
type system) can be found by instantiation.
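
A classic example of "the unique most general type", sketched in
Haskell:

    -- With no annotation, Hindley-Milner inference does not guess a
    -- monomorphic type; it finds the principal (most general) type:
    --   compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)

    -- Every legal use is an instantiation of that principal type:
    main :: IO ()
    main = do
      print (compose show (+ 1) (41 :: Int))  -- b = Int, c = String
      print (compose length words "a b c")    -- b = [String], c = Int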

  I
 occasionally use CMUCL and SBCL which do type inference, which is
 useful at improving generated code quality.  It also can warn the
 programmer if they reuse a variable in a context implying that
 it's a different type, which is useful.
 
 I see type inference as an optimization of dynamic typing rather than a
 generalization of static typing.  But I suppose you can see it that way
 around.

Some compilers for dynamically typed languages will do a type analysis
similar to type inference, but they will happily compile a program
even if they can't guarantee static type safety.

Such type inference can be seen as an optimisation of dynamic
typing, as it allows the compiler to omit _some_ of the runtime type
checks.  I prefer the term "soft typing" for this, though, so as not
to confuse it with static type inference.

Soft typing can give feedback similar to that of type inference in
terms of identifying potential problem spots, so in that respect it is
similar to static type inference, and you might get similar fast code
development.  You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.
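
A Haskell sketch of that last point: one inferred polymorphic,
class-constrained type where a soft typer would enumerate monomorphic
instances.

    -- Inferred once, with a class constraint:
    --   sumAll :: Num a => [a] -> a
    -- A soft typer would instead discover a set of monomorphic
    -- instances such as [Int] -> Int and [Double] -> Double.
    sumAll xs = foldr (+) 0 xs

    main :: IO ()
    main = do
      print (sumAll [1, 2, 3 :: Int])
      print (sumAll [1.5, 2.5 :: Double])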

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-19 Thread Torben Ægidius Mogensen
George Neuner gneuner2/@comcast.net writes:

 On 19 Jun 2006 10:19:05 +0200, [EMAIL PROTECTED] (Torben Ægidius
 Mogensen) wrote:

 I expect a lot of the exploration you do with incomplete programs
 amounts to the feedback you get from type inference.
 
 The ability to write functions and test them immediately without
 writing a lot of supporting code is _far_ more useful to me than type
 inference.  

I can't see what this has to do with static/dynamic typing.  You can
test individual functions in isolation in statically typed languages
too.
 
 I'm not going to weigh in on the static v dynamic argument ... I think
 both approaches have their place.  I am, however, going to ask what
 information you think type inference can provide that substitutes for
 algorithm or data structure exploration.

None.  But it can provide a framework for both and catch some types of
mistakes early.

Torben
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-16 Thread Torben Ægidius Mogensen
Raffael Cavallaro [EMAIL PROTECTED] writes:

 On 2006-06-14 16:36:52 -0400, Pascal Bourguignon [EMAIL PROTECTED] said:
 
  In lisp, all lists are homogenous: lists of T.
 
 CL-USER 123  (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
 (type-of elt))
 (CHARACTER FIXNUM DOUBLE-FLOAT RATIO)
 
 i.e., heterogenous in the common lisp sense: having different
 dynamic types, not in the H-M sense in which all lisp values are of
 the single union type T.

What's the difference?  Dynamically typed values _are_ all members of
a single tagged union type.  The main difference is that the tags
aren't always visible and that there is only a fixed, predefined
number of them.
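
The point can be made concrete with a Haskell sketch of such a tagged
union (the constructor names are invented):

    -- A "dynamic" value: one fixed, predefined set of tags.
    data Value = VChar Char | VInt Integer | VDouble Double
               | VRatio Rational

    -- The Lisp list (#\c 1 2.0d0 2/3) becomes a perfectly homogeneous
    -- list of this single union type:
    values :: [Value]
    values = [VChar 'c', VInt 1, VDouble 2.0, VRatio (2 / 3)]

    tagOf :: Value -> String
    tagOf (VChar _)   = "CHARACTER"
    tagOf (VInt _)    = "INTEGER"
    tagOf (VDouble _) = "DOUBLE-FLOAT"
    tagOf (VRatio _)  = "RATIO"

    main :: IO ()
    main = print (map tagOf values)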

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-16 Thread Torben Ægidius Mogensen
Pascal Costanza [EMAIL PROTECTED] writes:

 Torben Ægidius Mogensen wrote:
 
  On a similar note, is a statically typed language more or less
  expressive than a dynamically typed language?  Some would say less, as
  you can write programs in a dynamically typed language that you can't
  compile in a statically typed language (without a lot of encoding),
  whereas the converse isn't true.
 
 It's important to get the levels right here: A programming language
 with a rich static type system is more expressive at the type level,
 but less expressive at the base level (for some useful notion of
 expressiveness ;).
 
  However, I think this is misleading,
  as it ignores the feedback issue: It takes longer for the average
  programmer to get the program working in the dynamically typed
  language.
 
 This doesn't seem to capture what I hear from Haskell programmers who
 say that it typically takes quite a while to convince the Haskell
 compiler to accept their programs. (They perceive this to be
 worthwhile because of some benefits wrt correctness they claim to get
 in return.)

That's the point: Bugs that in dynamically typed languages would
require testing to find are found by the compiler in a statically
typed language.  So while it may take longer to get a program that
gets past the compiler, it takes less time to get a program that
works.
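
A trivial Haskell sketch of the kind of bug that moves from a test run
to a compile-time error:

    data User = User { name :: String, age :: Int }

    greet :: User -> String
    greet u = "Hello, " ++ name u

    main :: IO ()
    main = putStrLn (greet (User "Ada" 36))

    -- The misuse below is rejected at compile time; in a dynamically
    -- typed language it would only surface when this code path runs:
    --
    --   bad = greet "Ada"    -- type error: String is not a User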

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-16 Thread Torben Ægidius Mogensen
Pascal Costanza [EMAIL PROTECTED] writes:

 Torben Ægidius Mogensen wrote:

  So while it may take longer to get a program that gets
  past the compiler, it takes less time to get a program that works.
 
 That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
 especially Figure 3.

There are many differences between these languages other than static
vs. dynamic types, and some of these differences are likely to be more
significant.  What you need to test is languages with similar features
and syntax, except one is statically typed and the other dynamically
typed.

And since these languages would be quite similar, you can use the same
test subjects: First let one half solve a problem in the statically
typed language and the other half the same problem in the dynamically
typed language, then swap for the next problem.  If you let a dozen
subjects each solve half a dozen problems, half in the statically typed
language and half in the dynamically typed language (using different
splits for each problem), you might get a useful figure.

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-16 Thread Torben Ægidius Mogensen
Rob Thorpe [EMAIL PROTECTED] writes:

 Torben Ægidius Mogensen wrote:

  That's the point: Bugs that in dynamically typed languages would
  require testing to find are found by the compiler in a statically
  typed language.  So while it may take longer to get a program that
  gets past the compiler, it takes less time to get a program that
  works.
 
 In my experience the opposite is true for many programs.
 Having to actually know the precise type of every variable while
 writing the program is not necessary, it's a detail often not relevant
 to the core problem. The language should be able to take care of
 itself.
 
 In complex routines it can be useful for the programmer to give types
 and for the compiler to issue errors when they are contradicted.  But
 for most routines it's just an unnecessary chore that the compiler
 forces on the programmer.

Indeed.  So use a language with type inference.

Torben

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-14 Thread Torben Ægidius Mogensen
Joe Marshall [EMAIL PROTECTED] writes:


  "On the Expressive Power of Programming Languages", by Matthias
  Felleisen, 1990.
  http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
 
 The gist of the paper is this: Some computer languages seem to be
 `more expressive' than others.  But anything that can be computed in
 one Turing complete language can be computed in any other Turing
 complete language.  Clearly the notion of expressiveness isn't
 concerned with ultimately computing the answer.

 Felleisen's paper puts forth a formal definition of expressiveness
 in terms of semantic equivalences of small, local constructs.  In
 his definition, wholesale program transformation is disallowed, so
 you cannot appeal to Turing completeness to claim program
 equivalence.

I think expressiveness is more subtle than this.  Basically, it boils
down to: "How quickly can I write a program to solve my problem?"

There are several aspects relevant to this issue, some of which are:

 - Compactness: How much do I have to type to do what I want?

 - Naturality: How much effort does it take to convert the concepts of
   my problem into the concepts of the language?

 - Feedback: Will the language provide sensible feedback when I write
   nonsensical things?

 - Reuse: How much effort does it take to reuse/change code to solve a
   similar problem?

Compactness is hard to measure.  It isn't really about the number of
characters needed in a program, as I don't think one-character symbols
instead of longer keywords make a language more expressive.  It is
better to count lexical units, but if there are too many different
predefined keywords and operators, this isn't reasonable either.
Also, the presence of opaque one-liners doesn't make a language more
expressive.
(TC) allows you to implement any TC language in any other, so above a
certain size, the choice of language doesn't affect size.  But
something like (number of symbols in program)/log(number of different
symbols) is not too bad.  If programs are allowed to use standard
libraries, the identifiers in the libraries should be counted in the
number of different symbols.
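
A rough Haskell sketch of that measure (the tokenizer is deliberately
naive -- 'words' stands in for a real lexer -- and is only meant to
illustrate the formula):

    import Data.List (nub)

    -- compactness = (number of symbols) / log (number of different symbols)
    compactness :: String -> Double
    compactness program = fromIntegral total / log (fromIntegral distinct)
      where
        symbols  = words program
        total    = length symbols
        distinct = length (nub symbols)

    main :: IO ()
    main = print (compactness "map f ( filter p xs )")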

Naturality is very difficult to get a grip on, and it strongly depends
on the type of problem you want to solve.  So it only makes sense to
talk about expressiveness relative to a set of problem domains.  If
this set is small, domain-specific languages win hands down, so if you
want to compare expressiveness of general-purpose languages, you need
a large set of very different problems.  And even with a single
problem, it is hard to get an objective measure of how difficult it is
to map the problem's concepts to those of the language.  But you can
normally observe whether you need to overspecify the concept (i.e.,
you are required to make arbitrary decisions when mapping from concept
to data), whether the mapping is onto (i.e., whether you can construct
data that isn't sensible in the problem domain), and how much
redundancy your representation has.

Feedback is a mixture of several things.  Partly, it is related to
naturality, as a close match between problem concepts and language
concepts makes it less likely that you will express nonsense (relative
to the problem domain) that makes sense in the language.  For example,
if you have to code everything as natural numbers, untyped pure lambda
calculus or S-expressions, there is a good chance that you can get
nonsense past the compiler.  Additionally, it is about how difficult
it is to tie an observation about a computed result to a point in the
program.
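
A small Haskell illustration of that point: with everything encoded as
bare integers, nonsense slips past the compiler; with a type that
matches the problem concept, it cannot even be written.

    -- Weekdays as bare Ints: the compiler happily accepts nonsense
    -- such as adding 40 to a day or passing a temperature instead.
    nextDayInt :: Int -> Int
    nextDayInt d = (d + 1) `mod` 7

    -- A dedicated type: the same nonsense is a compile-time error.
    data Day = Mon | Tue | Wed | Thu | Fri | Sat | Sun
      deriving (Show, Enum)

    nextDay :: Day -> Day
    nextDay Sun = Mon
    nextDay d   = succ d

    main :: IO ()
    main = do
      print (nextDayInt (2 + 40))  -- type-checks, but is meaningless
      print (nextDay Sat)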

Measuring reuse depends partly on what is meant by problems being
similar and also on whether you at the time you write the original
code can predict what types of problems you might later want to solve,
i.e., if you can prepare the code for reuse.  Some languages provide
strong mechanisms for reuse (templates, object hierarchies, etc.), but
many of those require that you can predict how the code is going to be
reused.  So, maybe, you should measure how difficult it is to reuse a
piece of code that is _not_ written with reuse in mind.

This reminds me a bit of last year's ICFP contest, where part of the
problem was adapting to a change in specification after the code was
written.

 Expressiveness isn't necessarily a good thing.  For instance, in C,
 you can express the addresses of variables by using pointers.  You
 cannot express the same thing in Java, and most people consider this
 to be a good idea.

I think this is pretty much covered by the above points on naturality
and feedback: Knowing the address of a value or object is an
overspecification unless the address maps back into something in the
problem domain.

On a similar note, is a statically typed language more or less
expressive than a dynamically typed language?  Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically