Re: [Haskell-cafe] Re: FW: Haskell

2008-04-02 Thread Janis Voigtlaender

apfelmus wrote:

Janis Voigtlaender wrote:


Loup Vaillant wrote:


  Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).



As far as names go:

 for type classes, of course Wadler, but also Blott and Kaes.

 for higher order types, well, where to start?



Girard and Reynolds?


Yes, that's the obvious suspects, of course. But I'm not sure I would
say they brought polymorphism (assuming that's what is meant by higher
order types) to Haskell. Not in the same way Wadler and co. brought
type classes, quite specifically, to Haskell.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: FW: Haskell

2008-04-02 Thread Loup Vaillant
2008/4/2, Janis Voigtlaender [EMAIL PROTECTED]:
 apfelmus wrote:

  Janis Voigtlaender wrote:
 
   Loup Vaillant wrote:
 Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).
  
   As far as names go:
  
    for type classes, of course Wadler, but also Blott and Kaes.
  
    for higher order types, well, where to start?
 
  Girard and Reynolds?

  Yes, that's the obvious suspects, of course. But I'm not sure I would
  say they brought polymorphism (assuming that's what is meant by higher
  order types) to Haskell. Not in the same way Wadler and co. brought
  type classes, quite specifically, to Haskell.

By higher order types, I meant the type of runST (ST monad),
or dpSwitch (in Yampa). I meant things like
(forall a. a -> b) -> a -> b, and maybe existential types as well.
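A minimal GHC sketch of such a type (the function name is illustrative, not from the thread; assumes the RankNTypes extension):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f must itself be polymorphic: it is used at
-- both Int and Bool inside the body, which rank-1 Hindley-Milner
-- polymorphism cannot express.
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 3, f True)
```

Only a genuinely polymorphic function such as id can be passed; something like `applyToBoth (+ 1)` is rejected by the type checker.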

But now you mention it, I don't know who came up with mere parametric
polymorphism. (Hindley, Damas, Milner?)

Anyway, thanks for the names (now I have more papers to swallow :-).
Loup


Re: [Haskell-cafe] Re: FW: Haskell

2008-04-02 Thread Janis Voigtlaender

Loup Vaillant wrote:

By higher order types, I meant the type of runST (ST monad),
or dpSwitch (in Yampa). I meant things like
(forall a. a -> b) -> a -> b


That's then usually called higher-rank polymorphic types, just in case
you need more keywords for literature search ;-)
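For the runST case specifically, the rank-2 quantifier over s is exactly what keeps the mutable state from leaking out; a small sketch (helper name illustrative):

```haskell
import Control.Monad.ST
import Data.STRef

-- runST :: (forall s. ST s a) -> a
-- The universally quantified s prevents any STRef s from
-- escaping the computation, so sumST is observably pure
-- despite using mutation internally.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x)) xs
  readSTRef acc
```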

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Re: FW: Haskell

2008-04-02 Thread Lennart Augustsson
Mark Jones brought higher order polymorphism to Haskell.
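That is, constructor classes, where the class variable ranges over type constructors rather than plain types; a small sketch (class and method names illustrative):

```haskell
-- Here f has kind * -> *: the class abstracts over a type
-- constructor, not an ordinary type. This is the mechanism
-- behind Functor, Monad, and friends.
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)
```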

On Wed, Apr 2, 2008 at 8:08 AM, Janis Voigtlaender 
[EMAIL PROTECTED] wrote:

 apfelmus wrote:

  Janis Voigtlaender wrote:
 
   Loup Vaillant wrote:
  
 Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).
   
  
  
   As far as names go:
  
    for type classes, of course Wadler, but also Blott and Kaes.
  
    for higher order types, well, where to start?
  
 
 
  Girard and Reynolds?
 

 Yes, that's the obvious suspects, of course. But I'm not sure I would
 say they brought polymorphism (assuming that's what is meant by higher
 order types) to Haskell. Not in the same way Wadler and co. brought
 type classes, quite specifically, to Haskell.

 --
 Dr. Janis Voigtlaender
 http://wwwtcs.inf.tu-dresden.de/~voigt/
 mailto:[EMAIL PROTECTED]



[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Chris Smith
Just random thoughts here.

Andrew Bagdanov wrote:
 Well, if I don't have side effects (and don't mind extra, unneeded
 evaluations), I can write my conditionals as functions in Scheme too.
 Heck, now that I think of it I can even avoid those extra evaluations
 and side-effect woes if i require promises for each branch of the
 conditional.  No macros required...

This is essentially doing lazy evaluation in Scheme.  It's certainly 
possible; just clumsy.  You must explicitly say where to force 
evaluation; but if you think about it, the run-time system already knows 
when it needs a value.  This is very analogous to having type inference 
instead of explicitly declaring a bunch of types as in Java or C++.
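In Haskell, the conditional-as-function version needs no promises or macros at all, since every argument already arrives unevaluated (function name illustrative):

```haskell
-- A user-defined conditional: only the selected branch is
-- ever evaluated, because arguments are passed as thunks.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e
```

`myIf True 1 (error "boom")` returns 1; the error branch is never forced.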

 Again, I think this is highly problem
 dependent, though I think you win more with lazy evaluation in the long
 run.  Do more experienced Haskellers than me have the opposite
 experience?  I mean, do you ever find yourself forcing strict evaluation
 so frequently that you just wish you could switch on strict evaluation
 as a default for a while?

The first thing I'd say is that Haskell, as a purely functional language 
that's close enough to the pure lambda calculus, has unique normal 
forms.  Furthermore, normal order (and therefore lazy) evaluation is 
guaranteed to be an effective evaluation order for reaching those normal 
forms.  Therefore, forcing strictness can never be needed to get a 
correct answer from a program.  (Applicative order evaluation does not 
have this property.)
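A one-line illustration of that guarantee (name illustrative):

```haskell
-- Under lazy (normal-order-style) evaluation this reaches the
-- normal form 1 for any argument; applicative-order evaluation
-- would try to evaluate the unused argument first and diverge
-- if it has no normal form.
constOne :: a -> Int
constOne _ = 1
```

`constOne undefined` evaluates to 1 under lazy evaluation.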

Therefore, strictness is merely an optimization.  In some cases, it can 
improve execution time (by a constant factor) and memory usage (by a 
lot).  In other cases, it can hurt performance by doing calculations that 
are not needed.  In still more cases, it is an incorrect optimization and 
can actually break the code by causing certain expressions that should 
have an actual value to become undefined (evaluate to bottom).  I've 
certainly seen all three cases.
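The third case can be made concrete with a bang pattern (assumes the BangPatterns extension; names illustrative):

```haskell
{-# LANGUAGE BangPatterns #-}

lazyConst :: Int -> Int
lazyConst _ = 42          -- never touches its argument

strictConst :: Int -> Int
strictConst !x = 42       -- forces the argument before returning
```

`lazyConst undefined` is 42, but `strictConst undefined` is bottom: the added strictness changed the program's meaning, not just its cost.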

There are certainly situations where Haskell uses a lot of strictness 
annotations.  For example, see most of the shootout entries.  In 
practice, though, code isn't written like that.  I have rarely used any 
strictness annotations at all.  Compiling with optimization in GHC is 
usually good enough.  The occasional bang pattern (often when you intend 
to run something in the interpreter) works well enough.
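When an annotation really is warranted, it is usually on an accumulator, and the standard library already packages the common case (sketch):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at each step; with plain foldl
-- this sum would first build a million-thunk chain and only
-- then collapse it.
total :: Int
total = foldl' (+) 0 [1 .. 1000000]
```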

(As an aside, this situation is quite consistent with the general 
worldview of the Haskell language and community.  Given that strictness 
is merely an optimization of laziness, the language itself naturally opts 
for the elegant answer, which is lazy evaluation; and then Simon and 
friends work a hundred times as hard to make up for it in GHC!)

 I think this is possibly the weakest reason to choose Haskell over
 Scheme.  Lispers like the regularity of the syntax of S-expressions, the
 fact that there is just one syntactic form to learn, understand, teach,
 and use.

I am strongly convinced, by the collective experience of a number of 
fields of human endeavor, that noisy syntax gets in the way of 
understanding.  Many people would also say that mathematical notation is 
a bit impenetrable -- capital sigmas in particular seem to scare people 
-- but I honestly think we'd be a good ways back in the advancement of 
mathematical thought if we didn't have such a brief and non-obstructive 
syntax for these things.  Mathematicians are quite irregular.  Sometimes 
they denote that y depends on x by writing y(x); sometimes by writing y_x 
(a subscript); and sometimes by writing y and suppressing x entirely in 
the notation.  These are not arbitrary choices; they are part of how 
human beings communicate with each other, by emphasizing some things, and 
suppressing others.  If one is to truly believe that computer programs 
are for human consumption, then striving for regularity in syntax doesn't 
seem consistent.

Initially, syntax appears to be on a completely different level from all 
the deep semantic differences; but they are in reality deeply 
interconnected.  The earlier comment I made about it being clumsy to do 
lazy programming in Scheme was precisely that the syntax is too noisy.  
Other places where lazy evaluation helps, in particular compositionality, 
could all be simulated in Scheme, but you'd have to introduce excessive 
syntax.  The result of type inference is also a quieter expression of 
code.  So if concise syntax is not desirable, then one may as well throw 
out laziness and type inference as well.  Also, sections and currying.  
Also, do notation.  And so on.
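One concrete instance of that compositionality point (name illustrative):

```haskell
-- This pipeline composes map, dropWhile, and head over an
-- *infinite* list; laziness stops the computation as soon as
-- the answer is known.
firstSquareOver :: Int -> Int
firstSquareOver n = head (dropWhile (<= n) (map (^ 2) [1 ..]))
```

In a strict language the same one-liner would diverge, so the stages would have to be fused by hand or wrapped in explicit delays.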

 In short, I think the original question must be asked in context.  For
 some problems, types are just a natural way to start thinking about
 them.  For others dynamic typing, with _judicious_ use of macros to
 model key aspects, is the most natural approach.

I wouldn't completely rule out, though, the impact of the person solving 
the problem on whether type-based problem solving is a 

[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread apfelmus

Janis Voigtlaender wrote:

Loup Vaillant wrote:

  Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).


As far as names go:

 for type classes, of course Wadler, but also Blott and Kaes.

 for higher order types, well, where to start?


Girard and Reynolds?


Regards,
apfelmus



Re: [Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Andrew Bagdanov
On Tue, Apr 1, 2008 at 5:37 PM, Chris Smith [EMAIL PROTECTED] wrote:
 Just random thoughts here.


Same here...


  Andrew Bagdanov wrote:
   Well, if I don't have side effects (and don't mind extra, unneeded
   evaluations), I can write my conditionals as functions in Scheme too.
   Heck, now that I think of it I can even avoid those extra evaluations
   and side-effect woes if i require promises for each branch of the
   conditional.  No macros required...

  This is essentially doing lazy evaluation in Scheme.  It's certainly
  possible; just clumsy.  You must explicitly say where to force
  evaluation; but if you think about it, the run-time system already knows
  when it needs a value.  This is very analogous to having type inference
  instead of explicitly declaring a bunch of types as in Java or C++.


Boy is it ever clumsy, and I like your analogy too.  But lazy
evaluation semantics typically come with purity, which is also a
fairly heavy burden to foist onto the user...  Certainly not without
benefits, but at times a burden nonetheless...


   Again, I think this is highly problem
   dependent, though I think you win more with lazy evaluation in the long
   run.  Do more experienced Haskellers than me have the opposite
   experience?  I mean, do you ever find yourself forcing strict evaluation
   so frequently that you just wish you could switch on strict evaluation
   as a default for a while?

  The first thing I'd say is that Haskell, as a purely functional language
  that's close enough to the pure lambda calculus, has unique normal
  forms.  Furthermore, normal order (and therefore lazy) evaluation is
  guaranteed to be an effective evaluation order for reaching those normal
  forms.  Therefore, forcing strictness can never be needed to get a
  correct answer from a program.  (Applicative order evaluation does not
  have this property.)


I thought that in a pure functional language any evaluation order was
guaranteed to reduce to normal form.  But then it's been a very, very
long time since I studied the lambda calculus...

  Therefore, strictness is merely an optimization.  In some cases, it can
  improve execution time (by a constant factor) and memory usage (by a
  lot).  In other cases, it can hurt performance by doing calculations that
  are not needed.  In still more cases, it is an incorrect optimization and
  can actually break the code by causing certain expressions that should
  have an actual value to become undefined (evaluate to bottom).  I've
  certainly seen all three cases.

  There are certainly situations where Haskell uses a lot of strictness
  annotations.  For example, see most of the shootout entries.  In
  practice, though, code isn't written like that.  I have rarely used any
  strictness annotations at all.  Compiling with optimization in GHC is
  usually good enough.  The occasional bang pattern (often when you intend
  to run something in the interpreter) works well enough.

  (As an aside, this situation is quite consistent with the general
  worldview of the Haskell language and community.  Given that strictness
  is merely an optimization of laziness, the language itself naturally opts
  for the elegant answer, which is lazy evaluation; and then Simon and
  friends work a hundred times as hard to make up for it in GHC!)


Yeah, I'm actually pretty convinced on the laziness issue.  Lazy
evaluation semantics are a big win in many ways.


   I think this is possibly the weakest reason to choose Haskell over
   Scheme.  Lispers like the regularity of the syntax of S-expressions, the
   fact that there is just one syntactic form to learn, understand, teach,
   and use.

  I am strongly convinced, by the collective experience of a number of
  fields of human endeavor, that noisy syntax gets in the way of
  understanding.  Many people would also say that mathematical notation is
  a bit impenetrable -- capital sigmas in particular seem to scare people
  -- but I honestly think we'd be a good ways back in the advancement of
  mathematical thought if we didn't have such a brief and non-obstructive
  syntax for these things.  Mathematicians are quite irregular.  Sometimes
  they denote that y depends on x by writing y(x); sometimes by writing y_x
  (a subscript); and sometimes by writing y and suppressing x entirely in
  the notation.  These are not arbitrary choices; they are part of how
  human beings communicate with each other, by emphasizing some things, and
  suppressing others.  If one is to truly believe that computer programs
  are for human consumption, then striving for regularity in syntax doesn't
  seem consistent.


All good points, but noisy is certainly in the eye of the beholder.
I'd make a distinction between background and foreground noise.  A
simple, regular syntax offers less background noise.  I don't have to
commit lots of syntactic idioms and special cases to memory to read
and write in that language.  Low background noise in Scheme, and I'm

[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Chris Smith
I've just got a minute, so I'll answer the factual part.

Andrew Bagdanov wrote:
 I thought that in a pure functional language any evaluation order was
 guaranteed to reduce to normal form.  But then it's been a very, very
 long time since I studied the lambda calculus...

If you don't have strong normalization, such as is the case with Haskell, 
then you can look at the language as being a restriction of the pure 
untyped lambda calculus.  In that context, you know that: (a) a given 
expression has at most one normal form, so that *if* you reach a normal 
form, it will always be the right one; and (b) normal order evaluation 
(and therefore lazy evaluation) will get you to that normal form if it 
exists.  Other evaluation strategies may or may not reach the normal 
form, even if the expression does have a normal form.

You may be thinking of typed lambda calculi, which tend to be strongly 
normalizing.  Unlike the case with the untyped lambda calculus, in sound 
typed lambda calculi every (well-typed) term has exactly one normal form, 
and every evaluation strategy reaches it.  However, unrestricted 
recursive types break normalization.  This is not entirely a bad thing, 
since a strongly normalizing language can't be Turing complete.  So 
real-world programming languages tend to provide recursive types and other 
features that break strong normalization.
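The classic demonstration that a single recursive type is enough to bring back divergence (names illustrative; plain Haskell, no extensions):

```haskell
-- A recursive type that lets a value be applied to itself...
newtype Rec a = In { out :: Rec a -> a }

selfApply :: Rec a -> a
selfApply x = out x x

-- ...which encodes the untyped term (\x. x x) (\x. x x):
-- evaluating omega loops forever, even though it is well typed.
omega :: a
omega = selfApply (In selfApply)
```

Thanks to laziness, merely mentioning omega is harmless; `const True omega` is True without omega ever being evaluated.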

I'm sure there are others here who know this a lot better than I.  I'm 
fairly confident everything there is accurate, but I trust someone will 
correct me if that confidence is misplaced.

-- 
Chris Smith
