[Haskell-cafe] FW: Haskell

2008-04-01 Thread Simon Peyton-Jones
Dear Haskell Cafe members

Here's an open-ended question about Haskell vs Scheme.  Don't forget to cc 
Douglas in your replies; he may not be on this list (yet)!

Simon

-Original Message-
From: D. Gregor [mailto:[EMAIL PROTECTED]
Sent: 30 March 2008 07:58
To: Simon Peyton-Jones
Subject: Haskell

Hello,

In your most humble opinion, what's the difference between Haskell and
Scheme?  What does Haskell achieve that Scheme does not?  Is the choice less
to do with the language, and more to do with the compiler?  Haskell is a
pure functional programming language; whereas Scheme is a functional
language, does the word pure set Haskell that much apart from Scheme?  I
enjoy Haskell.  I enjoy reading your papers on parallelism using Haskell.
How can one answer the question--why choose Haskell over Scheme?

Regards,

Douglas


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread Jules Bean

PR Stanley wrote:
Why can't we have 
function application implemented outwardly (inside-out).


No reason we can't.

We could.

We just don't.

People have spent some time thinking and experimenting and have decided 
this way round is more convenient. It's certainly possible to disagree.


Jules
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread jerzy . karczmarczuk
Paul Stanley writes: 


Hi
If
f x = x
and
g y = y
then
f g x
returns an error because f takes only one argument. Why can't we have 
function application implemented outwardly (inside-out)
etc. 

Paul, 


There were already some answers, but it seems that people did not react to
the statement that "f g x" fails. It doesn't, in normal order everything
should go smoothly, "f g 5" returns 5 = (f g) 5 = g 5, unless I am
terribly mistaken...
Where did you see an error?
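
To make that concrete, a quick GHCi session with the definitions from the
original post:

    Prelude> let f x = x
    Prelude> let g y = y
    Prelude> f g 5
    5

f g parses as (f g), which is just g, and applying that to 5 gives 5.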

Jerzy Karczmarczuk 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Function Precedence

2008-04-01 Thread PR Stanley

Hi
If
f x = x
and
g y = y
then
f g x
returns an error because f takes only one argument. Why can't we have 
function application implemented outwardly (inside-out). So

f g x would be applied with
g x first followed by its return value passed to f instead of putting 
g x in brackets.


Cheers,
Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread Jeremy Apthorp
On 01/04/2008, PR Stanley [EMAIL PROTECTED] wrote:
 Hi
  If
  f x = x
  and
  g y = y
  then
  f g x
  returns an error because f takes only one argument. Why can't we have
  function application implemented outwardly (inside-out). So
  f g x would be applied with
  g x first followed by its return value passed to f instead of putting
  g x in brackets.

Think about this:

map (+1) [1..10]

What should it do?

How about:

f 1 2 3

Should that be f (1 (2 3)), or ((f 1) 2) 3?
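
To spell out the left-associative reading with a concrete curried function
(a small illustrative example):

    f :: Int -> Int -> Int -> Int
    f x y z = x + y + z

    -- f 1 2 3 parses as ((f 1) 2) 3:
    --   f 1         :: Int -> Int -> Int
    --   (f 1) 2     :: Int -> Int
    --   ((f 1) 2) 3 == 6
    -- f (1 (2 3)) would instead try to apply the literal 1 to an
    -- argument, which is a type error.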

Jeremy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Bulat Ziganshin
Hello Simon,

Tuesday, April 1, 2008, 2:18:25 PM, you wrote:

 How can one answer the question--why choose Haskell over Scheme?

1. static typing with type inference - imho, a must-have for production
code development. as many haskellers have said, once the compiler accepts
your program, you may be 95% sure that it contains no bugs. just try it!

2. lazy evaluation - reduces complexity of language. in particular,
all control structures are usual functions while in scheme they are
macros (a small sketch follows below)

3. great, terse syntax. actually, the best syntax among several
dozens of languages i know

4. type classes machinery, together with type inference, means that
code for dealing with complex data types (say, serialization) is
generated on the fly and compiled right down to machine code
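
a small sketch of point 2 (names made up just for illustration): because
arguments are evaluated only when needed, an if-like control structure is
just an ordinary function:

    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- only the chosen branch is ever evaluated, so this returns 1:
    --   myIf True 1 (error "never evaluated")

in a strict language such a function would evaluate both branches, which
is why scheme needs a macro (or explicit promises) for the same job.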


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread Janis Voigtlaender

PR Stanley wrote:

Hi
If
f x = x
and
g y = y
then
f g x
returns an error because f takes only one argument. Why can't we have 
function application implemented outwardly (inside-out).


Why should it be so?


So
f g x would be applied with
gx first followed by its return value passed to f instead of putting g x 
in brackets.


You can get the same behavior with

 f . g $ x

if you mislike brackets.
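
For instance, with the definitions from the original post:

  f x = x
  g y = y

  -- f . g $ 5  ==  f (g 5)  ==  5

and the pattern scales: h3 . h2 . h1 $ x applies h1 first, then h2, then h3.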

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: wxHaskell 0.10.3

2008-04-01 Thread Jeremy O'Donoghue
The wxHaskell development team is pleased to announce the release of
wxHaskell 0.10.3, a Haskell binding
for the wxWidgets GUI library.

The Haskell support is built on a reasonably complete C language
binding, which
could be used as the basis for wxWidgets support on other
languages/platforms
which do not have easy mechanisms for linking with C++ code.

The feature set is the same as wxHaskell 0.10.3 rc1 and rc2, with a
number of
additional bugfixes.

This is the first full release since June 2005, and is the result of a
great deal of work by a new team of contributors.

Highlights of 0.10.3 include:

- Support for Unicode builds of wxWidgets
- Support for additional widgets including calendar, toolbar divider,
  styled text control (wxScintilla), media control
- Support for clipboard, drag and drop
- Support for 64bit (Linux) targets
- Support for wxWidgets 2.6.x (support for wxWidgets 2.4.2 if
  you compile from source). wxWidgets 2.8 is not yet supported
- Support for building with GHC 6.6.x and 6.8.x
- Parts of wxHaskell are now built with Cabal
- New test cases
- Removed support for GHC versions < 6.4
- Profiling support
- Smaller generated binary sizes (using --split-objs)

Binary packages are available from the wxHaskell download site at
http://sourceforge.net/project/showfiles.php?group_id=73133, for the
following platforms:

- Debian
- Windows
- OS X (Intel and PPC platforms)
- Source code .tar.gz and .zip
- Documentation (cross-platform)

The wxHaskell libraries (wxcore and wx) are also available from Hackage
(http://hackage.haskell.org).

About wxHaskell
---

wxHaskell is a Haskell binding to the wxWidgets GUI library for recent
versions
of the Glasgow Haskell Compiler. It provides a native look and feel on
Windows,
OS X and Linux, and a medium level programming interface.

The main project page for wxHaskell is at
http://wxhaskell.sourceforge.net.
The latest source code for wxHaskell can always be obtained from
http://darcs.haskell.org/wxhaskell.
There are developer ([EMAIL PROTECTED]) and user
([EMAIL PROTECTED]) mailing lists, and a wiki page
at http://haskell.org/haskellwiki/WxHaskell which can provide more
information to those interested.

wxHaskell was originally created by Daan Leijen. The contributors to
this new release include:

- Eric Kow
- shelarcy
- Arie Middelkoop
- Mads Lindstroem
- Jeremy O'Donoghue
- Lennart Augustson

The C language binding for wxHaskell was derived from an original C
language binding created for the Eiffel programming language by the
ELJ project (http://elj.sourceforge.net).
-- 
  Jeremy O'Donoghue
  [EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Thomas Schilling


On 1 apr 2008, at 13.02, Bulat Ziganshin wrote:


Hello Simon,

Tuesday, April 1, 2008, 2:18:25 PM, you wrote:


How can one answer the question--why choose Haskell over Scheme?


1. static typing with type inference - imho, a must-have for production
code development. as many haskellers have said, once the compiler accepts
your program, you may be 95% sure that it contains no bugs. just try it!

2. lazy evaluation - reduces complexity of language. in particular,
all control structures are usual functions while in scheme they are
macros

3. great, terse syntax. actually, the best syntax among several
dozens of languages i know

4. type classes machinery, together with type inference, means that
code for dealing with complex data types (say, serialization) is
generated on the fly and compiled right down to machine code


3 and 4 are no convincing arguments for a Scheme programmer.  Syntax  
is subjective and there are Scheme implementations that can serialize  
entire continuations (closures), which is not possible in Haskell (at  
least not without GHC-API).


Static typing, though it might sound constraining at first, can be  
liberating!  How so?  Because it allows you to let the type-checker  
work for you!  By choosing the right types for your API, you can  
enforce invariants.  For example you can let the type-checker ensure  
that inputs from a web-application are always quoted properly, before  
using them as output.  A whole class of security problems is taken  
care of forever, because the compiler checks them for you.
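
A minimal sketch of that idea (module and function names are made up for
illustration, not taken from any particular library): a newtype whose
constructor is hidden behind the escaping function, so unescaped user
input can never reach the output layer.

    module Escaped (Escaped, escape, fromEscaped) where

    newtype Escaped = Escaped String

    -- the only way to obtain an Escaped value is to go through escape
    escape :: String -> Escaped
    escape = Escaped . concatMap quoteChar
      where
        quoteChar '<' = "&lt;"
        quoteChar '>' = "&gt;"
        quoteChar '&' = "&amp;"
        quoteChar '"' = "&quot;"
        quoteChar c   = [c]

    -- output functions accept only Escaped, never a raw String
    fromEscaped :: Escaped -> String
    fromEscaped (Escaped s) = s

If the rest of the web code can only emit Escaped values, the compiler
rejects any code path on which raw input could leak into the output.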


If you're used to REPL-based programming, it can be a bit annoying  
that you can't run non-type-checking code, but you get used to it.   
After a while you will miss the safety when you program in Scheme again.


There's more, but I count on others to step in here.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Tomas Andersson
On Tuesday 01 April 2008 12.18.25 Simon Peyton-Jones wrote:

 How can one answer the question--why choose Haskell over Scheme?

 Regards,

 Douglas


As someone who is still a Haskell newbie, these are the main reasons why 
I fell in love with Haskell and prefer it over Scheme/Common Lisp today.

1) Pattern matching
Being able to type for example:
fact 0 = 1
fact n = n * (fact (n - 1))

instead of having to write the conditionals and if/case statements every time 
I write a function is amazing. It makes simple functions _much_ shorter, 
easier to understand and faster to write.

2) Static typing
 Having static typing eliminates tons of bugs at compile time that wouldn't 
show up until runtime in a dynamic language and does it while giving very 
clear error messages.
 And most of the time I don't even have to do anything to get it. The compiler 
figures it out all by itself.


3) Prettier syntax
 Yes, S-expressions are conceptually amazing. Code is data is code, macros, 
backquotes and so on. But I've found that most of the code I write doesn't 
need any of that fancy stuff and is both shorter and clearer in Haskell


4) List comprehension
 I fell in love with it in Python and Haskell's version is even more 
beautiful/powerful. Being able to write in one line an expression that would 
normally require multiple 'map's and 'filter's is something I'll have a hard 
time living without now.
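
 A typical one-liner of the kind I mean (an illustrative example): the
squares of the even numbers up to 20,

 [ x * x | x <- [1..20], even x ]

versus the explicit version,

 map (\x -> x * x) (filter even [1..20])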

 Later I've found even more reasons to prefer Haskell over Scheme, for example 
monads, classes, speed, parallelism, laziness, arrows and Parsec, but the 
list above contains the first things that caught my eye and made me switch.

/Tomas Andersson

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread Loup Vaillant
2008/4/1, Jules Bean [EMAIL PROTECTED]:
 PR Stanley wrote:
   Why can't we have
   function application implemented outwardly (inside-out).

 No reason we can't.

  We could.

  We just don't.

  People have spent some time thinking and experimenting and have decided
  this way round is more convenient. It's certainly possible to disagree.

I bet this time and thinking involved currying. For instance, with:
f :: int -> int -> int
f a b = a + b + 3

Let's explore the two possibilities

(1) f 4 2 = (f 4) 2 -- don't need parentheses
(2) f 4 2 = f (4 2) -- do need parentheses: (f 4) 2

Curried functions are pervasive, so (1) just saves us more brackets
than (2) does.

  f g x
  returns an error because f takes only one argument.

Do not forget that *every* function takes only one argument. The trick
is that the result may also be a function. Therefore,

f g 5 = id id 5 = (id id) 5 = id 5 = 5
indeed does run smoothly (just checked in the OCaml toplevel, thanks to
Jerzy for pointing this out).

Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Loup Vaillant
2008/4/1, Bulat Ziganshin [EMAIL PROTECTED]:
 Hello Simon,


  Tuesday, April 1, 2008, 2:18:25 PM, you wrote:

   How can one answer the question--why choose Haskell over Scheme?


 1. static typing with type inference - imho, a must-have for production
  code development. as many haskellers have said, once the compiler accepts
  your program, you may be 95% sure that it contains no bugs. just try it!

  2. lazy evaluation - reduces complexity of language. in particular,
  all control structures are usual functions while in scheme they are
  macros

  3. great, terse syntax. actually, the best syntax among several
  dozens of languages i know

  4. type classes machinery, together with type inference, means that
  code for dealing with complex data types (say, serialization) is
  generated on the fly and compiled right down to machine code

In my opinion, (1) and (3), as they are stated, are a bit dangerous if
you want to convince a lisper: they represent two long standing
religious wars.

  About (3), I see a trade-off: a rich syntax is great as long as we
don't need macros. Thanks to lazy evaluation and monads, we rarely
need macros in Haskell, even when writing DSLs. Sometimes, however, we
do need macros (remember the arrow notation, whose need was at some
time unforeseen).
  I think the only way we could compare the two is to make a
s-expression syntax for Haskell, add macros to it (either hygienic, or
with some kind of gensym), and (re)write some programs in both
syntaxes. I bet it would be very difficult (if not impossible) to
eliminate the trade-off.

  About (1), In most (if not all) dynamic vs static debate, the
dynamic camp argues that the safety brought by a static type system
comes at the price of lost flexibility. If we compare duck-typing and
Hindley-Milner, they are right: heterogeneous collections are at best
clumsy in Hindley-Milner, and overloading is near impossible.
  Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more). These two
features eliminate most (if not all) need for a dynamic type system.

  About (4), I think type classes shine even more on simple and
mundane stuff. No more clumsy "+" for ints and "+." for floats. No
more passing around the compare function. No more "should I return a
Maybe type or throw an exception?" (monads can delay this question).
No more whatever I forgot.
  For more impressive stuff, I think QuickCheck is a great example.
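
  A tiny illustration (with made-up function names): one sum and one sort
for every suitable type, with the dictionaries filled in by the compiler:

  import Data.List (sort)

  -- works for Float, Double, Rational, ... without a separate "+."
  average :: Fractional a => [a] -> a
  average xs = sum xs / fromIntegral (length xs)

  -- no compare function passed around; the Ord instance supplies it
  descending :: Ord a => [a] -> [a]
  descending = reverse . sort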

About (2), I'm clueless. The consequences of lazy evaluation are so
far-reaching I wouldn't dare entering the Lazy vs Strict debate.

Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Janis Voigtlaender

Loup Vaillant wrote:

  Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).


As far as names go:

... for type classes, of course Wadler, but also Blott and Kaes.

... for higher order types, well, where to start?

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread Hans Aberg

On 1 Apr 2008, at 12:40, PR Stanley wrote:
Why can't we have function application implemented outwardly  
(inside-out). So

f g x would be applied with
g x first followed by its return value passed to f instead of  
putting g x in brackets.


It seems to me it may come from an alteration of math conventions:  
Normally (x) = x, and function application is written as f(x), except  
for a few traditional names, like for example sin x. So if one  
reasons that f(x) can be simplified to f x, then f g x becomes short  
for f(g)(x) = (f(g))(x).


It is just a convention. In math, particularly in algebra, one  
sometimes writes "f of x" as "x f" or "(x)f", so that one does not have  
to reverse the order, for example in diagrams.


  Hans Aberg


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Andrew Bagdanov
On Tue, Apr 1, 2008 at 1:02 PM, Bulat Ziganshin
[EMAIL PROTECTED] wrote:
 Hello Simon,


  Tuesday, April 1, 2008, 2:18:25 PM, you wrote:

   How can one answer the question--why choose Haskell over Scheme?


Well as a longtime Scheme and OCaml programmer, and Haskell-cafe
lurker, I'll take a stab at this...

  1. static typing with type inference - imho, a must-have for production
  code development. as many haskellers have said, once the compiler accepts
  your program, you may be 95% sure that it contains no bugs. just try it!


I think this is the biggest, and most obvious, difference to consider
when choosing either Scheme or Haskell over the other -- for a
particular problem.  Dynamic and static typing each have their
advantages, depending on the context.  I think it's dangerous to try
to answer the question "Scheme or Haskell?" without a problem context.

  2. lazy evaluation - reduces complexity of language. in particular,
  all control structures are usual functions while in scheme they are
  macros


Well, if I don't have side effects (and don't mind extra, unneeded
evaluations), I can write my conditionals as functions in Scheme too.
Heck, now that I think of it I can even avoid those extra evaluations
and side-effect woes if i require promises for each branch of the
conditional.  No macros required...

I think some problems are just more naturally modeled with lazy
thinking, and a language with implicit support for lazy evaluation is
a _huge_ win then.  I written plenty of lazy Scheme, and OCaml for
that matter, code where I wished and wished that it just supported
lazy evaluation semantics by default.  Again, I think this is highly
problem dependent, though I think you win more with lazy evaluation in
the long run.  Do more experienced Haskellers than me have the
opposite experience?  I mean, do you ever find yourself forcing strict
evaluation so frequently that you just wish you could switch on strict
evaluation as a default for a while?

  3. great, terse syntax. actually, the best syntax among several
  dozens of languages i know


I think this is possibly the weakest reason to choose Haskell over
Scheme.  Lispers like the regularity of the syntax of S-expressions,
the fact that there is just one syntactic form to learn, understand,
teach, and use.  For myself, I find them to be exactly the right
balance between terseness and expressiveness.  For me, Haskell syntax
can be a bit impenetrable at times unless I squint (and remember I'm
also an OCaml programmer).  Once you get it, though, I agree that
the brevity and expressiveness of Haskell is really beautiful.

  4. type classes machinery, together with type inference, means that
  code for dealing with complex data types (say, serialization) is
  generated on the fly and compiled right down to machine code


This is obviously related to #1, and Haskell sure does provide a lot
of fancy, useful machinery for manipulating types -- machinery whose
functionality is tedious at best to mimic in Scheme, when even
possible.

In short, I think the orginal question must be asked in context.  For
some problems, types are just a natural way to start thinking about
them.  For others dynamic typing, with _judicious_ use of macros to
model key aspects, is the most natural approach.  For the problems
that straddle the fence, I usually pick the language I am most
familiar with (Scheme) if there are any time constraints on solving
it, and the language I'm least familiar with (Haskell, right now) if I
have some breathing room and can afford to learn something in the
process.

Cheers,

-Andy


  --
  Best regards,
   Bulat  mailto:[EMAIL PROTECTED]



  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Loup Vaillant
2008/4/1, Andrew Bagdanov [EMAIL PROTECTED]:

  In short, I think the orginal question must be asked in context.  For
  some problems, types are just a natural way to start thinking about
  them.  For others dynamic typing, with _judicious_ use of macros to
  model key aspects, is the most natural approach.

Do you have any example? I mean, you had to choose between Scheme and
Ocaml, sometimes, right? Ocaml is not Haskell, but maybe the reasons
which influenced your choices would have been similar if you knew
Haskell instead of Ocaml.

Cheers,
Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unescaping with HaXmL (or anything else!)

2008-04-01 Thread Yitzchak Gale
On Fri, Mar 28, 2008 at 4:26 AM, Anton van Straaten wrote:
 I want to unescape an encoded XML or HTML string, e.g. converting &quot;
  to the quote character, etc.
  Since I'm using HaXml anyway, I tried using xmlUnEscapeContent with no
  luck

Hi Anton,

I only noticed your post today, sorry for the delay.

I also need this. In fact, it seems to me that it would be
generally useful. I hope that simple functions to escape/unescape
a string will be added to the API.

In the meantime, you are right that it is a bit tricky
to do this in HaXml. Besides the wrappers that you found
to be needed, there are two other issues:

One issue is that you need to lex and then parse the text first.
If you tell HaXml that your string is a CString, it
will believe you and just use the text the way it is without
any further processing.

The other issue is that HaXml's lexer currently can only
deal with XML content that begins with an XML tag. (I've
pointed this out to Malcolm Wallace, the author of HaXml.)
So in order to use it, you need to wrap your content in a
tag and then unwrap it after parsing.

The code below works for me (obviously it would be better to
remove the error calls):

Regards,
Yitz

import Text.XML.HaXml
import Text.XML.HaXml.Parse (xmlParseWith, document)
import Text.XML.HaXml.Lex (xmlLex)

unEscapeXML :: String -> String
unEscapeXML = concatMap ctext . xmlUnEscapeContent stdXmlEscaper .
  unwrapTag .
  either error id . fst . xmlParseWith document .
  xmlLex "oops, lexer failed" . wrapWithTag "t"
  where
    ctext (CString _ txt _) = txt
    ctext (CRef (RefEntity name) _) = '&' : name ++ ";" -- skipped by escaper
    ctext (CRef (RefChar num) _)    = '&' : '#' : show num ++ ";" -- ditto
    ctext _ = error "oops, can't unescape non-cdata"
    wrapWithTag t s = concat ["<", t, ">", s, "</", t, ">"]
    unwrapTag (Document _ _ (Elem _ _ c) _) = c
    unwrapTag _ = error "oops, not wrapped"
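
For example, assuming the standard escaper covers the usual predefined
entities (which is what stdXmlEscaper provides), something like

  unEscapeXML "&lt;hello &amp; goodbye&gt;"

should give back "<hello & goodbye>".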
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Chris Smith
Just random thoughts here.

Andrew Bagdanov wrote:
 Well, if I don't have side effects (and don't mind extra, unneeded
 evaluations), I can write my conditionals as functions in Scheme too.
 Heck, now that I think of it I can even avoid those extra evaluations
 and side-effect woes if i require promises for each branch of the
 conditional.  No macros required...

This is essentially doing lazy evaluation in Scheme.  It's certainly 
possible; just clumsy.  You must explicitly say where to force 
evaluation; but if you think about it, the run-time system already knows 
when it needs a value.  This is very analogous to having type inference 
instead of explicitly declaring a bunch of types as in Java or C++.

 Again, I think this is highly problem
 dependent, though I think you win more with lazy evaluation in the long
 run.  Do more experienced Haskellers than me have the opposite
 experience?  I mean, do you ever find yourself forcing strict evaluation
 so frequently that you just wish you could switch on strict evaluation
 as a default for a while?

The first thing I'd say is that Haskell, as a purely functional language 
that's close enough to the pure lambda calculus, has unique normal 
forms.  Furthermore, normal order (and therefore lazy) evaluation is 
guaranteed to be an effective evaluation order for reaching those normal 
forms.  Therefore, forcing strictness can never be needed to get a 
correct answer from a program.  (Applicative order evaluation does not 
have this property.)

Therefore, strictness is merely an optimization.  In some cases, it can 
improve execution time (by a constant factor) and memory usage (by a 
lot).  In other cases, it can hurt performance by doing calculations that 
are not needed.  In still more cases, it is an incorrect optimization and 
can actually break the code by causing certain expressions that should 
have an actual value to become undefined (evaluate to bottom).  I've 
certainly seen all three cases.
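
(A tiny illustration of that last case, with made-up names:

    lazyConst :: a -> b -> a
    lazyConst x _ = x

    strictConst :: a -> b -> a
    strictConst x y = y `seq` x

lazyConst 1 undefined evaluates to 1, while strictConst 1 undefined
evaluates to bottom -- the "optimization" changed the meaning.)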

There are certainly situations where Haskell uses a lot of strictness 
annotations.  For example, see most of the shootout entries.  In 
practice, though, code isn't written like that.  I have rarely used any 
strictness annotations at all.  Compiling with optimization in GHC is 
usually good enough.  The occasional bang pattern (often when you intend 
to run something in the interpreter) works well enough.
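
(The classic illustration of an annotation earning its keep is summing a
large list with foldl versus foldl' from Data.List:

    import Data.List (foldl')

    -- builds a long chain of unevaluated thunks; can exhaust memory
    sumLazy :: [Integer] -> Integer
    sumLazy = foldl (+) 0

    -- forces the accumulator at each step; runs in constant space
    sumStrict :: [Integer] -> Integer
    sumStrict = foldl' (+) 0

Both compute the same sum; only the space behaviour differs.)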

(As an aside, this situation is quite consistent with the general 
worldview of the Haskell language and community.  Given that strictness 
is merely an optimization of laziness, the language itself naturally opts 
for the elegant answer, which is lazy evaluation; and then Simon and 
friends work a hundred times as hard to make up for it in GHC!)

 I think this is possibly the weakest reason to choose Haskell over
 Scheme.  Lispers like the regularity of the syntax of S-expressions, the
 fact that there is just one syntactic form to learn, understand, teach,
 and use.

I am strongly convinced, by the collective experience of a number of 
fields of human endeavor, that noisy syntax gets in the way of 
understanding.  Many people would also say that mathematical notation is 
a bit impenetrable -- capital sigmas in particular seem to scare people 
-- but I honestly think we'd be a good ways back in the advancement of 
mathematical thought if we didn't have such a brief and non-obstructive 
syntax for these things.  Mathematicians are quite irregular.  Sometimes 
they denote that y depends on x by writing y(x); sometimes by writing y_x 
(a subscript); and sometimes by writing y and suppressing x entirely in 
the notation.  These are not arbitrary choices; they are part of how 
human beings communicate with each other, by emphasizing some things, and 
suppressing others.  If one is to truly believe that computer programs 
are for human consumption, then striving for regularity in syntax doesn't 
seem consistent.

Initially, syntax appears to be on a completely different level from all 
the deep semantic differences; but they are in reality deeply 
interconnected.  The earlier comment I made about it being clumsy to do 
lazy programming in Scheme was precisely that the syntax is too noisy.  
Other places where lazy evaluation helps, in particular compositionality, 
could all be simulated in Scheme, but you'd have to introduce excessive 
syntax.  The result of type inference is also a quieter expression of 
code.  So if concise syntax is not desirable, then one may as well throw 
out laziness and type inference as well.  Also, sections and currying.  
Also, do notation.  And so on.

 In short, I think the orginal question must be asked in context.  For
 some problems, types are just a natural way to start thinking about
 them.  For others dynamic typing, with _judicious_ use of macros to
 model key aspects, is the most natural approach.

I wouldn't completely rule out, though, the impact of the person solving 
the problem on whether type-based problem solving is a 

Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Andrew Bagdanov
On Tue, Apr 1, 2008 at 4:55 PM, Loup Vaillant [EMAIL PROTECTED] wrote:
 2008/4/1, Andrew Bagdanov [EMAIL PROTECTED]:

 
In short, I think the orginal question must be asked in context.  For
some problems, types are just a natural way to start thinking about
them.  For others dynamic typing, with _judicious_ use of macros to
model key aspects, is the most natural approach.

  Do you have any example? I mean, you had to choose between Scheme and
  Ocaml, sometimes, right? Ocaml is not Haskell, but maybe the reasons
  which influenced your choices would have been similar if you knew
  Haskell instead of Ocaml.


Sure.  This may not be the best example, but it's the most immediate
one for me.  I'll try to be brief and hopefully still clear...  Years
ago I implemented an image processing system based on algorithmic
patterns of IP defined over algebraic pixel types (algebraic in the
ring, field, vector space sense).  Here's a link to the chapter from
my dissertation, for the chronically bored:

 http://www.micc.unifi.it/~bagdanov/thesis/thesis_08.pdf

This was partially motivated by the observation that a lot of image
processing is about _types_ and about getting them _right_.  There's a
complex interplay between the numerical, computational and perceptual
semantics of the data you need to work with.  A functional programming
language with strict typing and type inference seemed ideal for
modeling this.  You get plenty of optimizations for free when
lifting primitive operations to work on images (except OCaml functors
really let me down here), and you don't have to worry about figuring out
what someone means when convolving a greyscale image with a color
image -- unless you've already defined an instantiation of the
convolution on these types that has a meaningful interpretation.
Where "meaningful" is of course up to the implementor.

In retrospect, if I had it all over to do again, I might choose Scheme
over OCaml specifically because of dynamic typing.  Or more flexible
typing, rather.  To completely define a new pixel datatype it is
necessary to define a flotilla of primitive operations on it (e.g.
add, mul, neg, div, dot, abs, mag, ...) but for many higher-level
operations, only a handful were necessary.  For example, for a
standard convolution, mul and add are sufficient.  In cases like this,
I would rather explicitly dispatch at a high level -- in a way that
admits partial implementations of datatypes to still play in the
system.  In retro-retrospect, the structural typing of OCaml objects
could probably do this pretty well too...  Oh well.

This is a case where the resulting system was difficult to use in the
exploratory, experimental way it was intended to be used, in my opinion
because typing got in the way.  Strict typing and type inference were
a huge boon for the design and implementation.  I would consider
Haskell this time around too (I think I did all those years ago too),
as I think lazy evaluation semantics, direct support of monadic style,
and yes, even its terse syntax, could address other aspects of the
domain that are untouched.  I don't have a clear enough understanding
of or experience with Haskell type classes, but my intuition is that
I'd have the same problems with typing as I did with OCaml.

Cheers,

-Andy

  Cheers,
  Loup

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Justin Bailey
On Tue, Apr 1, 2008 at 3:18 AM, Simon Peyton-Jones
[EMAIL PROTECTED] wrote:
 Dear Haskell Cafe members

  Here's an open-ended question about Haskell vs Scheme.  Don't forget to cc 
 Douglas in your replies; he may not be on this list (yet)!

  Simon

No one seems to have pointed out how friendly the Haskell community
is. Not only can you email the language designers (and they respond) -
they'll even help you answer your question! I don't want to encourage
more unsolicited email to Simon, but I'm impressed.

Justin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: all threads are blocked by recvFrom

2008-04-01 Thread Simon Marlow

Vitaliy Akimov wrote:

Hello, I have a problem with building a multithreaded UDP server. If
the main thread is waiting for a new request in recvFrom, all other threads
are blocked too. I've checked every variant with
forkIO, forkOS, -threaded etc, nothing's helped.  After reading the GHC docs
I've understood this happens because the foreign function call from
recvFrom (network library) is marked as unsafe, so its execution
blocks every other thread.  How can I resolve it?


Sorry for the late reply.  This will be fixed in GHC 6.8.3:

http://hackage.haskell.org/trac/ghc/ticket/1129

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Newbie] Problem with Data.Map (or something else ?)

2008-04-01 Thread Bruno Carnazzi
2008/4/1, Chaddaï Fouché [EMAIL PROTECTED]:
 2008/3/31, Bruno Carnazzi [EMAIL PROTECTED]:

 Dear Haskellers,
  
    As a Haskell newbie, I'm learning Haskell by trying to solve Project
    Euler problems (http://projecteuler.net/ ). I'm stuck on problem
    14 (the Collatz problem).
  
I've written the following program... Which does not end in a reasonable 
 time :(
My algorithm seems ok to me but I see that memory consumption is 
 gigantic...
Is this a memory problem with Data.Map ? Or an infinite loop ? (Where ?)
    More generally, how can I troubleshoot this kind of problem?


 Others have pointed out potential sources of memory leaks, but I must say
  that using Data.Map for the cache in the first place appears to me as a
  very bad idea... Data.Map by nature takes much more space than
  necessary. You have an integer index, why not use an array instead?

Because I don't know anything about arrays in Haskell. Thank you for
pointing this, I have to read some more Haskell manuals :)


   import Data.Array
   import Data.List
   import Data.Ord

   syrs n = a
       where a = listArray (1,n) $ 0 : [ syr n x | x <- [2..n] ]
             syr n x = if x' <= n then a ! x' else 1 + syr n x'
                 where x' = if even x then x `div` 2 else 3 * x + 1

   main = print $ maximumBy (comparing snd) $ assocs $ syrs 100


The logic and the complexity of this algorithm are comparable to mine,
but the performance difference is huge, which is not very intuitive to
me (is there no 1+1+1+1+1... problem with the array?).

  This solution takes 2 seconds (on my machine) to resolve the problem.

  On the other hand, now that I have read your solution, I see that
  using Map was the least of the problem... All those Map.map, while
  retaining the original Map... Your solution is too clever (twisted)
  for its own good, I suggest you aim for simplicity next time.


  --
  Jedaï


Thank you,

Bruno.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] function type def

2008-04-01 Thread PR Stanley

HI
It's one of those things - I know sort of instinctively why it is so 
but can't think of the formal rationale for it:

f g x = g (g x) :: (t -> t) -> (t -> t)
Why not
(t -> t) -> t -> (t -> t)
to take account of the argument x for g?
Cheers
Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] function type def

2008-04-01 Thread Ketil Malde
PR Stanley [EMAIL PROTECTED] writes:

 It's one of those things - I know sort of instinctively why it is so
 but can't think of the formal rationale for it:

 f g x = g (g x) :: (t -> t) -> (t -> t)

(t -> t) -> (t -> t)

So
   g :: t -> t
   x :: t
Thus
   f :: (t -> t) -> t -> t

(The last parenthesis is not necessary, but implies that the type of
the partial application  f g  is a function  t -> t .)
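
A quick check in GHCi reports much the same thing:

  Prelude> let f g x = g (g x)
  Prelude> :t f
  f :: (t -> t) -> t -> t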

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] function type def

2008-04-01 Thread jerzy . karczmarczuk
PR Stanley: 

I know sort of instinctively why it is so but 
can't think of the formal rationale for it:

f g x = g (g x) :: (t -> t) -> (t -> t)


First of all - it is not the definition f g x = ... :: (t -> ...
but the type of the function, which might be specified:
f :: (t->t)->t->t


Then, the answer to:

Why not
(t -> t) -> t -> (t -> t)
to take account of the argument x for g?


is simple. If t is the type of x, then g must be g :: t->t, you're right.
So f :: (t->t) -> t -> [the type of the result]
But this result is of the type t, it is g(g x), not (t->t), it is as
simple as that. Perhaps you didn't recognize that -> is syntactically
a right-associative op, so
a->b->c   is equivalent to a->(b->c), or
(t->t)->t->t equiv. to  (t->t)->(t->t)



Jerzy Karczmarczuk 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [GSoC] Student applications deadline extended one week

2008-04-01 Thread Manlio Perillo

Adam Langley wrote:

On Mon, Mar 31, 2008 at 12:00 PM, Manlio Perillo
[EMAIL PROTECTED] wrote:

 Since Nginx is asynchronous, how can the producer-consumer problem
 be solved (that is, the Haskell program produces more data than Nginx can
 send to the client without blocking)?


I assume that the Haskell process is connected to nginx over a pipe or
socket. 


No, the idea is to have the Haskell application embedded in nginx.


In which case, nginx can use flow control to block the sending
side of the pipe and the Haskell code will back up on that.

If many connections need to be multiplexed over the same
flow-controlled entity (i.e. a pipe), without head-of-line blocking
then you can just suspend the current thread using an MVar or the STM
objects.

Alternatively, with Network.MiniHTTP the problem is turned inside out.
Request handlers return a Source object, which can be asked to
generate more data on request. This would be similar to generators in
Python.



This (the first suggestion) seems very interesting, thanks.




AGL




Manlio Perillo
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Function Precedence

2008-04-01 Thread PR Stanley



Think about this:

map (+1) [1..10]

What should it do?
take (+1) and return a function which takes a list as its 
argument and finally returns a list.




How about:

f 1 2 3

Should that be f (1 (2 3)), or ((f 1) 2) 3?
The latter, of course, but that's not really what I'm 
driving at. I'm asking why we can't have a function treated 
differently with regard to the precedence and associativity rules. f 
1 2 is indeed ((f 1) 2). Why not f 1 g 2 == ((f 1) (g 2))?


Cheers, Paul 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Function Precedence

2008-04-01 Thread Chris Smith
PR Stanley wrote:
Should that be f (1 (2 3)), or ((f 1) 2) 3?
  The latter, of course, but that's not really what I'm
 driving at. I'm asking why we can't have a function treated differently
 with regard to the precedence and associativity rules. f 1 2 is indeed
 ((f 1) 2). Why not f 1 g 2 == ((f 1) (g 2))?

Are you asking why one doesn't change the rules for all functions?  Or 
are you asking why Haskell doesn't include a system of user-defined 
precedence and associativity for function application so that one could 
declare that g binds more tightly than f?  I see good reasons for both 
questions, but I'm unsure which you mean.

In both cases, it comes down to consistency of the syntax rules.  In 
order for (f 1 g 2) to parse as (f 1) (g 2), one would have to do 
something surprising.  It's unclear what that is: perhaps treat literals 
differently from variables?  Somehow determine a precedence level for
(f 1)?  Or maybe favor shorter argument lists for grouping function 
application?

If you have a very clear kind of grouping that you think makes sense in 
all cases, feel free to mention it.  It seems unlikely to me, but perhaps 
everyone will agree, once they see it, that it is in fact better than the 
current parsing rules.

-- 
Chris Smith

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread apfelmus

Janis Voigtlaender wrote:

Loup Vaillant wrote:

  Thanks to some geniuses (could someone name them?), we have type
classes and higher order types in Haskell (and even more).


As far as names go:

 for type classes, of course Wadler, but also Blott and Kaes.

 for higher order types, well, where to start?


Girard and Reynolds?


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Function Precedence

2008-04-01 Thread PR Stanley

Are you asking why one doesn't change the rules for all functions?  Or
are you asking why Haskell doesn't include a system of user-defined
precedence and associativity for function application so that one could
declare that g binds more tightly than f?  I see good reasons for both
questions, but I'm unsure which you mean.

In both cases, it comes down to consistency of the syntax rules.  In
order for (f 1 g 2) to parse as (f 1) (g 2), one would have to do
something surprising.  It's unclear what that is: perhaps treat literals
differently from variables?  Somehow determine a precedence level for
(f 1)?  Or maybe favor shorter argument lists for grouping function
application?

If you have a very clear kind of grouping that you think makes sense in
all cases, feel free to mention it.  It seems unlikely to me, but perhaps
everyone will agree, once they see it, that it is in fact better than the
current parsing rules.

Paul:
All you'd have to do is to give the innermost function the highest precedence,
therefore
f g x == f (g x)
let f x = x^2
let g x = x `div` 2
f g 4 == error, while
f (g 4) == 4

I'm beginning to wonder if I fully understand the right associativity 
rule for the -> operator.


Cheers, Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell vs Scheme

2008-04-01 Thread Greg Meredith
Douglas,

Excellent questions you posed to Simon P-J -- who then forwarded them to the
Haskell Cafe list. By way of answering i should say i was a Schemer from the
get-go; it was really the first programming language i studied as an
undergraduate majoring in maths at Oberlin in the early 80's. Eventually, i
went on to design and build my own language (at MCC with Christine
Tomlinson, the principal investigator) called Rosette. While Scheme was
Sussman and Abelson's way of making sense of Hewitt's ideas in a sequential
setting Rosette was our way of doing the full banana -- including the
actor-based form of concurrency as well as both structural and 3-Lisp-style
procedural reflection and a whole host of other advanced features. So, i was
naturally profoundly frustrated when the world at large turned to languages
like C, C++ and even Java. i have been waiting more than 20 years for the
industry to catch up to the joys of advanced language design.

Now that the industry has taken a shine to functional languages again i have
been spending more time with the various modern flavors and have to say that
while each of the major contenders (ML, OCaml, Erlang, Scala, Haskell) have
something to be said for them, Haskell stands out among them. Haskell enjoys
a particular form of mental hygiene that few other languages enjoy. Its
syntax, by comparison with Scheme, is remarkably concise -- and, the
importance of syntax is almost impossible to gauge because at the end of the
day it is code one is staring at ;-). The chief semantic differences that
make a difference to my mind may be classified as follows.

   - types
   - monads
   - meta-programming

In order, then: at its outset Haskell made a strong commitment to a potent
(static) typing scheme. Even if types can be layered on Scheme, the two
language design vectors are remarkably different and give programming a
different feel. As a result of my academic and professional training i have
come to rely heavily on types as a development discipline. In fact, if i
cannot devise a sensible type algebra for a given (application) domain
then i feel i don't really have a good understanding of the domain. One way
of seeing this from the Schemer point if view is that the deep sensibility
embodied in the Sussman and Abelson book of designing a DSL to solve a
problem is further refined by types. Types express an essential part of the
grammar of that language. In Haskell the close connection between typing and
features like pattern-matching are also part of getting a certain kind of
coherence between data and control. Again, this can be seen as taking the
coherence between data and control -- already much more evident in Scheme
than say C or C++ or even Java -- up a notch.

Haskell's language-level and library support for monads are really what set
this language apart. i feel pretty confident when i voice my opinion that
the most important contribution to computing (and science) that functional
programming has made in the last 15 years has been to work out practical
presentations of the efficacy of the notion of monad. As a Schemer i'm sure
you understand the critical importance of composition and compositional
design. The monad provides an important next step in our understanding of
composition effectively by making the notion parametric along certain
dimensions. This allows a programmer to capture very general container
patterns and control patterns (as well as other phenomena) with a very
concise abstraction. Again, this could be layered onto Scheme, but Haskell
embraced monad as a central abstraction at the language design level and
this leads to very different approaches to programming.

Now, the place where Haskell and the other statically typed functional
languages have some catching up to do is meta-programming. Scheme, Lisp and
other languages deriving from the McCarthy branch of the investigation of
lambda-calculus-based programming languages enjoy a much longer and deeper
investigation of meta-programming constructs. While MetaOCaml stands out as
a notable exception i think it safe to say that 3-Lisp and Brown are pretty
strong evidence of the long history and much richer investigation of
meta-programming notions along the McCarthy branch than along the Milner
branch. The industry as a whole, i think, has embraced the value of
meta-programming -- witness (structural) reflection in such mainstream
languages as Java and C#. And the Milner branch family of languages are
moving rapidly to catch up -- see the efforts on generic programming like S
P-J's SYB or TemplateHaskell -- but the deep coherence evident in the
simplicity of the monadic abstraction has not met up with the deep coherence
of 3-Lisp, yet.

Anyway, that's my two cents... but i note that US currency is not worth what
it used to be.

Best wishes,

--greg

Message: 21
Date: Tue, 1 Apr 2008 11:18:25 +0100
From: Simon Peyton-Jones [EMAIL PROTECTED]
Subject: [Haskell-cafe] FW: Haskell
To: Haskell Cafe 

Re: [Haskell-cafe] function type def

2008-04-01 Thread PR Stanley

Try putting this through your GHCI:
:t twice f x = f (f x)
I'd presume that based on the inference of (f x), f is (t -> t) and x :: t

Yes, Maybe I should get the right associativity rule cleared first.
Cheers,
Paul

At 20:35 01/04/2008, you wrote:

PR Stanley:
I know sort of instinctively why it is so but can't think of the 
formal rationale for it:

f g x = g (g x) :: (t -> t) -> (t -> t)


First of all - it is not the definition f g x = ... :: (t -> ...
but the type of the function, which might be specified:
f :: (t->t)->t->t
Then, the answer to:

Why not
(t -> t) -> t -> (t -> t)
to take account of the argument x for g?


is simple. If t is the type of x, then g must be g :: t->t, you're right.
So f :: (t->t) -> t -> [the type of the result]
But this result is of the type t, it is g(g x), not (t->t), it is as
simple as that. Perhaps you didn't recognize that -> is syntactically
a right-associative op, so
a->b->c   is equivalent to a->(b->c), or
(t->t)->t->t equiv. to  (t->t)->(t->t)

Jerzy Karczmarczuk
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Function Precedence

2008-04-01 Thread Brandon S. Allbery KF8NH


On Apr 1, 2008, at 17:07 , PR Stanley wrote:
I'm beginning to wonder if I fully understand the right  
associativity rule for the -> operator.


Read a parenthesized unit as an argument:

 (a -> (b -> (c -> d)))   (((f 1) 2) 3)
 (((a -> b) -> c) -> d)   (f (1 (2 3)))

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Andrew Bagdanov
On Tue, Apr 1, 2008 at 5:37 PM, Chris Smith [EMAIL PROTECTED] wrote:
 Just random thoughts here.


Same here...


  Andrew Bagdanov wrote:
   Well, if I don't have side effects (and don't mind extra, unneeded
   evaluations), I can write my conditionals as functions in Scheme too.
   Heck, now that I think of it I can even avoid those extra evaluations
   and side-effect woes if i require promises for each branch of the
   conditional.  No macros required...

  This is essentially doing lazy evaluation in Scheme.  It's certainly
  possible; just clumsy.  You must explicitly say where to force
  evaluation; but if you think about it, the run-time system already knows
  when it needs a value.  This is very analogous to having type inference
  instead of explicitly declaring a bunch of types as in Java or C++.


Boy is it ever clumsy, and I like your analogy too.  But lazy
evaluation semantics typically come with purity, which is also a
fairly heavy burden to foist onto the user...  Certainly not without
benefits, but at times a burden nonetheless...


   Again, I think this is highly problem
   dependent, though I think you win more with lazy evaluation in the long
   run.  Do more experienced Haskellers than me have the opposite
   experience?  I mean, do you ever find yourself forcing strict evaluation
   so frequently that you just wish you could switch on strict evaluation
   as a default for a while?

  The first thing I'd say is that Haskell, as a purely functional language
  that's close enough to the pure lambda calculus, has unique normal
  forms.  Furthermore, normal order (and therefore lazy) evaluation is
  guaranteed to be an effective evaluation order for reaching those normal
  forms.  Therefore, forcing strictness can never be needed to get a
  correct answer from a program.  (Applicative order evaluation does not
  have this property.)


I thought that in a pure functional language any evaluation order was
guaranteed to reduce to normal form.  But then it's been a very, very
long time since I studied the lambda calculus...

  Therefore, strictness is merely an optimization.  In some cases, it can
  improve execution time (by a constant factor) and memory usage (by a
  lot).  In other cases, it can hurt performance by doing calculations that
  are not needed.  In still more cases, it is an incorrect optimization and
  can actually break the code by causing certain expressions that should
  have an actual value to become undefined (evaluate to bottom).  I've
  certainly seen all three cases.

  There are certainly situations where Haskell uses a lot of strictness
  annotations.  For example, see most of the shootout entries.  In
  practice, though, code isn't written like that.  I have rarely used any
  strictness annotations at all.  Compiling with optimization in GHC is
  usually good enough.  The occasional bang pattern (often when you intend
  to run something in the interpreter) works well enough.

  (As an aside, this situation is quite consistent with the general
  worldview of the Haskell language and community.  Given that strictness
  is merely an optimization of laziness, the language itself naturally opts
  for the elegant answer, which is lazy evaluation; and then Simon and
  friends work a hundred times as hard to make up for it in GHC!)


Yeah, I'm actually pretty convinced on the laziness issue.  Lazy
evaluation semantics are a big win in many ways.


   I think this is possibly the weakest reason to choose Haskell over
   Scheme.  Lispers like the regularity of the syntax of S-expressions, the
   fact that there is just one syntactic form to learn, understand, teach,
   and use.

  I am strongly convinced, by the collective experience of a number of
  fields of human endeavor, that noisy syntax gets in the way of
  understanding.  Many people would also say that mathematical notation is
  a bit impenetrable -- capital sigmas in particular seem to scare people
  -- but I honestly think we'd be a good ways back in the advancement of
  mathematical thought if we didn't have such a brief and non-obstructive
  syntax for these things.  Mathematicians are quite irregular.  Sometimes
  they denote that y depends on x by writing y(x); sometimes by writing y_x
  (a subscript); and sometimes by writing y and suppressing x entirely in
  the notation.  These are not arbitrary choices; they are part of how
  human beings communicate with each other, by emphasizing some things, and
  suppressing others.  If one is to truly believe that computer programs
  are for human consumption, then striving for regularity in syntax doesn't
  seem consistent.


All good points, but "noisy" is certainly in the eye of the beholder.
I'd make a distinction between background and foreground noise.  A
simple, regular syntax offers less background noise.  I don't have to
commit lots of syntactic idioms and special cases to memory to read
and write in that language.  Low background noise in Scheme, and I'm

Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Artem V. Andreev
Simon Peyton-Jones [EMAIL PROTECTED] writes:

 Dear Haskell Cafe members

 Here's an open-ended question about Haskell vs Scheme.  Don't forget to cc 
 Douglas in your replies; he may not be on this list (yet)!

 Simon

 -Original Message-
 From: D. Gregor [mailto:[EMAIL PROTECTED]
 Sent: 30 March 2008 07:58
 To: Simon Peyton-Jones
 Subject: Haskell

 Hello,

 In your most humble opinion, what's the difference between Haskell and
 Scheme?  What does Haskell achieve that Scheme does not?  Is the choice less
 to do with the language, and more to do with the compiler?  Haskell is a
 pure functional programming language; whereas Scheme is a functional
 language, does the word pure set Haskell that much apart from Scheme?  I
 enjoy Haskell.  I enjoy reading your papers on parallelism using Haskell.
 How can one answer the question--why choose Haskell over Scheme?

In my most humble of opinions, the comparison between Haskell and Scheme is just
methodologically incorrect. What I mean is that these are actually different
kinds of entities, even though both are called programming languages.  In
particular, Scheme is nothing but a minimal core of a programming language --
despite it being Turing complete, one can hardly write any serious, real-world
program in pure Scheme, as defined by IEEE or whatever. So Scheme is, to my
mind, what it is called -- a scheme, which different implementors supply with
various handy additions. And we do not have any leading Scheme implementation
that would count as a de facto definition of a real Scheme language. Thus we
conclude that the term Scheme denotes not a programming language, but rather a
family of programming languages.

On the other hand, Haskell, as defined by The Report (well, plus FFI addendum)
is a true solid real-world language which can actually be used for real-world
programming as it is. And we do have a dominating implementation as well, etc
etc.

Thus: a methodologically correct comparison should be done either between two
implementations (Bigloo vs GHC, or MIT Scheme vs Hugs, or Stalin vs Jhc, or
whatever you like) or on the level of PL families, and then we'd have Scheme
versus Haskell+Helium+Clean+maybe even Miranda+whatever else. In the latter case
we'd have two choices again: comparing upper bounds or lower bounds, that is,
comparing sets of features provided by any representative of a class or by *all*
representatives. Needless to say that the outcome would differ drastically
depending on which way we take.


-- 

S. Y. A(R). A.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Thomas Schilling


On 2 apr 2008, at 00.27, Artem V. Andreev wrote:

 Needless to say that the outcome would differ drastically
depending on which way we take.



Right.  Hence we try to answer Douglas' request in good faith, to give
him the most useful answers.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FW: Haskell

2008-04-01 Thread Dan Weston

This one's easy to answer:

When I studied Scheme, I did not have an uncontrollable urge to pore 
through arcane papers trying to find out what the heck a natural 
transformation was, or a Kleisli arrow, or wonder how you can download 
Theorems for Free instead of having to pay for them, or see if I really 
could write a program only in point-free fashion. Nor did I use to take 
perfectly working code and refactor it until it cried for mercy, and 
then stay awake wondering if there was some abstraction out there I was 
missing that would really make it sing.


You can debate the role of Haskell as a programming language per se, but 
when it comes to consciousness-raising, the jury is in...Haskell is my 
drug of choice!


Dan

Simon Peyton-Jones wrote:

Dear Haskell Cafe members

Here's an open-ended question about Haskell vs Scheme.  Don't forget to cc 
Douglas in your replies; he may not be on this list (yet)!

Simon

-Original Message-
From: D. Gregor [mailto:[EMAIL PROTECTED]
Sent: 30 March 2008 07:58
To: Simon Peyton-Jones
Subject: Haskell

Hello,

In your most humble opinion, what's the difference between Haskell and
Scheme?  What does Haskell achieve that Scheme does not?  Is the choice less
to do with the language, and more to do with the compiler?  Haskell is a
pure functional programming language; whereas Scheme is a functional
language, does the word pure set Haskell that much apart from Scheme?  I
enjoy Haskell.  I enjoy reading your papers on parallelism using Haskell.
How can one answer the question--why choose Haskell over Scheme?

Regards,

Douglas


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Function Precedence

2008-04-01 Thread Chris Smith
PR Stanley wrote:
 All you'd have to do is to give the innermost function the highest
 precedence, therefore
 f g x == f (g x)
 let f x = x^2
 let g x = x`div`2
 f g 4 == error while
 f (g 4) == 4

I'm afraid I still don't understand what you're proposing.  How can
f g x mean f (g x), and yet f g 4 is different from f (g 4)?

Maybe it'll help to point out that using functions as first-class 
concepts -- including passing them around as data -- is fundamental to 
functional programming languages.  In other words, anything in the world 
could be a function, whether it's acting like a function right now or 
not.  So distinguishing between (f g 4) and (f 1 2) is probably not 
wise.  They either need to both parse like ((f g) 4), or they need to 
both parse like (f (1 2)).  It has been the experience of the Haskell, 
ML, and other related languages that left associativity for function 
application works best.
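
To make the parse concrete, here is the earlier f/g example spelled out
(concrete types picked just for illustration):

f :: Int -> Int
f x = x ^ 2

g :: Int -> Int
g x = x `div` 2

-- Application is left-associative, so  f g 4  parses as  (f g) 4,
-- which is a type error: f expects an Int, not a function.

four :: Int
four = f (g 4)      -- explicit grouping gives 4

four' :: Int
four' = (f . g) 4   -- equivalently, use composition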

 I'm beginning to wonder if I fully understand the right associativity
 rule for the -> operator.

It just means that if I have a string of things separated by ->, I can
put parentheses around all but the leftmost one, and it doesn't change
the meaning.
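
For example, these two signatures are exactly the same type (throwaway
definitions, just to illustrate):

add :: Int -> Int -> Int
add x y = x + y

add' :: Int -> (Int -> Int)
add' x y = x + y

-- and this is precisely what makes partial application work:
increment :: Int -> Int
increment = add 1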

-- 
Chris Smith

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: FW: Haskell

2008-04-01 Thread Chris Smith
I've just got a minute, so I'll answer the factual part.

Andrew Bagdanov wrote:
 I thought that in a pure functional language any evaluation order was
 guaranteed to reduce to normal form.  But then it's been a very, very
 long time since I studied the lambda calculus...

If you don't have strong normalization, such as is the case with Haskell, 
then you can look at the language as being a restriction of the pure 
untyped lambda calculus.  In that context, you know that: (a) a given 
expression has at most one normal form, so that *if* you reach a normal 
form, it will always be the right one; and (b) normal order evaluation 
(and therefore lazy evaluation) will get you to that normal form if it 
exists.  Other evaluation strategies may or may not reach the normal 
form, even if the expression does have a normal form.
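
A tiny Haskell illustration of that last point (the names are made up for
the example):

loop :: Int
loop = loop                   -- a term with no normal form

lazyWins :: Int
lazyWins = const 42 loop      -- 42 under lazy evaluation; a strict strategy
                              -- would evaluate loop first and never finish

alsoFine :: [Int]
alsoFine = take 3 (repeat 1)  -- [1,1,1], even though repeat 1 is infinite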

You may be thinking of typed lambda calculi, which tend to be strongly 
normalizing.  Unlike the case with the untyped lambda calculus, in sound 
typed lambda calculi every (well-typed) term has exactly one normal form, 
and every evaluation strategy reaches it.  However, unrestricted 
recursive types break normalization.  This is not entirely a bad thing, 
since a strongly normalizing language can't be Turing complete.  So real-
world programming languages tend to provide recursive types and other 
features that break strong normalization.
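
For instance, a single recursive type is already enough to write a looping
term with no recursion at all at the value level -- a standard trick, shown
here only as an illustration:

newtype Rec a = Rec { unRec :: Rec a -> a }

selfApply :: Rec a -> a
selfApply x = unRec x x

omega :: a
omega = selfApply (Rec selfApply)   -- no normal form: it reduces to itself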

I'm sure there are others here who know this a lot better than I.  I'm 
fairly confident everything there is accurate, but I trust someone will 
correct me if that confidence is misplaced.

-- 
Chris Smith

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] unix support

2008-04-01 Thread Galchin Vasili
Hello,

 On my personal computer, I added some functionality to the unix package
and  now I want to test this functionality. Basically I did a cabal install
to the global env on my laptop. I just ran a session of ghci:

[EMAIL PROTECTED]:~$ ghci
GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
Prelude> :m System.Posix
Prelude System.Posix> :t openFd
openFd :: FilePath -> OpenMode -> Maybe FileMode -> OpenFileFlags -> IO Fd
Prelude System.Posix> :t mqOpen

<interactive>:1:0: Not in scope: `mqOpen'

There is a signature for openFd (existing functionality) but not mqOpen (new
functionality!)  The new functionality got some warnings vis-a-vis Storable
because I haven't defined alignment and something else yet. However, when I
did runhaskell Setup.hs install, everything seemed to get installed. ???


Thanks, B.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How can I represent 4x4 map in haskell

2008-04-01 Thread Richard A. O'Keefe

On 1 Apr 2008, at 3:51 am, iliali16 wrote:
so my question is if this is ok to represent a map. If yes I will try to
write the function which makes it 4 x 4 myself. What I just need as an
answer is Yes or No. Just to let you know, I am trying to build the
Wumpus World


What you should know is that there is no rule that the Wumpus World
has to be 4 x 4.  Quoting
http://web.inf.tu-dresden.de/~mit/LRAPP/wumpus/wumpus.htm
The size of the grid may vary for different scenarios.

You are tacitly assuming that if you want to know what's where in the
wumpus world you have to store an *image* of that world.  But this is
not so.

The wumpus may be dead, in which case we don't care where it is,
or it may be in some specific location:
wumpus :: Maybe Location
The gold may have been grabbed, in which case we know where it is
without looking (we have it), or it may be in some specific place:
gold :: Maybe Location
In some versions of the wumpus world, there may be more than one
piece of gold.  In that case,
gold :: [Location]
There is some number of pits.  There might be none.  All we really
want to know is whether a particular square is a pit or not.
is_pit :: Location -> Bool

It's not clear to me whether the gold or the wumpus might be in a
pit.  Since it's deadly to enter a pit anyway, we don't really care.
The gold and the wumpus might well be in the same cell.

So we can do this:

type Location = (Int, Int)

data Wumpus_World
   = Wumpus_World {
       bounds :: Location,
       wumpus :: Maybe Location,
       gold   :: Maybe Location,
       is_pit :: Location -> Bool
     }

initial_world = Wumpus_World {
    bounds = (4,4),
    wumpus = Just (3,3),
    gold   = Just (3,3),
    is_pit = \loc -> case loc of
                       (2,1) -> True
                       (4,3) -> True
                       _     -> False
    } -- I just made this one up

holds :: Eq a => Maybe a -> a -> Bool

holds (Just x)  y = x == y
holds (Nothing) _ = False

has_wumpus, has_gold :: Wumpus_World -> Location -> Bool

has_wumpus world location = wumpus world `holds` location

has_gold world location = gold world `holds` location
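
A couple of quick checks against the definitions above (results worked out
by hand from initial_world):

gold_found  = has_gold   initial_world (3,3)   -- True
pit_there   = is_pit     initial_world (2,1)   -- True
wumpus_here = has_wumpus initial_world (1,1)   -- False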

There are lots of other ways to do this, and whether this is a
good one depends on what you need to do with it.  It might be
better to have

has_wumpus :: Location -> Bool
has_gold   :: Location -> Bool

as the field members directly, for example.  One thing that is
right about it is that it doesn't build in any specific idea of
the size of the world.  Another good thing is that it postpones
the decision about how to tell where the pits are.
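
In that variant the record would expose only queries -- something like this
sketch (the 2 suffix is only to avoid clashing with the record above):

data Wumpus_World2
   = Wumpus_World2 {
       bounds2     :: Location,
       has_wumpus2 :: Location -> Bool,
       has_gold2   :: Location -> Bool,
       is_pit2     :: Location -> Bool
     }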

But of course there are two maps: the complete map of how the world
really is, which the world simulator has to have, and the agent's
partial map recording what it has seen so far.  They might have
similar representations, or they might not.

It's a really good idea to write as much of the code as you can so
that it doesn't know what the map representation is.
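
For example, an agent-side helper can be written purely against the query
interface, without caring how the world is stored (a sketch using the
record defined earlier):

-- A square is worth entering only if it has no pit and no live wumpus.
is_safe :: Wumpus_World -> Location -> Bool
is_safe world loc = not (is_pit world loc) && not (has_wumpus world loc)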

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Newbie] Problem with Data.Map (or something else ?)

2008-04-01 Thread Chaddaï Fouché
2008/4/1, Bruno Carnazzi [EMAIL PROTECTED]:
 Because I don't know anything about arrays in Haskell. Thank you for
  pointing this, I have to read some more Haskell manuals :)

A good place to learn about Haskell's array (which come in many
flavours) is this wiki page :
http://www.haskell.org/haskellwiki/Modern_array_libraries

  
 import Data.Array
 import Data.List
 import Data.Ord

 syrs n = a
  where a = listArray (1,n) $ 0:[ syr n x | x <- [2..n]]
        syr n x = if x' <= n then 1 + a ! x' else 1 + syr n x'
          where x' = if even x then x `div` 2 else 3 * x + 1

 main = print $ maximumBy (comparing snd) $ assocs $ syrs 100
  


 The logic and the complexity in this algorithm is comparable to mine
  but the performance difference is huge, which is not very intuitive in
  my mind (There is no 1+1+1+1+1... problem with array ?)

Array or Map isn't really the problem here, as I thought at first (my
algorithm with a Map instead only takes 6s to find the solution).
The main problem in your code, I think, is that because of Map.map you
create multiple copies of your smaller Maps in memory and union forces
them to materialize, while the fact that you don't evaluate the values
means the GC won't collect them. Anyway, your algorithm by itself is
pretty slow I think, since for every step to a number which is not
already recorded you must add 1 to all the numbers you passed on the
way.
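
To make the "evaluate the value" point concrete, here is a minimal sketch of
the same memoisation done with a Map whose values are forced before they are
inserted (this is not Bruno's program, which isn't quoted here -- the names
and structure are just for illustration):

import qualified Data.Map as M
import Data.List (foldl')

-- Force the value with seq before storing it, so the map holds an
-- evaluated Int rather than a growing 1+1+1+... thunk.
insertForced :: Ord k => k -> Int -> M.Map k Int -> M.Map k Int
insertForced k v m = v `seq` M.insert k v m

-- Chain lengths of the Syracuse sequence, memoised in one Map that is
-- threaded through the fold.
chainLengths :: Int -> M.Map Int Int
chainLengths n = foldl' step (M.singleton 1 0) [2..n]
  where
    step m x = fst (go m x)
    go m x = case M.lookup x m of
      Just l  -> (m, l)
      Nothing ->
        let x'       = if even x then x `div` 2 else 3 * x + 1
            (m', l') = go m x'
            l        = 1 + l'
        in (insertForced x l m', l)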

-- 
Jedaï
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe