Re: [Haskell-cafe] Strictness leak

2007-10-31 Thread Ketil Malde
Jeff Polakow [EMAIL PROTECTED] writes:

 Besides anything else, sequence will diverge on an infinite list. 

Argh, of course.  Thanks!

 It is necessary to compute all of the computations in the list before 
 returning
 any of the pure resulting list.

Replacing sequence with sequence', given as:

 sequence' ms = foldr k (return []) ms
 where
   k m m' = do { x <- m; xs <- unsafeInterleaveIO m'; return (x:xs) }

seems to solve it.
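For illustration, here is a minimal self-contained sketch (mine, not from the thread) of how the lazy variant behaves on an infinite list of actions:

import System.IO.Unsafe (unsafeInterleaveIO)

-- lazy variant of sequence, as above: each tail is deferred until demanded
sequence' :: [IO a] -> IO [a]
sequence' = foldr k (return [])
  where
    k m m' = do { x <- m; xs <- unsafeInterleaveIO m'; return (x:xs) }

main :: IO ()
main = do
  -- the standard 'sequence' would diverge here, since it runs every
  -- action in the list before returning any of the results
  xs <- sequence' (map return [1 ..]) :: IO [Integer]
  print (take 5 xs)          -- prints [1,2,3,4,5]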
 
-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


[Haskell-cafe] Embedding the GHC API

2007-10-31 Thread Joel Reymont

Has anyone tried to embed GHC as a library recently?

What is the size of the resulting binary?

I'm assuming a bare minimum of needed libraries.

Thanks, Joel

--
http://wagerlabs.com







Re: [Haskell-cafe] Embedding the GHC API

2007-10-31 Thread pepe

Austin Seipp has written about this in his blog:

http://austin.youareinferior.net/?q=node/29

I will take this time to point out that using the GHC API in your
applications results in *large* executables. The Interact example
above, when compiled with vanilla --make options, resulted in a whopping
17 MB executable. I've observed, however, that you can mitigate this by an
enormous amount using the tools strip and gzexe, taking it down to a
light 2.5 MB (a size reduction of about 85%).


Cheers
pepe

On 31/10/2007, at 9:44, Joel Reymont wrote:


Has anyone tried to embed GHC as a library recently?

What is the size of the resulting binary?

I'm assuming a bare minimum of needed libraries.

Thanks, Joel

--
http://wagerlabs.com







[Haskell-cafe] Why can't Haskell be faster?

2007-10-31 Thread Paulo J. Matos
Hello all,

I, along with some friends, have been looking at Haskell lately. I'm
very happy with Haskell as a language; however, a friend sent me this
link:
http://shootout.alioth.debian.org/gp4/

which enables you to compare several language implementations. Haskell
seems to lag behind Clean.
From what I've seen of Clean it seems almost like Haskell. It even
distributes a Haskell-to-Clean translator, so the obvious question is,
why is Haskell slower?
Being similar languages, and GHC being a very good compiler, can't it
get at least as fast as Clean?

What am I missing here? (I wrote this mail assuming the results from
the URL are trustworthy).

Cheers,

-- 
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


[Haskell-cafe] Re[2]: [Haskell] Image manipulation

2007-10-31 Thread Bulat Ziganshin
Hello Bjorn,

Wednesday, October 31, 2007, 11:54:33 AM, you wrote:

 data fields. Data Field Haskell had a function to force the evaluation of
 all elements in a data field at once.

this looks like parallel arrays?


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Why can't Haskell be faster?

2007-10-31 Thread manu

From what I've seen of Clean it seems almost like Haskell. It even


distributes a Haskell-Clean translator so the obvious question is,
why is Haskell slower?



It's also something I've wondered about, and I'm curious about the  
answer...


One of the differences between Haskell and Clean is how side-effects  
are allowed

(Uniqueness Types for Clean, and Monadic I/O for Haskell)

GHC also supports a lot of extensions beyond Haskell 98.

Does that explain the difference in performance? I don't know...

Experts, please!


Manu




Re: [Haskell-cafe] Why can't Haskell be faster?

2007-10-31 Thread Adrian Hey

Paulo J. Matos wrote:

Hello all,

I, along with some friends, have been looking to Haskell lately. I'm
very happy with Haskell as a language, however, a friend sent me the
link:
http://shootout.alioth.debian.org/gp4/

which enables you compare several language implementations. Haskell
seems to lag behind of Clean.

From what I've seen of Clean it seems almost like Haskell. It even

distributes a Haskell-Clean translator so the obvious question is,
why is Haskell slower?
Being similar languages and being GHC a very good compiler, can't it
get at least as fast as Clean?

What am I missing here? (I wrote this mail assuming the results from
the URL are trustworthy).


I don't know for certain that this is still the case (and if so why).
But I do remember that when I was a Clean user a few years ago both
the Clean compiler and the resulting executables were amazingly fast
(certainly by FPL standards).

I've often thought it's a real shame that two different but very
similar languages exist. I think that the Clean compiler would
be one of the best if not *the* best Haskell implementations available,
apart from the minor snag that it isn't Haskell at all :-)

As things are at the moment ghc has no serious competition so we don't
really know how fast it should be. Maybe this will change in future.

BTW, the reason I still jumped ship in the end and became a Haskell
user instead had nothing to do with performance. The reason was that if
I was going to invest a lot of time in progs/libs I wanted to have some
confidence I'd made the right choice long term and I had issues with the
Clean approach to concurrency (what the Clean folk call deterministic
concurrency). I didn't (and still don't) see this as viable, but during
a long and heated flame war on the Clean mailing list it became clear
that the Clean team did not agree with my point of view, so things
were not likely to change any time soon :-(

Regards
--
Adrian Hey


[Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Peter Hercek

I'm curious what experts think too.

So far I just guess it is because Clean's type system gives the
 compiler better hints for optimization:

* it is easy to mark stuff strict (even in function signatures
 etc.), so it is possible to save on unnecessary thunk creation

* uniqueness types allow in-place modification (instead of
 creating a copy of an object on the heap and modifying the copy),
 so you save GC time and also improve cache-hit performance

Peter.
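
(As a rough Haskell-side illustration of the first point above, not taken from the original message: GHC's bang patterns play roughly the role of Clean's strictness annotations, keeping an accumulator from piling up thunks:)

{-# LANGUAGE BangPatterns #-}

-- the bang forces the accumulator at every step, so the loop does not
-- build a chain of suspended (+) applications
sumSquares :: [Double] -> Double
sumSquares = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x * x) xs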



[Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Peter Hercek

Add to that better unbox/box annotations; this may make an even
 bigger difference than the strictness stuff because it allows
 you to avoid a lot of indirect references to data.

Anyway, if Haskell did some kind of whole-program analysis and
 transformation, it could probably mitigate all these problems
 to a certain degree.

So the slowness of Haskell (compared to Clean) is a consequence of
 its type system. OK, I'll stop; I did not write the Clean or Haskell
 optimizers or anything like that :-D

Peter.

Peter Hercek wrote:

I'm curious what experts think too.

So far I just guess it is because of clean type system getting
 better hints for optimizations:

* it is easy to mark stuff strict (even in function signatures
 etc), so it is possible to save on unnecessary CAF creations

* uniqueness types allow to do in-place modifications (instead
 of creating a copy of an object on heap and modifying the copy),
 so you save GC time and also improve cache hit performance

Peter.




Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Dougal Stanton
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:

 Anyway, if Haskell would do some kind of whole program analyzes
   and transformations it probably can mitigate all the problems
   to a certain degree.


I think JHC is supposed to do whole-program optimisations. Rumour has
it that its Hello World examples are the fastest around - I have heard
it has problems with larger code bases though. ;-) What's the current
state of play on this?

D.


Re: [Haskell-cafe] help needed packaging curl bindings

2007-10-31 Thread Ian Lynagh

Hi Brad,

On Tue, Oct 30, 2007 at 10:10:17PM -0700, brad clawsie wrote:
 i have decided to take on the task of packaging-up (for hackage) and
 documenting the curl bindings as available here:
 
 http://code.haskell.org/curl/
 
 if the originators of this code are reading this and do not wish me to
 proceed please say so, i won't be offended

I am CCing [EMAIL PROTECTED], the maintainer listed in the Cabal file.

 otherwise i was wondering if people had good examples to point me to
 for providing the cross-platform support needed for a FFI-based module
 such as this. i have made the necessary changes to compile the code on
 freebsd, but for other platforms i am not sure at all, particularly
 non-unix style platforms like windows.

What sort of changes do you mean?

 my guess is that providing cross-platform support requires autoconf
 etc prior to the hackage build process (?)
 
 any info/references appreciated


Thanks
Ian



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Paulo J. Matos
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:
 Add to that better unbox / box annotations, this may make even
   bigger difference than the strictness stuff because it allows
   you to avoid a lot of indirect references do data.

 Anyway, if Haskell would do some kind of whole program analyzes
   and transformations it probably can mitigate all the problems
   to a certain degree.


So, I might assert that it is not a problem of the Haskell language
itself, but a problem with the compiler. Which means that with
enough effort it should be possible for the compiler to generate
code whose performance is as good as Clean's.

 So the slowness of Haskell (compared to Clean) is consequence of
   its type system. OK, I'll stop, I did not write Clean nor Haskell
   optimizers or stuff like that :-D


The type system? Why is that? Shouldn't the type system in fact speed up
the generated code, since it knows all types at compile time?

 Peter.

 Peter Hercek wrote:
  I'm curious what experts think too.
 
  So far I just guess it is because of clean type system getting
   better hints for optimizations:
 
  * it is easy to mark stuff strict (even in function signatures
   etc), so it is possible to save on unnecessary CAF creations
 
  * uniqueness types allow to do in-place modifications (instead
   of creating a copy of an object on heap and modifying the copy),
   so you save GC time and also improve cache hit performance
 
  Peter.






-- 
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Reinier Lamers

Paulo J. Matos wrote:


So the slowness of Haskell (compared to Clean) is consequence of
 its type system. OK, I'll stop, I did not write Clean nor Haskell
 optimizers or stuff like that :-D

   



type system? Why is that? Shouldn't type system in fact speed up the
generated code, since it will know all types at compile time?
 

Yes, but apparently the Clean type system gives more information to the 
compiler than the Haskell system does. The Haskell type system doesn't 
say that a certain value can be updated in-place or that a certain value 
should not be boxed (not counting the GHC extension for unboxed types).
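
(For reference, a small sketch of what that GHC extension looks like; the code is illustrative, not from the original message:)

{-# LANGUAGE MagicHash #-}
module Unboxed where

import GHC.Exts (Int#, (+#))

-- explicitly unboxed machine integers (GHC-specific)
plusUnboxed :: Int# -> Int# -> Int#
plusUnboxed x y = x +# y

-- strict, unpacked fields: the Doubles are stored directly in the
-- constructor rather than behind pointers to boxed values
data Point = Point {-# UNPACK #-} !Double {-# UNPACK #-} !Double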


Reinier



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Jules Bean

Paulo J. Matos wrote:

type system? Why is that? Shouldn't type system in fact speed up the
generated code, since it will know all types at compile time?


The *existence* of a type system is helpful to the compiler.

Peter was referring to the differences between haskell and clean.

Specifically, clean's uniqueness types allow for a certain kind of 
zero-copy mutation optimisation which is much harder for a haskell 
compiler to automatically infer. It's not clear to me that it's actually 
worth it, but I think that's the point at issue. I can *imagine* 
algorithms in which copying is actually faster than mutation, if copying 
gives you better locality.


Jules


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Robin Green
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:

 Specifically, clean's uniqueness types allow for a certain kind of 
 zero-copy mutation optimisation which is much harder for a haskell 
 compiler to automatically infer. It's not clear to me that it's
 actually worth it, but I think that's the point at issue. I can
 *imagine* algorithms in which copying is actually faster than
 mutation, if copying gives you better locality.

If you want in-place update in Haskell, you can use the ST monad, or
IORefs. Yes, you have to refactor code, but anecdotally, uniqueness
types aren't without problems either - you can make one small change
and your code no longer satisfies the uniqueness condition.
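
(A minimal sketch of the ST route, my example rather than Robin's: the mutation stays local to the computation and runST hands back a pure result:)

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- purely functional from the outside, destructive updates inside
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> do n <- readSTRef ref
                  writeSTRef ref (n + x)) xs
  readSTRef ref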
-- 
Robin


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Jules Bean

Robin Green wrote:

On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:

Specifically, clean's uniqueness types allow for a certain kind of 
zero-copy mutation optimisation which is much harder for a haskell 
compiler to automatically infer. It's not clear to me that it's

actually worth it, but I think that's the point at issue. I can
*imagine* algorithms in which copying is actually faster than
mutation, if copying gives you better locality.


If you want in-place update in Haskell, you can use the ST monad, or
IORefs. Yes, you have to refactor code, but anecdotally, uniqueness
types aren't without problems either - you can make one small change
and your code no longer satisfies the uniqueness condition.


IORefs don't give you in-place update.

They give you mutation, but new values are still allocated in new heap.

foo <- newIORef "hi"
writeIORef foo "bye"

-- "bye" is a new string, allocated in new heap. the only thing that got
-- mutated was a pointer.


STArrays and certain IO Arrays give you in-place update, though.
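
(For example, a sketch using an unboxed mutable array, which really is updated in place; the code is illustrative, not from the original message:)

import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray)

-- builds a histogram by mutating an unboxed array in place; only the
-- final frozen array escapes, so there is no per-update allocation
histogram :: Int -> [Int] -> UArray Int Int
histogram size xs = runSTUArray $ do
  arr <- newArray (0, size - 1) 0
  mapM_ (\x -> readArray arr x >>= writeArray arr x . (+ 1)) xs
  return arr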

Jules



[Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Jeff . Harper
Peter Hercek wrote:
 * it is easy to mark stuff strict (even in function signatures
  etc), so it is possible to save on unnecessary CAF creations

Also, the Clean compiler has a strictness analyzer.  The compiler will 
analyze code and find many (but not all) cases where a function argument 
can be made strict without changing the behavior of the program.
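
(GHC's strictness analyser does the same kind of thing at -O. A classic sketch of the sort of case it catches; the example is mine, not Jeff's:)

-- Compiled without optimisation this builds a huge chain of (+) thunks
-- before forcing any of them; with -O, strictness analysis usually sees
-- that the accumulator is always demanded and evaluates it eagerly.
sumTo :: Int -> Int
sumTo n = go 0 n
  where
    go acc 0 = acc
    go acc k = go (acc + k) (k - 1)

main :: IO ()
main = print (sumTo 1000000)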


Re: [Haskell-cafe] help needed packaging curl bindings

2007-10-31 Thread brad clawsie
On Wed, Oct 31, 2007 at 01:36:40PM +, Ian Lynagh wrote:
  otherwise i was wondering if people had good examples to point me to
  for providing the cross-platform support needed for a FFI-based module
  such as this. i have made the necessary changes to compile the code on
  freebsd, but for other platforms i am not sure at all, particularly
  non-unix style platforms like windows.
 
 What sort of changes do you mean?

the need to locate the curl library and headers in different places on 
different platforms. the defaults used (for linux i presume) do not
work for freebsd for example. 

my guess is i need autotools to do this, but i am not sure




Re: [Haskell-cafe] binary operator modifiers

2007-10-31 Thread Brent Yorgey
On 10/29/07, Tim Newsham [EMAIL PROTECTED] wrote:

 I would love to have the ability to define binary operator modifiers.
 For example:

f \overline{op} g   = liftM2 op f g
f \overleftarrow{op} n  = liftM2 op f (return n)
n \overrightarrow{op} g = liftM2 op (return n) g
\widehat{f} x   = liftM f x

 so that for example you could define functions like:

 (*3) \overline{+} (/2)

 and

 3 \overrightarrow{+} \widehat{read} getContents

 Obviously you could write this out the long way:

 liftM (3+) $ liftM read getContents

 or go through the trouble of defining a bunch of binops

 f <+> g = liftM2 (+) f g
 n +> g  = return n <+> g
 f <+ n  = f <+> return n
 read' = liftM read

 (*3) <+> (/2)
 3 +> read' getContents

 but doing this for more than one or two operators gets tedious
 quickly...

 Is there any way in Haskell to modify binops in this way and still
 be able to use them infix?  If not, has anyone considered supporting
 strange syntaxes like this?


I've wanted this at times, too, due to using a lot of J a year or so ago.  J
has some weird parsing/semantics rules so that f g h essentially means
liftM2 g f h.  For example, avg =. +/ % #   is the J equivalent of avg =
liftM2 (/) sum length.  Anyway, the closest you can get in Haskell is
something like this, using the infix expressions of Ken Shan and Dylan
Thurston (http://www.haskell.org/pipermail/haskell-cafe/2002-July/003215.html):

import Control.Monad
import Control.Monad.Instances

infixr 0 -:, :-

data Infix f y = f :- y
x -: f :- y = x `f` y

ov op = liftM2 op
ovL op f n = liftM2 op f (return n)
ovR op n f = liftM2 op (return n) f
hat f = liftM f

*Main> :t (*3) -:ov (+):- (/2)
(*3) -:ov (+):- (/2) :: forall a1. (Fractional a1) => a1 -> a1
*Main> ((*3) -:ov (+):- (/2)) 7
24.5
*Main> :t 3 -:ovR (+):- ((hat read) getContents)
3 -:ovR (+):- ((hat read) getContents) :: forall a. (Num a, Read a) => IO a

It works (?), but it's pretty ugly and hardly seems worth it, unfortunately.

-Brent


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Neil Mitchell
Hi

I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this.  The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I don't think the difference between the
performance is a language thing, but it is a compiler thing. The
uniqueness type stuff may give Clean a slight benefit, but I'm not
sure how much they use that in their analyses.

Both Clean and GHC do strictness analysis - I don't know which one
does better, but both do quite well. I think Clean has some
generalised fusion framework, while GHC relies on rules and short-cut
deforestation. GHC goes through C-- to C or ASM, while Clean has been
generating native code for a lot longer. GHC is based on the STG
machine, while Clean is based on the ABC machine - not sure which is
better, but there are differences there.
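
(On the GHC side those rules are user-visible RULES pragmas; a standard textbook example, not specific to this thread, is map/map fusion:)

module Fuse where

-- whenever the optimiser sees two maps in a row, rewrite them into a
-- single pass; short-cut deforestation does the same job more generally
-- via the build/foldr rules in the standard libraries
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}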

My guess is that the native code generator in Clean beats GHC, which
wouldn't be too surprising as GHC is currently rewriting its CPS and
Register Allocator to produce better native code.

Thanks

Neil

On 10/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Peter Hercek wrote:
   * it is easy to mark stuff strict (even in function signatures
etc), so it is possible to save on unnecessary CAF creations

 Also, the Clean compiler has a strictness analyzer.  The compiler will
 analyze code and find many (but not all) cases where a function argument can
 be made strict without changing the behavior of the program.



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Don Stewart
ndmitchell:
 Hi
 
 I've been working on optimising Haskell for a little while
 (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
 on this.  The Clean and Haskell languages both reduce to pretty much
 the same Core language, with pretty much the same type system, once
 you get down to it - so I don't think the difference between the
 performance is a language thing, but it is a compiler thing. The
 uniqueness type stuff may give Clean a slight benefit, but I'm not
 sure how much they use that in their analyses.
 
 Both Clean and GHC do strictness analysis - I don't know which one
 does better, but both do quite well. I think Clean has some
 generalised fusion framework, while GHC relies on rules and short-cut
 deforestation. GHC goes through C-- to C or ASM, while Clean has been
 generating native code for a lot longer. GHC is based on the STG
 machine, while Clean is based on the ABC machine - not sure which is
 better, but there are differences there.
 
 My guess is that the native code generator in Clean beats GHC, which
 wouldn't be too surprising as GHC is currently rewriting its CPS and
 Register Allocator to produce better native code.

Yes, this was my analysis too -- it's in the native code gen. Which is
perhaps the main GHC bottleneck now.

-- Don


Re: [Haskell-cafe] Type inference problem with division (/)

2007-10-31 Thread Tim Chevalier
On 10/30/07, Felipe Lessa [EMAIL PROTECTED] wrote:
 On 10/30/07, Tim Chevalier [EMAIL PROTECTED] wrote:
  ppos = pi/len2; pi and len2 are both Ints, so dividing them gives you
  an Int. To convert to a Double, write ppos = fromIntegral (pi/len2).
  (Type :t fromIntegral in ghci to see what else fromIntegral can be
  used for.)

 You mean pi / fromIntegral len2, right?


You're right, that's what I meant. Sorry for the confusion.

Cheers,
Tim

-- 
Tim Chevalier * catamorphism.org * Often in error, never in doubt
Unfortunately, there is no algorithm for making human relationships
work. -- Robin Williams


Re: [Haskell-cafe] Embedding the GHC API

2007-10-31 Thread Don Stewart
I often use this in my cabal ghc-options:

ghc-options: -funbox-strict-fields -O2 -fasm -Wall -optl-Wl,-s

the last runs ld's strip automatically.

mnislaih:
 Austin Seipp has written about this in his blog:
 
 http://austin.youareinferior.net/?q=node/29
 
 I will take this time to point out that using the GHC API in your  
 applications results in *large* executables. The Interact example  
 above when compiled with vanilla --make options resulted in a whopping  
 17mb executable. I've observed however you can mitigate this by an  
 enormous amount using the tools strip and gzexe, taking it down to a  
 light 2.5mb (a size reduction of about 85%)
 
 Cheers
 pepe
 
 On 31/10/2007, at 9:44, Joel Reymont wrote:
 
 Has anyone tried to embed GHC as a library recently?
 
 What is the size of the resulting binary?
 
 I'm assuming a bare minimum of needed libraries.
 
  Thanks, Joel
 
 --
 http://wagerlabs.com
 
 
 
 
 


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Ryan Dickie
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?

--ryan

On 10/31/07, Don Stewart [EMAIL PROTECTED] wrote:

 ndmitchell:
  Hi
 
  I've been working on optimising Haskell for a little while
  (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
  on this.  The Clean and Haskell languages both reduce to pretty much
  the same Core language, with pretty much the same type system, once
  you get down to it - so I don't think the difference between the
  performance is a language thing, but it is a compiler thing. The
  uniqueness type stuff may give Clean a slight benefit, but I'm not
  sure how much they use that in their analyses.
 
  Both Clean and GHC do strictness analysis - I don't know which one
  does better, but both do quite well. I think Clean has some
  generalised fusion framework, while GHC relies on rules and short-cut
  deforestation. GHC goes through C-- to C or ASM, while Clean has been
  generating native code for a lot longer. GHC is based on the STG
  machine, while Clean is based on the ABC machine - not sure which is
  better, but there are differences there.
 
  My guess is that the native code generator in Clean beats GHC, which
  wouldn't be too surprising as GHC is currently rewriting its CPS and
  Register Allocator to produce better native code.

 Yes, this was my analysis too -- its in the native code gen. Which is
 perhaps the main GHC bottleneck now.

 -- Don


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Don Stewart
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
 

The current Clean compiler, for micro benchmarks, seems to be rather
good, yes. Any slowdown wrt. the same program in Clean could be
considered a bug in GHC...

And remember, usually Haskell is competing against 'high level' languages
like Python for adoption, where we're 5-500x faster anyway...

-- Don


Re: [Haskell-cafe] binary operator modifiers

2007-10-31 Thread Tim Newsham

liftM2 (/) sum length.  Anyway, the closest you can get in Haskell is
something like this, using the infix expressions of Ken Shan and Dylan
Thurston (http://www.haskell.org/pipermail/haskell-cafe/2002-July/003215.html):

[]


It works (?), but it's pretty ugly and hardly seems worth it, unfortunately.


Hmm.. that might be decent if you added rules to pretty-print them in 
lhs2tex.  The src code would be slightly messy but the formatted code 
would be very clean.  And it opens the doors for other binop decorators. 
Interesting idea.



-Brent


Tim Newsham
http://www.thenewsh.com/~newsham/


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Sebastian Sylvan
On 31/10/2007, Don Stewart [EMAIL PROTECTED] wrote:
 goalieca:
 So in a few years time when GHC has matured we can expect performance to
 be on par with current Clean? So Clean is a good approximation to peak
 performance?
 

 The current Clean compiler, for micro benchmarks, seems to be rather
 good, yes. Any slowdown wrt. the same program in Clean could be
 considered a bug in GHC...

 And remember usually Haskell is competing against 'high level' languages
 like python for adoption, where we're 5-500x faster anyway...

Not so sure about that last thing. I'd love to use Haskell for
performance, in other words use it because it makes it easier to write
parallel and concurrent programs (NDP and STM mainly, though I
wouldn't mind some language support for message passing, and perhaps
Sing#-style static protocol specifications, with some high degree of
inference).

Anyway, in order for that to be reasonable I think it's important that
even the sequential code (where actual data dependencies enforce
evaluation sequence) runs very quickly, otherwise we'll lose out to
some C-based language (written with 10x the effort) again when we
start bumping into the wall of Amdahl's law...


-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862


Re: [Haskell-cafe] help needed packaging curl bindings

2007-10-31 Thread Ian Lynagh
On Wed, Oct 31, 2007 at 08:35:42AM -0700, brad clawsie wrote:
 On Wed, Oct 31, 2007 at 01:36:40PM +, Ian Lynagh wrote:
   otherwise i was wondering if people had good examples to point me to
   for providing the cross-platform support needed for a FFI-based module
   such as this. i have made the necessary changes to compile the code on
   freebsd, but for other platforms i am not sure at all, particularly
   non-unix style platforms like windows.
  
  What sort of changes do you mean?
 
 the need to locate the curl library and headers in different places on 
 different platforms. the defaults used (for linux i presume) do not
 work for freebsd for example. 
 
 my guess is i need autotools to do this, but i am not sure

Oh, you mean adding library/include paths to the Cabal file? Right, that
will either need to use the new pkg-config support in Cabal 1.2 (if curl
uses pkg-config), or autotools (substituting into a .buildinfo file is
probably the easiest way).


Thanks
Ian



Re: [Haskell-cafe] does the order of splice definitions matter in template haskell, or is this a bug?

2007-10-31 Thread Ian Lynagh

Hi Thomas,

On Wed, Oct 31, 2007 at 03:27:20PM -0400, Thomas Hartman wrote:
 I have a situation where
 
 ... stuff...
 
 $(expose ['setState, 'getState])
 f = SetState
 
 compiles but
 
 f = SetState
 $(expose ['setState, 'getState])
 
 doesn't compile, with error: Not in scope: data constructor 'SetState.
 
 Is this a bug?

Not if SetState is created by the splice or defined after the splice,
no.

GHC typechecks up to the splice before running the splice, and then
continues typechecking everything past the splice afterwards. So if f is
defined before the splice then it must typecheck without the results of
the splice (or anything later in the file).
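
(A tiny self-contained illustration of that, using a hand-rolled splice rather than HAppS's expose:)

{-# LANGUAGE TemplateHaskell #-}
module Order where

import Language.Haskell.TH

-- the splice generates the declaration   generated = 42
$(return [ValD (VarP (mkName "generated")) (NormalB (LitE (IntegerL 42))) []])

-- fine: this comes after the splice, so 'generated' is in scope
usesIt :: Integer
usesIt = generated

-- moving 'usesIt' above the splice would fail with "not in scope",
-- because GHC typechecks everything before the splice first, then runs
-- the splice, then continues with the rest of the file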


Thanks
Ian



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Neil Mitchell
Hi

 So in a few years time when GHC has matured we can expect performance to be
 on par with current Clean? So Clean is a good approximation to peak
 performance?

No. The performance of many real world programs could be twice as fast
at least, I'm relatively sure. Clean is a good short term target, but
in the long run Haskell should be aiming for equivalence with highly
optimised C.

Thanks

Neil


[Haskell-cafe] does the order of splice definitions matter in template haskell, or is this a bug?

2007-10-31 Thread Thomas Hartman
I have a situation where

... stuff...

$(expose ['setState, 'getState])
f = SetState

compiles but

f = SetState
$(expose ['setState, 'getState])

doesn't compile, with error: Not in scope: data constructor 'SetState.

Is this a bug?

expose is defined in HAppS.State.EventTH

t,.



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Dan Piponi
On 10/31/07, Neil Mitchell [EMAIL PROTECTED] wrote:
 in the long run Haskell should be aiming for equivalence with highly
 optimised C.

Really, that's not very ambitious. Haskell should be setting its
sights higher. :-)

When I first started reading about Haskell I misunderstood what
currying was all about. I thought that if you provided one argument to
a two-argument function, say, then it'd do partial evaluation. Very
soon I was sorely let down as I discovered that it simply made a
closure that waits for the second argument to arrive so the reduction
can be carried out.

But every day, while coding at work (in C++), I see situations where
true partial evaluation would give a big performance payoff, and yet
there are so few languages that natively support it. Of course it
would require part of the compiler to be present in the runtime. But
by generating code in inner loops specialised to the data at hand it
could easily outperform C code in a wide variety of real world code. I
know there has been some research in this area, and some commercial
C++ products for partial evaluation have appeared, so I'd love to see
it in an easy to use Haskell form one day.

Just dreaming, I know...
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: does the order of splice definitions matter in template haskell, or is this a bug?

2007-10-31 Thread Alex Jacobson
order matters.  But I hope people are transitioning to using mkCommand 
instead of expose as it provides more functionality.


-Alex-

Thomas Hartman wrote:


I have a situation where

... stuff...

$(expose ['setState, 'getState]
f = SetState

compiles but

f = SetState
$(expose ['setState, 'getState]

doesn't compile, with error: Not in scope: data constructor 'SetState.

Is this a bug?

expose is defined in HAppS.State.EventTH

t,.





Re: [Haskell-cafe] Why can't Haskell be faster?

2007-10-31 Thread Don Stewart
bf3:
 Are these benchmarks still up-to-date? When I started learning FP, I had 
 to choose between Haskell and Clean, so I made a couple of little 
 programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes 
 a lot faster... I don't have the source code anymore, but it was based 
 on the book The Haskell road to math & logic.

Could be in the better Haskell libraries? We only really have the
shootout programs, which are very small.
  
 However, the Clean compiler itself is really fast, which is nice, it 
 reminds me to the feeling I had with Turbo Pascal under DOS :-) I find 
 GHC rather slow in compilation. But that is another topic of course.

I find it comforting that GHC thinks so hard about my code. :)

-- Don


Re: [Haskell-cafe] Type inference problem with division (/)

2007-10-31 Thread Henning Thielemann

On Tue, 30 Oct 2007, noa wrote:

 I have the following function:

 theRemainder :: [String] -> [String] -> Double
 theRemainder xs xt = sum( map additional (unique xs) )
   where
   additional x = poccur * (inf [ppos,pneg]) --inf takes [Double]
   where
   xsxt = zip xs xt
   pi = countPos xr -- countPos returns an Int
   ni = (length xr) - pi
   len = length xs

<- genericLength

   len2 = length xr
   ppos = pi/len2 -- THESE ARE THE PROBLEM
   pneg = ni/len2 -- THESE ARE THE PROBLEM
   poccur = (pi+ni)/len
   xr = (filter ((\y -> (fst y)==x)) (xsxt))
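
(Concretely, the kind of change being suggested, sketched on a self-contained example rather than on the original code:)

import Data.List (genericLength)

-- the counts are Ints, so they must be converted before Double division
ratio :: Int -> Int -> Double
ratio hits total = fromIntegral hits / fromIntegral total

-- genericLength sidesteps the conversion: it can return any Num type,
-- here Double, directly
meanLength :: [[a]] -> Double
meanLength xss = sum (map genericLength xss) / genericLength xss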


Re: [Haskell-cafe] Why can't Haskell be faster?

2007-10-31 Thread Peter Verswyvelen
Are these benchmarks still up-to-date? When I started learning FP, I had 
to choose between Haskell and Clean, so I made a couple of little 
programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes 
a lot faster... I don't have the source code anymore, but it was based 
on the book The Haskell road to math & logic.


However, the Clean compiler itself is really fast, which is nice; it
reminds me of the feeling I had with Turbo Pascal under DOS :-) I find
GHC rather slow at compilation. But that is another topic of course.


Peter

Paulo J. Matos wrote:

Hello all,

I, along with some friends, have been looking to Haskell lately. I'm
very happy with Haskell as a language, however, a friend sent me the
link:
http://shootout.alioth.debian.org/gp4/

which enables you compare several language implementations. Haskell
seems to lag behind of Clean.
From what I've seen of Clean it seems almost like Haskell. It even
distributes a Haskell-Clean translator so the obvious question is,
why is Haskell slower?
Being similar languages and being GHC a very good compiler, can't it
get at least as fast as Clean?

What am I missing here? (I wrote this mail assuming the results from
the URL are trustworthy).

Cheers,

  




Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Henning Thielemann

On Wed, 31 Oct 2007, Dan Piponi wrote:

 But every day, while coding at work (in C++), I see situations where
 true partial evaluation would give a big performance payoff, and yet
 there are so few languages that natively support it. Of course it
 would require part of the compiler to be present in the runtime. But
 by generating code in inner loops specialised to the data at hand it
 could easily outperform C code in a wide variety of real world code. I
 know there has been some research in this area, and some commercial
 C++ products for partial evaluation have appeared, so I'd love to see
 it in an easy to use Haskell form one day.

I weakly remember an article on Hawiki about that ...

If you write

 foo :: X -> Y -> Z
 foo x =
let bar y = ... x ... y ...
in  bar

would this give you true partial evaluation?


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Lennart Augustsson
There are many ways to implement currying.  And even with GHC you can get it
to do some work given one argument if you write the function the right way.
I've used this in some code where it was crucial.
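
(Presumably the "right way" is something like the following sketch of mine: tie the expensive work to the first argument so that a partial application shares it across calls:)

import qualified Data.Set as Set

-- naive: the Set is rebuilt on every call, even when xs never changes
member0 :: [Int] -> Int -> Bool
member0 xs y = y `Set.member` Set.fromList xs

-- restructured: 'member1 xs' is a function value that carries the Set
-- built from xs, so repeated calls reuse it instead of rebuilding it
member1 :: [Int] -> Int -> Bool
member1 xs = \y -> y `Set.member` s
  where
    s = Set.fromList xs

So "let check = member1 bigList in filter check ys" builds the Set once, where "filter (member0 bigList) ys" would rebuild it for every element.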

But yeah, a code generator at run time is a very cool idea, and one that has
been studied, but not enough.

  -- Lennart

On 10/31/07, Dan Piponi [EMAIL PROTECTED] wrote:

 On 10/31/07, Neil Mitchell [EMAIL PROTECTED] wrote:
  in the long run Haskell should be aiming for equivalence with highly
  optimised C.

 Really, that's not very ambitious. Haskell should be setting its
 sights higher. :-)

 When I first started reading about Haskell I misunderstood what
 currying was all about. I thought that if you provided one argument to
 a two argument function, say, then it'd do partial evaluation. Very I
 soon I was sorely let down as I discovered that it simply made a
 closure that waits for the second argument to arrive so the reduction
 can be carried out.

 But every day, while coding at work (in C++), I see situations where
 true partial evaluation would give a big performance payoff, and yet
 there are so few languages that natively support it. Of course it
 would require part of the compiler to be present in the runtime. But
 by generating code in inner loops specialised to the data at hand it
 could easily outperform C code in a wide variety of real world code. I
 know there has been some research in this area, and some commercial
 C++ products for partial evaluation have appeared, so I'd love to see
 it in an easy to use Haskell form one day.

 Just dreaming, I know...
 --
 Dan


[Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread david48
I'd like to see Supero- and JHC-compiled examples in the language shootout.


[Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Peter Hercek

The site claims it is quite up to date:

about Haskell GHC
The Glorious Glasgow Haskell Compilation System, version 6.6

Examples are compiled mostly in the middle of this year and
 at least -O was used. Each test has a log available. They
 are good at documenting what they do.

Peter.

Peter Verswyvelen wrote:
Are these benchmarks still up-to-date? When I started learning FP, I had 
to choose between Haskell and Clean, so I made a couple of little 
programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes 
a lot faster... I don't have the source code anymore, but it was based 
on the book The Haskell road to math & logic.


However, the Clean compiler itself is really fast, which is nice, it 
reminds me to the feeling I had with Turbo Pascal under DOS :-) I find 
GHC rather slow in compilation. But that is another topic of course.


Peter





Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Derek Elkins
On Wed, 2007-10-31 at 23:44 +0100, Henning Thielemann wrote:
 On Wed, 31 Oct 2007, Dan Piponi wrote:
 
  But every day, while coding at work (in C++), I see situations where
  true partial evaluation would give a big performance payoff, and yet
  there are so few languages that natively support it. Of course it
  would require part of the compiler to be present in the runtime. But
  by generating code in inner loops specialised to the data at hand it
  could easily outperform C code in a wide variety of real world code. I
  know there has been some research in this area, and some commercial
  C++ products for partial evaluation have appeared, so I'd love to see
  it in an easy to use Haskell form one day.
 
 I weakly remember an article on Hawiki about that ...

Probably RuntimeCompilation (or something like that and linked from the
Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage.
 
 If you write
 
  foo :: X -> Y -> Z
  foo x =
 let bar y = ... x ... y ...
 in  bar
 
 would this give you true partial evaluation?

No.  Partial evaluation (usually) implies a heck of a lot more than what
you are trying to do.



Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Stefan O'Rear
On Wed, Oct 31, 2007 at 03:37:12PM +, Neil Mitchell wrote:
 Hi
 
 I've been working on optimising Haskell for a little while
 (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
 on this.  The Clean and Haskell languages both reduce to pretty much
 the same Core language, with pretty much the same type system, once
 you get down to it - so I don't think the difference between the
 performance is a language thing, but it is a compiler thing. The
 uniqueness type stuff may give Clean a slight benefit, but I'm not
 sure how much they use that in their analyses.
 
 Both Clean and GHC do strictness analysis - I don't know which one
 does better, but both do quite well. I think Clean has some
 generalised fusion framework, while GHC relies on rules and short-cut
 deforestation. GHC goes through C-- to C or ASM, while Clean has been
 generating native code for a lot longer. GHC is based on the STG
 machine, while Clean is based on the ABC machine - not sure which is
 better, but there are differences there.
 
 My guess is that the native code generator in Clean beats GHC, which
 wouldn't be too surprising as GHC is currently rewriting its CPS and
 Register Allocator to produce better native code.

I don't think the register allocator is being rewritten so much as it is
being written:

[EMAIL PROTECTED]:/tmp$ cat X.hs
module X where

import Foreign
import Data.Int

memset :: Ptr Int32 -> Int32 -> Int -> IO ()
memset p v i = p `seq` v `seq` case i of
    0 -> return ()
    _ -> poke p v >> memset (p `plusPtr` sizeOf v) v (i - 1)
[EMAIL PROTECTED]:/tmp$ ghc -fbang-patterns -O2 -c -fforce-recomp -ddump-asm X.hs
...
X_zdwa_info:
movl 8(%ebp),%eax
testl %eax,%eax
jne .LcH6
movl $base_GHCziBase_Z0T_closure+1,%esi
addl $12,%ebp
jmp *(%ebp)
.LcH6:
movl 4(%ebp),%ecx
movl (%ebp),%edx
movl %ecx,(%edx)
movl (%ebp),%ecx
addl $4,%ecx
decl %eax
movl %eax,8(%ebp)
movl %ecx,(%ebp)
jmp X_zdwa_info
...

Admittedly that's better than it used to be (I recall 13 memory
references last time I tested it), but still... the reason for your
performance woes should be quite obvious in that snippet.

Stefan




Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Neil Mitchell
Hi

 I don't think the register allocater is being rewritten so much as it is
 being written:

From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph-based register allocator is pretty
good. The thing that is holding it back is the CPS conversion bit,
which was also being rewritten over the summer, but didn't get
finished. I think these are both things which are likely to be done
for 6.10.

Thanks

Neil


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Stefan O'Rear
On Thu, Nov 01, 2007 at 02:30:17AM +, Neil Mitchell wrote:
 Hi
 
  I don't think the register allocater is being rewritten so much as it is
  being written:
 
 From talking to Ben, who rewrote the register allocator over the
 summer, he said that the new graph based register allocator is pretty
 good. The thing that is holding it back is the CPS conversion bit,
 which was also being rewritten over the summer, but didn't get
 finished. I think these are both things which are likely to be done
 for 6.10.

Oh, that's good news.  I look forward to a massive increase in the
performance of GHC-compiled programs, most specifically GHC itself.

Stefan




Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Bernie Pope


On 01/11/2007, at 2:37 AM, Neil Mitchell wrote:


My guess is that the native code generator in Clean beats GHC, which
wouldn't be too surprising as GHC is currently rewriting its CPS and
Register Allocator to produce better native code.


I discussed this with Rinus Plasmeijer (chief designer of Clean) a  
couple of years ago, and if I remember correctly, he said that the  
native code generator in Clean was very good, and a significant  
reason why Clean produces (relatively) fast executables. I think he  
said that they had an assembly programming guru on their team.  
(Apologies to Rinus if I am mis-remembering the conversation).


At the time I was impressed by how fast Clean could recompile itself.

Cheers,
Bernie.


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread ajb

G'day all.

Quoting Derek Elkins [EMAIL PROTECTED]:


Probably RuntimeCompilation (or something like that and linked from the
Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage.


I didn't keep a copy, but if someone wants to retrieve it from the Google
cache and put it on the new wiki (under the new licence, of course), please
do so.

Cheers,
Andrew Bromage