[Haskell-cafe] Number 1, at least for now

2006-01-31 Thread Donald Bruce Stewart
Haskell is now ranked number 1 on the Great Language Shootout!

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all

Hooray :)

-- Don


Re: [Haskell-cafe] Known Unknowns

2006-01-31 Thread Donald Bruce Stewart
joelkoerwer:
> 
>Thanks Chris. I was actually asking about analyzing Core
>output in general. I'm well aware of the problems we're
>having with the nbody entry.
>I'm convinced my list based version can go faster than it is
>now. That's why I was asking if Don could put together a few
>notes on how to optimize inner loops using -ddump-simpl and
>the resulting Core code.

Here's a brief introduction. I intend to write up (on the performance page of
the wiki) a list of the things we've done to improve the shootout entries. N.B.
we're now the 3rd *fastest* language, behind C and only a little behind D (a C
variant)!

Consider the partial sums problem: 
wiki: http://www.haskell.org/hawiki/PartialSumsEntry
site: 
http://shootout.alioth.debian.org/gp4/benchmark.php?test=partialsums&lang=ghc&id=2

What follows is a discussion of the steps I took to improve the
performance of this code.

(1) Here's the naive translation of the Clean entry (which was fairly quick):
lots of math in a tight loop.
  
> import System; import Numeric
> 
> main = do n <- getArgs >>= readIO . head
>           let sums     = loop 1 n 1 0 0 0 0 0 0 0 0 0
>               fn (s,t) = putStrLn $ (showFFloat (Just 9) s []) ++ "\t" ++ t
>           mapM_ (fn :: (Double, String) -> IO ()) (zip sums names)
> 
> names = ["(2/3)^k", "k^-0.5", "1/k(k+1)", "Flint Hills", "Cookson Hills"
>         , "Harmonic", "Riemann Zeta", "Alternating Harmonic", "Gregory"]
> 
> loop k n alt a1 a2 a3 a4 a5 a6 a7 a8 a9
>     | k > n     = [ a1, a2, a3, a4, a5, a6, a7, a8, a9 ]
>     | otherwise = loop (k+1) n (-alt)
>                        (a1 + (2/3) ** (k-1))
>                        (a2 + k ** (-0.5))
>                        (a3 + 1 / (k * (k + 1)))
>                        (a4 + 1 / (k*k*k * sin k * sin k))
>                        (a5 + 1 / (k*k*k * cos k * cos k))
>                        (a6 + 1 / k)
>                        (a7 + 1 / (k*k))
>                        (a8 + alt / k)
>                        (a9 + alt / (2 * k - 1))

Compiled with "-O2". However, the performance is _really_ bad :/ Somewhere
greater than 128M heap, in fact eventually running out of memory on my laptop.
A classic space leak.
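The leak here is the usual one: the accumulator arguments are never demanded
until the very end, so they pile up as long chains of thunks. A minimal
illustration of the same effect (my example, not part of the shootout entry):

    import Data.List (foldl')

    -- foldl builds ~10^7 nested (+) thunks before anything forces them;
    -- foldl' forces the accumulator at each step and runs in constant space.
    leaky, strict :: Double
    leaky  = foldl  (+) 0 [1 .. 1e7]
    strict = foldl' (+) 0 [1 .. 1e7]

    main :: IO ()
    main = print strict   -- try `print leaky` to watch the heap grow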

(2) So look at the generated Core: "ghc -o naive Naive.hs -O2 -ddump-simpl | less"
And we find that our loop has the following type:

> $sloop_r2U6 :: GHC.Prim.Double#
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Prim.Double#
>             -> [GHC.Float.Double]

Hmm. Ok, I certainly don't want boxed doubles in such a tight loop.

(3) My next step is to encourage GHC to unbox this loop, by providing some
strictness annotations. Now the loop looks like this:

> loop k n alt a1 a2 a3 a4 a5 a6 a7 a8 a9
>     | () !k !n !False = undefined
>     | k > n     = [ a1, a2, a3, a4, a5, a6, a7, a8, a9 ]
>     | otherwise = loop (k+1) n (-alt)
>                        (a1 + (2/3) ** (k-1))
>                        (a2 + k ** (-0.5))
>                        (a3 + 1 / (k * (k + 1)))
>                        (a4 + 1 / (k*k*k * sin k * sin k))
>                        (a5 + 1 / (k*k*k * cos k * cos k))
>                        (a6 + 1 / k)
>                        (a7 + 1 / (k*k))
>                        (a8 + alt / k)
>                        (a9 + alt / (2 * k - 1))
>   where x ! y = x `seq` y

I've played a little game here, using ! for `seq`, reminiscent of the new
!-pattern proposal for strictness. Let's see how this compiles. Here's the Core:

> $sloop_r2Vh :: GHC.Prim.Double#
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Float.Double
>             -> GHC.Prim.Double#
>             -> [GHC.Float.Double]

Ok, so it unboxed one extra argument. Let's see if we can get them all unboxed.
Strictify all args, and GHC produces an inner loop of:

> $sloop_r2WS :: GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> GHC.Prim.Double#
>             -> [GHC.Float.Double]

Ah! perfect. Let's see ho
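For reference, "strictify all args" means extending the same seq trick to every
accumulator, roughly like this (a sketch in the same style, not verbatim from
the final shootout entry):

    loop k n alt a1 a2 a3 a4 a5 a6 a7 a8 a9
        | () !k !n !alt !a1 !a2 !a3 !a4 !a5 !a6 !a7 !a8 !a9 !False = undefined
        | k > n     = [ a1, a2, a3, a4, a5, a6, a7, a8, a9 ]
        | otherwise = loop (k+1) n (-alt)
                           (a1 + (2/3) ** (k-1))
                           (a2 + k ** (-0.5))
                           (a3 + 1 / (k * (k + 1)))
                           (a4 + 1 / (k*k*k * sin k * sin k))
                           (a5 + 1 / (k*k*k * cos k * cos k))
                           (a6 + 1 / k)
                           (a7 + 1 / (k*k))
                           (a8 + alt / k)
                           (a9 + alt / (2 * k - 1))
      where x ! y = x `seq` y   -- the guard is always False, but forces every argument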

Re: [Haskell-cafe] point-free simplification algorithm.

2006-01-31 Thread Stefan Holdermans

Nuno,

> I am also interested in knowing what zygomorphisms and paramorphisms
> are, and how you can build a gcata (a cata for all datatypes).


Regarding paramorphisms:

Consider the catamorphism on the type of Peano naturals,

  data Nat = Zero | Succ Nat

  cataNat :: a -> (a -> a) -> Nat -> a
  cataNat e f =  cata
where
  cata Zero = e
  cata (Succ k) = f (cata k)
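
As a quick usage example (mine, not from the original message), converting a
Peano natural to a machine Int is just such a catamorphism:

  toInt :: Nat -> Int
  toInt = cataNat 0 (+1)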

Have a look at the case for Succ. Note that cata only operates on the
immediate child k; it leaves the Succ node itself untouched. This
makes it quite hard to define structurally recursive functions that
also need access to the complete node. For instance, a clumsy way to
define the factorial on Peano naturals is


  fac' :: Nat -> Nat
  fac' =  snd . cataNat e f
where
      e        = (Zero, Succ Zero)
  f (n, k) = (Succ n, Succ n `times` k)
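
For these examples to actually run, times needs a definition; the original
message leaves it (and the addition it is built on) out, so here is one
plausible choice:

  -- Not part of the original message: one way to define addition and
  -- multiplication on Nat, so that fac' and fac can be evaluated.
  plus :: Nat -> Nat -> Nat
  plus Zero     m = m
  plus (Succ n) m = Succ (plus n m)

  times :: Nat -> Nat -> Nat
  times Zero     _ = Zero
  times (Succ n) m = plus m (times n m)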

Paramorphisms are like catamorphisms but provide access to the nodes  
themselves too:


  paraNat :: a -> (Nat -> a -> a) -> Nat -> a
  paraNat e g =  para
where
  para Zero   = e
  para n@(Succ k) = g n (para k)

The factorial can now be written as

  fac :: Nat -> Nat
  fac =  paraNat (Succ Zero) times

Another example is the paramorphism on lists:

  paraList :: b -> (a -> [a] -> b -> b) -> [a] -> b
  paraList e g =  para
where
  para []   = e
  para (x : xs) = g x xs (para xs)
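
A classic use of paraList (my example, not Stefan's) is insertion into a
sorted list, where the step function needs the untouched tail xs as well as
the recursive result:

  insert :: Ord a => a -> [a] -> [a]
  insert y = paraList [y] step
    where
      step x xs rec
        | y <= x    = y : x : xs   -- stop here: y goes in front of the untouched tail
        | otherwise = x : rec      -- otherwise keep the recursive result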

Regarding generic catamorphisms: check out (datatype) generic  
programming or polytypic programming.


HTH,

  Stefan


[Haskell-cafe] point-free simplification algorithm.

2006-01-31 Thread Nuno Pinto

Since it's my first post, hi all!

I have been reading this list for some days, but only now have I decided to
write something. I am researching how to build a computer-assisted
simplification algorithm for point-free expressions.


Example: if I give the program

    fst . snd . id . id

it will return fst . snd.
It will also show all the steps it took to reach that result.

To do this, the laws that drive the simplification (such as f . id = f) must
already be defined.


The program uses a datatype such as:

    data ExpTree v c = Var v | Term c [ExpTree v c]

    -- PF (with constructors Comp, Fst, Snd and Id) is my combinator type.
    exp :: ExpTree Char PF
    exp = Term Comp [ Term Comp [ Term Fst []
                                , Term Comp [Term Snd [], Term Id []] ]
                    , Term Id [] ]
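
To make the idea concrete, here is a hedged sketch of a single bottom-up
simplification pass over this datatype; the PF type and the particular rules
are my guesses at what the post has in mind, not code from it:

    data PF = Comp | Fst | Snd | Id
      deriving Show

    -- One bottom-up pass applying the laws  f . id = f  and  id . g = g.
    -- A real simplifier would iterate passes like this to a fixed point
    -- and record each step along the way, as described above.
    simplify :: ExpTree v PF -> ExpTree v PF
    simplify (Var v)     = Var v
    simplify (Term c ts) = rule (Term c (map simplify ts))
      where
        rule (Term Comp [f, Term Id []]) = f
        rule (Term Comp [Term Id [], g]) = g
        rule e                           = e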


I have previously defined the catamorphism, anamorphism and hylomorphism for
this datatype, so the recursion schemes are covered. I have also implemented a
generic calculator that, given a list of rewrite functions, tries to apply them
and returns the best simplification as well as the steps taken to reach it.


I am now trying to implement such functions in a "decent" way, but I am also
interested in the theory behind all this. I am looking for algorithms that
will help me simplify an expression fully and efficiently. I am also
interested in knowing what zygomorphisms and paramorphisms are, and how you
can build a gcata (a cata for all datatypes).


I have heard about "proof by exhaustion" and the four color theorem being used
to carry out mathematical proofs like these, but I have never seen an example
of this and would like to see one, if possible in Haskell.


If anyone can help me find what I'm looking for (a decent mathematical proof
algorithm), or just tell me what it is that I'm looking for :P, or even
contribute any ideas, it would be very much appreciated. :)


TY, and hi all :D :) and have fun coding Haskell :)

_
Don't just search. Find. Check out the new MSN Search! 
http://search.msn.click-url.com/go/onm00200636ave/direct/01/




Re: [Haskell-cafe] Haskell to call Microsoft COM (Dispatch)

2006-01-31 Thread Taral
On 1/30/06, Gracjan Polak <[EMAIL PROTECTED]> wrote:
> Is there any library to make Haskell call Microsoft COM functions using
> Dispatch? E.g I don't need the full COM binary functionality, scripting is
> enough. Google didn't seem to find anything interesting... beside rolling my
> own using FFI :)

A bit of searching turns up very little, but rolling your own for
simple support is not difficult. OleInitialize, CoCreateInstance,
class IUnknown, class IDispatch, class Variant...
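
As a rough illustration of the "roll your own" route (a sketch of mine, not
Taral's code), the very first step is just an FFI binding into ole32; the
IDispatch plumbing would then be built on top of similar imports:

    {-# LANGUAGE ForeignFunctionInterface #-}
    -- Minimal sketch: initialise OLE via the raw Win32 API.  The HRESULT
    -- alias and the import follow the documented C signature
    -- HRESULT OleInitialize(LPVOID); link with ole32 (e.g. ghc Main.hs -lole32).
    import Foreign.Ptr (Ptr, nullPtr)
    import Data.Int (Int32)

    type HRESULT = Int32

    foreign import stdcall unsafe "OleInitialize"
        c_OleInitialize :: Ptr () -> IO HRESULT

    main :: IO ()
    main = do
        hr <- c_OleInitialize nullPtr
        putStrLn $ "OleInitialize returned " ++ show hr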

--
Taral <[EMAIL PROTECTED]>
"Computer science is no more about computers than astronomy is about
telescopes."
-- Edsger Dijkstra


[Haskell-cafe] Re: Known Unknowns

2006-01-31 Thread Simon Marlow

Donald Bruce Stewart wrote:
> haskell:
> > Donald Bruce Stewart wrote:
> > > haskell:
> > > > There is a new combined benchmark, "partial sums", that subsumes several
> > > > earlier benchmarks and runs 9 different numerical calculations:
> > > >
> > > > http://haskell.org/hawiki/PartialSumsEntry
> > >
> > > Ah! I had an entry too. I've posted it on the wiki.  I was careful to
> > > watch that all loops are compiled into nice unboxed ones in the Core. It
> > > seems to run a little bit faster than your more abstracted code.
> > >
> > > Timings on the page.
> > >
> > > Also, -fasm seems to only be a benefit on the Mac, as you've pointed out
> > > previously. Maybe you could check the times on the Mac too?
> > >
> > > -- Don
> >
> > Yeah. I had not tried all the compiler options. Using -fasm is slower on
> > this for me as well.  I suspect your code will beat the entries that have
> > been posted so far, so I think you should submit it.
>
> ok, I'll submit it.
>
> > Also, could you explain how to check the Core (un)boxing in a note on the
> > (new?) wiki?  I would be interested in learning that trick.
>
> Ah, I just do: ghc A.hs -O2 -ddump-simpl | less
> and then read the Core, keeping an eye on the functions I'm interested
> in, and checking they're compiling to the kind of loops I'd write by
> hand. This is particularly useful for the kinds of tight numeric loops
> used in some of the shootout entries.

Some comments on this:  I couldn't get it to go any faster (1-2% is all, 
with some really ugly hacks).  It comes down to good low-level loop 
optimisation, which GHC doesn't do.


You could improve things by passing the array around rather than having 
it as a global, because then it can be unpacked - make sure you seq the 
array in the right places, check the Core to be sure.  I didn't try 
this, and it might only improve things marginally.


-fexcess-precision is required when compiling via C.  It should only be 
necessary on x86, but 6.4.1 and earlier require it on all platforms (we 
fixed that recently).


gcc -O2 is about 15% better than -fasm on x86_64 here.

Cheers,
Simon


Re: [Haskell-cafe] Haskell to call Microsoft COM (Dispatch)

2006-01-31 Thread Gracjan Polak
2006/1/30, Paul Moore <[EMAIL PROTECTED]>:
> On 1/30/06, Gracjan Polak <[EMAIL PROTECTED]> wrote:
> > Is there any library to make Haskell call Microsoft COM functions using
> > Dispatch? E.g I don't need the full COM binary functionality, scripting is
> > enough. Google didn't seem to find anything interesting... beside rolling my
> > own using FFI :)
>
> I believe that HDirect allows you to do this. Unfortunately, I don't
> know of a binary build of a recent version, and I have yet to manage
> to build it myself :-(

Latest version is from January 2004, hierarchical library layout has changed a
bit since then :( It doesn't compile for me, either. Anyway it seems to be a
bit of overkill for what I want to do.

> I'm not aware of any other libraries to do this.

Thanks for the pointer!

> Paul.

--
Gracjan


Re: [Haskell-cafe] Correct usage of MonadError, ErrorT?

2006-01-31 Thread Andrew Pimlott
On Tue, Jan 31, 2006 at 05:07:00PM +1300, Daniel McAllansmith wrote:
> On Tuesday 31 January 2006 16:32, Andrew Pimlott wrote:
> > What type would your mapError have?  The first idea that comes to mind
> > is
> >
> > mapError :: (MonadError e1 m1, MonadError e2 m2) =>
> > (e1 -> e2) -> m1 a -> m2 a
> >
> > The problem here is that m1 and m2 have no relation--m1 could be IO and
> > m2 (Either e2)!  Not surprisingly, we can't implement that.  
> 
> Yeah, I expected that would be difficult.
> Is it actually impossible or does it just result in an explosion of code in 
> the implementation, needing a clause to map each instance of MonadError to 
> each other instance of MonadError?
> Not that I'm suggesting that solution, it's just that I'm new to this stuff 
> and don't immediately see why it's impossible (as opposed to impractical).

As Cale said, you can always define your own class that contains some
operation you build mapError on top of.  It might even prove generally
useful.  One idea might be to define a version of MonadError where the
monad can be parametrized on any error type

class MonadError' m where
    throwError' :: e -> m e a
    catchError' :: m e a -> (e -> m e a) -> m e a

and see where you get.
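
For instance (a sketch of mine, not Andrew's), Either is an instance of such a
class directly, since its error type is just its first type parameter:

instance MonadError' Either where
    throwError' = Left
    catchError' (Left e)  h = h e
    catchError' (Right a) _ = Right a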

> Ok, so mapErrorT, or a convenient wrapper, is the right tool for this 
> situation?
>
> If so, doesn't using mapErrorT bind the general MonadError class to the 
> specific ErrorT instance?
> So,
> g :: (MonadError String m, MonadIO m) => Int -> m String
> would have to become
> g :: (MonadIO m) => Int -> ErrorT String m String
> and that change will propagate out through any function which calls g.

Right.  Note that to use the first signature, g must be written so that
it works for any (MonadError String m, MonadIO m).  But I could define

instance MonadError String MyMonad where ...
instance MonadIO MyMonad where ...

that _only_ supports String errors.  You want to call f, which uses Int
errors, from g.  But MyMonad cannot take an Int error, so you're stuck.

I agree that it would be nicer to keep your definition general.  It
doesn't seem possible using MonadError, but maybe with MonadError' or
some other clever idea, you could make it work.
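
For the concrete ErrorT case asked about above, the "convenient wrapper" over
mapErrorT might look like this (a sketch assuming mtl's Control.Monad.Error;
the name mapError is mine):

import Control.Monad.Error (ErrorT, mapErrorT)
import Control.Monad (liftM)

-- Translate the error type of an ErrorT computation by rewriting the
-- underlying Either value; the inner monad m stays the same.
mapError :: Monad m => (e1 -> e2) -> ErrorT e1 m a -> ErrorT e2 m a
mapError f = mapErrorT (liftM (either (Left . f) Right))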

> Is it good advice for a new haskeller to stick to ErrorT in functions with 
> errors?

I'm not sure there's any "common wisdom" about using MonadError.  I
don't get the sense that it's used that much, which seems a bit of a
shame to me.

Andrew