Re: [Haskell-cafe] Threading and Multicore Computation

2009-03-03 Thread Don Stewart
allbery:
> On 2009 Mar 3, at 12:31, mwin...@brocku.ca wrote:
>> In both runs the same computations are done (sequentially resp.
>> parallel), so the gc should be the same. But still using 2 cores is
>> much slower than using 1 core (same program - no communication).
>
> The same GCs are done, but GC has to be done on a single core  
> (currently; parallel GC is in development) so you will see a lot more  
> lock contention when the GC kicks in.
>

Assuming he is using GHC 6.10, the parallel GC is enabled by default
when you use -Nn where n > 1. That is, -N4 will use -g4 (4 cores to
collect). So GC should be the same or a little faster.
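
A quick sanity check that the parallel collector is actually in play is to
look at the RTS statistics. A minimal session sketch (program name from the
thread; -sstderr prints GC and mutator times):

    $ ghc -O2 -threaded --make FirstFork.hs
    $ ./FirstFork +RTS -N2 -sstderr

If GC time dwarfs mutator time in that output, the slowdown is in the
collector rather than in the Haskell code itself.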

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Threading and Multicore Computation

2009-03-03 Thread Don Stewart
mwinter:
> Hi,
> 
> I tried to get into concurrent Haskell using multiple cores. The program below
> creates 2 tasks in different threads, executes them, synchronizes the threads
> using MVar () and calculates the time needed.
> 
> import System.CPUTime
> import Control.Concurrent
> import Control.Concurrent.MVar
> 
> myTask1 = do
>     return $! fac 6
>     print "Task1 done!"
>   where fac 0 = 1
>         fac n = n * fac (n-1)
>  
> myTask2 = do
>     return $! fac' 6 1 1
>     print "Task2 done!"
>   where fac' n m p = if m > n then p else fac' n (m+1) (m*p)
> 
> main = do
>  mvar <- newEmptyMVar
>  pico1 <- getCPUTime
>  forkIO (myTask1 >> putMVar mvar ())
>  myTask2
>  takeMVar mvar
>  pico2 <- getCPUTime
>  print (pico2 - pico1)
> 
> 
> I compiled the code using
> $ ghc FirstFork.hs -threaded
> and executed it by
> $ main +RTS -N1   resp.   $ main +RTS -N2
> I use GHC 6.8.3 on Vista with an Intel Dual Core processor. Instead of getting
> a speed up when using 2 cores I get a significant slowdown, even though there
> is no sharing in my code above (at least none I am aware of; BTW, that was the
> reason I used 2 different local factorial functions). On my computer the 1-core
> version takes about 8.3sec and the 2-core version 12.8sec. When I increase the
> numbers from 6 to 10 the time difference gets even worse (30sec vs 51sec).
> Can anybody give me an idea what I am doing wrong?


If you just want to check that your machine can do multicore, here's the
"hello world" I've been using:

import Control.Parallel

main = a `par` b `par` c `pseq` print (a + b + c)
  where
    a = ack 3 10
    b = fac 42
    c = fib 34

fac 0 = 1
fac n = n * fac (n-1)

ack 0 n = n+1
ack m 0 = ack (m-1) 1
ack m n = ack (m-1) (ack m (n-1))

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

To be run as:

$ ghc -O2 -threaded --make hello.hs 
[1 of 1] Compiling Main ( hello.hs, hello.o )
Linking hello ...

$ time ./hello +RTS -N2 
1405006117752879898543142606244511569936384005711076
./hello +RTS -N2  2.29s user 0.01s system 152% cpu 1.505 total
  

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] interaction between ghci and cudaBLAS library

2009-03-02 Thread Don Stewart
seb:
> 
> 
> Don Stewart-2 wrote:
> > 
> > 
> > Do you get the same problem in compiled code? (GHCi is generally for
> > exploratory work only).
> > 
> 
> If I create an executable and run it non-interactively, it works fine:
> 
> $ ghc -O2 --make -threaded main.hs cublas.hs -lcublas -L${CUDA}/lib
> 
> No matter whether it is compiled or interpreted, it blocks in ghci
> (interactively); the threading option makes no difference in either case.
> 

GHCi doesn't use the threaded runtime though.  So given that:

"To allow foreign calls to be made without blocking all the Haskell
threads (with GHC), it is only necessary to use the -threaded option
when linking your program, and to make sure the foreign import is
not marked unsafe. "

So I think this is expected?
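
For the compiled route the crucial details are that the program is linked
with -threaded and that the foreign import is not marked unsafe. A minimal
sketch of such a binding (the Haskell-side module and names are illustrative;
cublasInit itself is the CUDA BLAS init entry point):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Cublas (cublasInit) where

    import Foreign.C.Types (CInt)

    -- 'safe' (the default) lets the other Haskell threads keep running
    -- for the duration of the call; 'unsafe' would block them.
    foreign import ccall safe "cublasInit"
        cublasInit :: IO CInt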

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] interaction between ghci and cudaBLAS library

2009-03-02 Thread Don Stewart
seb:
> 
> In my efforts to integrate this library into Haskell (I am working on OS X
> 10.5.6 with ghc-6.10.1 and CUDA 2.0) I am getting a bad interaction between
> the threads in ghci - when I call the library init function via the FFI,
> ghci will block in __semwait_signal. Of course if I build an executable
> which I guess has only one thread then all is well.
> 
> AFAIK the CUDA library is re-entrant (or threadsafe) in that it will
> initialize a device context for each thread that calls it. I guess that
> might be part of the problem. Certainly there is a call to
> _pthread_getspecific in that library.
> 
> This is a show stopper for me as I want to use GHCi to call BLAS routines on
> the device (this lends itself very nicely to a monadic approach - leaving
> the matrices on the device until we are done applying sequential
> computations - and only then bringing them back into the "real" world).
> 
> Since there is this association between the device context and the calling
> thread - is there a way to get a handle on the threading in ghci? (or just
> have a single thread)  
> 
> But why are we blocking? I would have expected completion or is ghci smart
> enough to prevent any non-deterministic behaviour that the current setup
> would entail?
> 
> Any ideas or suggestions of how to proceed with this?  
> The final work should it be successful will be offered to the community as a
> basis for doing high performance linear algebra on CUDA devices as well as
> get my haskell up to speed as a side effect :) 
> 

Do you get the same problem in compiled code? (GHCi is generally for
exploratory work only).

E.g.

ghc -O2 --make 
or
ghc -O2 --make -threaded 

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] possible memory leak in uvector 0.1.0.3

2009-03-02 Thread Don Stewart
manlio_perillo:
> Hi.
>
> In the "help optimizing memory usage for a program" I discovered some  
> interesting things:
>
>
> 2) UArr from uvector leaks memory.
>I'm rather sure about this.

Note it was just allocating more than was required, it wasn't "leaking"
it in any sense (i.e. losing track of the memory). 

> Using this version memory usage is, finally, 643 MB!
> (and execution is a bit faster, too).
 
Yep, known bug, and closed last month.
 
> The other program, with a lot of array concatenations, still eats a lot  
> of memory...

Concatenating arrays generally copies data. Which uses memory.
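
If the concatenations are built up pairwise in a loop, the copying cost is
quadratic; a single flat concatenation at the end copies each element once.
A sketch of the difference, written with ByteString only because its API is
stable here (UArr concatenation behaves the same way):

    import qualified Data.ByteString as B

    -- each append recopies the whole accumulated prefix: O(n^2) bytes moved
    quadratic :: [B.ByteString] -> B.ByteString
    quadratic = foldl B.append B.empty

    -- one pass to sum the lengths, one allocation, each byte copied once
    linear :: [B.ByteString] -> B.ByteString
    linear = B.concat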
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] missing support for NFData in uvector package

2009-02-28 Thread Don Stewart
manlio_perillo:
> Today I noticed that there is no instance declaration for NFData, in the  
> uvector package.
>
> The definition is quite simple:
>
> instance NFData a => NFData (UArr a) where
> -- NOTE: UArr is already strict
> rnf array = array `seq` ()
>
> but it is important.
>
> In my program I was using a tuple of two arrays, and plain `seq` did
> not work.

Contact the author with a patch :-)
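
In the meantime the instance can live in your own code as an orphan; something
like the following should do (a sketch, assuming NFData comes from
Control.Parallel.Strategies, where it lives until it gets its own package, and
that UArr is imported from uvector's Data.Array.Vector):

    import Control.Parallel.Strategies (NFData(..))
    import Data.Array.Vector (UArr)

    instance NFData a => NFData (UArr a) where
        rnf arr = arr `seq` ()   -- UArr is already strict in its elements

    -- plain `seq` on a pair only forces the (,) constructor;
    -- rnf forces both components as well
    forceBoth :: (NFData a, NFData b) => (a, b) -> (a, b)
    forceBoth p = rnf p `seq` p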

-- Don (who thinks NFData should be sorted out and in base)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Left fold enumerator - a real pearl overlooked?

2009-02-28 Thread Don Stewart
jwlato:
> Hello Günther,
> 
> I think the largest reason Haskellers don't use left-fold enumerators
> is that there isn't a ready-to-use package on Hackage.  Oleg's code is
> extremely well commented and easy to follow, but it's not cabalized.
> 
> In addition to Takusen, Johan Tibbe's hyena application server uses
> enumerators for IO:
> http://github.com/tibbe/hyena/tree/master
> 
> There is a darcs repo of a cabalized iteratee package available at
> http://inmachina.net/~jwlato/haskell/iteratee/
> This is essentially Oleg's code, slightly modified and reorganized.
> If anyone is interested in using left-fold enumerators for IO, please
> give it a look and let me know what you think.  I'd like to put this
> on hackage in about a week or so, if possible.  I would especially
> appreciate build reports.
> 
> There are a few iteratee/enumerator design questions that remain,
> which Oleg and others would like to explore more fully.  The results
> of that research will likely find their way into this library.


I agree. There's no left-fold 'bytestring' equivalent. So it remains a
special purpose technique.
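
For anyone who hasn't seen the idea, the core of a left-fold enumerator is
tiny; a sketch of the shape over chunked file IO (not Oleg's or hyena's API,
just the pattern):

    import qualified Data.ByteString as B
    import System.IO

    -- the "iteratee" returns Left to stop early, Right to keep going
    enumFile :: FilePath -> (a -> B.ByteString -> IO (Either a a)) -> a -> IO a
    enumFile path step seed = withFile path ReadMode (go seed)
      where
        go acc h = do
            chunk <- B.hGet h 4096
            if B.null chunk
                then return acc                    -- EOF
                else do
                    r <- step acc chunk
                    case r of
                        Left  done -> return done  -- consumer finished early
                        Right acc' -> go acc' h

The point is that the handle never escapes the enumerator, so resource
management stays in one place.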

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] memory issues

2009-02-27 Thread Don Stewart
bulat.ziganshin:
> Hello Rogan,
> 
> Saturday, February 28, 2009, 1:18:47 AM, you wrote:
> 
> > data Block = Block {
> >   offset::Integer
> > , size::Integer
> > } deriving (Eq)
> 
> try
>!offset::Integer
>  , !size::Integer
> 

offset :: !Integer

And possibly just using {-# UNPACK #-} !Int64 would be ok?
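
Concretely, something like this (a sketch; assumes the offsets fit in Int64):

    import Data.Int (Int64)

    data Block = Block
        { offset :: {-# UNPACK #-} !Int64
        , size   :: {-# UNPACK #-} !Int64
        } deriving (Eq)

With -O the two fields are then stored unboxed in the constructor rather than
as pointers to heap-allocated Integers.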

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] memory issues

2009-02-27 Thread Don Stewart
creswick:
> On Fri, Feb 27, 2009 at 2:20 PM, Don Stewart  wrote:
> > creswick:
> >> \begin{code}
> >> -- Compiled with:
> >> -- $ ghc --make offsetSorter.hs
> >
> > YIKES!! Use the optimizer!
> >
> >    ghc -O2 --make
> 
> Ah, that did drastically cut the amount of time it takes to run out of
> memory (down to 1:23), but unfortunately I can't see any other
> improvements -- the memory consumed seems to be about the same.
> (granted, I have no indication of progress -- it may be getting
> significantly more done, but it's not quite over the hump and
> producing output yet.)
> 

Ok. Now, profile! (ghc -O2 -prof -auto-all --make)

http://book.realworldhaskell.org/read/profiling-and-optimization.html
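
The usual loop, with the standard profiling flags (file names from your
earlier mail; the input path is illustrative):

    $ ghc -O2 -prof -auto-all --make offsetSorter.hs
    $ ./offsetSorter offsets.txt +RTS -p        # time/alloc profile -> offsetSorter.prof
    $ ./offsetSorter offsets.txt +RTS -hc -RTS  # heap profile by cost centre
    $ hp2ps -c offsetSorter.hp                  # render the heap profile

The .prof file and the heap graph usually point straight at the offending
thunks.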
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] memory issues

2009-02-27 Thread Don Stewart
creswick:
> First off, my apologies for breaking etiquette, if/when I do -- I've
> only just joined Haskell-cafe, and I'm quite new to Haskell.
> 
> I have recently been trying to process a large data set (the 2.8tb
> wikipedia data dump), and also replace my scripting needs with haskell
> (needs that have previously been filled with bash, perl, and bits of
> Java).  Last week I needed to do some quick scanning of the (7zipped)
> wikipedia dump to get a feel for the size of articles, and from that
> determine the best way to process the whole enchilada... cutting to
> the chase, I ended up with a file consisting of byte offsets and lines
> matched by a grep pattern (a 250mb file).  Specifically, 11m lines of:
> 
> 1405:  
> 14062:  
> 15979:  
> 18665:  
> 920680797:  
> ..
> 2807444041476:  
> 2807444043623:  
> 
> I needed to know how large the largest  elements were, so I'd
> know if they would fit in memory, and some idea of how many would
> cause swapping, etc. So, I wrote a simple app in haskell (below) to
> find the sizes of each  and sort them.  Unfortunately, it
> consumes an absurd amount of memory (3+gb) and dies with an
> out-of-memory error.  Given the input size, and what it is doing, this
> seems ridiculously high -- can anyone help me understand what is going
> on, and how I can prevent this sort of rampant memory use?
> 
> I can provide a link to the input file if anyone wants it, but it
> doesn't seem particularly useful, given the simplicity and size.
> Since I needed to get results fairly quickly, I've re-implemented this
> in java, so that reference implementation is also available should
> anyone want it (the approach that is most similar to the haskell
> requires a 1.4gb heap, but by streaming the string->long parsing, that
> requirement drops to ~600mb, which seems pretty reasonable, since the
> *output* is 215mb.)
> 
> Thanks!
> Rogan
> 
> \begin{code}
> -- Compiled with:
> -- $ ghc --make offsetSorter.hs


YIKES!! Use the optimizer!

ghc -O2 --make


-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] differences between Data.Array and Data.Vector

2009-02-27 Thread Don Stewart
manlio_perillo:
> Hi.
>
> In Hackage there are some packages named "*array*", and others named  
> "*vector*".
>
> What are the differences?
>
>
> Is available a guide to the various data structures available in Haskell?
>

The vector packages tend to be either easily growable, or easily
fusible, or both.

There's no clear convention though.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Trouble building ArrayRef 0.1.3

2009-02-27 Thread Don Stewart
bulat.ziganshin:
> Hello Mads,
> 
> Friday, February 27, 2009, 6:27:52 PM, you wrote:
> 
> i made this lib back in the ghc 6.4 days, so it probably has a lot of
> compatibility problems here and there :( i myself still use ghc 6.6 :)
> 
> > When I try to build ArrayRef 0.1.3 I get:
> 
> > Control/Concurrent/LockingBZ.hs:159:54:
> > Ambiguous type variable `e' in the constraint:
> >   `Exception e'
> > arising from a use of `throw'
> >  at Control/Concurrent/LockingBZ.hs:159:54-60
> > Probable fix: add a type signature that fixes these type variable(s)
> 
> 

It needs --constraint='base<4' when compiling.

Here, with cabal and ghc 6.10:

$ cabal install --constraint='base<4' arrayref
Resolving dependencies...
Downloading ArrayRef-0.1.3...
Configuring ArrayRef-0.1.3...
Preprocessing library ArrayRef-0.1.3...
Building ArrayRef-0.1.3...
[ 1 of 24] Compiling GHC.Unboxed  ( GHC/Unboxed.hs,
dist/build/GHC/Unboxed.o )
[ 2 of 24] Compiling Data.ArrayBZ.Internals.IArray (
Data/ArrayBZ/Internals/IArray.hs,
dist/build/Data/ArrayBZ/Internals/IArray.o )

...

Installing library in
/home/dons/.cabal/lib/ArrayRef-0.1.3/ghc-6.10.1
Registering ArrayRef-0.1.3...
Reading package info from "dist/installed-pkg-config" ... done.
Writing new package config file... done.

Thanks to Gwern, I believe, for packaging this up for hackage.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sparse vector operations

2009-02-27 Thread Don Stewart
You might be duplicating the functionality of an existing library.

There are existing libraries for vectors (though not sure if blas
supports sparse vectors well?).

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/blas
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hmatrix
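
That said, the IntMap route you describe boils down to a couple of one-liners;
a sketch of the two operations that usually matter most (names are mine, not
from your module):

    import qualified Data.IntMap as IM

    type SparseVector = IM.IntMap Double

    -- dot product: only the indices present in both vectors contribute
    dot :: SparseVector -> SparseVector -> Double
    dot u v = IM.fold (+) 0 (IM.intersectionWith (*) u v)

    -- axpy: a*x + y, keeping the union of the nonzeros
    axpy :: Double -> SparseVector -> SparseVector -> SparseVector
    axpy a x y = IM.unionWith (+) (IM.map (a *) x) y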

-- Don

grzegorz.chrupala:
> 
> Hi all,
> In a couple of my projects I have needed to perform operations on (very)
> sparse vectors.
> I came up with the attached simple module which defines a typeclass and
> implements instances for
> simple and nested (Int)Maps.
> Is this the right way to go about it? Am I reinventing some wheels?
> Comments welcome.
> Best,
> --
> 
> Grzegorz
> 
> http://www.nabble.com/file/p22247686/SparseVector.hs SparseVector.hs 
> -- 
> View this message in context: 
> http://www.nabble.com/Sparse-vector-operations-tp22247686p22247686.html
> Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance Issue

2009-02-26 Thread Don Stewart
james.swaine:
> i'm implementing a benchmark which includes a detailed specification for a
> random number generator.  for any of the kernels outlined in the benchmark, i
> might have to generate a set of random numbers R, which has a length n, using
> the following formulas:
> 
> R[k] = ((2^-46)(X[k])) mod 2^46, where
> 
> X[k] = (a^k)s
> 
> where the values of a and s are constant and defined below. 
> many of the kernels in the benchmark require a large number of randoms to be
> generated (in the tens of millions).  when i invoke the following getRandAt
> function that many times to build up a list, evaluation of the list takes
> forever (somewhere between 5 and 10 minutes).  i've tried optimizing this
> several different ways, with no luck.  i though i might post my code here and
> see if anyone notices anything i'm doing wrong that might be causing such a
> large bottleneck:
> 
> --constants
> a :: Int64
> a = 5^13   
> 
> divisor :: Int64
> divisor = 2^46
> 
> multiplier :: Float
> multiplier = 2**(-46)
> 
> 
> --gets r[k], which is the value at the kth
> --position in the overall sequence of
> --pseudorandom numbers
> getRandAt :: Int64 -> Int64 -> Float
> getRandAt 0 seed = multiplier * (fromIntegral seed)
> getRandAt k seed = multiplier * (fromIntegral x_next)
>   where
>     x_prev = (a^k * seed) `mod` divisor
>     x_next = (a * x_prev) `mod` divisor
> 
> thanks all in advance for your help!


Using ghc -O2 --make

There's nothing wrong with this code, really:

Z.$wgetRandAt :: Int# -> Int# -> Float#

and an inner loop of:

Z.$w$j :: Int# -> Float#
Z.$w$j =
  \ (w_sHx :: Int#) ->
case Z.^1 Z.lit1 Z.lvl of w1_XFs { I64# ww_XFv ->
case ww_XFv of wild_aFB {
  __DEFAULT ->
case minBound3 of wild1_aFC { I64# b1_aFE ->
case ==# w_sHx b1_aFE of wild2_aFG {
  False ->
case modInt# w_sHx wild_aFB of wild3_aFJ { __DEFAULT ->
timesFloat#
  (powerFloat# __float 2.0 __float -46.0)
  (int2Float# wild3_aFJ)
};
  True ->
case wild_aFB of wild3_aFM {
  __DEFAULT ->
case modInt# w_sHx wild3_aFM of wild4_aFN { __DEFAULT ->
timesFloat#
  (powerFloat# __float 2.0 __float -46.0)
  (int2Float# wild4_aFN)
};
  (-1) ->
overflowError
`cast` (CoUnsafe (forall a_aFS. a_aFS) Float#
:: forall a_aFS. a_aFS ~ Float#)
}
}
};
  0 ->
divZeroError
`cast` (CoUnsafe (forall a_aFT. a_aFT) Float#
:: forall a_aFT. a_aFT ~ Float#)
}

Which is just fine.

Inlining those constants explicitly might be a good idea, then we get an outer 
loop of:

Z.$wgetRandAt :: Int# -> Int# -> Float#
Z.$wgetRandAt =
  \ (ww_sHG :: Int#) (ww1_sHK :: Int#) ->
case ww_sHG of wild_B1 {
  __DEFAULT ->
case Z.lvl3 of wild1_aEd { I64# x#_aEf ->
case Z.^ Z.a (I64# wild_B1)
of wild2_XFe { I64# x#1_XFh ->
case Z.lvl1 of w_aE7 { I64# ww2_aE9 ->
case ww2_aE9 of wild3_aFB {
  __DEFAULT ->
case minBound3 of wild11_aFC { I64# b1_aFE ->
let {
  ww3_aFz [ALWAYS Just L] :: Int#

  ww3_aFz = *# x#1_XFh ww1_sHK } in
case ==# ww3_aFz b1_aFE of wild21_aFG {
  False ->
case modInt# ww3_aFz wild3_aFB
of wild31_aFJ { __DEFAULT ->
Z.$w$j (*# x#_aEf wild31_aFJ)
};
  True ->
case wild3_aFB of wild31_aFM {
  __DEFAULT ->
case modInt# ww3_aFz wild31_aFM
of wild4_aFN { __DEFAULT ->
Z.$w$j (*# x#_aEf wild4_aFN)
};
  (-1) ->
overflowError
`cast` (CoUnsafe (forall a_aFS. a_aFS) Float#
:: forall a_aFS. a_aFS ~ Float#)
}
  0 ->
divZeroError
`cast` (CoUnsafe (forall a_aFT. a_aFT) Float#
:: forall a_aFT. a_aFT ~ Float#)
}
}
}
};
  0 ->
timesFloat#
  (powerFloat# __float 2.0 __float -46.0)
  (int2Float# ww1_sHK)
}

which is the fast path, then error / bounds checking.

this looks perfectly acceptable.

What does sound troublesome is using lazy lists .. that's more likely to be the 
bottleneck.
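
That is, rather than building a [Float] and then walking it, drive the
consumer directly with a strict counter. A sketch (assumes the getRandAt and
Int64 definitions from your module; summing stands in for whatever the kernel
does with the values):

    {-# LANGUAGE BangPatterns #-}

    sumRands :: Int64 -> Int64 -> Float
    sumRands n seed = go 0 0
      where
        go !k !acc
            | k >= n    = acc
            | otherwise = go (k + 1) (acc + getRandAt k seed)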

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance question

2009-02-26 Thread Don Stewart
vandijk.roel:
> I replaced the standard random number generated with the one from
> mersenne-random. On my system this makes the resulting program about
> 14 times faster than the original. I also made a change to
> accumulateHit because it doesn't need to count to total. That is
> already known.
> 
> {-# LANGUAGE BangPatterns #-}
> 
> import System( getArgs )
> import Data.List( foldl' )
> 
> import System.Random.Mersenne
> 
> pairs :: [a] -> [(a,a)]
> pairs [] = []
> pairs (x:[]) = []
> pairs (x:y:rest) = (x, y) : pairs rest
> 
> isInCircle :: (Double, Double) -> Bool
> isInCircle (x,y) = sqrt (x*x + y*y) <= 1.0
> 
> accumulateHit :: Int -> (Double, Double) -> Int
> accumulateHit (!hits) pair | isInCircle pair = hits + 1
>                            | otherwise       = hits
> 
> countHits :: [(Double, Double)] -> Int
> countHits ps = foldl' accumulateHit 0 ps
> 
> monteCarloPi :: Int -> [(Double, Double)] -> Double
> monteCarloPi n xs = 4.0 * fromIntegral hits / fromIntegral n
>   where hits = countHits $ take n xs
> 
> main = do
>   args <- getArgs
>   let samples = read $ head args
> 
>   randomNumberGenerator <- getStdGen
>   randomNumbers <- randoms randomNumberGenerator
> 
>   let res = monteCarloPi samples $ pairs randomNumbers
>   putStrLn $ show $ res


But note the lazy list of Double pairs, so the inner loop still looks like this 
though:

$wlgo :: Int# -> [(Double, Double)] -> Int

$wlgo =
  \ (ww_s1pv :: Int#)
(w_s1px :: [(Double, Double)]) ->
case w_s1px of wild_aTl {
  [] -> I# ww_s1pv;
  : x_aTp xs_aTq ->
case x_aTp of wild1_B1 { (x1_ak3, y_ak5) ->
case x1_ak3 of wild2_aX8 { D# x2_aXa ->
case y_ak5 of wild3_XYs { D# x3_XYx ->
case <=##
   (sqrtDouble#
  (+##
 (*## x2_aXa x2_aXa) (*## x3_XYx x3_XYx)))
   1.0
of wild4_X1D {
  False -> $wlgo ww_s1pv xs_aTq;
  True -> $wlgo (+# ww_s1pv 1) xs_aTq
}

while we want to keep everything in registers with something like:

Int# -> Double# -> Double# -> Int#

So we'll be paying a penalty to force the next elem of the list (instead of
just calling the Double generator).  This definitely has an impact on 
performance.

$ ghc-core B.hs -O2 -fvia-C -optc-O3 -fexcess-precision -optc-march=core2 
-funbox-strict-fields

$ time ./B 1000 
   
3.1407688
./B 1000  2.41s user 0.01s system 99% cpu 2.415 total


Now, what if we just rewrote that inner loop directly to avoid intermediate 
stuff? That'd give
us a decent lower bound.

{-# LANGUAGE BangPatterns #-}

import System.Environment
import System.Random.Mersenne

isInCircle :: Double -> Double -> Bool
isInCircle x y = sqrt (x*x + y*y) <= 1.0

countHits :: Int -> IO Int
countHits lim = do
    g <- newMTGen Nothing
    let go :: Int -> Int -> IO Int
        go !throws !hits
            | throws >= lim = return hits
            | otherwise     = do
                x <- random g   -- use mersenne-random-pure64 to stay pure!
                y <- random g
                if isInCircle x y
                    then go (throws+1) (hits+1)
                    else go (throws+1) hits
    go 0 0

monteCarloPi :: Int -> IO Double
monteCarloPi n = do
    hits <- countHits n
    return $ 4.0 * fromIntegral hits / fromIntegral n

main = do
    [n] <- getArgs
    res <- monteCarloPi (read n)
    print res

And now the inner loop looks like:

  $wa_s1yW :: Int#
  -> Int#
  -> State# RealWorld
  -> (# State# RealWorld, Int #)

Pretty good. Can't avoid the Int boxed return (and resulting heap check) due to 
use of IO monad. 
But at least does away with heap allocs in the inner loop!

How does it go:

$ ghc-core A.hs -O2 -fvia-C -optc-O3 -fexcess-precision -optc-march=core2 
-funbox-strict-fields

$ time ./A 1000
3.1412564
./A 1000  0.81s user 0.00s system 99% cpu 0.818 total

Ok. So 3 times faster. Now the goal is to recover the high level version.
We have many tools to employ: switching to mersenne-random-pure64 might help
here. And seeing if you can fuse filling a uvector with randoms, and folding
over it...
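
For reference, the pure variant keeps the generator out of IO entirely; a
sketch against mersenne-random-pure64's PureMT API (newPureMT, randomDouble):

    {-# LANGUAGE BangPatterns #-}

    import System.Random.Mersenne.Pure64

    countHitsPure :: Int -> PureMT -> Int
    countHitsPure lim g0 = go 0 0 g0
      where
        go !throws !hits g
            | throws >= lim = hits
            | otherwise     =
                let (x, g')  = randomDouble g
                    (y, g'') = randomDouble g'
                in if x*x + y*y <= 1.0          -- sqrt not needed for the test
                       then go (throws + 1) (hits + 1) g''
                       else go (throws + 1) hits g''

    main :: IO ()
    main = do
        g <- newPureMT
        let n = 10000000
        print (4 * fromIntegral (countHitsPure n g) / fromIntegral n :: Double)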

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: seg-fault in mersenne-random with SSE2 (was Performance question)

2009-02-26 Thread Don Stewart
Alistair.Bayley:
> > From: haskell-cafe-boun...@haskell.org 
> > [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Roel van Dijk
> > 
> > I replaced the standard random number generated with the one from
> > mersenne-random. On my system this makes the resulting program about
> > 14 times faster than the original. I also made a change to
> > accumulateHit because it doesn't need to count to total. That is
> > already known.
> 
> 
> I tried this too, but got a seg fault (!), so I stripped it back to a
> small test program. This is with mersenne-random, setup configured with
> -fuse_sse2:

This in the past has always meant: wrong architecture (or GCC can't handle sse2 
on your system)
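
Two quick checks, if you want to confirm (commands are illustrative; the first
is Linux-specific, the second uses cabal's flag-negation syntax to turn the
SSE2 path off):

    $ grep sse2 /proc/cpuinfo
    $ cabal install mersenne-random --flags="-use_sse2"

If the non-SSE2 build runs cleanly, it's the SSE2 code path (or your GCC's
handling of it) that is at fault.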

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Performance question

2009-02-26 Thread Don Stewart
Ben.Lippmeier:
>
> On 26/02/2009, at 9:27 PM, hask...@kudling.de wrote:
>>
>> Currently i can only imagine defining a data type in order to use
>> unboxed Ints instead of the accumulator tuple.
>
> That would probably help a lot. It would also help to use two separate  
> Double# parameters instead of the tuple.

data T = T !Double !Double

should be enough.
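
That is, the accumulating fold then looks something like this (a sketch, with
a mean-of-list loop standing in for the original code; compile with -O2):

    import Data.List (foldl')

    data T = T {-# UNPACK #-} !Double {-# UNPACK #-} !Double

    mean :: [Double] -> Double
    mean xs = s / n
      where
        T s n = foldl' step (T 0 0) xs
        step (T acc len) x = T (acc + x) (len + 1)

The strict, unpacked fields mean the worker loop carries two raw doubles
instead of allocating a lazy tuple per step.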
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: package code duplication

2009-02-25 Thread Don Stewart
wasserman.louis:
> There was a question recently about being allowed to get into package
> internals, and I had a question.  I want to use uvector's stream internals in
> ways that the exposed methods don't permit, but I don't especially want to use
> another package (e.g. vector, which does expose its internals) or reimplement
> my own stream fusion.  Would it make sense to duplicate uvector's internals,
> copying licensing information and other stuff of course, inside my package? 
> It's a suboptimal solution, but it seems better than the alternative...

I think just exposing them as a .Internal makes more sense, and
is my preferred route (a la Data.ByteString.Internal)
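
That is, keep the stream machinery in the package but expose it under an
explicitly internal name, so the .cabal file gains something like (module
names illustrative):

    exposed-modules:
        Data.Array.Vector
        Data.Array.Vector.Internal

with the .Internal module re-exporting the stream types and fusion
combinators, and documented as carrying no stability guarantees.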

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Binary poor read performance

2009-02-24 Thread Don Stewart
jnf:
> 
> 
> wren ng thornton wrote:
> > 
> > If you have many identical strings then you will save lots by memoizing 
> > your strings into Integers, and then serializing that memo table and the 
> > integerized version of your data structure. The amount of savings 
> > decreases as the number of duplications decrease, though since you don't 
> > need the memo table itself you should be able to serialize it in a way 
> > that doesn't have much overhead.
> > 
> 
> I had problems with the size of the allocated heap space after serializing 
> and loading data with the binary package. The reason was that
> binary does not support sharing of identical elements. I considered a
> restricted solution for strings and certain other data types first, but
> came up with a generic solution in the end.
> (I did it just last weekend).

And this is exactly the intended path -- that people will release their
own special instances for doing more elaborate parsing/printing tricks!

  
> I put the Binary monad in a state transformer with maps for memoization:
> type PutShared = St.StateT (Map Object Int, Int) PutM ()
> type GetShared = St.StateT (IntMap Object) Bin.Get
> 
> In addition to the standard get and put methods:
> 
> class (Typeable α, Ord α, Eq α) ⇒ BinaryShared α where
>     put :: α → PutShared
>     get :: GetShared α
> 
> I added putShared and getShared methods with memoization:
> 
> putShared :: (α → PutShared) → α → PutShared
> getShared :: GetShared α → GetShared α
> 
> For types for which I don't want memoization I can either refer to the
> underlying binary monad for primitive types, e.g.:
> 
> instance BinaryShared Int where
>     put = lift ∘ Bin.put
>     get = lift Bin.get
> 
> or stay in the BinaryShared monad for types of which I may memoize
> components, e.g.:
> 
> instance (BinaryShared a, BinaryShared b) ⇒ BinaryShared (a,b) where
>     put (a,b) = put a ≫ put b
>     get       = liftM2 (,) get get
> 
> And for types for which I want memoization, I wrap it with putShared and
> getShared, e.g.:
> 
> instance BinaryShared a ⇒ BinaryShared [a] where
>     put = putShared (λl → lift (Bin.put (length l)) ≫ mapM_ put l)
>     get = getShared (do
>         n ← lift (Bin.get :: Bin.Get Int)
>         replicateM n get)
> 
> This save 1/3 of heap space to my application. I didn't measure time.
> Maybe it would be useful to have something like this in the binary module.
> 

Very nice. Maybe even upload these useful instances in a little
binary-extras package?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Pickling a finite map (Binary + zlib) [was: [Haskell-cafe] Data.Binary poor read performance]

2009-02-24 Thread Don Stewart
felipe.lessa:
> On Tue, Feb 24, 2009 at 4:59 AM, Don Stewart  wrote:
> > Looks like the Map reading/showing via association lists could do with
> > further work.
> >
> > Anyone want to dig around in the Map instance? (There's also some patches 
> > for
> > an alternative lazy Map serialisation, if people are keen to load maps -- 
> > happstack devs?).
> 
> From binary-0.5:
> 
> instance (Ord k, Binary k, Binary e) => Binary (Map.Map k e) where
>     put m = put (Map.size m) >> mapM_ put (Map.toAscList m)
>     get   = liftM Map.fromDistinctAscList get
> 
> instance Binary a => Binary [a] where
>     put l = put (length l) >> mapM_ put l
>     get   = do n <- get :: Get Int
>                replicateM n get
> 
> 
> 
> Can't get better, I think. Now, from containers-0.2.0.0:
> 
> fromDistinctAscList :: [(k,a)] -> Map k a
> fromDistinctAscList xs
>   = build const (length xs) xs
>   where
>     -- 1) use continuations so that we use heap space instead of stack space.
>     -- 2) special case for n==5 to build bushier trees.
>     build c 0 xs'  = c Tip xs'
>     build c 5 xs'  = case xs' of
>                        ((k1,x1):(k2,x2):(k3,x3):(k4,x4):(k5,x5):xx)
>                            -> c (bin k4 x4
>                                    (bin k2 x2 (singleton k1 x1) (singleton k3 x3))
>                                    (singleton k5 x5)) xx
>                        _ -> error "fromDistinctAscList build"
>     build c n xs'  = seq nr $ build (buildR nr c) nl xs'
>       where
>         nl = n `div` 2
>         nr = n - nl - 1
> 
>     buildR n c l ((k,x):ys) = build (buildB l k x c) n ys
>     buildR _ _ _ []         = error "fromDistinctAscList buildR []"
>     buildB l k x c r zs     = c (bin k x l r) zs
> 
> 
> The builds seem fine, but we spot a (length xs) on the beginning.
> Maybe this is the culprit? We already know the size of the map (it was
> serialized), so it is just a matter of exporting
> 
> fromDistinctAscSizedList :: Int -> [(k, a)] -> Map k a
> 
> Too bad 'Map' is exported as an abstract data type and it's not
> straightforward to test this conjecture. Any ideas?
> 

This idea was the motivation for the new Seq instance, which uses
internals to build quickly.

Encoding to disk, the dictionary,

$ time ./binary /usr/share/dict/cracklib-small
"done"
./binary /usr/share/dict/cracklib-small  0.07s user 0.01s system 94% 
cpu 0.088 total

Decoding,
$ time ./binary dict.gz
52848
"done"
./binary dict.gz  0.07s user 0.01s system 97% cpu 0.079 total

instance (Binary e) => Binary (Seq.Seq e) where
    put s = put (Seq.length s) >> Fold.mapM_ put s
    get = do n <- get :: Get Int
             rep Seq.empty n get
      where
        rep xs 0 _ = return $! xs
        rep xs n g = xs `seq` n `seq` do
            x <- g
            rep (xs Seq.|> x) (n-1) g


Just a lot better. :)

So ... Data.Map, we're looking at you!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Pickling a finite map (Binary + zlib) [was: [Haskell-cafe] Data.Binary poor read performance]

2009-02-24 Thread Don Stewart
dons:
> wren:
> > Neil Mitchell wrote:
> >> 2) The storage for String seems to be raw strings, which is nice.
> >> Would I get a substantial speedup by moving to bytestrings instead of
> >> strings? If I hashed the strings and stored common ones in a hash
> >> table is it likely to be a big win?
> >
> > Bytestrings should help. The big wins in this application are likely to  
> > be cache issues, though the improved memory/GC overhead is nice too.
> >
> 
> Here's a quick demo using Data.Binary directly.
> 
> Now, let's read back in and decode it back to a Map 
> 
> main = do
>     [f] <- getArgs
>     m   <- decodeFile f
>     print (M.size (m :: M.Map B.ByteString Int))
>     print "done"
> 
> Easy enough:
> 
> $ time ./A dict +RTS -K20M
> 52848
> "done"
> ./A dict +RTS -K20M  1.51s user 0.06s system 99% cpu 1.582 total


  
> Compressed dictionary is much smaller. Let's load it back in and unpickle it:
> 
> main = do
>     [f] <- getArgs
>     m <- (decode . decompress) `fmap` L.readFile f
>     print (M.size (m :: M.Map B.ByteString Int))
>     print "done"
> 
> Also cute. But how does it run:
> 
> $ time ./A dict.gz
> 52848
> "done"
> ./A dict.gz  0.28s user 0.03s system 98% cpu 0.310 total
> 
> Interesting. So extracting the Map from a compressed bytestring in memory is
> a fair bit faster than loading it  directly, uncompressed from disk.
> 


Note that the difference, as Duncan and Bulat pointed out, is a bit
surprising. Perhaps the Map instance is a bit weird? We already know
that bytestring IO is fine.

Just serialising straight lists of pairs,

import Data.Binary
import Data.List
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy  as L
import System.Environment
import qualified Data.Map as M
import Codec.Compression.GZip

main = do
    [f] <- getArgs
    s   <- B.readFile f
    let m = [ (head n, length n) | n <- (group . B.lines $ s) ]
    L.writeFile "dict.gz" . encode $ m
    print "done"

$ time ./binary /usr/share/dict/cracklib-small
"done"
./binary /usr/share/dict/cracklib-small  0.13s user 0.04s system 99% cpu
0.173 total

$ du -hs dict 
1.3Mdict

And reading them back in,

main = do
    [f] <- getArgs
    m <- decode `fmap` L.readFile f
    print (length (m :: [(B.ByteString,Int)]))
    print "done"

$ time ./binary dict
52848
"done"
./binary dict  0.04s user 0.01s system 99% cpu 0.047 total

Is fast. So there's some complication in the Map serialisation. Adding in zlib,
to check,

main = do
    [f] <- getArgs
    s   <- B.readFile f
    let m = [ (head n, length n) | n <- (group . B.lines $ s) ]
    L.writeFile "dict.gz" . compress . encode $ m
    print "done"

$ time ./binary /usr/share/dict/cracklib-small 
"done"
./binary /usr/share/dict/cracklib-small  0.25s user 0.03s system
100% cpu 0.277 total

Compression takes longer, as expected, and reading it back in,

main = do
    [f] <- getArgs
    m <- (decode . decompress) `fmap` L.readFile f
    print (length (m :: [(B.ByteString,Int)]))
    print "done"

$ time ./binary dict.gz
52848
"done"
./binary dict.gz  0.03s user 0.01s system 98% cpu 0.040 total

About the same.

Looks like the Map reading/showing via association lists could do with
further work. 

Anyone want to dig around in the Map instance? (There's also some patches for
an alternative lazy Map serialisation, if people are keen to load maps -- 
happstack devs?).

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Pickling a finite map (Binary + zlib) [was: [Haskell-cafe] Data.Binary poor read performance]

2009-02-23 Thread Don Stewart
wren:
> Neil Mitchell wrote:
>> 2) The storage for String seems to be raw strings, which is nice.
>> Would I get a substantial speedup by moving to bytestrings instead of
>> strings? If I hashed the strings and stored common ones in a hash
>> table is it likely to be a big win?
>
> Bytestrings should help. The big wins in this application are likely to  
> be cache issues, though the improved memory/GC overhead is nice too.
>

Here's a quick demo using Data.Binary directly.

First, let's read in the dictionary file, and build a big, worst-case finite
map of words to their occurence (always 1).

import Data.Binary
import Data.List
import qualified Data.ByteString.Char8 as B
import System.Environment
import qualified Data.Map as M

main = do
    [f] <- getArgs
    s   <- B.readFile f
    let m = M.fromList [ (head n, length n) | n <- (group . B.lines $ s) ]
    encodeFile "dict" m
    print "done"

So that writes a "dict" file with a binary encoded Map ByteString Int.
Using ghc -O2 --make for everying.

$ time ./A /usr/share/dict/cracklib-small
"done"
./A /usr/share/dict/cracklib-small  0.28s user 0.03s system 94% cpu 0.331 
total

Yields a dictionary file Map:

$ du -hs dict
1.3Mdict

Now, let's read back in and decode it back to a Map 

main = do
    [f] <- getArgs
    m   <- decodeFile f
    print (M.size (m :: M.Map B.ByteString Int))
    print "done"

Easy enough:

$ time ./A dict +RTS -K20M
52848
"done"
./A dict +RTS -K20M  1.51s user 0.06s system 99% cpu 1.582 total


Ok. So 1.5s to decode a 1.3M Map. There may be better ways to build the Map 
since we know the input will be sorted, but
the Data.Binary instance can't do that.

Since decode/encode are a nice pure function on lazy bytestrings, we can try a 
trick of 
compressing/decompressing the dictionary in memory.

Compressing the dictionary:

import Data.Binary
import Data.List
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy  as L
import System.Environment
import qualified Data.Map as M
import Codec.Compression.GZip

main = do
    [f] <- getArgs
    s   <- B.readFile f
    let m = M.fromList [ (head n, length n) | n <- (group . B.lines $ s) ]
    L.writeFile "dict.gz" . compress . encode $ m
    print "done"

Pretty cool, imo, is "compress . encode":

$ time ./A /usr/share/dict/cracklib-small 
"done"
./A /usr/share/dict/cracklib-small  0.38s user 0.02s system 85% cpu 0.470 
total

Ok. So building a compressed dictionary takes only a bit longer than an
uncompressed one (zlib is fast):

$ du -hs dict.gz 
216Kdict.gz

Compressed dictionary is much smaller. Let's load it back in and unpickle it:

main = do
    [f] <- getArgs
    m <- (decode . decompress) `fmap` L.readFile f
    print (M.size (m :: M.Map B.ByteString Int))
    print "done"

Also cute. But how does it run:

$ time ./A dict.gz
52848
"done"
./A dict.gz  0.28s user 0.03s system 98% cpu 0.310 total

Interesting. So extracting the Map from a compressed bytestring in memory is a 
fair bit faster than loading it 
directly, uncompressed from disk.

Neil, does that give you some ballpark figures to work with?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] memory-efficient data type for Netflix data - UArray Int Int vs UArray Int Word8

2009-02-23 Thread Don Stewart
bos:
> 2009/2/23 Kenneth Hoste 
>  
> 
> Does anyone know why the Word8 version is not significantly better in terms
> of memory usage?
> 
> 
> Yes, because there's a typo on line 413 of Data/Array/Vector/Prim/BUArr.hs.
> 
> How's that for service? :-)

UArray or UArr?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Data.Binary poor read performance

2009-02-23 Thread Don Stewart
ndmitchell:
> Hi,
> 
> In an application I'm writing with Data.Binary I'm seeing very fast
> write performance (instant), but much slower read performance. Can you
> advise where I might be going wrong?

Can you try binary 0.5, just released 20 mins ago?

There were definitely some slowdowns due to inlining that I've mostly
fixed in this release.

  
> The data type I'm serialising is roughly: Map String [Either
> (String,[String]) [(String,Int)]]
> 
> A lot of the String's are likely to be identical, and the end file
> size is 1Mb. Time taken with ghc -O2 is 0.4 seconds.

Map serialisation was sub-optimal. That's been improved in today's release.
  

> Various questions/thoughts I've had:
> 
> 1) Is reading a lot slower than writing by necessity?

Nope. Shouldn't be.
  
> 2) The storage for String seems to be raw strings, which is nice.
> Would I get a substantial speedup by moving to bytestrings instead of
> strings? If I hashed the strings and stored common ones in a hash
> table is it likely to be a big win?

Yep and maybe.
  
> 3) How long might you expect 1Mb to take to read?
> 
> Thanks for the library, its miles faster than the Read/Show I was
> using before - but I'm still hoping that reading 1Mb of data can be
> instant :-)

Tiny fractions of a second.

$ cat A.hs
import qualified Data.ByteString as B
import System.Environment

main = do
    [f] <- getArgs
    print . B.length =<< B.readFile f

$ du -hs /usr/share/dict/cracklib-small  
472K/usr/share/dict/cracklib-small

$ time ./A /usr/share/dict/cracklib-small  
477023
./A /usr/share/dict/cracklib-small  0.00s user 0.01s system 122% cpu 0.005 
total

If you're not seeing results like that, with binary 0.5, let's look deeper.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANNOUNCE: X Haskell Bindings 0.2

2009-02-21 Thread Don Stewart
aslatter:
> I'd like to announce the 0.2.* series release of the X Haskell
> Bindings.  This release, like the prior 0.1.* series focuses on making
> the API prettier.  This does mean that there's a good chance this is a
> breaking release.  Also, 0.2.* is based on the just-released version
> 1.4 of the XML descriptions of the X protocol.
> 
> The goal of XHB is to provide a Haskell implementation of the X11 wire
> protocol, similar in spirit to the X protocol C-language Binding
> (XCB).
> 
> On Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xhb

Woo, well done! Here's an Arch Linux package,

http://aur.archlinux.org/packages.php?ID=23765

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-21 Thread Don Stewart

Bulat, you've some serious lessons to learn on how to interact with
online communities. First,

1. Stop posting replies to every post on this thread

2. Read some of the fine literature on how to be a productive,
contributing member of a mailing list community,

http://haskell.org/haskellwiki/Protect_the_community

3. Then see if you can rephrase your concerns in a form that will be
useful. Claus (as always) has made a fine suggestion:


http://www.haskell.org/pipermail/haskell-cafe/2009-February/056241.html

4. Come back with some analysis, or a ticket, and authentically try
to collaborate with people here to improve or fix the problems you see.

I'm setting your moderation bit now, and in public so we all know what
is going on, so your posts will bounce until you do something
constructive. This will likely expire in a few days - just enough to
calm things down.

-- Don (on jerk police patrol today)


P.S. if anyone strongly objects, let's talk offline how better to manage things.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Template Haskell compilation error on Windows (was Re: speed: ghc vs gcc)

2009-02-21 Thread Don Stewart
Missing --make

bugfact:
> I tried to compile the template Haskell loop unrolling trick from Claus Reinke
> on my machine which is running Windows and GHC 6.10.1, and I got linker 
> errors.
> 
> c:\temp>ghc -O2 -fvia-C -optc-O3 -fforce-recomp Apply.hs
> Apply.o:ghc6140_0.hc:(.text+0x7d): undefined reference to
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] The community is more important than the product

2009-02-21 Thread Don Stewart
http://haskell.org/haskellwiki/Protect_the_community

Random notes I took a few years ago on how to maintain tone, focus and
productivity in an online community.

Might be some material there if anyone's seeking to help ensure
we remain a constructive, effective community.

-- Don

P.S. release some code on hackage.haskell.org.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
bertram.felgenhauer:
> This is odd, but it doesn't hurt the inner loop, which only involves
> $wsum01_XPd, and is identical to $wfold_s15t above.
> 
> > Checking the asm:
> > $ ghc -O2 -fasm
> > 
> > sQ3_info:
> > .LcRt:
> >   cmpq 8(%rbp),%rsi
> >   jg .LcRw
> >   leaq 1(%rsi),%rax
> >   addq %rsi,%rbx
> >   movq %rax,%rsi
> >   jmp sQ3_info
> 
> So for some reason ghc ends up doing the (n + 1) addition before the
> (acc + n) addition in this case - this accounts for the extra
> instruction, because both n+1 and n need to be kept around for the
> duration of the addq (which does the acc + n addition).


Yep, well spotted.
  
> > Checking via C:
> > 
> >$ ghc -O2 -optc-O3 -fvia-C
> > 
> > Better code, but still a bit slower:   
> > 
> > sQ3_info:
> >   cmpq8(%rbp), %rsi
> >   jg  .L8
> >   addq%rsi, %rbx
> >   leaq1(%rsi), %rsi
> >   jmp sQ3_info
> 
> This code is identical (up to renaming registers and one offset that
> I can't fully explain, but is probably related to a slight difference
> in handling pointer tags between the two versions of the code) to the
> "nice assembly" above.


Indeed, which is gratifying.
  
> > Running:
> > 
> > $ time   ./B
> > 55
> > ./B  1.01s user 0.01s system 97% cpu 1.035 total
> 
> Hmm, about 5% slower, are you sure this isn't just noise?
> 
> If not noise, it may be some alignment effect. Hard to say.


I couldn't get it under 1s from a dozen runs, so assuming some small
effect with alignment.

Why we get the extra test in the outer loop, though, I'm not sure. That's new
too, I think -- at least I've not seen that pattern before.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] speed: ghc vs gcc vs jhc

2009-02-20 Thread Don Stewart
bulat.ziganshin:
> Hello John,
> 
> Saturday, February 21, 2009, 3:42:24 AM, you wrote:
> 
> >> this is true for *application* code, but for codec you may have lots of
> >> code that just compute, compute, compute
> 
> > Yes indeed. If there is code like this out there for haskell, I would
> > love to add it as a test case for jhc.
> 
> Crypto library has a lot of native haskell code computing hashes and
> encrypting data
> 
> hopefully people will show other examples
> 
> btw, Galois Cryptol has a haskell backend, did you know? with jhc
> compilation it can probably generate code as fast as the C backend does.
> it would be very interesting for us and even look like something close to
> production usage. i have crossposted the message to Don

That's a very interesting idea. The output from Cryptol is self-contained
enough, and simple numerical code, so JHC could probably handle it -- it
doesn't require extensive libraries or runtime support, for example. This
warrants investigation.

Thanks for the suggestion!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
bulat.ziganshin:
> Hello Achim,
> 
> Saturday, February 21, 2009, 1:17:08 AM, you wrote:
> 
> >> nothing new: what you are not interested in real compilers comparison,
> >> preferring to demonstrate artificial results
> >>
> > ...that we have a path to get better results than gcc -O3
> > -funroll-loops, and it's within reach... we even can get there now,
> > albeit not in the most hack-free way imaginable?
> 
> well, can this be made for C++? yes. moreover, gcc does this trick
> *automatically*, while with ghc we need to write a 50-line program using
> Template Haskell and then run it through gcc - and finally get exactly
> the same optimization we got automatically for C code
> 
> so, again: this confirms that Don always builds artificial
> comparisons, optimizing Haskell code by hand and ignoring obvious ways
> to optimize Haskell code. unfortunately, this doesn't work in real
> life. and even worse - Don reports this as a fair Haskell vs C++
> comparison

This is extremely depressing to read after the good results and lessons of this 
thread.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
dons:
> bulat.ziganshin:
> > Hello Achim,
> > 
> > Friday, February 20, 2009, 11:44:49 PM, you wrote:
> > 
> > >> > Turning this into a ticket with associated test will:
> > >> 
> > >> but why you think that this is untypical and needs a ticket? ;)
> > >> 
> > > Bulat, you are right in every aspect. You never did anything wrong.
> > 
> > Achim, this is simplest code one can imagine. so when Simon will go to
> > check ghc optimizations, he will try it without any reports. but
> > Simon, unlike Don, never said that ghc may be compared to gcc. Don, on
> > the other hand, say this everyday. when he is asked for code that
> > shows this, he declined to answer. so - why YOU think that ghc
> > generates fast code and this example is something unusual? can you
> > provide any *technical* arguments or will continue to make personal
> > attacks together with Don?
> 
> Bulat, you misunderstand, it is not personal! We just want something to
> work on. Something specific.
> 
> For example, you've identified loop unrolling as something that could
> very profitably be improved in GHC, and Claus even wrote a prototype to
> see what kind of speedups to guess. 
> 
> This is a great contribution!  Now we know where to hunt.

And just to summarise what we have seen:

    ghc -O2 naive left fold                 15.680
    gcc -O0                                  4.500
    ghc manual recursion -fasm               1.328
    ghc manual recursion                     1.035
    ghc naive left fold "stream fusion"      0.967
    gcc -O1                                  0.892
    ghc "-funroll-loops" -D8                 0.623
    gcc -O3 -funroll-loops                   0.318
    ghc "-funroll-loops" -D64                0.088

So what did we learn here?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
bulat.ziganshin:
> Hello Achim,
> 
> Friday, February 20, 2009, 11:44:49 PM, you wrote:
> 
> >> > Turning this into a ticket with associated test will:
> >> 
> >> but why you think that this is untypical and needs a ticket? ;)
> >> 
> > Bulat, you are right in every aspect. You never did anything wrong.
> 
> Achim, this is simplest code one can imagine. so when Simon will go to
> check ghc optimizations, he will try it without any reports. but
> Simon, unlike Don, never said that ghc may be compared to gcc. Don, on
> the other hand, say this everyday. when he is asked for code that
> shows this, he declined to answer. so - why YOU think that ghc
> generates fast code and this example is something unusual? can you
> provide any *technical* arguments or will continue to make personal
> attacks together with Don?

Bulat, you misunderstand, it is not personal! We just want something to
work on. Something specific.

For example, you've identified loop unrolling as something that could
very profitably be improved in GHC, and Claus even wrote a prototype to
see what kind of speedups to guess. 

This is a great contribution!  Now we know where to hunt.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
claus.reinke:
> Concrete examples always help, thanks.
>
> In simple enough situations, one can roll one's own loop unrolling;),
> somewhat like shown below (worker/wrapper split to bring the function
> parameter representing the loop body into scope, then template haskell  
> to unroll its applications syntactically, then optimization by 
> transformation
> to get rid of the extra code). It is all rather more complicated than one
> would like it to be, what with TH scoping restrictions and all, but 
> perhaps a library of self-unrolling loop combinators along these lines 
> might help, as a workaround until ghc does its own unrolling.
>
> Claus
>
> {-# LANGUAGE TemplateHaskell #-}
> module Apply where
> import Language.Haskell.TH.Syntax
> apply i bound | i < bound = [| \f x -> $(apply (i+1) bound) f (f i x) |]
>               | otherwise = [| \f x -> x |]
>
> {-# LANGUAGE CPP #-}
> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE BangPatterns #-}
> {-# OPTIONS_GHC -DN=8 -ddump-splices #-}
> module Main(main) where
> import Apply
> main = print $ loopW 1 (10^9) body 0
>
> {-# INLINE loopW #-}
> loopW :: Int -> Int -> (Int -> Int -> Int) -> Int -> Int
> loopW i max body acc = loop i acc
>   where
>     loop :: Int -> Int -> Int
>     loop !i !acc | i+N<=max  = loop (i+N)
>                                     ($(apply (0::Int) N) (\j acc -> body (i+j) acc) acc)
>     {-
>     loop !i !acc | i+8<=max  = loop (i+8) ( body (i+7)
>                                           $ body (i+6)
>                                           $ body (i+5)
>                                           $ body (i+4)
>                                           $ body (i+3)
>                                           $ body (i+2)
>                                           $ body (i+1)
>                                           $ body i acc)
>     -}
>     loop !i !acc | i<=max    = loop (i+1) (body i acc)
>                  | otherwise = acc
>
> body :: Int -> Int -> Int
> body !i !acc = i+acc
>

Great thinking! This is EXTREMELY COOL!

Main.hs:15:42-57: Splicing expression
let
  apply = apply
  $dOrd = GHC.Base.$f1
  $dNum = GHC.Num.$f6
  $dLift = Language.Haskell.TH.Syntax.$f18
in apply (0 :: Int) 8
  ==>
\ f[a1KU] x[a1KV]
-> \ f[a1KW] x[a1KX]
   -> \ f[a1KY] x[a1KZ]
  -> \ f[a1L0] x[a1L1]
 -> \ f[a1L2] x[a1L3]
-> \ f[a1L4] x[a1L5]
   -> \ f[a1L6] x[a1L7]
  -> \ f[a1L8] x[a1L9]
 -> \ f[a1La] 
x[a1Lb] -> x[a1Lb]
  f[a1L8] 
(f[a1L8] 7 x[a1L9])
   f[a1L6] (f[a1L6] 
6 x[a1L7])
f[a1L4] (f[a1L4] 5 
x[a1L5])
 f[a1L2] (f[a1L2] 4 x[a1L3])
  f[a1L0] (f[a1L0] 3 x[a1L1])
   f[a1KY] (f[a1KY] 2 x[a1KZ])
f[a1KW] (f[a1KW] 1 x[a1KX])
 f[a1KU] (f[a1KU] 0 x[a1KV])
In the second argument of `loop', namely
`($(apply (0 :: Int) 8) (\ j acc -> body (i + j) acc) acc)'
In the expression:
loop
  (i + 8) ($(apply (0 :: Int) 8) (\ j acc -> body (i + j) acc) acc)
In the definition of `loop':
loop !i !acc
   | i + 8 <= max
   = loop
   (i + 8) ($(apply (0 :: Int) 8) (\ j acc -> body (i + j) 
acc) acc)

So, that's the fastest yet:

$ time ./Main
55
./Main  0.61s user 0.00s system 98% cpu 0.623 total

And within 2x the best GCC was doing,

 gcc -O3 -funroll-loops  0.318

If we unroll even further...

$ ghc -O2 -fvia-C -optc-O3 -D64 Main.hs

$ time ./Main
55
./Main  0.08s user 0.00s system 94% cpu 0.088 total

Very very nice, Claus!

Now I'm wondering if we can do this via rewrite rules

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Don Stewart
barsoap:
> Don Stewart  wrote:
> 
> > No! This is not how open source works! You *should submit bug
> > reports* and *analysis*. It is so so much more useful than
> > complaining and throwing stones.
> >
> Exactly. I don't know where, but I read that the vast majority of
> Linux bugs are reported, nailed, and then fixed by at least three
> different people: the first reports a misbehaviour, the second manages
> to find it surfacing in a certain line of code, the third instantly
> knows how to make it go away.

Elaborating further:


Thinking more about Bulat's code gen observations, I think there's something
wrong here -- other than that GHC needs the new codegen to do any of the
fancier loop optimisations.

If we take what I usually see as the best loops GHC can do for this kind of 
thing:

import Data.Array.Vector

main = print (sumU (enumFromToU 1 (10^9 :: Int)))

And compile it:

$ ghc-core A.hs -O2 -fvia-C -optc-O3

We get ideal core, all data structures fused away, and no heap allocation:

$wfold_s15t :: Int# -> Int# -> Int#
$wfold_s15t =
  \ (ww1_s150 :: Int#) (ww2_s154 :: Int#) ->
case ># ww2_s154 ww_s14U of wild_aWm {
  False ->
$wfold_s15t
  (+# ww1_s150 ww2_s154) (+# ww2_s154 1);
  True -> ww1_s150
}; } in
case $wfold_s15t 0 1

Which produces nice assembly:

s16e_info:
  cmpq6(%rbx), %rdi
  jg  .L2
  addq%rdi, %rsi
  leaq1(%rdi), %rdi
  jmp s16e_info

This is the best GHC will do here, in my experience, and I'm satisfied with it.

Short of new backend tweaks, and realising that GHC is not the loop magic 
compiler GCC is.


http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/IntegratedCodeGen

We can be happy with this. The compiler is doing exactly what we expect.

$ time ./B
55
./B  0.96s user 0.00s system 99% cpu 0.967 total

Now, going back to the low level version, Bulat's loop:

main()
{
  int sum=0;
  //for(int j=0; j<100;j++)
for(int i=0; i<1000*1000*1000;i++)
  sum += i;
  return sum;
}

What was first confusing for me was that he wrote the loop "backwards" when 
translating to Haskell,
like this:

main = print $ sum0 (10^9) 0

sum0 :: Int -> Int -> Int
sum0 0  !acc = acc
sum0 !x !acc = sum0 (x-1) (acc+x)

(The bang patterns aren't needed). Note how he counts backwards from 10^9. Was 
there a reason for that, Bulat?

I wondered if we just got worse code on backwards counting loops. So
translating into the "obvious" translation, counting up:

main = print (sum0 0 1)

sum0 :: Int -> Int -> Int
sum0 acc n | n > 10^9  = acc
           | otherwise = sum0 (acc + n) (n + 1)

Which I actually consider to be the same difficulty as writing the C version, 
fwiw... 
We start to notice something interesting:


$wsum0 :: Int# -> Int# -> Int#
$wsum0 =
  \ (ww_sOH :: Int#) (ww1_sOL :: Int#) ->
case lvl2 of wild1_aHn { I# y_aHp ->
case ># ww1_sOL y_aHp of wild_B1 {
  False ->
letrec {

  $wsum01_XPd :: Int# -> Int# -> Int#
  $wsum01_XPd =
\ (ww2_XP4 :: Int#) (ww3_XP9 :: Int#) ->
  case ># ww3_XP9 y_aHp of wild11_Xs {
False ->
  $wsum01_XPd (+# ww2_XP4 ww3_XP9) (+# ww3_XP9 1);
True -> ww2_XP4
  }; } in
$wsum01_XPd (+# ww_sOH ww1_sOL) (+# ww1_sOL 1);

  True -> ww_sOH
}

Why is there an extra test? What is GHC doing?
Checking the asm:

$ ghc -O2 -fasm

sQ3_info:
.LcRt:
  cmpq 8(%rbp),%rsi
  jg .LcRw
  leaq 1(%rsi),%rax
  addq %rsi,%rbx
  movq %rax,%rsi
  jmp sQ3_info

$ time ./B
55
./B  1.30s user 0.01s system 98% cpu 1.328 total

So it's a fair bit slower. Now, as a principle, we should be able to write sum
directly as I did, and get the same code from the manual and the automatically
fused versions. But we didn't.

Checking via C:

   $ ghc -O2 -optc-O3 -fvia-C

Better code, but still a bit slower:   

sQ3_info:
  cmpq8(%rbp), %rsi
  jg  .L8
  addq%rsi, %rbx
  leaq1(%rsi), %rsi
  jmp sQ3_info

Running:

$ time   ./B
55
./B  1.01s user 0.01s system 97% cpu 1.035 total

So I think we have a bug report! Why did GHC put that extra test in place?

Now, none of this addresses (I think) Bulat's point that GCC can unroll loops 
and do other loop magic.
That's handled under a different workflow - the new code generator.

I'll create the ticket.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] speed: ghc vs gcc

2009-02-20 Thread Don Stewart
bulat.ziganshin:
> Friday, February 20, 2009, 7:41:33 PM, you wrote:
> 
> >> main = print $ sum[1..10^9::Int]
> 
> > This won't be comparable to your loop below, as 'sum' is a left fold
> > (which doesn't fuse under build/foldr).
> 
> > You should use the list implementation from the stream-fusion package (or
> > uvector) if you're expecting it to fuse to the following loop:
> 
> it was comparison of native haskell, low-level haskell (which is
> harder to write than native C) and native C. stream-fusion and any
> other packages provides libraries for some tasks but they can't make faster
> maps, for example. so i used plain list


Hmm? Maybe you're not familiar with the state of the art?

$ cabal install uvector

Write a loop at a high level:

import Data.Array.Vector

main = print (sumU (enumFromToU 1 (10^9 :: Int)))
   
Compile it:

$ ghc-core A.hs -O2 -fvia-C -optc-O3

Yielding:

s16h_info:
  cmpq6(%rbx), %rdi
  jg  .L2
  addq%rdi, %rsi
  leaq1(%rdi), %rdi
  jmp s16h_info

Running:

$ time ./A
55
./A  0.97s user 0.01s system 99% cpu 0.982 total


Now, (trying to avoid the baiting...) this is actually *very*
interesting. Why is this faster than the manual recursion we did earlier, and
why do we get better assembly?  Again, if you stick to specifics, there are
some interesting things we can learn here.

  
> > Which seems ... OK.
> 
> really? :D

No, see above.

  
> > I don't get anything near the 0.062s which is interesting.
> 
> it was beautiful gcc optimization - it added 8 values at once. with
> xor results are:
> 
> xor.hs  12.605
> xor-fast.hs  1.856
> xor.cpp  0.339


GCC is a good loop optimiser. But apparently not my GCC.

  
> > So we have:
> 
> > ghc -fvia-C -O2 1.127
> > ghc -fasm   1.677
> > gcc -O0 4.500
> > gcc -O3 -funroll-loops  0.318
> 
> why not compare to ghc -O0? also you can disable loop unrolling in gcc
> and unroll loops manually in haskell. or you can generate asm code on
> the fly. there are plenty of tricks to "prove" that gcc generates bad
> code :D


No, we want to show (I imagine) that GHC is within a factor or two of "C".
I usually set my benchmark to beat gcc -O0, fwiw, and then hope to be within
2x of optimised C. I'm not sure what your standards are.

  
> > So. some lessons. GHC is around 3-4x slower on this tight loop. (Which 
> > isn't as
> > bad as it used to be).
> 
> really? what i see: low-level haskell code is usually 3 times harder
> to write and 3 times slower than gcc code. native haskell code is tens
> to thousands times slower than C code (just recall that real programs
> use type classes and monads in addition to laziness)


"thousands times", now you're just undermining your own credibility
here. Stick to what you can measure. If anything we'd expect GCC's magic loop
skillz to be less useful on large code bases.

  
> > That's actually a worse margin than any current shootout program, where we 
> > are no
> > worse than 2.9 slower on larger things:
> 
> 1) most benchmarks there depend on libraries speed. in one test, for
> example, php is winner
> 2) for the sum program ghc libs was modified to win in benchmark


It is interesting that the < 2.9x slowdown in the shootout is pretty much what
we found in this benchmark too. 

> 3) the remaining 1 or 2 programs that measure speed of ghc-generated
> code was hardly optimized using low-level code, so they don't have
> anything common with real haskell code most of us write every day


Depends on where you work.

  
> > Now, given GHC gets most of the way there -- I think this might make a good 
> > bug
> > report against GHC head, so we can see if the new register allocator helps 
> > any.
> 
> you mean that 6.11 includes new allocator? in that case you can
> test it too

Yes.


http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/IntegratedCodeGen
  

> i believe that ghc developers are able to test sum performance without my
> bugreports :D

No! This is not how open source works! You *should submit bug reports* and 
*analysis*.
It is so so much more useful than complaining and throwing stones.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] speed: ghc vs gcc

2009-02-20 Thread Don Stewart
bulat.ziganshin:
> Hello haskell-cafe,
> 
> since there are no objective tests comparing ghc to gcc, i made my own
> one. these are 3 programs, calculating sum in c++ and haskell:

Wonderful. Thank you!
  
> main = print $ sum[1..10^9::Int]


This won't be comparable to your loop below, as 'sum' is a left fold
(which doesn't fuse under build/foldr).

You should use the list implementation from the stream-fusion package (or
uvector) if you're expecting it to fuse to the following loop:
  
> main = print $ sum0 (10^9) 0
> 
> sum0 :: Int -> Int -> Int
> sum0 0  !acc = acc
> sum0 !x !acc = sum0 (x-1) (acc+x)


Note the bang patterns aren't required here. It compiles to the
following core:

$wsum0 :: Int# -> Int# -> Int#
$wsum0 =
  \ (ww_sON :: Int#) (ww1_sOR :: Int#) ->
  case ww_sON of ds_XD0 {
_ -> $wsum0 (-# ds_XD0 1) (+# ww1_sOR ds_XD0);
0 -> ww1_sOR

which is perfect.

Main_zdwsum0_info:
  testq   %rsi, %rsi
  movq%rsi, %rax
  jne .L2
  movq%rdi, %rbx
  jmp *(%rbp)
.L2:
  leaq-1(%rsi), %rsi
  addq%rax, %rdi
  jmp Main_zdwsum0_info

Which seems ... OK.

$ ghc-core A.hs -fvia-C -optc-O3
$ time ./A
55
./A  1.12s user 0.00s system 99% cpu 1.127 total
  
Works for me. That's on linux x86_64, gcc 4.4

Trying -fasm:

Main_zdwsum0_info:
.LcQs:
  movq %rsi,%rax
  testq %rax,%rax
  jne .LcQw
  movq %rdi,%rbx
  jmp *(%rbp)
.LcQw:
  movq %rdi,%rcx
  addq %rax,%rcx
  leaq -1(%rax),%rsi
  movq %rcx,%rdi
  jmp Main_zdwsum0_info

$ time ./A
55
./A  1.65s user 0.00s system 98% cpu 1.677 total

Is  a bit slower.

> main()
> {
>   int sum=0;
>   //for(int j=0; j<100;j++)
> for(int i=0; i<1000*1000*1000;i++)
>   sum += i;
>   return sum;
> }


Well, that's a bit different. It doesn't print the result, and it returns a
different result on 64 bit.


$ gcc -O0 t.c
$ time ./a.out 
-1243309312
./a.out  3.99s user 0.00s system 88% cpu 4.500 total

$ gcc -O1 t.c
$ time ./a.out
-1243309312
./a.out  0.88s user 0.00s system 99% cpu 0.892 total

$ gcc -O3 -funroll-loops t.c 
$ time ./a.out
-1243309312
./a.out  0.31s user 0.00s system 97% cpu 0.318 total

I don't get anything near the 0.062s which is interesting.
The print statement slows things down, I guess...

So we have:

ghc -fvia-C -O2 1.127
ghc -fasm   1.677
gcc -O0 4.500
gcc -O3 -funroll-loops  0.318

So. some lessons. GHC is around 3-4x slower on this tight loop. (Which isn't as
bad as it used to be).

That's actually a worse margin than any current shootout program, where we are 
no 
worse than 2.9 slower on larger things:


http://shootout.alioth.debian.org/u64q/benchmark.php?test=all&lang=ghc&lang2=gcc&box=1

> 
> execution times:
>  sum:
>ghc 6.6.1 -O2   : 12.433 secs
>ghc 6.10.1 -O2  : 12.792 secs
>  sum-fast:
>ghc 6.6.1 -O2   :  1.919 secs
>ghc 6.10.1 -O2  :  1.856 secs
>ghc 6.10.1 -O2 -fvia-C  :  1.966 secs
>  C++:
>gcc 3.4.5 -O3 -funroll-loops:  0.062 secs
> 

I couldn't reproduce your final number. 

Now, given GHC gets most of the way there -- I think this might make a good bug
report against GHC head, so we can see if the new register allocator helps any.

http://hackage.haskell.org/trac/ghc/newticket?type=bug

Thanks for the report, Bulat!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: garbage collector woes

2009-02-19 Thread Don Stewart
waterson:
> On Feb 17, 2009, at 12:22 PM, Chris Waterson wrote:
>
>> I'm at wits end with respect to GHC's garbage collector and would very
>> much appreciate a code review of my MySQL driver for HDBC, which is
>> here:
>>
>>  
>> >  
>> >
>>
>> In particular, the problem that I'm having is that my "statements"
>> (really, just iterators over a SQL query result set) are getting
>> garbage collected prematurely.
>
> So (*blush*), my woes turned out to be my misunderstanding of the MySQL C 
> API, which
> I have now come to terms with.  I apologize for the noise here.

Is the solution written up somewhere so we can point to that next time?
:)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to create an online poll

2009-02-18 Thread Don Stewart
This looks very promising!
Investigating...

anton:
> There's also the Condorcet Internet Voting Service:
>
>   http://www.cs.cornell.edu/andru/civs.html
>
>
> gregg reynolds wrote:
>> See also www.surveymonkey.com
>>
>> Bulat Ziganshin  wrote:
>>
>>> Hello haskell-cafe,
>>>
>>> http://zohopolls.com/
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Polymorphism overhead

2009-02-17 Thread Don Stewart
wasserman.louis:
> I have (roughly) the following code:
> 
> data Foo e
> type MFoo e = Maybe (Foo e)
> 
> instance Ord e => Monoid (Foo e) where
> f1 `mappend` f2 = 
> 
> I'd expect this to optimize to the same thing as if I had implemented:
> meld :: Ord e => Foo e -> Foo e -> Foo e
> f1 `meld` f2 = -- code invoking meld'
> 
> meld' :: Ord e => Maybe (Foo e) -> Maybe (Foo e) -> Maybe (Foo e)
> meld' (Just f1) (Just f2) = meld f1 f2
> meld' m1 Nothing = m1
> meld' Nothing m2 = m2
> 
> instance Ord e => Monoid (Foo e) where
>  mappend = meld
> 
> However, GHC's Core output tells me that the first piece of code reexamines 
> the
> polymorphism in every recursion, so that mappend, which is used in the Monoid
> instance of Foo, looks up the Monoid instance of Foo again (for the sole
> purpose of looking itself up) and recurses with that.  Why is this, and is
> there a way to fix that?
> 

In general, INLINE or SPECIALIZE
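
For instance, something along these lines -- a rough sketch only, with a
made-up Foo standing in, since I don't know what yours looks like:

    import Data.Monoid

    data Foo e = Tip | Node e (Foo e) (Foo e)

    meld :: Ord e => Foo e -> Foo e -> Foo e
    meld Tip t = t
    meld t Tip = t
    meld l@(Node x xl xr) r@(Node y _ _)
        | x <= y    = Node x xl (meld xr r)
        | otherwise = meld r l

    -- generate a monomorphic copy of 'meld', so the Ord dictionary is gone
    -- at the element type you actually use:
    {-# SPECIALISE meld :: Foo Int -> Foo Int -> Foo Int #-}

    instance Ord e => Monoid (Foo e) where
        mempty  = Tip
        mappend = meld
        -- inline the method, so call sites go straight to 'meld' (and its
        -- specialisation) rather than through the class dictionary:
        {-# INLINE mappend #-}

Then check the Core again (-ddump-simpl, or ghc-core) to confirm the
dictionary lookup in the recursion has actually gone away.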

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Memory

2009-02-16 Thread Don Stewart
inbuninbu:
> Hello All,
> 
> The kind people at #haskell suggested I come to haskell-cafe for
> questions about haskell performance issues.
> I'm new to haskell, and I'm having a hard time understanding how to
> deal with memory leaks.
> 
> I've been playing with some network server examples and I noticed with
> each new connection, the memory footprint increases by about 7k
> However, the leaks don't seem to have anything to do with the
> networking code. Actually I get a huge leak just from using using
> 'forever'.
> 
> > import Control.Monad
> > import System.IO
> >
> > main = forever $ putStrLn "hi"
> 
> When I run it for a few seconds with profiling...
> 
> > total time  =0.36 secs   (18 ticks @ 20 ms)
> > total alloc =  54,423,396 bytes  (excludes profiling overheads)
> 
> Can this be right?


did you compile with optimisations on?

$ ghc -O2 A.hs --make
$ time ./A +RTS -sstderr

  17,880 bytes maximum residency (1 sample(s))
  18,984 bytes maximum slop
   1 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0:  9951 collections, 0 parallel,  0.07s,  0.07s elapsed
  Generation 1: 1 collections, 0 parallel,  0.00s,  0.00s elapsed

  INIT  time0.00s  (  0.00s elapsed)
  MUT   time4.45s  ( 16.08s elapsed)
  GCtime0.07s  (  0.07s elapsed)
  EXIT  time0.00s  (  0.00s elapsed)
  Total time4.52s  ( 16.16s elapsed)

  %GC time   1.5%  (0.5% elapsed)

  Alloc rate1,173,414,505 bytes per MUT second

  Productivity  98.5% of total user, 27.5% of total elapsed

./A +RTS -sstderr  4.52s user 10.61s system 93% cpu 16.161 total

So it's allocating small cells, frequently, then discarding them -- and running 
in constant space.

Looking further,

forever :: (Monad m) => m a -> m b
forever a   = a >> forever a

Well, here, the result is thrown away anyway. And the result is (), so I'd 
expect 
constant space. 

Looks good to me. Did you run it without optimisations, perhaps?


-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN : Crypto 4.2.0 & Related News

2009-02-16 Thread Don Stewart
wchogg:
> Hello Haskellers,
> 
> I'm pleased to announce version 4.2.0 of Crypto has been uploaded to
> Hackage & that I am taking over maintenance of the library from
> Dominic Steinitz.  As of this release it should be cabal install'able
> on GHC 6.10.1.  I'm also pleased to announce that the darcs repo will
> be moving from code.haskell.org to being hosted on Patch-Tag at
> http://patch-tag.com/repo/crypto/home.  You don't need to sign up for
> Patch-Tag to use the read only repos, but you will need an account if
> you want to be given write access to the crypto repository.
> 
> Please feel free to e-mail me with any issues or questions.

Great! Good to see the torch passed on.

Packaged up for Arch,

http://aur.archlinux.org/packages.php?ID=17492

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Arch Haskell News: Feb 16 2009

2009-02-16 Thread Don Stewart

Arch now has 926 Haskell packages in AUR.

That’s an increase of 27 new packages in the last 8 days, or 3.38 new
Haskell apps a day.

This weekly news includes:

* Noteworthy updates: grapefruit, haskelldb, gtk2hs
* A video on how to use Arch packages
* Updated releases by category

Read it all:

http://archhaskell.wordpress.com/2009/02/16/arch-haskell-news-feb-16-2009/

Enjoy!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Epic failure

2009-02-15 Thread Don Stewart
> OK, what did I do wrong here?

When making a request for help on a compiler issue, you failed to include key
information to make it possible to reproduce your problem, and what you did
include was broken or incorrect.

The three programs that you submitted don't even do the same thing.

Let's look into this further.

* Program 1:

>   module Main () where
>
>   import Data.List
>
>top = 10 ^ 8 :: Int
>
>main = do
> let numbers = [1 .. top]
> print $ foldl' (+) 0 numbers
>
>-- Runtime is 20 seconds.

Well, let's see if we can reproduce this:

$ ghc -O2 A.hs --make
[1 of 1] Compiling Main ( A.hs, A.o )
Linking A ...

$ time ./A
50005000
./A  1.54s user 0.01s system 98% cpu 1.571 total

Nope. OK, so this seems like user error. Without more info about how you
conducted your experiment, the results are meaningless.

My guess is that you compiled it without optimisations?
Nope, not that,

$ ghc -O0 --make A.hs
$ time ./A
50005000
./A  2.65s user 0.01s system 99% cpu 2.667 tota

So even with all optimisations disabled, it is still an order of magnitude
faster than the number you presented. Resolution: invalid. Not reproducible.


* Program 2

>   #include 
>
>   int main(void)
>{
> int total = 0;
> int n;
>
> for (n = 1, n < 1; n++)
> total += n;
>
> printf("%d", n);
>}
>
>  // Runtime is 0.8 seconds.

Ok. Let's try this then, a C program:

$ gcc t.c 
t.c: In function ‘main’:
t.c:8: error: expected ‘;’ before ‘)’ token

Ah, an incorrect C program. Correcting the OP's typo:

$ time ./a.out
1
./a.out  0.41s user 0.00s system 100% cpu 0.416 total

So it's actually a different program. Is this supposed to print 'total'?
This program seems wrong in a number of other ways too.

Resolution: non sequitur


> Program 3
>
>module Main () where
>
>import Data.List
>
>top = 10 ^ 8 :: Int
>
>kernel i o = if i < top then o `seq` kernel (i+1) (o+i) else o
>
>main = do
> print $ kernel 1 0

>   -- Runtime is 0.5 seconds.
>   Clearly these two nearly identical Haskell programs should have exactly
>   the same runtime. Instead, one is 40x slower. Clearly something has gone
>   catastrophically wrong here. The whole point of GHC's extensive optimiser
>   passes are to turn the first example into the second example - but
>   something somewhere hasn't worked. Any suggestions?


Ok, another program. Let's try this.

$ ghc -O2 B.hs --make
[1 of 1] Compiling Main ( B.hs, B.o )
Linking B ...

$ time ./B
49995000
./B  0.18s user 0.00s system 98% cpu 0.186 total

Oh, this produces yet another result. In 0.186 seconds.


So, going back to the original question, what did you do wrong?

If you're seeking input for a technical error relating to performance, you
should have, but failed to:

* Use programs that implement the same algorithm
* Indicate which compiler versions/optimisations/architecture you're on
* State what you expected the results to be.

Besides the technical aspects, your presentation categorises the post somewhere
between internet
crank and internet troll, as:

* You used an inflammatory title, which doesn't inspire trust.

* You jumped to conclusions on the fundamental nature of a technology,
  striking at its reasons for existing, without considering to recheck your
  assumptions.
 
So yes, epic fail. And after three years of this, I'm not hopeful.
But perhaps you can take some lessons from this post to improve your next one?




Now, assuming good faith, and you were just very confused about what you were
measuring, or how to measure it, or how to ask for help in a technical forum,
here's some more fun: let's write your 1st program, and see if GHC can
transform it into your 3rd program.

$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.1

$ uname -msr
Linux 2.6.28-ARCH x86_64

$ gcc --version
gcc (GCC) 4.3.3

$ ghc-pkg list uvector
uvector-0.1.0.3

Here's a program written with combinators in a high level style (and using a
library written in a high level way):

import Data.Array.Vector
main = print . sumU . enumFromToU 0 $ (10^8 :: Int)

Compiling it,

$ ghc -O2 --make C.hs

Which yields the following core,

$wfold_s15D :: Int# -> Int# -> Int#
$wfold_s15D =
  \ (ww1_s15a :: Int#) (ww2_s15e :: Int#) ->
case ># ww2_s15e ww_s154 of wild_a12I {
  False ->
$wfold_s15D
  (+# ww1_s15a ww2_s15e) (+# ww2_s15e 1);
  True -> ww1_s15a

Because GHC knows how to optimise loops of these forms.

And the resulting assembly is pretty nice,

s16o_info:

Re: [Haskell-cafe] ANNOUNCE: haha-0.1 - Animated ascii lambda

2009-02-14 Thread Don Stewart
sfvisser:
> Always wanted to have an full-color rotating vector based ascii art
> lambda on your terminal? This is your chance, installing `haha' will do
> the trick!
>
> This is very minimal vector based ascii art library written just for
> fun. There is a sample program called `rotating-lambda' which does
> exactly what is says.
>
> Make sure your terminal window is at least 80x40 and supports the most
> basic ANSI escape sequences before trying the demo.

Very smoothly done!

Here's a video of what he's talking about,

http://www.youtube.com/watch?v=MugQXHUZPK8 

Raw video,

http://galois.com/~dons/images/rotating-lambda.ogv

-- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graph library, was: Haskell.org GSoC

2009-02-14 Thread Don Stewart
g9ks157k:
> Am Samstag, 14. Februar 2009 16:59 schrieb Brent Yorgey:
> > On Thu, Feb 12, 2009 at 04:10:21PM +0100, Wolfgang Jeltsch wrote:
> > > Am Donnerstag, 12. Februar 2009 15:34 schrieb Thomas DuBuisson:
> > > > Get a community.haskell.org account once you are ready to start a
> > > > repo, it can not only host your repo (ex:
> > > > http://community.haskell.org/~tommd/pureMD5) but also allows you to
> > > > upload packages to hackage.haskell.org.
> > >
> > > I already have a Hackage account. Can this be readily used as a
> > > community.haskell.org account? If not, what if I get a community account.
> > > Do I have two accounts for Hackage access then?
> >
> > No, they are two separate things.  A Hackage account just lets you
> > upload things to Hackage.  A community.haskell.org account lets you
> > log into code.haskell.org (a completely different server than
> > Hackage), host projects there, and so on.
> 
> But Thomas DuBuisson wrote that you can upload packages to HackageDB with 
> your 
> community.haskell.org account (see above). Is this wrong?

Yes. Register for community accounts (shell, trac, darcs etc)

http://community.haskell.org/

(or you can use github if all you need is a repo).

To get hackage upload perms,

http://hackage.haskell.org/packages/accounts.html

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HaskellDB is alive?

2009-02-14 Thread Don Stewart
felipe.lessa:
> Hello!
> 
> There was a new HaskellDB release, but I didn't see any announcement
> here. Is it back alive? What happened to 0.11?
> 
> Thanks =)

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/haskelldb-0.12
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Race condition possible?

2009-02-12 Thread Don Stewart
bugfact:
> Consider the following code
> 
> stamp v x = do
>   t <- getCurrentTime 
>   putMVar v (x,t)
> 
> Is it possible - with GHC - that a thread switch happens after the t <-
> getCurrentTime and the putMVar v (x,t)? 

Yes. If 't' is heap allocated, there could be a context switch.
  
> If so, how would it be possible to make sure that the operation of reading the
> current time and writing the pair to the MVar is an "atomic" operation, in the
> sense that no thread switch can happen between the two? Would this require 
> STM?
> 

Using 'atomically' and TVars in STM, perhaps? Else, use withMVar?   Or a
modifyMVar in IO?
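
For example, a minimal sketch with modifyMVar_ -- note the assumption that the
MVar is kept full, holding the latest stamp, rather than being used empty/full
as a one-shot channel:

    import Control.Concurrent.MVar
    import Data.Time.Clock (UTCTime, getCurrentTime)

    -- While the update runs, the MVar is held empty, so any other thread
    -- that also goes through modifyMVar/takeMVar on 'v' cannot interleave
    -- between reading the clock and writing the pair.
    stamp :: MVar (a, UTCTime) -> a -> IO ()
    stamp v x = modifyMVar_ v $ \_old -> do
        t <- getCurrentTime
        return (x, t)

    main :: IO ()
    main = do
        t0 <- getCurrentTime
        v  <- newMVar ((), t0)
        stamp v ()
        readMVar v >>= print . snd

A context switch can still happen between the two steps, but nothing else can
touch 'v' while it does, which is usually the atomicity people actually want.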

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is using Data.Dynamic considered a no-go?

2009-02-12 Thread Don Stewart
Notably, extensible exceptions use dynamics, in conjunction with type
classes and existentials.

A number of solutions to the 'expression problem' involve dynamics.
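
(For anyone following along, the basic mechanism is tiny -- a toy example
only, nothing to do with the extensible-exceptions machinery itself:)

    import Data.Dynamic

    -- a heterogeneous bag, with values recovered later at a concrete type
    bag :: [Dynamic]
    bag = [toDyn (3 :: Int), toDyn "hello", toDyn (pi :: Double)]

    ints :: [Int]
    ints = [ x | Just x <- map fromDynamic bag ]

    main :: IO ()
    main = print ints    -- prints [3]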

bugfact:
> It would be interesting to see when you HAVE to use dynamics, e.g. when no
> other solution is possible in Haskell...
> 
> Right now if I use it, it feels that I'm doing so because I'm too new to
> Haskell.
> 
> 
> On Thu, Feb 12, 2009 at 7:53 PM, Lennart Augustsson 
> wrote:
> 
> You're quite right.  You should only be allowed to derive Typeable.
> (Which could be arranged by hiding the methods of typeable.)
> 
> On Thu, Feb 12, 2009 at 6:24 PM, Jonathan Cast
>  wrote:
> > On Thu, 2009-02-12 at 19:04 +0100, Lennart Augustsson wrote:
> >> They are not unsafe in the way unsafePerformIO is,
> >
> > I beg permission to demur:
> >
> >  newtype Unsafe alpha = Unsafe { unUnsafe :: alpha }
> >  instance Typeable (Unsafe alpha) where
> >typeOf _ = typeOf ()
> >
> >  pseudoSafeCoerce :: alpha -> Maybe beta
> >  pseudoSafeCoerce = fmap unUnsafe . cast . Unsafe
> >
> > Note that
> >
> >  pseudoSafeCoerce = Just . unsafeCoerce
> >
> >> but I regard them
> >> as a last resort in certain situations.
> >> Still, in those situations they are very useful.
> >
> > But I would agree with both of these.  As long as you *derive* Typeable.
> >
> > jcc
> >
> >
> > ___
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
> 
> 

> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is using Data.Dynamic considered a no-go?

2009-02-12 Thread Don Stewart
bugfact:
> Haskell seems to have pretty strong support for dynamic casting using
> Data.Typeable and Data.Dynamic.
> 
> All kinds of funky dynamic programming seems to be possible with these
> "hacks". 
> 
> Is this considered as being as bad as - say - unsafePerformIO? What kind of
> evil is lurking here?

Inefficiencies and runtime errors?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Google Summer of Code 2009

2009-02-12 Thread Don Stewart
gwern0:
> On Thu, Feb 12, 2009 at 11:49 AM, John Lato  wrote:
> > Johan Tibell wrote:
> >> On Thu, Feb 12, 2009 at 2:12 AM, Felipe Lessa  
> >> wrote:
> >>> Do we already have enough information to turn
> >>> http://okmij.org/ftp/Haskell/Iteratee/ into a nice, generic, cabalized
> >>> package? I think Iteratees may prove themselves as useful as
> >>> ByteStrings.
> >>
> >> I still haven't figured out what the "correct" definition of Iteratee
> >> would look like. The Iteratee code that Oleg wrote seems to have the
> >> structure of some kind of "two level" monad. I think that's the reason
> >> for the frequent occurrences of >>== and liftI in the code. There
> >> seems to be some things we yet have to discover about Iteratees.
> >>
> >
> > I concur.  I've recently been involved with several discussions on
> > this topic, and there are some issues that remain.  The "two level
> > monad" part doesn't bother me, but I think the type should be slightly
> > more abstract and I'm not sure of the best way to do so.  IMO liftI is
> > used more because of Oleg's particular style of coding than anything
> > else.  I don't think it need be common in user code, although it may
> > be more efficient.
> >
> > I think that, if a GSOC project were to focus on Iteratees, it would
> > need to look at issues like these.  I can't judge as to whether this
> > is an appropriate amount of work for GSOC, however simply packaging
> > and cabal-izing Oleg's Iteratee work (or Johan's, or my own) is likely
> > of too small a scope.
> >
> > John Lato
> 
> I agree. Just packaging and cabalizing something is likely not a
> SoC-worthy project. (This is why the 'cabalize Wash' suggestion will
> never make it, for example.) In general, cabalizing seems to be either
> pretty easy (most everything I've cabalized) or next to impossible
> (gtk2hs, ghc). The former are too trivial for SoC, and the latter
> likely are impossible without more development of Cabal - at which
> point it'd be more correct to call it a Cabal SoC and not a cabalizing
> SoC.

Yes, "cabalising" was more of a priority when we only had 10 libraries :)

So in general, think hard about missing capabilities in Haskell:

* tools
* libraries
* infrastructure

that benefit the broadest number of Haskell users or developers.

Another route is to identify a clear niche where Haskell could leap
ahead of the competition, with a small investment.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-12 Thread Don Stewart
gtener:
> On Thu, Feb 12, 2009 at 11:36, Malcolm Wallace
>  wrote:
> > Gwern Branwen  wrote:
> >
> >> * A GUI interface to Darcs
> >> (http://hackage.haskell.org/trac/summer-of-code/ticket/17);
> >
> > I wonder whether darcs ought to apply to be a GSoC mentoring
> > organisation in its own right this year?  It would be good to attempt to
> > get a couple of dedicated slots for darcs only (in addition to any that
> > haskell.org may get).
> >
> >> * Optimization of containers
> >> (http://hackage.haskell.org/trac/summer-of-code/ticket/1549). Would
> >> benefit every Haskell user very quickly.
> >
> > This was Jamie Brandon's GSoC project last year, and although that is
> > not yet in wide use, I suspect there is very little extra effort needed
> > to get it out there into the average Haskell user's hands.
> >
> >> * XMonad compositing support
> >> (http://hackage.haskell.org/trac/summer-of-code/ticket/1548).
> >
> > Maybe XMonad should also think about whether to apply to GSoC in their
> > own right as a mentoring org?  As a project, it seems to have a lot of
> > life independent of the Haskell community.
> >
> 
> By the way: I think it may be worthwile to contact Google to point out
> the recent growth of Haskell community. I don't know on what basis
> they assign the slots, but it may be beneficial to do so.

In the past it has been based on scale of mentors, proposals and
students. If we're under-allocated, that can sometimes be addressed, however.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-12 Thread Don Stewart
Malcolm.Wallace:
> Gwern Branwen  wrote:
> 
> > * A GUI interface to Darcs
> > (http://hackage.haskell.org/trac/summer-of-code/ticket/17);
> 
> I wonder whether darcs ought to apply to be a GSoC mentoring
> organisation in its own right this year?  It would be good to attempt to
> get a couple of dedicated slots for darcs only (in addition to any that
> haskell.org may get).
> 
> > * Optimization of containers
> > (http://hackage.haskell.org/trac/summer-of-code/ticket/1549). Would
> > benefit every Haskell user very quickly.
> 
> This was Jamie Brandon's GSoC project last year, and although that is
> not yet in wide use, I suspect there is very little extra effort needed
> to get it out there into the average Haskell user's hands.
> 
> > * XMonad compositing support
> > (http://hackage.haskell.org/trac/summer-of-code/ticket/1548).
> 
> Maybe XMonad should also think about whether to apply to GSoC in their
> own right as a mentoring org?  As a project, it seems to have a lot of
> life independent of the Haskell community.

I agree: the big projects can stand on their own.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell.org GSoC

2009-02-12 Thread Don Stewart
g9ks157k:
> Am Mittwoch, 11. Februar 2009 18:51 schrieb Don Stewart:
> > For example, if all the haddocks on hackage.org were a wiki, and
> > interlinked, every single package author would benefit, as would all
> > users.
> 
> You mean, everyone should be able to mess about with my documentation? This 
> would be similar to give everyone commit rights to my repositories or allow 
> everyone to edit the code of my published libraries. What is the good thing 
> about that?

No one said anything about unrestricted commit rights ... we're not
crazy ... what if it were more like, say, RWH's wiki ... where comments
go to editors to incorporate ...

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[4]: [Haskell] Google Summer of Code

2009-02-11 Thread Don Stewart
bulat.ziganshin:
> Hello Don,
> 
> Thursday, February 12, 2009, 12:23:16 AM, you wrote:
> 
> > Check out what GHC is doing these days, and come back with an analysis
> > of what still needs to be improved.  We can't wait to hear!
> 
> can you point me to any haskell code that is as fast as it's C
> equivalent?

You should do your own benchmarking!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-11 Thread Don Stewart
Thanks for the analysis; this clarifies things greatly.
Feasibility and scope are a big part of how we determine what projects to
work on.

gtener:
> On Wed, Feb 11, 2009 at 21:00, Jamie  wrote:
> > Hi Gwern,
> >
> > On Wed, 11 Feb 2009, Gwern Branwen wrote:
> >
> >>> I just checked H.263 and it looks like it does not require patent
> >>> licensing
> >>> at all (it is created by ITU-T Video Coding Experts Group (VCEG)) so one
> >>> can
> >>> write H.263 in Haskell and release freely without patent licensing
> >>> issues.
> >>>
> >>> So writing H.263 in Haskell could be a good GSoC project.  One mentioned
> >>> that GHC produce slow code, well H.263 could be a good test case to
> >>> improve
> >>> GHC optimization over time.  In The Computer Language Benchmarks Game,
> >>> Haskell has some catching up to do. :)
> >>
> >> It does sound like a reasonably discrete task, and it sounds like you have
> >> a use for it; but I wonder if it's doable in a single summer?
> >
> > I have no idea, I have not dug deeper into H.263 C source code but I guess
> > it should be quite trivial as it is a black box with video frame input and
> > output with several parameters for encoding and just frame in/out for
> > decoding.
> 
> I didn't dig into the source code either, but I've just skimmed
> through Wikipedia page on that codec:
> http://en.wikipedia.org/wiki/H.263
> and in seems far from trivial. Anything that has 23 annexes is likely
> to be quite complex :-)
> Therefore I seriously doubt chances for success of such project. I did
> some checks: in libavcodec at least following files consist of
> implementation of H.263:
> 
> h263.c h263data.h h263dec.c  h263.h
> h263_parser.c  h263_parser.h
> 
> How many lines are there?
> 
> [te...@laptener libavcodec]$ wc h263*
>   6295  19280 218932 h263.c
>314   2117  10423 h263data.h
>816   2171  26675 h263dec.c
> 46217   2032 h263.h
> 91282   2361 h263_parser.c
> 29165   1047 h263_parser.h
>   7591  24232 261470 razem
> 
> In Haskell project one would also need to provide some additional
> utility code which is part of libavcodec.
> Fast grep shows the tip of an iceberg:
> 
> [te...@laptener libavcodec]$ grep include h263* | grep -v "<"
> h263.c:#include "dsputil.h"
> h263.c:#include "avcodec.h"
> h263.c:#include "mpegvideo.h"
> h263.c:#include "h263data.h"
> h263.c:#include "mpeg4data.h"
> h263.c:#include "mathops.h"
> h263data.h:#include "mpegvideo.h"
> h263dec.c:#include "avcodec.h"
> h263dec.c:#include "dsputil.h"
> h263dec.c:#include "mpegvideo.h"
> h263dec.c:#include "h263_parser.h"
> h263dec.c:#include "mpeg4video_parser.h"
> h263dec.c:#include "msmpeg4.h"
> h263.h:#include "config.h"
> h263.h:#include "msmpeg4.h"
> h263_parser.c:#include "parser.h"
> h263_parser.h:#include "parser.h"
> 
> 
> 
> Bottom line: I don't think it is reasonable to assume anyone without
> previous knowledge of H.263 is able to fit that project into one
> summer. But! It's Haskell community, and people here see the
> impossible happen from time to time ;-)
> 
> All best
> 
> Christopher Skrzętnicki
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
> 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[4]: [Haskell] Google Summer of Code

2009-02-11 Thread Don Stewart
bulat.ziganshin:
> Hello John,
> 
> Wednesday, February 11, 2009, 11:55:47 PM, you wrote:
> 
> >> it's exactly example of tight loop. and let's compare HP code written
> >> for this task with analogous code written in C. i expect that haskell
> >> code is much more complex
> 
> > I think it's fair to point out that tight loops are nearly always the
> > biggest bottlenecks of code, so generating good code for tight loops
> > is pretty important.
> 
> it's important, but doesnt't make whole game. and while i said that
> ghc improved tight loops compilation, this doesn't mean that it
> becomes the same as in gcc. it just started to put loop variables into
> register
> 
> > And ghc is still making large improvements with
> > each release, whereas gcc isn't likely to get significantly better.
> 
> yes, it's close to perfect
> 
> >> afaiu, it's 20-line equivalent of 2-line C code:
> >>
> >> for (i=...)
> >>  a[i] = b[i]
> >>
> >> does this need any more comments?
> 
> > I think you've misunderstood my code.  Look at Oleg's IterateeM and
> > see if you think that's really all it's doing.
> 
> what else does the code that you've citated? you are wrote that it
> just copies 16-bit words into doubles
> 
> > Use libsndfile for comparison.  http://www.mega-nerd.com/libsndfile/.
> 
> it's one method of miscomparing haskell to C - compare hand-tuned
> haskell code with some C code which may be just not optimal. ig you
> want to make fair comparison, you should write best code in both
> languages
> 
> > I actually haven't looked at the code, although it's very highly
> > regarded in the audio community (and I've seen the author post on this
> > list on occasion).  Using libsndfile-1.0.18:
> >  wc wav.c
> > 17867833   57922 wav.c
> 
> > compared to my source:
> > wc Wave.hs
> >  4122215   15472 Wave.hs
> 
> > And there you are.  I will admit that I have implemented the entire
> > wave spec, but only because of lack of time.
> 
> when you don't need speed, you may write more compact code in haskell
> than in C. so the best way is to split your task into
> speed-critical part and the rest and use C++ for the first and Haskell
> for the second


I think what's frustrating about this continued dialogue with Bulat re.
performance is that,

a) the experience he bases his remarks upon is from several years ago
b) he's making blanket generic statements, using that old data
c) a lot of people have written a lot of fast code without trouble
d) he's not acknowledging the great improvements over this time

So it's very difficult to have these conversations. They're stuck in the
same old pattern.

Meanwhile, GHC keeps getting smarter and smarter. 

Bulat: time to update your results! 

Check out what GHC is doing these days, and come back with an analysis
of what still needs to be improved.  We can't wait to hear!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-11 Thread Don Stewart
bulat.ziganshin:
> Hello Don,
> 
> Wednesday, February 11, 2009, 8:28:33 PM, you wrote:
> 
> >> anyway it's impossible due to slow code generated by ghc
> 
> > Been a long time since you did high perf code -- we routinely now write
> > code that previously was considered not feasible.
> 
> which is still slower than C and need more time to write
> 
> > However, I would say it needs an optimisation expert, yes, in any
> > language.
> 
> there are experts, includingyou, in making haskell specific code as
> fast as possible, but i don't know anyone using haskell to write
> high-performance code. so you ask for non-existing specialists

We're doing it at Galois regularly. Check out the blog.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC development

2009-02-11 Thread Don Stewart
andrewcoppin:
> OK, so I have a small question.
>
> I was just wondering what the current state of development with GHC is.  
> So, I had a look at the developer wiki. Unfortunately, as best as I can  
> tell, most of the status pages haven't been updated in many months.  
> (Most of them still talk about what will or won't be in 6.10.1, which  
> has been *released* for a while now.) What's the best way to find out  
> what the "real" state of affairs is? What are the developers really  
> working on? What's on hold? How far have people got with things? Etc.
>
> (Yes, I know. I'm nosey...)
>

GHC questions should go to glasgow-haskell-users,


http://www.haskell.org/pipermail/glasgow-haskell-users/2009-February/thread.html

To really see what is going on, look at the commit list and the bug
tracker,

http://www.haskell.org/pipermail/cvs-ghc/2009-February/thread.html

Bug tracker,


http://hackage.haskell.org/trac/ghc/query?status=new&status=assigned&status=reopened&group=priority&type=bug&order=id&desc=1

These links are trivial to find from the GHC home page. 

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell.org GSoC

2009-02-11 Thread Don Stewart
d:
> Hi,
>
> I noticed last year Haskell.org was a mentoring organization for  
> Google's Summer of Code, and I barely noticed some discussion about it  
> applying again this year :)
>
> I participated for GCC in 2008 and would like to try again this year;  
> while I'm still active for GCC and will surely stay so, I'd like to see  
> something new at least for GSoC.  And Haskell.org would surely be a  
> very, very nice organization.
>
> Since I discovered there's more than just a lot of imperative languages  
> that are nearly all the same, I love to do some programming in Prolog,  
> Scheme and of course Haskell.  However, so far this was only some toy  
> programs and nothing "really useful"; I'd like to change this (as well  
> as learning more about Haskell during the projects).
>
> Here are some ideas for developing Haskell packages (that would  
> hopefully be of general use to the community) as possible projects:
>
> - Numerics, like basic linear algebra routines, numeric integration and  
> other basic algorithms of numeric mathematics.

I think a lot of the numerics stuff is now covered by libraries (see
e.g. haskell-blas, haskell-lapack, haskell-fftw)
 
> - A basic symbolic maths package; I've no idea how far one could do this  
> as a single GSoC project, but it would surely be a very interesting  
> task.  Alternatively or in combination, one could try to use an existing  
> free CAS package as engine.

Interesting, but niche, imo.
 
> - Graphs.
>
> - Some simulation routines from physics, though I've not really an idea  
> what exactly one should implement here best.

True graphs (the data structure) are still a weak point! There's no
canonical graph library for Haskell. 
 

> - A logic programming framework.  I know there's something like that for  
> Scheme; in my experience, there are some problems best expressed  
> logically with Prolog-style backtracking/predicates and unification.  
> This could help use such formulations from inside a Haskell program.  
> This is surely also a very interesting project.

Interesting, lots of related work, hard to state the benefits to the
community though.
 
> What do you think about these ideas?  I'm pretty sure there are already  
> some of those implemented, but I also hope some would be new and really  
> of some use to the community.  Do you think something would be  
> especially nice to have and is currently missing?

Think about how many people would benefit.

For example, if all the haddocks on hackage.org were a wiki, and
interlinked, every single package author would benefit, as would all
users. 

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-11 Thread Don Stewart
bulat.ziganshin:
> Hello Jamie,
> 
> Wednesday, February 11, 2009, 5:54:09 AM, you wrote:
> 
> > Seems like it is ok to write H.264 in Haskell and released via GPL
> > license?
> 
> anyway it's impossible due to slow code generated by ghc
> 

Been a long time since you did high perf code -- we routinely now write
code that previously was considered not feasible.

However, I would say it needs an optimisation expert, yes, in any
language.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-11 Thread Don Stewart
gwern0:
> (The following is a quasi essay/list of past Summer of Code projects;
> my hope is to guide thinking about what Summer of Code projects would
> be good to pick, and more specifically what should be avoided.
> If you're in a hurry, my conclusions are at the bottom.
> The whole thing is written in Markdown; for best results pass it
> through Pandoc or view it via your friendly local Gitit wiki.)
> 

Thanks for the write up!

We explicitly pushed harder in 2008 to clarify and simplify the goals of
the projects, to ensure adequate *prior Haskell experience*, and to
focus on libraries and tools that directly benefit the community.

And our success rate was much higher.

So: look for things that benefit the largest number of Haskell
developers and users, and from students with proven Haskell development
experience. You can't learn Haskell from zero on the job, during SoC.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Gtk2HS 0.10.0 Released

2009-02-10 Thread Don Stewart
Well done!

Our flagship GUI bindings... Go team!

-- Don

pgavin:
> Hi everyone,
>
> Oh, dear... it seems I've forgotten how to spell "cafe", and sent this  
> message to haskell-c...@haskell.org the first time around.  I resent it  
> to all the lists again (just to make sure everyone interested receives  
> it), so I apologize for any duplicated messages you might have received.  
>  In any case...
>
> I'd like to release the announcement of Gtk2HS 0.10.0.  A lot of new  
> stuff has gone into this release, including:
>
> - Support for GHC 6.10
> - Bindings to GIO and GtkSourceView-2.0
> - Full switch to the new model-view implementation using a Haskell model
> - Support for many more model-based widgets such as IconView and an  
> updated binding for ComboBox
> - Full Drag-and-Drop support
> - Better support for Attributes in Pango
> - Replaced Event for EventM monad, thereby improving efficiency and  
> convenience
> - Functions for interaction between Cairo and Pixbuf drawing
> - Lots of bug fixes, code cleanups, and portability improvements
>
> With this release, the bindings to GnomeVFS and GtkSourceView-1.0 have  
> been deprecated.  The TreeList modules have been deprecated from the  
> Gtk+ bindings.
>
> Source and Win32 binaries are available at:
>
>
> https://sourceforge.net/project/showfiles.php?group_id=49207&package_id=42440&release_id=659598
>
> Thanks to everyone who submitted bug fixes and features this time around!
>
> Thanks,
> Peter Gavin
> Gtk2HS Release Manager
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GMP on Mac OS X linked statically by default

2009-02-10 Thread Don Stewart
leimy2k:
> Was there a reason for this?  If so, it'd be nice if the package that was 
> build
> explained why... otherwise it feels kind of arbitrary, and would be nice if
> there was documentation available to make it link dynamically in case someone
> didn't want to LGPL their program.
> 
> Anyone know the steps to make it link dynamically?
> 

Here's how we do it on Windows. The Mac should be far easier,

http://haskell.forkio.com/gmpwindows

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Asking the GHC garbage collector to run

2009-02-10 Thread Don Stewart
mads_lindstroem:
> Hi all,
> 
> Is it possible to ask the GHC garbage collector to run ? Something like
> a collectAllGarbage :: IO() call.

System.Mem.performGC
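
For example (nothing fancier than):

    import System.Mem (performGC)

    main :: IO ()
    main = do
        print (sum [1 .. 1000000 :: Int])   -- allocate a bit
        performGC                           -- ask the RTS to run a collection now
        putStrLn "done"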

iirc,
Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Google Summer of Code 2009

2009-02-10 Thread Don Stewart
Malcolm.Wallace:
> Gentle Haskellers,
> 
> The Google Summer of Code will be running again this year.  Once again,
> haskell.org has the opportunity to bid to become a mentoring
> organisation.  (Although, as always, there is no guarantee of
> acceptance.)
> 
> If you have ideas for student projects that you think would benefit the
> Haskell community, now is the time to start discussing them on mailing
> lists of your choice.  We especially encourage students to communicate
> with the wider community: if you keep your ideas private, you have a
> much worse chance of acceptance than if you develop ideas in
> collaboration with those who will be your "customers", end-users, or
> fellow-developers.  This is the open-source world!
> 

And I'll just note that since December we've been running a proposal
submission site here, where you can vote and comment on ideas,

http://www.reddit.com/r/haskell_proposals/top/

A great place to suggest ideas!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: The Haskell re-branding exercise

2009-02-09 Thread Don Stewart
marlowsd:
> Sterling Clover wrote:
>> IP based limitations are a terrible idea. Multiple users can be and  
>> often are behind the same IP if they're in some sort of intranet, be it 
>> corporate, academic, or simply multiple home computers. Mail-based  
>> authentication can be screwed with, sure, but it's also very easy to  
>> notice this (as opposed to ip nonsense) through simply eyeballing the  
>> results. There's no general everywhere way to prevent vote fraud.  
>> However, if we make it even require a mild bit of thought, that should  
>> be sufficient in this case, as there won't be enough votes to prevent  
>> some sort of rough eyeball-based check of the results, and if there 
>> are, then that's a sign of fraud for sure! Furthermore, there's very 
>> little incentive for someone to go the extra mile here, as we're voting 
>> for a haskell logo, and not, e.g., giving away ten thousand dollars.  
>> Furthermore, since I assume we'll only be presenting reasonable logos,  
>> there's not even some room for pranksters to stage a "write-in" of some 
>> gag slogan.
>
> I suggest we do voting by email, and restrict voting to those who have 
> ever posted on haskell-cafe before 1 Jan 2009.  We could then have an  
> auto-confirmation scheme similar to mailing list sign-up where the  
> confirmation message is sent back to the originator to confirm their  
> identity, containing a verification link to click on.
>
> I realise there are flaws in this, but it seems to be (a) cheap to  
> implement and participate in, and (b) good enough.

Seems good enough. Who's going to tally the votes?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Efficient string output

2009-02-09 Thread Don Stewart
ketil:
> 
> Hi,
> 
> I'm currently working on a program that parses a large binary file and
> produces various textual outputs extracted from it.  Simple enough.
> 
> But: since we're talking large amounts of data, I'd like to have
> reasonable performance.  
> 
> Reading the binary file is very efficient thanks to Data.Binary.
> However, output is a different matter.  Currently, my code looks
> something like:
> 
>   summarize :: Foo -> ByteString
>   summarize f = let f1 = accessor f
> f2 = expression f
>:
> in B.concat [f1,pack "\t",pack (show f2),...]
> 
> which isn't particularly elegant, and builds a temporary ByteString
> that usually only get passed to B.putStrLn.  I can suffer the
> inelegance were it only fast - but this ends up taking the better part
> of the execution time.

Why not use Data.Binary for output too? It is rather efficient at
output -- using a continuation-like system to fill buffers gradually.
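
Something like this, say -- a sketch only, with a made-up 'Foo' standing in
for your real record (Put sits on top of binary's Builder, which is where the
gradual buffer filling happens):

    import Data.Binary.Put
    import qualified Data.ByteString.Lazy.Char8 as L

    data Foo = Foo { name :: L.ByteString, count :: Int }

    summarize :: Foo -> Put
    summarize f = do
        putLazyByteString (name f)
        putWord8 0x09                                -- '\t'
        putLazyByteString (L.pack (show (count f)))
        putWord8 0x0a                                -- '\n'

    main :: IO ()
    main = L.putStr . runPut $
        mapM_ summarize [Foo (L.pack "foo") 1, Foo (L.pack "bar") 2]

That avoids materialising one intermediate ByteString per record: runPut gives
you a single lazily generated stream, produced chunk by chunk, which you hand
straight to putStr.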

--   Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-08 Thread Don Stewart
> >> Furthermore, since I assume we'll only be presenting reasonable logos,
> >> there's not even some room for pranksters to stage a "write-in" of some
> >> gag slogan.
> >
> > Right, only a subset of previously submitted ones.
> >
> > -- Don
> 
> So does this mean no 'haskell YEEHH!'?

Isn't that already the underground official logo?

   http://www.facebook.com/pages/Haskell/56088385002

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-08 Thread Don Stewart
s.clover:
> IP based limitations are a terrible idea. Multiple users can be and  
> often are behind the same IP if they're in some sort of intranet, be it 
> corporate, academic, or simply multiple home computers. Mail-based  
> authentication can be screwed with, sure, but it's also very easy to  
> notice this (as opposed to ip nonsense) through simply eyeballing the  
> results. There's no general everywhere way to prevent vote fraud.  
> However, if we make it even require a mild bit of thought, that should be 
> sufficient in this case, as there won't be enough votes to prevent some 
> sort of rough eyeball-based check of the results, and if there are, then 
> that's a sign of fraud for sure! Furthermore, there's very little 
> incentive for someone to go the extra mile here, as we're voting for a 
> haskell logo, and not, e.g., giving away ten thousand dollars. 

Exactly. Let's not wander down to the bikeshed :)

> Furthermore, since I assume we'll only be presenting reasonable logos, 
> there's not even some room for pranksters to stage a "write-in" of some 
> gag slogan.

Right, only a subset of previously submitted ones.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: X Haskell Bindings 0.1

2009-02-08 Thread Don Stewart
aslatter:
> I'd like to announce a version bump for the X Haskell Bindings (XHB)
> library, to 0.1.* from 0.0.*.
> 
> The goal of XHB is to provide a Haskell implementation of the X11 wire
> protocol, similar in spirit to the X protocol C-language Binding
> (XCB).
> 
> On Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xhb
> 
> This release focuses on making the API a bit friendlier:
> 
>  + 'type BOOL = Word8' has been replaced in the API by Prelude.Bool
> 
>  + type synonyms BYTE, CARD8, CARD16 and CARD32 for the Data.Word
> types have been eliminated
> 
>  + type synonyms INT8, INT16 and INT32 for the Data.Int types have
> been eliminated
> 
>  + Previously, all protocol replies were represented by their own
> distinct data type.  Now, if the reply to a request only includes a
> single field, the request returns that field directly.
> 
>  In more concrete terms:
> 
> > internAtom :: Connection -> InternAtom -> IO (Receipt InternAtomReply)
> 
> becomes:
> 
> > internAtom :: Connection -> InternAtom -> IO (Receipt ATOM)
> 
> Further work to make the API more "Haskelly" is ongoing.
> 
> Related projects:
> 
> X C Bindings: http://xcb.freedesktop.org/

Well done!

Have a distro package,

http://aur.archlinux.org/packages.php?ID=23765

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to link statically with a c lib ?

2009-02-08 Thread Don Stewart
noteed:
> 2009/2/8 Don Stewart :
> > noteed:
> >> Hi,
> >>
> >> I'm writing bindings for the Tiny C Compiler.
> >> It seems that tcc provide a libtcc.a but no libtcc.so.
> >>
> >> In my cabal file, I have
> >>
> >>   extra-libraries: dl, tcc
> >>
> >> but when using the generated haskell module,
> >> I have the following message :
> >>
> >>   ⟨...@jones samples⟩ ghc -e "main" Test.hs
> >>   : : can't load .so/.DLL for: tcc
> >> (libtcc.so: cannot open shared object file: No such file or directory)
> >>
> >> How can I generate a module linked statically against libtcc ?
> >
> > Without a .so you can't load it in ghci, but you can compile it with ghc.
> >
> >  ghc --make Test.hs
> 
> Ok but what should be written in the cabal file ?
> 
> I build a .so of libtcc so it works for now.
> 
> Before I put it on hackage, maybe I can get a review of it, if
> anything is fundamentaly wrong ?
> It is located at http://github.com/noteed/tcc/tree/master

In the .cabal file should only be:

extra-libraries: tcc

I think.
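
That is, something like this in the library stanza, plus extra-lib-dirs
if libtcc.a lives somewhere non-standard (the module name and path below
are just placeholders):

    library
      exposed-modules: Foreign.TCC
      build-depends:   base
      extra-libraries: tcc
      extra-lib-dirs:  /usr/local/lib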
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to link statically with a c lib ?

2009-02-08 Thread Don Stewart
noteed:
> Hi,
> 
> I'm writing bindings for the Tiny C Compiler.
> It seems that tcc provide a libtcc.a but no libtcc.so.
> 
> In my cabal file, I have
> 
>   extra-libraries: dl, tcc
> 
> but when using the generated haskell module,
> I have the following message :
> 
>   ⟨...@jones samples⟩ ghc -e "main" Test.hs
>   : : can't load .so/.DLL for: tcc
> (libtcc.so: cannot open shared object file: No such file or directory)
> 
> How can I generate a module linked statically against libtcc ?

Without a .so you can't load it in ghci, but you can compile it with ghc.

 ghc --make Test.hs

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] any implementation of ACIO?

2009-02-08 Thread Don Stewart
jianzhou:
> Hi,
>  
> http://www.haskell.org/pipermail/haskell-cafe/2004-November/007715.html
> mentioned an interesting (A)ffine and (C)entral IO. Are there any packages
> or extensions to support ACIO in Haskell?

Not that I know of. 
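
If someone wanted to start one, the shape would presumably be a newtype
over IO that only exports the affine, central actions -- just a sketch of
the idea, not an existing package:

    import Control.Concurrent.MVar (MVar, newMVar)

    -- Actions whose effects are central (commute with all other effects)
    -- and affine (safe to discard).
    newtype ACIO a = ACIO { runACIO :: IO a }

    -- Allocating a fresh MVar is the usual example of an ACIO action.
    newMVarACIO :: a -> ACIO (MVar a)
    newMVarACIO = ACIO . newMVar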

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
bulat.ziganshin:
> Hello Don,
> 
> Saturday, February 7, 2009, 8:20:23 PM, you wrote:
> 
> > We need a voting site set up. There was some progress prior to the end
> > of the year. Updates welcome!
> 
> i think that there are a lot of free voting/survey services available.
> the last one i went through was LimeSurvey available for any SF
> project and on separate site too
> 
> http://apps.sourceforge.net/trac/sitedocs/wiki/Hosted%20Apps
> https://www.limeservice.com/
> 

Before the new year's break, the progress we made towards deciding on a
voting process was,

http://groups.google.com/group/fa.haskell/msg/5d0ad1a681b044c7

Eelco implemented a demo condorcet voting system in HAppS.

He then asked for help with some decisions:

* Limit voting, if so how?  Email confirmation, IP based, vote once,  once 
per day? 
* Maybe don't show the results until the contest is over? 

Eelco, can we do simple email-based confirm to encourage people to vote
only once, and can we keep the results closed until the vote process is
over?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
gwern0:
> 2009/2/7 Don Stewart :
> > Quite so, biased by the fact that they dropped off the page.
> >
> > I'm not saying reddit is unsuitable for communal decision making -- I've
> > thought hard about this -- just that isn't perfect, and this isn't
> > really its purpose. It would make a good backup if we can't find a
> > proper system.
> >
> > -- Don
> 
> And how long do we wait? Is a month long enough? 2 months? Do we just
> make a note on our calendars for February 2010 - 'get moving on that
> logo contest thing'?

Help identifying and implementing a voting process is very welcome.
Snarky comments are not.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
gwern0:
> On Sat, Feb 7, 2009 at 3:04 PM, Don Stewart  wrote:
> > gwern0:
> >> On Sat, Feb 7, 2009 at 1:34 PM, Don Stewart  wrote:
> >> >Oh, we had a long discussion about the need for condorcet voting,
> >> >not a system like the reddit which is prone to abuse.
> >> >
> >> >Also, it would be good to have the images inline.
> >>
> >> Perfect, please meet better. Better, perfect. Now get along you two!
> >>
> >> Since January 1st, we could've had hundreds or thousands of votes and
> >> easily compensated for any abuse.
> >
> > Unfortunately, reddit isn't a suitable voting site, as submissions decay
> > over time, disappearing off the page after a day or two. It does have
> > up and down mods, but is in no other way a voting site.
> >
> > -- Don
> 
> That's how the what's hot works, I understand. But it seems to me that
> Top works just fine for vote tallying purposes eg.
> http://www.reddit.com/r/haskell/top/ lists quite a few posts posted
> months ago (4 months seems to be the oldest).

Quite so, biased by the fact that they dropped off the page.

I'm not saying reddit is unsuitable for communal decision making -- I've
thought hard about this -- just that it isn't perfect, and this isn't
really its purpose. It would make a good backup if we can't find a
proper system.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
gwern0:
> On Sat, Feb 7, 2009 at 1:34 PM, Don Stewart  wrote:
> >Oh, we had a long discussion about the need for condorcet voting,
> >not a system like the reddit which is prone to abuse.
> >
> >Also, it would be good to have the images inline.
> 
> Perfect, please meet better. Better, perfect. Now get along you two!
> 
> Since January 1st, we could've had hundreds or thousands of votes and
> easily compensated for any abuse.

Unfortunately, reddit isn't a suitable voting site, as submissions decay
over time, disappearing off the page after a day or two. It does have
up and down mods, but is in no other way a voting site.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Semantic web

2009-02-07 Thread Don Stewart
dev:
> Anybody implementing rdf or owl  stuff in haskell?  Seems like a natural fit.

http://www.ninebynine.org/RDFNotes/Swish/Intro.html

Needs moving to Hackage.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
Oh, we had a long discussion about the need for condorcet voting,
not a system like the reddit which is prone to abuse.

Also, it would be good to have the images inline.

wagner.andrew:
> Um, ok. Glad we could "discuss" it
> 
> On Sat, Feb 7, 2009 at 1:12 PM, Don Stewart  wrote:
> 
> wagner.andrew:
> >
> > We need a voting site set up. There was some progress prior to the
> end
> > of the year. Updates welcome!
> >
> > -- Don
> >
> > Can't we just use the haskell proposal reddit for this?
> 
> Hmm... not ideal. Would make a backup should all else fail.
> 
> 
> 
> 
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
wagner.andrew:
> 
> We need a voting site set up. There was some progress prior to the end
> of the year. Updates welcome!
> 
> -- Don
> 
> Can't we just use the haskell proposal reddit for this?
  
Hmm... not ideal. Would make a backup should all else fail.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] The Haskell re-branding exercise

2009-02-07 Thread Don Stewart
paul:
> Paul Johnson wrote:
>> A call has gone out  
>>  
>> for a new logo for Haskell.  Candidates (including a couple  
>>   
>> of mine  
>> ) are  
>> accumulating here  
>> .   
>> There has also been a long thread on the Haskell Cafe mailing list.
>>
> So what's happening about this?


We need a voting site set up. There was some progress prior to the end
of the year. Updates welcome!

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fastest regex package?

2009-02-06 Thread Don Stewart
allbery:
> On 2009 Feb 5, at 10:26, Eugene Kirpichov wrote:
>> My benchmark (parsing a huge logfile with a regex like "GET
>> /foo.xml.*fooid=([0-9]++).*barid=([0-9]++)") shows that plain PCRE is
>> the fastest one (I tried PCRE, PCRE-light and TDFA; DFA can't do
>> capturing groups at all, TDFA was abysmally slow (about 20x slower
>> than PCRE), and it doesn't support ++), but maybe have I missed any
>> blazing-fast package?
>
>
> I think dons (copied) will want to hear about this; pcre-light is  
> supposed to be a fast lightweight wrapper for the PCRE library, and if  
> it's slower than PCRE then something is likely to be wrong somewhere.

Shouldn't be slower (assuming you're using bytestrings).
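
For reference, the bytestring path through pcre-light looks roughly like
this (a sketch only; the log file name is made up):

    import qualified Data.ByteString.Char8 as B
    import Text.Regex.PCRE.Light (compile, match)

    main :: IO ()
    main = do
        let re = compile (B.pack "fooid=([0-9]+).*barid=([0-9]+)") []
        ls <- fmap B.lines (B.readFile "access.log")
        -- match returns Just (full match : captured groups) on success
        print [gs | Just gs <- map (\l -> match re l []) ls]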

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are you using Haskell on the job?

2009-02-06 Thread Don Stewart
kirk.martinez:
> Hello, fellow Haskell hackers!  I am writing a term paper on Haskell in
> Business, and while I have gathered a lot of good information on the Internet,
> I would really like direct feedback from software professionals who have used
> Haskell in a business setting.  I would really appreciate a few minutes of 
> your
> time to provide insights gained from applying Haskell in the real world.  Who
> knows, this could lead to a greater adoption of Haskell in the business
> community!
> 
>  
> 
> Rather than a list of Haskell's technical strengths (purity, laziness,
> composition, etc.), I want to get a sense of the process leading up to the
> decision to use Haskell for a given project and the insights gained during and
> after completion.  I am particularly interested in questions related to
> business value:
> 
>  
> 
>   ● What were the pros and cons you considered when choosing a language?  Why
> FP?  Why Haskell?
>   ● What aspects of your problem domain were most critical to making that
> choice?
>   ● How has using Haskell given you a competitive advantage?
>   ● How is the software development lifecycle positively/negatively affected 
> by
> using Haskell as opposed to a different language?
>   ● How did you convince management to go with a functional approach?
>   ● Was the relative scarcity of Haskell programmers a problem?  If so, how 
> was
> it managed?
>   ● Would you choose to use Haskell again for a similar project given what you
> know now?
> 
>  
> 
> The best responses will not simply list answers, but also provide background
> and a bit of narrative on the project and insights gained.  Feel free to reply
> to the list, or just to me personally if you prefer.  My email is below.
> 

I would also suggest you contact speakers from past CUFP meetings,
who've written all sorts of interesting summaries on the use of
Haskell (and other FP langs) in industry.

http://cufp.galois.com/

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Happstack 0.1 Released!

2009-02-05 Thread Don Stewart
andrewcoppin:
> Jochem Berndsen wrote:
>> The HAppS project has been abandoned, see
>> http://groups.google.com/group/HAppS/msg/d128331e213c1031 .
>>
>> The Happstack project is intended to continue development. For more
>> details, see http://happstack.com/faq.html .
>>
>>   
> So we've got HAppS, Happstack, WASH, Turbinado, probably others... Does  
> anybody know how all these relate to each other? Where their strengths  
> and weaknesses lie?
>
> It's nice to have choice, but without knowing what you're choosing  
> between, it's hard to use it well.
>

A comparative analysis of the 10+ Haskell web frameworks would be
awesome.

happstack, wash, fastcgi.., turbinado, perpubplat, riviera, salvia,
kibro, ella, what was that one launched yesterday? *ah, yesod...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell tutorial for pseudo users?

2009-02-05 Thread Don Stewart
andrewcoppin:
> Deniz Dogan wrote:
>> Learn You a Haskell for Great Good (http://learnyouahaskell.com/)
>
> Mmm, interesting.
>
> Does anybody else think it would be neat if GHCi really did colourise  
> your input like that? (Or at least display the prompt in a different  
> colour to user input and program output?)

I wrote about this a while ago,

http://haskell.org/haskellwiki/GHCi_in_colour

Needs integration with ghc-api, and we're done.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANN: #haskell-in-depth IRC channel

2009-02-03 Thread Don Stewart
We explicitly want to avoid a newbie "trap".
See the summary of the discussion that led to the channel's creation:

http://haskell.org/haskellwiki/IRC_channel/Phase_2

-- Don

DekuDekuplex:
> On Wed, 04 Feb 2009 00:15:48 +, Philippa Cowderoy
>  wrote:
> 
> >[...]
> >
> >If you need to know how to use monads so you can do IO,
> >#haskell-in-depth isn't the place. On the other hand, if you want to
> >discuss how Haskell's monads compare to the category theory or what the
> >category theory can tell us about how individual monads relate to the
> >language as a whole, -in-depth is a good place! In particular, we're
> >hoping that the kind of category theory discussions that give the
> >mistaken impression you actually need to know CT will increasingly live
> >in #haskell-in-depth.
> >
> >We're not after a theory channel though - architectural discussion,
> >compiler implementation, possible type system extensions, library
> >design, all are good subjects.
> 
> Great work!  I look forward to participating sometime in the near
> future.
> 
> In that case, for people who need to know how to use monads so that
> they can do IO, why not create a #haskell-beginners channel?  I have
> occasionally read posts of some users who were hesitant to participate
> in #haskell until they learned enough to keep up with the discussions
> there.  If neither #haskell nor #haskell-in-depth is appropriate,
> perhaps they would feel more comfortable in a
> Haskell-beginners-specific channel?
> 
> -- Benjamin L. Russell
> -- 
> Benjamin L. Russell  /   DekuDekuplex at Yahoo dot com
> http://dekudekuplex.wordpress.com/
> Translator/Interpreter / Mobile:  +011 81 80-3603-6725
> "Furuike ya, kawazu tobikomu mizu no oto." 
> -- Matsuo Basho^ 
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Verifying Haskell Programs

2009-02-02 Thread Don Stewart
dbueno:
> On Mon, Feb 2, 2009 at 15:04, Don Stewart  wrote:
> > pocmatos:
> >> Hi all,
> >>
> >> Much is talked that Haskell, since it is purely functional is easier
> >> to be verified. However, most of the research I have seen in software
> >> verification (either through model checking or theorem proving)
> >> targets C/C++ or subsets of these. What's the state of the art of
> >> automatically verifying properties of programs written in Haskell?
> >>
> >
> > State of the art is translating subsets of Haskell to Isabelle, and
> > verifying them. Using model checkers to verify subsets, or extracting
> > Haskell from Agda or Coq.
> 
> Don, can you give some pointers to literature on this, if any?  That
> is, any documentation of a verification effort of Haskell code with
> Isabelle, model checkers, or Coq?
> 
> (It's not that I don't believe you -- I'd be really interested to read it!)


All on haskell.org,


http://haskell.org/haskellwiki/Research_papers/Testing_and_correctness#Verifying_Haskell_programs

And there's been work since I put that list together.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: 1,000 packages, so let's build a few!

2009-02-02 Thread Don Stewart
ganesh.sittampalam:
> Don Stewart wrote:
> 
> > GHC doesn't  bundle with cabal-install on any system.
> > 
> > What is needed is not for the GHC team to be doing Windows platform
> > packages, but for the Windows Haskell devs to build their own system,
> > as happens on all the Unices.  
> > 
> > Take GHC's release, wrap it up with native installers, throw in
> > useful libraries and executables like cabal. Done. 
> > 
> > It's not the GHC compiler team's job to build distro-specific bundles.
> > 
> > So, wind...@haskell.org anyone? Get the wiki going, get the set of
> > tasks created. 
> 
> Isn't the Haskell Platform going to do all this? Shouldn't interested
> people just help out there?
> 

The platform is a set of blessed libraries and tools. The distros will
still need to package that.

To do that for Windows, we're still going to need a windows packaging
team, along side Debian, Arch, Gentoo, Mac etc.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Verifying Haskell Programs

2009-02-02 Thread Don Stewart
pocmatos:
> Hi all,
> 
> Much is talked that Haskell, since it is purely functional is easier
> to be verified. However, most of the research I have seen in software
> verification (either through model checking or theorem proving)
> targets C/C++ or subsets of these. What's the state of the art of
> automatically verifying properties of programs written in Haskell?
> 

State of the art is translating subsets of Haskell to Isabelle, and
verifying them. Using model checkers to verify subsets, or extracting
Haskell from Agda or Coq.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: 1,000 packages, so let's build a few!

2009-02-02 Thread Don Stewart
ndmitchell:
> Hi
> 
> > So actually just having more Windows users subscribed to cabal-devel and
> > commenting on tickets would be very useful, even if you do not have much
> > time for hacking.
> 
> I believe that as soon as a Windows user starts doing that you'll
> start asking them for patches :-)
> 
> There are a number of reasons that we have fewer Windows developers:
> 
> * Some of it comes down to social reasons - for some reason it seems
> to be socially acceptable to belittle Windows (and Windows users) on
> the Haskell mailing lists and #haskell.
> 
> * Some of it comes down to technical issues - for example not having
> cabal.exe bundled with GHC 6.10.1 on Windows was a massive mistake
> (although I've heard everyone argue against me, I've not yet heard a
> Windows person argue against me).
> 
> * Part of it comes down to most developers not being Windows people.
> 
> * A little is because Windows is a second class citizen even in the
> libraries, my OS is NOT mingw32 - mingw32 is not even an OS, its a
> badly typed expression! How would you like it if your OS was listed as
> Wine? Things like this tell me that Haskell isn't Windows friendly, at
> best its windows tolerant.
> 
> * Things like Gtk2hs, which Windows users need building for them,
> don't release in sync with GHC, which makes it hard to use.
> 
> * Windows machines don't usually have a C compiler, and have a very
> different environment - while the rest of the world is starting to
> standardise.
> 
> I gave up on fighting the fight when people decided not to bundle
> cabal.exe with Windows - and now I'm too busy with my day job... Now
> I'd say Duncan is the most vocal and practical Windows developer, even
> overlooking the fact he doesn't run Windows.

GHC doesn't bundle cabal-install on any system.

What is needed is not for the GHC team to be doing Windows platform
packages, but for the Windows Haskell devs to build their own system, as
happens on all the Unices.

Take GHC's release, wrap it up with native installers, throw in useful
libraries and executables like cabal. Done.

It's not the GHC compiler team's job to build distro-specific bundles. 

So, wind...@haskell.org anyone? Get the wiki going, get the set of tasks
created.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: 1,000 packages, so let's build a few!

2009-02-02 Thread Don Stewart
jwlato:
> Duncan Coutts wrote:
> >
> > Some are trivial and should be done away with. For example the ones that
> > just check if a C header / lib is present are unnecessary (and typically
> > do not work correctly). The next point release of Cabal can do these
> > checks automatically, eg:
> >
> >Configuring foo-1.0...
> >cabal: Missing dependencies on foreign libraries:
> >* Missing header file: foo.h
> >* Missing C libraries: foo, bar, baz
> >This problem can usually be solved by installing the system
> >packages that provide these libraries (you may need the "-dev"
> >versions). If the libraries are already installed but in a
> >non-standard location then you can use the flags
> >--extra-include-dirs= and --extra-lib-dirs= to specify where
> >they are.
> 
> Thank you!  Thank you!  Thank you!
> 
> For those of us who want to write cross-platform (i.e. Windows)
> bindings to C libraries, this is great news.

It will be important now to report back to package authors that these
configure-time portability tests are no longer needed.

A start would be to have hackage warn, I suppose.
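
For binding authors the nice part is that declaring the C dependency in
the .cabal file is enough for Cabal to do the check itself, e.g.
(illustrative names only):

    library
      includes:        foo.h
      extra-libraries: foo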

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Complex C99 type in Foreign

2009-02-01 Thread Don Stewart
briqueabraque:
> >>Are there plans to include C99 'complex' type
> >>in Foreign, maybe as CFloatComplex, CDoubleComplex
> >>and CLongDoubleComplex? This seems an easy addition
> >>to the standard and would allow binding of a few
> >>interesting libraries, like GSL.
> >
> >A separate library for new types to add to Foreign would be the easiest
> >way forward. Just put the foreign-c99 package on Hackage?
> 
> As far as I know, this is not possible. (I tried for
> a long time to do that, actually, until I reallized
> it could not be done.)
> 
> If it's not true, i.e., I could actually have some
> arbitrary sized parameter as argument to a function
> or as a return value (and not its pointer), what
> did I saw wrong? I understand only Foreign.C.C*
> types or forall a. => Foreign.Ptr.Ptr a can be used
> like that.

Oh, you mean you need to teach the compiler about unboxed complex types?
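
In the meantime the usual workaround is to pass the value by reference as
two doubles, since the FFI has no by-value complex type. A sketch, with a
hypothetical C helper c_cnorm:

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign
    import Foreign.C.Types

    -- double c_cnorm(const double _Complex *z);  /* hypothetical C side */
    foreign import ccall unsafe "c_cnorm"
        c_cnorm :: Ptr CDouble -> IO CDouble

    cnorm :: Double -> Double -> IO Double
    cnorm re im =
        allocaArray 2 $ \p -> do
            -- C99 lays out a complex as real part then imaginary part
            pokeArray p [realToFrac re, realToFrac im]
            fmap realToFrac (c_cnorm p)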

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Takusen 0.8.3 install problems

2009-02-01 Thread Don Stewart
alistair:
> >> > You can probably just remove the Setup.lhs and build with defaults
> >> > (we're doing that at galois, we use Takusen).
> >> >
> >> > -- Don
> 
> I'm surprised this works, unless you also change the imports of
> Control.Exception to Control.OldException. The new exception module is
> part of the reason it's taking me a while to port to 6.10.1. Nearly
> there though; only the haddock failures to fix and then we can
> release.

build-depends: base < 4
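
(That pins the old exception API via base-3. The other route, if you want
to build against both, is CPP on Cabal's MIN_VERSION_base macro -- a
sketch, assuming a Cabal build that generates the macro:)

    {-# LANGUAGE CPP #-}
    #if MIN_VERSION_base(4,0,0)
    import qualified Control.OldException as E  -- old API lives here in base-4
    #else
    import qualified Control.Exception as E
    #endif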
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Complex C99 type in Foreign

2009-02-01 Thread Don Stewart
briqueabraque:
> Hi,
> 
> Are there plans to include C99 'complex' type
> in Foreign, maybe as CFloatComplex, CDoubleComplex
> and CLongDoubleComplex? This seems an easy addition
> to the standard and would allow binding of a few
> interesting libraries, like GSL.
> 

A separate library for new types to add to Foreign would be the easiest
way forward. Just put the foreign-c99 package on Hackage?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-01-31 Thread Don Stewart
sebastian.sylvan:
> 
> 
> --
> From: "Don Stewart" 
> Sent: Saturday, January 31, 2009 8:35 PM
> To: "Andrew Coppin" 
> Cc: 
> Subject: Re: [Haskell-cafe] 1,000 packages, so let's build a few!
> 
> >andrewcoppin:
> >>In celebration of Hackage reachin over 1,000 unique packages, I decided
> >>that I would re-visit the problem of attempting to build them on Windows.
> >>
> >>Ah yes, I already have the tarball for stream-fusion-0.1.1, but I see
> >>that the latest release is 0.1.2.1. (Unfortunately, there doesn't appear
> >>to be any way to determine what the difference is between the two
> >>versions...)
> >
> >the true way to install all of hackage is:
> >
> >   cabal install $(all my packages)
> >
> >where cabal install solves it all.
> 
> If that had actually worked it would be great I must say that my own 
> not-so-random sampling (basically "ooh that looks cool, let's try it") is 
> probably at a 20% success rate or so... It's great when it does work, but 
> it usually doesn't.
> 
> Usually it fails because some part of it tries to run unix shell scripts 
> (and I try to avoid things which seem like they're unix only, even though 
> they're no easy way of determining this, so these are packages that at 
> least to me seemed like they could be perfectly portable if not for 
> unix-specific installation procedures). 
> 

Windows people need to set up a wind...@haskell.org to sort out their
packaging issues, like we have for debian, arch, gentoo, freebsd and
other distros.

Unless people take action to get things working well on their platform,
it will be slow going.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Takusen 0.8.3 install problems

2009-01-31 Thread Don Stewart
build-type: Simple
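
(That goes in the Takusen.cabal file. Alternatively, keep a stock
Setup.hs that just runs the default build -- a minimal sketch:)

    import Distribution.Simple
    main = defaultMain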

praki.prakash:
> Don,
> 
> Thanks for the hint. I removed Setup.hs and tried "cabal build".  I
> get an error that build type is custom and Setup.lhs is missing. What
> is the magical incantation needed to do the default build?
> 
> Thanks
> Praki
> 
> On Sat, Jan 31, 2009 at 12:30 PM, Don Stewart  wrote:
> > praki.prakash:
> >> I am trying to install Takusen 0.8.3 with ghc 6.10.1 on Ubuntu 8.04
> >> (same issue on Win XP as well). I get the following complaint from
> >> cabal.
> >>
> >>Module
> >>`Distribution.PackageDescription'
> >>does not export
> >>`writeHookedBuildInfo'
> >> cabal: Error: some packages failed to install:
> >> Takusen-0.8.3 failed during the configure step. The exception was:
> >> exit: ExitFailure 1
> >
> > You can probably just remove the Setup.lhs and build with defaults
> > (we're doing that at galois, we use Takusen).
> >
> > -- Don
> >
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-01-31 Thread Don Stewart
bugfact:
>  the true way to install all of hackage is:
> 
> cabal install $(all my packages)
> 
>  where cabal install solves it all.
> 
>not really :) e.g. my output on a Windows Vista system with GHC 6.10.1
>cabal install sdl
>Resolving dependencies...
>Downloading SDL-0.5.4...
>[1 of 1] Compiling Main (
>C:\Users\Peter\AppData\Local\Temp\TMPSDL-0.5.4\SDL-0.5.4\Setup.lhs,
>C:\Users\Peter
>\AppData\Local\Temp\TMPSDL-0.5.4\SDL-0.5.4\dist\setup\Main.o )
>C:\Users\Peter\AppData\Local\Temp\TMPSDL-0.5.4\SDL-0.5.4\Setup.lhs:2:2:
>Warning: In the use of `defaultUserHooks'
> (imported from Distribution.Simple):
> Deprecated: "Use simpleUserHooks or autoconfUserHooks, unless
>you need Cabal-1.2
> compatibility in which case you must stick with
>defaultUserHooks"
>Linking
>
> C:\Users\Peter\AppData\Local\Temp\TMPSDL-0.5.4\SDL-0.5.4\dist\setup\setup.exe
>...
>Warning: defaultUserHooks in Setup script is deprecated.
>Configuring SDL-0.5.4...
>setup.exe: sh: runGenProcess: does not exist (No such file or directory)
>cabal: Error: some packages failed to install:
>SDL-0.5.4 failed during the configure step. The exception was:
>exit: ExitFailure 1

Isn't this missing C library dependencies, which cabal head now warns
about?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-01-31 Thread Don Stewart
andrewcoppin:
> In celebration of Hackage reachin over 1,000 unique packages, I decided 
> that I would re-visit the problem of attempting to build them on Windows.
> 
> Ah yes, I already have the tarball for stream-fusion-0.1.1, but I see 
> that the latest release is 0.1.2.1. (Unfortunately, there doesn't appear 
> to be any way to determine what the difference is between the two 
> versions...)

the true way to install all of hackage is:

cabal install $(all my packages)

where cabal install solves it all.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Takusen 0.8.3 install problems

2009-01-31 Thread Don Stewart
praki.prakash:
> I am trying to install Takusen 0.8.3 with ghc 6.10.1 on Ubuntu 8.04
> (same issue on Win XP as well). I get the following complaint from
> cabal.
> 
>Module
>`Distribution.PackageDescription'
>does not export
>`writeHookedBuildInfo'
> cabal: Error: some packages failed to install:
> Takusen-0.8.3 failed during the configure step. The exception was:
> exit: ExitFailure 1

You can probably just remove the Setup.lhs and build with defaults
(we're doing that at galois, we use Takusen).

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

