Re: [Haskell-cafe] Battling laziness
Hello Joel, Friday, December 16, 2005, 2:44:00 PM, you wrote:

JR> I have a huge space leak someplace and I suspect this code. The
JR> SrvServerInfo data structure is something like 50K compressed or
JR> uncompressed byte data before unpickling. My thousands of bots issue
JR> this request at least once and I almost run out of memory with 100
JR> bots on a 1Gb machine on FreeBSD. Do I need deepSeq somewhere below?

1. Try using a 3-generation GC; this may greatly reduce GC times.

2. Manually add {-# UNPACK #-} to all simple fields (Ints, Words, Chars). Don't use "-funbox-strict-fields", because it can unbox whole structures instead of sharing them.

3. In my experience it is enough to mark all fields in heavily used structures as strict and then evaluate the top level of such a structure (using "return $! x"); after that the whole structure is fully evaluated. But when you use a list, you must either evaluate the whole list manually (using "return $! length xs") or use DeepSeq, as you suggest, because lists remain unevaluated despite all these strictness annotations.

4. You can try packed strings or unboxed arrays instead of lists. In my experience this can greatly reduce GC time, simply because such arrays don't need to be scanned on each GC.

5. What is the "uncompress" function here? Can I see its code?

6. Why does EACH bot receive and process this 50K structure itself? Can't that be done only once for all of them?

JR> do let tables = filter (tableMatches filters) $ activeTables cmd
JR>        ids = map tiTableID tables
JR>    return $! Eat $! Just $! Custom $! Tables $! ids

Here `ids` will definitely be unevaluated, except for the first element. Add "return $! length ids" before the last line.

PS: Last week I also fought against the memory requirements of my own program; as a result, they were reduced 3-4 times :)

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
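A minimal sketch of point 3: the record and its fields below are invented for illustration (they are not Joel's types), but the pattern is the one described above -- strict fields plus forcing each list element. For nested lazy structures you would still want something like DeepSeq.

    module Main where

    import Data.List (foldl')

    data Item = Item
        { itemId  :: {-# UNPACK #-} !Int   -- simple field, unpacked
        , itemTag :: !Char                 -- strict field
        } deriving Show

    -- "return $! length xs" forces only the spine of the list; seq'ing each
    -- element while counting forces the elements too, and the strict fields
    -- then take care of the Int and Char inside each record.
    forceSpineAndElems :: [Item] -> IO [Item]
    forceSpineAndElems xs = do
        _ <- return $! foldl' (\n x -> x `seq` n + 1) (0 :: Int) xs
        return xs

    main :: IO ()
    main = do
        items <- forceSpineAndElems [Item n 'a' | n <- [1 .. 3]]
        print items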
Re[2]: [Haskell-cafe] Substring replacements
Hello Branimir, Friday, December 16, 2005, 5:36:47 AM, you wrote:

BM> I've also performed tests on dual Xeon linux box and results are

Just to let you know - GHC doesn't use Pentium 4 hyperthreading, multiple CPUs or multiple cores in these tests. The only way to make GHC use multiple processors is to use the 6.5 beta version, compile with "-smp" and explicitly fork several threads.

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
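A tiny illustration of "explicitly fork several threads". The flag names are as of the GHC 6.5 snapshots mentioned above (details may differ on other versions), and the workload is an invented toy:

    -- Build with -smp (GHC 6.5 snapshot; later GHCs use -threaded) and run
    -- with +RTS -N2 or more so the forked threads can run on separate CPUs.
    import Control.Concurrent
    import Control.Monad (replicateM_)

    main :: IO ()
    main = do
        done <- newEmptyMVar
        mapM_ (\n -> forkIO (print (sum [1 .. 1000000 * n]) >> putMVar done ()))
              [1 .. 4 :: Integer]
        replicateM_ 4 (takeMVar done)   -- wait for all workers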
Re[4]: [Haskell-cafe] Optimizing a high-traffic network architecture
Hello Joel, Friday, December 16, 2005, 3:22:46 AM, you wrote:

>> TZ> You don't have to check "every few seconds". You can determine
>> TZ> exactly how much you have to sleep - just check the timeout/event with
>> TZ> the lowest ClockTime.

JR> The scenario above does account for the situation that you are
JR> describing.

To be exact: Tomasz's variant doesn't work properly in this situation, but your code (which does not use this technique) is OK.

>> i repeat my thought - if you have one or several fixed waiting periods
>> (say, 1 sec, 3 sec and 1 minute), then you don't even need to sort
>> requests - just use one waking thread for each waiting period and
>> requests will arrive already sorted. in this way, you can really
>> sleep as Tomasz suggests

JR> I do not have several fixed waiting periods, they are determined by
JR> the user.

By the user of the library? By the poker player? What exactly do you mean?

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Timers
Hello Joel, Thursday, December 15, 2005, 2:42:03 PM, you wrote:

JR> Here's the latest and greatest version put together with Einar's help.

Let's analyze the execution of this thread. It has 2000-6000 events in its Map, with expiration times in the range 0-60 sec. It sleeps half a second, then wakes and finds/deletes the minimal values in the map until all events falling within this half-second have been performed, and then it sleeps again.

If half-second precision in performing events is acceptable for you, why not use a solution that holds all events for a given second in one list? You can use an array of such lists, a map of lists, or even an ordered list of lists - it will contain only 60 elements at any time.

The most advanced solution would be an array used as a round buffer, whose size equals the maximal event timing.

I still don't understand why the timings of your events can differ at all. You keep telling us that in each run the timing is constant - 9 min, 1 min.

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
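A rough sketch of the "round buffer" idea, assuming one-second resolution and a fixed maximum delay; all the names are invented and none of this is Joel's timer code:

    module TimerWheel where

    import Data.Array.IO
    import Data.IORef

    data Wheel = Wheel
        { wheelSlots :: IOArray Int [IO ()]  -- pending actions, one slot per second
        , wheelNow   :: IORef Int            -- current slot index
        , wheelSize  :: Int                  -- maximal event timing, in seconds
        }

    newWheel :: Int -> IO Wheel
    newWheel size = do
        slots <- newArray (0, size - 1) []
        now   <- newIORef 0
        return (Wheel slots now size)

    -- Register an action to fire in 'delay' seconds (delay < wheelSize).
    addEvent :: Wheel -> Int -> IO () -> IO ()
    addEvent (Wheel slots now size) delay action = do
        i  <- readIORef now
        let slot = (i + delay) `mod` size
        xs <- readArray slots slot
        writeArray slots slot (action : xs)

    -- Called once per second by a single waking thread: run and clear the
    -- current slot, then advance around the buffer.
    tick :: Wheel -> IO ()
    tick (Wheel slots now size) = do
        i  <- readIORef now
        xs <- readArray slots i
        writeArray slots i []
        sequence_ (reverse xs)
        writeIORef now ((i + 1) `mod` size)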
Re[2]: [Haskell-cafe] Optimizing a high-traffic network architecture
Hello Simon, Thursday, December 15, 2005, 4:53:27 PM, you wrote:

SM> The 3k threads are still GC'd, but they are not actually *copied* during
SM> GC.

SM> It'll increase the memory overhead per thread from 2k (1k * 2 for
SM> copying) to 4k (4k block, no overhead for copying).

Simon, why not include this in the "base package"? Either change something so that 1k thread stacks are not copied during GC, or at least increase the default stack size. This would improve the performance of other heavily threaded programs, and the memory expense seems not so great.

-- Best regards, Bulat  mailto:[EMAIL PROTECTED]

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
Lemmih wrote:
> On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote:
> > > threadDelay (10*10^6)
>
> 10*10^6 == 10e6, btw.

But the types are different. For the sake of completeness:

(10*10^6, 10*10^^6, 10*10**6, 10e6) :: (Num a, Fractional b, Floating c, Fractional d) => (a, b, c, d)

threadDelay wants an Int, so 10e6 won't work.

Bertram ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
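A small self-contained check of the point about threadDelay's argument type (nothing here beyond Control.Concurrent and the Prelude):

    import Control.Concurrent (threadDelay)

    main :: IO ()
    main = do
        threadDelay (10 * 10 ^ (6 :: Int))    -- the literal defaults to Int here
        -- threadDelay 10e6                   -- would not typecheck: 10e6 is Fractional
        threadDelay (round (10e6 :: Double))  -- a floating literal needs an explicit conversion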
Re: [Haskell-cafe] Battling laziness
On Dec 16, 2005, at 3:47 PM, Simon Marlow wrote:

Oh, and it looks like you aren't doing -auto-all, that would probably be helpful.

Apparently, when you give -p to configure (with Cabal 1.1+) it does add -prof but does not add -auto-all. I added this to my cabal file and my profiling suddenly bloomed! Now I really have something to chew on!

COST CENTRE        MODULE              %time  %alloc
byteArrayFromPtr   Script.Array         34.1   34.7
readBits           Script.Array         32.3   36.2
appU_endian        Script.Endian         5.7    3.2
sequ               Script.Pickle         5.3    3.7
emptyByteArray     Script.Array          5.3    4.5
appU_num           Script.Pickle         3.6    4.0
copyMArray         Script.Array          2.4    2.7
bytearray          Script.Pickle         1.9    2.6
appU_wstr          Script.Endian         1.7    0.8
withByteArray      Script.Array          1.4    1.7
byteSize           Script.Pickle         1.1    0.9
puTableInfo        Script.PicklePlus     0.6    1.3

It makes me wonder how I managed to convert pickling to mutable arrays from [Word8] without complete profiling info!

The memory hogs are at http://wagerlabs.com/randomplay.autohc.ps

Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On Fri, Dec 16, 2005 at 07:02:03PM +0100, Lemmih wrote: > On 12/16/05, Marcin Tustin <[EMAIL PROTECTED]> wrote: > > On Fri, Dec 16, 2005 at 06:51:12PM +0100, Lemmih wrote: > > > On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote: > > > > > threadDelay (10*10^6) > > > > > > 10*10^6 == 10e6, btw. > > > > 10e7. > > Prelude> 10*10^6 == 10e6 > True > Prelude> 10*10^6 == 10e7 > False > > 10*10^6 == 1.0e7. Err yes, that's obviously correct. I think I must have misread "10e6" as "1e6", and then added to the superscript. D'oh. > -- > Friendly, > Lemmih ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On 12/16/05, Marcin Tustin <[EMAIL PROTECTED]> wrote: > On Fri, Dec 16, 2005 at 06:51:12PM +0100, Lemmih wrote: > > On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote: > > > > threadDelay (10*10^6) > > > > 10*10^6 == 10e6, btw. > > 10e7. Prelude> 10*10^6 == 10e6 True Prelude> 10*10^6 == 10e7 False 10*10^6 == 1.0e7. -- Friendly, Lemmih ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On Fri, Dec 16, 2005 at 06:51:12PM +0100, Lemmih wrote: > On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote: > > > threadDelay (10*10^6) > > 10*10^6 == 10e6, btw. 10e7. > -- > Friendly, > Lemmih > ___ > Haskell-Cafe mailing list > Haskell-Cafe@haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote: > > threadDelay (10*10^6) 10*10^6 == 10e6, btw. -- Friendly, Lemmih ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On Fri, Dec 16, 2005 at 06:25:00PM +0100, Lemmih wrote: > If you move 'putStr $ show id' down below the barrier then it'll > behave like you want it to. However, the printed sequence may sometimes differ from expected because of races. Best regards Tomasz -- I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On 12/16/05, Peter Eriksen <[EMAIL PROTECTED]> wrote: > Greeting, > > Something is not working for me, and I could use some more eyes on this. > What I'm trying to accomplish is to implement a simpel barrier for ten > worker threads (id = 0..9) using STM. With or without the barrier, the > program produces an unordered interleaving of the output from the > workers. Here's what I get with the program below: > > $ ghc --make Main.lhs > $ a.out > 0134568027913457896012579026813423904671238455702468159367839684012570279134685049137825901642375689134057892610462578903156012389473268457910267801345923924567801304689235714013679458256702465913878... > > And here's what I get without the line "atomically $ barrier tv id": > > $ a.out > 1249056782934567210845619720538461975203698469175203469850123485076912348579406123894625738942106381592740631859274163841092315768491302578416930728254169302785693024917853029640217390856490... > > The first run should've been something like: > 012345678901234567890123456789012345... > since each worker thread 0..9 should write its id out once per > iteration, > and the workers should iterate in sync. > > Here's the code: > > > module Main where > > > > import Control.Concurrent > > import Control.Concurrent.STM > > import System.Random > > > > worker :: Int -> TVar Int -> IO () > > worker id tv = do > > sleepingTime <- randomRIO (0, 5) > > threadDelay sleepingTime > > putStr $ show id You're printing the ID after a random sleep. Shouldn't be a big surprise that the output will be shuffled. > > atomically $ barrier tv id If you move 'putStr $ show id' down below the barrier then it'll behave like you want it to. > > worker id tv > > Each worker sleeps for some time, then outputs its id and waits at > the barrier for all the other workers to finish their sleep+output. > > > barrier :: TVar Int -> Int -> STM () > > barrier tv id = do > > passed <- readTVar tv > > if (passed `mod` 10 == id) > > then writeTVar tv (passed+1) > > else retry > > The barrier is simply a global variable, tv, which holds the number of > times any worker passed the barrier. Now, a worker may only pass the > barrier iff the worker with an id one less just passed, or else it > should block. > > > main :: IO () > > main = do > > tv <- atomically $ newTVar 0 > > for [0..9] $ \i -> forkIO $ worker i tv > > threadDelay (10*10^6) > > > > for = flip mapM_ -- Friendly, Lemmih ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Barrier implementation
On Fri, Dec 16, 2005 at 05:46:33PM +0100, Peter Eriksen wrote:
> Here's the code:
>
> > module Main where
> >
> > import Control.Concurrent
> > import Control.Concurrent.STM
> > import System.Random
> >
> > worker :: Int -> TVar Int -> IO ()
> > worker id tv = do
> >     sleepingTime <- randomRIO (0, 5)
> >     threadDelay sleepingTime
> >     putStr $ show id
> >     atomically $ barrier tv id
> >     worker id tv

You've got the barrier after the putStr, so there's nothing to make the first ten putStrs be in order. I think you need a non-updating barrier before the putStr and then an updating function after the putStr (to tell the next worker that it is free to print).

> barrier :: TVar Int -> Int -> STM ()
> barrier tv id = do
>     passed <- readTVar tv
>     if (passed `mod` 10 == id)
>        then return ()
>        else retry
>
> move_along :: TVar Int -> Int -> STM ()
> move_along tv id = do
>     passed <- readTVar tv
>     writeTVar tv (passed+1)
>
> worker :: Int -> TVar Int -> IO ()
> worker id tv = do
>     sleepingTime <- randomRIO (0, 5)
>     threadDelay sleepingTime
>     atomically $ barrier tv id
>     putStr $ show id
>     atomically $ move_along tv id
>     worker id tv

-- David Roundy ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
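For completeness, here is the suggestion above assembled into one runnable module; main and the for helper come from Peter's original post (quoted further down in this thread), the rest follows the sketch:

    module Main where

    import Control.Concurrent
    import Control.Concurrent.STM
    import System.Random

    -- wait, without updating, until it is our turn
    barrier :: TVar Int -> Int -> STM ()
    barrier tv id = do
        passed <- readTVar tv
        if passed `mod` 10 == id
           then return ()
           else retry

    -- tell the next worker it may go
    move_along :: TVar Int -> Int -> STM ()
    move_along tv _ = do
        passed <- readTVar tv
        writeTVar tv (passed + 1)

    worker :: Int -> TVar Int -> IO ()
    worker id tv = do
        sleepingTime <- randomRIO (0, 5)
        threadDelay sleepingTime
        atomically $ barrier tv id    -- block until the previous id has printed
        putStr $ show id
        atomically $ move_along tv id -- release the next worker
        worker id tv

    main :: IO ()
    main = do
        tv <- atomically $ newTVar 0
        for [0 .. 9] $ \i -> forkIO $ worker i tv
        threadDelay (10 * 10 ^ (6 :: Int))
      where
        for = flip mapM_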
[Haskell-cafe] Barrier implementation
Greeting, Something is not working for me, and I could use some more eyes on this. What I'm trying to accomplish is to implement a simpel barrier for ten worker threads (id = 0..9) using STM. With or without the barrier, the program produces an unordered interleaving of the output from the workers. Here's what I get with the program below: $ ghc --make Main.lhs $ a.out 0134568027913457896012579026813423904671238455702468159367839684012570279134685049137825901642375689134057892610462578903156012389473268457910267801345923924567801304689235714013679458256702465913878... And here's what I get without the line "atomically $ barrier tv id": $ a.out 1249056782934567210845619720538461975203698469175203469850123485076912348579406123894625738942106381592740631859274163841092315768491302578416930728254169302785693024917853029640217390856490... The first run should've been something like: 012345678901234567890123456789012345... since each worker thread 0..9 should write its id out once per iteration, and the workers should iterate in sync. Here's the code: > module Main where > > import Control.Concurrent > import Control.Concurrent.STM > import System.Random > > worker :: Int -> TVar Int -> IO () > worker id tv = do > sleepingTime <- randomRIO (0, 5) > threadDelay sleepingTime > putStr $ show id > atomically $ barrier tv id > worker id tv Each worker sleeps for some time, then outputs its id and waits at the barrier for all the other workers to finish their sleep+output. > barrier :: TVar Int -> Int -> STM () > barrier tv id = do > passed <- readTVar tv > if (passed `mod` 10 == id) > then writeTVar tv (passed+1) > else retry The barrier is simply a global variable, tv, which holds the number of times any worker passed the barrier. Now, a worker may only pass the barrier iff the worker with an id one less just passed, or else it should block. > main :: IO () > main = do > tv <- atomically $ newTVar 0 > for [0..9] $ \i -> forkIO $ worker i tv > threadDelay (10*10^6) > > for = flip mapM_ The main thread just initializes the pass counter, starts 10 worker threads, and waits for ten seconds. I'd like to hear some comments on the approach, and perhaps even some insight into why it doesn't work. Regards, Peter Eriksen ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
Most of the samples in randomplay.hp look like this:

BEGIN_SAMPLE 1.76
(170)Script.Array.CAF 8
(154)Script.CmdType.CAF 64
(165)Script.PickleCmd.CAF 760
(197)Script.PokerClient.CAF 8
(156)Script.Command.CAF 24
(282)Main.CAF 285752
(163)Script.Pickle.CAF 16
(311)/launchScripts#8/laun... 93464
END_SAMPLE 1.76

I'm pickling to/from unboxed arrays of Word8:

type MutByteArray = IOUArray Int Word8
type ByteArray = UArray Int Word8
type Index = Int

CmdType is (Word8, Word8) and tells me what pickler to use. PickleCmd looks like this:

puCommand :: (Word8, Word8) -> PU Command
puCommand (116, 2) =
    sequ tableID endian32 (\a ->
    sequ password wstring (\b ->
    sequ localIP wstring (\c ->
    sequ affiliateID (list endian32 byte) (\d ->
    lift $ ClConnectGame a b c d))))
puCommand (36, 1) = ...

Command has about 250 constructors for the different records that can be sent/received. These records can be somewhat nested and have lists of other records inside them, like SrvServerInfo. Could this be where the polymorphism is coming from, i.e. are the "*" entries my Commands being unpickled? Fields in Command all have strictness annotations, btw.

Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
On Dec 16, 2005, at 3:47 PM, Simon Marlow wrote:

I'm a bit mystified though, because looking at the code for Script.Array, all your arrays are unboxed, so I don't know where all the Word8s and Ints are coming from. It might be useful to do "+RTS -hyWord8 -hc" to see who generated the Word8s.

Done. http://wagerlabs.com/randomplay.word8.ps

{-# SCC "launchScripts#8" #-}launch host $! script (bot, bot, affid)

The xx entries are Word8. affiliateIDs is all Word8 and looks like this:

affiliateIDs = [ [xx,xx,xx,xx,xx,xx,xx], 99 more like the above ]

I guess the whole affid list of lists is being pulled into script? How do I prevent this?

launchScripts :: Int -> NamePick -> TMVar (ClockTime, (Event CustomEvent)) -> IO ()
launchScripts 0 _ _ = return ()
launchScripts n pick mbx = do
    n' <- case pick of
            Random   -> {-# SCC "launchScripts#1" #-}liftIO $ randomRIO (0, 8500)
            Straight -> {-# SCC "launchScripts#2" #-}return n
    let botnum = {-# SCC "launchScripts#3" #-}firstbot + n'
        bot    = {-# SCC "launchScripts#4" #-}"m" ++ show botnum
        cell   = {-# SCC "launchScripts#5" #-}botnum `mod` 100 - 1
        affid  = {-# SCC "launchScripts#6" #-}if cell == -1
                   then [xx,xx,xx,xx,xx,xx,xx]
                   else affiliateIDs !! cell
    {-# SCC "launchScripts#7" #-}trace_ $ "Launching bot..." ++ show n
    {-# SCC "launchScripts#8" #-}launch host $! script (bot, bot, affid)
    {-# SCC "launchScripts#9" #-}liftIO $ sleep_ 1000
    -- quit if we have been told to
    empty <- {-# SCC "launchScripts#10" #-}atomically $ isEmptyTMVar mbx
    {-# SCC "launchScripts#11" #-}unless empty $ trace_ "launchScripts: Done, exiting"
    {-# SCC "launchScripts#12" #-}when empty $ launchScripts (n - 1) pick mbx

-- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
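One experiment worth trying here (only an experiment, not a diagnosis -- affiliateIDs is a top-level CAF and may well be retained for other reasons): force the selected sub-list before it is captured by the closure handed to launch, so the closure holds evaluated bytes rather than a thunk that references the whole table. The helper names below are invented:

    import Data.Word (Word8)

    -- A cheap, type-specific deepSeq for a flat list of bytes.
    forceWord8s :: [Word8] -> [Word8]
    forceWord8s xs = go xs `seq` xs
      where
        go []       = ()
        go (y : ys) = y `seq` go ys

    -- At the call site, roughly:
    --   let affid' = forceWord8s affid
    --   affid' `seq` launch host (script (bot, bot, affid'))

    main :: IO ()
    main = print (forceWord8s [1, 2, 3])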
RE: [Haskell-cafe] Optimizing a high-traffic network architecture
On 16 December 2005 15:19, Lennart Augustsson wrote: > John Meacham wrote: >> On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote: >> >>> With 2k connections the overhead of select() is going to start to >>> be a problem. You would notice the system time going up. >>> -threaded may help with this, because it calls select() less often. >> >> >> we should be using /dev/poll on systems that support it. > > And kqueue for systems that support that. Much, much more efficient > than select. Yeah, yeah. We know. We just haven't got around to doing anything about it :-( It's actually quite fiddly to hook this up to Handles - see Einar's implementation in Network.Alt for instance. Cheers, Simon (who wished he hadn't mentioned select() again) ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
On Dec 16, 2005, at 3:47 PM, Simon Marlow wrote:

Ok, so your heap is mainly full of (a) thunks generated by something in Script.Array, (b) Word8s, and (c) Ints.

Would it be worth investigating who is holding on to them?

interesting... Word8 and Int correspond to the -hd output above, but '*' indicates that the type of the closure is polymorphic. Completely polymorphic closures like this are usually (error "something"), which is a silly thing to fill up your heap with :-)

So what do I do then? If I add cost center annotations to Script.Array, will they show up in the -hd report?

Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
On Dec 16, 2005, at 3:47 PM, Simon Marlow wrote: interesting... Word8 and Int correspond to the -hd output above, but '*' indicates that the type of the is polymorphic. Completely polymorphic closures like this are usually (error "something"), which is a silly thing to fill up your heap with :-) Hmm... I'm attaching the pickling code that I use at the end, together with a sample of how I use it to pickle/unpickle SrvServerInfo. I'm a bit mystified though, because looking at the code for Script.Array, all your arrays are unboxed, so I don't know where all the Word8s and Ints are coming from. It might be useful to do "+RTS -hyWord8 -hc" to see who generated the Word8s. I will do it. Why bother with Word8, though? Shouldn't I be looking for the polymorphic closures instead? Oh, and it looks like you aren't doing -auto-all, that would probably be helpful. I compile like this: ghc -O --make -prof -auto-all randomplay.hs -o randomplay -lssl - lcrypto -lz and run like this: ./randomplay +RTS -p -hd -hclaunchScripts#8 Did I miss -auto-all somewhere? I have Cabal 1.1.4 and I give configure the -p option which builds the profiled libraries for me. Do I need to separately give -auto-all to the compiler below ghc-options: -fglasgow-exts -Wall -threaded -fno-warn-name-shadowing Thanks, Joel {-# OPTIONS_GHC -fglasgow-exts -fth #-} module Script.Pickle where import Data.Word import Data.Int import Data.Bits import Data.Char import Data.Maybe import Data.Array.MArray import Script.Array import Control.Monad data PU a = PU { appP :: MutByteArray -> Index -> a -> IO Index, appU :: MutByteArray -> Index -> IO (a, Index), appS :: a -> IO Int } pickle :: PU a -> MutByteArray -> Index -> a -> IO Index pickle p array ix value = appP p array ix value unpickle :: PU a -> MutByteArray -> Index -> IO (a, Index) unpickle p array ix = appU p array ix sizeup :: PU a -> a -> IO Int sizeup p value = appS p value lift :: a -> PU a lift x = PU (\_ ix _ -> return ix) (\_ ix -> return (x, ix)) (\_ -> return 0) sequ :: (b -> a) -> PU a -> (a -> PU b) -> PU b sequ f pa k = PU (\array ix b -> do let a = f b pb = k a ix1 <- appP pa array ix a appP pb array ix1 b) (\array ix -> do (a, ix1) <- appU pa array ix let pb = k a appU pb array ix1) (\b -> do let a = f b pb = k a sz1 <- appS pa a sz2 <- appS pb b return $ sz1 + sz2) pair :: PU a -> PU b -> PU (a,b) pair pa pb = sequ fst pa (\ a -> sequ snd pb (\ b -> lift (a, b))) triple :: PU a -> PU b -> PU c -> PU (a, b, c) triple pa pb pc = sequ (\ (x, _, _) -> x) pa (\a -> sequ (\ (_, y, _) -> y) pb (\b -> sequ (\ (_, _, z) -> z) pc (\c -> lift (a, b, c quad :: PU a -> PU b -> PU c -> PU d -> PU (a, b, c, d) quad pa pb pc pd = sequ (\ (w, _, _, _) -> w) pa (\a -> sequ (\ (_, x, _, _) -> x) pb (\b -> sequ (\ (_, _, y, _) -> y) pc (\c -> sequ (\ (_, _, _, z) -> z) pd (\d -> lift (a, b, c, d) wrap :: (a -> b, b -> a) -> PU a -> PU b wrap (i, j) pa = sequ j pa (lift . i) unit :: PU () unit = lift () {-# SPECIALIZE num :: PU Word8 #-} {-# SPECIALIZE num :: PU Word16 #-} {-# SPECIALIZE num :: PU Word32 #-} {-# SPECIALIZE num :: PU Word64 #-} {-# SPECIALIZE num :: PU Int16 #-} {-# SPECIALIZE num :: PU Int32 #-} num :: (Integral a, Bits a) => PU a num = PU appP_num appU_num (return . 
byteSize) char :: PU Char char = wrap (fromByte, toByte) num bool :: PU Bool bool = wrap (toenum, fromenum) byte enum :: (Integral a, Bits a, Enum b) => PU a -> PU b enum pa = wrap (toenum, fromenum) pa byte :: PU Word8 byte = num short :: PU Word16 short = num uint :: PU Word32 uint = num fixlist :: PU a -> Int -> PU [a] fixlist _ 0 = lift [] fixlist pa n = wrap (\(a, b) -> a : b, \(a : b) -> (a, b)) (pair pa (fixlist pa (n - 1))) list :: (Integral a, Bits a) => PU a -> PU b -> PU [b] list pa pb = sequ (fromIntegral . length) pa (\a -> fixlist pb (fromIntegral a)) alt :: (a -> Word8) -> [PU a] -> PU a alt tag ps = sequ tag byte (((!!) ps) . fromIntegral) optional :: PU a -> PU (Maybe a) optional pa = alt tag [lift Nothing, wrap (Just, fromJust) pa] where tag Nothing = 0; tag (Just _) = 1 chunk :: Integral a => PU a -> PU ByteArray chunk pa = sequ (fromIntegral . (+ 1) . snd . bounds) pa (\a -> bytearray $ fromIntegral a) bytearray :: Int -> PU ByteArray bytearray sz = PU (\array ix a -> do let count = (snd $ bounds a) + 1
[Haskell-cafe] RE: module names
-Original Message- From: S Koray Can [mailto:[EMAIL PROTECTED]

Why not do this: name none of those modules Main.hs, and have an empty module Main.hs with only "import MainDeJour" and "main = MainDeJour.main" so you can just edit just that file. Cheers, Koray

--

Yeah, I like that approach. That saves me from having to remember which file I most recently used as main. It seems easy enough to even set it up so that "load MainDuJour" writes the file Main.hs with

import MainDuJour
main = MainDuJour.main

and then calls

ghc --make Main.hs -o mainDuJour

This will do for now, but still feels really kludgy, especially for Haskell.

-Chad ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
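A tiny script in the spirit of what Chad describes; the file and module names are invented, and the subsequent ghc --make step is left to the caller:

    -- MakeMain.hs: regenerate Main.hs to point at the module named on the
    -- command line, e.g.  runghc MakeMain.hs MainDuJour && ghc --make Main.hs -o mainDuJour
    module Main where

    import System.Environment (getArgs)

    main :: IO ()
    main = do
        [modName] <- getArgs
        writeFile "Main.hs" $ unlines
            [ "import " ++ modName
            , "main = " ++ modName ++ ".main"
            ]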
RE: [Haskell-cafe] Battling laziness
On 16 December 2005 15:23, Joel Reymont wrote:

> Looking at http://wagerlabs.com/randomplay.hd.ps I see closures
> (constructors?) in this order
>
> W8#
> I#
> :
> W16#
> stg_ap_2_upd_info

Ok, so your heap is mainly full of (a) thunks generated by something in Script.Array, (b) Word8s, and (c) Ints.

> This tells me it's something having to do with array code. I'm
> attaching the Script.Array module at the end. This report does not
> tell me who is retaining the data, though.
>
> Looking at http://wagerlabs.com/randomplay.hy.ps I see types ordered
> like this
>
> *
> Word8
> Int
> ->*
> []
> Char
> Word16
> TableInfo

interesting... Word8 and Int correspond to the -hd output above, but '*' indicates that the type of the closure is polymorphic. Completely polymorphic closures like this are usually (error "something"), which is a silly thing to fill up your heap with :-)

I'm a bit mystified though, because looking at the code for Script.Array, all your arrays are unboxed, so I don't know where all the Word8s and Ints are coming from. It might be useful to do "+RTS -hyWord8 -hc" to see who generated the Word8s.

Oh, and it looks like you aren't doing -auto-all, that would probably be helpful.

Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
Looking at http://wagerlabs.com/randomplay.hd.ps I see closures (constructors?) in this order W8# I# : W16# stg_ap_2_upd_info This tells me it's something having to do with array code. I'm attaching the Script.Array module at the end. This report does not tell me who is retaining the data, though. Looking at http://wagerlabs.com/randomplay.hy.ps I see types ordered like this * Word8 Int ->* [] Char Word16 TableInfo What do I make of all these? This is Script.Array: -- module Script.Array where import Data.Array.IO import Data.Array.Unboxed import Foreign hiding (newArray) import Foreign.Ptr type MutByteArray = IOUArray Int Word8 type ByteArray = UArray Int Word8 type Index = Int arraySize :: HasBounds a => a Int e -> Int arraySize a = (snd (bounds a)) + 1 emptyByteArray :: Int -> IO MutByteArray emptyByteArray sz = newArray (0, sz - 1) 0 mkPureArray :: MutByteArray -> IO ByteArray mkPureArray array = freeze array copyMArray :: MutByteArray -> Index -> MutByteArray -> Index -> Int - > IO () copyMArray _ _ _ _ 0 = return () copyMArray dest ix src src_ix n = do e <- readArray src src_ix writeArray dest ix e copyMArray dest (ix + 1) src (src_ix + 1) (n - 1) copyIArray :: MutByteArray -> Index -> ByteArray -> Index -> Int -> IO () copyIArray _ _ _ _ 0 = return () copyIArray dest ix src src_ix n = do let e = src ! src_ix writeArray dest ix e copyIArray dest (ix + 1) src (src_ix + 1) (n - 1) readBits :: forall a.(Num a, Bits a) => MutByteArray -> Index -> IO a readBits array ix = readBits' array ix bitsize 0 where bitsize = bitSize (undefined :: a) readBits' _ _ 0 acc = return acc readBits' array ix count acc = do e <- readArray array ix let e' = (fromIntegral e) `shiftL` (count - 8) readBits' array (ix + 1) (count - 8) (acc + e') writeBits :: (Integral a, Bits a) => MutByteArray -> Index -> a -> IO () writeBits array ix a = writeBits' array ix (bitSize a) where writeBits' _ _ 0 = return () writeBits' array ix count = do let mask = 0xff `shiftL` (count - 8) a' = (a .&. mask) `shiftR` (count - 8) a'' = fromIntegral a' writeArray array ix a'' writeBits' array (ix + 1) (count - 8) withByteArray :: ByteArray -> (Ptr Word8 -> IO a) -> IO a withByteArray array fun = do let size = arraySize array allocaBytes size $ \ptr -> do copyBytes ptr array 0 size fun ptr where copyBytes _ _ _ 0 = return () copyBytes ptr arr ix sz = do poke ptr (arr ! ix) copyBytes (advancePtr ptr 1) arr (ix + 1) (sz - 1) byteArrayFromPtr :: Ptr Word8 -> Int -> IO MutByteArray byteArrayFromPtr ptr sz = do array <- emptyByteArray sz copyBytes array ptr 0 sz return array where copyBytes _ _ _ 0 = return () copyBytes array ptr ix n = do e <- peek ptr writeArray array ix e copyBytes array (advancePtr ptr 1) (ix + 1) (n - 1) instance Show MutByteArray where show a = show $ unsafePerformIO $ getElems a ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Optimizing a high-traffic network architecture
John Meacham wrote: On Thu, Dec 15, 2005 at 02:02:02PM -, Simon Marlow wrote: With 2k connections the overhead of select() is going to start to be a problem. You would notice the system time going up. -threaded may help with this, because it calls select() less often. we should be using /dev/poll on systems that support it. And kqueue for systems that support that. Much, much more efficient than select. -- Lennart ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
The result of ./randomplay +RTS -p -hd -hclaunchScripts#8 is at http://wagerlabs.com/randomplay.hd.ps Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
Simon, I'm approaching this methodically, as you are suggesting. I re-ran the program with -hc again and got the following. I suppose it tells me that I need to investigate launchScripts#8.

COST CENTRE        MODULE     %time  %alloc
launchScripts#8    Main        85.7   86.0
takeEmptySeat#8    Snippets     8.0    7.0
CAF                Main         4.1    5.9

{-# SCC "launchScripts#8" #-}launch host $ script (bot, bot, affid)

I added some strictness and ran again

{-# SCC "launchScripts#8" #-}launch host $! script (bot, bot, affid)

COST CENTRE        MODULE     %time  %alloc
launchScripts#8    Main        81.0   81.6
takeEmptySeat#8    Snippets    12.1    9.2
CAF                Main         5.1    8.3

Did $! make a difference of 4%? I'm running -hy -hclaunchScripts#8 now.

I ran ./randomplay +RTS -p -hy -hclaunchScripts#8, results at http://wagerlabs.com/randomplay1.tgz

results from -hc -hclaunchScripts#8 at http://wagerlabs.com/randomplay2.tgz

COST CENTRE        MODULE     %time  %alloc
launchScripts#8    Main        92.1   92.0
takeEmptySeat#8    Snippets     4.3    5.1
CAF                Main         2.1    1.9

What do the "by type" (-hy) results tell you and how should I proceed?

Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
RE: [Haskell-cafe] Battling laziness
On 16 December 2005 12:42, Joel Reymont wrote: > On Dec 16, 2005, at 12:36 PM, Simon Marlow wrote: > >> If script#9 is the cost center attached to all of your leaking heap >> data, then you're already a long way to finding the problem. It'll >> help even more to find out whether it is just unevaluated copies of >> "takeEmptySeat Holdem affid []", or something else (-hd, -hy will >> help here). Try +RTS -hy -hcscript#9, for example. >> >> One obvious thing to try is replacing the '$' before {-# SCC >> "script#9" #-} with '$!'. And similarly in takeEmptySeat. > > Let me try these and report my findings. > >> (I should say that we definitely plan to update these for STM, but >> it's not completely trivial (I checked). Volunteers definitely >> welcome). > > I volunteer! Just need some pointers on where to get started. I learn > quickly but need to be guided ;-). Plus, I need this the most, right? I was slightly mistaken: lag/drag/void profiling is pretty easy. Take a look at ghc/rts/LdvProfile.c and add relevant cases for STM objects to processHeapClosureForDead(). If you fix this up and test it we should be able to get it into 6.4.2. Retainer profiling is much harder; the code is in RetainerProfile.c/RetainerSet.c. Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Optimizing a high-traffic network architecture
On 16.12 07:03, Tomasz Zielonka wrote:
> On 12/16/05, Einar Karttunen wrote:
> > To make matters nontrivial, all the *nix variants use a different,
> > more efficient replacement for poll.
>
> So we should find a library that offers a unified
> interface for all of them, or implement one ourselves.
>
> I am pretty sure such a library exists. It should fall back to select()
> or poll() on platforms that don't have better alternatives.

network-alt has select(2), epoll, blocking and a very experimental kqueue backend (the last one is not yet committed, but I can supply patches if someone is interested).

- Einar ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
On Dec 16, 2005, at 12:36 PM, Simon Marlow wrote: If script#9 is the cost center attached to all of your leaking heap data, then you're already a long way to finding the problem. It'll help even more to find out whether it is just unevaluated copies of "takeEmptySeat Holdem affid []", or something else (-hd, -hy will help here). Try +RTS -hy -hcscript#9, for example. One obvious thing to try is replacing the '$' before {-# SCC "script#9" #-} with '$!'. And similarly in takeEmptySeat. Let me try these and report my findings. (I should say that we definitely plan to update these for STM, but it's not completely trivial (I checked). Volunteers definitely welcome). I volunteer! Just need some pointers on where to get started. I learn quickly but need to be guided ;-). Plus, I need this the most, right? Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
RE: [Haskell-cafe] Battling laziness
On 16 December 2005 12:08, Joel Reymont wrote: > -hc points to script#9 below. > > script (_, _, affid) (Custom (JoinedTable 0)) = > do {-# SCC "script#8" #-}push "takeEmptySeat" $ > {-# SCC "script#9" #-}takeEmptySeat Holdem affid [] > {-# SCC "script#10" #-}return $ Eat $ Just Go > > What takeEmptySeat does it call pickTable > > takeEmptySeat game_type _ filters Go = > do push "pickTable" $ pickTable game_type filters > return $ Eat $ Just Go It's hard to pick out the cause of a space leak from just a fragment of the program, but I'll try to give you some pointers. If script#9 is the cost center attached to all of your leaking heap data, then you're already a long way to finding the problem. It'll help even more to find out whether it is just unevaluated copies of "takeEmptySeat Holdem affid []", or something else (-hd, -hy will help here). Try +RTS -hy -hcscript#9, for example. One obvious thing to try is replacing the '$' before {-# SCC "script#9" #-} with '$!'. And similarly in takeEmptySeat. > Overall, -hc does not help me figure out where my data is being > retained. My understanding is that I need to do -hbdrag,void fo > rthat. I did not try -hd and -hy, they would only help me narrow down > the producers, right? Not necessarily; lag/drag/void only tells you about certain kinds of space leaks. It's another tool in the box, and quite often you can get away without it. Retainer profiling similarly. (I should say that we definitely plan to update these for STM, but it's not completely trivial (I checked). Volunteers definitely welcome). > My program seems to spend 70% of the time collecting garbage. Notice > the HUGE overall allocations. This is my trying to launch 4k bots > over 8 hours. Only 1k bots were launched and just 300 of those got to > play. Maybe because they did not have time with all the garbage > collection :-). Note that your GC time is inflated quite a bit due to profiling (it makes every object larger). The plan to reduce GC time is, in this order: squash space leaks, reduce allocation (to reduce GC load), and then tweak GC parameters. Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
I uploaded the full reports to http://wagerlabs.com/randomplay.tgz On Dec 16, 2005, at 11:53 AM, Simon Marlow wrote: What does ordinary heap profiling (-hc, -hd, -hy) tell you about what's in the heap? These options should work fine with STM. -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Battling laziness
-hc points to script#9 below.

script (_, _, affid) (Custom (JoinedTable 0)) =
    do {-# SCC "script#8" #-}push "takeEmptySeat" $
         {-# SCC "script#9" #-}takeEmptySeat Holdem affid []
       {-# SCC "script#10" #-}return $ Eat $ Just Go

What takeEmptySeat does is call pickTable:

takeEmptySeat game_type _ filters Go =
    do push "pickTable" $ pickTable game_type filters
       return $ Eat $ Just Go

pickTable retrieves the list of SrvServerInfo structures, etc.

Overall, -hc does not help me figure out where my data is being retained. My understanding is that I need to do -hbdrag,void for that. I did not try -hd and -hy; they would only help me narrow down the producers, right?

My program seems to spend 70% of the time collecting garbage. Notice the HUGE overall allocations. This is my trying to launch 4k bots over 8 hours. Only 1k bots were launched and just 300 of those got to play. Maybe because they did not have time with all the garbage collection :-).

The tests that I ran previously did not involve heavy network traffic, just a few very small packets. This is why I was able to get to thousands of bots in just a couple of hours and keep them there.

./randomplay +RTS -k3k -P -hc -srandomplay.gc

 95,739,560,464 bytes allocated in the heap
887,633,330,904 bytes copied during GC
    131,849,008 bytes maximum residency (8730 sample(s))

330325 collections in generation 0 (  557.40s)
  8730 collections in generation 1 (16370.05s)

248 Mb total memory in use

INIT  time     0.00s  (    0.03s elapsed)
MUT   time   783.40s  ( 1872.75s elapsed)
GC    time 16927.45s  (20075.68s elapsed)
RP    time     0.00s  (    0.00s elapsed)
PROF  time  6003.62s  ( 7058.40s elapsed)
EXIT  time     0.00s  (    0.00s elapsed)
Total time 23714.47s  (29006.86s elapsed)

%GC time      71.4%  (69.2% elapsed)   <-- isn't this awful?

Alloc rate    122,210,314 bytes per MUT second

Productivity   3.3% of total user, 2.7% of total elapsed

On Dec 16, 2005, at 11:53 AM, Simon Marlow wrote:

What does ordinary heap profiling (-hc, -hd, -hy) tell you about what's in the heap? These options should work fine with STM.

-- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
RE: [Haskell-cafe] Battling laziness
What does ordinary heap profiling (-hc, -hd, -hy) tell you about what's in the heap? These options should work fine with STM. Cheers, Simon On 16 December 2005 11:44, Joel Reymont wrote: > Folks, > > I have a huge space leak someplace and I suspect this code. The > SrvServerInfo data structure is something like 50K compressed or > uncompressed byte data before unpickling. My thousands of bots issue > this request at least once and I almost run out of memory with 100 > bots on a 1Gb machine on FreeBSD. Do I need deepSeq somewhere below? > > This is the read. > > read :: Handle -> (SSL, BIO, BIO) -> IO Command > read h _ = > do sa <- emptyByteArray 4 > hGetArray h sa 4 > (size', _) <- unpickle endian32 sa 0 > let size = fromIntegral $ size' - 4 > packet <- emptyByteArray size > hGetArray h packet size > unstuff packet 0 > > I suspect that I need to deepSeq cmd'' instead of return $! cmd'' > > unstuff :: MutByteArray -> Index -> IO Command > unstuff array ix = > do (kind, ix1) <- unpickle puCmdType array ix > (cmd', _) <- unpickle (puCommand kind) array ix1 > case cmd' of > InvalidCommand -> do fail $ "unstuff: Cannot parse " ++ > show array > SrvCompressedCommands sz bytes -> > do bytes' <- uncompress bytes (fromIntegral sz) > cmd'' <- unstuff bytes' 4 > return $! cmd'' > _ -> return cmd' > > This is where the list of active tables is converted to a table id > list of [Word32]. > > pickTable _ filters (Cmd cmd@(SrvServerInfo {})) = > do let tables = filter (tableMatches filters) $ activeTables cmd > ids = map tiTableID tables > case tables of > [] -> fail $ "pickTable: No tables found: " ++ show filters > _ -> > do pop > stoptimer "pickTable" > return $! Eat $! Just $! Custom $! Tables $! ids > > This is where the table id list of [Word32] is consumed. > > takeEmptySeat _ aff_id _ (Custom (Tables ids@(table:rest))) = > do trace 85 $ "takeEmptySeat: " ++ show (length ids) > ++ " tables found" > trace 100 $ "takeEmptySeat: tables: " ++ showTables ids > trace 85 $ "takeEmptySeat: trying table# " ++ show table > w <- get > put_ $ w { tables_to_try = rest } > push "goToTable" $ goToTable table aff_id > -- kick off goToTable > return $ Eat $ Just Go > > This is the SrvServerInfo structure. > > | SrvServerInfo >{ > activeTables :: ![TableInfo], -- Word16/ > removedTables :: ![Word32], -- Word16/ > version :: !Int32 >} > > And this is the table info itself. > > data TableInfo = TableInfo > { > tiAvgPot :: !Word64, > tiNumPlayers :: !Word16, > tiWaiting :: !Word16, > tiPlayersFlop :: !Word8, > tiTableName :: !String, > tiTableID :: !Word32, > tiGameType :: !GameType, > tiInfoMaxPlayers :: !Word16, > tiIsRealMoneyTable :: !Bool, > tiLowBet :: !Word64, > tiHighBet :: !Word64, > tiMinStartMoney :: !Word64, > tiMaxStartMoney :: !Word64, > tiGamesPerHour :: !Word16, > tiTourType :: !TourType, > tiTourID :: !Word32, > tiBetType :: !BetType, > tiCantReturnLess :: !Word32, > tiAffiliateID :: ![Word8], > tiLangID :: !Word32 > } deriving (Show, Typeable) > > Thanks, Joel ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
RE: [Haskell-cafe] +RTS -M800M
On 16 December 2005 10:05, Joel Reymont wrote: > I'm trying to restrict GHC to 800Mb of heap at runtime by passing in > +RTS -M800M, the machine has 1Gb of memory and top shows free > physical memory dropping below 175Mb. I suppose I'm missing something > obvious or paying attention to the wrong statistics, Unix has a good > VM manager after all. Are my runtime options correct, though? -M800m should do more or less the right thing, but it is possible for GHC to exceed this figure by a small percentage. Rather than considering the "worst case" requirements for the next GC, GHC uses a more "average case" estimate, which sometimes ends up being wrong, but in most cases results in better utilisation of the available memory. Cheers, Simon ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Battling laziness
Folks, I have a huge space leak someplace and I suspect this code. The SrvServerInfo data structure is something like 50K compressed or uncompressed byte data before unpickling. My thousands of bots issue this request at least once and I almost run out of memory with 100 bots on a 1Gb machine on FreeBSD. Do I need deepSeq somewhere below? This is the read. read :: Handle -> (SSL, BIO, BIO) -> IO Command read h _ = do sa <- emptyByteArray 4 hGetArray h sa 4 (size', _) <- unpickle endian32 sa 0 let size = fromIntegral $ size' - 4 packet <- emptyByteArray size hGetArray h packet size unstuff packet 0 I suspect that I need to deepSeq cmd'' instead of return $! cmd'' unstuff :: MutByteArray -> Index -> IO Command unstuff array ix = do (kind, ix1) <- unpickle puCmdType array ix (cmd', _) <- unpickle (puCommand kind) array ix1 case cmd' of InvalidCommand -> do fail $ "unstuff: Cannot parse " ++ show array SrvCompressedCommands sz bytes -> do bytes' <- uncompress bytes (fromIntegral sz) cmd'' <- unstuff bytes' 4 return $! cmd'' _ -> return cmd' This is where the list of active tables is converted to a table id list of [Word32]. pickTable _ filters (Cmd cmd@(SrvServerInfo {})) = do let tables = filter (tableMatches filters) $ activeTables cmd ids = map tiTableID tables case tables of [] -> fail $ "pickTable: No tables found: " ++ show filters _ -> do pop stoptimer "pickTable" return $! Eat $! Just $! Custom $! Tables $! ids This is where the table id list of [Word32] is consumed. takeEmptySeat _ aff_id _ (Custom (Tables ids@(table:rest))) = do trace 85 $ "takeEmptySeat: " ++ show (length ids) ++ " tables found" trace 100 $ "takeEmptySeat: tables: " ++ showTables ids trace 85 $ "takeEmptySeat: trying table# " ++ show table w <- get put_ $ w { tables_to_try = rest } push "goToTable" $ goToTable table aff_id -- kick off goToTable return $ Eat $ Just Go This is the SrvServerInfo structure. | SrvServerInfo { activeTables :: ![TableInfo], -- Word16/ removedTables :: ![Word32], -- Word16/ version :: !Int32 } And this is the table info itself. data TableInfo = TableInfo { tiAvgPot :: !Word64, tiNumPlayers :: !Word16, tiWaiting :: !Word16, tiPlayersFlop :: !Word8, tiTableName :: !String, tiTableID :: !Word32, tiGameType :: !GameType, tiInfoMaxPlayers :: !Word16, tiIsRealMoneyTable :: !Bool, tiLowBet :: !Word64, tiHighBet :: !Word64, tiMinStartMoney :: !Word64, tiMaxStartMoney :: !Word64, tiGamesPerHour :: !Word16, tiTourType :: !TourType, tiTourID :: !Word32, tiBetType :: !BetType, tiCantReturnLess :: !Word32, tiAffiliateID :: ![Word8], tiLangID :: !Word32 } deriving (Show, Typeable) Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
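Since the question mentions DeepSeq: the DeepSeq module circulating on the list at the time was essentially a class like the one sketched below. The instances shown here are illustrative only and assume they live in the same module as TableInfo (so Data.Word is already in scope); the scalar fields are already strict, so only the String and list fields need walking, and a real version would also need an instance for Command itself.

    class DeepSeq a where
        deepSeq :: a -> b -> b

    instance DeepSeq Char   where deepSeq = seq
    instance DeepSeq Word8  where deepSeq = seq
    instance DeepSeq Word32 where deepSeq = seq

    instance DeepSeq a => DeepSeq [a] where
        deepSeq []       y = y
        deepSeq (x : xs) y = deepSeq x (deepSeq xs y)

    instance DeepSeq TableInfo where
        deepSeq ti = deepSeq (tiTableName ti) . deepSeq (tiAffiliateID ti)

    -- and in unstuff, instead of  return $! cmd''  :
    --   cmd'' `deepSeq` return cmd''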
[Haskell-cafe] Non-STM bounded queue
Folks,

I need to move away from STM (unfortunately) until I profile my program to my satisfaction. Profiling is somewhat crippled with STM in 6.4.1; I cannot do void and drag profiling, for example. I decided to abstract my mailboxes in a Queue module so that I could switch between STM and non-STM as needed. Please let me know what you think of this bounded queue implementation.

---

module Queue where

import Control.Monad
import Control.Concurrent

data Queue a = Queue !QSem !(Chan a)

newQ :: Int -> IO (Queue a)
newQ size = do
    sem <- newQSem size
    q <- newChan
    return $! Queue sem q

writeQ :: Queue a -> a -> IO ()
writeQ (Queue sem q) x = do
    waitQSem sem
    writeChan q x

readQ :: Queue a -> IO a
readQ (Queue sem q) = do
    x <- readChan q
    signalQSem sem
    return x

Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
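A quick producer/consumer check of the Queue module above (assuming it compiles as posted; the bound and counts are arbitrary):

    import Control.Concurrent (forkIO)
    import Control.Monad (replicateM_)
    import Queue

    main :: IO ()
    main = do
        q <- newQ 10                                      -- at most 10 unconsumed items
        _ <- forkIO (mapM_ (writeQ q) [1 .. 100 :: Int])  -- blocks whenever the queue is full
        replicateM_ 100 (readQ q >>= print)               -- frees a slot after each read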
[Haskell-cafe] +RTS -M800M
Folks, I'm trying to restrict GHC to 800Mb of heap at runtime by passing in +RTS -M800M, the machine has 1Gb of memory and top shows free physical memory dropping below 175Mb. I suppose I'm missing something obvious or paying attention to the wrong statistics, Unix has a good VM manager after all. Are my runtime options correct, though? Thanks, Joel -- http://wagerlabs.com/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
RE: Re[2]: [Haskell-cafe] Unbound threads and FFI
I've added a FAQ. Simon | -Original Message- | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of | Simon Marlow | Sent: 15 December 2005 09:26 | To: Bulat Ziganshin | Cc: Haskell-Cafe Cafe | Subject: RE: Re[2]: [Haskell-cafe] Unbound threads and FFI | | On 14 December 2005 18:04, Bulat Ziganshin wrote: | | > Hello Simon, | > | > Wednesday, December 14, 2005, 7:39:43 PM, you wrote: | > | >> Do other Haskell threads get blocked during an FFI call? | > | >> | safeunsafe | >> --+ | >> -threaded| NO YES | >> no -threaded | YES YES | > | > there is also `threadsafe` specifier. i don't understand from the docs | > what it exactly means? | > | > ghc commentary says: | > | > To signal that you want an external call to be serviced by a separate | > OS thread, you have to add the attribute threadsafe to a foreign | > import declaration, i.e., | > | > foreign import "bigComp" threadsafe largeComputation :: Int -> IO () | > | > The distinction between 'safe' and thread-safe C calls is made so that | > we may call external functions that aren't re-entrant but may cause a | > GC to occur. | > | > The threadsafe attribute subsumes safe. | | threadsafe is deprecated; it is currently exactly the same as safe. We | thought at one time that it would be useful to have the distinction, but | it turned out to be impractical to implement safe that wasn't also | threadsafe. | | Cheers, | Simon | ___ | Haskell-Cafe mailing list | Haskell-Cafe@haskell.org | http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Substring replacements
On Friday, 16 December 2005, 03:36, you wrote:
> From: Daniel Fischer <[EMAIL PROTECTED]>
> > Any improvements are welcome, certainly some of you can do much better.
>
> It is fast on my machine, except that you are using a Map for the badChar
> lookup, which is O(log n).
> I've placed this instead:
>
> badChar :: UArray Int Int
> badChar = array (0,255) ([(i,-1) | i <- [0..255]] ++ proc src 0)
>
> proc [] _ = []
> proc (s:st) i = (ord s, i) : proc st (i+1)
>
> getBc c = badChar ! ord c
>
> which gave it a significant boost: O(1) lookup.

Yes, but Char has 1114112 values, and I'm not sure whether such a large array would still be better, especially since, presumably, the Map will usually not be deeper than five layers, say. But if we restrict ourselves to extended ASCII strings, an array certainly is better. And maybe, instead of using two arrays, bmGs0 and bmGs, a mutable array (those are DiffArrays, I think -- I'll check that out) would also improve it.

> Now it's faster than the brute force method but 10% slower than KMP
> in my test.
> I've also performed tests on a dual Xeon linux box and the results are
> proportionally the same as on my Intel Windows box.
> KMP wins again: 10% better than BM and 20-30% better than the
> straightforward search, which means that KMP is well suited for
> non-indexed strings.
>
> > Cheers,
> > Daniel
> >
> > P.S. As an algorithm, I prefer KMP, it's quite elegant whereas BM is
> > somewhat fussy.
>
> Yes, BM is for indexed structures.
>
> Greetings, Bane.

Cheers,
Daniel ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
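One way to keep the O(1) lookup without a 1114112-entry table is to stick to the quoted code's Latin-1 assumption and fall back to -1 for anything outside it. A sketch only, not benchmarked:

    import Data.Array.Unboxed
    import Data.Char (ord)

    -- Last occurrence of each Latin-1 character in the pattern, -1 otherwise;
    -- accumArray lets later occurrences overwrite earlier ones.
    badCharTable :: String -> UArray Int Int
    badCharTable pat =
        accumArray (\_ i -> i) (-1) (0, 255)
                   [ (ord c, i) | (i, c) <- zip [0 ..] pat, ord c < 256 ]

    getBc :: UArray Int Int -> Char -> Int
    getBc tbl c
        | ord c < 256 = tbl ! ord c
        | otherwise   = -1

    main :: IO ()
    main = print (getBc (badCharTable "abcab") 'b')   -- 4: last 'b' in the pattern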