Re: [Haskell-cafe] Concurrency question

2005-09-06 Thread Dmitry V'yal

Donald Bruce Stewart wrote:


Can I ask if just using a `seq` works as well? That was enough (now I
recall) for the watchdog code I posted earlier.

-- Don


In my case this doesn't work:

> timeout :: DeepSeq a => Int -> IO a -> IO (Maybe a)
> timeout n t = do res <- par_io timer thr
>                  return res
>   where thr   = do res <- t
>                    return $! Just res
>         timer = do threadDelay $ n * 1000
>                    return Nothing

It's probably because the "resolve" function has the type

> resolve :: CNF -> (Bool, [(Int,Int)])

so the result needs to be sequenced more deeply than a single `seq` can manage.
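
As an aside, here is a sketch of mine (not from the thread) of forcing
such a pair by hand with plain seq, assuming the result type above; the
DeepSeq class does essentially the same thing generically:

> -- Sketch only: force both components of the pair, including every list
> -- element, before returning it.  forcePair is a hypothetical helper.
> forcePair :: (Bool, [(Int, Int)]) -> (Bool, [(Int, Int)])
> forcePair p@(b, xs) = b `seq` forceList xs `seq` p
>   where forceList []            = ()
>         forceList ((x, y) : ps) = x `seq` y `seq` forceList ps

In thr one could then write, say, let r = forcePair res in r `seq` return (Just r)
instead of return $! Just res.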


Re: [Haskell-cafe] Concurrency question

2005-09-06 Thread Juan Carlos Arevalo Baeza

Bulat Ziganshin wrote:


Hello Dmitry,

Sunday, September 04, 2005, 9:45:37 PM, you wrote:

DV> -- These useful subroutines I saw in "Tackling The Awkward Squad"

DV> timer = do threadDelay n
DV>            return Nothing

Notes from GHC/Conc.hs:

-- Note: threadDelay, threadWaitRead and threadWaitWrite aren't really functional
-- on Win32, but left in there because lib code (still) uses them (the manner
-- in which they're used doesn't cause problems on a Win32 platform though.)

Maybe the authors of "Tackling The Awkward Squad" just wrote their
subroutines for Unix. Also, you can try the -threaded option when
compiling, and try using forkOS instead of forkIO.
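
A minimal sketch of that suggestion (illustrative only, not from the
thread): build with ghc -threaded and swap forkIO for forkOS in par_io,
so both helper threads become bound OS threads.

-- Sketch only; par_os is a hypothetical variant of the poster's par_io.
import Control.Concurrent (MVar, forkOS, killThread, newEmptyMVar, putMVar, takeMVar)

par_os :: IO a -> IO a -> IO a
par_os t1 t2 = do c   <- newEmptyMVar
                  id1 <- forkOS (t1 >>= putMVar c)   -- bound thread instead of forkIO
                  id2 <- forkOS (t2 >>= putMVar c)
                  res <- takeMVar c
                  killThread id1
                  killThread id2
                  return res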
 



  I don't think bound threads work under Win32 yet.

JCAB


Re: [Haskell-cafe] Concurrency question

2005-09-05 Thread Dmitry V'yal


I believe you're just observing lazy evaluation at work.  The IO 
computation that you're forking is (return $ resolve cnf).  `resolve` is 
a pure function.  Hence the forked computation succeeds immediately--and 
the thread terminates (successfully)--without evaluating (resolve cnf).  
It isn't until the case arm that begins "Just (ans, stats) ->" that the 
result of (resolve cnf) is demanded and hence evaluation of (resolve 
cnf) begins.  But this is too late for the timeout to have the intended 
effect.


How to fix?  You need to demand (enough of) the result of (resolve cnf) 
before returning from the IO computation.  What "enough of" means 
depends on how `resolve` is written.  You may find the DeepSeq module I 
wrote (see 
http://www.mail-archive.com/haskell@haskell.org/msg15819.html) helpful.


Dean


I've just tried DeepSeq as you proposed.

> timeout :: DeepSeq a => Int -> IO a -> IO (Maybe a)
> timeout n t = do res <- par_io timer thr
>                  return res
>   where thr   = do res <- t
>                    return $!! Just res
>         timer = do threadDelay n
>                    return Nothing

Everything works perfectly now! From now on I'll pay more attention to evaluation order.

Thank you for your help and attention.


Re: [Haskell-cafe] Concurrency question

2005-09-04 Thread Dean Herington

At 9:45 PM +0400 9/4/05, Dmitry Vyal wrote:

Donald Bruce Stewart wrote:


Maybe your loop does no allocations, so the scheduler can't get in and do a
context switch.  You could put the computation in an external program, and run
it over a fork, using unix signals in the external program to kill the
computation after a period of time.


I thought about doing that, but the function is closely connected with 
the rest of the program. Running it in another process would require 
some parsing of its arguments, and I want to circumvent these 
difficulties.


Moreover, this function indeed allocates plenty of memory (it creates 
long lists), so I'm just curious to establish the reason for this 
(mis)behavior. By the way, what does "no allocations" mean, precisely?


This is the top part of the program I'm having trouble with. "resolve" is 
the sluggish function whose execution I'm trying to interrupt. It hogs 
a lot of memory, so context switching should occur regularly.


I'm new to Haskell, so probably I've just made some really stupid mistake.

Thanks a lot for your help.


I believe you're just observing lazy evaluation at work.  The IO 
computation that you're forking is (return $ resolve cnf).  `resolve` 
is a pure function.  Hence the forked computation succeeds 
immediately--and the thread terminates (successfully)--without 
evaluating (resolve cnf).  It isn't until the case arm that begins 
"Just (ans, stats) ->" that the result of (resolve cnf) is demanded 
and hence evaluation of (resolve cnf) begins.  But this is too late 
for the timeout to have the intended effect.


How to fix?  You need to demand (enough of) the result of (resolve 
cnf) before returning from the IO computation.  What "enough of" 
means depends on how `resolve` is written.  You may find the DeepSeq 
module I wrote (see 
http://www.mail-archive.com/haskell@haskell.org/msg15819.html) 
helpful.
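
As a standalone illustration (a sketch of mine, not part of the original
message): the thunk is handed over unevaluated, so the forked action
finishes immediately and the expensive work runs only when the value is
finally demanded.

import Control.Concurrent

-- Sketch only: putMVar stores the unevaluated thunk, takeMVar returns at
-- once, and the long summation runs only when print demands the result.
main :: IO ()
main = do mv <- newEmptyMVar
          _  <- forkIO $ putMVar mv (sum [1 .. 10 ^ 7 :: Integer])
          x  <- takeMVar mv   -- returns almost immediately
          print x             -- the expensive work happens here, in the main thread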


Dean




res_timeout = 100 -- time quota in microseconds

forever a = a >> forever a

main :: IO ()
main = do
    args <- getArgs
    if length args /= 1
      then usage
      else do
        axioms <- readFile (head args)
        let tree = parseInput axioms
        case tree of
          (Right exprs) -> do
              let cnf = normalize $ concatMap to_cnf exprs
              forever $ one_cycle cnf
          (Left er) -> putStr $ show er

usage = putStr "usage: resolution \n"

one_cycle :: CNF -> IO ()
one_cycle base = do
    inp <- getLine
    let lex_tree = parseInput inp
    case lex_tree of
      (Right exprs) -> run_resolution $
                         normalize $ to_cnf (Not (head exprs)) ++ base
      (Left er) -> putStr $ show er

-- Here I start a heavy computation

run_resolution :: CNF -> IO ()
run_resolution cnf = do
    res <- timeout res_timeout (return $ resolve cnf)
    case res of
      Just (ans, stats) -> do
          print stats
          print ans
      Nothing -> print "***timeout***"

-- These useful subroutines I saw in "Tackling The Awkward Squad"

par_io :: IO a -> IO a -> IO a
par_io t1 t2 = do
    c   <- newEmptyMVar :: IO (MVar a)
    id1 <- forkIO $ wrapper c t1
    id2 <- forkIO $ wrapper c t2
    res <- takeMVar c
    killThread id1
    killThread id2
    return res
  where
    wrapper :: MVar a -> IO a -> IO ()
    wrapper mvar io = do
        res <- io
        putMVar mvar res

timeout :: Int -> IO a -> IO (Maybe a)
timeout n t = do
    res <- par_io timer thr
    return res
  where
    thr = do
        res <- t
        return $ Just res
    timer = do
        threadDelay n
        return Nothing




Re: [Haskell-cafe] Concurrency question

2005-09-04 Thread Donald Bruce Stewart
akamaus:
> Donald Bruce Stewart wrote:
> 
> >Maybe your loop does no allocations, so the scheduler can't get in and do a
> >context switch.  You could put the computation in an external program, and run
> >it over a fork, using unix signals in the external program to kill the
> >computation after a period of time.
> 
> I thought about doing that, but the function is closely connected with the 
> rest of the program. Running it in another process would require some 
> parsing of its arguments, and I want to circumvent these difficulties.

Ah, I've found another example. This function attempts to run an
expensive computation. If it doesn't return within a given time, a cheap
function is used instead. This was mostly written by Stefan Wehr:

watchdogIO :: Int  -- milliseconds
           -> IO a -- expensive computation
           -> IO a -- cheap computation
           -> IO a
watchdogIO millis expensive cheap = do
    mvar <- newEmptyMVar
    tid1 <- forkIO $ do x <- expensive
                        x `seq` putMVar mvar (Just x)
    tid2 <- forkIO $ do threadDelay (millis * 1000)
                        putMVar mvar Nothing
    res <- takeMVar mvar
    case res of
      Just x -> do
        info ("EXPENSIVE was used")
        killThread tid2 `catch` (\e -> warn (show e))
        return x
      Nothing -> do
        info ("WATCHDOG after " ++ show millis ++ " milliseconds")
        killThread tid1 `catch` (\e -> warn (show e))
        cheap
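
A hypothetical usage sketch (not from the thread; watchdogIO and its
info/warn helpers are assumed to be in scope as above, and the result is
a plain Integer, so the `seq` inside watchdogIO forces it completely):

demoWatchdog :: IO ()
demoWatchdog = do
    r <- watchdogIO 500                                     -- give the expensive path 500 ms
                    (return (sum [1 .. 10 ^ 8 :: Integer])) -- expensive computation
                    (return 0)                              -- cheap fallback
    print r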

-- Don


Re: [Haskell-cafe] Concurrency question

2005-09-04 Thread Dmitry Vyal

Donald Bruce Stewart wrote:


Maybe your loop does no allocations, so the scheduler can't get in and do a
context switch.  You could put the computation in an external program, and run
it over a fork, using unix signals in the external program to kill the
computation after a period of time. 


I thought about doing that, but the function is closely connected with the 
rest of the program. Running it in another process would require some 
parsing of its arguments, and I want to circumvent these difficulties.


Moreover, this function indeed allocates plenty of memory (it creates long 
lists), so I'm just curious to establish the reason for this 
(mis)behavior. By the way, what does "no allocations" mean, precisely?
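
For illustration (a sketch not from the thread): "no allocations" refers
to a loop like the one below, which with optimisation runs entirely in
registers and never touches the heap; since GHC only switches threads at
heap-allocation points, such a loop is never preempted.

-- Sketch only: a tight, non-allocating counting loop.
spin :: Int -> Int
spin 0 = 0
spin n = spin (n - 1)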


This is the top part of the program I'm having trouble with. "resolve" is the 
sluggish function whose execution I'm trying to interrupt. It hogs a lot of 
memory, so context switching should occur regularly.


I'm new to Haskell, so probably I've just made some really stupid mistake.

Thanks a lot for your help.


res_timeout = 100 -- time quota in microseconds

forever a = a >> forever a

main :: IO ()
main = do
    args <- getArgs
    if length args /= 1
      then usage
      else do
        axioms <- readFile (head args)
        let tree = parseInput axioms
        case tree of
          (Right exprs) -> do
              let cnf = normalize $ concatMap to_cnf exprs
              forever $ one_cycle cnf
          (Left er) -> putStr $ show er

usage = putStr "usage: resolution \n"

one_cycle :: CNF -> IO ()
one_cycle base = do
    inp <- getLine
    let lex_tree = parseInput inp
    case lex_tree of
      (Right exprs) -> run_resolution $
                         normalize $ to_cnf (Not (head exprs)) ++ base
      (Left er) -> putStr $ show er

-- Here I start a heavy computation

run_resolution :: CNF -> IO ()
run_resolution cnf = do
    res <- timeout res_timeout (return $ resolve cnf)
    case res of
      Just (ans, stats) -> do
          print stats
          print ans
      Nothing -> print "***timeout***"

-- These useful subroutines I saw in "Tackling The Awkward Squad"

par_io :: IO a -> IO a -> IO a
par_io t1 t2 = do
    c   <- newEmptyMVar :: IO (MVar a)
    id1 <- forkIO $ wrapper c t1
    id2 <- forkIO $ wrapper c t2
    res <- takeMVar c
    killThread id1
    killThread id2
    return res
  where
    wrapper :: MVar a -> IO a -> IO ()
    wrapper mvar io = do
        res <- io
        putMVar mvar res

timeout :: Int -> IO a -> IO (Maybe a)
timeout n t = do
    res <- par_io timer thr
    return res
  where
    thr = do
        res <- t
        return $ Just res
    timer = do
        threadDelay n
        return Nothing


Re: [Haskell-cafe] Concurrency question

2005-09-04 Thread Donald Bruce Stewart
akamaus:
> Hi, everyone!
> 
> I have a function which sometimes takes a long time to compute, or may
> even loop forever. So I want to put a time limit on it somehow.
> 
> I tried to run it in another thread in order to kill it after its time
> elapsed. But it seems to lock out other threads, so they can't terminate it.
> 
> I wonder, is there some clever way of dealing with such a situation
> (running a computation in the background for a limited time)?

Maybe your loop does no allocations, so the scheduler can't get in and do a
context switch.  You could put the computation in an external program, and run
it over a fork, using unix signals in the external program to kill the
computation after a period of time.  This is pretty much bullet proof:

> import System.Exit
> import System.Posix.Resource
> 
> rlimit = ResourceLimit 3 -- 3 second time limit
> 
> main = do
>     setResourceLimit ResourceCPUTime (ResourceLimits rlimit rlimit)
>     ... run my dangerous loop ...
>     exitWith ExitSuccess

And then in the host application:
  
> (out,err,_) <- popen "my_loop_code" [] Nothing
> return $ case () of
>     _ | null out && null err -> "Terminated\n"
>       | null out             -> err
>       | otherwise            -> out

where popen looks something like:
  
> -- Needs System.IO (hPutStr, hClose, hGetContents), System.Process
> -- (runInteractiveProcess, waitForProcess, ProcessHandle),
> -- Control.Concurrent (forkIO) and Control.Exception in scope.
> popen :: FilePath -> [String] -> Maybe String -> IO (String,String,ProcessHandle)
> popen file args minput =
>     Control.Exception.handle (\e -> return ([],show e,error (show e))) $ do
>         (inp,out,err,pid) <- runInteractiveProcess file args Nothing Nothing
>         case minput of
>             Just input -> hPutStr inp input >> hClose inp
>             Nothing    -> return ()
>         output <- hGetContents out
>         errput <- hGetContents err
>         -- force both streams in the background so the child can't block on a full pipe
>         forkIO (Control.Exception.evaluate (length output) >> return ())
>         forkIO (Control.Exception.evaluate (length errput) >> return ())
>         waitForProcess pid  -- blocks without -threaded, you're warned
>         return (output,errput,pid)


Cheers,
Don


[Haskell-cafe] Concurrency question

2005-09-04 Thread Dmitry Vyal

Hi, everyone!

I have a function which sometimes takes a long time to compute, or may
even loop forever. So I want to put a time limit on it somehow.

I tried to run it in another thread in order to kill it after its time
elapsed. But it seems to lock out other threads, so they can't terminate it.

I wonder, is there some clever way of dealing with such a situation
(running a computation in the background for a limited time)?


Thanks a lot.