Hello,
Concurrency in Haskell (technically, in GHC) is much like concurrency in other languages,
in that you fork off threads and do stuff with them, etc. I think you are perhaps thinking of another kind of concurrency, where different redexes in
a term are evaluated simultaneously. Here is an example to illustrate what I was talking about
(the point about stToIO making that one law unsafe):


> import Control.Monad.ST
> import Data.STRef
> import Control.Concurrent
> import Control.Monad(when)

> reader     :: STRef r Int -> ST r ()
> reader r    = do x <- readSTRef r
>                  y <- readSTRef r
>                  when (x == y) (reader r)

> writer     :: Int -> STRef r Int -> ST r ()
> writer n r  = do writeSTRef r n
>                  writer (n+1) r

> main       :: IO ()
> main        = do r <- stToIO (newSTRef 0)
>                  forkIO (stToIO (writer 1 r))
>                  stToIO (reader r)

In GHC the whole program stops when the main thread exits.
So if the law I was talking about held, this program would
never terminate, as it would loop forever in 'reader'.
However, since there is a concurrent thread running that can modify
the state, if 'reader' is interrupted between the two readSTRefs
we will get different values for 'x' and 'y', and 'reader' will stop.
I tried this in GHC and it stops after a little while, so the law does not hold.
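To see the failure without relying on scheduler timing, here is a deterministic, single-threaded sketch (my own code, not from the program above) in which an explicit intervening write plays the role of the concurrent writer thread. Once another agent can write between the two reads, collapsing them into one is unsound:

```haskell
import Control.Monad.ST
import Data.STRef

-- We act as the "writer" ourselves between the two readSTRefs.
observeInterference :: IO (Int, Int)
observeInterference = do
  r <- stToIO (newSTRef 0)
  x <- stToIO (readSTRef r)
  stToIO (writeSTRef r 1)   -- stands in for the concurrent writer thread
  y <- stToIO (readSTRef r)
  return (x, y)

main :: IO ()
main = do
  (x, y) <- observeInterference
  print (x, y)  -- (0,1): x /= y, so the two reads cannot be merged
```

Of course, here the write is visible in the program text; the point of the forked-thread version is that with stToIO the same interference can happen even when nothing appears between the two reads.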


What is interesting is that I think the law does hold if
we execute an ST computation using runST, so at least in principle
GHC could perform this optimization in such situations.
I think the law holds there because no reference can escape to concurrent threads:
if one did, its region parameter would become "RealWorld", and then the computation
could not be run with runST.
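As a sketch of that argument (my example, not from the thread): a computation whose reference stays local remains polymorphic in its region parameter, so runST accepts it and two adjacent reads provably agree, whereas anything routed through stToIO has its region fixed to RealWorld and is rejected by runST's rank-2 type:

```haskell
import Control.Monad.ST
import Data.STRef

-- runST  :: (forall s. ST s a) -> a    -- region must stay polymorphic
-- stToIO :: ST RealWorld a -> IO a     -- region fixed to RealWorld

-- The reference never escapes, so the region stays polymorphic and
-- the two adjacent reads always see the same value:
pureTwoReads :: Int -> (Int, Int)
pureTwoReads n = runST $ do
  r <- newSTRef n
  x <- readSTRef r
  y <- readSTRef r      -- a compiler could soundly replace this read with x
  return (x, y)

-- By contrast, something like
--   runST (newSTRef 0)
-- is a type error: the STRef's region parameter would escape in the result.

main :: IO ()
main = print (pureTwoReads 7)  -- (7,7)
```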


-Iavor

Graham Klyne wrote:

At 10:38 08/11/04 -0800, Iavor S. Diatchki wrote:

... Now the above law already doesn't hold when all GHC extensions are used,
as when concurrency is present we cannot assume that nobody modified the state concurrently.
As a result all pointers in GHC probably behave as if they were "volatile", which is not very nice.


Eek!  I find this most worrisome.  And I'm not sure that I agree.

I thought that part of the reason for having a monad was that it was threaded in a useful way through the path of a *single* computation (expression evaluation). If concurrent activities can change that, then I sense that they're breaking something quite fundamental in the way Haskell should work.

e.g. in a sequence like:

  v :: SomeMonad
  v = do { e1 ; e2 ; e3 }

Then I think that exactly the monad created by e1 is passed to e2, and the result of e2 passed to e3, without any visible external interference under any circumstance. Concurrency, as I understand it should apply to Haskell, would allow different elements of that computation to be performed in different threads/processes, but the overall result of the computation should not be changeable. Put another way, the graph reduction model for evaluating a Haskell program should not change, just the mechanics of which actual processes (or processors) perform the reduction steps.

Or am I really overlooking something here?

#g


------------ Graham Klyne For email: http://www.ninebynine.org/#Contact


_______________________________________________
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe
