> Sorry but I'm not convinced that blocking on a putMVar of a full MVar is
> any safer than throwing an exception. Normally putMVar to a full MVar
> is an error, and I would like to know about it.
To make putMVar to a full MVar an error, an extra rule had to be added to
the semantics! This extra rule doesn't interact nicely with the concurrent
reaction model, and the ensuing race condition may introduce exceptions
where none are needed.
Consider some simple producer-consumer pair, communicating via a
shared MVar:
  producer mv = mapM_ (putMVar mv) [1..]

  consumer mv = do
    x <- takeMVar mv
    print x
    consumer mv

  prod_cons = do
    mv <- newEmptyMVar
    forkIO (consumer mv)
    producer mv
According to the current CH semantics, this program may or may
not crash (throw an unhandled exception), at various stages during
its execution, depending on the scheduling sequence.
I would not usually want such programs to be problematic - the
communication channel works as a buffer, and the capacity limitation
guarantees that the producer doesn't overrun (with blocking
putMVars). And what would you do in the exception handler?
Sleep for a while, then try again?
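To make that rhetorical workaround concrete, here is a sketch of such a polling writer, assuming GHC's tryPutMVar is available (it returns False on a full MVar instead of raising); putRetry itself is a hypothetical helper, not an existing primitive:

```haskell
import Control.Concurrent (forkIO, threadDelay, takeMVar)
import Control.Concurrent.MVar

-- Hypothetical polling writer: if the MVar is full, sleep briefly
-- and try again. This busy-waiting is exactly what a blocking
-- putMVar makes unnecessary.
putRetry :: MVar a -> a -> IO ()
putRetry mv x = do
  ok <- tryPutMVar mv x
  if ok
    then return ()
    else threadDelay 1000 >> putRetry mv x
```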
Using CVars would avoid the problem; introducing a separate
demand channel from the consumer to the producer would also
avoid it. So I can rewrite the program in a safe way, but then
both reader and writer get extra work, even though only the
writer was unsafe!
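A sketch of that safe rewrite, with a hypothetical demand channel (an MVar ()) on which the consumer requests each value before the producer puts it; note that both sides now perform an extra MVar operation, which is the point about the reader paying for the writer's safety:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM_)

-- The consumer signals demand on 'dem' before the producer fills
-- 'mv', so 'mv' is provably empty at every putMVar.
producer :: MVar () -> MVar Int -> IO ()
producer dem mv = forM_ [1 ..] $ \x -> do
  takeMVar dem       -- extra work for the writer: wait for a request
  putMVar mv x       -- safe: the consumer emptied mv before asking

consumer :: MVar () -> MVar Int -> IO ()
consumer dem mv = do
  putMVar dem ()     -- extra work for the reader: request a value
  x <- takeMVar mv
  print x
  consumer dem mv

prod_cons' :: IO ()
prod_cons' = do
  dem <- newEmptyMVar
  mv  <- newEmptyMVar
  _ <- forkIO (consumer dem mv)
  producer dem mv
```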
(question to the specialists: does this mean I pay twice, i.e., that
safe abstractions built on half-safe primitives are more expensive
than a safe pair of primitives would be?)
I am not saying that there should only be blocking primitives. But
if everybody is using raw MVars, it should be made clear in the
code (operation name or type) that there is a proof obligation and
an exception to handle, i.e., that the current putMVar is not the
operation one would naively expect to be paired with the current
takeMVar (btw, what was the original motivation for this choice?).
Claus