Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-16 Thread Richard O'Keefe

On 17/12/2010, at 12:03 PM, Mitar wrote:
> 
> Yes, this is the same problem as when designing an operating system:
> what should happen to a system call when it is interrupted? And I agree
> with the elegance of simply returning the EINTR errno. It makes system
> calls much easier to implement, and for the user it is really simple to
> wrap the call in a retry loop. So it is quite common to see a function
> like this:
> 
> static int xioctl( int fd, int request, void *arg) {
> int r;
> 
> do {
>   r = ioctl(fd, request, arg);
> } while (-1 == r && EINTR == errno);
> 
> return r;
> }
> 
And _this_ you call "simple elegance"?

It may be simple, but the user has to know to do it, and the user has to
do it over and over and over again.

There are gotchas when this kind of thing propagates into higher layers.
For example, in another window I have the source code for fwrite() from
OpenSolaris.  (Actually, there are three fwrite.c files, and I'm not at
all sure that the one I'm looking at, /usr/src/lib/libc/port/stdio/fwrite.c,
is the correct one.)  In the _IONBF case there are loops that look like

while ((n = write(fileno(iop), dptr, s)) != s) {
if (n == -1) {
if (!cancel_active()) iop->_flag |= _IOERR;
return written/size;
} else {
dptr += n;
s -= n;
written += n;
}
}

Do you see an EINTR loop there?  Neither do I.  This means
that it is possible for fwrite() to return a short count with
ferror() true and errno == EINTR.  If you are writing to the
ANSI C standard, there is no way you can respond appropriately
to this, because EINTR is not part of the C standard.

It _is_ part of the POSIX standard that file operations are
defined to set errno when they go wrong.  However, there is a
really nasty gotcha here.  The loop

do {
r = fwrite(p, sizeof *p, n, output);
} while (r == 0 && errno == EINTR);

is simply wrong.  There may have been successful partial
writes before the final interrupted one.  So let's try again.
r is the number of complete items written.  If the write is
interrupted, this need not be zero.  So

for (;;) {
    r = fwrite(p, sizeof *p, n, output);
    if (!ferror(output)) return SUCCESS;
    if (errno != EINTR) return FAILURE;
    clearerr(output);   /* the error flag is sticky; clear it before retrying */
    p += r;
    n -= r;
}

Not quite so easy to write.  I may have made any number of
mistakes, but it doesn't matter, because this still has a
problem.  fwrite() returns the number of complete items
written (written/size), but in the case of a partial write,
it may have written *more* than that.

Normally this doesn't matter, because almost every possible
error from fwrite() is something you would not want to
continue.  The only two you might want to continue are
EAGAIN and EINTR.  EAGAIN can only arise if you've explicitly
asked for non-blocking I/O, which you can't with fopen().

Now I'm not saying that you _couldn't_ design something like
fwrite() that _would_ be resumable after EINTR.  My point is
that apparently cute hacks like EINTR have a nasty way of
propagating into higher level interfaces and breaking them.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-16 Thread Mitar
Hi!

On Thu, Dec 16, 2010 at 10:30 AM, Simon Marlow  wrote:
> I've thought about whether we could support resumption in the past. It's
> extremely difficult to implement - imagine if the thread was in the middle
> of an I/O operation - should the entire I/O operation be restarted from the
> beginning?

Yes, this is the same problem as when designing an operating system:
what should happen to a system call when it is interrupted? And I agree
with the elegance of simply returning the EINTR errno. It makes system
calls much easier to implement, and for the user it is really simple to
wrap the call in a retry loop. So it is quite common to see a function
like this:

static int xioctl(int fd, int request, void *arg) {
    int r;

    do {
        r = ioctl(fd, request, arg);
    } while (-1 == r && EINTR == errno);

    return r;
}

And this is why, once I recognized the similarity, I decided to use my
uninterruptible function. Since resumption is hard to implement, such an
approach is a valid alternative.

I agree with you that the best solution would be to have no need for
such functions, because you would make sure that exceptions are raised
only in the threads you want. But this is hard to guarantee in complex
programs with many threads throwing many exceptions between them,
especially if you are using third-party libraries you do not know much
about, which could throw an exception to some thread you did not
expect. This is why it is good to make sure that things will work as
expected even if some exception leaks in. It would be great if the type
system could help here: maybe a separate (type) system just for
exception-coverage checking, which would build a graph from the code of
how exceptions can be thrown (this would require following how ThreadId
values are passed around, as capability tokens for throwing an
exception) and inform/warn you about which exceptions can be thrown in
which code segments. You could then specify which exceptions you
predict/allow, and on a mismatch you would get a compile-time error.


Mitar



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-16 Thread Simon Marlow

On 07/12/2010 21:30, Mitar wrote:

Hi!

On Wed, Dec 1, 2010 at 10:50 AM, Simon Marlow  wrote:

Yes, but the semantics are different. I want to tolerate some
exceptions because they are telling me to do this and that (for example
a user interrupt, or a timeout), but I do not want others, which
somebody else may have created and which I do not want to care about.


Surely if you don't care about these other exceptions, then the right thing
to do is just to propagate them?  Why can't they be dealt with in the same
way as user interrupt?


The problem is that Haskell does not support resuming normal execution
flow as if there were no exception.


If you can't tolerate certain kinds of exceptions being thrown to a 
thread, then... don't throw them.  If your intention is that the thread 
will continue after receiving a certain kind of exception, then handle 
the exception in a different thread instead.


You have complete control over which asynchronous exceptions are thrown 
to a thread.  You might argue that StackOverflow/HeapOverflow aren't 
under your control - but in fact StackOverflow will never be raised 
inside mask.  If/when we ever implement HeapOverflow we'll make sure it 
has this property too.



I've thought about whether we could support resumption in the past. 
It's extremely difficult to implement - imagine if the thread was in the 
middle of an I/O operation - should the entire I/O operation be 
restarted from the beginning?  If the thread was blocked on an MVar, 
then it has to re-block on the same MVar - but maybe now the MVar is 
full.  Should it be as if the thread was never unblocked?  The only 
sensible semantics is that the exception behaves like an interrupt - but 
we already have support for that, just run the handler in a different 
thread.
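A minimal sketch of that pattern (all names here are illustrative, not
from the original program): rather than throwing an async exception at
the worker, the event is delivered to a dedicated handler thread, and
the worker is never disturbed:

```haskell
import Control.Concurrent

main :: IO ()
main = do
  result <- newEmptyMVar
  event  <- newEmptyMVar
  -- Worker: never receives an async exception, so it needs no resumption.
  _ <- forkIO $ do
         threadDelay 20000            -- stand-in for real work
         putMVar result (42 :: Int)
  -- Handler thread: the "interrupt" behaves like an event it consumes.
  _ <- forkIO $ do
         msg <- takeMVar event
         putStrLn ("handled in a separate thread: " ++ msg)
  putMVar event "simulated interrupt"
  print =<< takeMVar result
```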


Cheers,
Simon



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-07 Thread Mitar
Hi!

On Wed, Dec 1, 2010 at 10:50 AM, Simon Marlow  wrote:
>> Yes, but the semantics are different. I want to tolerate some
>> exceptions because they are telling me to do this and that (for example
>> a user interrupt, or a timeout), but I do not want others, which
>> somebody else may have created and which I do not want to care about.
>
> Surely if you don't care about these other exceptions, then the right thing
> to do is just to propagate them?  Why can't they be dealt with in the same
> way as user interrupt?

The problem is that Haskell does not support resuming normal execution
flow as if there were no exception. For a user interrupt I want to
interrupt execution, clean up, print something to the user, etc. But
for other exceptions I want to behave as if nothing happened. This is
why they cannot be dealt with in the same way as a user interrupt.

One way to "behave as if nothing happened" is to mask them (and only
them); the other is to "eat" (and ignore) them. The first would
probably be better if it were possible, so the second is the one I have
chosen.

The third way, restoring the execution flow, is sadly not possible in
Haskell. Would it be possible to add this feature? To have some
function which operates on an Exception value and can get you back to
where the exception was thrown?

>> uninterruptible :: IO a ->  IO a
>> uninterruptible a = *CENSORED*
>
> That is a very scary function indeed.  It just discards all exceptions and
> re-starts the operation.  I definitely do not condone its use.

Yes, this one is extreme. For example, this would be more in line with
the arguments above:

uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catches` [
    Handler (\(e :: AsyncException) -> case e of
        UserInterrupt -> throw e
        _             -> uninterruptible a),
    Handler (\(_ :: SomeException) -> uninterruptible a)
  ]
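As a quick sanity check of this variant (a hypothetical test harness,
not from the thread, using throwIO rather than throw to re-raise): a
ThreadKilled thrown at the wrapped takeMVar is eaten and retried, while
UserInterrupt escapes:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent
import Control.Exception

uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catches`
  [ Handler (\(e :: AsyncException) -> case e of
      UserInterrupt -> throwIO e      -- re-raise: allowed to interrupt us
      _             -> uninterruptible a)
  , Handler (\(_ :: SomeException) -> uninterruptible a) ]

main :: IO ()
main = do
  mv  <- newEmptyMVar :: IO (MVar ())
  out <- newEmptyMVar
  t <- forkIO $
    (uninterruptible (takeMVar mv) >> putMVar out "finished")
      `catch` \(e :: SomeException) -> putMVar out ("escaped: " ++ show e)
  threadDelay 10000
  throwTo t ThreadKilled     -- eaten: the wrapper retries takeMVar
  threadDelay 10000
  throwTo t UserInterrupt    -- re-thrown: escapes the wrapper
  putStrLn =<< takeMVar out
```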

Mitar



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-02 Thread Simon Marlow

On 13/11/2010 19:08, Bit Connor wrote:

On Wed, Nov 10, 2010 at 5:48 PM, Simon Marlow  wrote:

[...] So we should
say there are a few things that you can do that guarantee not to call any
interruptible operations:

  - IORef operations
  - STM transactions that do not use retry
  - everything from the Foreign.* modules

and probably some other things.  Maybe we should put this list in the
documentation.


A list would be very helpful. I am specifically interested in knowing about:

- The try family of MVar functions
- the forkIO function itself


I added the following text to the Control.Exception documentation, in 
the section on "Interruptible operations".  Suggestions for improvements 
are welcome:



It is useful to think of 'mask' not as a way to completely prevent
asynchronous exceptions, but as a filter that allows them to be raised
only at certain places.  The main difficulty with asynchronous
exceptions is that they normally can occur anywhere, but within a
'mask' an asynchronous exception is only raised by operations that are
interruptible (or call other interruptible operations).  In many cases
these operations may themselves raise exceptions, such as I\/O errors,
so the caller should be prepared to handle exceptions arising from the
operation anyway.

Sometimes it is too onerous to handle exceptions in the middle of a
critical piece of stateful code.  There are three ways to handle this
kind of situation:

 * Use STM.  Since a transaction is always either completely executed
   or not at all, transactions are a good way to maintain invariants
   over state in the presence of asynchronous (and indeed synchronous)
   exceptions.

 * Use 'mask', and avoid interruptible operations.  In order to do
   this, we have to know which operations are interruptible.  It is
   impossible to know for any given library function whether it might
   invoke an interruptible operation internally; so instead we give a
   list of guaranteed-not-to-be-interruptible operations below.

 * Use 'uninterruptibleMask'.  This is generally not recommended,
   unless you can guarantee that any interruptible operations invoked
   during the scope of 'uninterruptibleMask' can only ever block for
   a short time.  Otherwise, 'uninterruptibleMask' is a good way to
   make your program deadlock and be unresponsive to user interrupts.

The following operations are guaranteed not to be interruptible:

 * operations on 'IORef' from "Data.IORef"
 * STM transactions that do not use 'retry'
 * everything from the @Foreign@ modules
 * everything from @Control.Exception@
 * @tryTakeMVar@, @tryPutMVar@, @isEmptyMVar@
 * @takeMVar@ if the @MVar@ is definitely full, and conversely
   @putMVar@ if the @MVar@ is definitely empty
 * @newEmptyMVar@, @newMVar@
 * @forkIO@, @forkIOUnmasked@, @myThreadId@
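To illustrate the dividing line (a small self-contained sketch, not part
of the documentation text): inside mask_, takeMVar on an empty MVar is
still interruptible, so a killThread gets through, while the IORef
operations around it are never interruption points:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent
import Control.Exception
import Data.IORef

main :: IO ()
main = do
  started <- newEmptyMVar
  outcome <- newEmptyMVar
  mv <- newEmptyMVar :: IO (MVar ())
  t <- forkIO $ mask_ $ do
         ref <- newIORef (0 :: Int)   -- IORef ops: guaranteed not interruptible
         modifyIORef ref (+ 1)
         putMVar started ()           -- MVar is empty, so this cannot block
         takeMVar mv                  -- blocks: interruptible even under mask_
           `catch` \(e :: SomeException) -> putMVar outcome (show e)
  takeMVar started
  killThread t                        -- delivered at the blocked takeMVar
  putStrLn =<< takeMVar outcome
```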


Cheers,
Simon



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-12-01 Thread Simon Marlow

On 01/12/2010 03:02, Mitar wrote:

Hi!

On Thu, Nov 18, 2010 at 2:19 PM, Simon Marlow  wrote:

then it isn't uninterruptible, because the timeout can interrupt it.  If you
can tolerate a timeout exception, then you can tolerate other kinds of async
exception too.


Yes, but the semantics are different. I want to tolerate some
exceptions because they are telling me to do this and that (for example
a user interrupt, or a timeout), but I do not want others, which
somebody else may have created and which I do not want to care about.


Surely if you don't care about these other exceptions, then the right 
thing to do is just to propagate them?  Why can't they be dealt with in 
the same way as user interrupt?





My main question is then, why do you want to use maskUninterruptible rather
than just mask?  Can you give a concrete example?


I have written a few of them. As I said: timeouts and user exceptions.
Timeouts are so that I, as the programmer, can limit otherwise
uninterruptible code, and user exceptions are so that users can limit
otherwise uninterruptible code.

But in the meantime I have found an elegant way to solve my problems (a
very old one, in fact). ;-)

I have defined the following function:

uninterruptible :: IO a ->  IO a
uninterruptible a = *CENSORED*


That is a very scary function indeed.  It just discards all exceptions 
and re-starts the operation.  I definitely do not condone its use.


Cheers,
Simon



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-30 Thread Mitar
Hi!

On Thu, Nov 18, 2010 at 2:19 PM, Simon Marlow  wrote:
> then it isn't uninterruptible, because the timeout can interrupt it.  If you
> can tolerate a timeout exception, then you can tolerate other kinds of async
> exception too.

Yes, but the semantics are different. I want to tolerate some
exceptions because they are telling me to do this and that (for example
a user interrupt, or a timeout), but I do not want others, which
somebody else may have created and which I do not want to care about.

> My main question is then, why do you want to use maskUninterruptible rather
> than just mask?  Can you give a concrete example?

I have written a few of them. As I said: timeouts and user exceptions.
Timeouts are so that I, as the programmer, can limit otherwise
uninterruptible code, and user exceptions are so that users can limit
otherwise uninterruptible code.

But in the meantime I have found an elegant way to solve my problems (a
very old one, in fact). ;-)

I have defined the following function:

uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catch` (\(_ :: SomeException) ->
    uninterruptible a)

with which I wrap takeMVar and other interruptible operations. As they
can only be interrupted or succeed, they can simply be retried until
they succeed. In this way my code is not susceptible to the deadlocks
which could happen if I used maskUninterruptible and, for example,
threw two exceptions between two threads while both were inside a
maskUninterruptible section. This way I can remain in the ordinary
masked state, exceptions can still be delivered, but my program flow is
not interrupted. Of course the function above can also easily be
modified to allow user interrupts and timeouts but not other
exceptions. And as I want to use this code only in my cleanup code, I
do not really care about new exceptions being ignored.
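For example (a hypothetical demonstration, not from the thread): a
killThread aimed at the wrapped takeMVar is swallowed and retried, so
the wait still completes once the MVar is filled:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Concurrent
import Control.Exception

-- The retry wrapper from the message above (GHC 7 spelling, mask_)
uninterruptible :: IO a -> IO a
uninterruptible a = mask_ $ a `catch` (\(_ :: SomeException) ->
    uninterruptible a)

main :: IO ()
main = do
  terminated <- newEmptyMVar
  out        <- newEmptyMVar
  t <- forkIO $ do
         v <- uninterruptible (takeMVar terminated)
         putMVar out v
  threadDelay 10000
  throwTo t ThreadKilled   -- interrupts the wait, but the wrapper retries
  putMVar terminated "wait completed anyway"
  putStrLn =<< takeMVar out
```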


Mitar



Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-18 Thread Simon Marlow

On 18/11/2010 11:31, Mitar wrote:

Hi!

On Wed, Nov 17, 2010 at 12:00 PM, Simon Marlow  wrote:

That's hard to do, because the runtime system has no knowledge of exception
types, and I'm not sure I like the idea of baking that knowledge into the
RTS.


But currently it does have some knowledge of interruptible and
uninterruptible states? So could another state be added, one which only
a ctrl-c would interrupt?


The RTS knows when a thread is blocked, which is when it becomes 
interruptible.  It doesn't need to know about different kinds of 
exceptions.  In fact, when an exception is thrown it might well be 
unevaluated.



The point of maskUninterruptible is for those
hopefully rare cases where (a) it's really inconvenient to deal with
async exceptions and (b) you have some external guarantee that the critical
section won't block.


But is it possible to make an uninterruptible code section which can
time out?


then it isn't uninterruptible, because the timeout can interrupt it.  If 
you can tolerate a timeout exception, then you can tolerate other kinds 
of async exception too.



Because once you enter maskUninterruptible, System.Timeout probably
does not work any more? I would see uses for maskUninterruptible (or,
preferably, some derivation of it) if:

- the user would still be able to interrupt it
- you could specify some timeout or other condition after which the
code section would be interrupted, so that you could, for example, run
some cleanup code (where you do not want interrupts), but if this
cleanup code hangs you still have some way to kill it

Currently, with this absolute/all approach, maskUninterruptible is
really not very useful, because it masks too much. I would find much
more useful something which would still allow me (and only me, as the
programmer) to interrupt it, or the user (because the user should know
what he/she is doing). And this is why I argue for some way of
specifying which interrupts you still allow: mask everything except...
(user interrupts, my special exception meant for hung cleanup
conditions...).


My main question is then, why do you want to use maskUninterruptible 
rather than just mask?  Can you give a concrete example?


Cheers,
Simon


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-18 Thread Mitar
Hi!

On Wed, Nov 17, 2010 at 12:00 PM, Simon Marlow  wrote:
> That's hard to do, because the runtime system has no knowledge of exception
> types, and I'm not sure I like the idea of baking that knowledge into the
> RTS.

But currently it does have some knowledge of interruptible and
uninterruptible states? So could another state be added, one which only
a ctrl-c would interrupt?

> The point of maskUninterruptible is for those
> hopefully rare cases where (a) it's really inconvenient to deal with
> async exceptions and (b) you have some external guarantee that the critical
> section won't block.

But is it possible to make an uninterruptible code section which can
time out? Because once you enter maskUninterruptible, System.Timeout
probably does not work any more? I would see uses for
maskUninterruptible (or, preferably, some derivation of it) if:

- the user would still be able to interrupt it
- you could specify some timeout or other condition after which the
code section would be interrupted, so that you could, for example, run
some cleanup code (where you do not want interrupts), but if this
cleanup code hangs you still have some way to kill it

Currently, with this absolute/all approach, maskUninterruptible is
really not very useful, because it masks too much. I would find much
more useful something which would still allow me (and only me, as the
programmer) to interrupt it, or the user (because the user should know
what he/she is doing). And this is why I argue for some way of
specifying which interrupts you still allow: mask everything except...
(user interrupts, my special exception meant for hung cleanup
conditions...).


Mitar


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-17 Thread Simon Marlow

On 12/11/2010 07:49, Mitar wrote:


On Wed, Nov 10, 2010 at 4:48 PM, Simon Marlow  wrote:

You can use maskUninterruptible in GHC 7, but that is not generally
recommended,


Maybe there should be some function like maskUninterruptibleExceptUser
which would mask everything except UserInterrupt exception. Or maybe
UserInterrupt and some additional exception meant for use by programs,
like InterruptMaskException.


That's hard to do, because the runtime system has no knowledge of 
exception types, and I'm not sure I like the idea of baking that 
knowledge into the RTS.


Furthermore I'm not sure about the usefulness of 
maskUninterruptibleExceptUser - it seems even less useful than 
maskUninterruptible to me.  The point of maskUninterruptible is for 
those hopefully rare cases where (a) it's really inconvenient to 
deal with async exceptions and (b) you have some external guarantee that 
the critical section won't block.



Or we could make two new type classes:

HiddenException -- for those exceptions which should not print
anything if not caught


We could do that (or something like it), yes.


UninterruptibleExceptException -- for those exceptions which should
not be masked with maskUninterruptibleExcept function


but not this (see above).


Cheers,
Simon


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-15 Thread Bit Connor
On Wed, Nov 10, 2010 at 5:48 PM, Simon Marlow  wrote:
> [...] So we should
> say there are a few things that you can do that guarantee not to call any
> interruptible operations:
>
>  - IORef operations
>  - STM transactions that do not use retry
>  - everything from the Foreign.* modules
>
> and probably some other things.  Maybe we should put this list in the
> documentation.

A list would be very helpful. I am specifically interested in knowing about:

- The try family of MVar functions
- the forkIO function itself

Thanks,
Bit Connor


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-11 Thread Mitar
Hi!

On Wed, Nov 10, 2010 at 4:48 PM, Simon Marlow  wrote:
> You can use maskUninterruptible in GHC 7, but that is not generally
> recommended,

Maybe there should be some function like maskUninterruptibleExceptUser
which would mask everything except UserInterrupt exception. Or maybe
UserInterrupt and some additional exception meant for use by programs,
like InterruptMaskException.

Or we could make two new type classes:

HiddenException -- for those exceptions which should not print
anything if not caught
UninterruptibleExceptException -- for those exceptions which should
not be masked with maskUninterruptibleExcept function


Mitar


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-11 Thread Simon Marlow

On 10/11/2010 17:52, Mitar wrote:

Hi!

On Wed, Nov 10, 2010 at 4:16 PM, Simon Marlow  wrote:

The right way to fix it is like this:


Optimist. ;-)


  let run = unblock doSomething `catches` [
        Handler (\(_ :: MyTerminateException) -> return ()),
        Handler (\(e :: SomeException) -> putStrLn $ "Exception: " ++ show e)
      ] `finally` (putMVar terminated ())
  nid <- block $ forkIO run


In 6.12.3 this does not work (it does not change anything, I hope I
tested it correctly) because finally is defined as:

a `finally` sequel =
  block (do
    r <- unblock a `onException` sequel
    _ <- sequel
    return r
  )


Oh, good point!  I must have tested it with 7.0.  So this is indeed 
something that is fixed by the new async exceptions API: finally itself 
doesn't raise async exceptions inside a mask.


To fix this with GHC 6.12.3 you'd have to avoid finally and do it manually:

  let run = do
        r <- (unblock doSomething `catches` [
                Handler (\(_ :: MyTerminateException) -> return ()),
                Handler (\(e :: SomeException) -> putStrLn $ "Exception: " ++ show e)
              ]) `onException` sequel
        sequel
        where sequel = putMVar terminated ()

Now *that* works - I tested it with 6.12.3 this time.

Cheers,
Simon









Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Mitar
Hi!

On Wed, Nov 10, 2010 at 4:16 PM, Simon Marlow  wrote:
> The right way to fix it is like this:

Optimist. ;-)

>  let run = unblock doSomething `catches` [
>        Handler (\(_ :: MyTerminateException) -> return ()),
>        Handler (\(e :: SomeException) -> putStrLn $ "Exception: " ++ show e)
>      ] `finally` (putMVar terminated ())
>  nid <- block $ forkIO run

In 6.12.3 this does not work (it does not change anything, I hope I
tested it correctly) because finally is defined as:

a `finally` sequel =
  block (do
    r <- unblock a `onException` sequel
    _ <- sequel
    return r
  )

You see that unblock there? So it still unblocks, meaning that a second
exception can be delivered immediately after catches handles the first
one?

But I agree that your explanation for what is happening is the correct
one. Better than my "hanging threads at the end". And with my throwIO
approach I just override MyTerminateException with ThreadKilled.


Mitar


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Simon Marlow

On 10/11/2010 13:39, Mitar wrote:


I know that (I read a post from you some time ago). It is in a TODO
comment before this code. I am waiting for GHC 7.0 for this, because I
do not like the current block/unblock approach.

Blocked sections can still be interrupted; a good example is takeMVar.
Another is (almost?) any IO. So if the code were:

block $ forkIO $ do
   putStrLn "Forked"
   (unblock doSomething) `finally` (putMVar terminated ())

There is a chance that putStrLn gets interrupted even with block.


The new async exceptions API isn't going to significantly change things 
in this area.  You still have interruptible operations that can receive 
async exceptions, and you don't know when you call an IO function in a 
library whether it uses interruptible operations or not.


The new API means that things like withMVar that previously would 
unblock async exceptions even when called inside block no longer do 
that.  But that doesn't affect whether a library function that you have 
no control over can raise async exceptions or not.


You can use maskUninterruptible in GHC 7, but that is not generally 
recommended, because of the danger that you make your program 
unresponsive to ^C indefinitely.  If you can be sure that the code 
will only be blocked for a short time, then maskUninterruptible might be 
justified, but I think that case is rare.  e.g. putStrLn could certainly 
block indefinitely.


So let's be clear about this.  Inside mask (previously block), the 
following things can raise asynchronous exceptions:


 - interruptible operations (MVar operations, throwTo, atomically)

Since we can't be certain that an IO operation from a library does not 
invoke any interruptible operations, we have to assume that all IO 
functions from libraries can raise async exceptions.


Well, maybe that's a bit conservative.  If it were really true then 
there wouldn't be much point in mask at all, since the only things you 
can meaningfully do inside IO involve calling library functions.  So we 
should say there are a few things that you can do that guarantee not to 
call any interruptible operations:


 - IORef operations
 - STM transactions that do not use retry
 - everything from the Foreign.* modules

and probably some other things.  Maybe we should put this list in the 
documentation.


Cheers,
Simon



So for me it is better to have a big TODO warning before the code than
to use block and believe that this is now fixed. When I add some code a
year later, I will have problems. Of course I could add a comment
warning not to add any IO before the finally. But I want to have a
reason to switch to GHC 7.0. ;-)

I am using this big hack for it:

-- Big hack to prevent interruption: it simply retries the interrupted computation
uninterruptible :: IO a -> IO a
uninterruptible a = block $ a `catch` (\(_ :: SomeException) ->
    uninterruptible a)

So I can call "uninterruptible $ takeMVar terminated" and be really
sure that I have waited for the thread to terminate - not have the
thread, in its last breaths, send me some exception which interrupts my
waiting and thus prevents it from terminating properly. Of course, this
is only if you know that you do not want your waiting interrupted in
any way. Maybe there could be a UserInterrupt escape hatch to this
uninterruptible way of handling exceptions. ;-)


I've made this same error and have seen others make it too. For this
reason I created the threads[1] package to deal with it once and for
all.


I know. I checked it. But I was put off by the mention of "unsafe".


Strange. It would help if you could show more of your code.


I am attaching a sample program which shows this. I am using 6.12.3 on
both Linux and Mac OS X. And I run this program with runhaskell
Test.hs. Without "throwIO ThreadKilled" it outputs:

Test.hs: MyTerminateException
MVar was successfully taken

With "throwIO ThreadKilled" it is, as expected, just:

MVar was successfully taken

So the MVar is filled, which means that the thread gets the exception
after that. But there is nothing after that. ;-) (At least nothing visible.)


Mitar



___
Glasgow-haskell-users mailing list
glasgow-haskell-us...@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Simon Marlow

On 10/11/2010 14:32, Bas van Dijk wrote:

On Wed, Nov 10, 2010 at 2:39 PM, Mitar  wrote:

Strange. It would help if you could show more of your code.


I am attaching a sample program which shows this. I am using 6.12.3 on
both Linux and Mac OS X. And I run this program with runhaskell
Test.hs. Without "throwIO ThreadKilled" it outputs:

Test.hs: MyTerminateException
MVar was successfully taken

With "throwIO ThreadKilled" it is, as expected, just:

MVar was successfully taken

So the MVar is filled, which means that the thread gets the exception
after that. But there is nothing after that. ;-) (At least nothing visible.)


This is really interesting. Presumably what happens is that an
exception is indeed thrown and then raised in the thread after the
final action. Now if you synchronously throw an exception at the end
it looks like it's raised before the asynchronous exception is raised.

Hopefully one of the GHC devs (probably Simon Marlow) can confirm this
behavior and shed some more light on it.


I think it's behaving as expected - there's a short window during which 
exceptions are unblocked and a second exception can be thrown.  The 
program has


  let run = doSomething `catches`
              [ Handler (\(_ :: MyTerminateException) -> return ())
              , Handler (\(e :: SomeException) ->
                           putStrLn $ "Exception: " ++ show e)
              ] `finally` (putMVar terminated ())
  nid <- forkIO run

The first MyTerminateException gets handled by the first exception 
handler.  This handler returns, and at that point exceptions are 
unblocked again, so the second MyTerminateException can be thrown.  The 
putMVar gets to run, and then the exception is re-thrown by finally, and 
caught and printed by the outer exception handler.


The right way to fix it is like this:

  let run = unblock doSomething `catches`
              [ Handler (\(_ :: MyTerminateException) -> return ())
              , Handler (\(e :: SomeException) ->
                           putStrLn $ "Exception: " ++ show e)
              ] `finally` (putMVar terminated ())
  nid <- block $ forkIO run

and the same will be true in GHC 7.0, except you'll need to use mask 
instead of block/unblock.
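As a rough sketch of what the GHC 7.0 version might look like (untested; it reuses doSomething, terminated and MyTerminateException from the code above, and assumes mask's restore function is passed into the forked computation in place of unblock):

```haskell
  -- Sketch only: mask replaces block, and `restore` replaces unblock.
  let run restore = restore doSomething `catches`
                      [ Handler (\(_ :: MyTerminateException) -> return ())
                      , Handler (\(e :: SomeException) ->
                                   putStrLn $ "Exception: " ++ show e)
                      ] `finally` (putMVar terminated ())
  nid <- mask $ \restore -> forkIO (run restore)
```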


Cheers,
Simon


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Bas van Dijk
On Wed, Nov 10, 2010 at 2:39 PM, Mitar  wrote:
>> Strange. It would help if you could show more of your code.
>
> I am attaching a sample program which shows this. I am using 6.12.3 on
> both Linux and Mac OS X. And I run this program with runhaskell
> Test.hs. Without "throwIO ThreadKilled" it outputs:
>
> Test.hs: MyTerminateException
> MVar was successfully taken
>
> With "throwIO ThreadKilled" it outputs, as expected, just:
>
> MVar was successfully taken
>
> So the MVar is filled, which means the thread gets the exception after
> that. But there is nothing after that. ;-) (At least nothing visible.)

This is really interesting. Presumably what happens is that an
exception is indeed thrown and then raised in the thread after the
final action. Now if you synchronously throw an exception at the end
it looks like it's raised before the asynchronous exception is raised.

Hopefully one of the GHC devs (probably Simon Marlow) can confirm this
behavior and shed some more light on it.

Bas


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Mitar
Hi!

On Wed, Nov 10, 2010 at 1:39 PM, Bas van Dijk  wrote:
> First of all, are you sure you're using a unique MVar for each thread?
> If not, there could be the danger of deadlock.

I am.

> Finally the 'putMVar terminated ()' computation is performed.

Yes. But this should be fast if the MVar is empty, so putMVar does not
block. And the MVar is filled, because takeMVar does not block at the
end. So it is not putMVar that is interrupted (and thus left without a
handler), but something after it.

> Finally, your code has dangerous deadlock potential: Because you don't
> mask asynchronous exceptions before you fork it could be the case that
> an asynchronous exception is thrown to the thread before your
> exception handler is registered. This will cause the thread to abort
> without running your finalizer: 'putMVar terminated ()'.  Your
> 'takeMVar terminated' will then deadlock because the terminated MVar
> will stay empty.

I know that (I read a post from you some time ago). It is in a TODO
comment before this code. I am waiting for GHC 7.0 for this because I
do not like the current block/unblock approach.

Blocked parts can still be interrupted; a good example of that is
takeMVar. Another is (almost?) any IO. So if the code were:

block $ forkIO $ do
  putStrLn "Forked"
  (unblock doSomething) `finally` (putMVar terminated ())

There is a chance that putStrLn gets interrupted even with block. So
for me it is better to have a big TODO warning before the code than to
use block and believe that this is now fixed, only to run into
problems when I add some code a year later. Of course I could add a
comment warning not to add any IO before the finally. But I want to
have a reason to switch to GHC 7.0. ;-)

I am using this big hack for it:

-- Big hack to prevent interruption: it simply retries the interrupted
-- computation (needs ScopedTypeVariables for the pattern annotation)
uninterruptible :: IO a -> IO a
uninterruptible a = block $ a `catch` (\(_ :: SomeException) -> uninterruptible a)

So I can call "uninterruptible $ takeMVar terminated" and be really
sure that I have waited for the thread to terminate, and not that the
thread, in its last breaths, sent me some exception which interrupted
my waiting for it and thus kept it from terminating properly. Of
course, this is only appropriate if you know that you do not want your
waiting to be interrupted in any way. Maybe there could be a
UserInterrupt exception to this uninterruptible way of handling
exceptions. ;-)

> I've made this same error and have seen others make it too. For this
> reason I created the threads[1] package to deal with it once and for
> all.

I know. I checked it, but I was put off by the mention of "unsafe".

> Strange. It would help if you could show more of your code.

I am attaching a sample program which shows this. I am using 6.12.3 on
both Linux and Mac OS X. And I run this program with runhaskell
Test.hs. Without "throwIO ThreadKilled" it outputs:

Test.hs: MyTerminateException
MVar was successfully taken

With "throwIO ThreadKilled" it outputs, as expected, just:

MVar was successfully taken

So the MVar is filled, which means the thread gets the exception after
that. But there is nothing after that. ;-) (At least nothing visible.)


Mitar


Test.hs
Description: Binary data


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Mitar
Hi!
On Wed, Nov 10, 2010 at 8:54 AM, Bas van Dijk  wrote:
> A ThreadKilled exception is not printed to stderr because it's not
> really an error and should not be reported as such.

So, how do I make custom exceptions which are "not really an error"?

> What do you mean by "hanging there". Are they blocked on an MVar?

I have something like this:

terminated <- newEmptyMVar
forkIO $ doSomething `catches`
           [ Handler (\(_ :: MyTerminateException) -> return ())  -- we just terminate, that is the idea at least
           , Handler (...)  -- handle other exceptions
           ] `finally` (putMVar terminated ())
takeMVar terminated

The problem is that if I send multiple MyTerminateException
exceptions to the thread, one of them is printed (or maybe even more;
I do not know, because I have many threads). My explanation is that
after the first one is handled and the MVar is written, the thread
stays "active", even though no computation is evaluating. Because of
that, another exception can be thrown at the thread. It is then not
handled anymore, so the library prints the exception and exits the thread.

If I change things into:

`finally` (putMVar dissolved () >> throwIO ThreadKilled)

it works as expected.

> That shouldn't be necessary. If the cleanup action is the last action
> of the thread then the thread will terminate when it has performed the
> cleanup.

It does not seem so.

> It would help if you could show us your code (or parts of it).

I hope the code above suffices. If not, I will make a real example.


Mitar


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-10 Thread Bas van Dijk
On Wed, Nov 10, 2010 at 10:50 AM, Mitar  wrote:
> Hi!
> On Wed, Nov 10, 2010 at 8:54 AM, Bas van Dijk  wrote:
>> A ThreadKilled exception is not printed to stderr because it's not
>> really an error and should not be reported as such.
>
> So, how do I make custom exceptions which are "not really an error"?

I would say that any exception which isn't handled must be considered
an error; any exception which is handled is just an exceptional
situation. For example, if your application does not handle
DivideByZero then it's an error if one is thrown. However, your
application does handle MyTerminateException, so when one is thrown it
will just be an exceptional situation that your application can
handle.
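For completeness, a custom exception like the MyTerminateException used in this thread can be defined with very little code. A minimal sketch against the GHC 6.12-era Control.Exception API:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Control.Exception (Exception)
import Data.Typeable (Typeable)

-- A custom exception is just a Typeable, Show-able type with an
-- Exception instance; nothing in the type marks it as "an error"
-- or "not an error".
data MyTerminateException = MyTerminateException
  deriving (Show, Typeable)

instance Exception MyTerminateException
```

Whether it is reported as an error then depends only on whether some handler catches it before it escapes the thread.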

>> What do you mean by "hanging there". Are they blocked on an MVar?
>
> I have something like this:
>
> terminated <- newEmptyMVar
> forkIO $ doSomething `catches`
>            [ Handler (\(_ :: MyTerminateException) -> return ())  -- we just terminate, that is the idea at least
>            , Handler (...)  -- handle other exceptions
>            ] `finally` (putMVar terminated ())
> takeMVar terminated
>
> The problem is that if I send multiple MyTerminateException
> exceptions to the thread, one of them is printed (or maybe even more;
> I do not know, because I have many threads). My explanation is that
> after the first one is handled and the MVar is written, the thread
> stays "active", even though no computation is evaluating. Because of
> that, another exception can be thrown at the thread. It is then not
> handled anymore, so the library prints the exception and exits the thread.

First of all, are you sure you're using a unique MVar for each thread?
If not, there could be the danger of deadlock.

Secondly, it could just be the case that when you throw your first
MyTerminateException it is caught by your handler, which will then
just return as expected. Finally the 'putMVar terminated ()'
computation is performed. Now when you throw your second
MyTerminateException during the execution of this computation, there's
no handler any more to catch it, so it will be reported.

Finally, your code has dangerous deadlock potential: Because you don't
mask asynchronous exceptions before you fork it could be the case that
an asynchronous exception is thrown to the thread before your
exception handler is registered. This will cause the thread to abort
without running your finalizer: 'putMVar terminated ()'.  Your
'takeMVar terminated' will then deadlock because the terminated MVar
will stay empty.

I've made this same error and have seen others make it too. For this
reason I created the threads[1] package to deal with it once and for
all. You could rewrite your code to something like:

import qualified Control.Concurrent.Thread as Thread

main = do
  (tid, wait) <- Thread.forkIO $
    doSomething `catches`
      [ Handler (\(_ :: MyTerminateException) -> return ())
      , Handler (...)
      ]
  _ <- wait

Hopefully that helps.
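If I remember the threads API correctly, `wait` returns a `Result`, which wraps either the thread's value or the exception that ended it, so instead of discarding it with `_ <- wait` the parent can inspect how the thread ended. A sketch (the exact types may differ between versions of the package):

```haskell
res <- wait   -- Result a, essentially Either SomeException a
case res of
  Left e  -> putStrLn $ "thread ended with exception: " ++ show e
  Right _ -> putStrLn "thread finished normally"
```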

> If I change things into:
>
> `finally` (putMVar dissolved () >> throwIO ThreadKilled)
>
> it works as expected.

Strange. It would help if you could show more of your code.

Regards,

Bas

[1] http://hackage.haskell.org/package/threads


Re: [Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-09 Thread Bas van Dijk
On Tue, Nov 9, 2010 at 5:37 PM, Mitar  wrote:
> Why is ThreadKilled not displayed by the RTS when sent to a thread (and
> unhandled), but any other exception is?

A ThreadKilled exception is not printed to stderr because it's not
really an error and should not be reported as such. It is also clear
how to handle a ThreadKilled: just abort the thread and silently
terminate. All other exceptions raised in or thrown to the thread
which are not handled by the thread itself should be considered errors
and reported as such.

To see what happens under the hood look at the implementation of
forkIO by clicking on the 'Source' link next to the type signature:

http://hackage.haskell.org/packages/archive/base/4.2.0.2/doc/html/Control-Concurrent.html#v%3AforkIO

The 'action_plus' that is forked catches all exceptions and handles
them with childHandler:
...
  where
action_plus = catchException action childHandler

childHandler in turn calls real_handler to handle the exception but
registers an exception handler so that exceptions raised by the
real_handler are handled again recursively:

childHandler :: SomeException -> IO ()
childHandler err = catchException (real_handler err) childHandler

Now real_handler handles the exceptions BlockedIndefinitelyOnMVar,
BlockedIndefinitelyOnSTM and ThreadKilled by just returning. All other
exceptions are reported:

real_handler :: SomeException -> IO ()
real_handler se@(SomeException ex) =
  -- ignore thread GC and killThread exceptions:
  case cast ex of
    Just BlockedIndefinitelyOnMVar -> return ()
    _ -> case cast ex of
           Just BlockedIndefinitelyOnSTM -> return ()
           _ -> case cast ex of
                  Just ThreadKilled -> return ()
                  _ -> case cast ex of
                         -- report all others:
                         Just StackOverflow -> reportStackOverflow
                         _                  -> reportError se

> For example, I am using custom exceptions to signal different kinds of
> thread killing. Based on those, my threads clean up in different ways.
> But after they clean up they keep "running". Not really running, as
> their computations were interrupted, but simply hanging there. And if
> then a new custom exception arrives it is not handled anymore (as it
> arrives after my normal exception handlers have already cleaned
> everything up) and it is printed to stderr.

What do you mean by "hanging there". Are they blocked on an MVar?

> Now I added "`finally` throwIO ThreadKilled" at the end of my cleanup
> computations to really kill the thread, so that possible later
> exceptions are not delivered anymore.

That shouldn't be necessary. If the cleanup action is the last action
of the thread then the thread will terminate when it has performed the
cleanup.

> Why is it not the case that after all computations in a thread finish,
> the thread is killed/disposed of?

It actually should. I think any other behavior is a bug.

It would help if you could show us your code (or parts of it).

Regards,

Bas


[Haskell-cafe] Printing of asynchronous exceptions to stderr

2010-11-09 Thread Mitar
Hi!

I have spent some time debugging this in my program. ;-)

Why is ThreadKilled not displayed by the RTS when sent to a thread (and
unhandled), but any other exception is?

For example, I am using custom exceptions to signal different kinds of
thread killing. Based on those, my threads clean up in different ways.
But after they clean up they keep "running". Not really running, as
their computations were interrupted, but simply hanging there. And if
then a new custom exception arrives it is not handled anymore (as it
arrives after my normal exception handlers have already cleaned
everything up) and it is printed to stderr.

Now I added "`finally` throwIO ThreadKilled" at the end of my cleanup
computations to really kill the thread, so that possible later
exceptions are not delivered anymore.

Why is it not the case that after all computations in a thread finish,
the thread is killed/disposed of? Because it can be reused by some
other computation? But then my exceptions could be delivered to the
wrong (new) thread while they were meant for the old one?


Mitar