Alain Ketterlin <al...@universite-de-strasbourg.fr.invalid>:

> The close(2) manpage has the following warning on my Linux system:
>
> | Not checking the return value of close() is a common but
> | nevertheless serious programming error. It is quite possible that
> | errors on a previous write(2) operation are first reported at the
> | final close(). Not checking the return value when closing the file
> | may lead to silent loss of data. This can especially be observed
> | with NFS and with disk quota.
> | 
>
> (I haven't followed the thread, but if your problem is to make sure
> fds are closed on exec, you may be better off using the...
> close-on-exec flag. Or simply do the bookkeeping.)

The quoted man page passage is somewhat untenable.

First, if close() fails, what's a poor program to do? Try again? How
do you get rid of an obnoxious file descriptor? How would close-on-exec
help? Would exec*() fail?
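About the best a careful program can do is notice the failure and
report it. A minimal sketch in Python (the filename is made up); note
that POSIX leaves the descriptor's state unspecified after a failed
close(), so retrying isn't safe either:

    import os

    fd = os.open("data.txt", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"payload\n")
    try:
        os.close(fd)        # deferred write errors may surface here
    except OSError as exc:
        # Retrying close() risks hitting a descriptor number that
        # another thread has already reused; reporting the error is
        # about all that's left to do.
        print("close failed:", exc)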

What if an implicit close() fails during _exit()? Will _exit() fail
then? (The man page doesn't allow it.)

The need to close all open file descriptors arises between fork() and
exec*(). The filesystem driver does not see a final release until the
descriptor's reference count drops to zero, and after fork() the parent
still holds a reference to each shared open file. Normally, the close()
calls between fork() and exec*() are therefore no-ops as far as the
filesystem is concerned.
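A minimal sketch of that scenario (filename and exec target are
arbitrary):

    import os

    fd = os.open("out.txt", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"data that may still be in flight\n")

    pid = os.fork()
    if pid == 0:
        # Child: the customary cleanup before exec*().  The parent
        # still holds a reference to the open file, so the refcount
        # stays above zero and the filesystem never sees a final
        # release here.
        os.close(fd)
        os.execv("/bin/true", ["true"])

    os.waitpid(pid, 0)
    os.close(fd)    # the refcount drops to zero here, in the parent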

However, there is no guarantee of that ordering. The parent might
happen to call close() before the child that is about to call exec*()
does. Then the parent would not get the error that the man page talks
about. Instead, the error goes to the child, which has no reasonable
way of dealing with the situation.
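For what it's worth, the close-on-exec flag that Alain suggests makes
the problem disappear rather than solving it; a sketch:

    import os

    # The kernel drops the descriptor at exec*() time.  The child
    # never calls close() itself, so any deferred write error is
    # silently discarded, which is exactly the loss the man page
    # warns about.
    fd = os.open("out.txt", os.O_WRONLY | os.O_CREAT | os.O_CLOEXEC,
                 0o644)

(Since Python 3.4, PEP 446 makes descriptors created by the standard
library non-inheritable by default, with much the same effect.)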

I think having NFS et al. postpone their I/O errors until close() is
shifting the blame onto the victim.


Marko