On 13/04/2015 12:54, Leonid Fedorenchik wrote:
        opendir();
        do {
                unlinked_files = 0;
                while (readdir()) {
                        unlink();
                        unlinked_files = 1;
                }
                if (unlinked_files)
                        rewinddir();
        } while (unlinked_files);
        closedir();

 That is actually incorrect: it relies on unspecified,
implementation-specific directory behavior.

 The readdir() specification page you linked says:

 "If a file is removed from or added to the directory after the most
recent call to opendir() or rewinddir(), whether a subsequent call to
readdir() returns an entry for that file is unspecified."

 Which means that it is entirely possible for readdir() to keep
returning NULL after a rewinddir() even if files have been added to the
directory in the meantime. The fact that your readdir() implementation
does return them is an accident of that implementation.

 If you really want to use that algorithm, you have to loop around
opendir() and closedir(): re-open the directory after every pass, until
it is empty. But this is dangerous: it opens a race condition where you
could delete a new directory created by another process after you
successfully deleted the old one.
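 For illustration, that re-open variant might be sketched as below.
remove_all_files() is just an illustrative name, error handling is kept
minimal, and the race described above still applies between passes:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int remove_all_files(const char *path)
{
        int unlinked;

        do {
                DIR *dir = opendir(path);  /* re-open for every pass */
                struct dirent *ent;

                if (dir == NULL)
                        return -1;
                unlinked = 0;
                while ((ent = readdir(dir)) != NULL) {
                        char buf[4096];

                        if (strcmp(ent->d_name, ".") == 0 ||
                            strcmp(ent->d_name, "..") == 0)
                                continue;
                        snprintf(buf, sizeof(buf), "%s/%s",
                                 path, ent->d_name);
                        if (unlink(buf) == 0)
                                unlinked = 1;
                }
                closedir(dir);
        } while (unlinked);  /* stop after a pass that removed nothing */
        return 0;
}
```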

 In other words, there is no way to atomically delete a file hierarchy
in Unix, and every "rm -rf" implementation is a compromise.

 A way that I have found safer than most is the following:
 * atomically rename() the directory to a non-existing, unique, random,
hard-to-predict directory name.
 * recursively delete the newly named directory with your favorite
deletion algorithm. No matter what method you choose, there is still
a race condition, but the chances it will be triggered are significantly
reduced.

--
 Laurent

_______________________________________________
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox
