Bernhard Voelker wrote:
> Michael Boldischar wrote:
> > For what it's worth, I still think a safe move flag in "mv" adds value.
>
> Instead of a general "safe" idea, I'm wondering against what error cases
> mv should be made aware of?
>
> I.e., there are cases when creating the new files/directories fails,
> when reading or writing fails, and when unlinking the old files/directories
> fails.
I think this is a good idea. What are the failure cases that can get mv
into trouble? If we can enumerate them, then we can either improve them or
document them. There may be some hidden trouble spots that could be fixed.
An improved test suite might be able to tickle the problematic cases so
that we don't fall into them accidentally.

> IMHO the only solution to be sure mv completely finished its job is
> when all steps are done. Therefore, using cp+rm makes most sense.
> Thinking iterative, rsync may be the best option.

If I need an atomic move of a single file, then I use mv. If the move is
within the same filesystem, then the mv will be fast and atomic. If the
move is across filesystems, or the data set that needs to be moved is
non-trivial, then the move cannot be atomic. For any non-trivial data set
I have always used rsync until the copy is complete and then removed the
old copy. As Gordon pointed out, the best feature of rsync is that the
process can be stopped and restarted, and rsync will be efficient about
picking up the copy from where it left off.

The fact that mv makes a best effort to move across filesystems is really
a misfeature. It shouldn't have tried. It changes the behavior of the
operation in that it can no longer be atomic. The set of possible errors
now includes all of cp's possible errors. Therefore it violates the Unix
philosophy. If it didn't work at all, then users would naturally use a
different tool like cp+rm or rsync+rm, and there wouldn't be any
confusion. But of course that is water long under the bridge now.

Bob
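P.S. The cp+rm pattern discussed above can be sketched in a few lines of
shell. This is just an illustration of the "copy everything first, unlink
only afterwards" idea; the paths are made up for the demo and are not from
this thread:

```shell
#!/bin/sh
# Sketch of the cp+rm approach: the original tree is removed only after
# every copy step has succeeded.  Paths below are illustrative only.
set -e

src=$(mktemp -d)          # stand-in for the source tree
dst=$src.moved            # stand-in for the destination
echo demo > "$src/file"

# For large transfers, "rsync -a" could replace cp here; rerunning
# rsync resumes an interrupted copy where it left off.
cp -a "$src" "$dst"

# Unlink the original only after the copy fully succeeded; "set -e"
# aborts before this point if any copy step failed.
rm -rf "$src"

test -f "$dst/file" && test ! -e "$src" && echo "move complete"
rm -rf "$dst"
```

Unlike a cross-filesystem mv, a failure partway through leaves the source
untouched, so the operation can simply be retried.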