On Fri, Oct 31, 2014 at 11:07 PM, Akira Li <4kir4...@gmail.com> wrote:
> where atomic_open() [1] tries to overcome multiple issues with saving
> data reliably:
>
> - write to a temporary file so that the old data is always available
> - rename the file when all new data is written, handle cases such as:
>   * "antivirus opens old file thus preventing me from replacing it"
>
> either the operation succeeds and 'backup' contains new data or it fails
> and 'backup' contains untouched ready-to-restore old data -- nothing in
> between.
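For context, the write-to-temp-then-rename pattern Akira describes can be sketched in a few lines. This is a minimal, illustrative sketch (the function name save_atomically is mine, not from atomic_open()), and it deliberately skips the Windows "antivirus holds the file open" retry logic that the real code handles -- here a failed rename simply raises:

```python
import os
import tempfile

def save_atomically(path, data):
    """Write data so readers see either the old or the new contents
    of path, never a partial write (illustrative sketch)."""
    dirname = os.path.dirname(path) or "."
    # Create the temp file in the same directory as the target so the
    # final rename stays on one filesystem and remains atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the new data to disk first
        os.replace(tmp, path)     # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)            # leave no temp file behind on failure
        raise
```

If the rename never happens, the old file is untouched -- which is exactly the "nothing in between" guarantee described above.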
Sounds like a lot of hassle, and a lot of things that could be done wrongly. Personally, if I need that level of reliability and atomicity, I'd rather push the whole question down to a lower level: commit something to a git repository and push it to a remote server, use a PostgreSQL database, or something of that sort. Let someone else have the headaches about "what if AV opens the file"; let someone else worry about how to cope with power failures at arbitrary points in the code.

(Though, to be fair, using git for this doesn't fully automate failure handling; what it does is let you detect issues on startup and either roll back ("git checkout -f") or keep the changes ("git commit -a") if they look okay.)

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list