I just wanted to suggest another approach that may work, at least as a 
fallback.

When I upload files to Amazon S3 I track the activity in a logging table 
(via SQLAlchemy) that looks a bit like this:

  id | filename | timestamp_upload_start | upload_status (bool) | timestamp_deleted
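
In SQLAlchemy terms that's roughly the model below. This is just a sketch: 
the class name, table name, sequence name, and exact column types are my 
guesses, not the real thing.

  import datetime

  from sqlalchemy import Boolean, Column, DateTime, Integer, Sequence, String
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class UploadLog(Base):
      __tablename__ = 'upload_log'  # table/class names are assumptions

      id = Column(Integer, Sequence('upload_log_id_seq'), primary_key=True)
      filename = Column(String(1024), nullable=False)
      timestamp_upload_start = Column(DateTime, nullable=False,
                                      default=datetime.datetime.utcnow)
      upload_status = Column(Boolean)       # True = success, False = failure
      timestamp_deleted = Column(DateTime)  # set once a failed file is removed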

Before uploading, I create and flush an object with: id (from a sequence), 
filename, and timestamp_upload_start.

If the upload succeeds, I set upload_status = True and flush.
If it fails, I set upload_status = False and flush.
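
The whole round trip is something like this sketch, assuming the UploadLog 
model above and a boto3-style client (upload_with_logging is a made-up name):

  import datetime

  def upload_with_logging(session, s3_client, bucket, filename):
      # Create and flush the log row *before* touching S3, so even a
      # hard crash mid-upload leaves evidence behind.
      entry = UploadLog(filename=filename,
                        timestamp_upload_start=datetime.datetime.utcnow())
      session.add(entry)
      session.flush()

      try:
          # boto3-style call; swap in whatever client you actually use.
          s3_client.upload_file(filename, bucket, filename)
          entry.upload_status = True
      except Exception:
          entry.upload_status = False
      session.flush()
      return entry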

If I can catch the failure, I delete the file and update 
`timestamp_deleted`. If I can't catch it, a task runner periodically checks 
for failed uploads that were never deleted and performs the cleanup, as 
sketched below.
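
The sweep itself can be as simple as this (again just a sketch against the 
model above; cleanup_failed_uploads is hypothetical):

  import datetime

  def cleanup_failed_uploads(session, s3_client, bucket):
      # Rows marked failed whose files were never removed.
      stale = (session.query(UploadLog)
               .filter(UploadLog.upload_status.is_(False),
                       UploadLog.timestamp_deleted.is_(None))
               .all())
      for entry in stale:
          s3_client.delete_object(Bucket=bucket, Key=entry.filename)
          entry.timestamp_deleted = datetime.datetime.utcnow()
      session.commit()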
