off the top of my head:
1. celery
2. fork a process to handle the uploads
3. register a cleanup handler
4. homegrown batch / daemon -- log the upload locally, then process the 
upload separately
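A minimal sketch of option 4 (homegrown batch/daemon), using the stdlib only. The spool directory, the JSON record format, and the `do_upload` callback are all illustrative assumptions, not a real API:

```python
import json
import os
import tempfile
import uuid

# illustrative spool location; in practice pick a durable directory
SPOOL_DIR = os.path.join(tempfile.gettempdir(), "upload-spool")
os.makedirs(SPOOL_DIR, exist_ok=True)

def enqueue_upload(path):
    """Called from the web request: log the upload locally, return fast."""
    record = {"id": uuid.uuid4().hex, "path": path, "status": "pending"}
    with open(os.path.join(SPOOL_DIR, record["id"] + ".json"), "w") as f:
        json.dump(record, f)
    return record["id"]

def process_spool(do_upload):
    """Run separately (cron job or daemon): drain the spool."""
    done = []
    for name in os.listdir(SPOOL_DIR):
        full = os.path.join(SPOOL_DIR, name)
        with open(full) as f:
            record = json.load(f)
        if record["status"] != "pending":
            continue
        do_upload(record["path"])  # e.g. push to S3 here
        os.remove(full)            # or mark complete in a db instead
        done.append(record["id"])
    return done
```

The web process only ever touches the local disk, so the request returns quickly even if S3 is slow.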

in my personal experience, the main thing i'd watch out for is the 
bookkeeping/accounting portion of it.

you want to ensure that:
1. you mark the upload as complete when it's complete
2. you mark the upload as failed when it's failed
3. you have some sort of check in place to handle crashes (the process 
died during an upload, before it could mark it complete or failed)

and that you notify the user or your application as necessary.
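The three states above (plus the crash sweep) can be sketched as a tiny state table. This is an in-memory stand-in for what would really be a db table, and the timeout value is an arbitrary assumption:

```python
import time

STALE_AFTER = 300  # seconds; tune to your longest plausible upload

uploads = {}  # upload_id -> {"status": ..., "started": ...}

def start(upload_id):
    uploads[upload_id] = {"status": "in_progress", "started": time.time()}

def complete(upload_id):
    uploads[upload_id]["status"] = "complete"

def fail(upload_id):
    uploads[upload_id]["status"] = "failed"

def sweep_crashed(now=None):
    """Point 3: anything still 'in_progress' past the deadline is presumed
    dead -- the worker crashed before it could mark complete or failed."""
    now = now or time.time()
    crashed = []
    for uid, rec in uploads.items():
        if rec["status"] == "in_progress" and now - rec["started"] > STALE_AFTER:
            rec["status"] = "failed"
            crashed.append(uid)
    return crashed
```

Running `sweep_crashed` periodically (cron, or on daemon startup) is one way to catch uploads that died mid-flight.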

the record-keeping and transactional element of this is really important -- 
otherwise you can end up with an s3 bucket holding thousands of images 
(which you're paying to host) that will never be used.  i learned that 
the hard way due to a bug in one of my unit tests!

IIRC, the approach I used was to have the uploading facility use a 
transactionless (autocommit) db handle for status recordkeeping ("i've 
added this file", "i've deleted this file"), while the main 
application/daemon used transactions as normal.
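The two-handle idea can be sketched with sqlite3, where `isolation_level=None` gives an autocommit connection (the table name and schema here are made up for illustration):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "status.db")

# autocommit handle: each status note is durable the moment it's written,
# even if a surrounding application transaction later rolls back
status_db = sqlite3.connect(db_path, isolation_level=None)
status_db.execute(
    "CREATE TABLE IF NOT EXISTS upload_status (key TEXT, event TEXT)"
)

def note(key, event):
    # commits immediately -- a crash in the main app can't erase the
    # record that this file now exists (or no longer exists) on s3
    status_db.execute(
        "INSERT INTO upload_status VALUES (?, ?)", (key, event)
    )

# the main application uses an ordinary transactional handle as usual
app_db = sqlite3.connect(db_path)
```

The point is that the status notes survive independently of whatever the main application's transaction does, so you can always reconcile the bucket against the status table.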

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
