Yeah, that’s got plenty of promise looking at it a second time. What turned me 
off from using it was the combination of not having a single-shot backup mode 
and appearing to run only in inotify mode, which would upload every compaction.

Reading through its source code to come up with an actual answer to your 
question, it looks like I can put in a very small patch that makes a 
single-shot pass of backups without the notify loop, and then run it again in 
parallel as a separate task with an include regex of backups/(.*?!-tmp).

So, yes, I suggest using tablesnap, and I’ll spend the time I would have put 
into enhancing my own code on testing it, putting up a minor diff for a 
single-shot flag, and writing some documentation / examples for snapshot and 
backup directories.

-Jeff 

> On Oct 12, 2015, at 2:30 PM, Robert Coli <rc...@eventbrite.com> wrote:
> 
> On Mon, Oct 12, 2015 at 9:41 AM, Jeff Ferland <j...@tubularlabs.com> wrote:
> I have a semi-hacky Python script I’ve written up. It needs refining for 
> public use, but I’ll put it in Github later today and send you a link as I 
> work on it. It uses boto to do concurrent multi-part uploads to S3 with retry 
> and resume recording function if it gets interrupted while uploading that 
> super huge file.
> 
> Or you could use tablesnap which has this basic design and has existed and 
> been maintained and extended with additional tools over the last few years?
> 
> https://github.com/JeremyGrosser/tablesnap
> 
> =Rob
> 
