Hi Colin & Graham,

I restored a large archive using the --resume-extract option and it worked
beautifully. I ended up writing a throwaway Node.js script to compare the
written files against the metadata in the archive listing, and everything
checked out. Thank you for the great work!
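
In case it's useful to anyone else, the script was roughly along these lines
(a rough sketch from memory, not the exact code I ran: it assumes the usual
tarsnap -tv column layout, a listing saved to listing.txt beforehand, and
only compares sizes, so the parsing may need adjusting):

// check-restore.js -- compare a saved "tarsnap -tv -f <archive> > listing.txt"
// against the files written under the restore directory given as argv[2].
const fs = require('fs');
const path = require('path');

const restoreRoot = process.argv[2] || '.';
const listing = fs.readFileSync('listing.txt', 'utf8');

let problems = 0;
for (const line of listing.split('\n')) {
  if (!line.trim()) continue;
  const fields = line.trim().split(/\s+/);
  // Expected columns: mode, nlink, owner, group, size, month, day, time, path.
  if (fields.length < 9) continue;
  if (!fields[0].startsWith('-')) continue;   // only check regular files
  const expectedSize = parseInt(fields[4], 10);
  const relPath = fields.slice(8).join(' ');  // crude: re-join names with spaces
  try {
    const st = fs.statSync(path.join(restoreRoot, relPath));
    if (st.size !== expectedSize) {
      console.log('SIZE MISMATCH: ' + relPath +
                  ' (archive ' + expectedSize + ', disk ' + st.size + ')');
      problems++;
    }
  } catch (err) {
    console.log('MISSING: ' + relPath);
    problems++;
  }
}
console.log(problems === 0 ? 'All files match.' : problems + ' discrepancies found.');

Invoked with something like:
  tarsnap -tv -f myarchive > listing.txt && node check-restore.js /restore/dir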

Sun, 16 Sep 2018 at 21:34, Fede Pereiro <fpere...@gmail.com>:

> Hi Colin & Percival!
>
> I think I never subscribed to the list -- I nonchalantly assumed that no
> subscription was necessary : ). Just subscribed.
>
> Thank you for the response and the PR. I think I will use that code to
> recompile tarsnap and give it a try. I will also write a JavaScript
> (Node.js) script to compare the list output from tarsnap against the file
> sizes & dates, to detect any discrepancies; hopefully this will catch any
> bugs in the new code. When I am done with the script I will post it on
> GitHub and send you the link.
>
> Cheers!
>
> Sat, 15 Sep 2018 at 15:16, Graham Percival <gperc...@tarsnap.com>:
>
>> Hi Fede,
>>
>> I've just uploaded a PR that adds a --resume-extract option for precisely
>> this case:
>> https://github.com/Tarsnap/tarsnap/pull/327
>>
>> If you're comfortable recompiling Tarsnap, by all means give it a try!
>> There's always the warning that this is relatively untested code right now.
>>
>> Cheers,
>> - Graham
>>
>> On Sat, Sep 15, 2018 at 11:37:41AM -0700, Colin Percival wrote:
>> > Hi Fede,
>> >
>> > I just found this in my spam filter -- it looks like your mailing list
>> > post may have been eaten (are you subscribed to the list with this email
>> > address?) as well.
>> >
>> > Feel free to re-send this to the list, but I think the answer to your
>> > question is (a) yes, you're right that using --keep-newer-files for this
>> > has a problem relating to files which are partially downloaded when an
>> > extract fails, and (b) there's a work in progress for adding a
>> > --resume-extract option which will catch cases like this... I think
>> > Graham (CCed) may have a patch you can try, if you're comfortable
>> > recompiling Tarsnap.
>> >
>> > Colin Percival
>> >
>> > On 9/10/18 8:28 AM, Fede Pereiro wrote:
>> > > Hi everyone!
>> > >
>> > > I'm in the process of restoring (-x) a large archive to my local
>> > > disk. During download, I have experienced network errors that have
>> > > interrupted the download.
>> > >
>> > > If I start the process again, using the *--keep-newer-files* option to
>> > > only download the missing files, the restore will eventually complete.
>> > > My main concern, however, is that a network error can leave a file
>> > > partly downloaded. In this case, when I re-run the command, that file
>> > > won't be re-downloaded and I will be left with an incomplete file.
>> > >
>> > > To avoid this, I would have to manually delete the last file downloaded
>> > > before the network failure and re-enter the command. Another option
>> > > would be to concoct a script that compares the tarsnap -t output with
>> > > that of a recursive ls and spots files with different sizes (then I
>> > > could manually delete them and run the command again).
>> > >
>> > > Before I do either of the above, I ask: is there a more reliable and
>> > > efficient way (both in personal time and in bandwidth cost) to restore
>> > > large backups when using a connection that experiences network failures?
>> > >
>> > > I take this opportunity to thank cperciva and the community for
>> > > creating and maintaining Tarsnap.
>> > >
>> > > Thanks!
>> >
>> > --
>> > Colin Percival
>> > Security Officer Emeritus, FreeBSD | The power to serve
>> > Founder, Tarsnap | www.tarsnap.com | Online backups for the truly paranoid
>>
>
