On Monday, 27 September 2021 02:39:19 BST Adam Carter wrote:
> On Sun, Sep 26, 2021 at 8:57 PM Peter Humphrey <pe...@prh.myzen.co.uk> wrote:
> > Hello list,
> > 
> > I have an external USB-3 drive with various system backups. There are 350
> > .tar files (not .tar.gz etc.), amounting to 2.5TB. I was sure I wouldn't
> > need to compress them, so I didn't, but now I think I'm going to have to.
> > Is there a reasonably efficient way to do this?
> 
> find <mountpoint> -name \*.tar -exec zstd -TN {} \;
> 
> Where N is the number of cores you want to allocate. zstd -T0 (or just
> zstdmt) if you want to use all the available cores. I use zstd for
> everything now as it's as good as or better than all the others in the
> general case.
> 
> Parallel means it uses more than one core, so on a modern machine it is
> much faster.

Thanks to all who've helped. I can't help feeling, though, that the main 
bottleneck has been missed: I have to read and write on a USB-3 drive. It's 
just taken 23 minutes to copy the current system backup from USB-3 to a SATA 
SSD: 108GB in 8 .tar files, which works out at roughly 80 MB/s.

Perhaps I have things out of proportion.
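
Before blaming the drive entirely, a rough sequential-read figure would 
settle it. Something like this should do (the device node /dev/sdX is just a 
placeholder for wherever the USB drive appears, and dropping the cache needs 
root):

sync
echo 3 > /proc/sys/vm/drop_caches   # flush the page cache so we really read the disk
dd if=/dev/sdX of=/dev/null bs=1M count=4096 status=progress

If that reports much more than the ~80 MB/s I'm seeing, the bottleneck is 
somewhere other than the USB link.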

-- 
Regards,
Peter.
