On 12/8/16 10:11 AM, Swâmi Petaramesh wrote:
> Hi,
>
> Some real world figures about running duperemove deduplication on BTRFS:
>
> I have an external 2.5", 5400 RPM, 1 TB HD, USB3, on which I store the
> BTRFS backups (full rsync) of 5 PCs, using 2 different distros,
> typically at the same update level, and all of them more or less sharing
> the entirety or part of the same set of user files.
>
> For each of these PCs I keep a series of 4-5 BTRFS subvolume snapshots,
> so I have complete backups at different points in time.
>
> The HD was 93% full and made a good testbed for deduplicating.
>
> So I ran duperemove on this HD, on a machine doing "only this", using a
> hashfile. The machine is an Intel i5 with 6 GB of RAM.
>
> Well, the damn thing had been running for 15 days uninterrupted!
> ...Until I [Ctrl]-C'd it this morning because I had to move with the
> machine (I wasn't expecting it to last THAT long...).
>
> It took about 48 hours just to calculate the file hashes.
>
> Then it took another 48 hours just for "loading the hashes of duplicate
> extents".
>
> Then it spent 11 days deduplicating until I killed it.
>
> At the end, the disk that was 93% full is now 76% full, so I saved 17%
> of 1 TB (170 GB) by deduplicating for 15 days.
>
> Well, the thing "works" and my disk isn't full anymore, so that's a very
> partial success, but I still wonder if the gain is worth the effort...
What version were you using? I know Mark had put a bunch of effort into reducing the memory footprint and runtime. The earlier versions were "can we get this thing working," while the newer versions are more efficient.

What throughput are you getting to that disk? I get that it's USB3, but reading 1 TB doesn't take a terribly long time, so 15 days is pretty ridiculous.

At any rate, the good news is that when you run it again, assuming you used the hash file, it will not have to rescan most of your data set.

-Jeff

-- 
Jeff Mahoney
SUSE Labs
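As a rough sanity check on the "15 days is ridiculous" point (the numbers below are not from the thread): assuming roughly 100 MB/s of sequential throughput for a 5400 RPM USB3 external drive, reading the used portion of a 1 TB disk once should take on the order of a few hours, not weeks. A minimal sketch of the arithmetic, with the throughput figure as an assumption:

```python
# Back-of-envelope estimate: raw scan time of the used data vs. the reported
# 15-day duperemove run. The ~100 MB/s sequential throughput is an assumed
# figure for a 5400 RPM USB3 drive, not a number measured in the thread.

DISK_BYTES = 1_000_000_000_000           # 1 TB drive
USED_FRACTION = 0.93                     # disk was 93% full
ASSUMED_THROUGHPUT = 100_000_000         # ~100 MB/s sequential read (assumption)

used_bytes = DISK_BYTES * USED_FRACTION

# Time to read the used data once at the assumed sequential rate.
scan_seconds = used_bytes / ASSUMED_THROUGHPUT
print(f"Single sequential pass over the used data: ~{scan_seconds / 3600:.1f} hours")

# Effective average throughput implied by the 15-day run, if the data set
# were read exactly once (in practice it is touched more than once: once
# for hashing and again during the dedupe ioctls, but this sets the scale).
run_seconds = 15 * 24 * 3600
print(f"Implied average throughput over 15 days: "
      f"~{used_bytes / run_seconds / 1_000_000:.2f} MB/s")
```

That works out to roughly 2.6 hours for one pass, versus an effective average of well under 1 MB/s over the 15-day run, which is why the wall-clock time points at something other than raw disk bandwidth. And, as noted above, rerunning with the same hash file should skip rehashing files that haven't changed, so a second pass over the same data set should be considerably faster.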