On Saturday, 8 September 2012, Marc MERLIN wrote:
> I read the discussions on hardlinks, and saw that there was a proposed
> patch (although I'm not sure if it's due in 3.6 or not, or whether I
> can apply it to my 3.5.3 tree).
> 
> I was migrating a backup disk to a new btrfs disk, and the backup had a
> lot of hardlinks to collapse identical files to cut down on inode
> count and disk space.
> 
> Then, I started seeing:
[…]
> Has someone come up with a cool way to work around the too many link
> error and only when that happens, turn the hardlink into a file copy
> instead? (that is when copying an entire tree with millions of files).

What about:

- copy the first backup version into a subvolume
- btrfs subvol snapshot first next
- copy the next backup version into that snapshot
- btrfs subvol snapshot previous next, and so on

I have been using this scheme for my backups for quite a while, except that 
I do the backup first and then create a read-only snapshot of it. Old 
snapshots get removed after some time.

Works like a charm and is easily scriptable.
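A minimal sketch of what such a script could look like (the pool path, the
rsync source, the date-based naming and the rsync flags are my assumptions,
not part of the scheme above):

  #!/bin/sh
  # Incremental backups via btrfs snapshots instead of hardlinks.
  set -e

  POOL=/mnt/backup        # btrfs filesystem holding the backups (assumed)
  SRC=/home/              # what to back up (assumed)
  TODAY=$(date +%Y-%m-%d)
  PREV=$(ls -1d "$POOL"/20* 2>/dev/null | tail -n 1)

  if [ -z "$PREV" ]; then
      # First run: create a fresh subvolume and copy everything into it.
      btrfs subvolume create "$POOL/$TODAY"
  else
      # Later runs: snapshot the previous backup, then copy on top of it.
      btrfs subvolume snapshot "$PREV" "$POOL/$TODAY"
  fi

  rsync -a --delete --inplace --no-whole-file "$SRC" "$POOL/$TODAY/"

  # Optionally make the finished backup read-only:
  # btrfs property set -ts "$POOL/$TODAY" ro true

Using rsync with --inplace/--no-whole-file means unchanged blocks stay
shared with the previous snapshot via CoW, which is what replaces the
hardlink trick.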

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7