On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> > [...]
>
> Taking snapshots is fast and easy. The challenge is deciding when to
> destroy them.
That seems like an easy decision: just keep as many as you can and
destroy the ones you can't keep.

> [...]
> > > Without deduplication or compression, my backup set and 78
> > > snapshots would require 3.5 TiB of storage. With deduplication
> > > and compression, they require 86 GiB of storage.
> >
> > Wow, that's quite a difference! What makes this difference, the
> > compression or the deduplication?
>
> Deduplication.

Hmm, that means that deduplication shrinks your data down to about 1/40
of its size. That's an awesome rate.

> > When you have snapshots, you would store only the differences from
> > one snapshot to the next, and that would mean that there aren't so
> > many duplicates that could be deduplicated.
>
> I do not know -- I have not crawled the ZFS code; I just use it.

Well, it's like a miracle :)

> > > Users can recover their own files without needing help from a
> > > system administrator.
> >
> > You have users who know how to get files out of snapshots?
>
> Not really; but the feature is there.

That means you're still the one to get the files.

> [...]
> > > What were the makes and models of the 6 disks? Of the SSD's? If
> > > you have a 'zpool status' console session from then, please post
> > > it.
> >
> > They were (and still are) 6x4TB WD Red (though one or two have
> > failed over time) and two Samsung 850 PRO, IIRC. I don't have an
> > old session anymore.
> >
> > These WD Red are slow to begin with. IIRC, both SSDs failed and I
> > removed them.
> >
> > The other instance didn't use SSDs but 6x2TB HGST Ultrastar. Those
> > aren't exactly slow, but ZFS is slow.
>
> Those HDD's should be fine with ZFS; but those SSD's are desktop
> drives, not cache devices. That said, I am making the same mistake
> with Intel SSD 520 Series. I have considered switching to one Intel
> Optane Memory Series and a PCIe 4x adapter card in each server.

Isn't that very expensive, and doesn't it wear out just as well?
Wouldn't it be better to have the cache in RAM?

> Please run and post the relevant command for LVM, btrfs, whatever.

Well, what would that tell you?

> [...]
> > > What is the make and model of your controller cards?
> >
> > They're HP Smart Array P410. FreeBSD doesn't seem to support those.
>
> I use the LSI 9207-8i with "IT Mode" firmware (e.g. host bus adapter,
> not RAID):

Well, I couldn't get those when I wanted them. Since I didn't plan on
using ZFS, the P410s have to do.

> [...]
> > ... the data to back up is mostly (or even all) on btrfs. ... copy
> > the files over with rsync. ... the data comes from different
> > machines and all backs up to one volume.
>
> I suggest creating a ZFS pool with a mirror vdev of two HDD's.

That would be way too small.

> If you can get past your dislike of SSD's,

I don't dislike them. I'm using them where they give me advantages, and
I don't use them where they would give me disadvantages.

> add a mirror of two SSD's as a dedicated dedup vdev. (These will not
> see the hard usage that cache devices get.)

I think I have 2x80GB SSDs that are currently not in use.

> Create a filesystem 'backup'. Create child filesystems, one for each
> host. Create grandchild filesystems, one for the root filesystem on
> each host.

Huh? What's with these relationships?

> Set up daily rsync backups of the root filesystems on the various
> hosts to the ZFS grandchild filesystems. Set up zfs-auto-snapshot to
> take daily snapshots of everything, and retain 10 snapshots. Then
> watch what happens.

What do you expect to happen?

I'm thinking about changing my backup server ...

In any case, I need to do more homework first.
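For my own notes, here is roughly what that suggested setup would look
like as commands, together with a naive "keep as many as you can"
pruning rule. The pool, device, and host names (tank, /dev/ada*,
myhost) are made up, and the prune_list helper is my own sketch, not
part of ZFS or zfs-auto-snapshot:

```shell
#!/bin/sh
# Hypothetical layout per David's suggestion (names are assumptions):
#
#   zpool create tank mirror /dev/ada0 /dev/ada1      # two HDDs
#   zpool add tank dedup mirror /dev/ada2 /dev/ada3   # two small SSDs
#   zfs create tank/backup                            # 'backup'
#   zfs create tank/backup/myhost                     # child: one per host
#   zfs create tank/backup/myhost/root                # grandchild: root fs
#
# "Keep as many as you can": read snapshot names on stdin, oldest
# first (the order "zfs list -H -t snapshot -o name -s creation"
# prints them in), and print those beyond the newest $1 so they can
# be destroyed.
prune_list() {
    keep=$1
    snaps=$(cat)
    total=$(printf '%s\n' "$snaps" | wc -l)
    excess=$((total - keep))
    if [ "$excess" -gt 0 ]; then
        printf '%s\n' "$snaps" | head -n "$excess"
    fi
}

# Real use would then be something like:
#   zfs list -H -t snapshot -o name -s creation tank/backup/myhost/root \
#       | prune_list 10 | xargs -n1 zfs destroy
```

The destroy step is kept separate from the selection step on purpose,
so the candidate list can be reviewed before anything is deleted.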