On 3/17/23 19:25, Gregory Seidman wrote:
On Fri, Mar 17, 2023 at 06:05:27PM -0700, David Christensen wrote:
On 3/17/23 12:36, Gregory Seidman wrote:
[...]
This thread has piqued my interest, because I have been lax about doing proper
backups. I currently run a RAID1 mirror across three disks (plus a hot
spare). On top of that is LUKS, and on top of that is LVM. I keep meaning
to manually fail a disk and then store it in a safe deposit box or something as
a backup, but I have not gotten around to it.

It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID as
an additional mirror would be a way to produce the off-site backup I want
(and LUKS means I am not concerned about encryption in transit). It also
sounds like you're saying this is not a good backup approach. Ignoring
cost, what am I missing?

--Gregory

I would not consider using a cloud device as a RAID member -- that sounds
both slow and brittle.  Live data needs to be on local hardware.
[...]

Thinking about it more, that makes sense. Maybe the right approach is to
split the difference. I can manually fail a mirror, dd it over to an iSCSI
target, then re-add it.
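
Roughly, I imagine, something like this (device names are placeholders):

    # mark one mirror member failed and pull it out of the array
    mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

    # copy the member wholesale onto the iSCSI-backed block device
    dd if=/dev/sdc1 of=/dev/sdX bs=1M status=progress

    # return the local disk to the array; md resyncs it
    mdadm /dev/md0 --add /dev/sdc1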


If you are serious about iSCSI, I suggest evaluating it:

1. Build a RAID1 using two local disks. Benchmark it and run it through various failure-recovery use-cases.

2. Add an iSCSI volume on another host in the LAN; repeat the benchmarks and the failure-recovery use-cases.

3. Add an iSCSI volume in the cloud; repeat both again.

I would be interested in reading the results.
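
A rough sketch of step 1 and the later grow, assuming two spare disks at /dev/sdb and /dev/sdc plus an iSCSI device at /dev/sdX (all names illustrative; this wipes those disks):

    # build the two-disk RAID1 baseline
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # crude sequential-write benchmark; use fio for serious numbers
    dd if=/dev/zero of=/dev/md0 bs=1M count=4096 oflag=direct status=progress

    # later, grow the mirror onto the iSCSI-backed device and re-test
    mdadm /dev/md0 --add /dev/sdX
    mdadm --grow /dev/md0 --raid-devices=3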


On 3/17/23 13:52, Dan Ritter wrote:
Three different things:

resiliency in the face of storage failure: RAID.

And what I'm really trying to achieve is resiliency in the face of all the
drives failing, e.g. due to a fire or other catastrophe.


I assume you mean "all the drives failing in one computer".


I assume a cloud iSCSI volume is itself on RAID, so you should only need one such member (unless you are worried about the vendor).


restoration of files that were recently deleted: snapshots.

I don't have automated LVM snapshotting set up, but I could and probably
should. That would cover that use case.
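
For my own reference, a one-off snapshot would look something like this, assuming a volume group vg0 and a logical volume home (names illustrative):

    # create a 10 GiB copy-on-write snapshot of vg0/home
    lvcreate --size 10G --snapshot --name home-snap /dev/vg0/home

    # mount it read-only to fish out a recently deleted file
    mkdir -p /mnt/snap
    mount -o ro /dev/vg0/home-snap /mnt/snap

    # drop the snapshot when done
    umount /mnt/snap
    lvremove /dev/vg0/home-snap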


Searching the web, I see that LVM snapshots differ from ZFS snapshots:

1.  ZFS snapshots are read-only.

2. All of the snapshots for a given ZFS filesystem are automatically mounted in a hidden, known subdirectory under the filesystem mount point -- .zfs/snapshot. This makes it easy to retrieve, compare, and restore prior copies of files and directories using standard userland tools.

3. A ZFS dataset (filesystem or volume) can be rolled back to a prior snapshot, discarding all changes made to the dataset and destroying any intermediate snapshots, bookmarks, and/or clones.
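
To illustrate all three points, assuming a pool named tank with a filesystem tank/home mounted at /tank/home (names illustrative):

    # 1. take a read-only snapshot
    zfs snapshot tank/home@2023-03-17

    # 2. browse it with ordinary userland tools
    ls /tank/home/.zfs/snapshot/2023-03-17/

    # 3. roll back, discarding later changes
    #    (-r also destroys any intervening snapshots)
    zfs rollback -r tank/home@2023-03-17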


complete restoration of a filesystem: backup.

This can be achieved with the same off-site full-disk backup.


I would think recovery of a RAID1 with an off-site iSCSI member would involve building a new RAID1 based upon that off-site iSCSI member (?).
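
I would guess something like this once the iSCSI device is attached to a replacement host (device names illustrative; untested by me):

    # start a degraded array from the surviving iSCSI member
    mdadm --assemble --run /dev/md0 /dev/sdX

    # add fresh local disks and let md resync onto them
    mdadm /dev/md0 --add /dev/sdb /dev/sdc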


(and technically, a fourth: complete restoration of points in
time: archives).

That isn't a use case I've considered, and I don't think it's a use case I
have.


Some people have to deal with "audit" and "discovery".


* Check-summing filesystems (I prefer ZFS-on-Linux).
[...]
With four disks, the OP could use two in a ZFS mirror for live data, use
zfs-auto-snapshot for user-friendly recovery, and use the other two
individually as on-site and off-site backup media.

I do like the checksumming ZFS offers. The main reason I haven't switched
to ZFS, aside from already having a working setup with RAID/LUKS/LVM and
not wanting to fix what isn't broken, is that ZFS encryption is per dataset
rather than pool-wide. That means I would either need to create an
encrypted ZFS dataset for each of my existing LVM filesystems, multiplying
the hassle of unlocking them all, or create a single encrypted ZFS volume
(zvol) and put LVM on top of it. Is there a better way?

David
--Gregory


If you use ZFS, you will not need mdadm, LVM, ext4, etc.


I use old-school ZFS-on-Linux (ZOL), which does not have built-in encryption. So, I encrypt each partition below ZFS. A pool with many partitions could multiply the CPU cryptographic workload. I make sure to buy processors with AES-NI.
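
For example, with a two-disk mirror (device names illustrative):

    # encrypt each partition, then open the mappings
    cryptsetup luksFormat /dev/sda2
    cryptsetup luksFormat /dev/sdb2
    cryptsetup open /dev/sda2 crypt0
    cryptsetup open /dev/sdb2 crypt1

    # build the pool on the decrypted mapper devices
    zpool create tank mirror /dev/mapper/crypt0 /dev/mapper/crypt1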


The Debian stable zfs-dkms package is the newer OpenZFS, which does have built-in (native) encryption. I do not know how its cryptographic efficiency compares.
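
The native form looks something like this (untested by me; names illustrative). Child datasets inherit the encryption root's key, so a single unlock covers the whole subtree -- which may address your concern about unlocking many filesystems:

    # create an encrypted parent dataset; children inherit the key
    zfs create -o encryption=on -o keyformat=passphrase \
        -o keylocation=prompt tank/secure
    zfs create tank/secure/home    # encrypted with the same key

    # after reboot, one command unlocks the whole subtree
    zfs load-key tank/secure
    zfs mount -a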


David
