Hi.

On Fri, Mar 17, 2023 at 11:09:09AM +0100, Nicolas George wrote:
> Is this possible: ?

Actually, there are at least three ways of doing it:

- DRBD
- MDADM + iSCSI
- zpool attach/detach

But DRBD was designed with continuous replication in mind, and ZFS
has severe processor architecture restrictions and somewhat unusual
design decisions for its filesystem storage.
So let's stick with MDADM + iSCSI for now.
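(For the curious, the ZFS variant is a one-liner in each direction;
the pool and device names below are just placeholders:

zpool attach tank /dev/local_dev /dev/remote_dev    # start mirroring
zpool detach tank /dev/remote_dev                   # split it again

But again, not the route taken here.)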


> What I want to do:
> 
> 1. Stop programs and umount /dev/something
> 
> 2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
>   --metadata-file /data/raid_something /dev/something

a) Replace that with:

mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
        --metadata=1.0 /dev/local_dev


--metadata=1.0 is highly important here, as it's one of the few mdadm
metadata formats that keeps said metadata at the end of the device.
That way the filesystem keeps its usual offsets, which is exactly what
lets you go back to using the device directly later.
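A quick way to check the result, and to revert (the mountpoint is just
an example):

mdadm --examine /dev/local_dev | grep Version   # should report 1.0
mdadm --stop /dev/md0                           # when going back
mount /dev/local_dev /mnt                       # device usable as-is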

b) Nothing forbids you from running a degraded RAID1 all the time.
That saves you the unmounting and remounting.


> → Now I have everything running again completely normally after a very
> short service interruption. But behind the scenes file operations go
> through /dev/md0 before reaching /dev/something. If I want to go back, I
> de-configure /dev/md0 and can start using /dev/something directly again.
> 
> 4. mdadm --add /dev/md0 remote:/dev/something && mdadm --grow /dev/md0 
> --raid-devices=2

And "remote:/dev/something" is merely "iscsiadm --mode node --targetname
xxx --portal remote --login".
Then add resulting block device as planned.
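Putting it together (the IQN and /dev/sdX below are placeholders; the
actual names depend on your target configuration):

iscsiadm --mode discovery --type sendtargets --portal remote
iscsiadm --mode node --targetname iqn.2023-03.com.example:something \
        --portal remote --login
# the target appears as a new SCSI disk, e.g. /dev/sdX
mdadm /dev/md0 --add /dev/sdX
mdadm --grow /dev/md0 --raid-devices=2
cat /proc/mdstat        # watch the resync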


That assumes that "remote" runs a configured iSCSI target ("tgt" in
current stable is perfectly fine for that), "local" can reach "remote"
via tcp/3260, and you do not care about encrypting the data in
transit.
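A minimal tgt target definition, e.g. in /etc/tgt/conf.d/something.conf
(the IQN is just an example; reload the tgt service afterwards):

<target iqn.2023-03.com.example:something>
        backing-store /dev/something
</target>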

Reco
