On 12/10/17 06:14, Gandalf Corvotempesta wrote:
So, let's assume a raid -> drbd -> lvm

starting with a single RAID1, what if I would like to add a second
raid1, converting the existing one to a RAID10? Would drbdadm resize
be enough?
Correct, assuming you can convert the raid1 to raid10. You might need to start with a 2-device RAID10 instead; best to check that procedure now and make sure mdadm will properly support it.
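For example, a rough sketch of the grow path (the device names /dev/md0, /dev/sd[a-d] and the resource name r0 are made up, and raid10 reshape needs a reasonably recent kernel, so test this on scratch hardware first):

    # create the array as a 2-device RAID10 up front (avoids a raid1 -> raid10 conversion)
    mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sda /dev/sdb
    # later: add the two new disks and reshape to 4 devices
    mdadm --add /dev/md0 /dev/sdc /dev/sdd
    mdadm --grow /dev/md0 --raid-devices=4
    # once the reshape completes, let DRBD claim the new space
    drbdadm resize r0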
keeping lvm as the upper layer would be best, I think, because it will
allow me to create logical volumes, snapshots and so on.
You can also do that with raid + lvm + drbd... you just need to create a new drbd resource as you add a new LV, and resize the drbd after you resize the LV.
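Adding a new exported volume then looks roughly like this (a sketch for DRBD 8.4; vg0, lv_new, r1, the node names and addresses are all made up):

    # on both nodes: carve out the backing LV
    lvcreate -L 100G -n lv_new vg0

    # /etc/drbd.d/r1.res on both nodes:
    resource r1 {
        device    /dev/drbd1;
        disk      /dev/vg0/lv_new;
        meta-disk internal;
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }

    # on both nodes: initialise the metadata and bring the resource up
    drbdadm create-md r1
    drbdadm up r1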
what happens if the local raid totally fails? Will the upper layer stay
up thanks to DRBD fetching data from the other node?
If both drives fail on one node, the raid will pass the disk errors up to DRBD, which will mark the local storage as down, and yes, it will read all needed data from the remote node (writes are always sent to the remote node anyway). You would probably want to promote the remote node to primary as quickly as possible, and then work on fixing the storage.
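That detach-and-run-diskless behaviour is governed by the resource's disk section; a minimal sketch (the resource name r0 is made up):

    resource r0 {
        disk {
            on-io-error detach;   # drop the failed backing disk, keep serving from the peer
        }
        # ... rest of the resource definition ...
    }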
is "raid -> drbd -> lvm" a standard configuration or something bad? I
don't want to put in production something "custom" and not supported.
Yes, it is a standard configuration, not some bizarre setup that has never been seen before. You also haven't mentioned the current size of your proposed raid, nor what size you are planning to grow it to.

How to prevent split-brains? Would it be enough to bond the cluster
network? Any qdevice or fencing to configure?
Yes, you will always want multiple network paths between the two nodes, and also fencing. Bonding can be used to improve performance, but you should *also* have an additional network, serial or other connection between the two nodes which is used for fencing.
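On the DRBD side, fencing is wired into the resource config; something along these lines for 8.4 with a Pacemaker cluster (check the drbd.conf man page for your version, as option placement has moved between releases):

    resource r0 {
        disk {
            fencing resource-and-stonith;   # freeze I/O until the peer has been fenced
        }
        handlers {
            # helper scripts shipped with drbd-utils
            fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        # ... rest of the resource definition ...
    }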

Regards,
Adam

2017-10-11 21:07 GMT+02:00 Adam Goryachev <mailingli...@websitemanagers.com.au>:

On 12/10/17 05:10, Gandalf Corvotempesta wrote:
Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.

What I would like to achieve is a simple redundant SAN. (Anything
preconfigured for this?)

Which is best, raid1+drbd+lvm or drbd+raid1+lvm?

Any advantage in creating multiple drbd resources? I think that a
single DRBD resource is better from an administrative point of view.

A simple failover would be enough; I don't need a master-master
configuration.
In my case, the best option was raid + lvm + drbd
It allows me to use the lvm tools to easily resize each exported resource
as required:
lvextend ...
drbdadm resize ...
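In full, growing one exported resource is just two commands (vg0, lv_export and r0 are made-up names; the filesystem inside still needs its own resize step afterwards):

    # on both nodes: grow the backing LV
    lvextend -L +50G /dev/vg0/lv_export
    # on the primary: let DRBD grow into the new space
    drbdadm resize r0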

However, the main reason was to improve drbd "performance": each resource
gets its own set of counters, instead of a single set of counters for one
massive resource.

BTW, how would you configure drbd + raid + lvm ?

If you do DRBD with a raw drive on each machine, and then use raid1 on top
within each local machine, then when your raw drbd drive dies, the second
raid member will no longer contain or participate in DRBD, so the whole node
is failed. This only adds DR ability to recover the user data. I would
suggest this configuration should not be considered at all (unless I'm
awake too early and am overlooking something).

Actually, assuming machine1 with disk1 + disk2, and machine2 with disk3 +
disk4, I guess you could set up drbd1 between disk1 + disk3, and a drbd2
between disk2 + disk4, then create raid on machine1 with drbd1 + drbd2 and
raid on machine2 with drbd1 + drbd2, and then use the raid device for lvm.
You would need double the write bandwidth between the two machines. When
machine1 is primary and a write arrives for the LV, it is sent to the raid,
which sends the write to both drbd1 and drbd2. Locally they are written to
disk1 + disk2, but those 2 x writes also need to be sent over the network to
machine2, so they can be written to disk3 (drbd1) and disk4 (drbd2). Still
not a sensible option IMHO.

The two valid options would be raid + drbd + lvm or raid + lvm + drbd (or
just lvm + drbd if you use lvm to handle the raid as well).
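For that last variant, lvm can provide the mirroring itself; a minimal sketch (vg0, lv_r0 and the PV names are made up):

    # mirror at the LV level instead of using mdadm
    pvcreate /dev/sda /dev/sdb
    vgcreate vg0 /dev/sda /dev/sdb
    lvcreate --type raid1 -m 1 -L 100G -n lv_r0 vg0
    # then point the drbd resource's backing disk at /dev/vg0/lv_r0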

Regards,
Adam
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
