Re: [DRBD-user] Some info

2017-10-12 Thread Gandalf Corvotempesta
2017-10-12 10:17 GMT+02:00 Robert Altnoeder :
> While it is not "bad", it limits the system to an active-passive cluster
> configuration, because all logical volumes must be active on the same node.
> The standard setup that we teach in our trainings, that we commonly
> install and use ourselves and that all of our automated provisioning
> software uses is storage -> LVM -> DRBD, or storage -> ZFS -> DRBD, as
> this allows the configuration of multiple independent resources, where
> each resource can be active on any of the nodes, thereby also allowing
> the creation of active-active clusters.

OK, but I only have to build a simple NFS server; what advantage would I get from
creating multiple resources? I would still be forced into an
active-passive cluster, as
only one NFS server can be active at the same time.
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-12 Thread Robert Altnoeder
On 10/11/2017 09:14 PM, Gandalf Corvotempesta wrote:
> is "raid -> drbd -> lvm" a standard configuration or something bad? I
> don't want to put in production something "custom" and not supported.
While it is not "bad", it limits the system to an active-passive cluster
configuration, because all logical volumes must be active on the same node.
The standard setup that we teach in our trainings, that we commonly
install and use ourselves and that all of our automated provisioning
software uses is storage -> LVM -> DRBD, or storage -> ZFS -> DRBD, as
this allows the configuration of multiple independent resources, where
each resource can be active on any of the nodes, thereby also allowing
the creation of active-active clusters.
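
For illustration, a minimal sketch of two such independent resources (hostnames,
addresses and VG/LV names are made up; each resource has its own backing LV and
its own TCP port, so r0 can be Primary on one node while r1 is Primary on the
other):

    resource r0 {
      on nodea {
        device    /dev/drbd0;
        disk      /dev/vg0/lv_r0;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on nodeb {
        device    /dev/drbd0;
        disk      /dev/vg0/lv_r0;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    resource r1 {
      # same structure, but backed by /dev/vg0/lv_r1 and using port 7789
    }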

> How to prevent splitbrains ? Would be enough to bond the cluster
> network ? Any qdevice or fencing to configure ?
Fencing
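
In practice, for a Pacemaker-managed DRBD 8 setup, that means something along
these lines in the resource configuration (a sketch only; the handler scripts
are the ones shipped with the DRBD utilities):

    resource r0 {
      disk {
        fencing resource-only;   # or resource-and-stonith with real STONITH devices
      }
      handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }

plus proper fencing/STONITH agents configured in the cluster manager itself.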

br,
-- 
Robert Altnoeder
+43 1 817 82 92 0
robert.altnoe...@linbit.com

LINBIT | Keeping The Digital World Running
DRBD - Corosync - Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Adam Goryachev

On 12/10/17 06:52, Gandalf Corvotempesta wrote:

2017-10-11 21:22 GMT+02:00 Adam Goryachev :

You can also do that with raid + lvm + drbd... you just need to create a new
drbd as you add a new LV, and also resize the drbd after you resize the LV.

I prefer to keep DRBD to a minimum. I'm much more familiar with LVM.
If it's not an issue, I prefer to keep the number of DRBD resources to a bare minimum.

Except that you should become familiar with DRBD so that when something 
goes wrong, you will be better placed to fix it. If you set it up once 
and don't touch it for three years, then when it breaks you will have no 
idea what to do or even where to start. You will probably have 
forgotten how it was configured and how it is supposed to work.

If both drives fail on one node, then raid will pass the disk errors up to
DRBD, which will mark the local storage as down, and yes, it will read all
needed data from remote node (writes are always sent to the remote node).
You would probably want to migrate the remote node to primary as quickly as
possible, and then work on fixing the storage.

Why should I migrate the remote node to primary? Any advantage?
Yes, it avoids reads going over the network, reducing latency and 
increasing throughput (depending on the bandwidth between nodes). I guess 
this is not a MUST, just an easy optimisation.
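
In practice the switch is roughly (resource and node names are only an example;
with Pacemaker you would move the resource group rather than doing it by hand):

    # on the node with the failed storage, after stopping/relocating services:
    drbdadm secondary r0
    # on the surviving node:
    drbdadm primary r0

After that, reads are served from healthy local disks again.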



Yes, it is not some bizarre configuration that has never been seen before.
You also haven't mentioned the size of your proposed raid, nor what size you
are planning on growing it to?

Currently, I'm planning to start with 2TB disks. I don't expect to go
over 10-12TB.

That is significant growth. I would advise planning now how you will 
achieve it. For example, create a 200GB array with DRBD + 
LVM etc., then try to grow the array (add extra 200GB partitions to the 
drive) and make sure everything works as expected. It is a good idea to 
document the process while you are doing this, so that when you need it 
you have a very good idea of how to proceed. (You should still re-test 
it at that time in case tools have changed etc.)
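
Something like the following makes a cheap rehearsal possible (sizes, names and
the exact layering are only an example and should mirror whatever you choose
for production):

    # throw-away test array on loop devices
    truncate -s 200G disk0.img disk1.img
    losetup /dev/loop0 disk0.img
    losetup /dev/loop1 disk1.img
    mdadm --create /dev/md50 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
    # ...put DRBD and LVM on top exactly as planned, then practise the grow:
    mdadm --grow /dev/md50 ...    # whatever conversion/extension you intend to use
    drbdadm resize r_test         # DRBD takes up the enlarged backing device
    pvresize /dev/drbd0           # and the LVM layer on top follows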


One thing you have ignored is that DRBD will behave differently with a 
single resource as opposed to multiple resources. For me, the difference 
was enough to turn a horrible solution into a viable one (for the end 
users, performance was terrible with the single resource and acceptable 
with multiple; other changes were also made to turn it into something 
highly useful).

Yes, you will always want multiple network paths between the two nodes, and
also fencing. Bonding can be used to improve performance, but you should
*also* have an additional network, serial, or other connection between the
two nodes which is used for fencing.

Ok.

Any "bare-metal" distribution with DRBD or detailed guide on how to
implement HA?
Something like FreeNAS, or similiar.


No, I just use Debian and then configure things as required; for me that 
is the best way to become familiar with the system and be prepared for 
when things break. I would also strongly advise reading the very good 
documentation; try to read through all of it at least once. (Another 
thank you to LINBIT for this documentation!)


Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Adam Goryachev

On 12/10/17 07:55, Gandalf Corvotempesta wrote:

For this project I'll use v8
As I would like to use just one big resource, I don't think v9 would 
be able to rebalance a single resource across 4 or 5 nodes


v9 would allow for a 3-node mirror, which improves redundancy and 
resiliency, and I assume makes split-brain avoidance much simpler and 
STONITH of the right node more reliable.


For me, I mainly still use v8 in production.

Regards,
Adam





--
Adam Goryachev Website Managers www.websitemanagers.com.au
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Igor Cicimov
On 12 Oct 2017 5:10 am, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:

Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.

What I would like to achieve is a simple redundant SAN. (Is anything
preconfigured for this?)

Which is best, raid1+drbd+lvm or drbd+raid1+lvm?

Any advantage in creating multiple DRBD resources? I think that a
single DRBD resource is better from an administrative point of view.

A simple failover would be enough, I don't need master-master


In that case you might go with raid -> lvm -> drbd -> lvm to benefit from
LVM at both layers.
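
Roughly like this (all device, VG and LV names are just placeholders):

    mdadm RAID1   ->  /dev/md0
    lower LVM     ->  pvcreate /dev/md0 ; vgcreate vg_lower /dev/md0
                      lvcreate -L 2T -n lv_drbd vg_lower   (resizable DRBD backing device)
    DRBD          ->  disk /dev/vg_lower/lv_drbd, device /dev/drbd0
    upper LVM     ->  pvcreate /dev/drbd0 ; vgcreate vg_upper /dev/drbd0
                      (the LVs you actually export and snapshot)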
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Gandalf Corvotempesta
For this project I'll use v8
As I would like to use just one big resource, I don't think v9 would be
able to rebalance a single resource across 4 or 5 nodes

On 11 Oct 2017 at 10:48 PM, "Yannis Milios"  wrote:

> Are you planning to use DRBD8 or DRBD9?
>
> DRBD8 is limited to 2 nodes (max 3).
> DRBD9 can scale to multiple nodes.
>
> For DRBD8 the most common setup is RAID -> DRBD -> LVM  or  RAID -> LVM ->
> DRBD
> Its management is much easier than DRBD9's.
>
> The most common DRBD9 setups are RAID -> LVM (thin or thick) -> DRBD  or
>  HDD  ->  ZFS (thin or thick)  ->  DRBD.
> Complicated management...
>
> On Wed, 11 Oct 2017 at 20:52, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> 2017-10-11 21:22 GMT+02:00 Adam Goryachev:
>> > You can also do that with raid + lvm + drbd... you just need to create
>> a new
>> > drbd as you add a new LV, and also resize the drbd after you resize the
>> LV.
>>
>> I prefer to keep DRBD to a minimum. I'm much more familiar with LVM.
>> If it's not an issue, I prefer to keep the number of DRBD resources to a bare
>> minimum.
>>
>> > If both drives fail on one node, then raid will pass the disk errors up
>> to
>> > DRBD, which will mark the local storage as down, and yes, it will read
>> all
>> > needed data from remote node (writes are always sent to the remote
>> node).
>> > You would probably want to migrate the remote node to primary as
>> quickly as
>> > possible, and then work on fixing the storage.
>>
>> Why should I migrate the remote node to primary? Any advantage?
>>
>> > Yes, it is not some bizarre configuration that has never been seen
>> before.
>> > You also haven't mentioned the size of your proposed raid, nor what
>> size you
>> > are planning on growing it to?
>>
>> Currently, I'm planning to start with 2TB disks. I don't expect to go
>> over 10-12TB
>>
>> > Yes, you will always want multiple network paths between the two nodes,
>> and
>> > also fencing. bonding can be used to improve performance, but you should
>> > *also* have an additional network or serial or other connection between
>> the
>> > two nodes which is used for fencing.
>>
>> Ok.
>>
>> Any "bare-metal" distribution with DRBD or detailed guide on how to
>> implement HA?
>> Something like FreeNAS, or similar.
>> ___
>> drbd-user mailing list
>> drbd-user@lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
> --
> Sent from Gmail Mobile
>
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Yannis Milios
Are you planning to use DRBD8 or DRBD9?

DRBD8 is limited to 2 nodes (max 3).
DRBD9 can scale to multiple nodes.

For DRBD8 the most common setup is RAID -> DRBD -> LVM  or  RAID -> LVM ->
DRBD
Its management is much easier than DRBD9's.

The most common DRBD9 setups are RAID -> LVM (thin or thick) -> DRBD  or
 HDD  ->  ZFS (thin or thick)  ->  DRBD.
Complicated management...

On Wed, 11 Oct 2017 at 20:52, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-10-11 21:22 GMT+02:00 Adam Goryachev <
> mailingli...@websitemanagers.com.au>:
> > You can also do that with raid + lvm + drbd... you just need to create a
> new
> > drbd as you add a new LV, and also resize the drbd after you resize the
> LV.
>
> I prefer to keep DRBD to a minimum. I'm much more familiar with LVM.
> If it's not an issue, I prefer to keep the number of DRBD resources to a bare
> minimum.
>
> > If both drives fail on one node, then raid will pass the disk errors up
> to
> > DRBD, which will mark the local storage as down, and yes, it will read
> all
> > needed data from remote node (writes are always sent to the remote node).
> > You would probably want to migrate the remote node to primary as quickly
> as
> > possible, and then work on fixing the storage.
>
> Why should I migrate the remote node to primary? Any advantage?
>
> > Yes, it is not some bizarre configuration that has never been seen
> before.
> > You also haven't mentioned the size of your proposed raid, nor what size
> you
> > are planning on growing it to?
>
> Currently, I'm planning to start with 2TB disks. I don't expect to go
> over 10-12TB
>
> > Yes, you will always want multiple network paths between the two nodes,
> and
> > also fencing. bonding can be used to improve performance, but you should
> > *also* have an additional network or serial or other connection between
> the
> > two nodes which is used for fencing.
>
> Ok.
>
> Any "bare-metal" distribution with DRBD or detailed guide on how to
> implement HA?
> Something like FreeNAS, or similar.
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
-- 
Sent from Gmail Mobile
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Gandalf Corvotempesta
2017-10-11 21:22 GMT+02:00 Adam Goryachev :
> You can also do that with raid + lvm + drbd... you just need to create a new
> drbd as you add a new LV, and also resize the drbd after you resize the LV.

I prefer to keep DRBD to a minimum. I'm much more familiar with LVM.
If it's not an issue, I prefer to keep the number of DRBD resources to a bare minimum.

> If both drives fail on one node, then raid will pass the disk errors up to
> DRBD, which will mark the local storage as down, and yes, it will read all
> needed data from remote node (writes are always sent to the remote node).
> You would probably want to migrate the remote node to primary as quickly as
> possible, and then work on fixing the storage.

Why should I migrate the remote node to primary? Any advantage?

> Yes, it is not some bizarre configuration that has never been seen before.
> You also haven't mentioned the size of your proposed raid, nor what size you
> are planning on growing it to?

Currently, I'm planning to start with 2TB disks. I don't expect to go
over 10-12TB.

> Yes, you will always want multiple network paths between the two nodes, and
> also fencing. bonding can be used to improve performance, but you should
> *also* have an additional network or serial or other connection between the
> two nodes which is used for fencing.

Ok.

Any "bare-metal" distribution with DRBD or detailed guide on how to
implement HA?
Something like FreeNAS, or similiar.
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Adam Goryachev



On 12/10/17 06:14, Gandalf Corvotempesta wrote:

So, let's assume a raid -> drbd -> lvm

starting with a single RAID1, what if I would like to add a second
raid1, converting the existing one to a RAID10? Would drbdadm resize
be enough?
Correct, assuming you can convert the raid1 to raid10. You might need to 
start with a 2-device RAID10; best to check that procedure now and 
ensure mdadm will properly support this.
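
An untested sketch of the sort of sequence to verify (the same caveat applies:
check that your mdadm and kernel versions support each step before relying on it):

    mdadm --grow /dev/md0 --level=10             # convert the 2-disk raid1 to raid10
    mdadm --add  /dev/md0 /dev/sdc1 /dev/sdd1    # add the second pair of disks
    mdadm --grow /dev/md0 --raid-devices=4       # reshape to 4 devices
    # once the reshape finishes and md0 is larger:
    drbdadm resize r0                            # then grow the LVM layer on top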

keeping lvm as the upper layer would be best, I think, because it will
allow me to create logical volumes, snapshots and so on.
You can also do that with raid + lvm + drbd... you just need to create a 
new drbd as you add a new LV, and also resize the drbd after you resize 
the LV.

what happens if local raid totally fails ? the upper layer will stay
up thanks to DRBD fetching data from the other node ?
If both drives fail on one node, then raid will pass the disk errors up 
to DRBD, which will mark the local storage as down, and yes, it will 
read all needed data from remote node (writes are always sent to the 
remote node). You would probably want to migrate the remote node to 
primary as quickly as possible, and then work on fixing the storage.

is "raid -> drbd -> lvm" a standard configuration or something bad? I
don't want to put in production something "custom" and not supported.
Yes, it is not some bizarre configuration that has never been seen 
before. You also haven't mentioned the size of your proposed raid, nor 
what size you are planning on growing it to?



How to prevent splitbrains ? Would be enough to bond the cluster
network ? Any qdevice or fencing to configure ?
Yes, you will always want multiple network paths between the two nodes, 
and also fencing. Bonding can be used to improve performance, but you 
should *also* have an additional network, serial, or other connection 
between the two nodes which is used for fencing.
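
For example, with a Corosync 2.x/Pacemaker stack that usually means two
independent rings plus a separate fencing path (addresses are only placeholders):

    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 10.0.0.0        # back-to-back replication link
        }
        interface {
            ringnumber: 1
            bindnetaddr: 192.168.1.0     # second, independent network
        }
    }

with the fencing device (IPMI, PDU, ...) reachable over yet another path.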


Regards,
Adam


2017-10-11 21:07 GMT+02:00 Adam Goryachev :


On 12/10/17 05:10, Gandalf Corvotempesta wrote:

Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.

What I would like to achieve is a simple redundant SAN. (anything
preconfigured for this ?)

Which is best, raid1+drbd+lvm or drbd+raid1+lvm?

Any advantage in creating multiple drbd resources? I think that a
single DRBD resource is better from an administrative point of view.

A simple failover would be enough, I don't need master-master
configuration.

In my case, the best option was raid + lvm + drbd
It allows me to use lvm tools to resize each exported resource as required
easily:
lvextend...
drbdadm resize ...

However, the main reason was to improve drbd "performance" so that it will
use different counters for each resource instead of a single set of counters
for one massive resource.

BTW, how would you configure drbd + raid + lvm ?

If you do DRBD with a raw drive on each machine, then use raid1 on top
within each local machine, then your raw drbd drive dies, the second raid
member will not contain or participate with DRBD anymore, so the whole node
is failed. This only adds DR ability to recover the user data. I would
suggest this should not be a considered configuration at all (unless I'm
awake too early and am overlooking something).

Actually, assuming machine1 with disk1 + disk2, and machine2 with disk3 +
disk4, I guess you could setup drbd1 between disk1 + disk3, and a drbd2 with
disk2 + disk4, and then create raid on machine 1 with drbd1+drbd2 and raid
on machine2 with drbd1+drbd2 and then use the raid device for lvm. You would
need double the write bandwidth between the two machines. When machine1 is
primary and a write arrives for the LV, it will be sent to raid, which will send the
write to drbd1 and also drbd2. Locally, they are written to disk1 + disk2,
but also those 2 x writes will need to send over the network to machine2, so
it can be written to disk3 (drbd1) and disk4 (drbd2). Still not a sensible
option IMHO.

The two valid options would be raid + drbd + lvm or raid + lvm + drbd (or
just lvm + drbd if you use lvm to handle the raid as well).

Regards,
Adam
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Gandalf Corvotempesta
So, let's assume a raid -> drbd -> lvm

starting with a single RAID1, what if I would like to add a second
raid1, converting the existing one to a RAID10? Would drbdadm resize
be enough?

keeping lvm as the upper layer would be best, I think, because it will
allow me to create logical volumes, snapshots and so on.

what happens if local raid totally fails ? the upper layer will stay
up thanks to DRBD fetching data from the other node ?

is "raid -> drbd -> lvm" a standard configuration or something bad? I
don't want to put in production something "custom" and not supported.

How to prevent splitbrains ? Would be enough to bond the cluster
network ? Any qdevice or fencing to configure ?


2017-10-11 21:07 GMT+02:00 Adam Goryachev :
>
>
> On 12/10/17 05:10, Gandalf Corvotempesta wrote:
>>
>> Previously I've asked about DRBDv9+ZFS.
>> Let's assume a more "standard" setup with DRBDv8 + mdadm.
>>
>> What I would like to achieve is a simple redundant SAN. (anything
>> preconfigured for this ?)
>>
>> Which is best, raid1+drbd+lvm or drbd+raid1+lvm?
>>
>> Any advantage in creating multiple drbd resources? I think that a
>> single DRBD resource is better from an administrative point of view.
>>
>> A simple failover would be enough, I don't need master-master
>> configuration.
>
> In my case, the best option was raid + lvm + drbd
> It allows me to use lvm tools to resize each exported resource as required
> easily:
> lvextend...
> drbdadm resize ...
>
> However, the main reason was to improve drbd "performance" so that it will
> use different counters for each resource instead of a single set of counters
> for one massive resource.
>
> BTW, how would you configure drbd + raid + lvm ?
>
> If you do DRBD with a raw drive on each machine, then use raid1 on top
> within each local machine, then your raw drbd drive dies, the second raid
> member will not contain or participate with DRBD anymore, so the whole node
> is failed. This only adds DR ability to recover the user data. I would
> suggest this should not be a considered configuration at all (unless I'm
> awake too early and am overlooking something).
>
> Actually, assuming machine1 with disk1 + disk2, and machine2 with disk3 +
> disk4, I guess you could setup drbd1 between disk1 + disk3, and a drbd2 with
> disk2 + disk4, and then create raid on machine 1 with drbd1+drbd2 and raid
> on machine2 with drbd1+drbd2 and then use the raid device for lvm. You would
> need double the write bandwidth between the two machines. When machine1 is
> primary, and a write for the LV, it will be sent to raid which will send the
> write to drbd1 and also drbd2. Locally, they are written to disk1 + disk2,
> but also those 2 x writes will need to send over the network to machine2, so
> it can be written to disk3 (drbd1) and disk4 (drbd2). Still not a sensible
> option IMHO.
>
> The two valid options would be raid + drbd + lvm or raid + lvm + drbd (or
> just lvm + drbd if you use lvm to handle the raid as well).
>
> Regards,
> Adam
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Some info

2017-10-11 Thread Adam Goryachev



On 12/10/17 05:10, Gandalf Corvotempesta wrote:

Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.

What I would like to achieve is a simple redundant SAN. (anything
preconfigured for this ?)

Which is best, raid1+drbd+lvm or drbd+raid1+lvm?

Any advantage in creating multiple drbd resources? I think that a
single DRBD resource is better from an administrative point of view.

A simple failover would be enough, I don't need master-master configuration.

In my case, the best option was raid + lvm + drbd
It allows me to use lvm tools to resize each exported resource as 
required easily:

lvextend...
drbdadm resize ...
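
In full, growing one exported resource looks roughly like this (names are
examples only):

    lvextend -L +50G /dev/vg0/lv_web   # grow the backing LV
    drbdadm resize web                 # DRBD grows to match and syncs the new area
    # then grow whatever sits on top (the filesystem on the initiator, etc.)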

However, the main reason was to improve drbd "performance" so that it 
will use different counters for each resource instead of a single set of 
counters for one massive resource.


BTW, how would you configure drbd + raid + lvm ?

If you do DRBD with a raw drive on each machine, then use raid1 on top 
within each local machine, then when your raw drbd drive dies, the second 
raid member will no longer contain or participate in DRBD, so the 
whole node is failed. This only adds DR ability to recover the user 
data. I would suggest this should not be a considered configuration at 
all (unless I'm awake too early and am overlooking something).


Actually, assuming machine1 with disk1 + disk2, and machine2 with disk3 
+ disk4, I guess you could setup drbd1 between disk1 + disk3, and a 
drbd2 with disk2 + disk4, and then create raid on machine 1 with 
drbd1+drbd2 and raid on machine2 with drbd1+drbd2 and then use the raid 
device for lvm. You would need double the write bandwidth between the 
two machines. When machine1 is primary and a write arrives for the LV, it will 
be sent to raid, which will send the write to drbd1 and also drbd2. 
Locally, they are written to disk1 + disk2, but also those 2 x writes 
will need to send over the network to machine2, so it can be written to 
disk3 (drbd1) and disk4 (drbd2). Still not a sensible option IMHO.


The two valid options would be raid + drbd + lvm or raid + lvm + drbd 
(or just lvm + drbd if you use lvm to handle the raid as well).
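
The last variant would look something like this (a sketch, device names are only
an example):

    pvcreate /dev/sdb /dev/sdc
    vgcreate vg0 /dev/sdb /dev/sdc
    lvcreate --type raid1 -m 1 -L 2T -n lv_r0 vg0   # LVM provides the mirroring
    # then point the DRBD resource's "disk" option at /dev/vg0/lv_r0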


Regards,
Adam
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user