Hi!

>> - (3) rules out sbd, as this method requires access to a physical device 
>> that offers the shared storage. Am I right? The manual explicitly says that 
>> sbd may not even be used on a DRBD partition. Question: Is there a way to 
>> insert the sbd header on a mounted drive instead of a physical partition? 
>> Are there any other methods of resource fencing besides sbd?

>The only requirement for sbd is to have a dedicated disk on shared storage. 
>That disk (or partition, if you will) doesn't
> need to be big (1MB is enough). I don't see how (3) then is an obstacle.

Let me see if I got the manual (see: http://www.linux-ha.org/wiki/SBD_Fencing) 
right:

a) Our customer might only grant us storage access via NFS. Can one create 
an sbd device on an NFS share?
b) If we set up a shared storage ourselves, we want it to be redundant itself, 
so setting it up with drbd is very likely. The manual says: "The SBD device 
must not make use of host-based RAID." and "The SBD device must not reside on 
a drbd instance."

Did I get this right: The sbd partition is not allowed to reside on either a 
RAID or a DRBD? Well? Doesn't that mess with the concept of redundancy? Let's 
say we have a three-node shared storage, using DRBD to keep the partitions 
redundant between the shared-storage nodes, exporting the storage to the other 
nodes via NFS: Where and how shall the sbd device be created? Only on one of 
the storage nodes? Or on each of the storage nodes? Or somehow on a clustered 
partition (that would mean drbd again, wouldn't it?).

To me, only the latter makes sense, because, as the manual says:

"This can be a logical unit, partition, or a logical volume; but it must be 
accessible from all nodes."
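For what it's worth, my current understanding of the wiki is that the header is written directly to that raw block device with the sbd tool, roughly like this (a sketch only; /dev/sdX is a placeholder for the small dedicated LUN, and I'd appreciate a correction if I'm misreading the procedure):

```shell
# Initialize the SBD header on the dedicated shared disk
# (/dev/sdX is just an example name; ~1 MB is enough)
sbd -d /dev/sdX create

# Verify the header and show the configured timeouts
sbd -d /dev/sdX dump

# List the messaging slots allocated to cluster nodes
sbd -d /dev/sdX list
```

Which, if correct, would explain why the device must be a raw LUN/partition/LV reachable from all nodes rather than something inside a mounted filesystem.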

Aaaaahhhh... my brain is starting to explode... ;-)

Please, I feel that I am getting something entirely wrong here. May, for 
example, the sbd be created on a partition or logical volume that I created on 
a drbd device (or RAID), and does the "no drbd" rule (or "no RAID") only mean 
that the sbd may not be created on the drbd (or RAID) directly?

>>  No way telling if the suicide succeeded or not.

Yes, but on the other hand, suicide is quite independent of the network, while 
for all the power-off methods (including VMware) I need at least access to the 
power device (or VM host), which might not be the case if all communication 
between two locations is demolished (a classical split brain).

> There's also external/libvirt which hasn't been in any release yet, but 
> seems to be of very good quality. You can get it here:

Thanks, I'll check it out!

> There's a document on fencing at http://clusterlabs.org
Which was written by you, right? Don't get me wrong, it is excellent, and 
I have already read it (it's included nearly word-for-word in the SLES HAE 
manual): http://www.clusterlabs.org/doc/crm_fencing.html

Further help is still very welcome.

TNX in advance,

Andreas


------------------------
CONET Solutions GmbH, Theodor-Heuss-Allee 19, 53773 Hennef.
Registergericht/Registration Court: Amtsgericht Siegburg (HRB Nr. 9136)
Geschäftsführer/Managing Directors: Jürgen Zender (Sprecher/Chairman), Anke 
Höfer
Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Hans Jürgen 
Niemeier

CONET Technologies AG, Theodor-Heuss-Allee 19, 53773 Hennef.
Registergericht/Registration Court: Amtsgericht Siegburg (HRB Nr. 10328 )
Vorstand/Member of the Managementboard: Rüdiger Zeyen (Sprecher/Chairman), 
Wilfried Pütz
Vorsitzender des Aufsichtsrates/Chairman of the Supervisory Board: Dr. Gerd 
Jakob
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
