No, I can't see the logical volumes on the other nodes. vgscan doesn't show
any, nor can I find any new devices in /dev.
As I couldn't find docs/examples on this particular point, I really don't know
what to expect.
I'm trying with different types of logical volumes (striped, mirrored), but
did
As far as I know, you should be able to at least SEE the logical
volume as long as there is a path to the physical volumes on the other
nodes. Are you able to see the same block devices (e.g. /dev/sd?) on
the other nodes?
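One way to act on the advice above is to run the same visibility checks on every node and diff the output. A minimal sketch follows; the LVM commands are standard LVM2 tools, but they only mean something on the real cluster hardware, so the sketch just prints the checklist rather than executing it:

```shell
#!/bin/sh
# Sketch: the same storage-visibility checks, run on EACH node, let you
# compare what the kernel and LVM2 see. Printed as a checklist here,
# since the commands need the actual shared storage to be meaningful.
CHECKS='ls -l /dev/sd?      # same shared block devices on every node?
pvscan                      # physical volumes detected?
vgscan --mknodes            # rescan VGs, recreate /dev entries
lvscan                      # logical volumes listed (active or not)?'
echo "$CHECKS"
```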
Shawn Hood
2008/4/14 nch <[EMAIL PROTECTED]>:
Hello, everybody.
I'm trying to run a cluster with 3 nodes. One of them would share storage with
the other two using GFS and DLM (kernel 2.4.18-6).
I was able to start ccsd, cman, fenced and clvmd on all nodes. I've defined a
logical volume on the storage node and was able to gfs_mkfs, activate
Hi,
[EMAIL PROTECTED] wrote:
I remember that this was mentioned several times in the last few
months, but has any documentation been put together on the API that
the fencing drivers are supposed to cover?
I'm looking into writing a fencing driver based on disabling switch
ports on a manage
I've been using RHCS to control DRBD quite happily, but only in a
active/passive scenario.
All it requires is a little script, and an rgmanager <script> resource:
#!/bin/bash
exec /etc/ha.d/resource.d/drbddisk "$@"
(/etc/ha.d/resource.d/drbddisk is installed by the DRBD package)
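Since rgmanager drives a script resource through the usual LSB-style start/stop/status actions, the wrapper can be sanity-checked by hand before wiring it into the cluster configuration. A small sketch, assuming the wrapper was installed at /etc/init.d/drbdwrap (that path is only an assumption, not from the original post); the real invocations are commented out so the sketch only prints what it would run:

```shell
#!/bin/sh
# Sketch: exercise the drbddisk wrapper the way rgmanager would,
# one action at a time. WRAPPER is an assumed install path.
WRAPPER=/etc/init.d/drbdwrap
for action in start status stop; do
  echo "would run: $WRAPPER $action"
  # "$WRAPPER" "$action" || echo "action '$action' failed"
done
```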
Regards,
Arjuna Christ
Hi,
I remember that this was mentioned several times in the last few months,
but has any documentation been put together on the API that the fencing
drivers are supposed to cover?
I'm looking into writing a fencing driver based on disabling switch ports
on a managed 3com switch via the telne