On 02/23/2012 08:53 PM, Terry wrote:
> We had a disk loss where we lost 4 clustered volumes. Any commands I
> run give me errors similar to:
> /dev/vg_data01h/lv_data01h: read failed after 0 of 4096 at
> 6597069701120: Input/output error
>
> How do I just clean all of the broken pv's, lv's, and vg's?
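For what it's worth, a minimal cleanup sketch, assuming the lost PV is
permanently gone and the data on it is written off (the VG name comes from
the error above; the device path is illustrative):

    vgchange -an vg_data01h                        # deactivate all LVs in the VG first
    vgreduce --removemissing --force vg_data01h    # drop the missing PV, removing LVs that lived on it
    vgremove -f vg_data01h                         # or remove the whole VG if it is a write-off
    pvremove -ff /dev/sdX                          # wipe the LVM label from a stale surviving PV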
On Thu, Feb 23, 2012 at 12:13 PM, Steven Whitehouse wrote:
> Was that the only change between the two tests?
Yes, that was the only change (the no fencing cluster.conf is below).
Devices, mount options, etc. remained the same between the tests.
Best regards,
Greg
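For illustration, a minimal two-node, no-fencing cluster.conf generally
looks like the sketch below (the cluster and node names are placeholders,
not the actual test config):

    <?xml version="1.0"?>
    <cluster name="testcluster" config_version="1">
      <!-- two_node mode keeps quorum with a single surviving node -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
      </clusternodes>
      <!-- no <fencedevices> or per-node <fence> blocks: fencing disabled -->
    </cluster>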
Hi,
On Thu, 2012-02-23 at 11:56 -0500, Greg Mortensen wrote:
> Hi.
>
> I'm testing a two-node virtual-host CentOS 6.2 (2.6.32-220.4.2.el6.x86_64)
> GFS2 cluster running on the following hardware:
>
> Two physical hosts, running VMware ESXi 5.0.0
> EqualLogic PS6000XV iSCSI SAN
>
> I have exported a 200GB shared LUN that the virtual hosts have mounted
> as a Mapped Raw LUN (physical compatibility mode).
Hi.
I'm testing a two-node virtual-host CentOS 6.2 (2.6.32-220.4.2.el6.x86_64)
GFS2 cluster running on the following hardware:
Two physical hosts, running VMware ESXi 5.0.0
EqualLogic PS6000XV iSCSI SAN
I have exported a 200GB shared LUN that the virtual hosts have mounted
as a Mapped Raw LUN (physical compatibility mode).
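For context, creating and mounting a GFS2 filesystem on a shared LUN for a
two-node cluster generally follows this shape (the cluster name, filesystem
name, device path, and mount point are illustrative):

    mkfs.gfs2 -p lock_dlm -t testcluster:gfs2data -j 2 /dev/mapper/gfs2lun
    mount -t gfs2 /dev/mapper/gfs2lun /mnt/gfs2

The -t argument must match the cluster name in cluster.conf, and -j 2
allocates one journal per node.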
Hi,
I have two questions about the High Availability Add-On for RHEL 6.2
that have been bothering me for a few days.
1. Is it possible to disable monitoring (status checks) for some or all
resources? I just want the resources to start on node2 in the event of a
complete node1 failure, but not in any other case (see the sketch after
these questions).
2. is
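On question 1: if the resources are managed by rgmanager, the periodic
status check can be tuned per resource by overriding the status action in
cluster.conf. A sketch, assuming an rgmanager build that honors action
overrides (the fs resource and all its parameters are illustrative):

    <service name="svc1" autostart="1">
      <fs name="data" device="/dev/vg_app/lv_app" mountpoint="/data" fstype="ext4">
        <!-- interval="0" disables the periodic status check for this resource -->
        <action name="status" depth="*" interval="0"/>
      </fs>
    </service>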