I have a 3-node Xen cluster under CentOS 5.3 that hosts Linux virtual
machines. Node1 and Node2 run virtual machines, but Node3 does not; Node3
is basically used when Node1 or Node2 goes down. This works OK.
But when I turn off both Node1 and Node2, quorum is dissolved
and my cluster fails. I
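For the quorum problem described above, one common approach on CentOS 5 / RHCS is a quorum disk, so that a single surviving node still retains quorum when the other two are down. A minimal cluster.conf sketch, assuming a shared qdisk partition; the label, vote counts and heuristic address are all assumptions, not taken from the poster's setup:

```xml
<!-- Fragment of cluster.conf (cman + qdiskd, CentOS 5). With three 1-vote
     nodes plus a 2-vote quorum disk, expected_votes=5 and quorum needs 3
     votes, so one node plus the qdisk (1 + 2 = 3) keeps the cluster alive. -->
<cman expected_votes="5"/>
<quorumd interval="1" tko="10" votes="2" label="myqdisk">
  <!-- heuristic: node must still reach the default gateway to count -->
  <heuristic program="ping -c1 -w1 192.168.0.1" score="1" interval="2"/>
</quorumd>
```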
In the message dated: Tue, 11 Aug 2009 14:14:03 +0200,
the pithy ruminations from Juan Ramon Martin Blanco were:
Simple 4-node cluster; 2 nodes have had a GFS shared home directory
mounted for over a month. Today I wanted to mount /home on a 3rd node, so:
# service fenced start
[failed]
Weird. Checking /var/log/messages shows:
Aug 11 10:19:06 cerberus kernel: Lock_Harness 2.6.9-80.9.el4_7.10 (
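For what it's worth, fenced typically refuses to start when ccsd/cman are not already running on the node, or when the node is missing from cluster.conf. A rough service-order sketch for bringing GFS up on an additional RHEL4/CentOS 4 node (the log line above is el4); the device path and mount point are assumptions:

```shell
# Typical bring-up order on a RHEL4/CentOS4 cluster node (run as root).
service ccsd start        # config daemon must be up first
service cman start        # join the cluster
cman_tool nodes           # verify this node now shows as a member
service fenced start      # join the fence domain
service clvmd start       # only needed if the GFS volume sits on CLVM
mount -t gfs /dev/vg0/home /home   # device path is an assumption
```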
- "Wendell Dingus" wrote:
| Well, here's the entire list of blocks it ignored and the entire
| message section. Perhaps I'm just overlooking it, but I'm not seeing
| anything in the messages that appears to be a block number. Maybe
| 1633350398, but if so it is not a match.
Your assumption
Good day
I have a Sun X4100 server running CentOS 5.3 x64, patched to latest and
greatest.
When trying to start luci, it simply fails: no error in /var/log and
nothing in /var/lib/luci/log.
I have re-installed luci and ricci a couple of times now, and cleaned out
/var/lib/luci & /ricci between attempts.
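If luci's state directory has been wiped, the service generally will not start again until its database is re-initialized. A minimal sketch for CentOS 5 Conga, run as root (not tested against the poster's box):

```shell
# Re-initialize luci after /var/lib/luci has been removed (CentOS 5 Conga).
luci_admin init            # creates the admin account and a fresh database
service luci restart
chkconfig luci on          # start at boot from now on
```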
Getting cluster software working, including KVM virtual machines with
live migration, can be a very difficult task with many obstacles.
But I would like to mention to the mailing list that I just had some
success. And because nobody is around to tell the wonderful news,
I would like to share it here.
On Tue, Aug 11, 2009 at 2:03 PM, ESGLinux wrote:
> Thanks
> I'll check it when I can reboot the server.
>
> greetings,
>
You have a BMC (IPMI) on the first network interface; it can be configured
at boot time (I don't remember if inside the BIOS or by pressing
Ctrl+something during boot).
Greetings,
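As a follow-up to the BMC hint above: on many boxes the same settings can also be changed from the running OS with ipmitool, avoiding the reboot entirely. A sketch with assumed channel number, addresses and user id; verify each value against your hardware before running it:

```shell
# Configure the BMC LAN interface from a running Linux system (run as root).
modprobe ipmi_devintf ipmi_si           # load the IPMI kernel drivers
ipmitool lan print 1                    # show current settings on channel 1
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.0.120
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.0.1
ipmitool user set password 2 'NewPass'  # user id 2 is often the admin user
```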
Thanks
I'll check it when I can reboot the server.
greetings,
ESG
2009/8/10 Paras pradhan
> On Mon, Aug 10, 2009 at 5:24 AM, ESGLinux wrote:
> > Hi all,
> > I was designing a 2-node cluster and was going to use 2 Dell PowerEdge
> > 1950 servers. I was going to buy a DRAC card to use for fencing.
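For reference, a DRAC card is typically wired into cluster.conf as a per-node fence device. A sketch with placeholder node name, address and credentials (none of these values come from the thread):

```xml
<!-- Fragment of cluster.conf: fencing one node through its DRAC.
     All names, addresses and credentials below are placeholders. -->
<clusternode name="node1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="drac-node1"/>
    </method>
  </fence>
</clusternode>
<!-- ...the second node is defined the same way... -->
<fencedevices>
  <fencedevice agent="fence_drac" name="drac-node1"
               ipaddr="192.168.0.201" login="root" passwd="secret"/>
</fencedevices>
```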
Hi.
Has anybody written a fence agent for VirtualBox?
Can we get this into the mainstream?
BR, Bob
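For anyone wanting to experiment before a real agent lands upstream: fenced hands agents their arguments as key=value lines on stdin, so a minimal VirtualBox agent could look like the sketch below. The VBoxManage verbs are real; the function name, defaults and (deliberately thin) error handling are assumptions. VBOXMANAGE is overridable so the logic can be exercised without VirtualBox installed.

```shell
#!/bin/sh
# Sketch of a VirtualBox fence agent using the classic stdin key=value
# interface: reads "port=<vmname>" and "action=<off|on>" lines, then
# drives VBoxManage accordingly.

fence_vbox() {
    vm="" action="off"
    while IFS='=' read -r key val; do
        case "$key" in
            port)   vm=$val ;;
            action) action=$val ;;
        esac
    done
    [ -n "$vm" ] || { echo "missing port=" >&2; return 1; }
    case "$action" in
        off) "${VBOXMANAGE:-VBoxManage}" controlvm "$vm" poweroff ;;
        on)  "${VBOXMANAGE:-VBoxManage}" startvm "$vm" --type headless ;;
        *)   echo "unsupported action: $action" >&2; return 1 ;;
    esac
}

# Example (would power off the VM named "node3"):
#   printf 'port=node3\naction=off\n' | fence_vbox
```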
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Problem resolved.
rgmanager seems to rely on the 'status' action rather than on the
'monitor' one, which doesn't seem to conform to the OpenCF resource
agent API (
http://www.opencf.org/cgi-bin/viewcvs.cgi/*checkout*/specs/ra/resource-agent-api.txt?rev=1.10)
chapter 3.4.3 (is this doc outdated?), which states th
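Until the status/monitor discrepancy is settled, a defensive workaround is to have the agent answer both verbs with the same code path, so it works whichever one the manager invokes. A minimal sketch; my_check is a hypothetical stand-in probe, not the poster's actual agent:

```shell
#!/bin/sh
# Sketch: one entry point that treats rgmanager's "status" verb and the
# OCF-style "monitor" verb identically.

my_check() {
    return 0   # stand-in: a real agent would probe the service here
}

ra_action() {
    case "$1" in
        start)          echo "start requested" ;;
        stop)           echo "stop requested" ;;
        status|monitor) my_check ;;   # one code path for both verbs
        *)              return 3 ;;   # 3 = OCF_ERR_UNIMPLEMENTED
    esac
}
```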