On Tue, Sep 1, 2009 at 12:19 PM, Lon Hohberger wrote:
> On Wed, 2009-08-26 at 16:11 +0200, Jakov Sosic wrote:
> > Hi.
> >
> > I have a situation - when two nodes are up in a 3-node cluster, and one
> > node goes down, the cluster loses quorum - although I'm using qdiskd...
Issue a cman_tool status on both nodes, and also group_tool, and post the
outputs.
2009/9/11 James Marcinek
When I try to issue lvdisplay commands on the 2nd node (the problem child), it
was hanging... There was only one clvmd service running when I ps'd it... I may
just rebuild the thing unless someone else has another option. Rebooting hasn't
seemed to fix it. What log files would you recommend I examine?
I've just run into an odd problem on my production cluster. One of the nodes
got fenced (still digging through logs to find out why) and on its way back
up, it appears to join the cluster fine, but the node that fenced it starts
spewing tons of these into /var/log/messages:
Sep 10 14:25:34 re
The dirty flag does not point to any error; it's a normal status (it's going
to be changed to something else in future releases, as many people got worried
about it).
The message you get means that your second node cannot contact clvmd and so
cannot access clustered VGs.
issue a ps -ef | grep c
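For background (my addition, not from the thread): the clustered-VG failure mode described above usually traces back to LVM's locking mode in /etc/lvm/lvm.conf. A sketch of the relevant fragment:

```
# /etc/lvm/lvm.conf (fragment)
# locking_type = 3 enables clustered locking through clvmd; if clvmd is
# not reachable, the LVM tools print "connect() failed on local socket",
# fall back to local file-based locking, and clustered VGs stay inaccessible.
locking_type = 3
```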
Hi,
No comments on this, RHCS gurus? Am I trying to set up something (a multisite
cluster) that will never be supported?
Or is the qdiskd reboot action considered sufficient? (The reboot action
should be a dirty power reset to prevent data syncing.)
If so, all IO's on the wrong nodes (at the isolated
No, I didn't. That might be the root cause.
I was able to get rid of it, but now I get these errors on the second cluster
node:
connect() failed on local socket: Connection refused
WARNING: Falling back to local file-based locking.
Volume Groups with the clustered attribute will be inaccessible
wh
This is shared storage, correct?
Have you tried the pvscan/vgscan/lvscan dance?
Did you create the VG with -c y?
-luis
Luis E. Cerezo
Global IT
GV: +1 412 223 7396
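A minimal sketch of the clustered-VG creation and the rescan dance asked about above. The device name (/dev/sdb1) and VG name are hypothetical, and the LVM commands are guarded behind RUN_LVM=1 so the sketch does nothing destructive by default:

```shell
# Hypothetical device and VG name; set RUN_LVM=1 on a real cluster node
# (with clvmd running on all nodes) to actually execute the LVM commands.
if [ "${RUN_LVM:-0}" = "1" ]; then
    pvcreate /dev/sdb1
    vgcreate -c y clustervg /dev/sdb1    # -c y sets the clustered attribute
    lvcreate -n lv0 -L 1G clustervg
    # On the other node, re-read the metadata (the "dance"):
    pvscan && vgscan && lvscan
fi
echo "sketch loaded"
```

If this worked, vgdisplay clustervg on either node should show the Clustered attribute.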
On Sep 10, 2009, at 3:47 PM, James Marcinek wrote:
It turns out that after my initial issues I turned off clvmd on both nodes. One
of them comes up nicely but the other hangs... I'm going to boot into runlevel 1
and check my LVM stuff; this might be the root cause (hoping) of why they're not
becoming members (one sees the phantom LVM and the othe
Gianluca Cecchi wrote:
So now the question is: to understand correctly how to manage eventual
interactions with other init scripts, where and how exactly will the
service srvname be stopped when I run
shutdown -h now
or
shutdown -r now
?
Which one of the related init scripts is responsible to d
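As a sketch of the mechanics being asked about (my summary, not from the thread): on SysV init, shutdown -h now switches to runlevel 0 and shutdown -r now to runlevel 6; in both cases init runs the K* links in /etc/rc0.d (or /etc/rc6.d) in ascending numeric order, so where srvname stops relative to the cluster scripts is set by the K-numbers from each script's chkconfig header. The K-numbers below are hypothetical:

```shell
# Hypothetical K-links: sorting them shows the order in which init would
# stop the services at shutdown (lower K-numbers stop first).
printf '%s\n' K52srvname K10rgmanager K74ipmi | sort
```

To change when srvname stops, adjust the stop priority in its chkconfig header and re-run chkconfig so the rc?.d links are regenerated.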
You really need the cluster to be in quorum before LVM will work nicely.
What is the output of clustat?
Do you have clvmd up and running on both nodes?
Did you run pvscan/vgscan/lvscan after initializing the volume?
What did vgdisplay say? Was it set to "not available", etc.?
-luis
Luis E. Cerezo
Glob
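A quick way to start on the checklist above; the tool names come from the thread, and this loop only reports whether each one is installed on the node it runs on (it does not touch the cluster):

```shell
# Report which cluster/LVM tools are present; run on each node.
for tool in clustat clvmd vgdisplay pvscan; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```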
I forgot to turn on clvmd, but now that I have, I'm getting some 'Call Trace'
issues...
- Original Message -
From: "James Marcinek"
To: "linux clustering"
Sent: Thursday, September 10, 2009 3:38:33 PM GMT -05:00 US/Canada Eastern
Subject: Re: [Linux-cluster] EXT3 or GFS shared disk
I'm running 5.3 and it gave me a locking issue and indicated that it couldn't
create the logical volume. However, it showed up anyway and I had some issues
getting rid of it.
I couldn't get rid of the LVM because it couldn't locate the id. In the end I
rebooted the node and then I could get rid of it.
What grief did it give you? Also, what version of RHEL are you running?
5.1 has some known issues with clvmd.
-luis
Luis E. Cerezo
Global IT
GV: +1 412 223 7396
On Sep 10, 2009, at 12:37 PM, James Marcinek wrote:
Hello again,
Next question.
Again, since my cluster class (back in '04), GFS wasn't around, so I'm not sure
whether I should use it or not in this cluster build...
If I have an active/passive cluster where only one node needs access to the
file system at a given time, should I just use an ext
thanks
- Original Message -
From: "Paras pradhan"
To: "linux clustering"
Sent: Thursday, September 10, 2009 1:22:17 PM GMT -05:00 US/Canada Eastern
Subject: Re: [Linux-cluster] clustering questions
This is a nice article
http://magazine.redhat.com/2007/12/19/enhancing-cluster-quorum-with-qdisk/
Paras.
On Thu, Sep 10, 2009 at 12:16 PM, James Marcinek wrote:
Hi Everyone,
It's been a while since I took the clustering class and some items have
changed. Can someone tell me how to set up a quorum disk in the cluster
settings, with regard to the heuristics programs used for testing?
Thanks,
james
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https:/
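One way to answer the heuristics question above is a cluster.conf sketch. The label, interval/tko values, and gateway IP below are hypothetical; the heuristic's "program" is any command whose exit status decides whether this node keeps its qdisk vote (the device itself is labeled beforehand with mkqdisk -c <device> -l <label>):

```xml
<!-- cluster.conf fragment (sketch): a quorum disk with one ping heuristic -->
<quorumd interval="1" tko="10" votes="1" label="myqdisk">
  <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2" tko="3"/>
</quorumd>
```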
On Tue, Sep 8, 2009 at 5:34 PM, Alan A wrote:
> It has come to the point where our cluster production configuration has
> halted due to unexpected issues with multicasting on the LAN/WAN.
>
> The problem is that the firewall enabled on the switch ports does not
> support multicasting, and between
Hello again,
I have initiated an X session on my server and I realized that on the root
desktop there was a strange icon:
LaunchServerAdministrator.
I don't recognize it, but I clicked on it (I know, a very bad idea, it could
be anything, but curiosity killed the cat...)
And it opened
Thank you Juanra,
you are right. I have executed this command from another machine:
ipmitool -U root -H 192.168.1.250 lan print
Password:
Set in Progress : Set Complete
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable: Callback : MD2 MD5
: User
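Once the BMC answers ipmitool like this, it can serve as a fence device. A cluster.conf sketch (the device name and password are placeholders; the IP is the one from the message above):

```xml
<!-- cluster.conf fragment (sketch): the BMC as an IPMI fence device -->
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="node1-ipmi"
               ipaddr="192.168.1.250" login="root" passwd="CHANGEME"/>
</fencedevices>
```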
Hello,
suppose that I have a service srvname defined in chkconfig and I would like to
insert it as a resource/service in my cluster.conf
(version 3 of the cluster stack as found in F11, but an answer for version 2,
as in RHEL 5, is also welcome if different).
So my cluster.conf is something like this:
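(The original config was cut off in the archive; the following is my own sketch of the usual shape, not the poster's file.) rgmanager's script resource agent calls the init script with start/stop/status; the service should be chkconfig'd off so init and rgmanager don't both manage it:

```xml
<!-- cluster.conf fragment (sketch): wrapping an init script as a service -->
<rm>
  <resources>
    <script name="srvname-script" file="/etc/init.d/srvname"/>
  </resources>
  <service name="srvname" autostart="1">
    <script ref="srvname-script"/>
  </service>
</rm>
```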
On Thu, Sep 10, 2009 at 12:51 PM, ESGLinux wrote:
Hi all,
after a long time without an opportunity to check the boot process of my
server to see the message, I have done it.
I can see the following message:
BMC Revision 2.05
Remote Access Configuration Utility 1.25
I entered the utility by pressing F2. I have configured the IP to
192.168.1.250.