Hi Steve,
Thanks for the reply.
On Mon, Mar 21, 2011 at 11:11:31AM +, Steven Whitehouse wrote:
> > Note that I've used the same setup for the GFS2 and ext3 tests: same
> > machine, same networking config, same storage array (which is not used by
> > anything else).
> > I also confirmed usin
Lon Hohberger wrote:
> On Fri, Mar 04, 2011 at 02:49:23PM -0500, Ofer Inbar wrote:
> > I could write some
> > separate cluster.conf parser that simulates what I think rgmanager
> > would do, but I might get it wrong. Or rgmanager might change in a
> > future version and I wouldn't track the chang
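As a rough illustration of the kind of parser the poster is wary of writing, here is a minimal sketch using Python's xml.etree. The element and attribute names follow the common cluster.conf layout, but this only guesses at what rgmanager does and could drift from its real behaviour, which is exactly the concern raised above:

```python
# Minimal sketch: list the services and their failover domains from a
# cluster.conf-style document. Element/attribute names assume the usual
# <rm><service name=... domain=...> layout; this is NOT rgmanager's parser.
import xml.etree.ElementTree as ET

def list_services(conf_text):
    """Return (service name, failover domain) pairs from cluster.conf XML."""
    root = ET.fromstring(conf_text)
    rm = root.find("rm")
    if rm is None:
        return []
    return [(svc.get("name"), svc.get("domain"))
            for svc in rm.findall("service")]

sample = """\
<cluster name="demo" config_version="1">
  <rm>
    <service name="oracle" domain="prefer-node1"/>
    <service name="httpd" domain="prefer-node2"/>
  </rm>
</cluster>"""

print(list_services(sample))
```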
On 03/22/2011 03:54 PM, Gianluca Cecchi wrote:
> On Tue, 22 Mar 2011 11:47:58 +0100, Fabio M. Di Nitto wrote:
>> For RHEL related questions you should always file a ticket with GSS.
>
> yes, it is my usual behaviour, but typically I prefer to analyze in
> advance and know if a problem I'm encountering is a bug or only my
> fault in docs understanding...
On 03/22/2011 04:41 PM, berg...@merctech.com wrote:
> The pithy ruminations from "Fabio M. Di Nitto" on "Re:
> [Linux-cluster] rhel6 node start causes power on of the other one" were:
>
> => Hi,
> =>
> => On 3/22/2011 11:12 AM, Gianluca Cecchi wrote:
>
> [SNIP!]
>
> => >
> => > If the
I believe you will want to investigate the "clean_start" property in the
fence_daemon stanza (RHEL 5). I'm unsure whether it is in the RHEL6/Cluster3
code. My understanding is that the property can be used to bypass the timeout
and remote fencing on initial startup. This assumes you know that the remote
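For reference, a fence_daemon stanza carrying that property might look like the following. The attribute values here are illustrative defaults, not a recommendation:

```xml
<!-- Hypothetical RHEL 5 fragment: clean_start="1" tells fenced to assume
     all nodes are in a clean state at startup and skip startup fencing.
     Use with care: it removes protection against a node that is actually
     hung rather than cleanly down. -->
<fence_daemon clean_start="1" post_join_delay="20" post_fail_delay="0"/>
```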
Gianluca,
I thought that the sequence when both nodes are down and one starts was:
a) Fence daemon notices that the other node is down
(with status option of the fence command)
b) Fence daemon waits for the configured amount of time, based on
cluster.conf values or default ones, to "see" the other
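The two steps above can be sketched as a simple wait-then-fence loop. The function names, the polling interval, and the timeout value are illustrative, not the real fenced implementation:

```python
import time

def startup_fencing(peer_alive, fence_peer, post_join_delay=6.0, poll=0.5):
    """Sketch of startup fencing: wait up to post_join_delay for the peer
    to appear (step b); if it never does, fence it.
    peer_alive: callable returning True when the peer is seen
                (e.g. via the fence agent's "status" action, step a).
    fence_peer: callable that power-cycles the remote node."""
    deadline = time.monotonic() + post_join_delay
    while time.monotonic() < deadline:
        if peer_alive():
            return "peer joined, no fencing needed"
        time.sleep(poll)
    fence_peer()  # peer never showed up within the delay: fence it
    return "peer fenced"

# Usage: a peer that never comes up gets fenced after the delay expires.
events = []
print(startup_fencing(lambda: False, lambda: events.append("fenced"),
                      post_join_delay=0.1, poll=0.02))
```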
The pithy ruminations from "Fabio M. Di Nitto" on "Re:
[Linux-cluster] rhel6 node start causes power on of the other one" were:
=> Hi,
=>
=> On 3/22/2011 11:12 AM, Gianluca Cecchi wrote:
[SNIP!]
=> >
=> > If the initial situation is both nodes down and I start one of them, I
=> > get
On 03/22/2011 06:12 AM, Gianluca Cecchi wrote:
> If the initial situation is both nodes down and I start one of them, I
> get it powering on the other, that is not my intentional target...
> Is this an expected default behaviour in RHEL 6 with two nodes
> without quorum disk? Or in general no matt
On Tue, 22 Mar 2011 11:47:58 +0100, Fabio M. Di Nitto wrote:
> For RHEL related questions you should always file a ticket with GSS.
yes, it is my usual behaviour, but typically I prefer to analyze in
advance and know if a problem I'm encountering is a bug or only my
fault in docs understanding...
Thank you Bob;
That is a very good point about the file-system being mounted on the other
node.
I have opened a ticket with Red Hat on this issue, and have not gotten a
satisfactory reply.
I was told by Red Hat support that an "fsck" should be performed at boot.
I am still finding that the kno
- Original Message -
| Good morning;
| We have a critical Oracle application running on a two node Red Hat
| clustered environment. (RHEL5u5)
|
|
| Red Hat clustering has worked extremely well for us; we have achieved
| better performance and improved reliability at a substantially reduced
On Tue, Mar 22, 2011 at 6:42 PM, Bobby Cherian wrote:
>
> Hi all,
>
> May I know the link to download RHEL 6.
That's not really a linux-cluster question, but you can try creating an
RHN login and requesting a trial on
https://www.redhat.com/rhel/details/eval/
There might be some delay since your trial
Good morning;
We have a critical Oracle application running on a two node Red Hat
clustered environment. (RHEL5u5)
Red Hat clustering has worked extremely well for us; we have achieved
better performance and improved reliability at a substantially reduced cost.
The issue is that when the system
Hi,
I have the same situation (two_node=1, RHEL5.5, no quorum disk), but it works
nicely for me. With both nodes down, starting one node always successfully
fences the other, and this is expected, as Fabio said.
In my scenario the fenced node must remain down, even when successfully fenced
by
Hi all,
May I know the link to download RHEL 6.
Regards
Bobby
Hi,
On 3/22/2011 11:12 AM, Gianluca Cecchi wrote:
> Hello,
> I'm using the latest updates on a 2-node RHEL 6 based cluster.
> At the moment no quorum disk defined, so this line inside cluster.conf
>
>
> # rpm -q cman rgmanager fence-agents ricci corosync
> cman-3.0.12-23.el6_0.6.x86_64
> rgmanager-
Hello,
I'm using the latest updates on a 2-node RHEL 6 based cluster.
At the moment no quorum disk defined, so this line inside cluster.conf
# rpm -q cman rgmanager fence-agents ricci corosync
cman-3.0.12-23.el6_0.6.x86_64
rgmanager-3.0.12-10.el6.x86_64
fence-agents-3.0.12-8.el6_0.3.x86_64
ricci-0.1
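The cluster.conf line referred to above appears to have been swallowed by the archive's HTML rendering. In a two-node cluster without a quorum disk, the line in question is typically the two-node quorum override; the exact form below is an assumption, not recovered from the original message:

```xml
<!-- Typical two-node quorum override (assumed; the archive stripped the
     original line). two_node="1" lets a single node retain quorum. -->
<cman two_node="1" expected_votes="1"/>
```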