On 09/01/12 13:33, Rajagopal Swaminathan wrote:
>> Switches used for this purpose are best completely isolated from the rest
>> of the network, and multicast traffic control should be DISABLED.
>
> I distinctly remember asking the network guys for multicast mode to be on
> for the heartbeat network (for the c
On 09/01/12 13:23, SATHYA - IT wrote:
> Alan,
> The corosync (heartbeat) network is not connected to a switch; it is
> connected directly from server to server.

See my comment about direct hookups. My experience is that they are
prone to playing up for no apparent reason (NICs simply aren't d
Greetings,

On Mon, Jan 9, 2012 at 6:46 PM, Alan Brown wrote:
> On 09/01/12 05:24, Digimer wrote:
>
> Switches used for this purpose are best completely isolated from the rest
> of the network, and multicast traffic control should be DISABLED.

I distinctly remember asking the network guys Mult
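Switch-side multicast filtering (typically IGMP snooping with no querier on an isolated VLAN) is a classic cause of corosync losing its heartbeat, which is why the advice above is to disable multicast traffic control on a dedicated cluster switch. As a hedged sketch (the address and file contents below are illustrative, not from the original poster's configuration): on RHEL 6, cman derives a multicast group from the cluster name, but it can be pinned explicitly in `cluster.conf` so the nodes and any switch filters agree, and end-to-end multicast can be verified beforehand by running `omping node1 node2` simultaneously on both nodes.

```xml
<!-- Fragment of /etc/cluster/cluster.conf (address is illustrative) -->
<cman>
  <!-- Pin the multicast group instead of letting cman derive one from
       the cluster name; 239.192.0.0/14 is the administratively scoped
       range corosync uses by default. -->
  <multicast addr="239.192.0.1"/>
</cman>
```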
To: linux clustering
Cc: Digimer; SATHYA - IT
Subject: Re: [Linux-cluster] rhel 6.2 network bonding interface in cluster
environment

On 09/01/12 05:24, Digimer wrote:
> With both of the bond's NICs down, the bond itself is going to drop.
> Odds are, both NICs are plugged into the same switch.

(assuming the OP isn't running things plugged NIC-to-NIC - which I have
found in the past tends to be flaky when N-way negotiation becom
Sent: Monday, January 09, 2012 6:08 PM
To: SATHYA - IT
Cc: 'Digimer'; 'linux clustering'
Subject: Re: [Linux-cluster] rhel 6.2 network bonding interface in cluster
environment

On 09.01.2012 11:43, SATHYA - IT wrote:
> Klaus,
>
> For your point: the corosync network is not connected to the switch. They
> are connected directly to the servers (server to server).

Ahh, then the going down of the bond is probably not a sign of a network
problem; it probably goes down when
Sent: Monday, January 09, 2012 11:48 AM
To: 'Digimer'; 'linux clustering'
Subject: RE: [Linux-cluster] rhel 6.2 network bonding interface in cluster
environment

Not sure whether you received the logs and cluster.conf file; pasting
them here again...

On File Server1:
Jan 8 03:1
>
>
> Jan 3 14:46:07 filesrv2 kernel: bnx2 :04:00.0: eth4: NIC Copper Link is
> Down
> Jan 3 14:46:07 filesrv2 kernel: bnx2 :03:00.1: eth3: NIC Copper Link is
> Down
> Jan 3 14:46:07 filesrv2 kernel: bonding: bond1: link status definitely down
> for interface eth3, disabling it
> Ja
Thanks
Sathya Narayanan V
Solution Architect
-----Original Message-----
From: SATHYA - IT [mailto:sathyanaray
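The "link status definitely down" messages in the logs above come from the bonding driver, whose per-slave view also lives in /proc/net/bonding/<bond>. As a sketch (the sample text below is a made-up excerpt in the format the RHEL 6 bonding driver prints, with the thread's eth3/eth4 names; on a live node you would point awk at the file directly), the per-slave MII status can be pulled out like this:

```shell
# Sketch: parse per-slave link state from bonding status output.
# 'sample' mimics /proc/net/bonding/bond1 (assumed driver format);
# on a real node: awk '...' /proc/net/bonding/bond1
sample='Ethernet Channel Bonding Driver: v3.6.0

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth3
MII Status: up

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps

Slave Interface: eth4
MII Status: down
Speed: Unknown'

# Emit "<slave> <mii-status>" pairs; the bond-level "MII Status" line is
# skipped because no slave name has been seen yet at that point.
states=$(echo "$sample" | awk '
  /^Slave Interface:/ { slave = $3 }
  /^MII Status:/ && slave { print slave, $3; slave = "" }
')
echo "$states"
```

With both slaves reporting "down", as in the Jan 3 log lines, the bond itself has nothing left to fail over to, which matches Digimer's point that the bond then drops.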
On 01/09/2012 12:12 AM, SATHYA - IT wrote:
> Hi,
>
> Thanks for your mail. I am herewith attaching the bonding and eth
> configuration files. And in /var/log/messages during the fence operation,
> network-related log entries appear only on the node which fences the other.

What IPs do
link status definitely up for
interface eth4, 1000 Mbps full duplex.
Thanks
Sathya Narayanan V
Solution Architect
-----Original Message-----
From: Digimer [mailto:li...@alteeve.com]
Sent: Monday, January 09, 2012 10:27 AM
To: linux clustering
Cc: SATHYA - IT
Subject: SPAM - Re: [Linux-cl
On 01/08/2012 11:37 PM, SATHYA - IT wrote:
> Hi,
>
> We had configured a RHEL 6.2 two-node cluster with clvmd + gfs2 + cman +
> smb. We have 4 NIC cards in the servers, where two are configured in
> bonding for heartbeat (with mode=1) and two are configured in bonding
> for public access (with mode=0).
Hi,

We had configured a RHEL 6.2 two-node cluster with clvmd + gfs2 + cman + smb.
We have 4 NIC cards in the servers, where two are configured in bonding for
heartbeat (with mode=1) and two are configured in bonding for public access
(with mode=0). The heartbeat network is connected directly from server
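For readers unfamiliar with the modes: mode=1 is active-backup (one NIC carries traffic, the other takes over on link failure), while mode=0 is balance-rr round-robin, which generally needs cooperating switch ports. A minimal sketch of the heartbeat bond on RHEL 6, assuming interface names, addresses, and file paths that are illustrative rather than taken from the poster's servers:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond1  (heartbeat bond - sketch)
DEVICE=bond1
IPADDR=10.0.0.1          # illustrative address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# mode=1 = active-backup; miimon polls link state every 100 ms so the
# bond can fail over when a slave NIC loses link.
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth3  (one slave - sketch)
DEVICE=eth3
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```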