On Wednesday 13 May 2009 06:04:38 John Cronin wrote:
> Getting slightly off topic, but still somewhat relevant.
>
> Linux has many flavors of Ethernet bonding. To be sure, link aggregation
> resulting in increased bandwidth is generally supported on a single switch.
> However, Linux does have an a
On Thursday 07 May 2009 14:11:02 John Cronin wrote:
> One thing to consider - don't you want to know when you lose a heartbeat
> link? I know the lost link will be noted in /var/log/messages or wherever
> your syslog configuration indicates, but I would want it in my VCS logs
> somewhere too.
Of c
On Wednesday 13 May 2009 05:27:00 Hudes, Dana wrote:
> I don't care what tricks Linux plays or what they call it. From a
> network perspective, true bonding requires connection to the same
> switch/router and is done at the link layer. You don't have 2 IP
> interfaces, you have one. The bits go out
-Original Message-
From: veritas-ha-boun...@mailman.eng.auburn.edu
[mailto:veritas-ha-boun...@mailman.eng.auburn.edu] On Behalf Of Imri Zvik
Sent: Sunday, May 03, 2009 12:18 PM
To: Jim Senicka
Cc: veritas-ha@mailman.eng.auburn.edu
Hi,
Not long ago, we bought a product that uses Sybase IQ as its backend
database.
The company from which we bought it claims it cannot put its raw devices
under Veritas control (/dev/vx/rdsk/dg/vol), because it is not supported.
As a reference, they sent a five-year-old document, titled "SYBA
On Wednesday 06 May 2009 19:49:27 Sandeep Agarwal (MTV) wrote:
> The lost hb messages are due to problems in Layer 2 connectivity.
When I transform the two links into a bonded interface, the lost hb messages
go away (even when I fail over the active NIC back and forth).
Is this new feature is
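A quick way to watch that NIC failover test is to read the bonding driver's status file, /proc/net/bonding/bond0. A minimal sketch that parses its "Currently Active Slave" line (the sample contents below are hypothetical, following the format the Linux bonding driver documents, not output taken from this cluster):

```shell
# Sample /proc/net/bonding/bond0 contents (hypothetical; on a real node you
# would read the file directly, e.g. status=$(cat /proc/net/bonding/bond0))
status='Ethernet Channel Bonding Driver: v3.2.4
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2
MII Status: up'

# Extract which slave NIC is currently carrying the traffic
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"   # prints "eth2" for the sample above
```

During the test, re-reading the file after pulling the active link should show the other slave take over; `ifenslave -c bond0 eth3` can force a switch by hand.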
On Wednesday 06 May 2009 00:55:23 you wrote:
> Nothing to add to /etc/llttab
>
> If you have 5.0MP3 then it should work.
So considering the following llttab file:
set-node rac-node1
set-cluster 0
link eth2 eth-00:21:5e:1f:0b:b0 - ether - -
link eth3 eth-00:21:5e:1f:0b:b1 - ether - -
and that eth2
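For readers following along, the fields of the `link` directive in that file are, as I read the llttab documentation (hedged from memory; verify against your release's manual): a link tag, the device (addressed by MAC on Linux), a node range, the link type, the SAP, and the MTU, with `-` taking the LLT default. The same file, annotated:

```
# /etc/llttab -- annotated copy of the file above
#   link <tag> <device> <node-range> <link-type> <SAP> <MTU>
#   "-" = LLT default (all nodes / default SAP / default MTU)
set-node rac-node1
set-cluster 0
link eth2 eth-00:21:5e:1f:0b:b0 - ether - -
link eth3 eth-00:21:5e:1f:0b:b1 - ether - -
```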
On Monday 04 May 2009 23:51:54 Sandeep Agarwal (MTV) wrote:
> Unfortunately we don't have any "more" documentation for this feature.
>
> Basically, LLT works in the same way except now it can handle the
> failure that you described and now we can connect the two switches that
> the individual LLT l
On Sunday 03 May 2009 20:07:29 Sandeep Agarwal (MTV) wrote:
> From 5.0MP3 onwards we do support cross-links. In your example if you
> had a cable connecting sw1 and sw2 then the failure that you described
> would be handled and LLT would still have 1 valid link between node 1
> and node 4.
Could y
On Sunday 03 May 2009 19:22:38 Jim Senicka wrote:
> Let me check on this with engineering and see if we have any more up to
> date recommendations
Thanks!
___
Veritas-ha maillist - Veritas-ha@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailma
On Sunday 03 May 2009 19:03:16 Jim Senicka wrote:
> This is not a limitation, as you had two independent failures. Bonding
> would remove the ability to discriminate between a link and a node
> failure.
I didn't understand this one - With bonding I can maintain full mesh
topology - No matter whic
On Sunday 03 May 2009 18:25:08 Jim Senicka wrote:
> You had 2 failures. No real way to design around that.
> GAB "visible" would prevent bad things from occurring.
Thank you for the fast response :)
Well, in Linux I can use the bonding module to aggregate the interfaces and
work around this limi
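For context, a minimal sketch of what such a bonding setup might look like on RHEL 5 (the filenames and the choice of active-backup mode are assumptions for illustration, not taken from this thread; adapt to your distribution):

```
# /etc/modprobe.conf (RHEL 5) -- load the bonding driver for bond0
alias bond0 bonding

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# active-backup: one NIC carries traffic, the other takes over on link failure
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth2 (repeat analogously for eth3)
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```

Note that LLT heartbeat links run at layer 2, so the bonded interface needs no IP address for this use.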
Hi,
As far as I understand the manuals, the LLT heartbeat links should be isolated
from each other. Now, consider the following scenario - Node1 is connected
with two links, each one to a separate switch. We will call them
li...@node1@sw1 and li...@node1@sw2.
Node4 is also connected with 2 link
On Tuesday 17 March 2009 15:32:55 Jim Senicka wrote:
> A few questions
>
> 1. Do you have a support case open?
Yes, for over two weeks.
> 2. Do you reconnect the FC before the node boots?
Yes, FC is reconnected immediately after the panic.
> 3. Is the network available during boot time?
Yes.
Hi,
I have a 4-node SFCFSRAC cluster, running on Linux (RHEL 5 x86_64), with
SFCFSRAC version 5MP3RP2.
As part of my ATP, I've tried disconnecting node1 from the storage (by
shutting down its FC ports at the FC switch). The node panicked, and the
cluster did recognize the node failure, evicte