Hi Stig,

I will be following the Glendale release, though I want to keep only 
stable releases in production. Glad to know this feature is coming down 
the pipe.
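
In case it helps anyone searching the archives later, my reading of the
Glendale notes suggests the bridge-IP and ECMP setup would look something
like the sketch below. I have not run this on the alpha myself, and the
interface names and addresses are invented, so treat it as a guess at the
syntax rather than a tested configuration:

```shell
# Hypothetical Glendale (VC4 alpha) CLI sketch -- untested; interface
# names and addresses below are placeholders, not a real deployment.
configure

# Put both physical links to the switch stack into one bridge group,
# then assign the IP to the bridge interface itself.
set interfaces bridge br0
set interfaces ethernet eth0 bridge-group bridge br0
set interfaces ethernet eth1 bridge-group bridge br0
set interfaces bridge br0 address 192.0.2.2/24

# ECMP across the dual ISP drops: two equal-cost static default routes.
set protocols static route 0.0.0.0/0 next-hop 198.51.100.1
set protocols static route 0.0.0.0/0 next-hop 203.0.113.1

commit
```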

Thanks!
-Daniel

Stig Thormodsrud wrote:
> Daniel,
>
> If you're able to use the Glendale alpha (i.e. VC4.0.0), it has
> support for adding an IP address on the bridge interface, and it also
> supports ECMP, which might be an option for your dual links.  I wrote a
> quick howto for ECMP on the new forum at:
> http://www.vyatta.org/forum/viewtopic.php?t=20
>
> stig
>
>   
>> -----Original Message-----
>> From: [EMAIL PROTECTED] [mailto:vyatta-users-
>> [EMAIL PROTECTED] On Behalf Of Daniel Stickney
>> Sent: Wednesday, March 05, 2008 11:42 AM
>> To: [EMAIL PROTECTED]
>> Subject: [Vyatta-users] Request for link redundancy suggestions
>>
>> Hi Everyone,
>>
>> I have exhausted my ideas and am now looking for suggestions on how to
>> achieve my goal of having redundant links from two clustered Vyatta
>> boxes. I'll lay out the technical details and goal first. We have two
>> edge layer 3 switches which are stacked (with stacking modules and
>> cable, so they are a single logical switch with a single
>> administration interface. For those not familiar with stacking, they
>> act like separate 48 port blades in a switch chassis) and two Vyatta
>> boxes with clustering configured. The cluster resources are one public
>> VIP and one private VIP. I am excluding the rest of the network
>> architecture to focus on the links between the two Vyatta boxes and
>> the edge (logical) switch. Our network design requirements document
>> stated we wanted to have no single point of failure in our network
>> backbone. To meet this goal, we have 2 ISP drops with 2 cables per
>> drop (spanning-tree used to select the designated cable on each drop)
>> to our edge switch stack; one cable from each drop is connected to
>> each switch (think "connected to each blade") so if an edge switch in
>> the stack dies (or "if a blade in the chassis dies") traffic can still
>> run through the surviving edge switch ("blade"). As mentioned, we also
>> have two Vyatta boxes clustered. The only part of this that I can't
>> figure out how to make redundant is the gigabit network cable between
>> each Vyatta box and the edge switch stack (named link1-1, link1-2,
>> link2-1, and link2-2 below). I am hoping to hear some suggestions on
>> how this might be achieved within our architecture. So far I have
>> considered port-channeling and spanning-tree, but neither of these
>> appears to be a solution in this case. Here is an ASCII drawing of
>> this description:
>>
>>
>>
>> ISP-drop1-1---->|---------|
>> ISP-drop1-2---->|edge-sw-1|<----link2-1------------
>>                 |         |<----link2-2--------   |
>>                 |---------|                   |   |
>>  -----link1-1-->|         |<----ISP-drop2-1   |   |
>>  |    -link1-2->|edge-sw-2|<----ISP-drop2-2   |   |
>>  |    |         |---------|                   |   |
>>  |    |                                       |   |
>>  |    |          ------------------------------   |
>>  |    |          |     ----------------------------
>>  |    |          |     |
>>  |    |          |     |
>> |----------|   |----------|
>> | vyatta-1 |   | vyatta-2 |
>> |----------|   |----------|
>>
>>
>>
>> I realize link1 and link2 can be done with one cable from each Vyatta
>> box to the edge switch stack, but we are trying to eliminate each
>> cable as a single point of failure by providing a backup cable from
>> each Vyatta box to the edge. We have no interest in applying a
>> 'duct-tape and bubble gum' hacked-together solution on our network
>> backbone, so I am hoping there is a standardized method to achieve our
>> goal. I am concerned that I am misunderstanding something or missing
>> an option, which has left me at a dead end. Here is how I got to this
>> logical dead end. Vyatta (VC3) does not support Linux bonded NICs
>> (devices named bondX) in the command shell or web interface, so
>> port-channeling is not an option. Vyatta (VC3) does support bridge
>> groups, but not officially assigning them IPs within Xorpsh or the web
>> interface (yes, I know the Linux shell method, but we are avoiding any
>> unofficial hacks). With 2 bridged interfaces, running spanning-tree
>> with the switch ports to have only one active cable would meet our
>> goal, but we need the clustering to move a VIP resource back and forth
>> between the bridge group interfaces on the two Vyatta boxes, and as
>> far as I can tell from reading the manuals, forums, and guides, this
>> is not supported.
>>
>> Am I understanding things correctly? I am ok with the answer being
>> "there is no way to do what you want at this time"; I just don't want
>> to miss an officially supported method if it exists. If we just have
>> to use a single cable from each Vyatta box to the edge stack, it is
>> not optimal, but it is acceptable.
>>
>>
>> Thanks very much for suggestions.
>> -Daniel
>>
>>
>> --
>> Daniel Stickney - Linux Systems Administrator
>> Email: [EMAIL PROTECTED]
>>
>
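
P.S. For completeness, the unofficial "Linux shell method" of bonding
that I said we're avoiding would look roughly like the commands below.
The interface names and address are made-up examples, and since this
lives outside the Vyatta CLI and config system, it is exactly the kind
of unsupported hack we don't want on the backbone:

```shell
# Unofficial Linux bonding sketch (active-backup) -- NOT exposed in the
# VC3 CLI; eth0/eth1 and 192.0.2.2/24 are hypothetical examples.
modprobe bonding mode=active-backup miimon=100    # loads driver, creates bond0
ip link set eth0 down                             # slaves must be down to enslave
ip link set eth1 down
echo +eth0 > /sys/class/net/bond0/bonding/slaves  # enslave first NIC
echo +eth1 > /sys/class/net/bond0/bonding/slaves  # enslave second NIC
ip addr add 192.0.2.2/24 dev bond0
ip link set bond0 up
```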
-- 

Daniel Stickney - Linux Systems Administrator
Email: [EMAIL PROTECTED]
Cell: 720.422.2732 Work: 303.497.9369 

_______________________________________________
Vyatta-users mailing list
Vyatta-users@mailman.vyatta.com
http://mailman.vyatta.com/mailman/listinfo/vyatta-users
