The main reason for using separate switches for private node communication is
to isolate hardware failures and ensure that redundancy prevails. Running
private interconnects through patch panels increases the number of potential
points of failure. With reliable hardware, however, running each private
interconnect through separate patch panels/core switches would be a
reasonable approach.

Unless you have CFS or Oracle RAC clusters that rely heavily on the cluster
interconnects, the bandwidth consumed by LLT/GAB communication would be
minimal. For clusters with a growing number of nodes, your new approach would
be the best from a space-provisioning and maintenance point of view. If need
be, LLT does provide tunable options that could be tweaked based on Symantec
tech support recommendations.
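For illustration only, a Solaris /etc/llttab adjusting a heartbeat timer might
look something like the sketch below. The node name, cluster ID, interface
names, and the doubled timer value are all hypothetical, and any timer change
should be made only on Symantec support's recommendation:

```
# /etc/llttab -- hypothetical example; node name, cluster ID, and
# interfaces will differ in your environment.
set-node node01
set-cluster 42
link qfe1 /dev/qfe:1 - ether - -
link qfe2 /dev/qfe:2 - ether - -
# Timer values are in units of 0.01 seconds. Raising peerinact from
# its default (1600 = 16s) gives more tolerance for latency/jitter on
# longer cable runs -- change only per vendor guidance.
set-timer peerinact:3200
```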

Some of the HP-UX test clusters that we use for SFRAC testing have their
interconnects running through patch panels. No issues have been observed so
far with the default LLT configuration.

- Rajesh

On Wed, Jul 13, 2011 at 10:15 PM, <[email protected]> wrote:

> Dear List Members,
>
> We are redoing our datacenter wiring to add lots more structure. Within
> this datacenter we have several Windows and Solaris VCS 5.x multinode
> clusters. Currently, the heartbeat (GAB/LLT) switches for these clusters are
> located within the same rack or at most 1 rack distant from the servers they
> are monitoring, and those switches are directly cabled to those servers.
>
> We are considering consolidating all switches within the network racks, not
> the server racks. This means that the heartbeat switch cabling would attach
> to a rack patch panel, then a 40-foot cable run to the network rack patch
> panel, then to its heartbeat switch.
>
> Does anyone have positive or negative experience with heartbeat switches
> patch-panel-separated from the servers they are monitoring?  In the best of
> all worlds it would be great to have no patch panel connections between
> server and switch, so I'm looking for real world experience with the use of
> intermediate patch panel cabling. Are there capacity or extensibility
> "gotchas"?  In particular, if we grow the clusters to 5 or 6 nodes, will the
> patch-panel approach fail?
>
> Thanks in advance for your comments,
>
> Andy Colb
> Investment Company Institute
>
> _______________________________________________
> Veritas-ha maillist  -  [email protected]
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha
>
>