I have 5 x T640s running JUNOS 8.2 and am adding some 10/100
aggregation switches.  These switches would be arranged in rings
(say, three per ring), and two switches in each ring would have gig
uplinks to different T640s.  The switches would run a version of
spanning tree (likely RSTP, unless they can do PVST).  Dunno if the
format will look right for everyone, but it should resemble:

T640----------T640
 |              |
switch------switch
     \      /
      switch

A 10/100 customer would connect to a switch, traffic would be QinQ'd
(stacked VLANs) up to the Juniper, where the outer tag would be
popped, and the traffic would be dumped into a VPLS routing-instance.
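   Roughly what I have in mind on the T640 side; the interface names,
VLAN IDs, and RD/target values below are made-up placeholders, and I'm
going from memory on the syntax, so treat it as a sketch:

    interfaces {
        ge-1/0/0 {
            flexible-vlan-tagging;
            encapsulation flexible-ethernet-services;
            unit 100 {
                encapsulation vlan-vpls;
                vlan-tags outer 100 inner 200;  # customer's stacked tags
                input-vlan-map pop;             # pop the outer tag on ingress
                output-vlan-map push;           # push it back on egress
                family vpls;
            }
        }
    }
    routing-instances {
        CUST-A {
            instance-type vpls;
            interface ge-1/0/0.100;
            route-distinguisher 65000:100;
            vrf-target target:65000:100;
            protocols {
                vpls {
                    site cust-a-site1 {
                        site-identifier 1;
                    }
                }
            }
        }
    }

The idea being one unit (and one VPLS instance) per customer outer
tag, so each customer's MACs land in their own table.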
   While the T640s have 2GB of memory, I find myself worrying about
long-term memory problems from MAC table growth as more customers are
added over time.  I imagine this is very case-specific and obviously
depends on what else the T640s are doing, but does anyone have
real-world examples of how they mitigated MAC table size issues with
a growing customer base?  I realize there are knobs for setting MAC
table sizes and aging times, but is that generally how it's done?
Has anyone noticed their MAC tables taking up a lot of memory, and if
so, how much and for (roughly) how many customers?
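   The knobs I was thinking of would sit under each VPLS instance; the
numbers here are arbitrary and the statement names are from memory, so
double-check them against your release:

    routing-instances {
        CUST-A {
            protocols {
                vpls {
                    mac-table-size 4096;        # cap on MACs learned in this instance
                    interface-mac-limit 1024;   # cap on MACs learned per interface
                    mac-table-aging-time 300;   # seconds before idle MACs age out
                }
            }
        }
    }

If I remember right the default table size is 512 MACs per instance,
and "show vpls mac-table" shows what's actually been learned, which
should make it easy to spot which instances are growing.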
   We're early in our deployment and I'm trying to make sure it's done
right the first time.  Thanks for any thoughts/experiences you might
share.

David