On 29 Oct 2015, at 12:49, Mark Tinka <mark.ti...@seacom.mu> wrote:
>
> On 29/Oct/15 14:22, Cydon Satyr wrote:
>
>> Oh wow.
>>
>> Any real drawbacks to running something like 32x10Gbps LAG link in core
>> instead of higher bandwidth physical links? Just seems so unreal.
>
> Folk like AMS-IX have publicly acknowledged running 32x 10Gbps on their
> exchange point (albeit, with Brocade) right before 100Gbps ports became
> viable.
>
> I suppose the biggest issue will be how you hash equally across all of
> the links, especially if much of the traffic being carried is inherently
> Layer 2 (despite having a Layer 3 payload).
>
> Mark.
> _______________________________________________
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
I believe LINX were using 64x10G LAGs between the core nodes on the Juniper LAN in London before the upgrade to smaller 100G bundles.

I guess the biggest benefit of using 100G interfaces instead of 10x10G is that you can support individual flows >10G, since per-flow hashing pins any single flow to one member link.

Edward Dore
Freethought Internet
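To illustrate the point about flows being limited by member link speed: a LAG typically hashes each packet's flow identifiers (e.g. the 5-tuple) to pick a member link, so every packet of a given flow lands on the same member. The sketch below is a toy model of that behaviour, not how any particular vendor's ASIC computes its hash; the function name and use of SHA-256 are illustrative assumptions.

```python
import hashlib

def lag_member(flow, n_members):
    """Toy per-flow LAG hashing: map a flow's 5-tuple to a member link.

    Because the mapping depends only on the flow's fields, all packets
    of one flow take the same member link, so a single flow can never
    use more than one member's capacity (e.g. 10G on a 32x10G bundle).
    NOTE: illustrative only; real hardware uses vendor-specific hashes.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_members

# Two flows differing only in source port may hash to different members,
# but each individual flow is always pinned to exactly one member.
flow_a = ("192.0.2.1", "198.51.100.7", 6, 49152, 443)  # proto 6 = TCP
assert lag_member(flow_a, 32) == lag_member(flow_a, 32)
assert 0 <= lag_member(flow_a, 32) < 32
```

This is also why the Layer 2 point in the quoted message matters: if the hardware can only see Layer 2 fields (e.g. MAC addresses), many distinct Layer 3 flows collapse onto the same hash input and the load spreads poorly across the 32 members.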