Re: Juniper Config Commit causes Cisco Etherchannels to go into err-disable state

2018-04-06 Thread Marian Ďurkovič
Please see the link below; that ugly hack should be disabled ASAP on all your Cisco boxes: https://supportforums.cisco.com/t5/lan-switching-and-routing/spanning-tree-etherchannel-guard-misconfig/td-p/1147273 MD On Fri, 6 Apr 2018 11:31:17 -0700, Keenan Tims wrote > What it's telling you is
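The feature in question is a single global toggle on Catalyst platforms. A minimal sketch for pushing the change fleet-wide, assuming Netmiko plus placeholder hostnames and credentials (any config-push tooling you already run will do):

    # Sketch: disable the EtherChannel guard (enabled by default on Catalyst)
    # on a list of switches.  Netmiko, the host list and the credentials are
    # assumptions -- substitute whatever tooling you already use.
    from netmiko import ConnectHandler

    SWITCHES = ["sw1.example.net", "sw2.example.net"]      # placeholder hosts
    CREDS = {"username": "admin", "password": "secret"}    # placeholder creds

    for host in SWITCHES:
        conn = ConnectHandler(device_type="cisco_ios", host=host, **CREDS)
        out = conn.send_config_set(["no spanning-tree etherchannel guard misconfig"])
        # A softer alternative is to keep the guard but let err-disabled ports
        # recover automatically, e.g.:
        #   errdisable recovery cause channel-misconfig
        #   errdisable recovery interval 60
        print(host, out)
        conn.disconnect()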

Severe packet loss Amazon AWS->Telia

2018-02-21 Thread Marian Ďurkovič
Hi, we're experiencing severe packet loss on downloads from Amazon AWS via the Telia network; tcpdump regularly shows multiple missing packets triggering SACKs like this: 08:33:48.935415 IP 147.175.167.x.57668 > 54.231.33.x.443: Flags [.], ack 1051088, win 16384, options [nop,nop,sack 4
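A quick way to see how bad the holes are is to tally the SACK blocks per ACK straight from tcpdump's text output. A minimal sketch (the capture file and filter shown in the comment are only examples):

    # Sketch: count SACK blocks per ACK from tcpdump text output, e.g.
    #   tcpdump -nr capture.pcap 'tcp port 443' | python3 sack_tally.py
    # The capture file and filter above are assumptions.
    import re
    import sys
    from collections import Counter

    SACK_RE = re.compile(r"\bsack (\d+)\b")   # tcpdump prints "sack <N> {a:b}..."

    histogram = Counter()
    for line in sys.stdin:
        m = SACK_RE.search(line)
        if m:
            histogram[int(m.group(1))] += 1

    for blocks, count in sorted(histogram.items()):
        print(f"{count:8d} ACKs carried {blocks} SACK block(s)")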

Re: 40G reforming

2018-02-05 Thread Marian Ďurkovič
Many switches based on the BCM Trident ASIC allow you to configure 4 consecutive SFP+ ports as a 40G link (not LACP, but using real hardware 40G framing). In that case, you can plug 4 DWDM SFP+ modules directly into the switch, without the need for any reformer. M. On Mon, 5 Feb 2018 20:03:33

Re: DWDM Mux/Demux using 40G Optics

2017-06-19 Thread Marian Ďurkovič
On Mon, 19 Jun 2017 13:26:55 -0500, Colton Conor wrote [snip] > Maybe I should just ditch the 40G QSFP+ optics and use all 10G optics, > but the switches I am using have 48 10G SFP+ ports and 6 QSFP+ ports > built in. I know there are 40G breakout cables, but the whole point of > 40G is to

Re: 10G switch drops traffic for a split second

2016-12-01 Thread Marian Ďurkovič
On Wed, Nov 30, 2016 at 11:58:06AM -0500, Lee wrote: > On 11/30/16, Mikael Abrahamsson wrote: > > If your switch is the typical small-buffered-switch that has become more > > and more common the past few years, then the entire switch might have > > buffer to keep packets for
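It's easy to put a number on "how long the buffer can keep packets": shared buffer divided by the rate at which arrivals exceed the egress port. A back-of-the-envelope sketch, where the 12 MB buffer is only an assumed, typical Trident-class figure:

    # Sketch: how long a small shared buffer lasts when traffic arrives faster
    # than an egress port can drain it.  The 12 MB buffer is an assumption
    # (a typical Trident-class figure); check your ASIC's datasheet.
    BUFFER_BYTES = 12 * 1024 * 1024    # shared packet buffer (assumed)
    EXCESS_BPS = 10e9                  # arrivals exceed egress by 10 Gb/s,
                                       # e.g. two 10G ports bursting into one

    fill_time = BUFFER_BYTES * 8 / EXCESS_BPS
    print(f"Buffer fills in {fill_time * 1e3:.1f} ms; a longer burst is dropped")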

Re: MPLS in the campus Network?

2016-10-22 Thread Marian Ďurkovič
On Sat, 22 Oct 2016 21:29:22 +0200, Mark Tinka wrote > On 21/Oct/16 19:02, Javier Solis wrote: > > With that said, what are the best options to be able to cost-effectively > > scale without using VLANs and maintaining a routed core? What technology > > would someone suggest (MPLS, VXLAN, etc.) to

Re: MPLS in the campus Network?

2016-10-21 Thread Marian Ďurkovič
> Compared to MPLS, an L2 solution with 100 Gb/s interfaces between > core switches and a 10G connection for each building looks so much > cheaper. But we worry about future trouble using TRILL, SPB, or other > technologies, not only the "open" ones, but specifically the proprietary > ones based

Re: 100Gb/s TOR switch

2015-04-08 Thread Marian Ďurkovič
Wait for switches with BCM Tomahawk ASICs. They'll support exactly what you're looking for. M. On Wed, 08 Apr 2015 21:01:59 +0200, Piotr wrote Hi, is there something like this on the market? Looking for a standalone switch, 1/2U, ca. 40 ports of 10Gb/s and about 4 ports of 100Gb/s, fixed or as a

Re: Recommended L2 switches for a new IXP

2015-01-20 Thread Marian Ďurkovič
On Mon, Jan 19, 2015 at 09:37:35PM -0500, Phil Bedard wrote: I think in fairly short order both TRILL and 802.1AQ will be deprecated in favor of VXLAN with BGP EVPN as the control plane, a la Juniper QFX5100/Nexus 9300. We also evaluated VXLAN for IXP deployment, since Trident-2

Re: Recommended L2 switches for a new IXP

2015-01-19 Thread Marian Ďurkovič
On Sat, Jan 17, 2015 at 09:15:04PM +0200, Saku Ytti wrote: On (2015-01-17 12:02 +0100), Marian Ďurkovič wrote: Our experience after 100 days of production has been nothing but positive - TRILL setup is pretty straightforward, and thanks to IS-IS it provides shortest-path, IP-like routing for L2

Re: Recommended L2 switches for a new IXP

2015-01-17 Thread Marian Ďurkovič
Last year we installed four 1RU TRILL switches in SIX - see http://www.six.sk/images/trill_ring.png Our experience after 100 days of production has been nothing but positive - TRILL setup is pretty straightforward, and thanks to IS-IS it provides shortest-path, IP-like routing for L2 Ethernet packets over

Re: Shady areas of TCP window autotuning?

2009-03-18 Thread Marian Ďurkovič
On Tue, 17 Mar 2009 10:39:13 -0500, Leo Bicknell wrote So at the end of the day, we'll again have a system which is unable to achieve good performance over high BDP paths, since with reduced buffers we'll have an underbuffered bottleneck in the path which will prevent full link
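The underbuffered-bottleneck point is the classic buffer-vs-BDP rule of thumb, and it's easy to work through with rough numbers (the link speed, RTT and buffer size below are illustrative assumptions):

    # Sketch: a drop-tail bottleneck needs roughly one bandwidth-delay product
    # of buffer for a single TCP flow to keep the link full across its
    # sawtooth.  All numbers below are illustrative assumptions.
    LINK_BPS = 1e9          # 1 Gb/s bottleneck
    RTT_S    = 0.100        # 100 ms path
    BUFFER_B = 256 * 1024   # buffer actually present at the bottleneck

    bdp_bytes = LINK_BPS * RTT_S / 8
    print(f"BDP               : {bdp_bytes / 1e6:.1f} MB")
    print(f"Bottleneck buffer : {BUFFER_B / 1e6:.2f} MB")

    # Right after a loss, a Reno-style flow halves cwnd, so its sending rate
    # drops to roughly this fraction of the link (the bottom of the sawtooth);
    # only a buffer of about one BDP keeps that floor at 100%.
    floor = min(1.0, (bdp_bytes + BUFFER_B) / (2 * bdp_bytes))
    print(f"Post-loss rate floor: {floor:.0%} of link rate")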

Re: Shady areas of TCP window autotuning?

2009-03-17 Thread Marian Ďurkovič
On Mon, Mar 16, 2009 at 09:09:35AM -0500, Leo Bicknell wrote: Many edge devices have queues that are way too large. What appears to happen is vendors don't auto-size queues. Something like a cable or DSL modem may be designed for a maximum speed of 10Mbps, and the vendor sizes the queue
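The flip side is what an oversized, non-auto-sized queue does to latency: a queue dimensioned for the modem's maximum rate takes far longer to drain when the line actually runs slower. A quick sketch with illustrative numbers:

    # Sketch: worst-case queueing delay of a fixed-size queue at different
    # line rates.  A queue sized for a modem's 10 Mb/s maximum becomes a
    # multi-second delay when the line syncs at 1 Mb/s.  The 256 KB queue
    # size is an illustrative assumption.
    QUEUE_BYTES = 256 * 1024

    for rate_bps in (10e6, 1e6):
        delay = QUEUE_BYTES * 8 / rate_bps
        print(f"{rate_bps / 1e6:4.0f} Mb/s line: full queue adds {delay * 1e3:.0f} ms of delay")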

Shady areas of TCP window autotuning?

2009-03-16 Thread Marian Ďurkovič