Re: [j-nsp] Traffic delayed
I posted to both lists because we had Cisco and we now have Juniper, so perhaps this is something a Cisco guru would recognise as well. Unfortunately I do not have an MX960 in the lab; there are no CPU spikes and no relevant logs. An upgrade is not on the roadmap since we are on 16.1.

Cheers

On Tue, 2 Oct 2018 at 21:19, James Bensley wrote:
> On Tue, 2 Oct 2018 at 19:59, james list wrote:
> >
> > Can you elaborate?
> > Why does the issue occur only every 30 minutes?
>
> Seeing as you have an all-Juniper setup, I don't think there is a need
> to cross-post to two lists simultaneously. If you feel there is a
> need, please post to the two lists separately, as not all subscribers
> will be subscribed to both lists.
>
> What basic troubleshooting have you done so far? What have you ruled out?
>
> The very first thing you should have done is try to replicate the
> issue in the lab. Can you replicate it?
>
> If yes, have you tried a code upgrade to see if this fixes anything?
> Or changing any settings?
>
> If not, and you've only got the issue in production, can you enable
> some logging to see if there is anything in the logs when the issue
> happens? Do you see any packet drops on interfaces when the issue
> happens? CPU spikes? Anything?
>
> So far you haven't provided any data at all on the problem or on what
> you have tried to do to resolve it before coming to the list.
>
> Cheers,
> James.
> ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Traffic delayed
What kind of traffic is delayed? Unicast or multicast? MAC table entries usually have timeouts driven by traffic, and flooding may occur when they expire. You can check whether any ARP entries are expiring and needing to be refreshed on a roughly 30-minute interval. For multicast, check whether any prunes or joins are happening around that time, and whether there are any IGMP joins or prunes around the same interval.

On Tue, Oct 2, 2018 at 9:38 AM james list wrote:
> Dear experts,
>
> I have a strange issue.
>
> Our customer replaced two L2/L3 switches (C6500), where a pure L2 and L3
> (HSRP) environment was set up, with a couple of new MX9k running the same
> L2 and L3 services, but those two MXes run MPLS/VPLS to transport the
> L3/L2 frames. Access switches are QFX5k connected to the MX MPLS PEs.
>
> Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33,
> sometimes 30), the customer detects some frames received with a delay of
> 3-600 milliseconds. The customer is a trading venue.
>
> It seems like something slows down the forwarding processing. I know
> Juniper separates forwarding and control, but I was thinking of OSPF LSA
> refresh or something like that, since the frequency is around 30 minutes.
>
> Can anybody help me sort out what the main cause could be here?
>
> Thanks in advance.
>
> Cheers,
>
> James
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
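[For anyone wanting to chase the ARP/MAC/IGMP theory above, a minimal set of Junos checks along these lines; a sketch only, and the assumption that the MX side uses bridge domains and the QFX side the ethernet-switching table is hypothetical:

  show arp no-resolve              # ARP entries; watch for entries expiring on a ~30-minute cycle
  show configuration system arp    # any non-default ARP aging-timer
  show bridge mac-table            # MAC learning/aging on the MX
  show ethernet-switching table    # MAC learning/aging on the QFX5k access switches
  show igmp group                  # IGMP state around the time of the delay
  show pim join summary            # PIM joins/prunes around the same time

Correlating the aging timers with the observed 28-33 minute period is the quickest way to confirm or rule this out.]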
Re: [j-nsp] BFD Distributed Mode for IPv6
❦ 2 October 2018 23:07 +0200, Mark Tinka:

>> Dunno with IS-IS, but with BGP, BFD is advertised as control plane
>> independent when using public IPv6 addresses. However, I've just noticed
>> that when using an IRB, BFD is handled by the control plane, both for
>> IPv6 and IPv4.
>
> It's clear that whether the BFD session is run in the PFE or the RE has
> a lot to do with how it was learned.

I am unsure whether you are saying that about IS-IS vs BGP, or about the difference with/without an IRB. If the latter: in both cases it was learnt from a BGP session. The only difference between the two setups is that the local IP of the BGP session is on an IRB interface.
--
The only way to keep your health is to eat what you don't want, drink what you don't like, and do what you'd rather not. -- Mark Twain
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
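[A quick way to compare the two setups is to check which BFD sessions Junos has delegated to the PFE; a sketch (the match strings are just illustrative):

  show bfd session extensive | match "Destination|Distributed"

Sessions showing Distributed: TRUE are running in the PFE (periodic packet management delegated to the line card); Distributed: FALSE means the RE is servicing the session.]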
Re: [j-nsp] BFD Distributed Mode for IPv6
On 2/Oct/18 23:01, Vincent Bernat wrote: > Dunno with IS-IS, but with BGP, BFD is advertised as control plane > independent when using public IPv6 addresses. However, I've just noticed > that when using an IRB, BFD is handled by the control plane, both for > IPv6 and IPv4. It's clear that whether the BFD session is run in the PFE or the RE has a lot to do with how it was learned. Mark. ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
❦ 2 octobre 2018 20:13 +0100, James Bensley : >> > Not exactly your scenario but we had the same problems with eBGP with >> > IPv6 link-local addresses on QFX10K platform. >> > Dev Team had replied that rather than hardware limitation it's more of >> > a "design decision" to not distribute IPv6 LL BFD sessions on PFEs, >> > it's the same behaviour across the MX/QFX/PTX portfolio and there are >> > no plans to change it. > > I'd be interested to know if BFD works OK if you use public IPv6 > addresses for IS-IS adjacencies (although it's a waste of IPs, I'd > still be curious). Dunno with IS-IS, but with BGP, BFD is advertised as control plane independent when using public IPv6 addresses. However, I've just noticed that when using an IRB, BFD is handled by the control plane, both for IPv6 and IPv4. -- The difference between a Miracle and a Fact is exactly the difference between a mermaid and a seal. -- Mark Twain ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
On 2/Oct/18 21:13, James Bensley wrote: > I presume that if one were to run MT-ISIS there would be no impact to IPv4? We already run MT for IS-IS. I consider this as basic a requirement as "Wide Metrics". However, the issue here is BFD sees the whole of IS-IS as a client. So if BFD has a moment, it will signal its client (IS-IS), regardless of whether the moment was for IPv4 or IPv6. I imagine that re-running adjacencies and SPF just for the IPv6 topology would be a vendor-specific solution to the problem. However, wouldn't it just be easier to support BFD for IPv6 in the PFE as Juniper already does for IPv4? > I'd be interested to know if BFD works OK if you use public IPv6 > addresses for IS-IS adjacencies (although it's a waste of IPs, I'd > still be curious). Interesting. What I do know is that if you are running BFD for static IPv6 routes, it runs in the PFE. But if the routes are learned via an IGP (IS-IS or OSPFv3), it can only run in the RE. Mark. ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
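[For illustration, a minimal Junos sketch of the two pieces mentioned above, with hypothetical addresses: MT-ISIS so that IPv4 and IPv6 run separate topologies, and BFD attached to an IPv6 static route (which, as noted, does get distributed to the PFE):

  set protocols isis topologies ipv6-unicast
  set routing-options rib inet6.0 static route 2001:db8::/32 next-hop 2001:db8:0:1::1
  set routing-options rib inet6.0 static route 2001:db8::/32 bfd-liveness-detection minimum-interval 300

With MT-ISIS, an IPv6 BFD failure can take down only the IPv6 topology, which addresses the "BFD sees the whole of IS-IS as a client" problem described above.]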
Re: [j-nsp] Traffic delayed
There is no such product as an MX9K. Is your product some form of MX, or an EX9200 of some type? In either case we would need to know which exact modules within the MX or EX product you are using or are involved. When you were using the CAT6K, was the edge QFX5100 previously as well? I assume the QFX5100 is just L2?

Sent from my iPhone

> On Oct 2, 2018, at 12:37 PM, james list wrote:
>
> Dear experts,
>
> I have a strange issue.
>
> Our customer replaced two L2/L3 switches (C6500), where a pure L2 and L3
> (HSRP) environment was set up, with a couple of new MX9k running the same
> L2 and L3 services, but those two MXes run MPLS/VPLS to transport the
> L3/L2 frames. Access switches are QFX5k connected to the MX MPLS PEs.
>
> Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33,
> sometimes 30), the customer detects some frames received with a delay of
> 3-600 milliseconds. The customer is a trading venue.
>
> It seems like something slows down the forwarding processing. I know
> Juniper separates forwarding and control, but I was thinking of OSPF LSA
> refresh or something like that, since the frequency is around 30 minutes.
>
> Can anybody help me sort out what the main cause could be here?
>
> Thanks in advance.
>
> Cheers,
>
> James
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Traffic delayed
On Tue, 2 Oct 2018 at 19:59, james list wrote:
>
> Can you elaborate?
> Why does the issue occur only every 30 minutes?

Seeing as you have an all-Juniper setup, I don't think there is a need to cross-post to two lists simultaneously. If you feel there is a need, please post to the two lists separately, as not all subscribers will be subscribed to both lists.

What basic troubleshooting have you done so far? What have you ruled out?

The very first thing you should have done is try to replicate the issue in the lab. Can you replicate it?

If yes, have you tried a code upgrade to see if this fixes anything? Or changing any settings?

If not, and you've only got the issue in production, can you enable some logging to see if there is anything in the logs when the issue happens? Do you see any packet drops on interfaces when the issue happens? CPU spikes? Anything?

So far you haven't provided any data at all on the problem or on what you have tried to do to resolve it before coming to the list.

Cheers,
James.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
On Tue, 2 Oct 2018 at 16:39, Mark Tinka wrote: > The real-world problem we are seeing is when, for whatever reason, the > RE CPU spikes and BFD for IPv6 sneezes, we also lose IPv4 because, well, > IS-IS integrates both IP protocols. I presume that if one were to run MT-ISIS there would be no impact to IPv4? > On 2/Oct/18 15:30, Виталий Венгловский wrote: > > > Mark, > > > > Not exactly your scenario but we had the same problems with eBGP with > > IPv6 link-local addresses on QFX10K platform. > > Dev Team had replied that rather than hardware limitation it's more of > > a "design decision" to not distribute IPv6 LL BFD sessions on PFEs, > > it's the same behaviour across the MX/QFX/PTX portfolio and there are > > no plans to change it. I'd be interested to know if BFD works OK if you use public IPv6 addresses for IS-IS adjacencies (although it's a waste of IPs, I'd still be curious). Cheers, James (not currently near the lab). ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Traffic delayed
Can you elaborate? Why does the issue occur only every 30 minutes?

On Tue, 2 Oct 2018 at 20:34, Tom Beecher wrote:
> You have switches with completely different buffer depths than you used
> to. You probably want to look into that.
>
> On Tue, Oct 2, 2018 at 9:39 AM james list wrote:
>
>> Dear experts,
>>
>> I have a strange issue.
>>
>> Our customer replaced two L2/L3 switches (C6500), where a pure L2 and L3
>> (HSRP) environment was set up, with a couple of new MX9k running the same
>> L2 and L3 services, but those two MXes run MPLS/VPLS to transport the
>> L3/L2 frames. Access switches are QFX5k connected to the MX MPLS PEs.
>>
>> Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33,
>> sometimes 30), the customer detects some frames received with a delay of
>> 3-600 milliseconds. The customer is a trading venue.
>>
>> It seems like something slows down the forwarding processing. I know
>> Juniper separates forwarding and control, but I was thinking of OSPF LSA
>> refresh or something like that, since the frequency is around 30 minutes.
>>
>> Can anybody help me sort out what the main cause could be here?
>>
>> Thanks in advance.
>>
>> Cheers,
>>
>> James
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
Hi,

On Tue, Oct 02, 2018 at 05:39:06PM +0200, Mark Tinka wrote:
> On 2/Oct/18 17:05, Gert Doering wrote:
>
> > "nobody is using this for real, so just make sure we can tick the
> > marks on the customers' shopping lists"
>
> The real-world problem we are seeing is when, for whatever reason, the
> RE CPU spikes and BFD for IPv6 sneezes, we also lose IPv4 because, well,
> IS-IS integrates both IP protocols.

Aw. This is particularly annoying...

> It's really terrible that in 2018, Juniper feel it's not important to
> make IPv6 as stable as IPv4. The days I wish network operators ran
> equipment vendors...

This.

gert
--
"If was one thing all people took for granted, was conviction that if you feed honest figures into a computer, honest figures come out. Never doubted it myself till I met a computer with a sense of humor." Robert A. Heinlein, The Moon is a Harsh Mistress
Gert Doering - Munich, Germany g...@greenie.muc.de
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Traffic delayed
You have switches with completely different buffer depths than you used to. You probably want to look into that.

On Tue, Oct 2, 2018 at 9:39 AM james list wrote:
> Dear experts,
>
> I have a strange issue.
>
> Our customer replaced two L2/L3 switches (C6500), where a pure L2 and L3
> (HSRP) environment was set up, with a couple of new MX9k running the same
> L2 and L3 services, but those two MXes run MPLS/VPLS to transport the
> L3/L2 frames. Access switches are QFX5k connected to the MX MPLS PEs.
>
> Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33,
> sometimes 30), the customer detects some frames received with a delay of
> 3-600 milliseconds. The customer is a trading venue.
>
> It seems like something slows down the forwarding processing. I know
> Juniper separates forwarding and control, but I was thinking of OSPF LSA
> refresh or something like that, since the frequency is around 30 minutes.
>
> Can anybody help me sort out what the main cause could be here?
>
> Thanks in advance.
>
> Cheers,
>
> James
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
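[If buffer depth is the suspect, a sketch of where to look on the Junos side (the interface name is hypothetical); per-queue drop counters incrementing on the ~30-minute cycle would point at congestion rather than a control-plane event:

  show interfaces queue xe-0/0/0
  show class-of-service interface xe-0/0/0]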
[j-nsp] Traffic delayed
Dear experts,

I have a strange issue.

Our customer replaced two L2/L3 switches (C6500), where a pure L2 and L3 (HSRP) environment was set up, with a couple of new MX9k running the same L2 and L3 services, but those two MXes run MPLS/VPLS to transport the L3/L2 frames. Access switches are QFX5k connected to the MX MPLS PEs.

Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33, sometimes 30), the customer detects some frames received with a delay of 3-600 milliseconds. The customer is a trading venue.

It seems like something slows down the forwarding processing. I know Juniper separates forwarding and control, but I was thinking of OSPF LSA refresh or something like that, since the frequency is around 30 minutes.

Can anybody help me sort out what the main cause could be here?

Thanks in advance.

Cheers,

James
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
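[One way to test the LSA-refresh theory raised above is simply to compare LSA ages against the event times; a purely illustrative sketch:

  show ospf database          # watch the Age column; LSAs are refreshed as they approach 1800 seconds
  show log messages | last 50 # anything logged at the moment a delay is observed

The OSPF LSA refresh interval (LSRefreshTime) is 1800 seconds, i.e. 30 minutes, which is presumably why it is suspected here; the 28-33 minute jitter in the observations would also be consistent with per-LSA refresh timing.]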
Re: [j-nsp] JunOS recommendations
I think the answer would depend upon the feature set you might want to use, or on specific hardware. If neither of these matters, I think 16.2R2-S6 is probably the safest choice. If you need newer features/functions, I might suggest 18.1R3.

Good luck, and no promises!

Rich

On 10/2/18, 11:44 AM, "Matthew Crocker" wrote:

    Hello,

    I'm running an Enhanced MX480 Midplane with RE-S-2X00x6 and Enhanced MX SCB 2, with MPC3E NG PQ & Flex Q and MPC7E 3D MRATE-12xQSFPP-XGE-XLGE-CGE cards. Currently running Junos 15.1F7.3. It appears stable, but I don't have it in production yet and want to upgrade if needed before it sees live traffic.

    I'm looking for a recommended stable JunOS release that supports an ISP workload (IPv4, IPv6, BGP (DE-CIX-NYC peers), IS-IS, MPLS).

    Thanks,
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
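[If an upgrade is attempted, a minimal sketch of the usual sequence; the image filename here is hypothetical:

  show version
  request system software add /var/tmp/junos-install-mx-x86-64-18.1R3.tgz validate
  request system reboot

The validate option checks the candidate image against the running configuration before committing to it.]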
[j-nsp] JunOS recommendations
Hello,

I'm running an Enhanced MX480 Midplane with RE-S-2X00x6 and Enhanced MX SCB 2, with MPC3E NG PQ & Flex Q and MPC7E 3D MRATE-12xQSFPP-XGE-XLGE-CGE cards. Currently running Junos 15.1F7.3. It appears stable, but I don't have it in production yet and want to upgrade if needed before it sees live traffic.

I'm looking for a recommended stable JunOS release that supports an ISP workload (IPv4, IPv6, BGP (DE-CIX-NYC peers), IS-IS, MPLS).

Thanks,
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
On 2/Oct/18 17:05, Gert Doering wrote:

> "nobody is using this for real, so just make sure we can tick the
> marks on the customers' shopping lists"

The real-world problem we are seeing is when, for whatever reason, the RE CPU spikes and BFD for IPv6 sneezes, we also lose IPv4 because, well, IS-IS integrates both IP protocols.

It's really terrible that in 2018, Juniper feel it's not important to make IPv6 as stable as IPv4. The days I wish network operators ran equipment vendors...

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] BFD Distributed Mode for IPv6
Hi,

On Tue, Oct 02, 2018 at 03:37:15PM +0200, Mark Tinka wrote:
> Definitely makes the case for anyone that says IPv6 is not as ready as
> IPv4... did Juniper elaborate what "design decision" actually means?

"nobody is using this for real, so just make sure we can tick the marks on the customers' shopping lists"

Gh!

gert
--
"If was one thing all people took for granted, was conviction that if you feed honest figures into a computer, honest figures come out. Never doubted it myself till I met a computer with a sense of humor." Robert A. Heinlein, The Moon is a Harsh Mistress
Gert Doering - Munich, Germany g...@greenie.muc.de
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 15:38, adamv0...@netconsultings.com wrote:
> Hi folks,
>
> I'm glad the thread spun up an interesting discussion, but my original
> question was more around why anyone would prefer IntServ to DiffServ in
> an RSVP-TE environment.
> Personally I prefer RSVP-TE solely for TE purposes and QoS for QoS
> purposes, but there are always compromises (if you haven't found one,
> you're not looking hard enough), so here I am looking...

The way I looked at this when I began operating any kind of Internet network back in the day was one of scale.

IntServ sparked the development of RSVP, whose idea was for clients to signal the network and request resources so that their applications could be guaranteed performance. Of course, in the real world, it was soon obvious that your Windows laptop or your iPhone XS sending RSVP messages to the network will not scale well.

So detaching the need for resources from any co-ordinated knowledge between devices (clients and routers) about what those resources may be in relation to each other leads to having no state in the backbone. This became DiffServ, which I found more appealing than IntServ purely from an administration perspective.

Of course, times changed, and just as CLNS/CLNP were adapted to carry IP NLRI, RSVP was adapted to convey MPLS signaling.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
Hi folks,

I'm glad the thread spun up an interesting discussion, but my original question was more around why anyone would prefer IntServ to DiffServ in an RSVP-TE environment.
Personally I prefer RSVP-TE solely for TE purposes and QoS for QoS purposes, but there are always compromises (if you haven't found one, you're not looking hard enough), so here I am looking...

So a couple of facts first. If you're willing to use IntServ with RSVP-TE, then:
1) You're willing to introduce another set of constraints on how tunnels are routed across the core links, in terms of available BW in pools and sub-pools. These constraints are not static like, say, SRLGs or link colouring, but very dynamic, and will change over time.
2) You're willing to introduce another dynamic factor, namely the BW requirements of the tunnel; once again, this will change over time.

Of course both of these introduce a whole battalion of dynamic variables that all affect each other in various feedback loops. And since none of these variables is updated in real time, for scaling reasons (btw, there's no such thing as real time anyway), there will be all kinds of trailing effects and race conditions where the network does not respond to the actual situation on the links in a timely fashion. Not to mention the bin-packing problem, and the need to make the system even more granular (thereby introducing ever more state and churn) in order to manage the bin-packing problem.

The only advantage of IntServ I could think of is admission control. But again this seems a bit dubious considering the non-real-time nature of the solution. And besides, I'm not sure I'd ever want to be in a position where I allow my core links to max out and have TE try to shuffle flows around so that I can squeeze all the traffic in; sure, this would probably not be the case in day-to-day operation, but it would most likely be employed during link failures.

My conclusion: my experience is that designing and managing RSVP-TE solutions for backbones is very complicated even with statically pre-defined paths and failure modes. I can't even begin to imagine the level of complexity if all of this were one dynamic system; how would I know whether the system is performing as expected or intended, whether it's trying to work around some bottleneck, or whether what I'm looking at is just normal operation?

But maybe there are use cases that are indeed dependent on IntServ RSVP-TE. I'd like to hear your thoughts.

Thank you

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::

> -----Original Message-----
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of adamv0...@netconsultings.com
> Sent: Monday, October 01, 2018 11:16 AM
> To: juniper-nsp@puck.nether.net
> Subject: [j-nsp] Use cases for IntServ in MPLS backbones
>
> Hi folks,
>
> Another pooling question from me.
> This time I'm interested in what your thoughts are on DiffServ vs IntServ
> in MPLS backbones, and what use cases for IntServ you can think of.
> What I have in mind specifically is RSVP-TE in combination with DiffServ
> (standard QoS) vs IntServ (in all its glory, i.e. BW pools and sub-pools,
> allocation models RDM/MAM, etc.).
>
> adam
>
> netconsultings.com
> ::carrier-class solutions for the telecommunications industry::
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
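[For concreteness, a minimal Junos DiffServ-TE sketch of the kind of configuration being debated above; the names, addresses and values are all hypothetical:

  set protocols mpls diffserv-te bandwidth-model rdm
  set protocols mpls label-switched-path TO-PE2 to 192.0.2.2
  set protocols mpls label-switched-path TO-PE2 bandwidth ct0 200m
  set protocols rsvp interface ge-0/0/0.0 bandwidth 1g

Each LSP then consumes from the per-class-type pools on every link it crosses, which is exactly the extra, constantly changing constraint set the post describes.]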
Re: [j-nsp] BFD Distributed Mode for IPv6
On 2/Oct/18 15:30, Виталий Венгловский wrote: > Mark, > > Not exactly your scenario but we had the same problems with eBGP with > IPv6 link-local addresses on QFX10K platform. > Dev Team had replied that rather than hardware limitation it's more of > a "design decision" to not distribute IPv6 LL BFD sessions on PFEs, > it's the same behaviour across the MX/QFX/PTX portfolio and there are > no plans to change it. Thanks for the feedback, Vitaly. Definitely makes the case for anyone that says IPv6 is not as ready as IPv4... did Juniper elaborate what "design decision" actually means? Mark. ___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 14:03, Tim Cooper wrote:
> The QoS obligations have been pretty much cut and pasted from PSN into the
> HSCN obligations, if you haven't come across that yet. So look forward to
> that... ;)
>
> Tim C

Unfortunately, yes.

James.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
> On 2 Oct 2018, at 11:24, James Bensley wrote:
>
>> On Tue, 2 Oct 2018 at 10:57, Mark Tinka wrote:
>> I've never quite understood it when customers ask for 8 or even 16
>> classes. When it comes down to it, I've not been able to distill the
>> queues to more than 3. Simply put: High, Medium and Low. The 4th queue
>> is for the network itself.
>
> I'm not saying I agree with these 8 classes - just stating what it was
> :) I also agree that most people genuinely don't need more than 3-4.
> We often "helped" (nudged) customers to design their traffic into just
> a few classes.
>
> Here in the land of Her Majesty and cups of tea, if you want to
> operate as part of the Public Services Network (a national effort to
> provide unified services to the public sector across multiple
> providers to stamp out any monopoly) you must comply with their 6
> class model [1]:
> https://www.gov.uk/government/publications/psn-quality-of-service-qos-specification/psn-quality-of-service-qos-specification
>
> So starting from these 6 classes: we split voice signalling and media
> into two, with the media being an LLQ, and added a separate class to
> guarantee control- and management-plane traffic (e.g. we can still SSH
> to our routers while a customer DoS is filling the pipes), so we ended
> up with 8. Yay :(
>
> Cheers,
> James.
>
> [1] As is customary with any tech-savvy government, they've since
> sacked off various PSN standards without providing any replacement, so
> everyone is just sticking to the same expired standards for now.

The QoS obligations have been pretty much cut and pasted from PSN into the HSCN obligations, if you haven't come across that yet. So look forward to that... ;)

Tim C
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 14:49, Saku Ytti wrote:
> No one is saying you must do that. What I'm trying to say is: if you
> already do overlay QoS, then retaining DSCP should be best practice. If
> you cannot do overlay QoS, then of course you must reset.

Fair point.

In my case, I map all DSCP values used on ingress to corresponding EXP values for handling in the core. But at egress, it's all DSCP again.

> But just because there are some bad apples who can't configure their
> network doesn't mean we need to reset for everyone; reset it for them
> at egress, not at ingress.

I think this element has been discussed at length already...

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
[j-nsp] BFD Distributed Mode for IPv6
Hi all.

I've been going back & forth with JTAC on this, and they seem a bit lost. Does anyone know whether BFD for IPv6 (IS-IS) is supported in the PFE on the MX? On the router, it clearly isn't:

  Destination: aaa.bb.cc.2, Protocol: BFD, Transmission interval: 2000
    *Distributed: TRUE*, Distribution handle: 1257, Distribution address: fpc0
    IFL-index: 363
    Replicated

  Destination: fe80::1205:caff:fe85:c702, Protocol: BFD, Transmission interval: 2000
    *Distributed: FALSE*
    IFL-index: 363
    Tx Jitter: 25, Tx Delayed: FALSE, Delay Difference: 0, Last Tx Interval: 1667,
    Tx Delayed count: 0, Tx Interval Threshold: 2500, Tx Largest Interval: 0,
    Tx Counter: 1706639

JTAC are currently suggesting that increasing the timers for the IPv6 BFD sessions is the workaround. I'm looking into whether there is known support on the way. Anyone else know more?

Currently on 16.2, but jumping to 17.4 soon.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
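[For what it's worth, a sketch of the timer workaround JTAC suggested, applied per address family under IS-IS; the interface name and intervals are hypothetical. The inet6 session stays on the RE either way; the longer interval just makes it more likely to survive RE CPU spikes:

  set protocols isis interface ge-0/0/0.0 bfd-liveness-detection minimum-interval 100
  set protocols isis interface ge-0/0/0.0 family inet6 bfd-liveness-detection minimum-interval 1000]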
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 15:36, Mark Tinka wrote:
> In order to do UHP, a network has to run MPLS, and support it up to the
> final edge where IP/MPLS services are delivered. I know dozens of
> networks that will never run MPLS as a matter of principle, and these
> are not small networks on a global scale, by any means.

No one is saying you must do that. What I'm trying to say is: if you already do overlay QoS, then retaining DSCP should be best practice. If you cannot do overlay QoS, then of course you must reset.

But just because there are some bad apples who can't configure their network doesn't mean we need to reset for everyone; reset it for them at egress, not at ingress.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 14:25, adamv0...@netconsultings.com wrote:
> Mark, it's a simple premise based on live and let live.
>
> An MPLS-enabled core network allows operators to treat the traffic the
> way they want, regardless of what QoS markings the traffic arrives with
> at the AS borders. You just take whatever comes in on your internet
> ports and dump it in your MPLS core BE class; you don't need to care
> what the DSCP markings are on various packets within your BE pipe.
> Noise in, noise out; let the chips fall where they may.
> So I'd encourage everyone, if you can, to be part of the solution, not
> part of the problem (especially with regard to the BCP-38 and uRPF
> mentioned).

BCP-38 is much more achievable because you don't need to do anything more special than is supported in basic IP. In the worst case, go manual and build ACLs if you can't support uRPF.

In order to do UHP, a network has to run MPLS, and support it up to the final edge where IP/MPLS services are delivered. I know dozens of networks that will never run MPLS as a matter of principle, and these are not small networks on a global scale, by any means.

On my end, I have specific reasons (other than QoS) why I need the MPLS header popped at the penultimate hop, making UHP a non-starter for me.

That said, in practice, all customers that have requested that we honour DSCP values have been VPN customers. I imagine there are customers that need the same functionality when running their traffic over the general Internet, but I haven't heard from them... and the few that I know are probably running that in GRE in order to maintain that transparency and avoid what intermediate networks could do to their DSCP values.

But again: your network, your rules. If anyone can write to me to say that my remarking DSCP to 0 for all Internet traffic is breaking their network or customers on the far end of the TCP/IP session, I'll gladly listen.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 13:21, Alexandre Snarskii wrote:
> Retaining DSCP in Juniper networks has the consequence of not showing
> the egress router in traceroute. It's quite easy to classify a packet
> on ingress (based on interface), and it's quite easy to mark the packet
> with MPLS EXP and use that marking along the route, but at the egress
> router the picture is a bit different: in the normal situation, with
> penultimate-hop popping, the packet reaches the egress router without
> MPLS encapsulation, so you have to believe the external (customer's or
> peer's) marking to [mis]classify the packet. Of course, you can use
> ultimate-hop popping (explicit-null) so that MPLS EXP can be used at
> the egress router too, but in that case the egress router will not
> decrement the IP TTL or generate ICMP unreachable/TTL exceeded, and so
> will not be shown in traceroutes, making troubleshooting much harder.

And that is my point - if I want to use PHP or UHP within my network, that should be a choice I make not because I want to support DSCP honouring, but for other operational reasons that have a more direct impact on my revenue and customer experience.

There are networks that do not support MPLS. But saying that all networks that support MPLS should run UHP is like saying all networks that run MPLS should do LDP.

For me, the damage of allowing off-net DSCP values vs. the harmless fix to the general public on my network is not worth the extra time spent on it. I'd see a lot more benefit in pushing for Internet-wide RPKI deployment.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 14:22, Alexandre Snarskii wrote:
> marking to [mis]classify the packet. Of course, you can use ultimate-hop
> popping (explicit-null) so that MPLS EXP can be used at the egress
> router too, but in that case the egress router will not decrement the
> IP TTL or generate ICMP unreachable/TTL exceeded, and so will not be
> shown in traceroutes, making troubleshooting much harder.

You indeed must run explicit null if you have the full pipe model or really want to do any QoS based on the MPLS overlay. However, the TTL decrement issue isn't inherent; there are platforms whose hardware can't do it. And obviously you can't buy that hardware if you intend to do QoS on the overlay, so then you're definitely a network which does QoS on the underlay and must reset.

Trio handles explicit null and TTL decrement just fine. I think the PFC3B Catalyst did not, but the PFC3C Catalyst did. Unsure about ICHIP and IP2.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
> Of Mark Tinka
> Sent: Tuesday, October 02, 2018 11:56 AM
>
> On 2/Oct/18 12:48, Saku Ytti wrote:
>
> > You tell them that DSCP values do not affect forwarding or queueing
> > behaviour in your network.
>
> If only the whole Internet were run by just one AS.

Mark, it's a simple premise based on live and let live.

An MPLS-enabled core network allows operators to treat the traffic the way they want, regardless of what QoS markings the traffic arrives with at the AS borders. You just take whatever comes in on your internet ports and dump it in your MPLS core BE class; you don't need to care what the DSCP markings are on various packets within your BE pipe. Noise in, noise out; let the chips fall where they may.
So I'd encourage everyone, if you can, to be part of the solution, not part of the problem (especially with regard to the BCP-38 and uRPF mentioned).

This is achievable because MPLS (as a form of tunnelling) allows you to maintain various independent queuing models. So by definition, from the MPLS core perspective, I couldn't care less about what's going on at the edge; of course, the business logic (not the QoS markings, mind you) forces me to pay attention to QoS markings on packets that enter through some types of links. That's why the MPLS core usually only cares about whether traffic was voice, business-in-contract, business-out-of-contract or internet, in order to perform scheduling on core links.

Now, I'm aware that there are cases where you have to bleach certain DSCP markings at ingress; this is in cases where the internet traffic is carried in a native IPv4/6 core, as you mentioned. That's where, in fact, at ingress to your core, you should bleach the DSCP markings you're using internally (but that's a necessary evil).

adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
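[As a sketch of the "dump whatever arrives on an internet port into BE" idea described above (the interface name is hypothetical), Junos fixed classification per logical interface ignores the arriving markings entirely:

  set class-of-service interfaces ge-0/0/0 unit 0 forwarding-class best-effort]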
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 13:03, Saku Ytti wrote:
> You continue to miss the point. You brought antispoofing up as an
> example; it's the same thing: not everyone needs to do the right thing,
> anyone changing is better. And as I pointed out, the discussion isn't
> even 'how widely does it work; if it doesn't work widely we shouldn't
> do it' - that's illogical. The discussion should be 'it should work;
> should we transit DSCP values as-is'.
>
> If you globally test (RING, RIPE) how DSCP transits the Internet, it's
> not a horror story; it transits fairly well.

Having troubleshot 2 cases of this over several weeks each, it's not the kind of time I'd like to put in again while customers are suffering some form of outage to/from some parts of the Internet.

While BCP-38 is poorly deployed globally, a network in Russia not doing it does not - in the immediate term - have the kind of impact on a random user in Papua New Guinea that can't be easily and quickly diagnosed and/or fixed. So I'm more than happy to deploy BCP-38 in my network even though other networks decide not to, because the added protection for my network does not lead to an outage for my customers, or for other networks that do not yet deploy BCP-38.

> We should not promote the idea that you MUST reset DSCP at the edge; we
> should promote the idea that you SHOULD tunnel your internal QoS, and
> not trust or modify external DSCP. I fully understand that some
> networks have no other recourse: they don't have an overlay to use for
> QoS, so they MUST reset DSCP as they MUST act on DSCP; that's an
> entirely valid excuse. But networks which CAN transit it without caring
> about it SHOULD do so, rather than assume they know how DSCP is and
> should be used; they should assume they don't know, and that they're
> not going to be the network which stops others from using DSCP.

I'm not asking or advising any network operator to do what I am doing. I am just providing my viewpoint (and as others have shown, this is not a problem I have suffered alone). If a network operator feels that they want to allow DSCP values on Internet traffic coming into their network, unmolested, then, as we always say: "Your network, your rules."

Again, from a practical standpoint, in my corner of the Internet, remarking DSCP to 0 on ingress into my AS for Internet traffic has not broken anything upstream or downstream. If it did, I'd give this more attention.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 02/10/18 20:26, Saku Ytti wrote:
> On Tue, 2 Oct 2018 at 13:21, Julien Goodwin wrote:
>
>> The trouble is, some providers might use a bit to mean something like
>> "prefer cheap EU paths" for Asia->AU traffic; leaving it set then
>> causes hundreds of milliseconds of latency. Some might even implement
>> VPNs that way, causing blackholing.
>
> I don't think so. If they use DSCP values as-is, then they MUST
> nullify them. Full and short pipe assume you use MPLS TC for internal
> decisions, at which point you don't need to care about or trust DSCP.
> If you use DSCP internally _AND_ trust Internet DSCP, it's a broken
> configuration.

... and since when would *any* network ever have a subtly broken config? Except for, you know, pretty much every network in practice.

>> We had a few cases where failing to set DSCP to zero at our edge (in
>> either direction) caused various issues; those also took weeks to
>> troubleshoot.
>
> It is indeed possible to misconfigure things. But it's not the problem
> of the instance sending DSCP; it's a problem of the instance having a
> broken config based on DSCP.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, Oct 02, 2018 at 01:10:14PM +0300, Saku Ytti wrote:
> Hey Mark,
>
> > We remark all incoming Internet traffic DSCP values to 0.
> >
> > A few years ago, not doing this led to issues where customers were
> > handling the DSCP values in such a way that any traffic marked as
> > such was being dropped. Took weeks to troubleshoot.
>
> Dropped by who? I as an Internet user would prefer my DSCP information
> to traverse the Internet; I do not ask for it to be honoured.
>
> If a customer has problems with DSCP, then of course they can nullify
> it in their network, or you can even sell that to them as a service.
> But to nullify it for everyone seems like quite a big hammer, unless of
> course in your particular deployment it would be technically not
> possible to retain the values.

Retaining DSCP in Juniper networks has the consequence of not showing the egress router in traceroute. It's quite easy to classify a packet on ingress (based on interface), and it's quite easy to mark the packet with MPLS EXP and use that marking along the route, but at the egress router the picture is a bit different: in the normal situation, with penultimate-hop popping, the packet reaches the egress router without MPLS encapsulation, so you have to believe the external (customer's or peer's) marking to [mis]classify the packet. Of course, you can use ultimate-hop popping (explicit-null) so that MPLS EXP can be used at the egress router too, but in that case the egress router will not decrement the IP TTL or generate ICMP unreachable/TTL exceeded, and so will not be shown in traceroutes, making troubleshooting much harder.

Or am I missing something?
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
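[For reference, asking the egress to advertise explicit null instead of doing PHP is a one-liner per signalling protocol in Junos; a sketch:

  set protocols ldp explicit-null     # LDP-signalled LSPs
  set protocols mpls explicit-null    # RSVP-signalled LSPs

With this, labelled packets arrive at the egress PE with label 0 on top (or 2 for IPv6 explicit null), so the EXP bits are still available for classification there; whether TTL decrement and ICMP generation then behave as hoped is, as noted elsewhere in this thread, platform-dependent.]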
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:56, Mark Tinka wrote:
> If only the whole Internet were run by just one AS.

You continue to miss the point. You brought antispoofing up as an example; it's the same thing: not everyone needs to do the right thing, anyone changing is better. And as I pointed out, the discussion isn't even 'how widely does it work; if it doesn't work widely we shouldn't do it' - that's illogical. The discussion should be 'it should work; should we transit DSCP values as-is'.

If you globally test (RING, RIPE) how DSCP transits the Internet, it's not a horror story; it transits fairly well.

We should not promote the idea that you MUST reset DSCP at the edge; we should promote the idea that you SHOULD tunnel your internal QoS, and not trust or modify external DSCP. I fully understand that some networks have no other recourse: they don't have an overlay to use for QoS, so they MUST reset DSCP as they MUST act on DSCP; that's an entirely valid excuse. But networks which CAN transit it without caring about it SHOULD do so, rather than assume they know how DSCP is and should be used; they should assume they don't know, and that they're not going to be the network which stops others from using DSCP.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:48, Saku Ytti wrote:
> You tell them that DSCP values do not affect forwarding or queueing
> behaviour in your network.

If only the whole Internet were run by just one AS.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:47, Saku Ytti wrote:
> I think this is the wrong thing to focus on. Should we focus on 'will
> it work universally now', or should we focus on 'should we strive for
> it to work'?
>
> I feel like you're saying 'because it does not work universally, we
> shouldn't try to ensure it works'. Which is a peculiar statement to
> make.
>
> Now I could understand 'it should not work, it's bad if it works'. I
> don't agree, but at least it's logical.

If you are up for starting a movement that fixes this issue globally, so that, as network operators, we all agree that we should make it work, I'm all for it.

But I'm also realistic - despite a properly-defined RFC 1918 spec. that suggests which IP addresses to use in your private network, you still see a bunch of people using 1.0.0.0/24 for the private LAN.

Correcting stuff like this is hard, Saku. Believe me, I've tried. But if you have a better way to approach this, I'm all ears.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:45, Mark Tinka wrote:
> So when my customer says to me... "I sent DSCP=46 to Yahoo, but I think
> my Yahoo is still not faster?", what do I tell them?
>
> Or, when a customer says to me... "I sent DSCP=3 from my office in
> Nairobi to my office in Perth, but the Perth side is seeing DSCP=12",
> what do I tell them?

You tell them that DSCP values do not affect forwarding or queueing behaviour in your network.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:43, Mark Tinka wrote:
> In theory, the Internet needs to just pass those bits. But in practice,
> can you actually expect any guarantee of that when those SSH packets
> are traveling from Taipei, to Hong Kong, to Beijing, to Moscow, to
> Kiev, to Frankfurt, to London, to Stockholm, and perhaps, finally, to
> Helsinki?

I think this is the wrong thing to focus on. Should we focus on 'will it work universally now', or should we focus on 'should we strive for it to work'?

I feel like you're saying 'because it does not work universally, we shouldn't try to ensure it works'. Which is a peculiar statement to make.

Now I could understand 'it should not work, it's bad if it works'. I don't agree, but at least it's logical.

> Think about it this way: LOCAL_PREF for BGP means nothing outside of
> your AS.

It's non-transitive; it doesn't cross an eBGP border. It's a bad example. I'd use BGP communities as an example: a LOT of networks remove other people's BGP communities, which I view as bad behaviour. You don't know whether the receiver can find value in those BGP communities, even if you don't do anything with them.

> Again, it's not about the actual feature itself. It's about the lack of
> a global standard on how to treat DSCP when in the global space.

Immaterial; it's colour in the packet. It may not mean anything to you, but it may mean something to the receiver.

> TDC lives in a good world where it does not have to talk to the rest of
> the Internet. But to the extent that they do have to talk to the rest
> of the Internet, there is no way they can ensure consistent performance
> for their customers if they allow inbound or egress Internet traffic to
> run amok with whatever DSCP/IPP values are set outside of their control.

I feel like we're going in circles, and you continue to focus on something which does not matter and is not being proposed.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:37, Saku Ytti wrote:
> I'm not proposing any global agreement on DSCP; I'm proposing passing
> the values as-is, so that when the same instance is at both ingress and
> egress, it can do with them what it wants.

So when my customer says to me... "I sent DSCP=46 to Yahoo, but I think my Yahoo is still not faster?", what do I tell them?

Or, when a customer says to me... "I sent DSCP=3 from my office in Nairobi to my office in Perth, but the Perth side is seeing DSCP=12", what do I tell them?

In both cases, I do not have full end-to-end control for the life of the packet.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:31, Saku Ytti wrote:
> This is not the point here; the point is whether you are guaranteeing
> something with the DSCP. The point is the opposite: if you can avoid
> caring about or acting on the values, then you do not need to change
> them.

The implementation is irrelevant. Whether I use DSCP, IPP, EXP or carrier-pigeon, the end-goal is to provide some guarantees. In this case, DSCP is a good way to provide guarantees.

But even though we cannot provide guarantees for off-net traffic, it doesn't mean we should accept the potential issues that could occur due to the lack of a global standard on how DSCP values should be honoured on the general Internet.

> Me-Asia ---> Internet ---> Me-Europe
>
> I control my ends PE<=>CE. Me-Europe is often congested and I want to
> prioritise SSH interactive sessions there. I could just set SSH to EF
> in Me-Asia and have Me-Europe honour it. The Internet needs just to
> pass the bits, not act on them.

In theory, the Internet needs to just pass those bits. But in practice, can you actually expect any guarantee of that when those SSH packets are traveling from Taipei, to Hong Kong, to Beijing, to Moscow, to Kiev, to Frankfurt, to London, to Stockholm, and perhaps, finally, to Helsinki?

> Now of course I can configure stuff so that I break stuff by DSCP
> values. But that's not the Internet's fault.

Think about it this way: LOCAL_PREF for BGP means nothing outside of your AS. In my mind, DSCP is just like LOCAL_PREF. We are really banging our heads on a wall if we think we can use it effectively outside of a walled garden.

> The Danish incumbent TDC has run full pipe for maybe two decades. There
> is no way to guarantee what others do, but they view that it's not
> their position to mangle other people's traffic, if they can avoid it.
>
> I fully understand that some people technically cannot do short or
> full pipe; that's fine. But if you can, I think you should.

Again, it's not about the actual feature itself. It's about the lack of a global standard on how to treat DSCP when in the global space.

TDC lives in a good world where it does not have to talk to the rest of the Internet. But to the extent that they do have to talk to the rest of the Internet, there is no way they can ensure consistent performance for their customers if they allow inbound or egress Internet traffic to run amok with whatever DSCP/IPP values are set outside of their control.

We learned the hard way.

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:33, Mark Tinka wrote:
> So what is easier to fix? Agreeing on a global standard for what DSCP
> values to use in our own internal networks?

I'm not proposing any global agreement on DSCP; I'm proposing passing the values as-is, so that when the same instance is at both ingress and egress, it can do with them what it wants.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:26, Saku Ytti wrote:
> I don't think so. If they use DSCP values as-is, then they MUST
> nullify them. Full and short pipe assume you use MPLS TC for internal
> decisions, at which point you don't need to care about or trust DSCP.
> If you use DSCP internally _AND_ trust Internet DSCP, it's a broken
> configuration.

We only use MPLS EXP in the core (for IPv4). At the edge, we use DSCP.

For IPv6, we use DSCP in the core, since we are not forwarding IPv6 traffic in MPLS yet.

> It is indeed possible to misconfigure things. But it's not the problem
> of the instance sending DSCP; it's a problem of the instance having a
> broken config based on DSCP.

So what is easier to fix? Agreeing on a global standard for what DSCP values to use in our own internal networks?

Did you see how well BCP-38 and uRPF deployment have gone globally?

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:23, Mark Tinka wrote:
> Since we cannot guarantee the end-to-end quality of off-net traffic, it
> does not make sense to have any DSCP values for Internet traffic other
> than 0. Because even though I might not remark it, I can't guarantee
> that my peers, transit providers or other customers won't.

This is not the point here; the point is whether you are guaranteeing something with the DSCP. The point is the opposite: if you can avoid caring about or acting on the values, then you do not need to change them.

Me-Asia ---> Internet ---> Me-Europe

I control my ends PE<=>CE. Me-Europe is often congested and I want to prioritise SSH interactive sessions there. I could just set SSH to EF in Me-Asia and have Me-Europe honour it. The Internet needs just to pass the bits, not act on them.

Now of course I can configure stuff so that I break stuff by DSCP values. But that's not the Internet's fault.

> If you are a large network (such as yourselves, Saku) where it's very
> likely that the majority of your customers are talking to each other
> directly across your backbone, then I could see the case. But when you
> have customers transiting multiple unique networks before they can talk
> to each other or to servers, there is really no way you can guarantee
> that DSCP=1 from source will remain DSCP=1 at destination.

The Danish incumbent TDC has run full pipe for maybe two decades. There is no way to guarantee what others do, but they view that it's not their position to mangle other people's traffic, if they can avoid it.

I fully understand that some people technically cannot do short or full pipe; that's fine. But if you can, I think you should.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
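[A minimal sketch of the Me-Asia end of that example, with a hypothetical filter applied outbound on the CE; it marks SSH as EF, and the Me-Europe end would then classify on DSCP as usual:

  set firewall family inet filter MARK-SSH term ssh from protocol tcp
  set firewall family inet filter MARK-SSH term ssh from port 22
  set firewall family inet filter MARK-SSH term ssh then dscp ef
  set firewall family inet filter MARK-SSH term ssh then accept
  set firewall family inet filter MARK-SSH term rest then accept
  set interfaces ge-0/0/0 unit 0 family inet filter output MARK-SSH]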
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:24, James Bensley wrote:
> I'm not saying I agree with these 8 classes - just stating what it was
> :) I also agree that most people genuinely don't need more than 3-4.
> We often "helped" (nudged) customers to design their traffic into just
> a few classes.
>
> Here in the land of Her Majesty and cups of tea, if you want to
> operate as part of the Public Services Network (a national effort to
> provide unified services to the public sector across multiple
> providers to stamp out any monopoly) you must comply with their 6
> class model [1]:
> https://www.gov.uk/government/publications/psn-quality-of-service-qos-specification/psn-quality-of-service-qos-specification
>
> So starting from these 6 classes: we split voice signalling and media
> into two, with the media being an LLQ, and added a separate class to
> guarantee control- and management-plane traffic (e.g. we can still SSH
> to our routers while a customer DoS is filling the pipes), so we ended
> up with 8. Yay :(
>
> Cheers,
> James.
>
> [1] As is customary with any tech-savvy government, they've since
> sacked off various PSN standards without providing any replacement, so
> everyone is just sticking to the same expired standards for now.

I feel you - I've just never understood how customers can expect to slice a First Class seat into more First Class seats than it already is :-).

Maybe you put your arm in one First Class seat, your head in a 2nd First Class seat, and your buttocks in another First Class seat :-)...

To make life more interesting, with everything now moving to the cloud, let's see how we DiffServ that :-).

Mark.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 11:23, Mark Tinka wrote:
> If you are a large network (such as yourselves, Saku) where it's very
> likely that the majority of your customers are talking to each other
> directly across your backbone, then I could see the case. But when you
> have customers transiting multiple unique networks before they can talk
> to each other or to servers, there is really no way you can guarantee
> that DSCP=1 from source will remain DSCP=1 at destination.

+1. This is what we did in some parts. The way the Internet connectivity was sold to customers was as a best-effort service, so they had no issues with the DSCP being scrubbed to 0 in these places.

Also, lots of customers didn't like packets coming in from the Internet and hitting their section of the WAN with a DSCP marking on them. Some explicitly asked us to scrub to 0.

Cheers,
James.
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 13:21, Julien Goodwin wrote:
> The trouble is, some providers might use a bit to mean something like
> "prefer cheap EU paths" for Asia->AU traffic; leaving it set then
> causes hundreds of milliseconds of latency. Some might even implement
> VPNs that way, causing blackholing.

I don't think so. If they use DSCP values as-is, then they MUST nullify them. Full and short pipe assume you use MPLS TC for internal decisions, at which point you don't need to care about or trust DSCP. If you use DSCP internally _AND_ trust Internet DSCP, it's a broken configuration.

> We had a few cases where failing to set DSCP to zero at our edge (in
> either direction) caused various issues; those also took weeks to
> troubleshoot.

It is indeed possible to misconfigure things. But it's not the problem of the instance sending DSCP; it's a problem of the instance having a broken config based on DSCP.

-- ++ytti
___ juniper-nsp mailing list juniper-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 10:57, Mark Tinka wrote:
> I've never quite understood it when customers ask for 8 or even 16 classes.
> When it comes down to it, I've not been able to distill the queues into more
> than 3. Simply put: High, Medium and Low. The 4th queue is for the network
> itself.

I'm not saying I agree with these 8 classes - just stating what it was :) I also agree that most people genuinely don't need more than 3-4. We often "helped" (nudged) customers to design their traffic into just a few classes.

Here in the land of Her Majesty and cups of tea, if you want to operate as part of the Public Services Network (a national effort to provide unified services to the public sector across multiple providers to stamp out any monopoly), you must comply with their 6 class model [1]:
https://www.gov.uk/government/publications/psn-quality-of-service-qos-specification/psn-quality-of-service-qos-specification

So, starting from those 6 classes, we split voice signalling and media into two, with the media being an LLQ, and added a separate class to guarantee control and management plane traffic (e.g. we can still SSH to our routers when a customer DoS is filling the pipes), and we ended up with 8. Yay :(

Cheers,
James.

[1] As is customary with any tech-savvy government, they've since sacked off various PSN standards without providing any replacement, so everyone is just sticking to the same expired standards for now.
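For a sense of what this looks like in Junos terms, an 8-class layout such as the one above is just eight forwarding-class-to-queue mappings; a hedged sketch, with class names invented here rather than taken from the PSN spec:

    class-of-service {
        forwarding-classes {
            class BULK        queue-num 0;
            class STANDARD    queue-num 1;
            class VIDEO       queue-num 2;
            class BUSINESS    queue-num 3;
            class VOICE-SIG   queue-num 4;
            class VOICE-MEDIA queue-num 5;   /* the LLQ mentioned above */
            class REALTIME    queue-num 6;
            class CONTROL     queue-num 7;   /* control/management plane protection */
        }
    }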
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:21, Julien Goodwin wrote:
> The trouble is, some providers might use a bit to mean something like
> "prefer cheap EU paths" for Asia->AU traffic, so leaving it set causes
> hundreds of milliseconds of latency. Some might even implement VPNs that
> way, causing blackholing.
>
> We had a few cases where failing to set DSCP to zero at our edge (in
> either direction) caused various issues; those also took weeks to
> troubleshoot.

This.

And you've reminded me, Julien. In our case, we had two issues: one where a customer's firewall was doing strange things to non-zero DSCP values, and a second where a customer's ISP network was doing strange things to non-zero DSCP values, because of the way they needed to handle all traffic carrying those values in a locally-significant manner.

Mark.
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 12:10, Saku Ytti wrote:
> Dropped by who? I as an Internet user would prefer my DSCP information
> to traverse the Internet; I do not ask for it to be honoured.

Dropped by the customer's network itself. There was something in their firewall that was dropping packets with a non-zero value. They couldn't figure it out, and to make matters worse, it happened only when traffic was coming from some peering points, some transit networks, and some other on-net edge customers.

Since we cannot guarantee the end-to-end quality of off-net traffic, it does not make sense to have any DSCP value for Internet traffic other than 0. Because even though I might not remark it, I can't guarantee that my peers, transit providers or other customers won't.

> If a customer has problems with DSCP, then of course they can nullify it
> in their own network, or you can even sell that as a service to them. But to
> nullify it for everyone seems like quite a big hammer, unless of course in
> your particular deployment it is technically not possible to retain them.

In theory, it would be nice if arbitrary DSCP values in Internet-bound packets were honoured. In practice, we have not had any customers who have asked for this, needed to use it, or complained when we didn't support it. The work that would be required to do this selectively for a large and diverse customer base does not make sense.

If you are a large network (such as yourselves, Saku) where it's very likely that the majority of your customers are talking to each other directly across your backbone, then I could see the case. But when you have customers transiting multiple unique networks before they can talk to each other or to servers, there is really no way you can guarantee that DSCP=1 from source will remain DSCP=1 at destination.

Mark.
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 02/10/18 20:10, Saku Ytti wrote:
> Hey Mark,
>
>> We remark all incoming Internet traffic DSCP values to 0.
>>
>> A few years ago, not doing this led to issues where customers were handling
>> the DSCP values in such a way that any traffic marked as such was being
>> dropped. It took weeks to troubleshoot.
>
> Dropped by who? I as an Internet user would prefer my DSCP information
> to traverse the Internet; I do not ask for it to be honoured.

The trouble is, some providers might use a bit to mean something like "prefer cheap EU paths" for Asia->AU traffic, so leaving it set causes hundreds of milliseconds of latency. Some might even implement VPNs that way, causing blackholing.

We had a few cases where failing to set DSCP to zero at our edge (in either direction) caused various issues; those also took weeks to troubleshoot.
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On Tue, 2 Oct 2018 at 10:10, Saku Ytti wrote:
>
> Hey James,

Hi Saku,

> > Yeah, so not already using RSVP means that we're not going to deploy it
> > just to deploy an IntServ QoS model. We also use DSCP, and scrub it off
> > of dirty Internet packets.
>
> Have you considered full or short pipe? It would be nice if we'd
> transport DSCP bits as-is, when we don't have to care about them.
> Far ends may be able to utilise them for improved UX if we don't strip
> them.

Yeah, in the short pipe model we can map traffic from a transit/peering port into a best-effort class across the core and leave the DSCP markings as-is.

Caveat: this depends on your tin. If I remember correctly, we had some old tin that gave you the choice of either queuing based on the received DSCP and then pushing our core DSCP/EXP on at egress (but not queuing based upon them), or scrubbing the incoming DSCP value and then queuing on the ingress DSCP (which would now be 0). So in the former case, Internet traffic could freely drop into and congest an LLQ on these old boxen.

So yeah, tin permitting, short pipe FTW in my opinion. Also, most people run multi-vendor networks, and I think short pipe is an easy way forward for multi-vendor QoS. That is what I was alluding to here (perhaps not very clearly):

> which you
> can simplify in your core using pipe mode or short pipe QoS models. We
> could offer multiple QoS levels to customers, but simplify the classes
> down to like 3-4 in the core, and without any need for RSVP.

Cheers,
James.
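For what it's worth, "map everything from a transit/peering port into best-effort and leave the DSCP alone" is essentially a one-liner of fixed classification on Junos; a sketch, with the interface name purely a placeholder:

    class-of-service {
        interfaces {
            xe-1/0/0 {                             /* dirty Internet-facing port (illustrative) */
                unit 0 {
                    forwarding-class best-effort;  /* fixed classification: all traffic to BE */
                }
            }
        }
    }

Since no DSCP rewrite rule is applied on egress, the received markings then ride through untouched - the short-pipe behaviour described above.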
Re: [j-nsp] Use cases for IntServ in MPLS backbones
Hey Mark,

> We remark all incoming Internet traffic DSCP values to 0.
>
> A few years ago, not doing this led to issues where customers were handling
> the DSCP values in such a way that any traffic marked as such was being
> dropped. It took weeks to troubleshoot.

Dropped by who? I as an Internet user would prefer my DSCP information to traverse the Internet; I do not ask for it to be honoured.

If a customer has problems with DSCP, then of course they can nullify it in their own network, or you can even sell that as a service to them. But to nullify it for everyone seems like quite a big hammer, unless of course in your particular deployment it is technically not possible to retain them.

--
++ytti
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 11:10, Saku Ytti wrote:
> Have you considered full or short pipe? It would be nice if we'd
> transport DSCP bits as-is, when we don't have to care about them.
> Far ends may be able to utilise them for improved UX if we don't strip
> them.

We remark all incoming Internet traffic DSCP values to 0.

A few years ago, not doing this led to issues where customers were handling the DSCP values in such a way that any traffic marked as such was being dropped. It took weeks to troubleshoot.

For VPN services, there are a few options:

* We can honour the customer's values.
* We can map the customer's values to our own internal values.
* We can remark the customer's values to our own values, since the traffic is coming in over a known VLAN.

Mark.
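As a concrete illustration of remarking all incoming Internet DSCP to 0 on Junos, a minimal sketch using a firewall filter - the filter and interface names are placeholders, and only family inet is shown:

    firewall {
        family inet {
            filter SCRUB-DSCP {
                term everything {
                    then {
                        dscp 0;     /* rewrite the DSCP field to 0 (best effort) */
                        accept;
                    }
                }
            }
        }
    }
    interfaces {
        xe-0/0/0 {                  /* Internet-facing edge port (illustrative) */
            unit 0 {
                family inet {
                    filter {
                        input SCRUB-DSCP;
                    }
                }
            }
        }
    }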
Re: [j-nsp] Use cases for IntServ in MPLS backbones
On 2/Oct/18 10:21, James Bensley wrote:
> Yeah, so not already using RSVP means that we're not going to deploy it
> just to deploy an IntServ QoS model. We also use DSCP, and scrub it off
> of dirty Internet packets.
>
> Like with many things, it depends on your requirements. Having worked
> for managed WAN providers, where you have an infrastructure shared
> amongst multiple customers/stakeholders and provide WAN connectivity
> with over-the-top services like Internet and VoIP, QoS is a product
> most customers expect. In this scenario you typically have a set of
> queues, and customers can access either all of them or a subset, for a
> cost. In a former life we had up to 8 classes for customers, which you
> can simplify in your core using pipe mode or short pipe QoS models. We
> could offer multiple QoS levels to customers, but simplify the classes
> down to 3-4 in the core, and without any need for RSVP. I feel this is
> a good balance between complexity/simplicity and scalability.
>
> If you don't have multiple stakeholders, then IntServ becomes more
> appealing due to the granularity on offer; but in the shared
> infrastructure scenario, my experience is that mapping multiple
> customer queues down to fewer core queues helps to protect the control
> plane and LLQ traffic in a simple way that covers all stakeholders,
> with no need for the additional signalling complexity that RSVP brings.

I've never quite understood it when customers ask for 8 or even 16 classes. When it comes down to it, I've not been able to distill the queues into more than 3. Simply put: High, Medium and Low. The 4th queue is for the network itself.

As you said, you can give a customer 100 classes, but in the backbone, they'll likely fall into just these 3 queues.

We run 3 queues, and also tell customers that we can only offer 3 classes. All Internet traffic is in the "Low" queue. Depending on the type of on-net VPN service you're buying from us, you'll end up in either the "Medium" or "High" queue. This is enforced across all devices, right from the access, through the edge and core, and the same on the other side.

Mark.
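A hedged Junos sketch of a 3-plus-1 queue layout like the one described above - class names, queue numbers and transmit rates are all invented for illustration, not taken from the thread:

    class-of-service {
        forwarding-classes {
            class LOW      queue-num 0;   /* Internet / best effort */
            class MEDIUM   queue-num 1;   /* standard on-net VPN */
            class HIGH     queue-num 2;   /* premium on-net VPN */
            class CONTROL  queue-num 3;   /* the network's own traffic */
        }
        schedulers {
            SCHED-LOW {
                transmit-rate percent 50;
                priority low;
            }
            SCHED-MEDIUM {
                transmit-rate percent 25;
                priority medium-low;
            }
            SCHED-HIGH {
                transmit-rate percent 20;
                priority high;
            }
            SCHED-CONTROL {
                transmit-rate percent 5;
                priority high;
            }
        }
        scheduler-maps {
            THREE-PLUS-ONE {
                forwarding-class LOW scheduler SCHED-LOW;
                forwarding-class MEDIUM scheduler SCHED-MEDIUM;
                forwarding-class HIGH scheduler SCHED-HIGH;
                forwarding-class CONTROL scheduler SCHED-CONTROL;
            }
        }
    }

The scheduler map would then be applied to egress interfaces; the split of rates is a policy decision, not anything stated here.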
Re: [j-nsp] Use cases for IntServ in MPLS backbones
Hey James,

> Yeah, so not already using RSVP means that we're not going to deploy it
> just to deploy an IntServ QoS model. We also use DSCP, and scrub it off
> of dirty Internet packets.

Have you considered full or short pipe? It would be nice if we'd transport DSCP bits as-is, when we don't have to care about them. Far ends may be able to utilise them for improved UX if we don't strip them.

--
++ytti
Re: [j-nsp] Use cases for IntServ in MPLS backbones
> On 1/Oct/18 12:16, adamv0...@netconsultings.com wrote:
> > Hi folks,

Hi Adam,

On Mon, 1 Oct 2018 at 12:00, Mark Tinka wrote:
> So we don't do any label signaling via RSVP-TE at all.
> We use DSCP, but really only for on-net traffic.
> Off-net traffic (like the Internet) is really treated as best-effort.
> You can't prioritize what you can't control.

Yeah, so not already using RSVP means that we're not going to deploy it just to deploy an IntServ QoS model. We also use DSCP, and scrub it off of dirty Internet packets.

Like with many things, it depends on your requirements. Having worked for managed WAN providers, where you have an infrastructure shared amongst multiple customers/stakeholders and provide WAN connectivity with over-the-top services like Internet and VoIP, QoS is a product most customers expect. In this scenario you typically have a set of queues, and customers can access either all of them or a subset, for a cost. In a former life we had up to 8 classes for customers, which you can simplify in your core using pipe mode or short pipe QoS models. We could offer multiple QoS levels to customers, but simplify the classes down to 3-4 in the core, and without any need for RSVP. I feel this is a good balance between complexity/simplicity and scalability.

If you don't have multiple stakeholders, then IntServ becomes more appealing due to the granularity on offer; but in the shared infrastructure scenario, my experience is that mapping multiple customer queues down to fewer core queues helps to protect the control plane and LLQ traffic in a simple way that covers all stakeholders, with no need for the additional signalling complexity that RSVP brings.

Cheers,
James.
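One way to read "simplify the classes down to 3-4 in the core" in Junos terms is an EXP rewrite rule that collapses several edge forwarding classes onto a handful of TC values at label imposition; a sketch, with class names and code points invented for illustration:

    class-of-service {
        rewrite-rules {
            exp EDGE-TO-CORE {
                forwarding-class BULK {
                    loss-priority low code-point 000;   /* core best effort */
                }
                forwarding-class STANDARD {
                    loss-priority low code-point 000;   /* collapsed into the same core class */
                }
                forwarding-class VOICE-SIG {
                    loss-priority low code-point 011;
                }
                forwarding-class VOICE-MEDIA {
                    loss-priority low code-point 101;   /* core LLQ */
                }
            }
        }
    }

Eight edge classes can land on three or four distinct TC values this way, and the core then only needs schedulers for that small set.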