Re: [Bloat] [aqm] [iccrg] AQM deployment status?
On Sep 25, 2013, at 12:24 PM, Mikael Abrahamsson swm...@swm.pp.se wrote:

> For higher-end platforms, for instance all Cisco CPU-based routers (for
> some value of "all") can be configured with RED, fair-queue or similar,
> but they come with FIFO as the default. It has been this way since at
> least the mid-'90s as far as I know, going back to the Cisco 1600 etc.
> Higher-end Cisco equipment such as the ASR9k, 12000, CRS etc. all
> support WRED, and there it makes sense since they all have ~50 ms worth
> of buffering or more. They also come with FIFO as the default setting.

Yes. There are two reasons that we don't deploy RED by default. One is that we operate on what we call the principle of least surprise: we try not to change our customers' configurations by changing the defaults, and at the time I wrote the RED/WRED code the default was FIFO. Yes, that was quite some time ago, and one could imagine newer equipment shipping with different defaults, but that's not what the various business units do.

The other is this matter of configuration. In the late 1990s, Cisco employed Van and Kathy and asked them to figure out autoconfiguration values. That didn't work out. I don't say that as a slam on V+K; it's simply a fact. They started by recommending an alternative algorithm called RED-Lite (don't bother googling that; you get discothèques), and are now recommending CoDel.

Now, there is probably a simplistic value that one could throw in, such as setting max-threshold to the memory allocated to the queue and min-threshold to some function logarithmically related to the bit rate, serving as an estimator of the number of bit-times of delay to tune to. That would probably be better than nothing, but it would be nice to have some science behind it. That brings us back to the auto-tuning discussion.
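[Editor's sketch: the "simplistic value" described above could look roughly like the following. The function name, interpolation endpoints, and constants are illustrative assumptions, not values from any shipping product; the 30 ms and 5 ms anchor points are taken from figures quoted later in this thread.]

```python
import math

def red_init(link_bps, queue_bytes):
    """Hypothetical RED initialization of the kind described above:
    max-threshold = all memory allocated to the queue;
    min-threshold = the bytes corresponding to a delay target that falls
    logarithmically with bit rate, from ~30 ms at 1.5 Mbps (Kathy's
    simulation result) down to ~5 ms at 1 Gbps (CoDel's flat target)."""
    lo_rate, hi_rate = 1.5e6, 1e9          # endpoints of the interpolation
    hi_delay, lo_delay = 30e-3, 5e-3       # delay targets at those endpoints
    frac = (math.log10(link_bps) - math.log10(lo_rate)) / \
           (math.log10(hi_rate) - math.log10(lo_rate))
    frac = min(max(frac, 0.0), 1.0)        # clamp outside [1.5 Mbps, 1 Gbps]
    delay_s = hi_delay - frac * (hi_delay - lo_delay)
    min_threshold = round(delay_s * link_bps / 8)   # queue bytes at that delay
    max_threshold = queue_bytes
    return min_threshold, max_threshold
```

This would, for example, give a 1.5 Mbps link a min-threshold of 5625 bytes (30 ms, about four MTUs) and a gigabit link 625,000 bytes (5 ms), which is "probably better than nothing" in exactly the sense above.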
To my way of thinking, a simplistic algorithm such as that logarithmic approach, or a lookup table keyed on (perhaps) bit rate and neighbor ping RTT, can come up with an initial set of parameters that the algorithm's own processes then adapt to ambient traffic behavior. Kathy's simulations in the study I mentioned suggested that a 1.5 Mbps link might do well with min-threshold at 30 ms (e.g., ~4 MTU-sized messages, or a larger number of smaller ones), and higher rates at some single-digit number of milliseconds, decreasing as the speed increases. CoDel suggests a flat 5 ms value (at 1.5 Mbps, less than a single MTU; at one gigabit, roughly 417 12-kbit messages).

I could imagine the initialization algorithm selecting 5 ms above a certain speed, and the equivalent of 4 MTU serialization times at lower line speeds where 4*MTU exceeds 5 ms. I could imagine a related algorithm then adjusting that initial value to interact well with Google's IW=10 behavior or whatever else it finds on the link. PIE would probably start with a similar set of values and tune its mark/drop interval to reach a value that works well with ambient traffic.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
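[Editor's sketch: the "5 ms above a certain speed, 4 MTUs below it" initialization rule described above, assuming 1500-byte MTUs; the function name is illustrative.]

```python
MTU_BITS = 1500 * 8   # a full-size Ethernet frame is ~12 kbit on the wire

def initial_target_delay(link_bps, mtu_bits=MTU_BITS):
    """Sketch of the initialization rule described above: use CoDel's
    flat 5 ms target at higher speeds, but at low line speeds, where
    serializing 4 MTUs takes longer than 5 ms, allow 4 MTU-times
    instead so the queue can always hold a few full-size packets."""
    four_mtu_s = 4 * mtu_bits / link_bps   # time to serialize 4 MTUs
    return max(5e-3, four_mtu_s)
```

At 1.5 Mbps this yields ~32 ms, close to the 30 ms Kathy's simulations suggested; the crossover where the 5 ms floor takes over is around 9.6 Mbps.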
Re: [Bloat] [aqm] [iccrg] AQM deployment status?
Curtis, all,

At 17:07 14/10/2013, Curtis Villamizar wrote:

> In enterprise and data center there is also very good control over what
> equipment is used and how it is used. However, clue density decreases
> exponentially farther from the core and approaches zero in some data
> centers and in some enterprise IT departments. This is also true of the
> part of service provider organizations that picks equipment for
> consumer access. Where clue density is lowest, you'll find ignorance of
> AQM and buffer issues, and a lack of consideration for buffering and
> AQM when picking equipment for deployment and configuring it.

We (BT) use WRED extensively at the edge routers into our global enterprise MPLS network (see the BT MPLS technical specification: http://www2.bt.com/btPortal/application;JSESSIONID_btPortalWebApp=Jq1gl1Qz8sbdOQjvnRTGVGo1IfBs47bTe6Amy69S3bYcsw5A99sp!6984303?namespace=pns_catalogueorigin=mb_navigator_right_cd_editorial.jspevent=link.print.editorialPorS=productscontentType=techinicalspeccontentitemid=editorial/tech_spec/mpls_ts.xmlproductDetail=products/mpls.xml).

Bob

Bob Briscoe, BT
Re: [Bloat] [aqm] [iccrg] AQM deployment status?
On Sun, 29 Sep 2013, Bob Briscoe wrote:

> The shallow marking threshold certainly keeps standing queuing delay
> low. However, that's only under long-running constant conditions.
> During dynamics, not waiting a few hundred msec to respond to a change
> in the queue is what keeps the queuing delay predictably low. Dynamics
> are the norm, not constant conditions.

Well, my original question was in the context of a 3-5 ms tail-drop queue (such as are frequently found in lower-end switches). My understanding from earlier experience is that TCP will saw-tooth severely under these conditions, and the only way I could see marking being valuable was if the RTT was less than the buffer depth (or at least very low).

There was a discussion earlier on bloat-l about what impact a 10 ms CoDel queuing scheme would have on existing non-ECN TCP performance at 200 ms RTT. Deep queues were invented to handle this specific use case. One way to cope would be for TCP not to send a lot of packets back-to-back, but instead to pace them out so that the actual sending rate is accurate at millisecond resolution instead of, as today, over tens or hundreds of milliseconds. I believe some implementations already do this.

-- 
Mikael Abrahamsson    email: swm...@swm.pp.se
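[Editor's sketch: the pacing Mikael describes, spreading the congestion window across the RTT instead of bursting it, amounts to a per-packet send interval like the following. This is an illustration of the idea, not any particular TCP stack's implementation.]

```python
def pacing_interval_s(cwnd_packets, rtt_s):
    """Sketch of sender-side pacing as described above: instead of
    emitting cwnd packets back-to-back once per RTT, space them evenly
    so the instantaneous sending rate matches the average rate
    cwnd/RTT at fine granularity."""
    return rtt_s / cwnd_packets   # gap between consecutive packet sends
```

A 100-packet window at 200 ms RTT would then send one packet every 2 ms, rather than a 100-packet burst followed by ~198 ms of silence, which is exactly the burstiness that shallow tail-drop buffers punish.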
Re: [Bloat] [aqm] [iccrg] AQM deployment status?
Thanks Shahid,

Although it is interesting to know that (W)RED has made it into some hardware (in your reply to Lars' point), my question is more about deployment at the edge or the core: is it actually being used or not?

Cheers,
Naeem

On Wed, Sep 25, 2013 at 8:51 PM, Akhtar, Shahid (Shahid) shahid.akh...@alcatel-lucent.com wrote:

> Hi Steinar and Lars,
>
> Please see below examples of support for RED/WRED in switches (from ALU
> and Cisco websites; search for RED or WRED in each document):
>
> ALU 7450:
> https://www.google.com/url?sa=trct=jq=esrc=ssource=webcd=3cad=rjaved=0CDMQFjACurl=http%3A%2F%2Fwww3.alcatel-lucent.com%2Fwps%2FDocumentStreamerServlet%3FLMSG_CABINET%3DDocs_and_Resource_Ctr%26LMSG_CONTENT_FILE%3DData_Sheets%2F7450ESS_HS_MDA_ds.pdfei=pS5DUrIZiI70BLyYgYgKusg=AFQjCNGS8uB-0Ele3TpDIRqbL4p1_kd2DQsig2=v-AgGJMpcZjrpjP2oZIHrAbvm=bv.53077864,d.eWU
>
> ALU Omniswitch:
> https://www.google.com/url?sa=trct=jq=esrc=ssource=webcd=3ved=0CDkQFjACurl=http%3A%2F%2Fwww3.alcatel-lucent.com%2Fwps%2FDocumentStreamerServlet%3FLMSG_CABINET%3DDocs_and_Resource_Ctr%26LMSG_CONTENT_FILE%3DData_Sheets%2Fent_OS6855_datasheet_en.pdfei=9i1DUsXlOo788QTU94HQAQusg=AFQjCNFSXSEpkFjSihK1a0pkSHhItU8sZQsig2=3fitEZLZZI7Dmf0H2T3QbQbvm=bv.53077864,d.eWU
>
> Cisco Catalyst 6000/6500:
> http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080131086.html
>
> Cisco Catalyst 4500:
> http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps4324/white_paper_c11-539588.html
>
> Best regards,
> -Shahid.
>
> -----Original Message-----
> From: Steinar H. Gunderson [mailto:sgunder...@bigfoot.com]
> Sent: Wednesday, September 25, 2013 9:35 AM
> To: Eggert, Lars
> Cc: Akhtar, Shahid (Shahid); bloat; ic...@irtf.org; a...@ietf.org
> Subject: Re: [Bloat] [iccrg] [aqm] AQM deployment status?
>
> On Wed, Sep 25, 2013 at 02:31:21PM +, Eggert, Lars wrote:
>>> Most routers/switches/access equipment support RFC 2309, which is a
>>> description of RED.
>> I've heard that statement from many different people. I wonder if it
>> is actually true.
>> Is there any hard data on this?
>
> I don't have hard data, but my soft data is that generally, devices
> branded as switches (including L3 switches) do not support it. Routers
> might.
>
> /* Steinar */
> -- 
> Homepage: http://www.sesse.net/

___
aqm mailing list
a...@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
Re: [Bloat] [aqm] [iccrg] AQM deployment status?
On Wed, 25 Sep 2013, Akhtar, Shahid (Shahid) wrote:

> Please see below examples of support for RED/WRED from switches (from
> ALU and Cisco websites, search for RED or WRED in document):

I'd venture to claim that putting RED on a device with a few milliseconds' worth of buffer depth is pretty much useless (for instance the Cisco 6704 or 6724 linecards). So most datacenter equipment doesn't have this, and where it does, it's not really usable (who cares about drop probability when tail drop is already happening at 4-5 ms of buffer depth?).

For higher-end platforms, for instance all Cisco CPU-based routers (for some value of "all") can be configured with RED, fair-queue or similar, but they come with FIFO as the default. It has been this way since at least the mid-'90s as far as I know, going back to the Cisco 1600 etc. Higher-end Cisco equipment such as the ASR9k, 12000, CRS etc. all support WRED, and there it makes sense since they all have ~50 ms worth of buffering or more. They also come with FIFO as the default setting.

-- 
Mikael Abrahamsson    email: swm...@swm.pp.se
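[Editor's note: the "milliseconds of buffer" arithmetic behind Mikael's point can be made explicit with a trivial helper; the example figures below are made up for illustration.]

```python
def buffer_depth_ms(buffer_bytes, link_bps):
    """Express a port's packet buffer as milliseconds of queuing delay
    at line rate. RED's min/max thresholds only have room to operate
    when this figure is comfortably above a few milliseconds; at 4-5 ms
    tail drop happens before RED can shape anything."""
    return buffer_bytes * 8 / link_bps * 1e3
```

For instance, a hypothetical linecard with ~5 MB of buffer behind a 10 Gbps port sits at ~4 ms, the regime Mikael calls "pretty much useless" for RED, while ~50 ms of buffering at the same rate would require on the order of 62 MB.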