On Sat, Oct 01, 2005 at 06:53:12PM -0400, Matt Van Mater wrote:
> I have a similar setup to what Daniel specifies in
> http://www.benzedrine.cx/ackpri.html but have a nagging question that
> I haven't been able to find an answer for.
>
> Why do you need to specify bandwidth on the parent queue in order for
> the prioritizations to work correctly?
queueing is only able to queue when there's a queue. there won't be
(much of) a queue unless the traffic is traversing an interface slower
than the rate at which it wants to.

if i had fourteen arms (and a hand on each), and my purpose in life was
to carry different coloured blocks from A to B, and i had a traffic
shaping mechanism by which i would prioritize one colour of block over
another (let's say yellow gets high priority, then blue, then black
gets lowest), and one yellow and one blue block come in for me to pick
up, it doesn't matter what the priority is, because i have 12 more arms
which aren't being used. my limbs aren't congested, and i will be able
to deliver the blocks from A to B as quick as my feet can move me. if
14 blocks come in, it still doesn't matter, because there are no blocks
which i have to choose to leave behind; even if there are 4 yellow, 5
blue and 5 black, it doesn't matter, they all go from A to B at the
same rate.

if 20 blocks come in for me to move, then i have an opportunity to
prioritize. if 10 blocks are blue, 4 are yellow and 6 are black, i am
going to leave those 6 black blocks behind, take the 4 yellow and 10
blue with me from A to B, dump them, come back, and then if the number
of blocks waiting for me is more than 14 again (including the 6 old
black ones), i prioritize again. if the total number of blocks is <=
14, i just pick 'em all up and take 'em.

> Shouldn't the packet scheduler
> always transmit higher priority packets before lower priority packets
> regardless of the bandwidth cap that is specified?

i believe that in the general case, or at least in altq, the answer is
no.

> In my situation,
> the 'lowdelay' SSH sessions are unbearably slow unless I set my
> bandwidth down to 300Kb. Of course this solves the problem of poor
> interactive SSH performance, but it means I am sacrificing ~200Kb of
> potential speed in order to attain that.
bandwidth and delay are coupled in the priq and cbq schedulers; if you
need them to be uncoupled, examine hfsc (it is possible to set up hfsc
to be real darn close to priq or real darn close to cbq, and then have
one or two queues which take advantage of hfsc's properties).

priq will obey you. if you set up two queues, HI and LO, and the data
coming in to HI is enough to fill the pipe, LO will not be serviced
until there is room.

for a simple test, make two queues, one for default and one for ICMP;
set default higher priority, and set the altq bandwidth to something
real low, so that you can test it easily (~32Kb or so). might as well
do this on the interface between the openbsd box and your LAN, so you
can be sure you can saturate the link with whatever you set the altq
bandwidth to. then start something that eats a lot of bandwidth; run
worms(6) over ssh a couple of times, open up chargen in inetd, do an
ftp, netcat /dev/zero on one host to /dev/null on another, etc. then
try pinging the openbsd machine. if you set up your queues right, you
will not get a reply until you stop that other traffic enough so that
there is bandwidth available for the outgoing echo reply.

i can't explain specifically why the ssh is slow unless you set the
cap down without seeing the ruleset/altq declaration, since the
example below seems to be a theoretical one and not what you're
using (?)

> The parser accepts the following lines
>
> altq on $ext_if priq queue {default, torrent}
> queue default priority 15 priq (default)
> queue torrent priority 1 priq (red)
> ..
> pass out quick on $ext_if from any to any keep state queue (torrent, default)
>
> and the documentation says that if a limit is not specified, it will
> simply use the maximum rate of the interface specified (which I think
> is translated into 100 mbit in my case). This interface connects to a
> cable modem that has roughly 4mbit down so of course that 100 mbit
> assumption isn't quite right.
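(for reference, the simple ping test described earlier could look
roughly like this in pf.conf — the interface name and the 32Kb figure
are placeholders, and this is an untested sketch, not a known-good
ruleset:

  # cap well below the wire speed so the queue actually fills
  altq on fxp0 priq bandwidth 32Kb queue { std, icmp }
  queue std priority 7 priq(default)
  queue icmp priority 1 priq
  # bulk traffic lands in std (high); outgoing echo replies go low
  pass out on fxp0 inet proto icmp icmp-type echorep queue icmp

flood the link from a LAN host, then ping the openbsd box; the replies
should sit in the starved icmp queue until the bulk traffic lets up.)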
100 mbit isn't an assumption, it's the link speed of that interface
(ethernet between you and the cable modem). also, the download cap
between the ISP and you is not the issue here. you could have 44Mb/s
download and it won't change your scenario as pf sees it; the upload
rate you expect between you and the ISP is what you should be concerned
with. the altq bandwidth is, going along with my crappy hindu god
analogy up top, setting the number of arms you have.

> I have tried the other schedulers, but can't think of why a simple
> weighted queueing shouldn't work for me. Can anyone explain this to
> me?

sounds like you have 6 black blocks waiting at point A while more than
14 blue or yellow ones keep coming in.

jared

--
[ openbsd 3.8 GENERIC ( sep 10 ) // i386 ]
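p.s. concretely, that means telling altq your real upstream rate
instead of letting it default to the wire speed. something like this
(the 480Kb figure is made up; measure your actual upload and set the
cap at or slightly under it):

  altq on $ext_if priq bandwidth 480Kb queue { default, torrent }
  queue default priority 15 priq(default)
  queue torrent priority 1 priq(red)
  pass out quick on $ext_if keep state queue (torrent, default)

that way the queue builds up in pf, where the scheduler can reorder it,
rather than in the cable modem's buffer, where it can't.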