On Wed, 5 Jun 2002, Don Cohen wrote:

> ==== extensions
>  > And all the discussions tend to lead to the conclusion that there should
>  > be an sfq option (when the queue is created) for:
>  >    a) how big the hash is
>  >    b) whether to take into account source ports or not
>  >    c) whether to take into account destination ports or not
>  >    d) etc. :)
>  > 
>  > Maybe someone who's written a qdisc would feel up to this?
> 
> I've been hoping to get to it, since I have other stuff I'd like to
> incorporate into a new sfq version.  

        At first look, I think I'll have to incorporate my changes into
your work. I haven't done much, just added hashes and unlocked what
Alexey Kuznetsov did.
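
For reference, the hash selection I added boils down to something like
this (just a sketch - names like hash_kind and hash_divisor are
illustrative, not the actual code):

enum { HASH_CLASSIC, HASH_SRC, HASH_DST };

/* pick the key that flows are grouped by */
static unsigned esfq_hash(struct esfq_sched_data *q, struct sk_buff *skb)
{
        struct iphdr *iph = skb->nh.iph;        /* 2.4-style header access */
        u32 h;

        switch (q->hash_kind) {
        case HASH_DST:
                h = iph->daddr;                 /* one queue per destination */
                break;
        case HASH_SRC:
                h = iph->saddr;                 /* one queue per source */
                break;
        default:                                /* classic sfq key */
                h = iph->saddr ^ iph->daddr ^ iph->protocol;
                break;
        }
        h ^= q->perturbation;                   /* reshuffled every perturb period */
        h ^= h >> 16;                           /* fold down */
        return h & (q->hash_divisor - 1);       /* divisor = table size, power of two */
}

Taking ports into account (points b and c in the list above) would just
mean folding the TCP/UDP ports into h for the hash kinds that want them.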

>  >    Playing with it gives interesting results:
>  >    higher depth -> makes flows equal slower
>  >    small depth  -> makes flows equal faster
>  >    limit kills big delays when set at about 75-85% of depth.
> 
> I don't understand what these last three lines mean.  Could you
> explain?

        depth is how many packets can be queued on a row of the
hash table. With large queues (higher depth) sfq reacts more slowly when
a new flow appears (it has to do more work to make the queue lengths
equal). With short queues it reacts faster, so adjusting depth to your
bandwidth and traffic type can make it do a better job.
I set up a bounded cbq class at 320kbit and esfq with a dst hash:
        Start an upload - it gets 40KB/s.
        Start a second one - it should get 20KB/s as soon as possible to be fair.
With depth 128 it would take, let's say, 6 seconds to bring both to
20KB/s; with depth 64 about 3 seconds - the shorter queue starts
dropping packets earlier. (I still have to make some exact measurements,
since this is just an example and may not be correct.)
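
Just to put rough numbers on why a deeper per-flow queue reacts more
slowly: once two flows share the 320kbit class each of them is served at
about 20KB/s, and a packet at the tail of a full per-flow queue has to
wait for everything in front of it. Back-of-envelope only, assuming
1500-byte packets:

#include <stdio.h>

/* rough queueing delay seen at the tail of a full per-flow queue
 * when that flow is served at 20KB/s (half of the 320kbit class) */
int main(void)
{
        const double pkt_bytes = 1500.0;        /* assumed MTU-sized packets */
        const double per_flow_rate = 20000.0;   /* bytes per second */
        int depths[] = { 64, 128 };

        for (int i = 0; i < 2; i++) {
                double backlog = depths[i] * pkt_bytes;
                printf("depth %3d: ~%.0fKB backlog, ~%.1fs of delay\n",
                       depths[i], backlog / 1000.0, backlog / per_flow_rate);
        }
        return 0;
}

The exact seconds depend on how fast TCP backs off, but the factor of
two between depth 64 and depth 128 is the point.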

        limit sets a threshold on the number of queued packets - when a
packet would push the queue over it, a packet is dropped, so the delay
is smaller, but when sfq tries to make the flows equal it still counts
against depth, not limit. With the example above, depth 128 and limit 100:
        When the first upload has enqueued 100 packets sfq starts to
drop, but the target when equalizing two flows is 64 packets per queue.
The flow simply never gets the 28 packets (128 minus 100) that would
otherwise be enqueued, delayed for a long time, and probably dropped
anyway by the time they were received.
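
In code terms the two knobs act at different points - sketch only, with
simplified names (esfq_drop stands for the usual "drop from the longest
queue" routine):

/* enqueue side: limit caps the total backlog */
esfq_add_to_flow_queue(q, skb);         /* per-flow queue, at most "depth" deep */
if (++q->backlog_packets > q->limit)
        esfq_drop(sch);                 /* over limit: shed a packet early */

/* the equalizing/drop side still picks the longest of the per-flow
 * queues, whose length is bounded by depth, not limit - that is why
 * with depth 128 and limit 100 the fairness target for two flows is
 * still 128/2 = 64 packets each */
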
It's similar to RED - there you have a queue which you try to keep at
about X percent full but which can go over that; limit keeps it at most
X percent full. What about making it behave like RED - when more packets
go over limit, more are dropped early and limit goes down; when fewer
are dropped, limit goes up?
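
Something like this could be a first stab at that - completely untested
idea, the counters and bounds are made up, and it could run from the
same timer that perturbs the hash:

/* shrink limit while we keep hitting it, let it grow back when we don't */
static void esfq_adapt_limit(struct esfq_sched_data *q)
{
        if (q->drops_since_last_run > 0) {
                if (q->limit > q->min_limit)
                        q->limit--;             /* under pressure: drop earlier */
        } else {
                if (q->limit < q->max_limit)
                        q->limit++;             /* no pressure: allow a deeper queue */
        }
        q->drops_since_last_run = 0;
}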

> 
>  > 
>  >    Needs testing and measurements - that's why I made it a
>  > separate qdisc and not a patch over sfq; I wanted to compare both.
>  > 
>  >    Any feedback good or bad is welcome. 
> 
> I'll send you my current module, also a variant of SFQ.  It contains
> doc that I think is worth including, also changes some of the code to
> be more understandable, separates the number of packets allowed in the
> queue from the number of buckets, supports the time limit (discussed
> in earlier messages), controls these things via /proc, maybe a few
> other things I'm forgetting.  This version does not support hashing on
> different properties of the packet, because it uses a totally different
> criterion for identifying "subclasses" of traffic.  You can discard
> that and restore the sfq hash with your modifications.  I think (hope)
> these changes are pretty much independent.
> 

        I've taken a quick look over it and I like the ideas there;
give me some time to work through the details. I hope we can make
something good out of both.
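
On the time limit: I imagine it's roughly "stamp the packet at enqueue,
throw it away at dequeue if it sat around too long" - something like
this sketch (the stamp field, clock and helper names are placeholders, I
haven't gone through that part of your code yet):

/* enqueue: remember when the packet went in */
skb->stamp = now();

/* dequeue: don't send packets that have already waited too long */
while ((skb = esfq_next_packet(q)) != NULL) {
        if (now() - skb->stamp > q->time_limit) {
                kfree_skb(skb);                 /* stale - drop instead of delivering late */
                continue;
        }
        return skb;
}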

-- 
have fun,
alex

_______________________________________________
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
