On 4/1/2014 12:34 PM, Fred Baker (fred) wrote:
> Makes sense to me. I do have one question. Per charter, in December
> we are supposed to "Submit first algorithm specification to IESG for
> publication as Proposed Standard". Would this be a change of
> direction for the charter?


Yes, it would be a shift in plans, and we'd have to draw up some new
milestone targets.  That's why we're asking for feedback before doing
it: it only makes sense to do if the people actually doing the work
will go along with it :).


> Note that I’m not pushing a given algorithm, nor am I convinced that 
> there should be exactly one. In protocol design, we are worried about
> interoperability, and everyone has to implement the protocol the same
> way. In AQM, the different algorithms, and the ones we think of next,
> have to produce a specific drop or mark rate under a specified
> circumstance (which might be about queue depth, latency in queue, or
> rate through a queue), and the end systems need to respond to that
> predictably. The means by which the mark or drop rate is established
> is semi-irrelevant if the rate itself is maintained. So I’m not
> exactly sure what the terms “Experimental” or “Proposed Standard”
> mean in the context and using the definitions in RFC 2026. It would
> be nice if we had a status that said “recommended for consideration
> for operational use”, and we could put that status on several that
> meet our requirements, whatever we decide those are.


I think (well, I hope) that pretty much everyone agrees on this.
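To make that interchangeability point concrete, here's a toy sketch (hypothetical names and thresholds, not any real AQM algorithm): two schemes decide marks from different signals, one from instantaneous queue depth and one from per-packet sojourn time, but a transport endpoint only ever observes the resulting mark/drop rate.

```python
# Toy illustration: different AQM algorithms key off different signals
# (queue depth vs. queueing latency), but the end system only sees the
# resulting mark/drop decisions.  All names and thresholds here are
# made up for illustration; this is not CoDel, PIE, RED, or any other
# real algorithm.

class DepthBasedAQM:
    """Marks a packet when the instantaneous queue depth is too large."""
    def __init__(self, max_depth_pkts=100):
        self.max_depth_pkts = max_depth_pkts

    def should_mark(self, queue_depth_pkts, sojourn_ms):
        return queue_depth_pkts > self.max_depth_pkts


class LatencyBasedAQM:
    """Marks a packet when its time spent in the queue is too large."""
    def __init__(self, target_ms=5.0):
        self.target_ms = target_ms

    def should_mark(self, queue_depth_pkts, sojourn_ms):
        return sojourn_ms > self.target_ms


def mark_rate(aqm, samples):
    """Fraction of (depth, sojourn) samples the algorithm would mark."""
    marked = sum(1 for depth, sojourn in samples
                 if aqm.should_mark(depth, sojourn))
    return marked / len(samples)


if __name__ == "__main__":
    # Under this (made-up) workload the two algorithms happen to produce
    # the same mark rate, so a responsive sender would react the same
    # way to either one -- the mechanism behind the rate is invisible.
    samples = [(50, 2.0), (120, 8.0), (80, 4.0), (150, 12.0)]
    print(mark_rate(DepthBasedAQM(), samples))    # marks 2 of 4
    print(mark_rate(LatencyBasedAQM(), samples))  # marks 2 of 4
```

The point isn't that the two always agree, only that the endpoint's contract is the observed rate; that's what makes "several recommended algorithms" coherent.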

My personal thought is that the Experimental ones may have some warts or
"unknowns", and that should be okay with us, as long as they seem
promising, there is wide interest in using them and finding out more
about how they work or how they can be tuned further, and a common
baseline is needed/desirable for multiple people to work from.

The Standards Track algorithm(s) should have substantially fewer warts
or unknowns about them, and people should be able to put them in their
code and products with strong confidence that:

  1) they're implementing from an unambiguous specification
  2) the algorithm will perform with well-understood results

For instance, a hypothetical Algorithm X may have been beaten to death
by one set of folks for some particular use case like a home gateway
cable router. They speculate that it will do well in some other
scenarios too, and there are other people interested in implementing
and trying it out over the longer term, but nobody is fully sure that
it's absolutely the best algorithm, and maybe there are some downsides,
like minor tuning or a hidden variable that has to be adjusted for
other scenarios. That sounds like a good candidate for Experimental to
me. Maybe people will go play with it, and either learn good things and
fix it up for Standards Track, or learn bad things and drop it or make
it Historic.


-- 
Wes Eddy
MTI Systems

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
