Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-29 Thread Phillip Hallam-Baker
The reason I chose RFC2821 is that there is really no basis on which the
earlier document is better.

RFC821 was accepted when the process was far less stringent. The working
group that produced 2821 was chartered to improve the documentation of SMTP
rather than redesign the protocol.

What people are asking for is for the transition from one standards state
to another to actually mean something. If we think of the world as a huge
collection of loosely coupled, probabilistic communicating finite state
machines (similar to CSP but without rendezvous), I cannot think of any
system in that world that would undergo a state transition on receiving the
message "RFC 2821 has gone from DRAFT to STANDARD". Nor can I think of any
system where the probability of a state transition would be affected by
whether RFC 2821 was in one state or the other.


What I and others conclude from this fact is that the current system as
documented is broken. There is no point in fixing the practice to match the
theory because the theory has been rejected FOR GOOD REASON.

The current mismatch between theory and practice is hurting the IETF. I
have been involved in deciding where to take quite a few standards
proposals now. And it must be said that the fact that nothing now becomes an
IETF Standard until that status is irrelevant causes resistance.


On Mon, Jun 28, 2010 at 3:35 PM, Martin Rex  wrote:

> Phillip Hallam-Baker wrote:
> >
> > The fact remains that RFC 821 has the STANDARD imprimatur and the better
> > specification that was intended to replace it does not.
> >
> > It seems pretty basic to me that when you declare a document Obsolete it
> > should lose its STANDARD status. But under the current system that does
> not
> > happen.
> >
> > This situation has gone on now for 15 years. Why would anyone bother to
> put
> > time an effort into progressing documents along the three step track when
> > most of the documents at the highest rank are actually obsolete?
> >
> > What does STANDARD actually mean if the document it refers to is quite
> > likely obsolete?
>
>
> To me it looks like "Obsolete: " has been used with quite different
> meanings across RFCs, and some current uses might be inappropriate.
>
> Although it's been more than two decades that I read rfc821 (and
> none of the successors), I assume that all those RFC describe _the_same_
> protocol (SMTP) and not backwards-incompatible revisions of a protocol
> family (SMTPv1,v2,v3).  I also would assume that you could implement an
> MTA with rfc2821 alone (i.e. without ever reading rfc821), that is still
> fully interoperable with an implementation of rfc821.  So for a large
> part we are looking at a revised specification of the same single protocol,
> and the term "obsoletes" should indicate that you can create an
> implementation of the protocol based solely on a newer version of the
> specification describing it and remain fully interoperable with an
> implementation of the old spec when (at least when using the mandatory
> to implement plus non-controversial recommended protocol features).
>
>
> For RFCs that create backwards-incompatible protocol revisions, and
> in particular when you still need the old specification to implement
> the older protocol revision, there is *NO* obsoletion of the old
> protocol by publication of the new protocol.  Examples where this
> was done correctly:  IPv4&IPv6, LDAPv2&LDAPv3, HTTPv1.0&HTTPv1.1.
>
> A sensible approach to obsolete a previous protocol version is to
> reclassify it as historic when the actual usage in the real world
> drops to insignificant levels and describe&publish that move in an
> informational RFC (I assume that is the intent of rfc-3494).
>
>
> Examples of clearly inappropriate "Obsoletes: " are the
> TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
> backward-incompatible protocol revisions of TLS and where the new RFCs
> specify only the behaviour of the new protocol version and even
> fail to clearly identify the backwards-incompatible changes.
>
>
> And if you look at the actual use of TLS protocol versions in the
> wild, the vast majority is using TLSv1.0, there is a limited use
> of TLSv1.1 and very close to no support for TLSv1.2.
>
> (examples https://www.ssllabs.com/ssldb/index.html
>  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html
>
>
> What irritates me slightly is that I see this announcement
>
> https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536
>
> which is more of a bashing of existing and widely used versions
> of SSLv3 and TLS, instead of an effort to improve _one_ of the
> existing TLS protocol revisions and to advance it on the standards
> maturity level and make it more easily acceptable to the marketplace.
>
> Adding explicit indicators for backwards-incompatible protocol changes
> in rfc-5246 might considerably facilitate the assessment just how much
> changes are necessary to an implementation of a predec

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Douglas Otis

On 6/28/10 12:35 PM, Martin Rex wrote:

To me it looks like "Obsolete: " has been used with quite different
meanings across RFCs, and some current uses might be inappropriate.

Although it's been more than two decades that I read rfc821 (and
none of the successors), I assume that all those RFC describe _the_same_
protocol (SMTP) and not backwards-incompatible revisions of a protocol
family (SMTPv1,v2,v3).  I also would assume that you could implement an
MTA with rfc2821 alone (i.e. without ever reading rfc821), that is still
fully interoperable with an implementation of rfc821.  So for a large
part we are looking at a revised specification of the same single protocol,
and the term "obsoletes" should indicate that you can create an
implementation of the protocol based solely on a newer version of the
specification describing it and remain fully interoperable with an
implementation of the old spec when (at least when using the mandatory
to implement plus non-controversial recommended protocol features).


For RFCs that create backwards-incompatible protocol revisions, and
in particular when you still need the old specification to implement
the older protocol revision, there is *NO* obsoletion of the old
protocol by publication of the new protocol.  Examples where this
was done correctly:  IPv4&IPv6, LDAPv2&LDAPv3, HTTPv1.0&HTTPv1.1.

A sensible approach to obsolete a previous protocol version is to
reclassify it as historic when the actual usage in the real world
drops to insignificant levels and describe&publish that move in an
informational RFC (I assume that is the intent of rfc-3494).


Examples of clearly inappropriate "Obsoletes: " are the
TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
backward-incompatible protocol revisions of TLS and where the new RFCs
specify only the behaviour of the new protocol version and even
fail to clearly identify the backwards-incompatible changes.


And if you look at the actual use of TLS protocol versions in the
wild, the vast majority is using TLSv1.0, there is a limited use
of TLSv1.1 and very close to no support for TLSv1.2.

(examples https://www.ssllabs.com/ssldb/index.html
  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html


What irritates me slightly is that I see this announcement
https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

which is more of a bashing of existing and widely used versions
of SSLv3 and TLS, instead of an effort to improve _one_ of the
existing TLS protocol revisions and to advance it on the standards
maturity level and make it more easily acceptable to the marketplace.

Adding explicit indicators for backwards-incompatible protocol changes
in rfc-5246 might considerably facilitate the assessment just how much
changes are necessary to an implementation of a predecessor version
of TLSv1.2.  Btw. 7.4.1.4.1 Signature Algorithms extension appears
to be a big mess and fixing it wouldn't hurt.

MUST requirements in spec ought to be strictly limited to features
that are absolutely necessary for interoperability _and_ for the
existing market, not just nice to have at some time in the future.
The only TLS extension that deserves a MUST is described
in rfc-5746 (TLS extension RI).


One of the reasons why some working groups recycling protocol
revisions on proposed rather advancing a widely deployed protocol
to draft is the "the better is the enemy of the good".
   


"Make everything as simple as possible, but not simpler." Albert Einstein

The current scheme is already too simple.  Too simple because the
resulting utility does not justify the promotion effort.  Reducing the
number of status categories will not greatly impact the goal of spending
less time advancing related RFCs to the same level, where often the
originating WG will have closed.  Making changes that impact a large
number of interrelated protocols will cause as much disruption as it
provides utility.  Rather than providing stability, efforts at
"simplification" are likely to inject as many errors as they correct.


Four years ago, an effort was made to create a "cover sheet" for
"standard" protocols.  After the idea gained majority support within the
WG, the WG was subsequently closed without a clear explanation for the
IESG push-back.  Often no single RFC or STD encapsulates a "standard".  In
addition, references to the wider set depend upon an evolving set of
numbered RFCs, where tracking these relationships often requires complex
graphs.


With a "cover sheet" approach, "core" elements are described separately 
from "extension",  "guidance", "replaces", "experimental", and 
"companion" elements.  Many overlapping protocols can be defined as 
representing different associations of RFCs.  This scheme lessens 
dependence on a concise relationship described in each RFC,  or 
attempting to resolve relationships based upon the roughly maintained 
categories that frequently offer little insight or refl
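
To make the "cover sheet" idea concrete, here is a minimal illustrative
sketch (not from the thread; the role names follow the description above
and the RFC groupings are examples only) of a standard described as an
association of RFCs by role, with the relationship graph derived from it:

    from collections import defaultdict

    # Illustrative only: a toy "cover sheet" that groups the RFCs making
    # up one nominal standard by the role they play, instead of relying
    # on the Obsoletes/Updates headers of each individual RFC.  The role
    # names follow the text above; the groupings are examples, not an
    # authoritative registry.
    cover_sheet = {
        "SMTP": {
            "core": [5321],
            "companion": [5322],        # message format used by the core
            "extension": [1870, 3207],  # SIZE, STARTTLS
            "replaces": [2821, 821],    # earlier specifications
        },
    }

    # Reverse index: which cover sheets mention a given RFC, and in what
    # role.  This is the relationship graph that otherwise has to be
    # reconstructed from scattered per-RFC metadata.
    index = defaultdict(list)
    for standard, roles in cover_sheet.items():
        for role, rfcs in roles.items():
            for number in rfcs:
                index[number].append((standard, role))

    print(index[2821])   # [('SMTP', 'replaces')]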

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread ned+ietf
> Phillip Hallam-Baker wrote:
> >
> > The fact remains that RFC 821 has the STANDARD imprimatur and the better
> > specification that was intended to replace it does not.
> >
> > It seems pretty basic to me that when you declare a document Obsolete it
> > should lose its STANDARD status. But under the current system that does not
> > happen.
> >
> > This situation has gone on now for 15 years. Why would anyone bother to put
> > time an effort into progressing documents along the three step track when
> > most of the documents at the highest rank are actually obsolete?
> >
> > What does STANDARD actually mean if the document it refers to is quite
> > likely obsolete?

Simple: It means we're letting technical correctness get in the way of clarity.

> To me it looks like "Obsolete: " has been used with quite different
> meanings across RFCs, and some current uses might be inappropriate.

> Although it's been more than two decades that I read rfc821 (and
> none of the successors), I assume that all those RFC describe _the_same_
> protocol (SMTP) and not backwards-incompatible revisions of a protocol
> family (SMTPv1,v2,v3).

That assumption is incorrect. The differences are minor, but there are
differences - a couple of things, like EHLO instead of HELO or periods in
unquoted phrases, are allowed now, whereas lots of stuff that used to be
allowed has been removed.

The protocols don't even have the same names in common usage. The term "ESMTP"
is often used to refer to the SMTP variant described in RFC 5321 (RFC 2821 is
obsolete, BTW) that uses EHLO, and "SMTP" refers to the original RFC 821
protocol.
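
As a purely illustrative aside (placeholder host names; the fallback
mirrors what smtplib's ehlo_or_helo_if_needed() already does), the naming
split shows up in the opening greeting: an ESMTP client greets with EHLO
and falls back to the original RFC 821 HELO if the server rejects it:

    import smtplib

    HOST = "mail.example.org"   # placeholder, not a real server

    server = smtplib.SMTP(HOST, 25, timeout=10)

    # ESMTP greeting; an RFC 821-only server will reject the EHLO verb.
    code, banner = server.ehlo("client.example.org")
    if code != 250:
        # Fall back to the original RFC 821 greeting.
        code, banner = server.helo("client.example.org")

    # With ESMTP, the EHLO reply advertises extensions (SIZE, STARTTLS,
    # 8BITMIME, ...); a plain RFC 821 server advertises nothing.
    print("8BITMIME supported:", server.has_extn("8bitmime"))
    server.quit()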

> I also would assume that you could implement an
> MTA with rfc2821 alone (i.e. without ever reading rfc821), that is still
> fully interoperable with an implementation of rfc821.

Fully interoperable? Not even close. A lot of stuff has been removed from RFC
5321. If someone attempts to use those RFC 821 features, things aren't going to
interoperate.

Now, the consensus is that those features are useless, dangerous, rarely if
ever implemented, or sometimes all three, and we are better off for their
absence, but that doesn't make your assertion valid.

> So for a large
> part we are looking at a revised specification of the same single protocol,

It is a revised specification and the *service* it provides remains the same.
But the protocol has changed. Some things have been removed; other things have
been added. There's fallback where it is necessary, but there are also cases
where functionality has simply been removed.

> and the term "obsoletes" should indicate that you can create an
> implementation of the protocol based solely on a newer version of the
> specification describing it and remain fully interoperable with an
> implementation of the old spec when (at least when using the mandatory
> to implement plus non-controversial recommended protocol features).

That would be an absolutely absurd requirement to impose. Full interoperability
is far too high a bar.

> For RFCs that create backwards-incompatible protocol revisions, and
> in particular when you still need the old specification to implement
> the older protocol revision, there is *NO* obsoletion of the old
> protocol by publication of the new protocol.  Examples where this
> was done correctly:  IPv4&IPv6, LDAPv2&LDAPv3, HTTPv1.0&HTTPv1.1.

That's also absurd and overly constraining. These choices aren't amenable to
being codified as a fixed set of rules. Context has to be considered.

> A sensible approach to obsolete a previous protocol version is to
> reclassify it as historic when the actual usage in the real world
> drops to insignificant levels and describe&publish that move in an
> informational RFC (I assume that is the intent of rfc-3494).

This approach may indeed be appropriate in some cases. There are bound
to be cases where it is inappropriate, though.

> Examples of clearly inappropriate "Obsoletes: " are the
> TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
> backward-incompatible protocol revisions of TLS and where the new RFCs
> specify only the behaviour of the new protocol version and even
> fail to clearly identify the backwards-incompatible changes.

> And if you look at the actual use of TLS protocol versions in the
> wild, the vast majority is using TLSv1.0, there is a limited use
> of TLSv1.1 and very close to no support for TLSv1.2.

> (examples https://www.ssllabs.com/ssldb/index.html
>  http://www.ietf.org/mail-archive/web/tls/current/msg06432.html

Whereas, in the case of email, the vast majority of MTAs now support ESMTP and
the *overwhelming* majority of MUAs support MIME.

> What irritates me slightly is that I see this announcement
> https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

> which is more of a bashing of existing and widely used versions
> of SSLv3 and TLS, instead of an effort to improve _one_ of the
> existing TLS protocol revisions and to a

Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Martin Rex
Phillip Hallam-Baker wrote:
> 
> The fact remains that RFC 821 has the STANDARD imprimatur and the better
> specification that was intended to replace it does not.
> 
> It seems pretty basic to me that when you declare a document Obsolete it
> should lose its STANDARD status. But under the current system that does not
> happen.
> 
> This situation has gone on now for 15 years. Why would anyone bother to put
> time an effort into progressing documents along the three step track when
> most of the documents at the highest rank are actually obsolete?
> 
> What does STANDARD actually mean if the document it refers to is quite
> likely obsolete?


To me it looks like "Obsolete: " has been used with quite different
meanings across RFCs, and some current uses might be inappropriate.

Although it's been more than two decades since I read rfc821 (and
none of the successors), I assume that all those RFCs describe _the_same_
protocol (SMTP) and not backwards-incompatible revisions of a protocol
family (SMTPv1,v2,v3).  I also would assume that you could implement an
MTA with rfc2821 alone (i.e. without ever reading rfc821) that is still
fully interoperable with an implementation of rfc821.  So for a large
part we are looking at a revised specification of the same single protocol,
and the term "obsoletes" should indicate that you can create an
implementation of the protocol based solely on a newer version of the
specification describing it and remain fully interoperable with an
implementation of the old spec (at least when using the mandatory-to-
implement plus non-controversial recommended protocol features).


For RFCs that create backwards-incompatible protocol revisions, and
in particular when you still need the old specification to implement
the older protocol revision, there is *NO* obsoletion of the old
protocol by publication of the new protocol.  Examples where this
was done correctly:  IPv4&IPv6, LDAPv2&LDAPv3, HTTPv1.0&HTTPv1.1.

A sensible approach to obsoleting a previous protocol version is to
reclassify it as historic when actual usage in the real world drops to
insignificant levels, and to describe and publish that move in an
informational RFC (I assume that is the intent of rfc-3494).


Examples of clearly inappropriate "Obsoletes: " are the
TLS protocol revisions (v1.1:rfc-4346 and v1.2:rfc-5246) which describe
backward-incompatible protocol revisions of TLS and where the new RFCs
specify only the behaviour of the new protocol version and even
fail to clearly identify the backwards-incompatible changes.


And if you look at the actual use of TLS protocol versions in the
wild, the vast majority is using TLSv1.0, there is a limited use
of TLSv1.1 and very close to no support for TLSv1.2.

(examples: https://www.ssllabs.com/ssldb/index.html
 and http://www.ietf.org/mail-archive/web/tls/current/msg06432.html)


What irritates me slightly is that I see this announcement
https://www.ietf.org/ibin/c5i?mid=6&rid=49&gid=0&k1=935&k2=34176&tid=1277751536

which is more of a bashing of existing and widely used versions
of SSLv3 and TLS, instead of an effort to improve _one_ of the
existing TLS protocol revisions and to advance it on the standards
maturity level and make it more easily acceptable to the marketplace.

Adding explicit indicators for backwards-incompatible protocol changes
in rfc-5246 might considerably facilitate assessing just how many
changes are necessary to an implementation of a predecessor version
of TLSv1.2.  Btw, the Signature Algorithms extension (7.4.1.4.1) appears
to be a big mess and fixing it wouldn't hurt.
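
As an illustrative aside (placeholder host name; Python's ssl module
post-dates this thread, and recent TLS stacks may refuse to enable
versions as old as TLS 1.0), an implementer typically copes with these
backwards-incompatible revisions by constraining the offered version
range and checking what was actually negotiated, rather than by
reasoning from the RFC diffs:

    import socket
    import ssl

    HOST = "example.net"   # placeholder host, illustration only

    # Constrain the offered protocol versions to a known range and let
    # the handshake pick the highest revision both ends support.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1    # still accept TLS 1.0 peers
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # offer at most TLS 1.2

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            # Reports the revision that actually went on the wire,
            # e.g. "TLSv1.2" or "TLSv1".
            print(tls.version())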

MUST requirements in a spec ought to be strictly limited to features
that are absolutely necessary for interoperability _and_ for the
existing market, not just nice to have at some time in the future.
The only TLS extension that deserves a MUST is the one described
in rfc-5746 (the renegotiation_info extension, "RI").


One of the reasons why some working groups keep recycling protocol
revisions at Proposed rather than advancing a widely deployed protocol
to Draft is that "the better is the enemy of the good".


-Martin


RE: draft-housley-two-maturity-levels-00

2010-06-28 Thread Dearlove, Christopher (UK)
Scott Lawrence wrote:
> I think that a different numbering series needs to be created so that 
> 'RFC' means what most people (incorrectly) think that it means now:
that 
> something is a standard that has passed the IETF review and approval 
> process.

I think this is far too late. Come what may, RFC 1149 (just to take one
deliberately extreme example) is forever going to be RFC 1149; it's too
late to rename it now. Separating the categories while still using "RFC"
for legacy documents is just not possible. Telling people "you can't
still call it an RFC" is not going to work.

> Only standards track documents should get an RFC number, and 
> all others (Informational, Experimental, Historic, and any other 
> archival documents we invent that are not standards track) should get 
> numbers in this new series (IAP - Internet Archived Publication ?) 
> instead.

There are a lot of important RFCs that aren't standards track, often more
important than some on the standards track. For example (not the best, just
one I happen to have to hand) RFC 2309 is the only IETF document I'm aware
of that describes RED, but it is Informational, not Standards Track.

-- 
Christopher Dearlove
Technology Leader, Communications Group
Networks, Security and Information Systems Department
BAE Systems Advanced Technology Centre
West Hanningfield Road, Great Baddow, Chelmsford, CM2 8HN, UK
Tel: +44 1245 242194  Fax: +44 1245 242124





Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-28 Thread Phillip Hallam-Baker
The fact remains that RFC 821 has the STANDARD imprimatur and the better
specification that was intended to replace it does not.

It seems pretty basic to me that when you declare a document Obsolete it
should lose its STANDARD status. But under the current system that does not
happen.

This situation has gone on now for 15 years. Why would anyone bother to put
time and effort into progressing documents along the three-step track when
most of the documents at the highest rank are actually obsolete?


What does STANDARD actually mean if the document it refers to is quite
likely obsolete?


On Fri, Jun 25, 2010 at 4:35 PM, Yoav Nir  wrote:

> On Thursday, June 24, 2010 22:01 Phillip Hallam-Baker wrote:
>
> 
> > We currently have the idiotic position where RFC821 is a full standard
> and RFC2821 which obsoletes it is not.
>
> Why is this idiotic. RFC 821 needed to be obsoleted. It had some features
> that needed to be removed, and some things that may have been appropriate in
> 1982, but no longer so in 2001. "Proposed", "Draft" and "Full" refer to the
> maturity of a standard, not to how well it fits the current Internet. One
> could argue that 821 was very mature, because it needed a revision only
> after 19 years.
>
> Just because the old standard needs replacing, does not automatically mean
> that the new standard is just as mature as the old one.
>
> It does, however mean that the distinction is meaningless to implementers.
> In 2001 or 2002 we would expect someone implementing SMTP to implement 2821,
> a proposed standard, rather than 821, a full standard. While implementing a
> full standard gives you more assurance about the quality of the spec, it
> doesn't mean that "they" are not going to obsolete it ever.
>
>


-- 
Website: http://hallambaker.com/


Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread Bill McQuillan
It seems to me that this discussion is conflating two related but distinct
things: protocols and specifications.

The IETF is concerned with producing and refining *protocols*; however the
work products are specifications(RFCs).

A *protocol* such as SMTP is very mature and thus can be used by many
different parties to enable e-mail exchange with high confidence in its
interoperability. For example, SMTP has matured over the last several
decades by adopting DNS and MX routing, creating a mechanism for allowing
enhancements (EHLO), dropping features of little use such as SEND and source
routing, and separating Submission from forwarding (SUBMIT), among others.

The *specifications* for SMTP (RFC821, etc.) have been of varying quality
measured by their accuracy in describing the *protocol*. The goal of a
specification should be its capability for allowing someone to implement
the protocol accurately, not whether the protocol itself is well designed.

Therefore I would suggest that the SMTP protocol remains a Full Standard
even while successor specifications to RFC 821, which are trying to describe
it, are cycling through levels of wordsmithing. Although the words
"Proposed" and "Draft" seem reasonable to describe these editing cycles, I
am not sure that "Full" quite captures the goal of this process.

For what it's worth.

-- 
Bill McQuillan 



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread SM

Hi Yoav,
At 00:36 27-06-10, Yoav Nir wrote:
> Yes, but most of the RFC repositories, including
> http://tools.ietf.org/html/rfc821 show "Obsoleted by: 2821" right
> there at the top next to the word "STANDARD". Anyone looking


Yes.

> at this RFC now (as opposed to 10 years ago) would immediately know
> that while this *was* a standard, it is now obsolete.


If you then go to RFC 2821, you will see that it is a Proposed
Standard which has been obsoleted by RFC 5321 (Draft Standard).  There
are implementations out there that are STD 10 compliant.  There are
still a lot of implementations out there that are RFC 2821 compliant.
I'll ignore the differences between these specifications.  Most people
mention RFC 2821 when it comes to SMTP.  Do I implement RFC 2821 or
RFC 5321?


Let's try another example.  RFC 4871 was published in May 2007 as 
Proposed Standard.  It was updated by RFC 5672 in August 2009.  Do I 
want to implement a specification that could be changed in two years 
or do I stick to the Draft or Internet Standard for maturity?


> This raises another question. What does "obsolete" mean?  RFC 821
> and RFC 2821 describe


It means that there is a newer version of the specification (RFC).

> the same standard. Upgrading implementations to comply with RFC
> 2821 was not supposed to break any connectivity. They describe the
> same protocol, so unless you are interoperating with a peer that
> implemented some deprecated features, you're good. OTOH,


In Section 3.3 of RFC 821:

 "The VRFY and EXPN commands are not included in the minimum
  implementation (Section 4.5.1), and are not required to work
  across relays when they are implemented."

In Section 3.5.2 of RFC 2821:

  "Server implementations SHOULD support both VRFY and EXPN."

In Section 3.5.2 of RFC 5321:

  "Server implementations SHOULD support both VRFY and EXPN."

If I know which specification is widespread, I can decide whether I 
should be able to rely on VRFY being available or not.  It is also 
unlikely that the VRFY is removed as the Standard is updated.  If the 
IETF decides to do that anyway, the specification is recycled at a 
lower maturity level.  In practice, it does not work like that for 
reasons I won't get into.
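
As an illustrative aside (placeholder host and mailbox; not taken from
the thread), the practical question of whether VRFY can be relied on is
usually settled by probing the server rather than by which of RFC 821,
2821, or 5321 it cites:

    import smtplib

    HOST = "mail.example.org"   # placeholder, not a real server

    server = smtplib.SMTP(HOST, 25, timeout=10)
    server.ehlo_or_helo_if_needed()

    code, reply = server.verify("postmaster")
    if code in (250, 251):
        print("VRFY answered:", reply.decode(errors="replace"))
    elif code == 252:
        print("Server declines to verify but would accept mail (252).")
    else:
        # Many deployed servers disable VRFY outright (e.g. 502), which
        # is exactly why knowing the dominant spec is not enough.
        print("VRFY not usable here, code", code)
    server.quit()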


> It's true that under the current system RFCs never change. Even
> advancing them to a higher level gives them a different number.


Actually no, the RFC only gets a different number if the text is changed.

> I don't think there's any incentive to do so.  RFC 4478 has been at
> "Experimental" for 4 years, with at least 3 independent
> implementations. But when I thought it was time to advance it to PS,
> I was told (by an AD) "why bother?". It certainly didn't stop
> implementers from implementing it.


If the specification has mind share, it will be implemented even if 
the RFC is Experimental.  If RFC 4478 fulfills the requirements for 
PS, it could be advanced unless it can be shown that it is 
technically unsound.  The PS could mean that someone bothered to look 
up the three independent implementations to see whether they are 
interoperable.  It might also help the author determine where the 
text is clearly understood.


> Also, it seems that in the last 4 years, the IETF has published only
> 3 full standards, 18 draft standards, and 740 proposed standards. I
> think this tells us that there is very little incentive for
> advancing a standard.


Or it might mean that the ADs are not interested in seeing a
specification advanced through the standards track.


If there isn't any motivation to advance a "standard", we might as 
well publish these RFCs as "Informational".


Regards,
-sm 




Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-27 Thread Yoav Nir

On Jun 26, 2010, at 12:56 AM, Phillip Hallam-Baker wrote:

> The fact remains that RFC 821 has the STANDARD imprimatur and the better 
> specification that was intended to replace it does not.

Yes, but most of the RFC repositories, including 
http://tools.ietf.org/html/rfc821 show "Obsoleted by: 2821" right there at the 
top next to the word "STANDARD". Anyone looking at this RFC now (as opposed to 
10 years ago) would immediately know that while this *was* a standard, it is 
now obsolete.

This raises another question. What does "obsolete" mean?  RFC 821 and RFC 2821 
describe the same standard. Upgrading implementations to comply with RFC 2821 
was not supposed to break any connectivity. They describe the same protocol, so 
unless you are interoperating with a peer that implemented some deprecated 
features, you're good. OTOH, looking at RFC 2409, it says that RFC 4306 
obsoletes it. But RFC 2409 is IKEv1, while RFC 4306 is IKEv2.  If you had 
upgraded an implementation to comply with RFC 4306 *instead of* RFC 2409 in 
2005, you would not be able to finish an IKE exchange at all. If you need to 
implement IKEv1 (that is still much more widely used than IKEv2), the RFC to 
look at is 2409, not 4306.  IMO this is a totally different meaning of 
"obsolete".

> It seems pretty basic to me that when you declare a document Obsolete it 
> should lose its STANDARD status. But under the current system that does not 
> happen.

It's true that under the current system RFCs never change. Even advancing them 
to a higher level gives them a different number.

> This situation has gone on now for 15 years. Why would anyone bother to put 
> time an effort into progressing documents along the three step track when 
> most of the documents at the highest rank are actually obsolete?

I don't think there's any incentive to do so.  RFC 4478 has been at 
"Experimental" for 4 years, with at least 3 independent implementations. But 
when I thought it was time to advance it to PS, I was told (by an AD) "why 
bother?". It certainly didn't stop implementers from implementing it.

Also, it seems that in the last 4 years, the IETF has published only 3 full 
standards, 18 draft standards, and 740 proposed standards. I think this tells 
us that there is very little incentive for advancing a standard.
http://www.rfc-editor.org/std-index.html
http://www.rfc-editor.org/fyi-index.html

> What does STANDARD actually mean if the document it refers to is quite likely 
> obsolete?



RE: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-25 Thread Yoav Nir
On Thursday, June 24, 2010 22:01 Phillip Hallam-Baker wrote:


> We currently have the idiotic position where RFC821 is a full standard and 
> RFC2821 which obsoletes it is not.

Why is this idiotic? RFC 821 needed to be obsoleted. It had some features that 
needed to be removed, and some things that may have been appropriate in 1982, 
but no longer so in 2001. "Proposed", "Draft" and "Full" refer to the maturity 
of a standard, not to how well it fits the current Internet. One could argue 
that 821 was very mature, because it needed a revision only after 19 years.

Just because the old standard needs replacing, does not automatically mean that 
the new standard is just as mature as the old one.

It does, however, mean that the distinction is meaningless to implementers. In 
2001 or 2002 we would expect someone implementing SMTP to implement 2821, a 
proposed standard, rather than 821, a full standard. While implementing a full 
standard gives you more assurance about the quality of the spec, it doesn't 
mean that "they" are not going to obsolete it ever.



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-25 Thread Phillip Hallam-Baker
I can't remember offhand if DNS got to full standard or not; let's say for
the sake of argument that it did.

If we want to make a significant change to DNS, such as yank out some
features that were never used, we have a minimum of about six years before
the change can be made.

First, we would have to get an I-D written and progress it through the working
group; that would be two years. Then we would have to get the proposed
standard to draft, a minimum of two years. Then we would have to go from
draft to standard, which has not happened to a DNS spec since the fall of
the Soviet Union.


We currently have the idiotic position where RFC821 is a full standard and
RFC2821 which obsoletes it is not.



On Tue, Jun 22, 2010 at 11:16 AM, Andrew Sullivan  wrote:

> On Tue, Jun 22, 2010 at 10:12:13AM +0200, Eliot Lear wrote:
>
> > Question #1: Is such a signal needed today?  If we look at the 1694
> > Proposed Standards, are we seeing a lack of implementation due to lack
> > of stability?  I would claim that there are quite a number of examples
> > to the contrary (but see below).
>
> In connection with that question, I'll observe that a very large
> number of the DNS protocol documents have not advanced along the
> standards track, and efforts to do something about that state of
> affairs have not been very successful.  In addition, any time there is
> an effort to make a change to anything already deployed is met by
> arguments that we shouldn't change the protocol in even the slightest
> detail, because of all the deployed code.  (I've been known to make
> that argument myself.)
>
> I don't know whether the DNS is special in this regard, though I have
> doubts.
>
> A
>
> --
> Andrew Sullivan
> a...@shinkuro.com
> Shinkuro, Inc.
> ___
> Ietf mailing list
> Ietf@ietf.org
> https://www.ietf.org/mailman/listinfo/ietf
>



-- 
Website: http://hallambaker.com/


Re: draft-housley-two-maturity-levels-00

2010-06-25 Thread Scott Lawrence

Scott Lawrence wrote:

>  The main drawback of this would be
>  that a document would sometimes need to exist for longer as an I-D while
>  implementations are developed, but balancing that is the fact that those
>  implementations would then inform the first RFC version rather than some
>  subsequent update, and it would be harder to get an RFC published for
>  something no one is really going to build.
 

On 2010-06-23 0:48, Russ Housley wrote:

> This would seem to encourage publication as Informational (perhaps on
> the Independent Submission Stream) as a first step.  I'm not sure that
> really reduces the work load, but it does shift it out of the standards
> track.
I think that Experimental would be more appropriate, but (and I hate to 
bring this into it yet again, because I agree that what Russ has 
suggested is a useful step and that we should not make the perfect the 
enemy of the better) I think that there is a more fundamental 'brand 
management' issue here that just must be faced:


Hardly anyone not active in the IETF (and apparently some who are) 
understands that there is a difference between the various document 
types (Informational, Experimental, Historic, and the different 
flavors of Standards Track) within the RFC series.  Product managers and 
customers rarely ask 'is this standards track in the IETF?'; they ask 
'is there an RFC for this?'.


I think that a different numbering series needs to be created so that 
'RFC' means what most people (incorrectly) think that it means now: that 
something is a standard that has passed the IETF review and approval 
process.  Only standards track documents should get an RFC number, and 
all others (Informational, Experimental, Historic, and any other 
archival documents we invent that are not standards track) should get 
numbers in this new series (IAP - Internet Archived Publication ?) 
instead.  Yes, I know that the 'STD' and/or 'BCP' numbering schemes were 
supposed to do something of this kind, but they are 1) not understood at 
all outside the IETF, and 2) not really archival, since what they point 
to can change (which from an implementation and conformance point of 
view actually makes them less useful, I think).


Many people don't even make a distinction between an I-D and an RFC, 
despite the plain boilerplate text in every I-D that explains it; once 
upon a time, the fact that an I-D disappeared helped, but the web and 
search engines demolished that practical barrier years ago.  
This argues that making the distinction I suggest may be futile, but I 
still think that it's important.


There are a lot of people out there that think 'RFC' means something 
that it actually does not, and I think we have established that we can't 
change this misconception, so I think we should just agree that from now 
on that's what it means, and create one or more new labels to mean other 
things.






Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-23 Thread Eliot Lear


On 6/23/10 3:31 PM, RJ Atkinson wrote:
> In the quote (below), I also mentioned the "various IPv6
> Profile documents around the world", which you ignore,
> apparently in order to incorrectly characterise my note 
> as using a single example.  

I did not do this, and you have ignored the language that I used, and
ascribed motives to me in the process of doing so, the very thing you
accuse me of.

Eliot


Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-23 Thread RJ Atkinson

On 23  Jun 2010, at 08:45 , Eliot Lear wrote:
> And now as to your specifics, you have placed a lot of weight on one
> example, seemingly extrapolating from it, the Joint Interoperability
> Test Command.  I have no experience working with them, and defer to
> yours.  However, when you say,

Again, you set up an incorrect strawman, and then knock it down.

In the quote (below), I also mentioned the "various IPv6
Profile documents around the world", which you ignore,
apparently in order to incorrectly characterise my note 
as using a single example.  There are a number of cases
where a large customer's requirements (in RFPs or Tender
opportunities) have driven feature priorities.  The TIC
and JITC are merely examples.  Numerous other examples
exist, including a large bank in central Europe and 
several ISPs.

>>  As examples, the JITC and TIC requirements pay a great
>>  deal of attention to whether some technology is past PS.
>>  Various IPv6 Profile documents around the world also pay
>>  much attention to whether a particular specification is
>>  past PS.
> 
> It leads to the following questions:
> 
>* Would the vendors have implemented the functionality ANYWAY? 
>  Specifically, would other RFPs have already driven vendors in this
>  direction?  Can you cite a counter example, where that was not the
>  case?

Yes.

There are certainly numerous cases where vendor implementation 
timing and new feature prioritisation were directly impacted 
by a profile document cited in some RFP, and where that profile
document's contents were directly impacted by whether a
particular technology was at Proposed Standard or some more
advanced stage in the IETF processes.

The most obvious examples come from the various IPv6 Profiles
around the world.  There are some number of these in Japan,
in Europe, in the USA, and in other countries.

Various examples also exist outside the IPv6 Profile universe,
including but not limited to large customers (e.g. the JITC and TIC).

>* Is the defense industry at all representative of the broader
>  market?  My own experience leads to an answer of, “barely at all”,
>  and this has been assuredly the case with the Internet where a
>  huge portion has run on on PS, Internet-Drafts, and proprietary
>  standards, and not waited for advancement.  Examples have included
>  BGP, MPLS-VPNs, HTTP, SSL, and Netflow, just to name a few. 

I provided non-defense examples in both my original note
(which examples you have ignored for some reason) and also
in my response above.

>> The IETF already has a tendency to be very vendor-focused &
>> vendor-driven.  It is best, however, if the IETF keeps the 
>> interests of both communities balanced (rather than tilting 
>> towards commercial vendors).
> While this is a perhaps laudable idea, someone has to do the work to get
> specifications to the next standards level.  The whole point of my
> questions is to determine what motivations that someone might have for
> actually performing that work.

I was quite detailed on that front, although you seem to have
selectively ignored that part of my note.

> There's no need to be rude or snarky with me, even if you disagree.  

I wasn't rude, and can't find "snarky" in the OED.

> You are looking at this from the angle of the customers, and that's
> perfectly reasonable.  I'm looking at it from the developers' point of
> view, and from the supply side of your equation. 

I've been both customer/user/operator and vendor/implementer
at various points in time.  So I look at it from both points
of view, and my earlier note included discussion of both
vendor advantages and user/operator/customer advantages.

It seems quite odd that you seem to have ignored my note 
so selectively.

>>  B) whether that signal has a feedback loop to implementers/
>> vendors that still works.
>>  The answer to this is also clearly YES.  Technologies that
>>  appear in RFPs or Tender Requirements have a stronger
>>  business case for vendors/implementers, hence are more
>>  likely to be widely implemented.
> 
> Certainly so, but I don't understand how you made the leap of logic from
> your question to your answer.  Do we have situations, for instance,
> where a proposed standard is compared to a draft standard, or a draft
> standard is compared to a full standard, and one is chosen over the
> other?  

Yes, we do.

> If so, are they the norm, and are they likely to drive
> implementation?  

Such decisions in various IPv6 Profiles and in large customer
requirements documents around the world (e.g. JITC, TIC) regularly have
driven implementation priorities and new feature timetables in the past.

Folks at many vendors have experienced this.  I witnessed
it at every vendor I've ever worked for.  It isn't a surprise 
that a business case would drive these things NOR is it 
a surprise that standards status would drive an RFP 
(and hence d

Re: draft-housley-two-maturity-levels-00

2010-06-23 Thread Jari Arkko
I'm with Ran and others who stated that the best is the enemy of the good in 
this case. I think Russ' draft is a step in the right direction and will 
reduce complexity and effort. An incremental improvement. We should 
adopt it in Maastricht. And let's avoid too much fine-tuning or 
fragmentation of the proposals...


Having said that, I did have a couple of other observations. First, it 
has been repeatedly noted that the IETF community has given up on advancing 
documents on the standards ladder. In some sense this is true. Out of 
the 122 documents currently in the RFC Editor queue, 0 are for Full 
Standard, 1 document (0.8%) is for Draft Standard, 8 (6%) are for 
Experimental, 28 (23%) are for Informational, and 83 (68%) are for 
Proposed Standard. However, 13 (11%) are bis documents of various sorts. 
And that's not a special occurrence; overall we produce quite a few 
revisions of existing RFCs. My interpretation is that while the 
community is not that interested in the standards levels, the IETF is 
still very interested in keeping our specifications up to date, 
correcting bugs and maybe in some cases even removing or adding some 
features. I think it is valuable work and needs to continue. And here is 
where, in my opinion, the possible value of the two-step ladder lies. The 
implementation reports may help in directing the "bis" draft to become 
simpler and based on actual experience.


But I would also be OK with a one-step model. You can draw "running 
code" support for that model from the above data.


Jari



Re: draft-housley-two-maturity-levels-00

2010-06-23 Thread Jari Arkko

Spencer,

> As I read http://www.rfc-editor.org/rfc/rfc5741.txt, Experimental RFCs 
> would be Category: Experimental on the first page, and I'd expect them 
> to be revised when they are reclassified, if only to make this say 
> Category: Standards Track. So that's at least a small barrier to 
> reclassification in place.


Yes, though that is something which could be handled by setting the 
tracker intended status and perhaps an RFC Editor note correctly to PS. 
The last call notice would say the correct thing after this, for 
instance. It's really very similar to what we've been doing for some 
documents already: "RFC such and such for Draft" when at that time such 
and such is still a PS, and a new RFC # will be allocated for the DS RFC.


Jari



Re: draft-housley-two-maturity-levels-00

2010-06-23 Thread Spencer Dawkins

Hi, Jari,

>> We should be able to say that for a particular experimental RFC there 
>> have been this many independent implementation, and they interoperate OK, 
>> and only so-and-so clarifications need to be added, and the document is 
>> ready for "Proposed".


> I think we already have that. There is really no requirement to produce a 
> new draft, for instance. You can reclassify an RFC to a different status 
> (and we do it sometimes). Additional knowledge outside the document about 
> implementations, market acceptance, and lack of problems would easily 
> convince me at least that the document is worthy of PS status.


I think a key question in choosing between a multi-level standards track and 
a single-level standards track is what the first publication step looks 
like, so I've been interested in recent proposals to use Experimental as a 
first publication step. Having said that ...


As I read http://www.rfc-editor.org/rfc/rfc5741.txt, Experimental RFCs would 
be Category: Experimental on the first page, and I'd expect them to be 
revised when they are reclassified, if only to make this say Category: 
Standards Track. So that's at least a small barrier to reclassification in 
place.


Thanks,

Spencer 




Re: draft-housley-two-maturity-levels-00

2010-06-23 Thread Jari Arkko

Yoav,
> I like this proposal, but there should be a (relatively) easy process 
> to advance from Experimental to Proposed, especially if implementation 
> experience shows no need for bits-on-the-wire changes.
>
> We should be able to say that for a particular experimental RFC there 
> have been this many independent implementation, and they interoperate 
> OK, and only so-and-so clarifications need to be added, and the 
> document is ready for "Proposed".


I think we already have that. There is really no requirement to produce 
a new draft, for instance. You can reclassify an RFC to a different 
status (and we do it sometimes). Additional knowledge outside the 
document about implementations, market acceptance, and lack of problems 
would easily convince me at least that the document is worthy of PS status.


(Of course, you might end up making an edit in the document anyway if it 
talked about experiments, and it is very likely most documents would 
have at least some details to correct.)


Jari



Re: draft-housley-two-maturity-levels-00

2010-06-23 Thread Jari Arkko

John,


> That's news to me: I can't recall any recent discusses calling for
> operational experience before publishing as Proposed Standard.
>
> Some years ago, there was such a requirement for Routing Area, but
> that was declared obsolete. (In actuality, there seems to still be a
> somewhat informal requirement to document implementations for _some_
> Routing Area documents, but it is not by IESG direction.)


AFAICT the IESG is not setting any requirements like this. We are 
careful that documents state correctly what their limitations or 
not-so-well-understood areas are. And sometimes a working group may decide 
that it does not want to forward a document for publication until it has 
some operational experience. But as a general rule the IESG is not 
demanding this.


I don't want to say that we would never do it for any document, however. 
I remember a recent case where one AD wanted operational experience on a 
new, relatively complex design. Other ADs pushed back and the document 
was approved without that experience. However, there might be cases 
where we should have that experience. One case that comes to mind is 
draft-ietf-intarea-ipv4-unique-id, which among other things updates 
RFC 791, and we would never do that without years of very widespread 
experience.


In short: it's a judgment call, but generally speaking the IESG does not 
require operational experience for PS.


Jari



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-23 Thread Eliot Lear
 Hi Ran, and thanks for your reply.

There are two separate issues that we need to distill.  First, what to
do about draft-housley-two-maturity-levels-00, and second, how to take
input to improve the overall process?

I have not really come down on one side or the other on this draft
(yet).  To be sure, two maturity levels seem better than three, and as
you know, I've proposed a single maturity level in the past, so to me,
the draft goes in the right direction.  However, I do not know how many
times we get to change this sort of procedure, and I believe the
community and IESG choice could be better informed than it is today. 
Having been involved in NEWTRK, and having produced what I think was the
only output from that group, in terms of RFCs or processes, I think I
know about which I write, when I say this community can become a bit of
an echo chamber, and could use a bit of formal academic input. 
Conveniently, there are researchers in this area.  This is an even
stronger reason for me to not state an opinion about whether to
advance the draft.

As to the questions I asked, you and I obviously hold very different
views.  In the end, it is of course for researchers (who are the real
target of my questions) to ask what questions that they think might be
telling about our process.  I hope this discussion informs them, if and
when they review it.

You claim I have a vendor bias.  Guilty, but I am also concerned that we
do the right things for the right reasons, and that motivations are
reasonably aligned so that we have some reason to believe that what is
proposed will work and not have perverse impact.  Absent some serious
analysis, we also are making the assumption that the logic of decisions
of over twenty years ago holds today, when in fact we don't really even
know if it held then.

And now as to your specifics, you have placed a lot of weight on one
example, seemingly extrapolating from it, the Joint Interoperability
Test Command.  I have no experience working with them, and defer to
yours.  However, when you say,

>   As examples, the JITC and TIC requirements pay a great
>   deal of attention to whether some technology is past PS.
>   Various IPv6 Profile documents around the world also pay
>   much attention to whether a particular specification is
>   past PS.

It leads to the following questions:

* Would the vendors have implemented the functionality ANYWAY? 
  Specifically, would other RFPs have already driven vendors in this
  direction?  Can you cite a counter example, where that was not the
  case?
* Is the defense industry at all representative of the broader
  market?  My own experience leads to an answer of, “barely at all”,
  and this has been assuredly the case with the Internet where a
  huge portion has run on PS, Internet-Drafts, and proprietary
  standards, and not waited for advancement.  Examples have included
  BGP, MPLS-VPNs, HTTP, SSL, and Netflow, just to name a few. 

But again, I would like to see a rigorous analysis, rather than simply
rely on either of our personal experiences.

> The IETF already has a tendency to be very vendor-focused &
> vendor-driven.  It is best, however, if the IETF keeps the 
> interests of both communities balanced (rather than tilting 
> towards commercial vendors).
While this is a perhaps laudable idea, someone has to do the work to get
specifications to the next standards level.  The whole point of my
questions is to determine what motivations that someone might have for
actually performing that work.

>> If we look at the 1694
>> Proposed Standards, are we seeing a lack of implementation due to lack
>> of stability?  I would claim that there are quite a number of examples
>> to the contrary (but see below).
> Wrong question.  How clever to knock down the wrong strawman.

There's no need to be rude or snarky with me, even if you disagree.  You
are looking at this from the angle of the customers, and that's
perfectly reasonable.  I'm looking at it from the developers' point of
view, and from the supply side of your equation.  Both seem reasonably
valid, and so I have no qualms with the question part of your (A),
although as I mentioned above, I question your answer.

>   B) whether that signal has a feedback loop to implementers/
>  vendors that still works.
>   The answer to this is also clearly YES.  Technologies that
>   appear in RFPs or Tender Requirements have a stronger
>   business case for vendors/implementers, hence are more
>   likely to be widely implemented.

Certainly so, but I don't understand how you made the leap of logic from
your question to your answer.  Do we have situations, for instance,
where a proposed standard is compared to a draft standard, or a draft
standard is compared to a full standard, and one is chosen over

Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Russ Housley
Bernard:

> In practice, we often see a document initial go to Proposed Standard,
> then go through a “bis” to enable clarifications and interop improvements.
> 
> Often these changes are too substantial to enable advancement to Draft,
> but they nevertheless represent an important advancement in status.   
> 
> I’d like to see some way that this advancement can be recognized formally. 

I do not see how the document we are discussing encourages or
discourages recycling at Proposed Standard.  This is common with "bis"
documents today.  If there is an interoperability report, this can
happen at the proposed Interoperability Standard maturity level too,
which is not possible under the current set of rules.

Russ


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Russ Housley

>I'm willing to be corrected: does anyone want to document a single
> case where this was required _by_the_IESG_ in the last two years?

I only know of one case where it was even discussed.  The IESG felt that
such a requirement would have needed to be stated in the WG's charter.
Since that was not the case, the discussion was abandoned.

The IESG concluded that it could require interoperability testing in the
future only if the WG was warned ahead of time by including the
requirement in their charter.

Russ


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Russ Housley
Dave & Scott:

>> On 6/20/2010 11:53 AM, SM wrote:
>>> The reader will note that neither implementation nor operational
>>> experience is required. In practice, the IESG does "require
>>> implementation and/or operational experience prior to granting Proposed
>>> Standard status".
>>
>>
>> Well, they do not /always/ require it.
>>
>>
>> That said, the fact that they often do and that we've lived with the
>> reality of that for a long time could make it interesting to simplify
>> things significantly:
>>
>>1.  Have the current requirements for Draft be the entry-level
>> requirement for a standard  -- do away with Proposed, not Draft.
>>
>>2.  Have a clear demonstration of industry acceptance (deployment
>> and use) be the criterion for "Internet Standard" (ie, Full.)
>>
>> Having two interoperable implementations required for /all/ new
>> specifications takes care of two interesting questions.
>>
>>   a.  Whether the specification can be at all understood.
>>
>>   b.  Whether there is any meaningful industry motivation to
>>   care about the work.
>>
>> With these two questions satisfied, the nature of challenges against
>> standardization might tend to be more pragmatic than theoretical.
> I strongly support this approach.  The main drawback of this would be
> that a document would sometimes need to exist for longer as an I-D while
> implementations are developed, but balancing that is the fact that those
> implementations would then inform the first RFC version rather than some
> subsequent update, and it would be harder to get an RFC published for
> something no one is really going to build.

This would seem to encourage publication as Informational (perhaps on
the Independent Submission Stream) as a first step.  I'm not sure that
really reduces the work load, but it does shift it out of the standards
track.

Russ


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Russ Housley
We often replace a Proposed Standard with an updated document that
remains at the Proposed Standard maturity level.  There does not seem to
be confusion when this happens.

Russ

On 6/20/2010 8:01 AM, Alessandro Vesely wrote:
>> "In several situations, a Standard is obsoleted by a Proposed Standard"
>>
>> A Standard is not obsoleted by a Proposed Standard. A RFC with a status
>> of Internet Standard can be obsoleted by a RFC at Proposed Standard.
> 
> In some cases, it should be possible to replace an RFC with a reviewed
> version, at the same maturity level.  For example, the attention that
> successive SMTP documents have to pay to source routing decreases over
> time.  There is no reason why a new RFC aimed at reviewing a mature spec
> would need to reduce its maturity level, if it meets the current
> requirements for the third level.  I hope this point will be made clearer.


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Russ Housley
I agree with these comments, and I'll tackle them in -01 of the draft.

Russ

On 6/20/2010 5:53 AM, SM wrote:
> In Section 6:
> 
>   'The current rule prohibiting "down references" is a major cause
>of stagnation in the advancement of documents.'
> 
> There isn't any current rule that prohibits "down references".  The
> reason for discouraging downward references is to have the specification
> at the same maturity level.
> "Downward reference by annotation" can still be used.  That allows the
> community to balance the importance of getting a document published.
> 
> In Section 7:
> 
>   "In several situations, a Standard is obsoleted by a Proposed Standard"
> 
> A Standard is not obsoleted by a Proposed Standard.  A RFC with a status
> of Internet Standard can be obsoleted by a RFC at Proposed Standard.
> 
> In Section 8:
> 
>   "On the day these changes are published as a BCP, all existing Draft
>Standard and Standard documents automatically get reclassified as
>Interoperable Standard documents"
> 
> One of the benefits of doing this is that the IP Version 6 Addressing
> Architecture can be recognized as a "Standard" for whatever definition
> of standard this community finds suitable.
> 
> This document has RFC 2606 as an Informative Reference.  That should at
> the very least be a Normative Reference.


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Martin Rex
RJ Atkinson wrote:
> 
> Rather than quibble about the details of this, I'd
> urge folks to support the move to 2-track.  
> 
> If it becomes clear later, after experience with 2-track, 
> that 2-track needs to be further refined later, then
> the community can always do that.  In the meantime, it
> is quite clear the 3-track system is not working.

I'd rather redefine the qualification criteria for the
3rd maturity level than getting rid of it.

I think it would be ridiculous if the IETF declares
a specification a "full standard" if >>90% of the
installed base is still one or more protocol revisions
behind.

Take TLS as an example.  While I assume it might be
possible to find a few interoperable implementations
of TLSv1.2 (rfc-5246, Aug-2008), it would be ridiculous
to declare this a full IETF standard, because in reality
it is not actually used.

From a recent survey done by Yngve N. Pettersen about
rfc-5746 patch status of public TLS servers on the
internet:

http://www.ietf.org/mail-archive/web/tls/current/msg06432.html

  - 99 of 383531 [servers] support TLS 1.1
  -  2 of 383531 [servers] support TLS 1.2 (both are known test servers)

A specification only deserves the "full standard" label if
a significant share (IMHO > 20%) of the installed base
actually uses that particular specification/protocol
revision.
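
For scale, those survey numbers work out to vanishingly small shares of
the surveyed installed base.  A trivial Python sketch (the 20% figure
above is only an illustrative cutoff, not an IETF rule):

    total = 383531
    cutoff = 20.0  # illustrative threshold, in percent
    for label, count in (("TLS 1.1", 99), ("TLS 1.2", 2)):
        share = 100.0 * count / total
        print("%s: %d of %d = %.4f%% of servers (cutoff met: %s)"
              % (label, count, total, share, share >= cutoff))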

If there is a lag in adoption, then this is likely an
indicator of feature creep, i.e. insufficient separation
of non-essential functionality into true options.


Look at IPv6 as another example:
Although there is a significant installed base at least in
theory, this part of the implementation is normally
disabled because it cannot be used anyway.

The full standard label should be reserved for a protocol
or technology that has achieved significant usage
in the marketplace and has thereby proven that it is an
adequate technology, living up to the market and
consumer requirements as-is.


The current "adoption lag" for several IETF specs is a clear
indicator that there is something wrong with the development
process in the IETF.

An approach of making the demonstration of independent interop
the qualifier for "full standard" is how Management nowadays
creates success -- by definition rather than by achievements.


-Martin


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread RJ Atkinson
Earlier, Mike StJohns wrote:
> One side note - MIBs.  
> 
> MIBs by their nature are actually collections of mini-standards 
> - the objects.  Once an object is defined and published in a 
> non-transitional document (RFC), the OID associated with that 
> object is pretty much stuck with that definition.  And that 
> permanence tends to percolate up to the collection of objects 
> that make up a MIB.  
> 
> I'd suggest only a single standards level for a MIB - stable - 
> tied to a specific conformance statement.  Obviously, this is 
> sort of a sketch of an idea, but given the immutability of each 
> MIB object, advancing a MIB is pretty much impossible unless 
> there are absolutely no changes.

NOTE WELL:
I would rather adopt draft-housley-two-maturity-levels 
quickly than delay it to add special text for MIBs.

That noted, I think that it isn't terribly meaningful
to talk about interoperable MIBs.  One can test whether
a device lets an SNMP client walk a particular MIB,
and one can test whether the SNMP agent inside that
device returns an in-range value for a given object.
For example, the DOCSIS RF MIB has some objects that
return an SNR or a power level.  As near as I can tell,
no one has verified that if the SNR is claimed to be
the numeric value N dBmV, the actual measured SNR in a
test environment is also N dBmV.

However, it is either very difficult or impossible 
to test whether that SNMP agent is accurately reporting 
the current value of many objects defined for MIBs.
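
To make that "walk and range-check" level of testing concrete, a
minimal Python sketch of what it amounts to is below.  It assumes the
Net-SNMP snmpwalk command-line tool is installed; the host and
community string are hypothetical placeholders, and the OID is assumed
(not verified here) to be docsIfSigQSignalNoise from the DOCSIS RF MIB,
reported in tenths of a dB:

    import subprocess

    HOST = "cm-test.example.net"          # hypothetical cable modem
    COMMUNITY = "public"                  # hypothetical community string
    OID = "1.3.6.1.2.1.10.127.1.1.4.1.5"  # assumed docsIfSigQSignalNoise (TenthdB)

    def walk_integers(host, community, oid):
        """Walk one MIB column and return the INTEGER values the agent reports."""
        out = subprocess.run(
            ["snmpwalk", "-v2c", "-c", community, "-On", host, oid],
            capture_output=True, text=True, check=True).stdout
        values = []
        for line in out.splitlines():
            # typical line: ".1.3.6.1.2.1.10.127.1.1.4.1.5.3 = INTEGER: 351"
            if "INTEGER:" in line:
                values.append(int(line.rsplit("INTEGER:", 1)[1].strip()))
        return values

    for value in walk_integers(HOST, COMMUNITY, OID):
        in_range = 50 <= value <= 600   # loose sanity window: 5.0 .. 60.0 dB
        print("SNR reading %.1f dB  in-range=%s" % (value / 10.0, in_range))

All this shows is that the agent answers and the numbers are plausible;
it cannot show that the reported SNR matches what a meter would
measure, which is exactly the gap described above.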

So MIBs probably ought to be handled differently by
the standards process.  I could see publishing all
MIBs as BCPs, for example, or as Mike suggests
publishing them directly at some other terminal level.

Yours,

Ran

 


motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread RJ Atkinson
On 22nd June 2010, at 10:12:13 CET, Eliot Lear wrote:
> This then leads to a question of motivations.  What are the motivations
> for the IESG, the IETF, and for individual implementers?  Traditionally
> for the IETF and IESG, the motivation was meant to be a signal to the
> market that a standard won't change out from underneath the developer.

The above seems fairly muddled as written.

Traditionally, "the market" refers to consumers, users, 
and operators, rather than "implementers" or "developers" 
of products.

Indeed, moving beyond Proposed Standard has long been a signal 
to users, consumers, and operators that a technology now has
demonstrated multi-vendor interoperability.  

Further, by moving technology items that lacked multi-vendor 
interoperability into optional Appendices, or downgrading
them to "MAY implement" items, that process also makes clear
which parts of the technology really were readily available, 
as different from (for example) an essentially proprietary 
feature unique to one implementation.

In turn, that tends (even now) to increase the frequency with which
a particular IETF-standardised technology appears in RFPs 
(or Tender Announcements).  That, in turn, enhances the business 
case for vendors to implement the interoperable standards.

Standards are useful both for vendors/implementers and also
for consumers/users/operators.  However, standards are useful
to those 2 different communities in different ways.  

The IETF already has a tendency to be very vendor-focused &
vendor-driven.  It is best, however, if the IETF keeps the 
interests of both communities balanced (rather than tilting 
towards commercial vendors).

> Question #1: Is such a signal needed today?  

Yes.  Users/operators/consumers actively want and need
independent validation that a standard is both interoperable
and reasonably stable.

> If we look at the 1694
> Proposed Standards, are we seeing a lack of implementation due to lack
> of stability?  I would claim that there are quite a number of examples
> to the contrary (but see below).

Wrong question.  How clever to knock down the wrong strawman.

The right questions are:
A) whether that signal is useful to consumers/users/operators 

The answer to this is clearly YES, as technologies that
have advanced beyond Proposed Standard (PS) have a higher
probability of showing up in RFPs and Tender Requirements.

As examples, the JITC and TIC requirements pay a great
deal of attention to whether some technology is past PS.
Various IPv6 Profile documents around the world also pay
much attention to whether a particular specification is
past PS.

B) whether that signal has a feedback loop to implementers/
   vendors that still works.

The answer to this is also clearly YES.  Technologies that
appear in RFPs or Tender Requirements have a stronger
business case for vendors/implementers, hence are more
likely to be widely implemented.

Items that appear in the TIC or JITC requirements are very
very likely to be broadly implemented by many network
equipment vendors.  The same is true for technologies 
in various IPv6 Profiles around the world.

> Question #2: Is the signal actually accurate?  

Yes.

> Is there any reason for a developer to believe that the day after
> a "mature" standard is announced, a new Internet Draft won't
> in some way obsolete that work? 

Again, the wrong question, and an absurdly short measurement
time of 1 day.  Reductio ad absurdum is an often used technique
to divert attention when one lacks a persuasive substantial
argument for one's position.

By definition, Internet-Drafts cannot obsolete any 
standards-track document while they remain Internet-Drafts.  

Only an IESG Standards Action can obsolete some mature standard, 
and that kind of change happens slowly, relatively infrequently, 
and with long highly-visible lead times.

> What does history say about this effort? 

History says that a move to 2-track has failed several times already
because people (e.g. Eliot Lear) quibble over the details,
rather than accept that moving to 2-track is an improvement
and that "optimum" is the enemy of "better" in this situation.

> Question #3: What does such a signal say to the IETF?  

It is a positive feedback loop, indicating that work is
stable and interoperable.  It also says that gratuitous
changes are very unlikely to happen.  By contrast, 
technologies at Proposed Standard very frequently have 
substantial changes, often re-cycling back to PS with
those major changes.

Further, the new approach will have the effect of making
it easier to publish technologies at Proposed Standard,
which would be good all around.

> I know of at least one case where work was not permitted
> in the IETF precisely because a FULL STANDARD was said
> to need soak time.  It was SNMP, and the work that was
> not permitted 

Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread RJ Atkinson
All,

I support this change, as written in Russ's draft.
This is not a surprise, as I've proposed this kind
of change myself in the past (as have several other
folks).

I see various people quibbling about aspects of this
proposal, but I haven't seen any serious defence of
the obviously broken (i.e. hasn't worked for at least
10 years now as near as I can tell) current 3-tier
system.

Rather than quibble about the details of this, I'd
urge folks to support the move to 2-track.  

If it becomes clear later, after experience with 2-track, 
that 2-track needs to be further refined later, then
the community can always do that.  In the meantime, it
is quite clear the 3-track system is not working.

I'll note that past proposals for 2-track have failed
because of such quibbles.  "Best" really is the enemy
of "better" in this situation.  Lets not freeze up
due to quibbles about micro-optimisation.

Yours,

Ran



Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Phillip Hallam-Baker
Feature 'bloat' in PKIX is largely due to the fact that the features have
already been accepted in the ITU version of the spec that is being profiled.

But the other reason is that the original PKIX core did not cover the full
set of requirements and that these were added as additional protocols
instead.

If you start from a clean slate, as I had the luxury of doing with the work
that led to XKMS and the SAML assertion infrastructure, you can implement
the whole of PKIX in a core spec of about twenty pages (not including
examples, schemas, etc).

The basic problem in PKIX is not feature bloat, it is the opposite. When
OCSP was proposed people bid the feature set down to the absolute minimum
core. So SCVP was a separate protocol and not an integral part of OCSP as
would have been my approach. And despite each being more complex than XKMS,
SCVP and OCSP combined provide less functionality than XKMS/XKISS.

That is not due to better designers, it is due to being able to look at the
totality of the requirements at once rather than discovering them on the
way.

If you look at a certificate, a CRL and an OCSP response, you will note that
all three share a set of core properties, but they are different entities in
PKIX because they were designed as such. If you have the luxury of a
redesign you can make something simpler. But the only way to slim the spec
down otherwise is to hack out chunks of the spec that people are using - or
at least are going to say they are using.

I can't see anything good coming from an attempt to slim down specs after
PROPOSED. Unless the decision to deprecate a set of functionality is
genuinely uncontroversial there is going to be a faction looking to protect
their code. And IETF process gives them an endless series of opportunities
to do so. Some DNSSEC folk spent four years and then another three years
resisting two changes to their spec that were asserted to be absolutely
necessary to make deployment possible. Trying to remove functionality at
stage three because some people felt the problem should have a simpler
solution is a recipe for paralysis and a huge amount of make-work that will
probably never result in a single feature being deleted.

Take PKIX policy constraints for example. 99% of all Internet applications
would work just fine without any of that code. But there is one very
specific party that has rather a lot of code that is based on the premise
that the code will be there. And nobody can know whether my 99% guess is
accurate or not; it might be 90%, or I might be completely wrong and everyone
uses that feature. The point is that nobody is going to know for sure what
people have built on top of a protocol expecting some proposed feature to
stay in the spec. There could be absolutely nobody out there actually using
that stuff in a real application and it would be almost impossible to
distinguish that case from 'almost nobody'. How do you prove a negative?


In short, the reason a lot of specs are too complex is not that they try to
do too much. It is the opposite: the original spec did not have enough
functionality, and adding this functionality as extensions led to a more
complex result than could have been achieved with a larger set of initial
requirements.


On Mon, Jun 21, 2010 at 11:45 AM, Martin Rex  wrote:

> Dave CROCKER wrote:
> >
> > Interoperability testing used to be an extremely substantial
> > demonstration of industry interest and of meaningful learning.
> > The resulting repair and streamlining of specifications was
> > significant.  If that's still happening, I've been missing the
> > reports about lessons learned, as well as indications that
> > significant protocol simplifications have resulted.
> > While the premise of streamlining specifications, based on
> > interoperability testing, is a good one, where is the indication that
> > it is (still) of interest to industry?  (I believe that most protocols
> > reaching Proposed these days already have some implementation
> > experience; it's still not required, but is quite common, no?)
> >
> > My own proposal was to have the second status level simply take note of
> > industry acceptance.  It would be a deployment and use acknowledgement,
> > rather than a technical assessment.  That's not meant to lobby for it,
> > but rather give an example of a criterion for the second label that is
> > different, cheap and meaningful.  By contrast, history has demonstrated
> > that Draft is expensive and of insufficient community interest.
> > We might wish otherwise, but community rough consensus on the point
> > is clear.  We should listen to it.
>
> I would prefer if the IETF retains the third level and puts an emphasis
> on cutting down on protocol feature bloat when going from draft to
> full standard.
>
> What I see happening is that Proposed Standards often start out with
> a lot of (unnecessary) features, and some of them even inappropriately
> labelled as "MUST implement".
>
> The draft standard only does some int

Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Phillip Hallam-Baker
Another thing to consider here is the point at which code points are
assigned.

In a lot of cases we have had incompatibility resulting from experimental
code points being used and then changed when the final draft is agreed.

For some spaces code points are scarce and it is necessary to conserve. But
for most there is no real harm done by wasting a few. And in the case of
OIDs and URIs the code space is infinite.

On Mon, Jun 21, 2010 at 4:33 PM, Michael StJohns wrote:

> I think it's a good idea to readdress this.  Part of the issue with the
> current system is that there is both no great benefit to advancing a
> standard to the next level for the advocates, and no real downside to not
> advancing it.  In many cases, having gone through the pain of getting to RFC
> status, one is unwilling to place their body in the firing line again.  Any
> change to the system should consider the real world implications and try and
> add in the appropriate carrots and sticks.
>
> One side note - MIBs.  MIBs by their nature are actually collections of
> mini-standards - the objects.  Once an object is defined and published in a
> non-transitional document (RFC), the OID associated with that object is
> pretty much stuck with that definition.  And that permanence tends to
> percolate up to the collection of objects that make up a MIB.
>
> I'd suggest only a single standards level for a MIB - stable - tied to a
> specific conformance statement.  Obviously, this is sort of a sketch of an
> idea, but given the immutability of each MIB object, advancing a MIB is
> pretty much impossible unless there are absolutely no changes.
>
> Mike
>



-- 
Website: http://hallambaker.com/


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Polk, William T.
On 6/21/10 1:12 PM, "Scott Lawrence"  wrote:

> On 2010-06-20 10:41, Dave CROCKER wrote:
>> 
>> 
>> On 6/20/2010 11:53 AM, SM wrote:
>>> The reader will note that neither implementation nor operational
>>> experience is required. In practice, the IESG does "require
>>> implementation and/or operational experience prior to granting Proposed
>>> Standard status".
>> 
>> 
>> Well, they do not /always/ require it.
>> 
>> 
>> That said, the fact that they often do and that we've lived with the
>> reality of that for a long time could make it interesting to simplify
>> things significantly:
>> 
>>1.  Have the current requirements for Draft be the entry-level
>> requirement for a standard  -- do away with Proposed, not Draft.
>> 
>>2.  Have a clear demonstration of industry acceptance (deployment
>> and use) be the criterion for "Internet Standard" (ie, Full.)
>> 
>> Having two interoperable implementations required for /all/ new
>> specifications takes care of two interesting questions.
>> 
>>   a.  Whether the specification can be at all understood.
>> 
>>   b.  Whether there is any meaningful industry motivation to
>>   care about the work.
>> 
>> With these two questions satisfied, the nature of challenges against
>> standardization might tend to be more pragmatic than theoretical.
> I strongly support this approach.  The main drawback of this would be
> that a document would sometimes need to exist for longer as an I-D while
> implementations are developed, but balancing that is the fact that those
> implementations would then inform the first RFC version rather than some
> subsequent update, and it would be harder to get an RFC published for
> something no one is really going to build.

At first blush, I like this approach (doing away with Proposed rather than
Draft) as well, although I see rather different process implications.

In many ways, this still supports a three-step maturity process. If two
independent implementations are not available, then the I-D can still be
submitted for publication as an Informational RFC (or Experimental).  Once
two implementations are available, the document could be resubmitted for
Draft Standard.  

This is still better than the current three step process for a number of
reasons: 
(1) the line between proposed standard and informational rfc is often
blurry, but the difference between informational and draft standard is
relatively clear; 
(2) there is no way to differentiate between a proposed standard with lots
of supporting code, and one that has little implementation support, but
neither is likely to be progressed to draft these days; and
(3) a streamlined two-step maturity process is available to specifications
that have significant support.

Tim



Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Phillip Hallam-Baker
I disagree.

For changes such as DNSSEC there is no way to move as many parts of the
industry as need to be involved on the strength of an Internet Draft alone.
Microsoft is not going to implement a draft in Windows Server, and neither
is Apple.

Operational experience in this case means at a minimum taking two conformant
DNS servers and having them exchange messages successfully. But that is
several orders of magnitude less than Internet wide deployment.
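
As a rough illustration of how low that minimum bar is (a sketch only:
the server and zone names are placeholders, and it assumes the dig
utility from BIND is installed), checking that a signed answer with
RRSIG records comes back takes only a few lines:

    import subprocess

    SERVER = "ns1.example.net"   # hypothetical server under test
    ZONE = "example.com"         # hypothetical signed zone

    # Ask for the SOA with DNSSEC records.  A conformant signed setup should
    # return RRSIG records alongside the answer; a validating resolver would
    # additionally set the "ad" flag in the header.
    out = subprocess.run(
        ["dig", "+dnssec", "@" + SERVER, ZONE, "SOA"],
        capture_output=True, text=True, check=True).stdout

    flags_line = next((l for l in out.splitlines() if "flags:" in l), "")
    print("RRSIG present:", "RRSIG" in out)
    print("header flags: ", flags_line.strip())

Passing that kind of check says nothing about whether resolvers across
the Internet are actually validating, which is the point.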

But anyone who knows PKI and looks at the current specs knows that what is
described there is not sufficient to deploy on. No liability model for a
start. And the assumption that a new form of PKI is going to suddenly deploy
in three weeks time using a technology base entirely different from
X.509v3/PKIX without any concurrence between the two is interesting to say
the least.

So when an infrastructure change is being proposed there has to be a
starting point for a technical discussion that has a pretty high degree of
buy-in, even though the ultimate shape of the infrastructure is unknowable
at that point. Most of what is finally deployed as DNSSEC will look like the
current proposal. But there will be important differences and those need to
be captured.


On Sun, Jun 20, 2010 at 10:41 AM, Dave CROCKER  wrote:

>
>
> On 6/20/2010 11:53 AM, SM wrote:
>
>> The reader will note that neither implementation nor operational
>> experience is required. In practice, the IESG does "require
>> implementation and/or operational experience prior to granting Proposed
>> Standard status".
>>
>
>
> Well, they do not /always/ require it.
>
>
> That said, the fact that they often do and that we've lived with the
> reality of that for a long time could make it interesting to simplify things
> significantly:
>
>   1.  Have the current requirements for Draft be the entry-level
> requirement for a standard  -- do away with Proposed, not Draft.
>
>   2.  Have a clear demonstration of industry acceptance (deployment and
> use) be the criterion for "Internet Standard" (ie, Full.)
>
> Having two interoperable implementations required for /all/ new
> specifications takes care of two interesting questions.
>
>  a.  Whether the specification can be at all understood.
>
>  b.  Whether there is any meaningful industry motivation to
>  care about the work.
>
> With these two questions satisfied, the nature of challenges against
> standardization might tend to be more pragmatic than theoretical.
>
>
> d/
>
> --
>
>  Dave Crocker
>  Brandenburg InternetWorking
>  bbiw.net
>



-- 
Website: http://hallambaker.com/


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Dave CROCKER



On 6/21/2010 5:57 PM, Peter Saint-Andre wrote:

Here's an idea:
1. The first level is simply "Request for Comments".
Once an RFC is published, we start to gather comments.



Peter,

How is that different from an Internet Draft?

d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Dave CROCKER



On 6/21/2010 5:45 PM, Martin Rex wrote:

I would prefer if the IETF retains the third level and puts an emphasis
on cutting down on protocol feature bloat when going from draft to
full standard.



OK.  All you need is to develop an IETF rough consensus in support of that 
change.

d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net


Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Phillip Hallam-Baker
In reply to a number of different threads:

* This proposal, however flawed some might think it is, is certainly a much
better description of current practice than the process document.

It is a fact that almost every 'IETF Standard' of any consequence was
developed before the first meeting of the IETF. Meanwhile the most
consequential IETF Protocol developed since is not an IETF standard. Except
for MIBs, the only protocol documents that have achieved standard status
have done so by being grandfathered.

My principal criticism of the existing process has always been that the IESG
has been unwilling to either apply it as written or change the description
to match the practice.


* The reluctance to spend time on progression to DRAFT status is not
significant.

There is no way that I could justify the time and expense required to
progress a spec from Proposed to Draft as matters stand because it is a
mid-point to a destination I do not expect to reach. There is really no
audience for which the distinction between proposed and draft is going to
encourage adoption of a specification.

I would not expect every Internet spec to go through the full process,
though. In fact there would be little point if they did. I would expect most
Internet specs to stop at stage 1, with only foundational specs going to
stage 2, and then only if they had been successful.

For example, TLS and PKIX are clearly very successful and foundational.
Stuff gets built on them both all the time. DKIM on the other hand is not
foundational in the same way. At least not at present. And many of the
design decisions taken in DKIM were made for reasons of expediency and not
necessarily something you would want to see copied.

If people do start using the DKIM approach as foundational for other specs
then we really should go back and revisit some of the design decisions and
progress the spec to the next level. Otherwise there is no point.

Another similar example is DNSSEC, which is not going to be a real standard
until it is deployed and being used to actually reject traffic. Any proposal
of any real consequence is going to have to change during deployment (if
deployment succeeds). So having two stages makes sense to capture the
descriptions before and after.


* Current Proposed is actually the original requirement for draft

If you look at current practice, there is in fact usually a third stage, only
it occurs before the Proposed RFC is published. In the old days it was
acceptable to throw some ideas together and slap out a 'proposed' RFC at an
early stage of development; back then, RFC really did mean Request For
Comments.

Today that step usually takes place outside the IETF or in Internet-drafts.
In fact it is usually encouraged for a group of proposers to have developed
something and have a writeup before going for a BOF. Such documents
frequently end up as 'informational'.

Current requirements for publishing a Proposed Standard are considerably
higher than the original requirements for a Draft Standard.


* Down References do not cause harm

The only criterion for accepting a reference should be that the description
is sufficiently accessible and sufficiently well defined to enable
interoperable implementations.

I don't think it helped matters in the slightest to delay the publication of
specs depending on PKIX while PKIX was revised to draft standard. The parts
of SMIME and TLS that depended on PKIX were not the parts that were blocking
the progress of PKIX.

Similarly, there should be some language in there to point out that a
reference to a 50 year old expired patent is not a reason to object to a
standards proposal. Referencing a patent does not change the liability
incurred in the slightest. If the patent is enforceable it will apply
whether cited in the text or not. The reason to avoid references to patents
is that patents do not usually have a sufficiently specific description of
the process to be implementable without ambiguity.


* Internet Standard status should require periodic review

I think it is pretty obvious that the Internet mail standard is not the one
described in RFC821 and RFC822. It is even pretty obvious that you need to
know and implement more than RFC2821 and RFC2822 if you want to get mail to
arrive successfully.

Achieving standards status should not be the end of the matter.

Rather than having a third stage in the standards process I would like to
see a periodic examination of standards to see if what they describe is a
sufficiently complete description of reality.

Any worthwhile standard is going to evolve or die. Today NNTP and FTP are
still in the canon, but further development is clearly impossible; they have
both been overtaken by events.

PKIX is going to remain relevant for some decades yet. But it will grow and
change and as it does some parts will need some pruning.


RE: draft-housley-two-maturity-levels-00

2010-06-22 Thread Bernard Aboba
Overall, I'd suggest that the goal should be to merely recognize and
document the maturity levels that already exist in practice, not to change
them.  

 

My understanding is that the  process for advancing from Experimental to
Proposed today largely involves review of implementation experience (e.g.
the results of the "experiment"), and in Transport, a demonstration that the
proposal is not catastrophic to the Internet.  Sometimes the changes
required can be substantial (e.g. changes made in EAP going from RFC 2284 to
3748, and in SIP going from 2543 to 3261),  but I don't think this should
hinder advancement to Proposed.   Problems in interoperability are often
addressed in "bis" documents, so we shouldn't require a detailed interop
assessment either (that's for Draft level).   I can imagine blocking
advancement of a "bis" to Proposed in only a limited number of situations,
such as where there was no implementation experience.  Today "recycling at
Experimental" is pretty rare -- if there is motivation for a "bis" typically
this implies that there was interest/usefulness.   

 

From: Yoav Nir [mailto:y...@checkpoint.com] 
Sent: Tuesday, June 22, 2010 1:00 AM
To: Bernard Aboba
Cc: ietf@ietf.org
Subject: Re: draft-housley-two-maturity-levels-00

 

I like this proposal, but there should be a (relatively) easy process to
advance from Experimental to Proposed, especially if implementation
experience shows no need for bits-on-the-wire changes.

 

We should be able to say that for a particular experimental RFC there have
been this many independent implementations, and they interoperate OK, and
only so-and-so clarifications need to be added, and the document is ready
for "Proposed".

 

On Jun 21, 2010, at 9:09 PM, Bernard Aboba wrote:





Russ,

 

I'd also like to thank you for revisiting this topic.

 

I support the recommendation to eliminate the "Standard" maturity level, and
also agree with your recommendation on Maturity Level 2 (similar to Draft
Standard).

 

We need more thought on what to do with the other levels though.

 

In practice, we often see a document initially go to Proposed Standard, then
go through a "bis" to enable clarifications and interop improvements.

 

Often these changes are too substantial to enable advancement to Draft, but
they nevertheless represent an important advancement in status.   

 

I'd like to see some way that this advancement can be recognized formally. 

 

Also, in some areas (e.g. Transport) the first stage is publication of an
Experimental RFC.  These documents are published with the understanding that
implementation experience will be incorporated into a future revision.

 

So perhaps the hierarchy should be:

 

a.   Experimental. 

b.  Proposed Standard (e.g. a "bis").

c.   Interoperable Standard/Draft Standard.



 



Re: motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread Andrew Sullivan
On Tue, Jun 22, 2010 at 10:12:13AM +0200, Eliot Lear wrote:

> Question #1: Is such a signal needed today?  If we look at the 1694
> Proposed Standards, are we seeing a lack of implementation due to lack
> of stability?  I would claim that there are quite a number of examples
> to the contrary (but see below).

In connection with that question, I'll observe that a very large
number of the DNS protocol documents have not advanced along the
standards track, and efforts to do something about that state of
affairs have not been very successful.  In addition, any effort to make
a change to anything already deployed is met by arguments that we
shouldn't change the protocol in even the slightest
detail, because of all the deployed code.  (I've been known to make
that argument myself.) 

I don't know whether the DNS is special in this regard, though I have
doubts.

A

-- 
Andrew Sullivan
a...@shinkuro.com
Shinkuro, Inc.


motivations (was: Re: draft-housley-two-maturity-levels-00)

2010-06-22 Thread Eliot Lear
 Russ,

Thank you for bringing this topic full circle.  Having considered this
topic for a long time, and having led a cleanup around old standards in
newtrk, I share the following thoughts for your consideration.

In concurring in part with what Bernard and Mike wrote, the basic
question I ask, in several parts, is whether standards maturity levels
have been overtaken by events or time.  For each level above PS, a
substantial amount of work must go into advancement, perhaps without a
single line of code actually being written or changed by implementers. 
This then leads to a question of motivations.  What are the motivations
for the IESG, the IETF, and for individual implementers?  Traditionally
for the IETF and IESG, the motivation was meant to be a signal to the
market that a standard won't change out from underneath the developer.

Question #1: Is such a signal needed today?  If we look at the 1694
Proposed Standards, are we seeing a lack of implementation due to lack
of stability?  I would claim that there are quite a number of examples
to the contrary (but see below).

Question #2: Is the signal actually accurate?  Is there any reason for a
developer to believe that the day after a "mature" standard is
announced, a new Internet Draft won't in some way obsolete that work? 
What does history say about this effort? 

Question #3: What does such a signal say to the IETF?  I know of at
least one case where work was not permitted in the IETF precisely
because a FULL STANDARD was said to need soak time.  It was SNMP, and
the work that was not permitted at the time was what would later become
ISMS.

Question #4:  Is there a market advantage gained by an implementer
working to advance a specification's maturity?  If there is none, then
why would an implementer participate?  If there *is* a market advantage,
is that something a standards organization wants?  Might ossification of
a standard retard innovation by discouraging extensions or changes?

Question #5:  Are these the correct questions, and are there others that
should be asked?

I do not mean to answer these research questions here, but I claim
that they should be answered, perhaps with some academic rigor, and may
be a worthy subject for the economics and policy research group that
Aaron is considering.

Referring to SM's and Dave's messages, judging maturity based on
industry-wide acceptance requires a similar analysis, but with an added
twist: the grey areas of industry acceptance make these sorts of
decisions for the IESG rather difficult.  Having gone through the
effort, it was clear to me that there have been some losers, and I think
we can all spot some winners that remain at Proposed.

Eliot



Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Yoav Nir
I like this proposal, but there should be a (relatively) easy process to 
advance from Experimental to Proposed, especially if implementation experience 
shows no need for bits-on-the-wire changes.

We should be able to say that for a particular experimental RFC there have been 
this many independent implementations, and they interoperate OK, and only 
so-and-so clarifications need to be added, and the document is ready for 
"Proposed".

On Jun 21, 2010, at 9:09 PM, Bernard Aboba wrote:

Russ,

I'd also like to thank you for revisiting this topic.

I support the recommendation to eliminate the “Standard” maturity level, and 
also agree with your recommendation on Maturity Level 2 (similar to Draft 
Standard).

We need more thought on what to do with the other levels though.

In practice, we often see a document initially go to Proposed Standard, then go 
through a “bis” to enable clarifications and interop improvements.

Often these changes are too substantial to enable advancement to Draft, but 
they nevertheless represent an important advancement in status.

I’d like to see some way that this advancement can be recognized formally.

Also, in some areas (e.g. Transport) the first stage is publication of an 
Experimental RFC.  These documents are published with the understanding that 
implementation experience will be incorporated into a future revision.

So perhaps the hierarchy should be:

a.   Experimental.
b.  Proposed Standard (e.g. a “bis”).
c.   Interoperable Standard/Draft Standard.




Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread Yoav Nir
I don't think I agree with this.

On Jun 21, 2010, at 6:45 PM, Martin Rex wrote:
> 
> I would prefer if the IETF retains the third level and puts an emphasis
> on cutting down on protocol feature bloat when going from draft to
> full standard.

You want to be very careful cutting down on feature bloat. Some people may be 
using those features you consider "bloat". The right time to cut down on bloat 
is before publication of the original RFC. That's when it gets the most 
scrutiny, and that's the time to tell the author(s) that certain features 
should either be clearly OPTIONAL (aka MAY), or be cut out entirely and placed in 
an extension document that may or may not later be advanced in maturity level.

> What I see happening is that Proposed Standards often start out with
> a lot of (unnecessary) features, and some of them even inappropriately
> labelled as "MUST implement".

Perhaps this should explicitly be part of the review process. Think of a 
minimal implementation, and make sure all the features it doesn't need are 
optional.

> The draft standard only does some interop testing on a small number
> of implementations, not unlikely those participating the standardization
> process.  It neither addresses what subset other implementations implement
> and what subset is actually necessary for the general use case in the
> installed base.

The small group of those participating in the standardization process doesn't 
necessarily change later. Even if more implementers have joined the fray, they 
don't necessarily come to the IETF. Their "contribution" is only reflected in 
"horror stories" from the same implementers of the original standard.

With the TLS renegotiation thing late last year, some people thought that five 
leading implementations were responsible for almost all of TLS. It later turned 
out that there were dozens of implementations in active use. And yet, most of 
these implementers either don't participate in the TLS WG, or don't identify as 
such. I had no idea SAP had their own TLS implementation, although you had 
participated in the TLS WG for a while, and I have never said anything about 
Check Point's TLS implementation.

> One of the worst feature bloat examples is PKIX.
> 
> It contains an awkward huge number of features that a number of
> implementations do not support -- and work happily without.
> There should either be a split of e.g. 5280 into a "basic profile"
> and a "advanced feature profile", or the status for some of the
> extensions should be fixed from "MUST implement" to "SHOULD implement"
> to match the real world and real necessity.

I don't like SHOULDs that only a small subset implement. Advanced features 
beyond the basic profile should not be an all-or-nothing thing like an 
"advanced feature profile" implies.



Re: draft-housley-two-maturity-levels-00

2010-06-22 Thread SM

At 05:01 20-06-10, Alessandro Vesely wrote:
For example, the attention that successive SMTP documents have to
pay to source routing decreases over time.  There is no reason why a 
new RFC aimed at reviewing a mature spec would need to reduce its 
maturity level, if it meets the current requirements for the 
third level.  I hope this point will be made clearer.


In theory, major clean-ups are done for the Draft Standard.  The 
third level is about significant implementation and successful 
operational experience.  There is no reduction in maturity level for 
a clean-up.


At 07:41 20-06-10, Dave CROCKER wrote:

Well, they do not /always/ require it.


It's an unpublished guideline followed by authors to enter the 
publication loop.


That said, the fact that they often do and that we've lived with the 
reality of that for a long time could make it interesting to 
simplify things significantly:


   1.  Have the current requirements for Draft be the entry-level 
requirement for a standard  -- do away with Proposed, not Draft.


   2.  Have a clear demonstration of industry acceptance 
(deployment and use) be the criterion for "Internet Standard" (ie, Full.)


What should be done if there isn't significant deployment and use?

At 08:45 21-06-10, Martin Rex wrote:

I would prefer if the IETF retains the third level and puts an emphasis
on cutting down on protocol feature bloat when going from draft to
full standard.


That could be added as a requirement.  It may turn into a significant 
effort and produce the same results as the current situation.


At 08:57 21-06-10, Peter Saint-Andre wrote:

Here's an idea:

1. The first level is simply "Request for Comments".


That would not fit as an intended status.

At 10:46 21-06-10, John Leslie wrote:

   That's news to me: I can't recall any recent discusses calling for
operational experience before publishing as Proposed Standard.


It's an expectation that the authors have accepted and not a DISCUSS topic.


   In truth, there are interlocking reasons why advancement beyond
Proposed Standard is so difficult -- but I'd like to call attention
to one particular reason: the IESG is overworked.

   Look at any bi-weekly agenda.

   Count the pages in the I-Ds.


Yes, they do end up getting a lot of work.


   Glance through some of the questions raised. Even if you think
the majority of them are spurious, documents _do_ reach the IESG
in a state which essentially precludes implementation working only
from the document.


Yes.

At 11:09 21-06-10, Bernard Aboba wrote:

So perhaps the hierarchy should be:

a.   Experimental.


"Experimental" could be used to get a stable version published.  Does 
the fact that there are more "Proposed Standard" RFCs compared to 
"Experimental" RFCs suggest anything?


Regards,
-sm 




RE: draft-housley-two-maturity-levels-00

2010-06-21 Thread Ross Callon

The only thing that I disagree with in the draft is the term "interoperable 
standard". Looking at each change in a bit more detail: 

 - Two levels of standards rather than three:  
I strongly support this. It is pretty clear that in most cases people don't 
bother with the effort to move past Proposed Standard. The different between 
"proposed" and "draft" seems too small to be worth the trouble. If there was 
only one step required after "proposed", then people *might* bother to do it. 
Moving to two steps instead of three is therefore a step in the right direction 
(and IMHO a smaller step that is clearly in the right direction is usually 
preferred to a larger step that might or might not be in the right direction). 

 - Calling the second (more mature) step "Interoperable Standard":  
I don't like this, for the simple reason that it makes it sound as if "proposed 
standard" might not be interoperable. In practice there are lots of proposed 
standard documents that are widely deployed in multi-vendor networks, and 
interoperate just fine (in many cases multi-vendor deployment begins well 
before the document is submitted for publication as a proposed standard). I 
prefer the term "internet standard" that some others have proposed, or "full 
standard" would be fine also. 

 - Removing the requirement for a six month wait between "proposed standard" 
and the next step: 
I don't have an opinion on this.

 - Removing the required review of proposed standards every two years: Support. 
Given that we have never done this, it seems like a very good idea to write the 
rules to match reality. 

 - Allowing Downward References (from "internet standard" to "proposed 
standard"):
Support. I can recall lots of cases where downward references resulted in 
slowing down document publication, required second (or third) IETF last calls, 
and caused more work for Area Directors (and/or WG chairs and authors). I can't 
think of any cases where restricting downrefs actually helped, nor caused 
anyone to respond to the extra IETF last call on the issue. 

 - Abolishing STD numbers:
Support. 

 - Transition to new scheme: All existing "draft" and "full" standards get 
moved to the new more mature standards level.
Support. 

Russ, thanks for putting this together. 

Ross



Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Michael StJohns
I think it's a good idea to readdress this.  Part of the issue with the current 
system is that there is both no great benefit to advancing a standard to the 
next level for the advocates, and no real downside to not advancing it.  In 
many cases, having gone through the pain of getting to RFC status, one is 
unwilling to place their body in the firing line again.  Any change to the 
system should consider the real world implications and try and add in the 
appropriate carrots and sticks.

One side note - MIBs.  MIBs by their nature are actually collections of 
mini-standards - the objects.  Once an object is defined and published in a 
non-transitional document (RFC), the OID associated with that object is pretty 
much stuck with that definition.  And that permanence tends to percolate up to 
the collection of objects that make up a MIB.  

I'd suggest only a single standards level for a MIB - stable - tied to a 
specific conformance statement.  Obviously, this is sort of a sketch of an 
idea, but given the immutability of each MIB object, advancing a MIB is pretty 
much impossible unless there are absolutely no changes.

Mike



Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Bernard Aboba
Russ,

 

I'd also like to thank you for revisiting this topic. 

 

I support the recommendation to eliminate the "Standard" maturity level, and
also agree with your recommendation on Maturity Level 2 (similar to Draft
Standard). 

 

We need more thought on what to do with the other levels though. 

 

In practice, we often see a document initially go to Proposed Standard, then
go through a "bis" to enable clarifications and interop improvements. 

 

Often these changes are too substantial to enable advancement to Draft, but
they nevertheless represent an important advancement in status.   

 

I'd like to see some way that this advancement can be recognized formally.  

 

Also, in some areas (e.g. Transport) the first stage is publication of an
Experimental RFC.  These documents are published with the understanding that
implementation experience will be incorporated into a future revision. 

 

So perhaps the hierarchy should be:

 

a.   Experimental.  

b.  Proposed Standard (e.g. a "bis").

c.   Interoperable Standard/Draft Standard.



Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Paul Hoffman
At 1:12 PM -0400 6/21/10, Scott Lawrence wrote:
>On 2010-06-20 10:41, Dave CROCKER wrote:
>>
>>
>>On 6/20/2010 11:53 AM, SM wrote:
>>>The reader will note that neither implementation nor operational
>>>experience is required. In practice, the IESG does "require
>>>implementation and/or operational experience prior to granting Proposed
>>>Standard status".
>>
>>
>>Well, they do not /always/ require it.
>>
>>
>>That said, the fact that they often do and that we've lived with the reality 
>>of that for a long time could make it interesting to simplify things 
>>significantly:
>>
>>   1.  Have the current requirements for Draft be the entry-level requirement 
>> for a standard  -- do away with Proposed, not Draft.
>>
>>   2.  Have a clear demonstration of industry acceptance (deployment and use) 
>> be the criterion for "Internet Standard" (ie, Full.)
>>
>>Having two interoperable implementations required for /all/ new 
>>specifications takes care of two interesting questions.
>>
>>  a.  Whether the specification can be at all understood.
>>
>>  b.  Whether there is any meaningful industry motivation to
>>  care about the work.
>>
>>With these two questions satisfied, the nature of challenges against 
>>standardization might tend to be more pragmatic than theoretical.
>I strongly support this approach.  The main drawback of this would be that a 
>document would sometimes need to exist for longer as an I-D while 
>implementations are developed, but balancing that is the fact that those 
>implementations would then inform the first RFC version rather than some 
>subsequent update, and it would be harder to get an RFC published for 
>something no one is really going to build.

It would only be harder to get a standards track RFC published for something no 
one is really going to build: there will still be Experimental and 
Informational RFCs.

Such a change would put an new and interesting set of pressures on WGs, and on 
individuals who go through the individual submission process for standards 
track. It is well worth considering.

--Paul Hoffman, Director
--VPN Consortium


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread John Leslie
Dave CROCKER  wrote:
> On 6/20/2010 11:53 AM, SM wrote:
> 
>> The reader will note that neither implementation nor operational
>> experience is required. In practice, the IESG does "require
>> implementation and/or operational experience prior to granting Proposed
>> Standard status".

   That's news to me: I can't recall any recent discusses calling for
operational experience before publishing as Proposed Standard.

   Some years ago, there was such a requirement for Routing Area, but
that was declared obsolete. (In actuality, there seems to still be a
somewhat informal requirement to document implementations for _some_
Routing Area documents, but it is not by IESG direction.)

> Well, they do not /always/ require it.

   I'm willing to be corrected: does anyone want to document a single
case where this was required _by_the_IESG_ in the last two years?

> That said, the fact that they often do and that we've lived with the 
> reality of that for a long time could make it interesting to simplify 
> things significantly:
> 
> 1.  Have the current requirements for Draft be the entry-level 
> requirement for a standard  -- do away with Proposed, not Draft.

   That strikes me as a terrible idea.

   I can think of several cases where WG LastCall was postponed until
two implementations were demonstrated.

   By that time, folks were so set on their implementations that it
proved "just too difficult" to dislodge that with facts about faults
in the specification.

   (I realize that only flame-wars can follow such a statement as
"the reason" to keep 2026 the way it is for Proposed Standard. I
shall try to find a sufficiently flame-proof suit...)

   Please consider human nature here. We do have folks that are good
at pointing out theoretical weaknesses of proposed algorithms. But
they stand no chance of prevailing against "running code" that's
already deployed in the field.

   I grant that IETF runs on "rough consensus and running code"; and
I have no particular interest in even _trying_ to change that.

   But can't we PLEASE keep the review stage before we cast the
"running code" in silicon?

> 2.  Have a clear demonstration of industry acceptance (deployment and 
> use) be the criterion for "Internet Standard" (ie, Full.)

   In truth, there are interlocking reasons why advancement beyond
Proposed Standard is so difficult -- but I'd like to call attention
to one particular reason: the IESG is overworked.

   Look at any bi-weekly agenda.

   Count the pages in the I-Ds.

   Glance through some of the questions raised. Even if you think
the majority of them are spurious, documents _do_ reach the IESG
in a state which essentially precludes implementation working only
from the document.

   Now, imagine bringing every standard-track document back two more
times. :^(

   We've had several Working Groups over the years consider this;
and IMHO they've mostly agreed the IESG shouldn't have to act on
these documents three times.

   But we don't have to abolish the three-levels of 2026 to accomplish
that: there have been several proposals to simplify the process of
advancement.

   It is a characteristic of a consensus process that sometimes it's
"obvious" to a majority that the "wrong consensus" was reached.

   That doesn't mean, however, that proposing a "better consensus"
is workable. We've gone through years of proposing "better consensus"
here; and I'm frankly not convinced it's worth another half hour of
Plenary time to propose another.

   We have three levels in RFC 2026. The levels are reasonable,
starting with a clear specification, progressing through interoperable
implementation, and finishing with experience in the field.

   Frankly, the fact that we seldom get past the first _doesn't_
convince me there's anything wrong with the levels. And I don't see
why we should expect it to convince anyone who doesn't already
subscribe to one or another "better consensus".

   I wish we could instead discuss how to improve the _process_ of
advancing through the levels. It may be that some prior IESG was
unwilling to let go of a death-grip on blocking advancement for any
perceived imperfection. (I simply don't know...)

   I do NOT believe, however, that the current IESG has any such
interest in keeping tight control of advancement.

--
John Leslie 
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Scott Lawrence

On 2010-06-20 10:41, Dave CROCKER wrote:



On 6/20/2010 11:53 AM, SM wrote:

The reader will note that neither implementation nor operational
experience is required. In practice, the IESG does "require
implementation and/or operational experience prior to granting Proposed
Standard status".



Well, they do not /always/ require it.


That said, the fact that they often do and that we've lived with the 
reality of that for a long time could make it interesting to simplify 
things significantly:


   1.  Have the current requirements for Draft be the entry-level 
requirement for a standard  -- do away with Proposed, not Draft.


   2.  Have a clear demonstration of industry acceptance (deployment 
and use) be the criterion for "Internet Standard" (ie, Full.)


Having two interoperable implementations required for /all/ new 
specifications takes care of two interesting questions.


  a.  Whether the specification can be at all understood.

  b.  Whether there is any meaningful industry motivation to
  care about the work.

With these two questions satisfied, the nature of challenges against 
standardization might tend to be more pragmatic than theoretical.

I strongly support this approach.  The main drawback of this would be 
that a document would sometimes need to exist for longer as an I-D while 
implementations are developed, but balancing that is the fact that those 
implementations would then inform the first RFC version rather than some 
subsequent update, and it would be harder to get an RFC published for 
something no one is really going to build.



___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Dave CROCKER



On 6/20/2010 11:53 AM, SM wrote:

The reader will note that neither implementation nor operational
experience is required. In practice, the IESG does "require
implementation and/or operational experience prior to granting Proposed
Standard status".



Well, they do not /always/ require it.


That said, the fact that they often do and that we've lived with the reality of 
that for a long time could make it interesting to simplify things significantly:


   1.  Have the current requirements for Draft be the entry-level requirement 
for a standard  -- do away with Proposed, not Draft.


   2.  Have a clear demonstration of industry acceptance (deployment and use) 
be the criterion for "Internet Standard" (ie, Full.)


Having two interoperable implementations required for /all/ new specifications 
takes care of two interesting questions.


  a.  Whether the specification can be at all understood.

  b.  Whether there is any meaningful industry motivation to
  care about the work.

With these two questions satisfied, the nature of challenges against 
standardization might tend to be more pragmatic than theoretical.


d/

--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net

___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Peter Saint-Andre
On 6/20/10 6:01 AM, Alessandro Vesely wrote:
> On 20/Jun/10 11:53, SM wrote:
>
>> This proposal removes Draft Standard and Internet Standard and replaces
>> it with Interoperable Standard. I won't quibble over the choice of the
>> name yet.
> 
> If there are two levels and the first one is "Proposed Standard", then
> the other one ought to be "Accepted Standard", "Official Standard", or
> something that truly reflects such change (which usually does not affect
> its interoperability or security, as Yaron said.)

Here's an idea:

1. The first level is simply "Request for Comments".

Once an RFC is published, we start to gather comments. Naturally, we
need to write a process document that specifies in greater detail the
kinds of comments we are requesting -- i.e., regarding implementation,
deployment (the real meaning of "running code"), security as observed in
the real world, manageability, etc.

2. The second level is "Internet Standard".

Once we have gathered comments, we go through a "bis" process to
incorporate all of that feedback into an improved specification. We
obsolete the original Request for Comments and go on with our lives. We
still accept comments (errata etc.) about the Standard, but don't make
any substantive changes to it (unless we rev the protocol version).

Peter

-- 
Peter Saint-Andre
https://stpeter.im/





___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Martin Rex
Dave CROCKER wrote:
> 
> Interoperability testing used to be an extremely substantial demonstration
> of industry interest and of meaningful learning. The resulting repair and 
> streamlining of specifications was significant.  If that's still happening,
> I've been missing the reports about lessons learned, as well as
> indications that significant protocol simplifications have resulted.
> While the premise of streamlining specifications, based on
> interoperability testing, is a good one, where is the indication that
> it is (still) of interest to industry?  (I believe that most protocols
> reaching Proposed these days already have some implementation
> experience; it's still not required, but is quite common, no?)
> 
> My own proposal was to have the second status level simply take note of
> industry acceptance.  It would be a deployment and use acknowledgement,
> rather than a technical assessment.  That's not meant to lobby for it,
> but rather give an example of a criterion for the second label that is
> different, cheap and meaningful.  By contrast, history has demonstrated
> that Draft is expensive and of insufficient community interest.
> We might wish otherwise, but community rough consensus on the point
> is clear.  We should listen to it.

I would prefer if the IETF retains the third level and puts an emphasis
on cutting down on protocol feature bloat when going from draft to
full standard.

What I see happening is that Proposed Standards often start out with
a lot of (unnecessary) features, some of them even inappropriately
labelled as "MUST implement".

The Draft Standard step only does some interop testing on a small number
of implementations, most likely those participating in the standardization
process.  It addresses neither what subset other implementations implement
nor what subset is actually necessary for the general use case in the
installed base.

One of the worst feature bloat examples is PKIX.

It contains an awkwardly huge number of features that a number of
implementations do not support -- and work happily without.
There should either be a split of e.g. 5280 into a "basic profile"
and an "advanced feature profile", or the status for some of the
extensions should be relaxed from "MUST implement" to "SHOULD implement"
to match the real world and real necessity.
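
To make that split concrete, below is a minimal, hypothetical sketch of how
an implementation could audit which extensions a certificate actually relies
on.  It assumes Python with the third-party "cryptography" package, and the
BASIC_PROFILE set is an assumption made for the example -- no such profile
is defined by 5280 or anywhere else.

# Hypothetical sketch: audit a certificate against an assumed "basic
# profile" of X.509 extensions.  BASIC_PROFILE is illustrative only; it is
# not defined by RFC 5280 or any IETF document.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

BASIC_PROFILE = {
    ExtensionOID.BASIC_CONSTRAINTS,
    ExtensionOID.KEY_USAGE,
    ExtensionOID.SUBJECT_ALTERNATIVE_NAME,
    ExtensionOID.AUTHORITY_KEY_IDENTIFIER,
    ExtensionOID.SUBJECT_KEY_IDENTIFIER,
}

def audit_extensions(pem_bytes):
    """Return notes on extensions that fall outside the assumed profile."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    notes = []
    for ext in cert.extensions:
        if ext.oid not in BASIC_PROFILE:
            level = "critical" if ext.critical else "non-critical"
            notes.append("%s (%s) is outside the basic profile"
                         % (ext.oid.dotted_string, level))
    return notes

# Example use, assuming a local file named cert.pem:
#   with open("cert.pem", "rb") as f:
#       for note in audit_extensions(f.read()):
#           print(note)

Surveying the installed base in this way -- which extensions certificates
actually carry, and which ones implementations honour -- is the kind of data
a downgrade from "MUST implement" to "SHOULD implement" would need.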


-Martin
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Olafur Gudmundsson

Russ
I strongly support this approach.
In particular, I think the downward-ref relaxation is of great value. As 
chair of a WG with 30+ RFCs at PS, advancing the RFCs that are important, 
rather than advancing them up the standards track in order of RFC number, 
will hopefully be the happy consequence of a change like this.


Olafur


On 21/06/2010 9:39 AM, Russ Housley wrote:

Yaron:


In general, I think this is a good idea. It might succeed in reviving
the notion of formal interoperability reports. A few comments though:

- Sec. 2 mentions that the criteria for Proposed Standard will not
change. But the preceding section just described that our criteria (or
processes) for publication are too onerous. So do we not address what's
mentioned as a leading motivation for this change?


What I meant here is that the requirements in RFC 2026 for Proposed
Standard do not change.  With the opportunity for a "second bite at the
apple", I hope that the escalation that has happened over the decades
can be pushed back.


- I think the name "Interoperable Standard" is unfortunate. First, it's
a mouthful. And second, it implies that whereas we didn't care about
interoperability before, now we suddenly do. As an analogy, suppose we
had "Proposed Standard" and "Secure Standard". Instead, I think "Full
Standard" or "IETF Standard" would be better names. After all, people
are looking to the IETF for standards.


I am not wed to the name.  I'd prefer "Internet Standard" above the
names you suggest.


- This is not to criticize the draft, but I am really wondering: at
IPsecME we are close to publishing IKEv2-bis, and we went to great
lengths to make it as faithful as possible to the original IKEv2 (RFC
4306), so that implementations that are compliant don't suddenly become
non-compliant. Suppose we were to advance this large and complex
protocol to the 2nd maturity level, is there a manageable process to
eliminate features from the protocol (because there are no two
implementations that implement said features) without worrying that some
implementations out there have become non-compliant overnight?


We have always published a "bis" RFC that removes the features.  These
RFCs ought to explain the reason for the feature removal.  Support of an
obsoleted feature does not have to make an implementation
non-conformant.   For example, "legacy" features could be documented in
an informative appendix with a warning about the consequences on
interoperability if they are supported.

Russ
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf





___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Russ Housley
Dave:

The questions that you raise will be discussed for 30 minutes at the
front of the Wednesday plenary at IETF 78.  The next step is to find out
what the community thinks about these choices.  I fully expect there to
be changes to the draft after that discussion.

As you say, we really want to see improvement, not just change.

Russ

On 6/20/2010 2:10 AM, Dave CROCKER wrote:
> 
> Russ,
> 
> Thanks for reviving this topic.  As the YAM working group has been
> finding, trying to elevate even the most well-established and
> widely-used protocols to Full standard remains problematic.
> 
> As your Acknowledgments section cites, your proposal nicely adds to the
> considerable repertoire of variations that explore how to simplify things.
> 
> What is less clear is the model or theory or perspective that makes this
> particular variation the one to prefer.  Perhaps it does indeed offer
> the best result, but what is the basis for deciding?
> 
> The fact that few protocols have sought Draft, never mind Full, status is
> a rather strong indication that the industry does not care about or need
> either. Absent changes in the criteria for a label and/or the process
> for achieving a second (or third) status level, what is going to
> motivate the community to behave differently?
> 
> Interoperability testing used to be an extremely substantial
> demonstration of industry interest and of meaningful learning. The
> resulting repair and streamlining of specifications was significant.  If
> that's still happening, I've been missing the reports about lessons
> learned, as well as indications that significant protocol
> simplifications have resulted.  While the premise of streamlining
> specifications, based on interoperability testing, is a good one, where
> is the indication that it is (still) of interest to industry?  (I
> believe that most protocols reaching Proposed these days already have
> some implementation experience; it's still not required, but is quite
> common, no?)
> 
> My own proposal was to have the second status level simply take note of
> industry acceptance.  It would be a deployment and use acknowledgement,
> rather than a technical assessment.  That's not meant to lobby for it,
> but rather give an example of a criterion for the second label that is
> different, cheap and meaningful.  By contrast, history has demonstrated
> that Draft is expensive and of insufficient community interest.  We
> might wish otherwise, but community rough consensus on the point is
> clear.  We should listen to it.
> 
> Since your proposal is to use the existing criteria for Draft as the
> second label, why should we expect it to be more popular than it has been?
> 
> It's clear that our 3-stage model is not working.  In my view, YAM is
> demonstrating that, frankly, it's not /going/ to.  The cost is too high
> and the benefit is too low. We ought to change that because, well, the
> current situation is embarrassing.
> 
> But in making the change, there should be a fairly strong basis for
> believing that the new model will be successful.
> 
> d/
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-21 Thread Russ Housley
Yaron:

> In general, I think this is a good idea. It might succeed in reviving
> the notion of formal interoperability reports. A few comments though:
> 
> - Sec. 2 mentions that the criteria for Proposed Standard will not
> change. But the preceding section just described that our criteria (or
> processes) for publication are too onerous. So do we not address what's
> mentioned as a leading motivation for this change?

What I meant here is that the requirements in RFC 2026 for Proposed
Standard do not change.  With the opportunity for a "second bite at the
apple", I hope that the escalation that has happened over the decades
can be pushed back.

> - I think the name "Interoperable Standard" is unfortunate. First, it's
> a mouthful. And second, it implies that whereas we didn't care about
> interoperability before, now we suddenly do. As an analogy, suppose we
> had "Proposed Standard" and "Secure Standard". Instead, I think "Full
> Standard" or "IETF Standard" would be better names. After all, people
> are looking to the IETF for standards.

I am not wed to the name.  I'd prefer "Internet Standard" above the
names you suggest.

> - This is not to criticize the draft, but I am really wondering: at
> IPsecME we are close to publishing IKEv2-bis, and we went to great
> lengths to make it as faithful as possible to the original IKEv2 (RFC
> 4306), so that implementations that are compliant don't suddenly become
> non-compliant. Suppose we were to advance this large and complex
> protocol to the 2nd maturity level, is there a manageable process to
> eliminate features from the protocol (because there are no two
> implementations that implement said features) without worrying that some
> implementations out there have become non-compliant overnight?

We have always published a "bis" RFC that removes the features.  These
RFCs ought to explain the reason for the feature removal.  Support of an
obsoleted feature does not have to make an implementation
non-conformant.   For example, "legacy" features could be documented in
an informative appendix with a warning about the consequences on
interoperability if they are supported.

Russ
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-20 Thread Spencer Dawkins

OK, we really do seem determined to relive the early 2000s...

It seems to me that abolishing the third level is possible, now, because 
the handling of I-Ds has been enhanced.  IMHO, it is an advantage to 
require some experience before giving an I-D the rank of Proposed 
Standard.  Because I-Ds can change more rapidly and informally than an 
official standardization round, the early adoption phase can be much more 
agile that way.


However, some I-Ds become RFCs unexpectedly soon, and may ship untested 
prototypes.  If it is agreed that this is a shift of maturity levels 
rather than simply the abolishment of the last one, then some of the current 
criteria for Draft Standard should be formally shifted to Proposed 
Standard accordingly.


There were proposals for Stable Snap Shots (SSS) from Scott Bradner, and 
Working Group Snapshots (WGS) from me, Dave, and Charlie Perkins. If I'm 
remembering correctly, both were intended to say "this is stable NOW, but I 
wouldn't put it in firmware, because we're still getting experience with it, 
and it could change".


Working Group Snapshots (WGS) in
http://www.watersprings.org/pub/id/draft-dawkins-pstmt-twostage-01.txt

Stable SnapShots (SSS), in
http://www.watersprings.org/pub/id/draft-bradner-ietf-stds-trk-01.txt

For extra credit, we could implement these with no 2026/2418 changes, if
changing 2026/2418 is as impossible as it looks - neither BCP says we CAN'T
do WGS/SSS.

We probably don't want to restart these discussions without someone 
summarizing the state of play in previous discussions, because Groundhog 
Day was a great movie, but a lousy standards process :D


Thanks,

Spencer 


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-20 Thread Alessandro Vesely

On 20/Jun/10 11:53, SM wrote:

The reader will note that neither implementation nor operational
experience is required. In practice, the IESG does "require
implementation and/or operational experience prior to granting Proposed
Standard status". Implementors do not treat Proposed Standards as
immature specifications.


It seems to me that abolishing the third level is possible, now, 
because the handling of I-Ds has been enhanced.  IMHO, it is an 
advantage to require some experience before giving an I-D the rank of 
Proposed Standard.  Because I-Ds can change more rapidly and 
informally than an official standardization round, the early adoption 
phase can be much more agile that way.


However, some I-Ds become RFCs unexpectedly soon, and may ship 
untested prototypes.  If it is agreed that this is a shift of 
maturity levels rather than simply the abolishment of the last one, then 
some of the current criteria for Draft Standard should be formally 
shifted to Proposed Standard accordingly.



This proposal removes Draft Standard and Internet Standard and replaces
it with Interoperable Standard. I won't quibble over the choice of the
name yet.


If there are two levels and the first one is "Proposed Standard", then 
the other one ought to be "Accepted Standard", "Official Standard", or 
something that truly reflects such change (which usually does not 
affect its interoperability or security, as Yaron said.)


"Accepted Standard" would call for a somewhat better feedback from the 
community, though.



"In several situations, a Standard is obsoleted by a Proposed Standard"

A Standard is not obsoleted by a Proposed Standard. A RFC with a status
of Internet Standard can be obsoleted by a RFC at Proposed Standard.


In some cases, it should be possible to replace an RFC with a reviewed 
version, at the same maturity level.  For example, the attention that 
successive SMTP documents have to pay to source routing decreases over 
time.  There is no reason why a new RFC aimed at reviewing a mature 
spec would need to reduce its maturity level, if it meets the 
current requirements for the third level.  I hope this point will be made 
clearer.


JM2C
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-20 Thread SM

At 23:10 19-06-10, Dave CROCKER wrote:
Thanks for reviving this topic.  As the YAM working group has been 
finding, trying to elevate even the most well-established and 
widely-used protocols to Full standard remains problematic.


It is problematic because there isn't any consensus on what an 
Internet Standard is, or on the requirements to attain that level of maturity.


In Section 2 of draft-housley-two-maturity-levels-00:

  "The requirements for Proposed Standard are unchanged; they remain
   exactly as specified in RFC 2026."

Quoting Section 4.1.1 of RFC 2026:

  "A Proposed Standard specification is generally stable, has resolved
   known design choices, is believed to be well-understood, has received
   significant community review, and appears to enjoy enough community
   interest to be considered valuable.  However, further experience
   might result in a change or even retraction of the specification
   before it advances.

   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable, and will
   usually represent a strong argument in favor of a Proposed Standard
   designation."

The reader will note that neither implementation nor operational 
experience is required.  In practice, the IESG does "require 
implementation and/or operational experience prior to granting 
Proposed Standard status".  Implementors do not treat Proposed 
Standards as immature specifications.


This proposal removes Draft Standard and Internet Standard and 
replaces it with Interoperable Standard.  I won't quibble over the 
choice of the name yet.  In Section 5:


  'The requirement for six months between "Proposed Standard" and
   "Interoperable Standard" is removed.  If an interoperability report
   is provided with the initial protocol action request, then the
   document can be approved directly at the Interoperable Standard
   maturity level without first being approved at the Proposed Standard
   maturity level.'

What this proposal advocates is in effect having one level of 
maturity, i.e. turning "Proposed Standard" into "Standard".


  "In practice the annual review of Proposed Standard documents after
   two years has not taken place.  Lack of this review has not revealed
   any ill effects on the Internet Standards Process."

There seems to be a confusion between the Internet Standards Process 
and the Internet Standards Track.  An over-simplified view of the 
process is that it is about publishing RFCs.  Documents are published 
once one can get through the DISCUSSes raised by the IESG, and are then 
simply forgotten.  As the IESG does not conduct the review of Proposed 
Standard documents, the authors are happy and the IETF Community does 
not complain as it has either forgotten or doesn't know that there 
should have been such a review.


If the IETF only needs a publishing mechanism, it could adopt this 
proposal as-is and do away with the IESG.  I'll note that the IESG is 
supposed to make its final determination known in a timely 
fashion.  The overall processing time currently is approximately 250 days.


In Section 6:

  'The current rule prohibiting "down references" is a major cause
   of stagnation in the advancement of documents.'

There isn't any current rule that prohibits "down references".  The 
reason for discouraging downward references is to keep the referencing 
and referenced specifications at the same maturity level.
"Downward reference by annotation" can still be used.  That allows 
the community to balance the importance of getting a document published.


In Section 7:

  "In several situations, a Standard is obsoleted by a Proposed Standard"

A Standard is not obsoleted by a Proposed Standard.  A RFC with a 
status of Internet Standard can be obsoleted by a RFC at Proposed Standard.


In Section 8:

  "On the day these changes are published as a BCP, all existing Draft
   Standard and Standard documents automatically get reclassified as
   Interoperable Standard documents"

One of the benefits of doing this is that the IP Version 6 Addressing 
Architecture can be recognized as a "Standard" for whatever 
definition of standard this community finds suitable.


In the Acknowledgements Section:

  "In May 2010, the IESG discussed the topic at length and came to the
   conclusion that the current situation was becoming more and more
   difficult."

The current situation would certainly become more and more difficult 
if a particular charter is invoked.


This document has RFC 2606 as an Informative Reference.  That should 
at the very least be a Normative Reference.


Regards,
-sm 


___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


Re: draft-housley-two-maturity-levels-00

2010-06-19 Thread Dave CROCKER


Russ,

Thanks for reviving this topic.  As the YAM working group has been finding, 
trying to elevate even the most well-established and widely-used protocols to 
Full standard remains problematic.


As your Acknowledgments section cites, your proposal nicely adds to the 
considerable repertoire of variations that explore how to simplify things.


What is less clear is the model or theory or perspective that makes this 
particular variation the one to prefer.  Perhaps it does indeed offer the best 
result, but what is the basis for deciding?


The fact that few protocols have sought Draft, never mind Full, status is a 
rather strong indication that the industry does not care about or need either. 
Absent changes in the criteria for a label and/or the process for achieving a 
second (or third) status level, what is going to motivate the community to 
behave differently?


Interoperability testing used to be an extremely substantial demonstration of 
industry interest and of meaningful learning. The resulting repair and 
streamlining of specifications was significant.  If that's still happening, I've 
been missing the reports about lessons learned, as well as indications that 
significant protocol simplifications have resulted.  While the premise of 
streamlining specifications, based on interoperability testing, is a good one, 
where is the indication that it is (still) of interest to industry?  (I believe 
that most protocols reaching Proposed these days already have some 
implementation experience; it's still not required, but is quite common, no?)


My own proposal was to have the second status level simply take note of industry 
acceptance.  It would be a deployment and use acknowledgement, rather than a 
technical assessment.  That's not meant to lobby for it, but rather give an 
example of a criterion for the second label that is different, cheap and 
meaningful.  By contrast, history has demonstrated that Draft is expensive and 
of insufficient community interest.  We might wish otherwise, but community 
rough consensus on the point is clear.  We should listen to it.


Since your proposal is to use the existing criteria for Draft as the second 
label, why should we expect it to be more popular than it has been?


It's clear that our 3-stage model is not working.  In my view, YAM is 
demonstrating that, frankly, it's not /going/ to.  The cost is too high and the 
benefit is too low. We ought to change that because, well, the current situation 
is embarrassing.


But in making the change, there should be a fairly strong basis for believing 
that the new model will be successful.


d/
--

  Dave Crocker
  Brandenburg InternetWorking
  bbiw.net
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf


draft-housley-two-maturity-levels-00

2010-06-19 Thread Yaron Sheffer
In general, I think this is a good idea. It might succeed in reviving 
the notion of formal interoperability reports. A few comments though:


- Sec. 2 mentions that the criteria for Proposed Standard will not 
change. But the preceding section just described that our criteria (or 
processes) for publication are too onerous. So do we not address what's 
mentioned as a leading motivation for this change?


- I think the name "Interoperable Standard" is unfortunate. First, it's 
a mouthful. And second, it implies that whereas we didn't care about 
interoperability before, now we suddenly do. As an analogy, suppose we 
had "Proposed Standard" and "Secure Standard". Instead, I think "Full 
Standard" or "IETF Standard" would be better names. After all, people 
are looking to the IETF for standards.


- This is not to criticize the draft, but I am really wondering: at 
IPsecME we are close to publishing IKEv2-bis, and we went to great 
lengths to make it as faithful as possible to the original IKEv2 (RFC 
4306), so that implementations that are compliant don't suddenly become 
non-compliant. Suppose we were to advance this large and complex 
protocol to the 2nd maturity level, is there a manageable process to 
eliminate features from the protocol (because there are no two 
implementations that implement said features) without worrying that some 
implementations out there have become non-compliant overnight?


Thanks,
Yaron
___
Ietf mailing list
Ietf@ietf.org
https://www.ietf.org/mailman/listinfo/ietf