Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-03 Thread Spencer Dawkins

Hi, Yoav,

Recognizing that we all work in different parts of the IETF, so our 
experiences reflect that ...


RFCs have one big advantage over all kinds of "blessed" internet drafts. 
The process of publishing an RFC gets the IANA allocations. Every 
implementation you make based on a draft will ultimately not work with the 
finished RFC because you have to use some "private" ranges of numbers. I 
have a feeling (not backed by any evidence) that part of the reason people 
rush to publish documents is the need to get IANA assignments for their 
implementations.


I know that getting IANA allocations is a major consideration for one of the 
SDOs I'm liaison shepherd for, so my experience matches this (of course, 
there are various IANA policies - if a registry is first-come first-served, 
this isn't a consideration).


If we could have some kind of "viable", "promising" or "1st review" status 
for an Internet Draft, where the IANA assignments can be done on a 
temporary basis, I think this could allow for better review later on. I 
have no idea, though, how to get rid of the "need to support legacy 
implementations" argument that will arise later on, if changes to the 
protocol are proposed as part of the review.


Again, this depends on the instructions to IANA - some policies are easier 
to accommodate than others.


Thanks,

Spencer 




Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-03 Thread Yoav Nir

On Nov 3, 2010, at 1:42 PM, t.petch wrote:

> 
> Perhaps we should step back a little further, and refuse to charter work that
> will become an RFC unless there are two or more independent organisations that
> commit to producing code.  There is nothing like interoperability for
> demonstrating the viability (or not) of a specification, and likewise, two
> independent organisations are likely to bring two rival views of what should 
> and
> should not be specified.  Those not implementing can watch the two slugging it
> out, and provide a balanced judgement when something needs consensus.
> 
> And two organisations with an interest might want to see a ROI sooner rather
> than later.
> 
> Tom Petch
> 

That's being a killjoy. Organizations never commit to producing code. Besides, 
sometimes people get ideas and would like to get them published even before 
they have convinced their management that implementing is a good idea.

Now if someone produces an RFC just to convince management that it's a good 
idea to implement (because there's an RFC) that's a different thing.

Anyway, I have followed three cases where there were competing proposals for 
doing the same thing, one in NEA, the other two in IPSECME. As it turned out, 
those not implementing watched the two slugging it out, and provided no 
judgement whatsoever, in all three cases. I get the feeling that working groups 
are much better at polishing a single proposal than they are at choosing 
between competing proposals. I think Martin pointed out a similar thing 
yesterday. The RI proposal was not the only one on the table, but it was 
essential to have just one proposal to get the process running.

RFCs have one big advantage over all kinds of "blessed" internet drafts. The 
process of publishing an RFC gets the IANA allocations. Every implementation 
you make based on a draft will ultimately not work with the finished RFC 
because you have to use some "private" ranges of numbers. I have a feeling (not 
backed by any evidence) that part of the reason people rush to publish 
documents is the need to get IANA assignments for their implementations.
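
To make this concrete, a minimal sketch of the codepoint problem (the
constants below are invented for illustration; they are not from any real
IANA registry):

    /* While only a draft exists, a new message type has to be assigned
     * a value from a private/experimental range. */
    #define EXT_TYPE_DRAFT  0xFF01   /* private-use range, draft builds */

    /* Once the RFC is published, IANA assigns the real codepoint. */
    #define EXT_TYPE_RFC    0x001A   /* hypothetical IANA assignment */

    /* Draft-based implementations emit EXT_TYPE_DRAFT and will not
     * interoperate with implementations of the finished RFC unless
     * every deployed copy is updated to use EXT_TYPE_RFC. */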

If we could have some kind of "viable", "promising" or "1st review" status for 
an Internet Draft, where the IANA assignments can be done on a temporary basis, 
I think this could allow for better review later on. I have no idea, though, 
how to get rid of the "need to support legacy implementations" argument that 
will arise later on, if changes to the protocol are proposed as part of the 
review.

Yoav


Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-03 Thread t.petch
----- Original Message -----
From: "Yoav Nir"
Cc: "t.petch"
Sent: Tuesday, November 02, 2010 5:08 PM

Strange. I look at the same facts, and reach the opposite conclusions.

The fact that there were many implementations based on drafts of standards shows
that industry (not just us, but others as well) does not wait for SDOs to be
"quite done".  They are going to implement something even if we label them
"danger - still a draft, pretty please don't implement".

Everybody in our industry has heard of Internet Drafts. They know that these are
the things that end up being RFCs, which are, as others have said, synonymous
with standards. If we don't get the drafts reviewed well enough to be considered
"good enough to implement" fast enough, industry is just going to ignore us and
implement the draft.

My conclusion is that we can't just ignore industry and keep polishing away, but
that we have to do things in a timely manner.  One thing we've learned from the
TLS renegotiation thing was that it is possible to get a document from concept
to RFC in 3 months. Yes, you need commitment from ADs and IETFers in general
(IIRC you and I were among those pushing to delay a little), but it can be done.

It's a shame that we can't summon that energy for regular documents, and that's
how we get the SCEP draft which has been "in process" for nearly 11 years, and
it's still changing. But that is partially because we (IETFers) all have day
jobs, and our employers (or customers) severely limit the amount of time we can
devote to the IETF. But that's a subject for another thread.

Time to get back to that bug now...


Perhaps we should step back a little further, and refuse to charter work that
will become an RFC unless there are two or more independent organisations that
commit to producing code.  There is nothing like interoperability for
demonstrating the viability (or not) of a specification, and likewise, two
independent organisations are likely to bring two rival views of what should and
should not be specified.  Those not implementing can watch the two slugging it
out, and provide a balanced judgement when something needs consensus.

And two organisations with an interest might want to see a ROI sooner rather
than later.

Tom Petch

Yoav

On Nov 2, 2010, at 5:09 PM, Martin Rex wrote:

> t.petch wrote:
>>
>> From: "Andrew Sullivan" 
>>>
>>> Suppose we actually have the following problems:
>>>
>>>1.  People think that it's too hard to get to PS.  (Never mind the
>>>competing anecdotes.  Let's just suppose this is true.)
>>>
>>>2.  People think that PS actually ought to mean "Proposed" and not
>>>"Permanent".  (i.e. people want a sort of immature-ish level for
>>>standards so that it's possible to build and deploy something
>>>interoperable without first proving that it will never need to
>>>change.)
>>>
>>>3.  We want things to move along and be Internet STANDARDs.
>>>
>>>4.  Most of the world thinks "RFC" == "Internet Standard".
>>
>> I think that this point is crucial and much underrated.  I would express it
>> slightly differently: that, for most of the world, an RFC is a Standard
>> produced by the IETF, and that the organisations that know differently are
>> so few in number, even if some are politically significant, that they can
>> be ignored.
>
>
> The underlying question is actually more fundamental:
> do we want to dilute specifications so much that there will be
> multiple incompatible / non-interoperable versions of a specification,
> for the sake of having a document earlier that looks like the
> real thing?
>
> There have been incompatible versions of C++ drafts (and compilers
> implementing it) over many years.  HD television went through
> incompatible standards.  WiFi 802.11 saw numerous "draft-N"
> and "802.11g+" products.  ASN.1 went through backwards incompatible
> revisions.  Java/J2EE went through backwards-incompatible revisions.
>
>
> Publishing a specification earlier, with the provision "subject to change"
> appears somewhat unrealistic and counterproductive to me.  It happens
> regularly that some vendor(s) create an installed base that is simply
> too large to ignore, based on early proposals of a spec, and not
> necessarily a correct implementation of the early spec -- which is
> realized after having created an installed base of several millions
> or more...
>
>
> Would the world be better off if the IETF had more variants of
> IP-Protocols (IPv7, IPv8, IPv9 besides IPv4 and IPv6)? Or if
> we had SNMP v4+v5+v6 in addition to v3 (and historic v2)?
> Or if we had HTTP v1.2 + v1.3 + v1.4 in addition to HTTPv1.0 & v1.1?
>
>
> I do not believe that having more incompatible variants of a protocol
> is going to improve the situation in the long run, and neither do
> I believe in getting entirely rid of cross-pollination by issuing
> WG-only documents as "proposed standards".
>
>
> What other motivation could there be for publishing documents earlier
> than vendors implementing and shipping them earlier?  And if they do
> that, there is hardly any room for any substantial or backwards-
> incompatible changes.  And the size of the installed base created
> by the early adopters significantly limits usefulness of any features
> or backwards-compatible changes that are incorporated into later
> revisions of the document.

Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Martin Rex
Andrew Sullivan wrote:
> 
> One way -- the one you seem to be (quite reasonably) worried about --
> is that you get a bunch of non-interoperating, half-baked things that
> are later rendered broken by updates to the specification.

No, I'm more worried about ending up with a set of half a dozen
specs for any _new_ technology that are considered a valid "standard"
for _ALL_ vendors to implement and ship rather than one single document.

If there is only one "proposed standard", then it is much easier
for me as an implementor, especially if I am _not_ an early adopter,
to _not_ implement certain stuff shipped by some early adopters,
and it is up to the early adopters to fix broken implementations
that they've aggressively shipped without sufficiently planning ahead.
Standardization only makes sense if there is convergence on ONE
standard, rather than half a dozen variants of every spec.


>
> Note that this method is actually the one we claim potentially to have;
> the two-maturity-levels draft does not change that.  The idea is that you
> are supposed to try things out as early as reasonable, and if there
> are significant changes, they'll turn up when an effort is made to
> move things along the maturity track.

There are early adopters of new technologies.
There are early adopters who ship defective implementations of early specs.
There are early adopters who recklessly ship implementations of early
specs with no sensible planning ahead for potential changes in the
final spec and how to deal with them _gracefully_ (which in some cases
means distributing updates to the entire installed base).

Personally, I do not have a problem with early adopters in general,
but only with those that do not plan ahead -- i.e. that fail to (a) build
an implementation that is highly likely to be fully and flawlessly
interoperable with the final spec and (b) plan for updating *ALL* shipped
products when the spec is finalized (which is more about the date of IESG
approval than the date when the RFC is issued).


> 
> Some people have argued in this thread (and the other related ones)
> that there is a problem from the IETF attempting to prevent the
> problem you're talking about.


The IETF cannot stop early adopters from doing stupid things,
but the IETF can refuse to create a mess instead of a single
proposed standard for later implementors of the protocol.
Documenting characteristics of early-adopter implementations
in an appendix (as well as non-negligible breakage) would
also be helpful to later implementors.


For every aggressive early adopter implementor within the IETF
there are 10 more thoughtful and conservative implementors outside
of the IETF, and it is really they for whom the IETF should be
producing the standards (as well as those who want to create
a successor protocol version and a transition/migration plan
to the new version).


>
> That attempt, which is not documented anywhere, amounts to a high
> bar of difficulty in getting an RFC published at all.

Getting a good and consistent document published is likely easy.
Obtaining a good and consistent document can be quite difficult
when you start with a very thin outline, make this thin outline
a "highlander" (there can be only one) proposal by adopting it as a WG
document early, in its pitiful shape, and subject every change to
"WG consensus" change control.
If there are not enough engineers and not enough implementation
experience involved in the WG discussion, document progression
may become quite slow.


An approach that might work much better would be that a vendor
that is very interested in a particular technology starts documenting
his prototyping work early on as an I-D and announces that I-D in
the relevant working group -- and continues improving the I-D
and the prototype while discussing ideas.

I really think the biggest problem in today's IETF is that
vendors are walling off their engineering from the IETF
WG discussion too much, and that is the root cause of the problem.

An IETF working group is not a crowd of highly skilled engineers
that is just sitting there twiddling their thumbs and waiting for
opportunities to start implementing new proposals and have fruitful
discussions on running code.  If vendors do not involve their
engineers in the IETF standardization work, then WG discussions
have a much higher likelihood of becoming heated, contentious and
unproductive at a purely theoretical level.


>
> I'm not actually sure I have the empirical data to support a claim
> that it really is harder to get that initial-version RFC published;
> but people seem to think that it is, anyway.

Converging on the initial proposal for a new technology is harder
than revising an existing technology.  With a revision, there is experience
with the installed base, even for those who are otherwise
participating in the discussion free of implementation experience.


> 
> The argument in favour of publish-early, revise-often approaches is
> that iterations will, or ought to, improve things.

Only if the early implementat

Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Martin Rex
Yoav Nir wrote:
> 
> My conclusion is that we can't just ignore industry and keep polishing
> away, but that we have to do things in a timely manner.  One thing
> we've learned from the TLS renegotiation thing was that it is possible
> to get a document from concept to RFC in 3 months.  Yes, you need
> commitment from ADs and IETFers in general (IIRC you and I were among
> those pushing to delay a little), but it can be done.

Funny that you mention TLS renego.  This document was actually a
severe abuse of most IETF process rules.  And a significant part
of the document contents was written and published against
WG consensus and without IETF consensus.


We could have easily finished the work in two month with a simpler
and more robust solution and higher quality spec if there had not
been so much resistance from a group of conspiring vendors plus AD,
WG co-chair and document editor.

I realized it is *MUCH* easier to write a new I-D from scratch than to get
the problems of the original proposal fixed by discussion in the WG, because
of the existing bias -- which is why I think that both the document
editing process and the "WG item, document progress hobbled by
consensus process" model are the most significant roadblocks in IETF document
progression when there is strong bias in the system.



If there is a need to ship draft versions of a spec, there should
be a field in the protocol identifying which version of a draft a
particular implementation is based on -- or the "early adopters"
should voluntarily bear the burden to either limit, control, or
field-update distributed implementations of early drafts.
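
A rough sketch of such a field (the header layout below is invented, purely
to illustrate the idea):

    #include <stdint.h>

    /* Hypothetical message header carrying the draft revision the
     * sender was built against, so that receivers need not guess. */
    struct msg_header {
        uint8_t  version;    /* protocol version */
        uint8_t  draft_rev;  /* 0 = final RFC; otherwise the I-D revision
                              * (e.g. 9 for draft-...-09) implemented */
        uint16_t msg_type;
        uint32_t msg_length;
    };

    /* A receiver can then reject, or adapt to, pre-RFC peers explicitly
     * instead of discovering the mismatch through broken interop. */
    static int peer_is_prestandard(const struct msg_header *h)
    {
        return h->draft_rev != 0;
    }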


Another worrisome event was with the TLS channel bindings document,
when the definition of "TLS-unique" channel bindings had to be changed
underneath the RFC Editor's feet to match the installed base of some
vendor (who failed to point out this issue earlier, although
the original definition had been like that for a few years).


Or the GSS-API zero-length message protection, which had been explicitly
described for GSS-APIv2 (RFC 2743, Jan 2000) because it is a non-expendable
protocol element required by the FTP security extensions (RFC 2228, Oct 1997),
and which is specified in RFC 2743, section 2.3, last paragraph of page 62:

https://tools.ietf.org/html/rfc2743#page-62

but foobar'ed in Microsoft Windows Vista & Windows 7
(fails decryption of gss_wrap(conf=TRUE) tokens with SEC_E_ALTERED_MESSAGE).

This feature is tested and the error is reported by the open-source
GSS-API test tool that I've been providing since 2000.
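
The check itself is small; a minimal sketch, assuming a pair of
already-established GSS-API security contexts ictx/actx (the surrounding
test harness is not shown):

    #include <gssapi/gssapi.h>
    #include <stdio.h>

    /* Wrap a zero-length message with confidentiality on the initiator
     * context and verify that the acceptor unwraps it back to length 0.
     * The Vista/Windows 7 bug described above fails in gss_unwrap(). */
    static int check_zero_length_wrap(gss_ctx_id_t ictx, gss_ctx_id_t actx)
    {
        OM_uint32 maj, min;
        gss_buffer_desc plain = GSS_C_EMPTY_BUFFER;  /* zero-length message */
        gss_buffer_desc token = GSS_C_EMPTY_BUFFER;
        gss_buffer_desc out   = GSS_C_EMPTY_BUFFER;
        int conf_state = 0;
        int ok;

        maj = gss_wrap(&min, ictx, 1 /* conf_req */, GSS_C_QOP_DEFAULT,
                       &plain, &conf_state, &token);
        if (GSS_ERROR(maj)) { fprintf(stderr, "gss_wrap failed\n"); return -1; }

        maj = gss_unwrap(&min, actx, &token, &out, &conf_state, NULL);
        gss_release_buffer(&min, &token);
        if (GSS_ERROR(maj)) { fprintf(stderr, "gss_unwrap failed\n"); return -1; }

        ok = (out.length == 0);
        gss_release_buffer(&min, &out);
        return ok ? 0 : -1;
    }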

But as I've previously commented:
http://www.ietf.org/mail-archive/web/kitten/current/msg00640.html

testing seems not very high up on some vendors priority lists.
It is a real pity that either Microsoft didn't bring any
Vista betas to this event:
https://lists.anl.gov/pipermail/ietf-krb-wg/2006-March/005885.html

or there was no interest in performing RFC 4121 interop tests with a
GSS-API interop test tool that works quite well for interop testing
MIT Kerberos for Windows against the Microsoft Win2K & Win2K3 Kerberos SSP.


-Martin


Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Yoav Nir
Strange. I look at the same facts, and reach the opposite conclusions.

The fact that there were many implementations based on drafts of standards 
shows that industry (not just us, but others as well) does not wait for SDOs to 
be "quite done".  They are going to implement something even if we label them 
"danger - still a draft, pretty please don't implement".

Everybody in our industry has heard of Internet Drafts. They know that these 
are the things that end up being RFCs, which are, as others have said, 
synonymous with standards. If we don't get the drafts reviewed well enough to 
be considered "good enough to implement" fast enough, industry is just going to 
ignore us and implement the draft.

My conclusion is that we can't just ignore industry and keep polishing away, 
but that we have to do things in a timely manner.  One thing we've learned from 
the TLS renegotiation thing was that it is possible to get a document from 
concept to RFC in 3 months. Yes, you need commitment from ADs and IETFers in 
general (IIRC you and I were among those pushing to delay a little), but it can 
be done.

It's a shame that we can't summon that energy for regular documents, and that's 
how we get the SCEP draft which has been "in process" for nearly 11 years, and 
it's still changing. But that is partially because we (IETFers) all have day 
jobs, and our employers (or customers) severely limit the amount of time we can 
devote to the IETF. But that's a subject for another thread.

Time to get back to that bug now...

Yoav

On Nov 2, 2010, at 5:09 PM, Martin Rex wrote:

> t.petch wrote:
>> 
>> From: "Andrew Sullivan" 
>>> 
>>> Suppose we actually have the following problems:
>>> 
>>>1.  People think that it's too hard to get to PS.  (Never mind the
>>>competing anecdotes.  Let's just suppose this is true.)
>>> 
>>>2.  People think that PS actually ought to mean "Proposed" and not
>>>"Permanent".  (i.e. people want a sort of immature-ish level for
>>>standards so that it's possible to build and deploy something
>>>interoperable without first proving that it will never need to
>>>change.)
>>> 
>>>3.  We want things to move along and be Internet STANDARDs.
>>> 
>>>4.  Most of the world thinks "RFC" == "Internet Standard".
>> 
>> I think that this point is crucial and much underrated.  I would express it
>> slightly differently: that, for most of the world, an RFC is a Standard
>> produced by the IETF, and that the organisations that know differently are
>> so few in number, even if some are politically significant, that they can
>> be ignored.
> 
> 
> The underlying question is actually more fundamental:
> do we want to dilute specifications so much that there will be
> multiple incompatible / non-interoperable versions of a specification,
> for the sake of having a document earlier that looks like the
> real thing?
> 
> There have been incompatible versions of C++ drafts (and compilers
> implementing it) over many years.  HD television went through
> incompatible standards.  WiFi 802.11 saw numerous "draft-N"
> and "802.11g+" products.  ASN.1 went through backwards incompatible
> revisions.  Java/J2EE went through backwards-incompatible revisions.
> 
> 
> Publishing a specification earlier, with the provision "subject to change"
> appears somewhat unrealistic and counterproductive to me.  It happens
> regularly that some vendor(s) create an installed base that is simply
> too large to ignore, based on early proposals of a spec, and not
> necessarily a correct implementation of the early spec -- which is
> realized after having created an installed base of several millions
> or more...
> 
> 
> Would the world be better off if the IETF had more variants of
> IP-Protocols (IPv7, IPv8, IPv9 besides IPv4 and IPv6)? Or if
> we had SNMP v4+v5+v6 in addition to v3 (and historic v2)?
> Or if we had HTTP v1.2 + v1.3 + v1.4 in addition to HTTPv1.0 & v1.1?
> 
> 
> I do not believe that having more incompatible variants of a protocol
> is going to improve the situation in the long run, and neither do 
> I believe in getting entirely rid of cross-pollination by issuing
> WG-only documents as "proposed standards".
> 
> 
> What other motivation could there be for publishing documents earlier
> than vendors implementing and shipping them earlier?  And if they do
> that, there is hardly any room for any substantial or backwards-
> incompatible changes.  And the size of the installed base created
> by the early adopters significantly limits usefulness of any features
> or backwards-compatible changes that are incorporated into later
> revisions of the document.
> 
> 
> -Martin


Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Spencer Dawkins

You could call me a blue-eyed optimist, but I have brown eyes ...


What other motivation could there be for publishing documents earlier
than vendors implementing and shipping them earlier?  And if they do
that, there is hardly any room for any substantial or backwards-
incompatible changes.  And the size of the installed base created
by the early adopters significantly limits usefulness of any features
or backwards-compatible changes that are incorporated into later
revisions of the document.


I think you're conflating two steps here:

- implement and test, including interop testing

- shipping in a product

There is a reason to implement and test, including interop testing. Robert 
Sparks can speak (orders of magnitude :-) more authoritatively than I can, 
but my impression from looking at SIPit reports is that people in the RAI 
community really do have implementations, including implementations used for 
interop testing, that are NOT present in their products, and they do expect 
to feed back based on interop testing and see changes to the protocol that 
might require implementation changes.


But beyond this ... you can implement any draft - you just have to think 
that's a good idea, and right now, people are making the decision to 
implement any particular draft with no guidance.


In a previous life, I worked for a company making network monitor equipment. 
We had requests from large carriers to support monitoring one of the 
SIGTRAN specs - I'm thinking it was 
https://datatracker.ietf.org/doc/draft-ietf-sigtran-sua/ - in about six 
different flavors of the draft (out of the 16 versions published as working 
group documents).


- All had been implemented.

- All had been deployed.

- No two were 100-percent compatible.

- None carried an identifier that said "this is -09" or whatever, so we used 
heuristics to guess which version of the protocol we were looking at (see the 
sketch after this list).


- Some carriers had more than one version of the protocol running in the 
networks (-07 between these two boxes, -09 between these other two boxes).
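
For illustration, a rough sketch of the kind of heuristic we used (the
parameter tags and rules below are invented; they are not real SUA values):

    #include <stdint.h>
    #include <stddef.h>

    /* Invented tags standing in for parameters whose codes changed
     * between draft revisions. */
    #define TAG_ONLY_IN_DRAFT07  0x0006
    #define TAG_ONLY_IN_DRAFT09  0x0106

    /* Walk the SIGTRAN-style 32-bit-padded TLV parameters of a captured
     * message and guess the draft revision from revision-specific tags.
     * Returns 7, 9, or 0 for "unknown". */
    static int guess_draft_rev(const uint8_t *params, size_t len)
    {
        size_t off = 0;
        while (off + 4 <= len) {
            uint16_t tag  = (uint16_t)((params[off] << 8) | params[off + 1]);
            uint16_t plen = (uint16_t)((params[off + 2] << 8) | params[off + 3]);
            if (plen < 4 || off + plen > len)
                break;                          /* malformed; stop guessing */
            if (tag == TAG_ONLY_IN_DRAFT07) return 7;
            if (tag == TAG_ONLY_IN_DRAFT09) return 9;
            off += (plen + 3u) & ~3u;           /* parameters padded to 32 bits */
        }
        return 0;
    }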


And none of these large carriers had waited for a proposed standard (it 
still hadn't been approved for publication when they were asking us to 
support six flavors of the draft).


If having a working group say "we think this version of the draft is stable 
enough to code to" is going to make things worse, I wouldn't mind 
understanding why that's true.


I will agree that I hear people say "we can't change the spec from v03 of 
this because people have implemented it", but looking back, they were 
pointing to TINY communities, compared to real deployments that happened 
later. In the case I was talking about with Eric Berger yesterday, the 
"can't change the spec from -03 because people have implemented it" draft 
was finally published at -08. -03 DID change, several times.


If someone manages to make a convincing argument to protect an early 
deployment to a tiny user community, one that prevents us from solving 
problems with a protocol in ways that would allow wider deployment, that's 
our fault.


Spencer 




Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Andrew Sullivan
On Tue, Nov 02, 2010 at 04:09:53PM +0100, Martin Rex wrote:

> Would the world be better off if the IETF had more variants of
> IP-Protocols (IPv7, IPv8, IPv9 besides IPv4 and IPv6)? Or if
> we had SNMP v4+v5+v6 in addition to v3 (and historic v2)?
> Or if we had HTTP v1.2 + v1.3 + v1.4 in addition to HTTPv1.0 & v1.1?

Maybe.  Arguing about counterfactuals is always fun, though a little
difficult.  But in general, there is more than one way in which a
plethora of different standards can affect things.

One way -- the one you seem to be (quite reasonably) worried about --
is that you get a bunch of non-interoperating, half-baked things that
are later rendered broken by updates to the specification.  Note that
this method is actually the one we claim potentially to have; the
two-maturity-levels draft does not change that.  The idea is that you
are supposed to try things out as early as reasonable, and if there
are significant changes, they'll turn up when an effort is made to
move things along the maturity track.  

Some people have argued in this thread (and the other related ones)
that there is a problem from the IETF attempting to prevent the
problem you're talking about.  That attempt, which is not documented
anywhere, amounts to a high bar of difficulty in getting an RFC
published at all.  I'm not actually sure I have the empirical data to
support a claim that it really is harder to get that initial-version
RFC published; but people seem to think that it is, anyway.
Supposing that it really is harder, then the current documented
standards track, and the two-maturity-levels document, will both be
wrong.  There will be one effective maturity level.

The argument in favour of publish-early, revise-often approaches is
that iterations will, or ought to, improve things.  Imagine: in some
other possible world, they're up to IPv10 now, but it took those
intervening versions to discover that you really really needed some
clean interoperation layer with the "legacy" IPv4 networks.  In that
other possible world, the early deployment failures of
quickly-produced IPv6 specifications were taken to mean that
deployment was too hard, and so a _more_ interoperable system was
crafted.  (Having come from a community where people talked about "the
possible world where I am a mud puddle", the possible world where
deployment failure leads protocol designers to think harder about real
deployment doesn't seem unrealistic to me.)

I should say that personally, I'm not sure where I land on the
early-often/forward-compatible continuum; I suspect it differs
depending on the protocol.  But as others have been pointing out, it
sure looks like there's more than one problem here, and we need to be
clear on what we think we're solving before we jump in and start doing
it.

Best regards,

A

-- 
Andrew Sullivan
a...@shinkuro.com
Shinkuro, Inc.


Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-02 Thread Martin Rex
t.petch wrote:
> 
> From: "Andrew Sullivan" 
> >
> > Suppose we actually have the following problems:
> >
> > 1.  People think that it's too hard to get to PS.  (Never mind the
> > competing anecdotes.  Let's just suppose this is true.)
> >
> > 2.  People think that PS actually ought to mean "Proposed" and not
> > "Permanent".  (i.e. people want a sort of immature-ish level for
> > standards so that it's possible to build and deploy something
> > interoperable without first proving that it will never need to
> > change.)
> >
> > 3.  We want things to move along and be Internet STANDARDs.
> >
> > 4.  Most of the world thinks "RFC" == "Internet Standard".
> 
> I think that this point is crucial and much underrated.  I would express it
> slightly differently: that, for most of the world, an RFC is a Standard
> produced by the IETF, and that the organisations that know differently are
> so few in number, even if some are politically significant, that they can
> be ignored.


The underlying question is actually more fundamental:
do we want to dilute specifications so much that there will be
multiple incompatible / non-interoperable versions of a specification,
for the sake of having a document earlier that looks like the
real thing?

There have been incompatible versions of C++ drafts (and compilers
implementing it) over many years.  HD television went through
incompatible standards.  WiFi 802.11 saw numerous "draft-N"
and "802.11g+" products.  ASN.1 went through backwards incompatible
revisions.  Java/J2EE went through backwards-incompatible revisions.


Publishing a specification earlier, with the provision "subject to change"
appears somewhat unrealistic and counterproductive to me.  It happens
regularly that some vendor(s) create an installed base that is simply
too large to ignore, based on early proposals of a spec, and not
necessarily a correct implementation of the early spec -- which is
realized after having created an installed base of several millions
or more...


Would the world be better off if the IETF had more variants of
IP-Protocols (IPv7, IPv8, IPv9 besides IPv4 and IPv6)? Or if
we had SNMP v4+v5+v6 in addition to v3 (and historic v2)?
Or if we had HTTP v1.2 + v1.3 + v1.4 in addition to HTTPv1.0 & v1.1?


I do not believe that having more incompatible variants of a protocol
is going to improve the situation in the long run, and neither do 
I believe in getting entirely rid of cross-pollination by issuing
WG-only documents as "proposed standards".


What other motivation could there be for publishing documents earlier
than vendors implementing and shipping them earlier?  And if they do
that, there is hardly any room for any substantial or backwards-
incompatible changes.  And the size of the installed base created
by the early adopters significantly limits usefulness of any features
or backwards-compatible changes that are incorporated into later
revisions of the document.


-Martin


Re: Alternate entry document model (was: Re: IETF processes (was Re: draft-housley-two-maturity-levels))

2010-11-01 Thread t.petch
----- Original Message -----
From: "Andrew Sullivan"
Sent: Friday, October 29, 2010 9:39 PM
> On Fri, Oct 29, 2010 at 01:20:23PM -0700, SM wrote:
> > It would be difficult to get buy-in if the document is not published as a
> > RFC.
>
> Suppose we actually have the following problems:
>
> 1.  People think that it's too hard to get to PS.  (Never mind the
> competing anecdotes.  Let's just suppose this is true.)
>
> 2.  People think that PS actually ought to mean "Proposed" and not
> "Permanent".  (i.e. people want a sort of immature-ish level for
> standards so that it's possible to build and deploy something
> interoperable without first proving that it will never need to
> change.)
>
> 3.  We want things to move along and be Internet STANDARDs.
>
> 4.  Most of the world thinks "RFC" == "Internet Standard".

I think that this point is crucial and much underrated.  I would express it
slightly differently: that, for most of the world, an RFC is a Standard
produced by the IETF, and that the organisations that know differently are so
few in number, even if some are politically significant, that they can be
ignored.

And that this is something outside our control and that we are powerless to
change.

So whether we have XStandard, YStandard or ZStandard and how we move
between them is irrelevant (to most of the world).

Hence my focus is on how we can get an RFC published in the first place, in a
more timely manner with, ideally, an improvement in the quality.

Tom Petch


> If all of those things are right and we're actually trying to solve
> them all, then it seems to me that the answer is indeed to move to _n_
> maturity levels of RFC, where _n_ < 3 (I propose 1), but that we
> introduce some new document series (call them TRFC, for "Tentative
> Request For Comment", or whatever) that is the first step.  Then we
> get past the thing that people are optimizing for ("everything stays
> as Proposed Standard once it gets published") by simply eliminating
> that issue permanently.
>
> Ah, you say, but now things will stick at TRFC.  Maybe.  But we could
> on purpose make it easier to get TRFC than it is now to get PS (say,
> by adopting John's limited DISCUSS community for TRFC, or one of the
> other things discussed in this thread).  Also, the argument about
> everyone thinking that RFCs are "standard", and the resulting pressure
> to make them perfect and permanent, would be explicitly relieved (at
> least for a while), because nobody thinks that TRFCs are standards.
>
> Note that this is not to denigrate SM's suggestion, which also doesn't
> seem wrong to me.  But since one of the issues appears to be that
> anything called "RFC" is set in stone, then if we just stop calling
> the early-publication documents "RFC" and introduce something
> after I-D (which is formally only on the _way_ to some consensus, and
> not actually the product of it), the blockage might be removed.
>
> A
> --
> Andrew Sullivan
> a...@shinkuro.com
> Shinkuro, Inc.
