Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Abdussalam Baryun
my preference would be just to pick one, and
 provide a stick for hitting those who do it the other way.

I think that the IESG is already using that stick :)

AB

On 1/9/13, Dean Willis dean.wil...@softarmor.com wrote:

 On Jan 8, 2013, at 12:57 PM, Abdussalam Baryun abdussalambar...@gmail.com
 wrote:

but the question about
 an error in the process is: does the RFC lack a communication requirement
 with the community?


 Sorry if I was not clear. I mean that, as some participants are requesting a
 scientific approach to struggling with 2119 (i.e. the thread subject),
 does the use or non-use of 2119 in some RFCs (i.e. we see that
 participants follow different approaches to 2119) add communication
 confusion for some of the community?


 I'm absolutely certain that some of our community is confused about
 something related to this thread. Given the absence of information that
 would help in a decision, my preference would be just to pick one, and
 provide a stick for hitting those who do it the other way.

 --
 Dean





Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Martin Rex
John Day wrote:

For any formal proofing system worth its dime, this can be 100% ruled out,
since the proofing system will emit a 100% bugfree implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proofing process.
 
 C'mon. You don't really believe that, do you?

Which one in particular?

The KIV tool, which was shown and explained to me in 1996 by a student doing
his master's thesis with it on a module of our software, can output
source code at the end of its processing: code that is a guaranteed
bug-free implementation of the spec.  (Which MUST NOT be confused
with the given spec providing the desired behaviour and being free
of unintended (and unthought-of) behaviour.)

(While the tool's name appears to originate from the University of
 Karlsruhe (KIV = Karlsruhe Interactive Verifier), and that student back
 then was from Karlsruhe, the current homepage of the tool appears
 to be at the University of Augsburg:

  http://www.informatik.uni-augsburg.de/lehrstuehle/swt/se/kiv/
)


 Ever see a compiler with bugs?

Compiler bugs are comparatively rare; many of them are problems
of the optimizer (therefore avoidable), and many happen in
obscure situations that can be avoided by a defensive programming
style.  Code generators often produce defensive-style code.
Using a robust and mature programming language, such as C89,
may also help to avoid the compiler problems of fancy or complex
languages or fancy features.

Look at the long list of defects of modern CPUs.  Apps programmers
that use a compiler rarely have to know (let alone take into account)
those CPU defects because most compiler code generators use only
a subset of CPU features anyway.



 Who verified the verifier?  How do you know the verifier is bug free?

Now you must be kidding.

Are you suggesting that anyone using the programming language Prolog
would be gambling and could never expect deterministic results?


With a decent amount of expertise and the right mindset,
even mere mortal humans can produce code with a very low error rate.
IIRC, Donald E. Knuth didn't need a formal specification and a
formal proofing tool to achieve a low error rate for TeX and Metafont.

Having started assembler programming at the age of 12 myself, I could
easily understand why DEK used Assembler for his algorithm code examples
in TAOCP when I first encountered these books at age 21.


 
 As I indicated before, I have been working on this problem since we 
 discovered Telnet wasn't as good as we thought it was.  For data 
 transfer protocols, it is relatively straightforward and can be 
 considered solved for anyone who wants to bother.  The problem is 
 most don't.

There is a perverted conditioning in our education system and
business life by rewarding mediocrity (cheating, improvising, being
superficial) which is IMO one of the underlying causes for
  http://en.wikipedia.org/wiki/Dunning_Kruger
and impairs many implementors' motivation to watch out and compensate for
defects in the spec and to recognize and respect design limits of a spec,
i.e. avoid underspecified protocol features or require official
clarification and adjust the code accordingly before shipping
implementations of underspecified protocol features.


 
 The real problem is the so-called application protocols, where dealing 
 with the different semantics of objects in different systems makes the 
 problem very hard and very subtle.  The representation of the 
 semantics always reflects what you think its seminal properties are. 
 This is not always obvious.


I believe that the real difficulty is about designing and discussing
at the abstract level and then performing formally correct transitions
from abstract spec level to concrete specification proposal level
and comparing implementation options at the spec level ... and getting
everyone that wants to participate to understand the abstraction
and how to perform transitions from and to concrete spec proposals.


I'm experiencing that difficulty myself every once in a while.
In 1995/1996 I had adopted a notion/bias about the uselessness of
traditional GSS-API channel bindings (as defined in GSS-APIv1 rfc1508/1509
and GSS-APIv2 rfc2743/2744) and it took me a few weeks of discussion
to really understand what Nico was trying to accomplish with
cryptographic channel bindings, such as tls-unique, its prerequisites
and real properties and its benefits.


 
But maybe you're not really thinking of a defect in the implementation
(commonly called a "bug") but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.
 
 One person's gap is another person's bug.  What may be obvious to one 
 as something that must occur may not be so to the other.

There is a significant difference between a bug in an implementation
and an unintended consequence due to a gap in the specification.

The SSLv3 protocol and the IETF successor TLS contain a somewhat
underspecified 

Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Martin Rex
John Day wrote:

 It would be interesting to see you apply that.
 
 This is what I have been talking about.  The human mind's ability to 
 believe that the whole world sees everything the same way they do. 
 It really is quite amazing.
 
 These so-called gaps often arise because they were unstated 
 assumptions or things that the author believed were patently obvious 
  and didn't need to be stated.  Actually, he didn't know they needed to be 
  stated.  From his point of view, no one would do it differently. 
 Nothing had been left out and he didn't make the mistake.   What 
 the other guys did was a bug.
 
 There is a much greater range of interpreting these things than it 
 appears that you imagine.

With "bug", I mean behaviour that is non-compliant with the spec,
where the difference is not to accommodate a defect of the spec,
and where this behaviour is detrimental to the operation of that
implementation (vulnerability or robustness issue) or detrimental
to the interop with other implementations.

With "conflict" in a spec, I refer to distinct statements in a
spec that contradict each other (such as the same symbolic protocol
element being assigned two different values in two separate locations
of the spec).

With "ambiguity" in a spec, I refer to an omission that precludes
a certain feature from being implemented at all.  Such as a symbolic
protocol element that represents an integer value in a PDU, where
the spec lacks the definition of the numeric value for the
on-the-wire representation.
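
As a minimal C sketch of such an ambiguity (the protocol element and
its values are invented for illustration): each implementer is forced
to guess the wire value, so the feature cannot be implemented
interoperably:

    /* Hypothetical sketch: a spec defines a msg_type octet and names a
     * HANDSHAKE_DONE element, but never assigns its numeric value.
     * Each implementer must invent one, so two independent
     * implementations will in general disagree on the wire. */
    #include <stdio.h>

    #define A_HANDSHAKE_DONE 0x04  /* implementation A's guess */
    #define B_HANDSHAKE_DONE 0x14  /* implementation B's equally valid guess */

    int main(void)
    {
        if (A_HANDSHAKE_DONE != B_HANDSHAKE_DONE)
            printf("wire values disagree: the feature cannot interoperate\n");
        return 0;
    }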

With "gap" in a spec, I refer to omissions that do not preclude
the implementation, but can lead to unexpected behaviour of an
implementation.  The result is vaguely similar to what happens when
a switch() statement lacks a default: case and is unexpectedly faced
with a value that is not covered by the case statements.  The result
can be undefined / unpredictable, yet remain formally provably
correct with respect to the given specification.
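
Rendered as a minimal C sketch (the message set is hypothetical,
invented purely for illustration):

    /* Hypothetical sketch of the switch()-without-default analogy: the
     * "spec" enumerates three message types, the implementation handles
     * exactly those, and a value outside the enumeration falls through
     * to behaviour the spec never constrained -- the gap. */
    #include <stdio.h>

    enum msg { HELLO = 1, DATA = 2, BYE = 3 };

    static const char *handle(int m)
    {
        switch (m) {
        case HELLO: return "greet";
        case DATA:  return "deliver";
        case BYE:   return "close";
        }
        /* No default: case in the "spec"; whatever happens here is
         * unpredictable yet formally consistent with the spec. */
        return "unspecified behaviour";
    }

    int main(void)
    {
        printf("%s\n", handle(4)); /* a message type the spec never mentions */
        return 0;
    }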


Finally, there are omissions in a spec, i.e. properties that a
protocol does not and cannot provide, yet consumers of the protocol may
believe in magic security pixie dust and develop a flawed assumption
about a non-existing protocol property, as it happened with
TLS renegotiation.  TLS renegotiation being secure/protected against
MitM was a property that was retrofitted into SSLv3/TLS with rfc5746,
and could not possibly have existed by accident in any of the existing
implementations.  (Although servers that did not support renegotiation
at all never were vulnerable to the interesting attacks.)



-Martin


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Hector Santos
Maybe the survey to be done is a review of all the RFCs and STDs to see
which ones


- had a great abstract and introduction,
- had the better writing styles,
- had the least endorsement resistance,
- progressed faster than most,
- had the most implementers,
- needed the least support and fewest questions asked,

etc., etc.

Are the RFCs that followed 2119 the more successful ones, the ones 
written with clarity, that knew how to separate functional from technical 
descriptions and had the right target audience(s) in mind?


Maybe we can also review which I-Ds and RFCs have high failure rates or 
trouble getting industry endorsement, and even where endorsed, why they 
failed in the marketplace.


What is the overall problem anyway?   Is this part of the reason why 
there are efforts to fast-track documents? Because there is so much 
resistance from such a broad range of disciplines with different 
mindsets and philosophies?


--
HLS

Martin Rex wrote:

John Day wrote:

It would be interesting to see you apply that.

This is what I have been talking about.  The human mind's ability to 
believe that the whole world sees everything the same way they do. 
It really is quite amazing.


These so-called gaps often arise because they were unstated 
assumptions or things that the author believed were patently obvious 
and didn't need to be stated.  Actually, he didn't know they needed to be 
stated.  From his point of view, no one would do it differently. 
Nothing had been left out and he didn't make the mistake.   What 
the other guys did was a bug.


There is a much greater range of interpreting these things than it 
appears that you imagine.


With "bug", I mean behaviour that is non-compliant with the spec,
where the difference is not to accommodate a defect of the spec,
and where this behaviour is detrimental to the operation of that
implementation (vulnerability or robustness issue) or detrimental
to the interop with other implementations.

With "conflict" in a spec, I refer to distinct statements in a
spec that contradict each other (such as the same symbolic protocol
element being assigned two different values in two separate locations
of the spec).

With "ambiguity" in a spec, I refer to an omission that precludes
a certain feature from being implemented at all.  Such as a symbolic
protocol element that represents an integer value in a PDU, where
the spec lacks the definition of the numeric value for the
on-the-wire representation.


With "gap" in a spec, I refer to omissions that do not preclude
the implementation, but can lead to unexpected behaviour of an
implementation.  The result is vaguely similar to what happens when
a switch() statement lacks a default: case and is unexpectedly faced
with a value that is not covered by the case statements.  The result
can be undefined / unpredictable, yet remain formally provably
correct with respect to the given specification.


Finally, there are omissions in a spec, i.e. properties that a
protocol does not and cannot provide, yet consumers of the protocol may
believe in magic security pixie dust and develop a flawed assumption
about a non-existing protocol property, as it happened with
TLS renegotiation.  TLS renegotiation being secure/protected against
MitM was a property that was retrofitted into SSLv3/TLS with rfc5746,
and could not possibly have existed by accident in any of the existing
implementations.  (Although servers that did not support renegotiation
at all never were vulnerable to the interesting attacks.)



-Martin








Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-09 Thread Joe Touch

On 1/7/2013 6:01 PM, John Day wrote:

All standards groups that I am aware of have had the same view.  This is
not uncommon.

Although I would point out that neither the TCP specification nor most
protocol specifications of this type follow this rule.  State
transitions are not visible on the wire.  The rules for the sliding window
are not described entirely in terms of the behavior seen on the line, etc.

I have seen specifications that attempted this, and the implementations
built from them were very different and did not come close to
interoperating, or in some cases even to doing the same thing.

In fact, I remember that we thought the new Telnet spec (1973) was a
paragon of clarity until a new site joined the Net that had not been
part of the community and came up with an implementation that bore no
relation to what anyone else had done.

This problem is a lot more subtle than you imagine.


+1

A protocol *is*:

- the states at the endpoints
- the messages on the wire
- a description of input events (message arrival, upper-layer
interface requests, timer expiration) that indicates
the subsequent change of state and output event
(message departure, upper layer indication,
or timers to set)

(i.e., a Mealy machine, attaching events to arcs)

The wire is the second of these, and entirely insufficient as a 
protocol spec.
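
A minimal C sketch of that view (a toy stop-and-wait sender; the
protocol is hypothetical and illustrative only): the table maps
(state, input event) to (next state, output event), which is exactly
the Mealy form:

    /* Toy Mealy-machine protocol description, hypothetical and
     * illustrative only: (state, input event) -> (next state, output). */
    #include <stdio.h>

    enum state  { IDLE, WAIT_ACK };
    enum event  { SEND_REQ, ACK_ARRIVES, TIMER_EXPIRES };
    enum output { NONE, EMIT_DATA, RETRANSMIT, NOTIFY_UPPER };

    struct transition { enum state next; enum output out; };

    /* One row per state, one column per input event. */
    static const struct transition table[2][3] = {
        /* IDLE:     */ { {WAIT_ACK, EMIT_DATA}, {IDLE, NONE},         {IDLE, NONE} },
        /* WAIT_ACK: */ { {WAIT_ACK, NONE},      {IDLE, NOTIFY_UPPER}, {WAIT_ACK, RETRANSMIT} }
    };

    int main(void)
    {
        enum state s = IDLE;
        enum event trace[3] = { SEND_REQ, TIMER_EXPIRES, ACK_ARRIVES };
        int i;

        for (i = 0; i < 3; i++) {
            struct transition t = table[s][trace[i]];
            printf("state %d + event %d -> state %d, output %d\n",
                   (int)s, (int)trace[i], (int)t.next, (int)t.out);
            s = t.next;
        }
        return 0;
    }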


Yes, there are two ways to try to write a protocol spec:
- procedural
defining the above explicitly
- behavioral
defining a protocol only from its external
behavior

The difference between these is easy to see for sort algorithms:

procedural:
quicksort
heapsort
etc.

behavioral:
sort

AFAICT, "on the wire" often implies behavioral equivalence, but it's a 
lot more complicated than just the on-wire messages. A behavioral 
description of a protocol would treat the protocol as a black box and 
explain its behavior under every possible input.
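
In miniature, as a C sketch (illustrative only): the procedural spec
prescribes the steps, while the behavioral spec only constrains the
observable result and leaves the algorithm open:

    /* Procedural vs. behavioral specification, in miniature
     * (illustrative only). */
    #include <stdio.h>

    /* Procedural spec: "use insertion sort" -- prescribes the steps. */
    static void insertion_sort(int *a, int n)
    {
        int i, j, key;
        for (i = 1; i < n; i++) {
            key = a[i];
            for (j = i - 1; j >= 0 && a[j] > key; j--)
                a[j + 1] = a[j];
            a[j + 1] = key;
        }
    }

    /* Behavioral spec: "the output is ordered" -- any algorithm
     * satisfying this predicate conforms. */
    static int is_sorted(const int *a, int n)
    {
        int i;
        for (i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return 0;
        return 1;
    }

    int main(void)
    {
        int a[3] = { 3, 1, 2 };
        insertion_sort(a, 3);
        printf("sorted: %s\n", is_sorted(a, 3) ? "yes" : "no");
        return 0;
    }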


I'll take procedural descriptions of protocols over behavioral any day.

Joe


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
Hi John,

thank you for your reply (I learned a lot). So I understand that a
standards-track RFC may have more than one implementation, with behavior
alike enough not to produce an error. Regarding following 2119, I
understand most text follows it only where there are normative actions.
Regarding an implementer claiming to follow an RFC, the question about
an error in the process is: does the RFC lack a communication requirement
with the community?

AB

On 1/7/13, John Day jeanj...@comcast.net wrote:
 Strictly speaking, the language of 2119 should be followed wherever
 necessary in order for the text to be normative and make it mandatory
 that a conforming implementation meets some requirement.  Otherwise,
 someone could build an implementation and claim it was correct and
 possibly cause legal problems. However, in the IETF there is also a
 requirement that there be two independent but communicating
 implementations for an RFC to advance on the standards track. Correct?

 For all practical purposes, this requirement makes being able to
 communicate with one of the existing implementations the formal and
 normative definition of the RFC.  Any debate over the content of the
 RFC text is resolved by what the implementations do.  It would seem
 to be at the discretion of the authors of the implementations to
 determine whether or not any problems that are raised are bugs or not.

 Then it would seem that regardless of whether 2119 is followed, the
 RFCs are merely informative guides.

 So while the comments are valid that RFC 2119 should be followed,
 they are also irrelevant.

 Given that any natural language description is going to be ambiguous,
 this is probably for the best.

 Take care,
 John Day

 At 9:41 AM +0100 1/6/13, Abdussalam Baryun wrote:
Hi Marc Petit-Huguenin ,

I read the responses so far, and what can be said today is that there are two
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

I like the approach, and agree with you that we need a solution in the
IETF for an issue that is still unsolved, or ignored by participants for
some reason. However, I agree with a survey, or an experiment, whatever
we call it, that makes the IETF reflect on how RFC 2119 performs for the
end-users of implementations of RFC protocols. I think many long-time
participants already have good experience to inform us of some of the
reality of how IETF standards reach end-users.

AB




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dick Franks
On 5 January 2013 19:14, Marc Petit-Huguenin petit...@acm.org wrote:
[snip]

 Another way to look at it would be to run the following experiment:

 1. Someone designs a new protocol, something simple but not obvious, writes
 it in a formal language, and keeps it secret.

Which raises the obvious question:  Why do we not write protocol specs
in a formal specification language instead of struggling with the
ambiguities of natural language?

Theorem provers and automated verification tools could then be brought
to bear on both specifications and implementations.



Dick
--


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day
The reasons have been discussed or at least alluded to previously in 
this thread.  The short answer is we have been there and done that: 
30 years ago.


All those tools were developed and used successfully in the 80s.   I 
know of cases where doing the formal specification alongside the 
design phase caught lots of problems.   However, there are two 
central problems:  First, in general, programmers are lazy and just 
want to code. ;-)  Using a formal method is a lot more work.  Second, 
the complexity of the formal statements that must be written down is 
greater than the code.  So there is a higher probability of mistakes 
in the formal description than in the code.  Admittedly, if those 
statements are made, one has a far greater understanding of what one 
is doing.


Once you have both, there is still the problem that if a bug or 
ambiguity shows up, neither the code nor the formal spec nor a prose 
spec can be taken to be the right one.  What is right is still in the 
heads of the authors.  All of these are merely approximations.  One 
has to go back and look at all of them and determine what the right 
answer is.  Of course, the more things one has to look at, the better. 
(For a bit more, see Chapter 1 of PNA.)


John


Which raises the obvious question:  Why do we not write protocol specs
in a formal specification language instead of struggling with the
ambiguities of natural language?

Theorem provers and automated verification tools could then be brought
to bear on both specifiations and implementations.



Dick
--




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Donald Eastlake
Another problem is maintenance. Protocols change. Having to maintain a
formal specification is commonly at least an order of magnitude more
effort than maintaining a prose description. So it doesn't happen and
they very rapidly get out of synch in any living protocol. As an
example, the IEEE 802.11 (Wi-Fi) standard used to have a normative
formal description of the MAC operation (see Annex C of 802.11-1999).
By 802.11-2007 this was out of synch but was still included as
informational material on the theory it might be of some use, and the
section began with the words "This clause is no longer maintained".
Although still present as informational in 802.11-2012, the first
words of that section are now "This annex is obsolete".

And, as has been mentioned before, I'd like to emphasize that the IETF
experience and principle is that, if you want interoperation,
compliance testing is useless. The way to interoperation is
interoperation testing between implementations and, to a first
approximation, the more and the earlier you do interoperation testing,
the better.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com


On Tue, Jan 8, 2013 at 9:45 AM, John Day jeanj...@comcast.net wrote:
 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.   I know of
 cases where doing the formal specification alongside the design phase caught
 lots of problems.   However, there are two central problems:  First, in
 general, programmers are lazy and just want to code. ;-)  Using a formal
 method is a lot more work.  Second, the complexity of the formal statements
 that must be written down is greater than the code.  So there is a higher
 probability of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

 Once you have both, there is still the problem that if a bug or ambiguity
 shows up, neither the code nor the formal spec nor a prose spec can be taken
 to be the right one.  What is right is still in the heads of the authors.  All
 of these are merely approximations.  One has to go back and look at all of
 them and determine what the right answer is.  Of course, the more things one
 has to look at the better. (for a bit more, see Chapter 1 of PNA).

 John


 Which raises the obvious question:  Why do we not write protocol specs
 in a formal specification language instead of struggling with the
 ambiguities of natural language?

 Theorem provers and automated verification tools could then be brought
  to bear on both specifications and implementations.



 Dick
 --




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

Hear. Hear.  Ditto!  Absolutely.  etc.

At 10:12 AM -0500 1/8/13, Donald Eastlake wrote:

Another problem is maintenance. Protocols change. Having to maintain a
formal specification is commonly at least an order of magnitude more
effort than maintaining a prose description. So it doesn't happen and
they very rapidly get out of synch in any living protocol. As an
example, the IEEE 802.11 (Wi-Fi) standard used to have a normative
formal description of the MAC operation (see Annex C of 802.11-1999).
By 802.11-2007 this was out of synch but was still included as
informational material on the theory it might be of some use, and the
section began with the words "This clause is no longer maintained".
Although still present as informational in 802.11-2012, the first
words of that section are now "This annex is obsolete".

And, as has been mentioned before, I'd like to emphasize that the IETF
experience and principle is that, if you want interoperation,
compliance testing is useless. The way to interoperation is
interoperation testing between implementations and, to a first
approximation, the more and the earlier you do interoperation testing,
the better.

Thanks,
Donald
=
 Donald E. Eastlake 3rd   +1-508-333-2270 (cell)
 155 Beaver Street, Milford, MA 01757 USA
 d3e...@gmail.com


On Tue, Jan 8, 2013 at 9:45 AM, John Day jeanj...@comcast.net wrote:

 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.   I know of
 cases where doing the formal specification alongside the design phase caught
 lots of problems.   However, there are two central problems:  First, in
 general, programmers are lazy and just want to code. ;-)  Using a formal
 method is a lot more work.  Second, the complexity of the formal statements
 that must be written down is greater than the code.  So there is a higher
 probability of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

 Once you have both, there is still the problem that if a bug or ambiguity
 shows up, neither the code nor the formal spec nor a prose spec can be taken
  to be the right one.  What is right is still in the heads of the authors.  All
 of these are merely approximations.  One has to go back and look at all of
 them and determine what the right answer is.  Of course, the more things one
 has to look at the better. (for a bit more, see Chapter 1 of PNA).

 John



 Which raises the obvious question:  Why do we not write protocol specs
 in a formal specification language instead of struggling with the
 ambiguities of natural language?

 Theorem provers and automated verification tools could then be brought
  to bear on both specifications and implementations.



 Dick
 --







Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
but the question about
 an error in the process is: does the RFC lack a communication requirement
 with the community?


Sorry if I was not clear. I mean that, as some participants are requesting a
scientific approach to struggling with 2119 (i.e. the thread subject),
does the use or non-use of 2119 in some RFCs (i.e. we see that
participants follow different approaches to 2119) add communication
confusion for some of the community?

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
Why not have participants follow one approach to using 2119 in I-Ds and be
done; and if not, or if another approach is used, then please specify it in
the language section.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dean Willis

On Jan 7, 2013, at 4:53 AM, Stewart Bryant stbry...@cisco.com wrote:

 Speaking as both a reviewer and an author, I would like
 to ground this thread to some form of reality.
 
 Can anyone point to specific cases where absence or over
 use of an RFC2119 key word caused an interoperability failure,
 or excessive development time?


I'm anecdotally familiar with some early pre-RFC 2543 SIP implementations where 
the implementors ignored everything that didn't say MUST and got something that 
didn't work. At all. But it was apparently really easy to develop, as the spec 
only had a few dozen MUST clauses, and the developers didn't 
include-by-reference any of the cited specs, such as SDP.

When we were trying to decide whether to make RFC 3261 (the replacement of RFC 
2543) a draft standard instead of a proposed standard, I recall Robert 
Sparks and some others attempting to define a fully interoperable 
implementation test that tabulated all of the RFC 2119 invocations that had 
sprouted in RFC 3261. They then immediately gave up the idea as impractical, we 
recycled at proposed, and gave up on ever making it a full standard. The testing 
methodology has greatly improved since then, and it makes little use of RFC 2119 
language for test definition or construction. 

--
Dean




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dean Willis

On Jan 8, 2013, at 12:57 PM, Abdussalam Baryun abdussalambar...@gmail.com 
wrote:

 but the question about
 an error in the process is: does the RFC lack a communication requirement
 with the community?
 
 
  Sorry if I was not clear. I mean that, as some participants are requesting a
  scientific approach to struggling with 2119 (i.e. the thread subject),
  does the use or non-use of 2119 in some RFCs (i.e. we see that
  participants follow different approaches to 2119) add communication
  confusion for some of the community?
 

I'm absolutely certain that some of our community is confused about something 
related to this thread. Given the absence of information that would help in a 
decision, my preference would be just to pick one, and provide a stick for 
hitting those who do it the other way.

--
Dean




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Martin Rex
 
John Day jeanj...@comcast.net wrote:

 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 years ago.

 All those tools were developed and used successfully in the 80s.
 I know of cases where doing the formal specification alongside the
 design phase caught lots of problems.   However, there are two
 central problems:  First, in general, programmers are lazy and just
 want to code. ;-)  Using a formal method is a lot more work.
 Second, the complexity of the formal statements that must be written
 down is greater than the code.  So there is a higher probability
 of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.

I believe that the problem with formal logic is that it is difficult
both to write and to read/understand, and to verify that the chosen
axioms actually reflect (and lead to) the desired behaviour/outcome;
plus the relative scarcity of suitable tools, the complexity of the
underlying theory and tools, and the tools' resulting lack of
intuitive usability.



 Once you have both, there is still the problem that if a bug or ambiguity
 shows up,

For any formal proofing system worth its dime, this can be 100% ruled out,
since the proofing system will emit a 100% bugfree implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proofing process.



  neither the code nor the formal spec nor a prose spec can be taken
 to be the right one.  What is right is still in the heads of the authors.

But maybe you're not really thinking of a defect in the implementation
(commonly called a "bug") but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.

(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:

 Another problem is maintenance. Protocols change. Having to maintain a
 formal specification is commonly at least an order of magnitude more
 effort than maintaining a prose description.

Definitely.  Discussing protocols at the level of a formal specification
language would be quite challenging (typically impossible in an open
forum; I believe it would even be difficult in a small and well-trained
design team).


 
 And, as has been mentioned before, I'd like to emphasize that the IETF
 experience and principal is that, if you want interoperation,
 compliance testing is useless.

Ouch.  I believe this message is misleading or wrong.

Compliance testing is VERY important, rather than useless.

Compliance testing would actually be perfectly sufficient iff the spec
was formally proven to be free of conflicts and ambiguities among
specified protocol elements -- plus a significant effort spent on
ensuring there were no gaps in the specification.

As it turns out, however, a significant number of implementations will
be created by humans interpreting natural-language specifications,
rather than implementations created as 100% bug-free by-products of a
formal proof tool, and often, interop with such buggy, and often
spec-incompliant, implementations is desirable and necessary since
they make up a significant part of the installed base.



 The way to interoperation is interoperation testing between
 implementations and, to a first approximation, the more and the
 earlier you do interoperation testing, the better.


The #1 reason for interop problems, and the road-block to the evolution of
protocols, is the widespread lack of compliance testing (on the flawed
assumption that it is useless) and the focus on black-box interop testing
of the intersection of two subsets of protocol features.

The problem with interop testing is that it really doesn't provide
much of a clue, and the results can _not_ be extrapolated to features
and areas that were not interop tested (typically the other 90% of the
specification, many optional features and most of the extensibility).


The near-complete breakage of TLS protocol version negotiation is a
result of narrow-minded interop testing in combination with a complete
lack of compliance testing.


If done right, pure compliance testing can go a long way to providing
interop.  The only area where real interop testing is necessary is
those protocol areas where the spec contains conflicts/inconsistencies
that implementors didn't notice or ignored, or where there are
widespread inconsistencies of implementations with the specification,
often created by careless premature shipping of a defective implementation
and the de facto creation of an installed base that is too large to ignore.
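
As a toy illustration of the distinction (a hypothetical negotiation
rule, loosely modeled on version negotiation; not any real TLS code):
compliance testing checks the specified rule over the whole input
space, while interop testing only observes one pairing of two stacks:

    /* Toy sketch, hypothetical rule: a compliant responder replies with
     * min(offered, supported).  Compliance testing verifies the rule
     * itself across the input space; interop testing exercises one
     * happy pairing and says nothing about untested inputs. */
    #include <assert.h>
    #include <stdio.h>

    static unsigned char respond(unsigned char offered,
                                 unsigned char supported)
    {
        return offered < supported ? offered : supported;
    }

    int main(void)
    {
        unsigned char o, s;

        /* Compliance test: the responder never answers with a version
         * higher than what was offered or what it supports. */
        for (o = 0; o < 8; o++)
            for (s = 0; s < 8; s++)
                assert(respond(o, s) <= o && respond(o, s) <= s);

        /* Interop test: one pairing of two concrete stacks. */
        assert(respond(3, 3) == 3);

        printf("all checks passed\n");
        return 0;
    }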


-Martin


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Marc Petit-Huguenin

On 01/08/2013 05:41 AM, Dick Franks wrote:
 On 5 January 2013 19:14, Marc Petit-Huguenin petit...@acm.org wrote: 
 [snip]
 
 Another way to look at it would be to run the following experiment:
 
  1. Someone designs a new protocol, something simple but not obvious, writes
  it in a formal language, and keeps it secret.
 
 Which raises the obvious question:  Why do we not write protocol specs in a
 formal specification language instead of struggling with the ambiguities of
 natural language?
 
 Theorem provers and automated verification tools could then be brought to
  bear on both specifications and implementations.
 

IMO, the main problem with using a formal specification language is that it
would make RFCs a little bit too close to mathematics for some people, as math
is not patentable.  So do not expect any change in this direction.

-- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

At 1:36 AM +0100 1/9/13, Martin Rex wrote:



John Day jeanj...@comcast.net wrote:


 The reasons have been discussed or at least alluded to previously in this
 thread.  The short answer is we have been there and done that: 30 
years ago.


 All those tools were developed and used successfully in the 80s.
 I know of cases where doing the formal specification alongside the
 design phase caught lots of problems.   However, there are two
 central problems:  First, in general, programmers are lazy and just
 want to code. ;-)  Using a formal method is a lot more work.
 Second, the complexity of the formal statements that must be written
 down is greater than the code.  So there is a higher probability
 of mistakes in the formal description than in the code.
 Admittedly, if those statements are made, one has a far greater
 understanding of what you are doing.


I believe that the problem with formal logic is that it is difficult
both to write and to read/understand, and to verify that the chosen
axioms actually reflect (and lead to) the desired behaviour/outcome;
plus the relative scarcity of suitable tools, the complexity of the
underlying theory and tools, and the tools' resulting lack of
intuitive usability.



The tools have been available for quite some time.  It is still very difficult.





 Once you have both, there is still the problem that if a bug or ambiguity
 shows up,


For any formal proofing system worth its dime, this can be 100% ruled out,
since the proofing system will emit a 100% bugfree implementation of the
spec in the programming language of your choice as a result/byproduct of the
formal proofing process.


C'mon. You don't really believe that, do you? The statement is either 
a tautology or naive.  Ever see a compiler with bugs?  Who verified 
the verifier?  How do you know the verifier is bug-free?


As I indicated before, I have been working on this problem since we 
discovered Telnet wasn't as good as we thought it was.  For data 
transfer protocols, it is relatively straightforward and can be 
considered solved for anyone who wants to bother.  The problem is 
most don't.


The real problem is the so-called application protocols, where dealing 
with the different semantics of objects in different systems makes the 
problem very hard and very subtle.  The representation of the 
semantics always reflects what you think its seminal properties are. 
This is not always obvious.






  neither the code nor the formal spec nor a prose spec can be taken
 to be the right one.  What is right is still in the heads of the authors.


But maybe you're not really thinking of a defect in the implementation
(commonly called a "bug") but rather a gap in the specification that
leads to unintended or undesired behaviour of a 100% bug-free
implementation of your spec.


One person's gap is another person's bug.  What may be obvious to one 
as something that must occur may not be so to the other.  Then there 
is that fine line between what part of the specification is required 
for the specification and what part is the environment of the 
implementation.


The human mind has an amazing ability to convince us that how we see 
the world is the way others do.  Having been on the Net when there 
were 15-20 very different machine architectures, I can assure you 
that implementation strategies can differ wildly.


I had great hopes for the temporal-ordering approaches to 
specification, since they say the least about the implementation 
(the specification is entirely in terms of the ordering of events). 
However, I never met anyone who could design in them.




(see Section 4 of this paper for an example:
  http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1827
)


Donald Eastlake wrote:


 Another problem is maintenance. Protocols change. Having to maintain a
 formal specification is commonly at least an order of magnitude more
 effort than maintaining a prose description.


Definitely.  Discussing protocols at the level of a formal specification
language would be quite challenging (typically impossible in an open
forum; I believe it would even be difficult in a small and well-trained
design team).


Which means there will always be a danger of things getting lost in translation.





 And, as has been mentioned before, I'd like to emphasize that the IETF
 experience and principle is that, if you want interoperation,
 compliance testing is useless.


Ouch.  I believe this message is misleading or wrong.

Compliance testing is VERY important, rather than useless.

Compliance testing would actually be perfectly sufficient iff the spec
was formally proven to be free of conflicts and ambiguities among
specified protocol elements -- plus a significant effort spent on
ensuring there were no gaps in the specification.


Do you realize the cost of this amount of testing? (Not saying it 
isn't a good idea, just the probability of convincing a development 
manager to do it is pretty slim.)  ;-)




As it turns out, however, a significant 

Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Dick Franks
On 9 January 2013 01:19, John Day jeanj...@comcast.net wrote:
[snip]

 One person's gap is another person's bug.  What may be obvious to one as
 something that must occur may not be so to the other.  Then there is that
 fine line between what part of the specification is required for the
 specification and what part is the environment of the implementation.

Disagree

A gap in the specification will result in all implementations having
the same unintended behaviour, because the developers understood and
followed the spec 100%.

Bugs are distinguishable from gaps because they occur in some
implementations but not others and arise from misinterpretation of
some aspect of the specification.  In this context, over-engineering
is a bug, as distinct from competitive advantage.


Dick
--


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread John Day

It would be interesting to see you apply that.

This is what I have been talking about.  The human mind's ability to 
believe that the whole world sees everything the same way they do. 
It really is quite amazing.


These so-called gaps often arise because they were unstated 
assumptions or things that the author believed were patently obvious 
and didn't need to be stated.  Actually, he didn't know they needed to be 
stated.  From his point of view, no one would do it differently. 
Nothing had been left out and he didn't make the mistake.   What 
the other guys did was a bug.


There is a much greater range of interpreting these things than it 
appears that you imagine.


At 2:46 AM + 1/9/13, Dick Franks wrote:

On 9 January 2013 01:19, John Day jeanj...@comcast.net wrote:
[snip]


 One person's gap is another person's bug.  What may be obvious to one as
 something that must occur may not be so to the other.  Then there is that
 fine line between what part of the specification is required for the
 specification and what part is the environment of the implementation.


Disagree

A gap in the specification will result in all implementations having
the same unintended behaviour, because the developers understood and
followed the spec 100%.

Bugs are distinguishable from gaps because they occur in some
implementations but not others and arise from misinterpretation of
some aspect of the specification.  In this context, over-engineering
is a bug, as distinct from competitive advantage.


Dick
--




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-08 Thread Abdussalam Baryun
This is what I have been talking about. The human mind's ability to believe 
that the whole world sees everything the same way they do. It really is quite 
amazing.

These so-called gaps often arise because they were unstated assumptions or 
things that the author believed were patently obvious and didn't need to be 
stated. Actually, he didn't know they needed to be stated. From his point of view, no 
one would do it differently. Nothing had been left out and he didn't make the 
mistake. What the other guys did was a bug.

That is why I think the IETF names its RFCs, standards, and
specifications *Requests For Comments*: if there is a gap, or the
human mind failed, then one should communicate it to the world and
report such a gap/bug.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Speaking as both a reviewer and an author, I would like
to ground this thread to some form of reality.

Can anyone point to specific cases where absence or over
use of an RFC2119 key word caused an interoperability failure,
or excessive development time?

- Stewart






Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
As you are guessing, that is unlikely; however, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches, which might be precluded by 
using a MUST where it wasn't needed.


At 10:53 AM + 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread to some form of reality.

Can anyone point to specific cases where absence or over
use of an RFC2119 key word caused an interoperability failure,
or excessive development time?

- Stewart




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Indeed an interesting additional question.

My view is that you MUST NOT use RFC2119 language, unless you MUST use 
it, for exactly that reason. What is important is "on the wire" (a term 
that from experience is very difficult to define) inter-operation, and 
implementers need to be free to achieve that through any means that suits 
them.


- Stewart

On 07/01/2013 12:22, John Day wrote:
As you are guessing, that is unlikely; however, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches, which might be precluded by 
using a MUST where it wasn't needed.


At 10:53 AM + 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread to some form of reality.

Can anyone point to specific cases where absence or over
use of an RFC2119 key word caused an interoperability failure,
or excessive development time?

- Stewart


.




--
For corporate legal information go to:

http://www.cisco.com/web/about/doing_business/legal/cri/index.html



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Brian E Carpenter
On 07/01/2013 12:42, Stewart Bryant wrote:
 Indeed an interesting additional question.
 
 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is "on the wire" (a term
 that from experience is very difficult to define) inter-operation, and
 implementers need to be free to achieve that through any means that suits
 them.

Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
Control in the RFC Series) and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
Let me get this straight, Brian.  It would seem you are pointing out 
that the IETF does not have a clear idea of what it is doing?  ;-)  I 
could believe that.


No, your example is not an example of what I suggested at all.

Yours is an example of not specifying the conditions that a 
congestion control algorithm must satisfy, rather than the congestion 
control algorithm itself.


What I was suggesting (and it is a very easy trap to fall into) was 
defining a spec with one implementation environment in mind and not 
realizing you are constraining things unnecessarily. Consider the 
difference between defining TCP as a state machine with that sort of 
implementation in mind and building an implementation in LISP. (I 
know someone who did it.)  It would be very easy to make assumptions 
about how something was described that made a LISP implementation 
unduly messy, or missed an opportunity for a major simplification.


It is quite easy to do something mathematical in this area (not 
necessarily alluding to formal specification), but you do have to 
have a clear concept of the levels of abstraction.  Of course, once 
you do, you still have the question whether there is a higher 
probability of errors in the math or the program.


Yes, programming is just math of a different kind, which of course is 
the point.


Take care,
John

At 1:31 PM + 1/7/13, Brian E Carpenter wrote:

On 07/01/2013 12:42, Stewart Bryant wrote:

 Indeed an interesting additional question.

  My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is "on the wire" (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that through any means that suits
  them.


Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
Control in the RFC Series) and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian




Re: I'm struggling with 2219 language again

2013-01-07 Thread Pete Resnick
Dean, I am struggling constantly with 2119 as an AD, because if I take 
the letter (and the spirit) of 2119 at face value, a lot of people are 
doing this wrong. And 2119 is a BCP; it's one of our process documents. 
So I'd like this to be cleared up as much as you. I think there is 
active harm in the misuse we are seeing.


To Ned's points:

On 1/4/13 7:05 PM, ned+i...@mauve.mrochek.com wrote:

+1 to Brian and others saying upper case should be used sparingly, and
only where it really matters. If even then.
 

That's the entire point: The terms provide additional information as to
what the authors consider the important points of compliance to be.
   


We will likely end up in violent agreement, but I think the above 
statement is incorrect. Nowhere in 2119 will you find the words 
"conform" or "conformance" or "comply" or "compliance", and I think 
there's a reason for that: We long ago found that we did not really care 
about conformance or compliance in the IETF. What we cared about was 
interoperability of independently developed implementations, because 
independently developing implementations that interoperate with other 
folks is what makes the Internet robust. Importantly, we specifically 
did not want to dictate how you write your code or tell you specific 
algorithms to follow; that makes for everyone implementing the same 
brittle code.


The useful function of 2119 is that it allows us to document the 
important *behavioral* requirements that I have to be aware of when I am 
implementing (e.g., even though it's not obvious, my implementation MUST 
send such-and-so or the other side is going to crash and burn; e.g., 
even though it's not obvious, the other side MAY send this-and-that, and 
therefore my implementation needs to be able to handle it). And those 
even though it's not obvious statements are important. It wastes my 
time as an implementer to try to figure out what interoperability 
requirement is meant by "You MUST implement a variable to keep track of 
such-and-so state" (and yes, we see these in specs lately), and it makes 
for everyone potentially implementing the same broken code.



The notion (that some have) that MUST means you have to do something
to be compliant and that a must (lower case) is optional is just
nuts.
 


You bet, Thomas!


In some ways I find the use of SHOULD and SHOULD NOT be to be more useful
than MUST and MUST NOT. MUST and MUST NOT are usually obvious. SHOULD and
SHOULD NOT are things on the boundary, and how boundary cases are handled
is often what separated a good implementation from a mediocre or even poor
one.
   


Agreed. Indeed, if you have a MUST or MUST NOT, I'd almost always be 
inclined to have a "because" clause. If you can't give an explanation 
of why I need this warning, there's a good chance the MUST is 
inappropriate. (One that I've seen of late is "If the implementation 
wants to send, it MUST set the send field to 'true'. If it wants to 
receive, it MUST set the send field to 'false'." I have no idea what 
those MUSTs are telling me. Under what circumstances could I possibly 
want to send but set the send field to false?)



The idea that upper-case language can be used to identify all the
required parts of a specification from a
compliance/conformance/interoperability perspective is just
wrong. This has never been the case (and would be exceedingly painful to
do), though (again) some people seem to think this would be useful and
thus like lots of upper-case language.
 

At most it provides the basis for a compliance checklist. But such checklists
never cover all the points involved in compliance. Heck, most specifications in
toto don't do that. Some amount of common sense is always required.
   


And again, it's worse than incomplete. It also makes for brittle code. I 
don't want you checking to see if you coded things the same way that I 
did, which is what a compliance list gets you. I want you checking that 
your *behavior* from the net interoperates with me. Insofar as you want 
to call *that* compliance, well, OK, but I don't think that's what 
people mean.



Where you want to use MUST is where an implementation might be tempted
to take a short cut -- to the detriment of the Internet -- but could
do so without actually breaking interoperability.


Exactly!


IMO, too many specs seriously overuse/misuse 2119 language, to the
detriment of readability, common sense, and reserving the terms to
bring attention to those cases where it really is important to
highlight an important point that may not be obvious to a casual
reader/implementor.
 

Sadly true.
   


And to the detriment of good code.

pr

--
Pete Resnick  http://www.qualcomm.com/~presnick/
Qualcomm Technologies, Inc. - +1 (858)651-4478



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Scudder
On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
 independent but communicating implementations for an RFC to standards-track. 
 Correct?

Alas, no. 

--John


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread ned+ietf
 On 07/01/2013 12:42, Stewart Bryant wrote:
  Indeed an interesting additional question.
 
  My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is "on the wire" (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that through any means that suits
  them.

 Agreed. Imagine the effect if the TCP standard had said that a particular
 congestion control algorithm was mandatory. Oh, wait...

 ... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
 which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
 congestion control algorithm is non-conformant. Oh, wait...

 ... RFC 2001 is a proposed standard defining congestion control algorithms,
 but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

 RFC 2001 is obsoleted by RFC 2581, which is obsoleted by RFC 5681. These both
 use RFC 2119 keywords, but they still don't update RFC 1122.

 This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
 Control in the RFC Series) and of course it's still an open research topic.

 Attempting to validate TCP implementations on the basis of conformance
 with RFC 2119 keywords would be, well, missing the point.

 I know this is an extreme case, but I believe it shows the futility of
 trying to be either legalistic or mathematical in this area.

Exactly. Looking for cases where the use/non-use of capitalized terms caused an
interoperability failure is a bit silly, because the use/non-use of such terms
doesn't carry that sort of weight.

What does happen is that implementation and therefore interoperability quality
can suffer when standards emphasize the wrong points of compliance. Things
work, but not as well as they should or could.

A fairly common case of this in application protocols is an emphasis on
low-level limits and restrictions while ignoring higher-level requirements. For
example, our email standards talk a fair bit about so-called minimum maximums
that in practice are rarely an issue, all the while failing to specify a
mandatory minimum set of semantics all agents must support. This has led to
lack of interoperable functionality in the long term.

Capitalized terms are both a blessing and a curse in this regard. They make it
easy to point out the "really important" stuff. But in doing so, they also
make it easy to put the emphasis in the wrong places.

tl;dr: Capitalized terms are a tool, and like any tool they can be misused.

Ned


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day

Alas, indeed.  ;-)


At 3:50 PM + 1/7/13, John Scudder wrote:

On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
independent but communicating implementations for an RFC to be 
standards-track. Correct?


Alas, no.

--John




Re: I'm struggling with 2219 language again

2013-01-07 Thread ned+ietf

Dean, I am struggling constantly with 2119 as an AD, because if I take
the letter (and the spirit) of 2119 at face value, a lot of people are
doing this wrong. And 2119 is a BCP; it's one of our process documents.
So I'd like this to be cleared up as much as you. I think there is
active harm in the misuse we are seeing.



To Ned's points:



On 1/4/13 7:05 PM, ned+i...@mauve.mrochek.com wrote:
 +1 to Brian and others saying upper case should be used sparingly, and
 only where it really matters. If even then.

 That's the entire point: The terms provide additional information as to
 what the authors consider the important points of compliance to be.




We will likely end up in violent agreement, but I think the above
statement is incorrect. Nowhere in 2119 will you find the words
"conform" or "conformance" or "comply" or "compliance", and I think
there's a reason for that: We long ago found that we did not really care
about conformance or compliance in the IETF. What we cared about was
interoperability of independently developed implementations, because
independently developing implementations that interoperate with other
folks is what makes the Internet robust. Importantly, we specifically
did not want to dictate how you write your code or tell you specific
algorithms to follow; that makes for everyone implementing the same
brittle code.


Meh. I know the IETF has a thing about these terms, and insofar as they can
lead to the use of and/or overreliance on compliance testing rather than
interoperability testing, I agree with that sentiment.

OTOH, when it comes to actually, you know, writing code, this entire attitude
is IMNSHO more than a little precious. Maybe I've missed them, but in my
experience our avoidance of these terms has not resulted in the magical
creation of a widely available perfect reference implementation that allows me
to check interoperability. In fact in a lot of cases when I write code I have
absolutely nothing to test against - and this is often true even when I'm
implementing a standard that's been around for many years.

In such cases the use of compliance language - and yes, it is compliance
language, the avoidance of that term in RFC 2119 notwithstanding - is
essential. And for that matter it's still compliance language even if RFC 2119
terms are not used.

I'll also note that RFC 1123 most certainly does use the term "compliant" in
regards to capitalized terms it defines, and if nitpicking on this point
becomes an issue I have zero problem replacing references to RFC 2119 with
references to RFC 1123 in the future.

All that said, I'll again point out that these terms are a double-edged sword,
and can be used to put the emphasis in the wrong place or even to specify
downright silly requirements. But that's an argument for better review of our
specifications, because saying "MUST do this stupid and counterproductive thing"
isn't fixed in any real sense by removing the capitalization.


The useful function of 2119 is that it allows us to document the
important *behavioral* requirements that I have to be aware of when I am
implementing (e.g., "even though it's not obvious, my implementation MUST
send such-and-so or the other side is going to crash and burn"; e.g.,
"even though it's not obvious, the other side MAY send this-and-that, and
therefore my implementation needs to be able to handle it"). And those
"even though it's not obvious" statements are important. It wastes my
time as an implementer to try to figure out what interoperability
requirement is meant by, "You MUST implement a variable to keep track of
such-and-so-state" (and yes, we see these in specs lately), and it makes
for everyone potentially implementing the same broken code.


Good point. Pointing out the nonobvious bits where things have to be done in a
certain way is probably the most important use-case for these terms.

Ned


Re: I'm struggling with 2219 language again

2013-01-07 Thread Dean Willis

Well, I've learned some things here, and shall attempt to summarize:

1) First. the 1 key is really close to the 2 key, and my spell-checker 
doesn't care. Apparently,  I'm not alone in this problem.

2) We're all over the map in our use of 2119 language, and it is creating many 
headaches beyond my own.

3) The majority of respondents feel that 2119 language should be used as stated 
in 2119 -- sparingly, and feel that MUST is not a substitute for "does". But 
some people feel we need a more formal specification language that goes beyond 
key point compliance or requirements definition, and some are using 2119 
words in that role and like it.

I'm torn as to what to do with the draft in question. I picked up an editorial 
role after the authors fatigued in response to some 100+ AD comments (with 
several DISCUSSes) and a gen-art review that proposed adding several hundred 
2119 invocations (and that was backed up with a DISCUSS demanding that the 
gen-art comments be dealt with). My co-editor, who is doing most of the 
key-stroking, favors lots of 2119 language. And I think it turns the draft into 
unreadable felrgercarb.

But there's nothing hard we can point to and say "This is the guideline", 
because usage has softened the guidelines in 2119 itself. It's rather like 
those rules in one's Home Owner's Association handbooks that can no longer be 
enforced because widespread violations have already been approved.

There appears to be interest in clarification, but nobody really wants to 
revise the immortal words of RFC 2119, although there is a proposal to add a 
few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
myself; perhaps we can make 2119 a Turing-complete language.)

--
Dean
 






Re: I'm struggling with 2219 language again

2013-01-07 Thread Riccardo Bernardini

 There appears to be interest in clarification, but nobody really wants to 
 revise the immortal words of RFC 2119, although there is a proposal to add a 
 few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
 myself; perhaps we can make 2119 a Turing-complete language.)


Hmm... GOTO is bad style... Why not the COME FROM from
Intercal? http://en.wikipedia.org/wiki/INTERCAL :-) (sorry, could not
resist...)


Re: I'm struggling with 2219 language again

2013-01-07 Thread Scott Brim
On 01/07/13 15:40, Riccardo Bernardini allegedly wrote:

 There appears to be interest in clarification, but nobody really wants to 
 revise the immortal words of RFC 2119, although there is a proposal to add a 
 few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
 myself; perhaps we can make 2119 a Turing-complete language.)

 
 Hmm... GOTO is bad style... Why not the COME FROM from
 Intercal? http://en.wikipedia.org/wiki/INTERCAL :-) (sorry, could not
 resist...)
 

unwind-protect shall rule them all

but I digress



Re: I'm struggling with 2219 language again

2013-01-07 Thread Marc Petit-Huguenin

On 01/07/2013 12:19 PM, Dean Willis wrote:
 
 Well, I've learned some things here, and shall attempt to summarize:
 
 1) First. the 1 key is really close to the 2 key, and my spell-checker 
 doesn't care. Apparently,  I'm not alone in this problem.
 
 2) We're all over the map in our use of 2119 language, and it is creating 
 many headaches beyond my own.
 
 3) The majority of respondents feel that 2119 language should be used as 
 stated in 2119 -- sparingly, and free that MUST is not a substitute for 
 does. But some people feel we need a more formal specification language 
 that goes beyond key point compliance or requirements definition, and 
 some are using 2119 words in that role and like it.
 
 I'm torn as to what to do with the draft in question. I picked up an 
 editorial role after the authors fatigued in response to some 100+ AD 
 comments (with several DISCUSSes) and a gen-art review that proposed
 adding several hundred 2119 invocations (and that was backed up with a
 DISCUSS demanding that the gen-art comments be dealt with). My co-editor,
 who is doing most of the key-stroking, favors lots of 2119 language. And I
 think it turns the draft into unreadable felrgercarb.

My proposal for the aforementioned draft is to put on hold for now the edits
related to this discussion, then let the WG, IETF (during last call) and IESG
decide what to do.  The edits are done and ready to be merged, so the painful
part is already done: https://github.com/petithug/p2psip-base-master/branches.

- -- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Thomas Narten
Stewart Bryant stbry...@cisco.com writes:

 Indeed an interesting additional question.

 My view is that you MUST NOT use RFC2119 language, unless you MUST use 
 it, for exactly that reason. What is important is "on the wire" (a term 
 that from experience is very difficult to define) inter-operation, and 
 implementers need to be free to achieve that through any means that suits 
 them.

The latter goes without saying. It's one of the obvious assumptions
that underlies all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.

That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
   
IETF specs have always been about describing external behavior
(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas



Re: I'm struggling with 2219 language again

2013-01-07 Thread C. M. Heard
On Mon, 7 Jan 2013, ned+i...@mauve.mrochek.com wrote:
 I'll also note that RFC 1123 most certainly does use the term "compliant" in
 regards to capitalized terms it defines, and if nitpicking on this point
 becomes an issue I have zero problem replacing references to RFC 2119 with
 references to RFC 1123 in the future.

+1.  There is similar language in RFC 1122 and RFC 1812.  From the standpoint 
of making the requirements clear for an implementor, I think that these three 
specifications were among the best the IETF ever produced.

//cmh


Compliance to a protocol description? (wasRE: I'm struggling with 2219 language again)

2013-01-07 Thread Robin Uyeshiro
Maybe part of the job of a working group should be to produce and/or
approve a reference implementation and/or a test for interoperability?  I
always thought a spec should include an acceptance test.  Contracts often
do.

If a company submits code that becomes reference code for interoperability
tests, that code is automatically interoperable and certified.  That might
mean more companies would spend money to produce working code.  It might
mean that more working code gets submitted earlier, as the earliest approved
code would tend to become the reference.  By code, I don't mean source,
necessarily.

Then there would be a more objective test for compliance and less dependence
on capitalization and the description.



  Meh. I know the IETF has a thing about these terms, and insofar as they can
  lead to the use of and/or overreliance on compliance testing rather than
  interoperability testing, I agree with that sentiment.

  OTOH, when it comes to actually, you know, writing code, this entire attitude
  is IMNSHO more than a little precious. Maybe I've missed them, but in my
  experience our avoidance of these terms has not resulted in the magical
  creation of a widely available perfect reference implementation that allows me
  to check interoperability. In fact in a lot of cases when I write code I have
  absolutely nothing to test against - and this is often true even when I'm
  implementing a standard that's been around for many years.

Ned



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
All standards groups that I am aware of have had the same view.  This 
is not uncommon.


Although I would point out that neither the TCP specification nor most 
protocol specifications of this type follow this rule.  State 
transitions are not visible on the wire.  The rules for sliding 
window are not described entirely in terms of the behavior seen on 
the line, etc.


I have seen specifications that attempted this and the 
implementations built from them were very different and did not come 
close to interoperating or in some cases of even doing the same thing.


In fact, I remember that we thought the new Telnet spec (1973) was a 
paragon of clarity until a new site joined the Net that had not been 
part of the community and came up with an implementation that bore no 
relation to what anyone else had done.


This problem is a lot more subtle than you imagine.

Take care,
John Day

At 4:46 PM -0500 1/7/13, Thomas Narten wrote:

Stewart Bryant stbry...@cisco.com writes:


 Indeed an interesting additional question.



 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is "on the wire" (a term
 that from experience is very difficult to define) inter-operation, and
 implementers need to be free to achieve that through any means that suits
 them.


The latter goes without saying. It's one of the obvious assumptions
that underlies all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.


That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
  
IETF specs have always been about describing external behavior

(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas




Re: I'm struggling with 2219 language again

2013-01-07 Thread John Levine
 But some people feel we need a more formal specification language
 that goes beyond key point compliance or requirements definition,
 and some are using 2119 words in that role and like it.

Having read specs like the Algol 68 report and ANSI X3.53-1976, the
PL/I standard that's largely written in VDL, I have an extremely low
opinion of specs that attempt to be very formal.  

The problem is not unlike the one with the fad for proofs of program
correctness back in the 1970s and 1980s.  Your formal thing ends up
being in effect a large chunk of software, which will have just as
many bugs as any other large chunk of software.  The PL/I standard is
famous for that; to implement it you both need to be able to decode
the VDL and to know PL/I well enough to recognize the mistakes.

What we really need to strive for is clear writing, which is not the
same thing as formal writing.  When you're writing clearly, the places
where you'd need 2119 stuff would be where you want to emphasize that
something that might seem optional or not a big deal is in fact
important and mandatory or important and forbidden.

R's,
John


Re: I'm struggling with 2219 language again

2013-01-07 Thread John Day
I have spent more than a little time on this problem and have 
probably looked at more approaches to specification than most, 
probably well over 100.  I would have to agree.  Most of the very 
formal methods such as VDL or those based on writing predicates in 
the equivalent of first-order logic end up with very complex 
predicates.  Which of course means there is a higher probability of 
errors in the predicates than in the code.  (Something I pointed out 
in a course review of the first PhD thesis that attempted it, King's 
Program Verifier, much to the chagrin of the professor.)


Of course protocols are a much simpler problem than the specification 
of a general program (finite state machine vs Turing machine), but 
even so, from what I have seen, the same problems exist.  As you say, 
the best answer is good clean code for the parts that are part of the 
protocol, and to only write requirements for the parts that aren't.  The hard 
part is drawing that boundary.  There is much that is specific to the 
implementation that we often don't recognize.  The more approaches 
one can get, the better.  Triangulation works well!  ;-)


At 2:29 AM + 1/8/13, John Levine wrote:

  But some people feel we need a more formal specification language

 that goes beyond key point compliance or requirements definition,
 and some are using 2119 words in that role and like it.


Having read specs like the Algol 68 report and ANSI X3.53-1976, the
PL/I standard that's largely written in VDL, I have an extremely low
opinion of specs that attempt to be very formal. 


The problem is not unlike the one with the fad for proofs of program
correctness back in the 1970s and 1980s.  Your formal thing ends up
being in effect a large chunk of software, which will have just as
many bugs as any other large chunk of software.  The PL/I standard is
famous for that; to implement it you both need to be able to decode
the VDL and to know PL/I well enough to recognize the mistakes.

What we really need to strive for is clear writing, which is not the
same thing as formal writing.  When you're writing clearly, the places
where you'd need 2119 stuff would be where you want to emphasize that
something that might seem optional or not a big deal is in fact
important and mandatory or important and forbidden.

R's,
John




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-06 Thread Abdussalam Baryun
Hi Marc Petit-Huguenin ,

I read the responses so far, and what can be said today is that there are 2
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

I like the approach, and agree with you that we need a solution in the
IETF, one which is still not solved, or is ignored by participants for some
reasons. However, I agree with a survey or an experiment, whatever we
call it, that makes the IETF reflect on RFC2119 performance for the
end-users of products implementing RFC protocols. I think many
old participants already have good experience to inform us of some
reality of IETF standards' end-user production.

AB


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-06 Thread John Day
Strictly speaking, the language of 2119 should be followed wherever 
necessary in order for the text to be normative and make it mandatory 
that a conforming implementation meets some requirement.  Otherwise, 
someone could build an implementation and claim it was correct and 
possibly cause legal problems. However, in the IETF there is also a 
requirement that there be two independent but communicating 
implementations for an RFC to be standards-track. Correct?


For all practical purposes, this requirement makes being able to 
communicate with one of the existing implementations the formal and 
normative definition of the RFC.  Any debate over the content of the 
RFC text is resolved by what the implementations do.  It would seem 
to be at the discretion of the authors of the implementations to 
determine whether or not any problems that are raised are bugs or not.


Then it would seem that regardless of whether 2119 is followed, the 
RFCs are merely informative guides.


So while the comments are valid that RFC 2119 should be followed, 
they are also irrelevant.


Given that any natural language description is going to be ambiguous, 
this is probably for the best.


Take care,
John Day

At 9:41 AM +0100 1/6/13, Abdussalam Baryun wrote:

Hi Marc Petit-Huguenin ,


I read the responses so far, and what can be said today is that there are 2
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

I like the approach, and agree with you that we need a solution in the
IETF, one which is still not solved, or is ignored by participants for some
reasons. However, I agree with a survey or an experiment, whatever we
call it, that makes the IETF reflect on RFC2119 performance for the
end-users of products implementing RFC protocols. I think many
old participants already have good experience to inform us of some
reality of IETF standards' end-user production.

AB




A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-05 Thread Marc Petit-Huguenin

I read the responses so far, and what can be said today is that there are 2
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools to reach this goal, but having two ways to use it does not
help this goal.

One way to find out would be to measure which philosophy results in the best
implementations.  Let's say that we can associate with each Standards Track RFC
one of these two philosophies.  If we had statistics on implementations then it
would be a simple matter of counting which one produces the fewest
interoperability problems, security issues and congestion problems (are there
other criteria?).  But as far as I know, there is no such data available -
maybe we should start collecting these, but that does not help for our current
problem.

Another way to look at it would be to run the following experiment:

1. Someone designs a new protocol, something simple but not obvious, writes it
in a formal language, and keeps it secret.

2. The same protocol is rewritten in RFC language but in two different
variants according to the two philosophies.  These also are kept secret.

3. The two variants are distributed randomly to a set of volunteer
implementers, who all implement the spec they received the best they can and
submit the result back, keeping their implementation secret.

4.  A test harness is written from the formal description, and all
implementations are run against each other, collecting stats related to the
criteria listed above (some criteria may be tricky to automatically assess,
we'll see).

5. Results are published, together with the protocol in formal form, the
specs, the results and the recommendation for one or the other philosophy.


That could be an interesting research project, and could even find some
funding from interested parties.
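
To make step 4 a bit more concrete, here is a minimal sketch in Python of
the kind of harness I have in mind. Everything in it is invented for
illustration -- the session() hook each implementation is assumed to
expose, the way a failure is detected, the philosophy labels -- and the
real harness would of course be generated from the formal description:

    import itertools
    from collections import Counter

    def run_pair(a, b):
        """Drive one session from implementation a toward b; any error
        in the exchange counts as an interoperability failure."""
        try:
            a["session"](b)   # assumed hook speaking the secret protocol
            return True
        except Exception:
            return False

    def score(implementations):
        """Tally failures per spec philosophy over all ordered pairs."""
        failures = Counter()
        for a, b in itertools.permutations(implementations, 2):
            if not run_pair(a, b):
                failures[a["philosophy"]] += 1
                failures[b["philosophy"]] += 1
        return failures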


On 01/03/2013 09:15 PM, Dean Willis wrote:
 
 I've always held to the idea that RFC 2119 language is for defining levels
 of compliance to requirements, and is best used very sparingly (as
 recommended in RFC 2119 itself). To me, RFC 2119 language doesn't make
 behavior normative -- rather, it describes the implications of doing
 something different than the defined behavior, from "will break the
 protocol if you change it" to "we have reason to think that there might be
 a reason we don't want to describe here that might influence you not to do
 this" to "here are some reasons that would cause you to do something
 different" and on to "doing something different might offend the
 sensibilities of the protocol author, but probably won't hurt anything
 else."
 
 But I'm ghost-editing a document right now whose gen-art review suggested 
 replacing the vast majority of "is", "does", and "are" prose with MUST. The 
 comments seem to indicate that protocol-defining text not using RFC 2119 
 language (specifically MUST) is not normative.
 
 This makes me cringe. But my co-editor likes it a lot. And I see smart
 people like Ole also echoing the though that RFC 2119 language is what
 makes text normative.
 
 For example, the protocol under discussion uses TLS or DTLS for a plethora
 of security reasons. So, every time the draft discusses sending a response
 to a request, we would say "the node MUST send a response, and this
 response MUST be constructed by (insert some concatenation procedure here)
 and MUST be transmitted using TLS or DTLS."
 
 Or, a more specific example:
 
 For the text:
 
 "In order to originate a message to a given Node-ID or Resource-ID, a node 
 constructs an appropriate destination list."
 
 
 The Gen-ART comment here is: First sentence: "a node constructs" should be 
 "a node MUST construct".
 
 
 We'll literally end up with hundreds of RFC 2119 invocations (mostly MUST)
 in a protocol specification.
 
 Is this a good or bad thing? My co-editor and I disagree -- he likes 
 formalization of the description language, and I like the English prose.
 But it raises process questions for the IETF as a whole:
 
 Are we deliberately evolving our language to use RFC 2119 terms as the 
 principal verbs of a formal specification language?
 
 Either way, I'd like to see some consensus. Because my head is throbbing
 and I want to know if it MUST hurt, SHOULD hurt, or just hurts. But I
 MUST proceed in accordance with consensus, because to do otherwise would
 undermine the clarity of our entire specification family.
 

- -- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug

Re: I'm struggling with 2219 language again

2013-01-04 Thread Brian E Carpenter
On 04/01/2013 05:15, Dean Willis wrote:
...
 Either way, I'd like to see some consensus. Because my head is throbbing and 
 I want to know if it MUST hurt, SHOULD hurt, or just hurts. But I MUST 
 proceed in accordance with consensus, because to do otherwise would undermine 
 the clarity of our entire specification family.

This Gen-ART reviewer believes that words like "must" have well-defined meanings
in the English language, so shouting is not needed at every use. There are
standards track documents that don't use RFC 2119 at all, and I am not only
referring to RFC 791.

I think the upper case keywords should be used only when necessary to clarify
points of potential non-interoperability or insecurity. I'm quite sure that
I've broken that recommendation quite often, and it will always remain
a judgment call. However, inserting a MUST in every sentence that describes
behaviour is surely going too far. I guess the test is whether a reasonably
careful reader might interpret a sentence incorrectly while writing code;
and if so, would a normative keyword help?

 Brian


Re: I'm struggling with 2219 language again

2013-01-04 Thread Dave Cridland
On Fri, Jan 4, 2013 at 8:03 AM, Brian E Carpenter
brian.e.carpen...@gmail.com wrote:

 This Gen-ART reviewer believes that words like must have well defined 
 meanings
 in the English language, so shouting is not needed at every use. There are
 standards track documents that don't use RFC 2119 at all, and I am not only
 referring to RFC 791.


IMAP's CONDSTORE is a relatively recent example (authored by Pete
Resnick). It got all the way to WGLC before anyone noticed, as I
recall. (Possibly even through, I can't recall).


 I think the upper case keywords should be used only when necessary to clarify
 points of potential non-interoperability or insecurity. I'm quite sure that
 I've broken that recommendation quite often, and it will always remain
 a judgment call. However, inserting a MUST in every sentence that describes
 behaviour is surely going too far. I guess the test is whether a reasonably
 careful reader might interpret a sentence incorrectly while writing code;
 and if so, would a normative keyword help?


RFC 2119 language is not a stick to beat an implementor with, but a
signpost to highlight or clarify important cases. If the signpost is
used too often, the temptation is to mentally downgrade the language
when reading. It's often not needed, and only serves to make the
language look more "RFC", or just saves an author from considering how
to best describe the protocol. Neither is good.

But in any case, a Gen-ART reviewer's comments hold no more weight than
any others, and if your personal style, like mine, leans toward
minimizing the use of RFC 2119 language and putting the effort into
general clarity and readability, then I think you should ignore the
proposed solution and treat the comments as a suggestion that the
areas might benefit from more clarity of intent.

Dave.


Re: I'm struggling with 2219 language again

2013-01-04 Thread Lou Berger


On 1/4/2013 12:15 AM, Dean Willis wrote:
...
 Are we deliberately evolving our language to use RFC 2119 terms as
 the principal verbs of a formal specification language?
...

My view on this has evolved over time.  I used to follow the practice of
using 2119 language only for emphasis.  Over time, primarily motivated
by reviewers' comments and reader questions, I've migrated to the
position that 2119 language should be used whenever and wherever a point
of conformance is being made.  While this may be a bit of an extreme
position, it ensures that authors, reviewers, readers, implementors,
etc. are in sync as to what is expected from an interoperable
implementation that conforms to a standard.  I think the importance of
such unambiguity has increased over time as the number of implementors
and non-native English speakers in our community has increased.

I also think it's important to follow section 6 of 2119, i.e., if it's
not a point of interoperability or harmful behavior, there's no need to
use 2119 conformance language.

So, my view is now:

a) lower case usage of 2119 key words *in RFCs* means the normal English
meaning of such words, but does not place any requirement on
implementations, i.e., is purely informative text.

b) upper case usage of 2119 key words *in RFCs*, as stated in [RFC2119],
places requirements in the specification, i.e., is conformance
language which an implementation must follow to ensure
interoperability (or avoid harm).  (And does not = shouting as would be the
case in other contexts).

I take this view when writing and reviewing PS drafts...

Lou


Re: I'm struggling with 2219 language again

2013-01-04 Thread Bob Braden
I believe that Brian's interpretation is exactly right. At least, it 
conforms to the Original Intent of the
applicability terms MUST, MAY, and SHOULD as defined in RFC 1122. And I 
sympathize with
Dean Willis whose head hurts; as one-time RFC Editor I was often 
confronted with wildly
inconsistent use of the applicability words. It often seemed that 
authors sprinkled them in randomly, just enough seasoning to make a 
normative stew.

Bob Braden

On 1/4/2013 12:03 AM, Brian E Carpenter wrote:

On 04/01/2013 05:15, Dean Willis wrote:
...

Either way, I'd like to see some consensus. Because my head is throbbing and I 
want to know if it MUST hurt, SHOULD hurt, or just hurts. But I MUST proceed 
in accordance with consensus, because to do otherwise would undermine the 
clarity of our entire specification family.

This Gen-ART reviewer believes that words like must have well defined meanings
in the English language, so shouting is not needed at every use. There are
standards track documents that don't use RFC 2119 at all, and I am not only
referring to RFC 791.

I think the upper case keywords should be used only when necessary to clarify
points of potential non-interoperability or insecurity. I'm quite sure that
I've broken that recommendation quite often, and it will always remain
a judgment call. However, inserting a MUST in every sentence that describes
behaviour is surely going too far. I guess the test is whether a reasonably
careful reader might interpret a sentence incorrectly while writing code;
and if so, would a normative keyword help?

  Brian




Re: I'm struggling with 2219 language again

2013-01-04 Thread Scott Brim
It's a communication problem.  If you want your audience to understand
exactly what you're saying, and implement along very specific lines, you
need to tell them in a way they understand.  Personally I prefer a
quieter approach, but I've been told that these days one MUST use MUST
or implementors just won't get it.  Huh, that's a requirement?  But you
didn't say MUST.  I suggest turning this thread into a survey, and
finding out how people who actually write code look for in order to know
what's required.



Re: I'm struggling with 2219 language again

2013-01-04 Thread Peter Saint-Andre

Wonderful perennial topic. :)

As I always say when this comes up, when writing drafts I've settled
on using the 2119 keywords only in their uppercase form, and otherwise
using "need to", "ought to", "might" (etc.) to avoid all possible
confusion. Sure, it's a bit stilted, but we're not writing gorgeous
prose here, we're writing technical specifications that need to be
completely clear.

Peter

- --
Peter Saint-Andre
https://stpeter.im/



Re: I'm struggling with 2219 language again

2013-01-04 Thread Hector Santos

+1.

I think it is important that we have communications tools for 
documenting strong minimum protocol requirements and we only have 
RFC2119 to make that possible.


Yet, we need to be careful where the lack of RFC2119 upper case 
wordings can be used to leverage an argument for relaxation of 
existing protocols where perhaps only lower case semantics were done.


IMO, where there are multiple protocol state choices or paths and 
where common sense engineering and security functionality is clear, 
upper vs lower should not matter.


I would like to use an example of protocol logic I am currently concerned 
with that could be altered due to 2119-related debates:


  For protocol state condition A, you may do action B or C.
  If action B is taken, you MUST do D.

So action B has 2119 language to do D.  Part of the 2119 debate is 
that doing action B or C are both optional.


C is not defined, although it could be said it was implied via other 
indirect protocol features:


  You SHOULD do E.

and in a protocol change proposal, it is now:

  You SHOULD do E1 instead of E.

Well, to do E or E1, you have to do C, not B.

Now the history of this is that original drafts/specs (ones some folks 
want us to ignore) did have 2119 language on whether you MAY do B or 
C, and in fact in an extended augmented RFC protocol, it only allowed 
for a MUST action B for protocol condition A.  C was not an option (or 
at least it wasn't stated) in the extended RFC technology.


The problem was the lack of an upper case MAY has provided the notion 
that none of this is normative thus making the action C more possible 
and action B less attractive.


By making C more possible, this makes E more plausible and now that E1 
is a new recommendation and preferred feature than E,  the idea of 
doing action B is now watered down.


The issue is sensitive and honestly, it got to a point where one says 
"Who cares!? If they want to water down B, so be it."  You don't 
need to support the protocol changes nor the newer push to the E1 
recommendation.


The problem for me?

Why do the security aspects of this not make it obvious what 
needs to be done?  We got into anal debates over 2119 language, and I even 
recall a few years ago where another debate over the definition of 
SHOULD got a number of people labeled as 2119 illiterates by an AD.


In the example I cited, politics comes into play because you have two 
industry mindsets under protocol condition A:


  1) Those that believe in action B as the more secured action, and
  2) Those that DO NOT believe in action B and prefer C.

This is where a person like myself is left with the idea of waiting 
for WGLC to appeal changes to the protocol that will water down 
action B.


This is all because of the lack of 2119 language that many believed 
was naturally in place with just using lower case wordings.


No upper case, therefore action B is weak, and that was unfair to 
current implementations that only do action B, as it is the most secure 
(and less costly) implementation action to protect users.   By 
allowing arguments that the protocol lacked 2119 language, this 
weakens the protocol and makes it more complex for 
implementations to consider action C, which can be more harmful if 
certain forms of C making it functionally equivalent (security wise) 
to action B are not done (this is the added complexity).


I'm sure this is a dime a dozen story, but to me it is common to see 
WG chairs, ADs, and other key cogs throw the proverbial 2119 book at 
developers to help push specs.   IMO, we need it, but we also need some 
common sense engineering and consideration to be done when we review 
current docs.  Ignoring existing implementations SHOULD not be one of 
them.


--
HLS


Lou Berger wrote:


On 1/4/2013 12:15 AM, Dean Willis wrote:
...

Are we deliberately evolving our language to use RFC 2119 terms as
the principal verbs of a formal specification language?

...

My view on this has evolved over time.  I used to follow the practice of
using 2119 language only for emphasis.  Over time, primarily motivated
by reviewers' comments and reader questions, I've migrated to the
position that 2119 language should be used whenever and wherever a point
of conformance is being made.  While this may be a bit of an extreme
position, it ensures that authors, reviewers, readers, implementors,
etc. are in sync as to what is expected from an interoperable
implementation that conforms to a standard.  I think the importance of
such unambiguity has increased over time as the number of implementors
and non-native English speakers in our community has increased.

I also think it's important to follow section 6 of 2119, i.e., if it's
not a point of interoperability or harmful behavior, there's no need to
use 2119 conformance language.

So, my view is now:

a) lower case usage of 2119 key words *in RFCs* means the normal English
meaning of such words, but does not place any requirement on

Re: I'm struggling with 2219 language again

2013-01-04 Thread Richard Barnes
Anecdotal data point number N+1...

As an occasional implementor of IETF specs, I have to say it's much easier to 
check my conformance if I can just grep for MUST and SHOULD.  It's also 
easy for developers to get in the bad habit of ONLY doing those things that are 
clearly marked in that way.  So ISTM that if you're not tagging things you want 
done with RFC 2119 language, then you're risking people not implementing them.
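
(For what it's worth, that grep habit is easy to mechanize. A rough sketch
in Python, assuming nothing more than a plain-text spec; the file name and
the crude sentence splitting are illustrative only:

    import re

    # RFC 2119 keywords, longer alternatives first so "MUST NOT"
    # matches before "MUST".
    KEYWORDS = (r"MUST NOT|MUST|REQUIRED|SHALL NOT|SHALL|SHOULD NOT|"
                r"SHOULD|NOT RECOMMENDED|RECOMMENDED|MAY|OPTIONAL")

    def checklist(path):
        """Yield every sentence of the spec containing a 2119 keyword."""
        text = open(path).read()
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if re.search(r"\b(?:%s)\b" % KEYWORDS, sentence):
                yield " ".join(sentence.split())

    for item in checklist("draft.txt"):   # hypothetical file name
        print("[ ]", item)

Of course, as noted below, such a checklist only covers what the authors
chose to capitalize.)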



On Jan 4, 2013, at 1:15 PM, Peter Saint-Andre stpe...@stpeter.im wrote:

 
 Wonderful perennial topic. :)
 
 As I always say when this comes up, when writing drafts I've settled
 on using the 2119 keywords only in their uppercase form, and otherwise
 using need to, ought to, might (etc.) to avoid all possible
 confusion. Sure, it's a bit stilted, but we're not writing gorgeous
 prose here, we're writing technical specifications that need to be
 completely clear.
 
 Peter
 
 - --
 Peter Saint-Andre
 https://stpeter.im/
 



RE: I'm struggling with 2219 language again

2013-01-04 Thread Adrian Farrel
Lou's view matches how I write and review documents.
I would add that there is sometimes value in using 2119-style language in
requirements documents ("The protocol solution MUST enable transmission of
data...") although, in my opinion, this requires a tweak to the normal 2119
boilerplate.

Adrian

 -Original Message-
 From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of Lou
 Berger
 Sent: 04 January 2013 13:23
 To: Dean Willis
 Cc: ietf@ietf.org
 Subject: Re: I'm struggling with 2219 language again
 
 
 
 On 1/4/2013 12:15 AM, Dean Willis wrote:
 ...
  Are we deliberately evolving our language to use RFC 2119 terms as
  the principal verbs of a formal specification language?
 ...
 
 My view on this has evolved over time.  I used to follow the practice of
  using 2119 language only for emphasis.  Over time, primarily motivated
  by reviewers' comments and reader questions, I've migrated to the
 position that 2119 language should be used whenever and wherever a point
 of conformance is being made.  While this may be a bit of an extreme
 position, it ensures that authors, reviewers, readers, implementors,
 etc. are in sync as to what is expected from an interoperable
 implementation that conforms to a standard.  I think the importance of
 such unambiguity has increased over time as the number of implementors
  and non-native English speakers in our community has increased.
 
 I also think it's important to follow section 6 of 2119, i.e., if it's
 not a point of interoperability or harmful behavior, there's no need to
 use 2119 conformance language.
 
 So, my view is now:
 
 a) lower case usage of 2119 key words *in RFCs* means the normal English
 meaning of such words, but does not place any requirement on
 implementations, i.e., is purely informative text.
 
 b) upper case usage of 2119 key words *in RFCs*, as stated in [RFC2119],
 places requirements in the specification, i.e., is conformance
  language which an implementation must follow to ensure
  interoperability (or avoid harm).  (And does not = shouting as would be the
 case in other contexts).
 
 I take this view when writing and reviewing PS drafts...
 
 Lou



Re: I'm struggling with 2219 language again

2013-01-04 Thread Ben Campbell
I generally take (what I infer to be) Richard's view on the matter.  If not 
doing something will break interoperability or security, then make it 
normative. (I realize that's a gross oversimplification).

But that still doesn't mean you have to have a MUST for every step an 
implementation has to take. To take Dean's original example, I think it makes 
sense to say the implementation ... MUST send a response according to the 
following procedures... then describe the procedure without peppering it with 
2119 language,  except for special cases when needed for emphasis or clarity. 
This seems to cover the risk of people not implementing stuff, but still avoids 
an explosion of MUSTs (and the resulting requirements matrix).

Sticking with Dean's example, the use of TLS might qualify as one of the 
special cases needing additional normative emphasis, but you can always say 
something like "... MUST send all messages over TLS" once, rather than restate 
it for every message.


On Jan 4, 2013, at 1:04 PM, Richard Barnes rbar...@bbn.com wrote:

 Anecdotal data point number N+1...
 
 As an occasional implementor of IETF specs, I have to say it's much easier to 
 check my conformance if I can just grep for MUST and SHOULD.  It's also 
 easy for developers to get in the bad habit of ONLY doing those things that 
 are clearly marked in that way.  So ISTM that if you're not tagging things 
 you want done with RFC 2119 language, then you're risking people not 
 implementing them.
 
 
 
 On Jan 4, 2013, at 1:15 PM, Peter Saint-Andre stpe...@stpeter.im wrote:
 
 
 Wonderful perennial topic. :)
 
 As I always say when this comes up, when writing drafts I've settled
 on using the 2119 keywords only in their uppercase form, and otherwise
 using need to, ought to, might (etc.) to avoid all possible
 confusion. Sure, it's a bit stilted, but we're not writing gorgeous
 prose here, we're writing technical specifications that need to be
 completely clear.
 
 Peter
 
 - --
 Peter Saint-Andre
 https://stpeter.im/
 
 



Re: I'm struggling with 2219 language again

2013-01-04 Thread Thomas Narten
+1 to Brian and others saying upper case should be used sparingly, and
only where it really matters. If even then.

The notion (that some have) that MUST means you have to do something
to be compliant and that a must (lower case) is optional is just
nuts.

If the ARP spec were to say, "upon receipt of an ARP request, the
recipient sends back an ARP response", does the lack of a MUST there
mean the response is optional? Surely not. And if we make it only a
SHOULD (e.g., to allow rate limiting of responses - a very reasonable
thing to do), does lack of MUST now make the feature optional from a
compliance/interoperability perspective?

The idea that upper case language can be used to identify all the
required parts of a specification from a
compliance/conformance/interoperability perspective is just
wrong. This has never been the case (and would be exceedingly painful to
do), though (again) some people seem to think this would be useful and
thus like lots of upper case language.

Where you want to use MUST is where an implementation might be tempted
to take a short cut -- to the detriment of the Internet -- but could
do so without actually breaking interoperability. A good example is
with retransmissions and exponential backoff. You can implement those
incorrectly (or not at all), and still get interoperability. I.e.,
two machines can talk to each other. Maybe you don't get good
interoperability and maybe not great performance under some
conditions, but you can still build an interoperable implementation.
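
To make the example concrete, here is a minimal sketch of the required
behavior, with invented send/receive hooks and timer values (nothing here
comes from any particular RFC). An implementation that skips the backoff
still interoperates, which is exactly why the spec has to say MUST:

    import random

    def send_with_backoff(send, wait_for_reply, initial=1.0, cap=64.0,
                          max_attempts=6):
        """Retransmit with exponential backoff and a ceiling; send() and
        wait_for_reply(timeout) are assumed hooks, not a real API."""
        timeout = initial
        for _ in range(max_attempts):
            send()
            if wait_for_reply(timeout):
                return True
            # Double the wait (capped), with jitter so many hosts
            # retrying at once don't synchronize.
            timeout = min(cap, timeout * 2 * random.uniform(0.9, 1.1))
        return False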

IMO, too many specs seriously overuse/misuse 2119 language, to the
detriment of readability, common sense, and reserving the terms to
bring attention to those cases where it really is important to
highlight an important point that may not be obvious to a casual
reader/implementor.

Thomas




Re: I'm struggling with 2219 language again

2013-01-04 Thread Mark Nottingham
+1; this is what we're doing in HTTPbis.


On 05/01/2013, at 5:15 AM, Peter Saint-Andre stpe...@stpeter.im wrote:

 
 Wonderful perennial topic. :)
 
 As I always say when this comes up, when writing drafts I've settled
 on using the 2119 keywords only in their uppercase form, and otherwise
 using need to, ought to, might (etc.) to avoid all possible
 confusion. Sure, it's a bit stilted, but we're not writing gorgeous
 prose here, we're writing technical specifications that need to be
 completely clear.
 
 Peter
 
 - --
 Peter Saint-Andre
 https://stpeter.im/
 

--
Mark Nottingham   http://www.mnot.net/





Re: I'm struggling with 2219 language again

2013-01-04 Thread ned+ietf
 +1 to Brian and others saying upper case should be used sparingly, and
 only where it really matters. If even then.

That's the entire point: The terms provide additional information as to 
what the authors consider the important points of compliance to be.

 The notion (that some have) that MUST means you have to do something
 to be compliant and that a must (lower case) is optional is just
 nuts.

In some ways I find the use of SHOULD and SHOULD NOT to be more useful
than MUST and MUST NOT. MUST and MUST NOT are usually obvious. SHOULD and
SHOULD NOT are things on the boundary, and how boundary cases are handled
is often what separates a good implementation from a mediocre or even poor
one.

 If the ARP spec were to say, upon receipt of an ARP request, the
 recipient sends back an ARP response, does the lack of a MUST there
 mean the response is optional? Surely not. And if we make it only a
 SHOULD (e.g., to allow rate limiting of responses - a very reasonable
 thing to do), does lack of MUST now make the feature optional from a
 compliance/interoperability perspective?

 The idea that upper case language can be used to identify all the
 required parts of a specification from a
 compliance/conformance/interoperability perspective is just
 wrong. This has never been the case (and would be exceedingly painful to
 do), though (again) some people seem to think this would be useful and
 thus like lots of upper case language.

At most it provides the basis for a compliance checklist. But such checklists
never cover all the points involved in compliance. Heck, most specifications in
toto don't do that. Some amount of common sense is always required.

 Where you want to use MUST is where an implementation might be tempted
 to take a short cut -- to the detriment of the Internet -- but could
 do so without actually breaking interoperability. A good example is
 with retransmissions and exponential backoff. You can implement those
 incorrectly (or not at all), and still get interoperability. I.e.,
 two machines can talk to each other. Maybe you don't get good
 interoperability and maybe not great performance under some
 conditions, but you can still build an interoperable implementation.

 IMO, too many specs seriously overuse/misuse 2119 language, to the
 detriment of readability, common sense, and reserving the terms to
 bring attention to those cases where it really is important to
 highlight an important point that may not be obvious to a casual
 reader/implementor.

Sadly true.

Ned


Re: I'm struggling with 2219 language again

2013-01-04 Thread Hector Santos

Scott Brim wrote:

It's a communication problem.  If you want your audience to understand
exactly what you're saying, and implement along very specific lines, you
need to tell them in a way they understand.  


+1

Personally I prefer a quieter approach, but I've been told that 
these days one MUST use MUST or implementors just won't get it.  
"Huh, that's a requirement?  But you didn't say MUST."


I believe in the technical writing style of Being specific is Terrific!



I suggest
turning this thread into a survey, and
finding out what people who actually write code look for in order to know
what's required.


+1

We have implemented numerous protocols since the 80s. I have a 
specific method of approaching a new protocol implementation which 
allows for fastest implementation, testing proof of concept and above 
all minimum cost.  Why bother with the costly complexities of 
implementing SHOULDs and MAYs, if the minimum is not something you 
want in the end anyway?


A good data point is that for IP/legal reasons, we do not use other 
people's code if we can help it, and in the early days, open source was 
not as widespread or even acceptable at the corporate level. In other 
words, it was all done in-house, purchased, or nothing.  I also believe 
using other people's code has a high cost, since you don't have an 
in-house expert understanding the inner workings of the externally 
developed software.


o Step 1 for Protocol Implementation:

Look for all the MUST protocol features.  This includes the explicit 
ones, while staying watchful for semantics where something is obviously 
required or things will break; perhaps it fell through the cracks.


An important consideration for a MUST is that operators are not given 
the opportunity to disable these protocol-required features. So from a 
coding standpoint, this is one area where you don't have to worry about 
designing configuration tools or the UI, or about including operation 
guidelines and documentation for these inherent protocol-required 
features.


This is the minimum coding framework to allow for all interop testing 
with other software and systems.


The better RFC spec is the one that has documented a checklist, a 
minimum-requirements summary table, etc. A good example is RFC 1123 for 
the various Internet host protocols. I considered RFC 1123 the bible!


Technical writing tip: please stay away from verbosity, especially 
around subjective concepts, and please stop writing as if everyone is 
stupid.


I always viewed the IETF RFC format as a blend of two steps
of the software engineering (SE) process - functional and technical
specifications.  Functional specs tell us what we want and technical
specs tell us how we do it.  So unless a specific functional-requirements
RFC was written, maybe some verbosity is needed, but it should
be minimized.

Generally, depending on the protocol, we can release code using just 
the MUST requirements - the bottom-line framework for client/server 
communications.  Only when this is completely successful can your 
implementation consider moving on to extending the protocol 
implementation with additional SHOULD and MAY features and their 
optional complexities.


o Step 2

Look for the SHOULDs.  These are the candies of the protocol.  If a 
SHOULD is really simple to implement, it can be lumped in with step 1.


I know many believe a SHOULD is really a MUST with an alternative 
method, perhaps - a different version of MUST to be done nonetheless.


However, I believe these folks play down an important consideration 
for implementing SHOULD-based protocol features:


   Developers need to offer these as options to deployment operators.

In other words, if the operator cannot turn it off, then a SHOULD was 
incorrectly used for what is really a MUST - required, with no operator 
option to disable it.


o Step 3

Look for the MAYs.  Very similar to SHOULD: a good way to think of a 
SHOULD is as a default-enabled (ON out of the box) option, and a MAY as 
a default-disabled (OFF out of the box) option.


Summary:

  MUST   - required, no operator option to disable. Of course,
           it's possible to have a hidden, undocumented switch
           for questionable stuff.

  SHOULD - good idea, recommended. If implemented, enable it
           out of the box.

  MAY    - similar to SHOULD, but does not have to be enabled
           out of the box.

In both cases, SHOULD and MAY, the operator can turn these protocol 
features off/on. For a MUST, the operator cannot turn the MUST 
feature off. These SHOULD/MAY features are documented for operators 
and support.
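
A minimal sketch, in C, of how that summary might map onto an
implementation's configuration defaults. The struct and feature names
here are hypothetical, invented purely for illustration; the point is
only that SHOULD/MAY features get operator-visible switches with the
stated defaults, while MUST behavior gets no switch at all:

  #include <stdbool.h>

  /* Hypothetical feature switches following the summary above.
   * MUST behavior is simply hard-coded -- there is no
   * operator-visible switch for it. */
  struct proto_config {
      bool rate_limit_replies;   /* a SHOULD: operator may disable */
      bool experimental_ext;     /* a MAY:    operator may enable  */
  };

  void proto_config_defaults(struct proto_config *cfg)
  {
      cfg->rate_limit_replies = true;    /* SHOULD: ON out of the box  */
      cfg->experimental_ext   = false;   /* MAY:    OFF out of the box */
  }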


One last thing: I believe in a concept I call CoComp - Cooperative 
Competition - where all competing implementors, including the 
protocol technology leader, share a common framework for a minimum 
protocol generic to all parties and the Internet community. It is the 
least required to solve the problem or provide a communication avenue. 
All else - the SHOULDs, the MAYs - is added value for competing 
implementors.

I'm struggling with 2219 language again

2013-01-03 Thread Dean Willis

I've always held to the idea that RFC 2119 language is for defining levels of 
compliance to requirements, and is best used very sparingly (as recommended in 
RFC 2119 itself). To me, RFC 2119 language doesn't make behavior normative -- 
rather, it describes the implications of doing something different than the 
defined behavior, from "will break the protocol if you change it" to "we have 
reason to think that there might be a reason we don't want to describe here 
that might influence you not to do this" to "here are some reasons that would 
cause you to do something different" and on to "doing something different might 
offend the sensibilities of the protocol author, but probably won't hurt 
anything else."

But I'm ghost-editing a document right now whose Gen-ART review suggested 
replacing the vast majority of "is", "does", and "are" prose with MUST. The 
comments seem to indicate that protocol-defining text not using RFC 2119 
language (specifically MUST) is not normative.

This makes me cringe. But my co-editor likes it a lot. And I see smart people 
like Ole also echoing the thought that RFC 2119 language is what makes text 
normative.

For example, the protocol under discussion uses TLS or DTLS for a plethora of 
security reasons. So, every time the draft discusses sending a response to a 
request, we would say "the node MUST send a response, and this response MUST 
be constructed by (insert some concatenation procedure here) and MUST be 
transmitted using TLS or DTLS."

Or, a more specific example:

For the text:

In order to originate a message to a given Node-ID or
Resource-ID, a node constructs an appropriate destination list.


The Gen-ART comment here is:
- First sentence: "a node constructs" - "a node MUST construct"


We'll literally end up with hundreds of RFC 2119 invocations (mostly MUST) in a 
protocol specification.

Is this a good or bad thing? My co-editor and I disagree -- he likes 
formalization of the description language, and I like the English prose. But it 
raises process questions for the IETF as a whole: 

Are we deliberately evolving our language to use RFC 2119 terms as the 
principal verbs of a formal specification language?

Either way, I'd like to see some consensus. Because my head is throbbing and I 
want to know if it MUST hurt, SHOULD hurt, or just hurts. But I MUST proceed 
in accordance with consensus, because to do otherwise would undermine the 
clarity of our entire specification family.

--
Dean