Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where absence or overuse
of an RFC2119 key word caused an interoperability failure
or excessive development time?

- Stewart






Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
As you are guessing, that is unlikely. However, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches, which might be precluded by 
using a MUST where it wasn't needed.


At 10:53 AM +0000 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where absence or overuse
of an RFC2119 key word caused an interoperability failure
or excessive development time?

- Stewart




Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Stewart Bryant

Indeed an interesting additional question.

My view is that you MUST NOT use RFC2119 language, unless you MUST use 
it, for exactly that reason. What is important is on the wire (a term 
that from experience is very difficult to define) inter-operation, and 
implementers need to be free to achieve that through any means that suits 
them.


- Stewart

On 07/01/2013 12:22, John Day wrote:
As you are guessing, that is unlikely. However, the more pertinent 
question is whether it has prevented some innovative approach to 
implementation.  This would be the more interesting question.


We tend to think of these as state machines and describe them 
accordingly.  There are other approaches, which might be precluded by 
using a MUST where it wasn't needed.


At 10:53 AM +0000 1/7/13, Stewart Bryant wrote:

Speaking as both a reviewer and an author, I would like
to ground this thread in some form of reality.

Can anyone point to specific cases where absence or overuse
of an RFC2119 key word caused an interoperability failure
or excessive development time?

- Stewart









Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Brian E Carpenter
On 07/01/2013 12:42, Stewart Bryant wrote:
 Indeed an interesting additional question.
 
 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is on the wire (a term
 that from experience is very difficult to define) inter-operation, and
 implementers need to be free to achieve that through any means that suits
 them.

Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 was obsoleted by RFC 2581, which was obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
Control in the RFC Series) and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
Let me get this straight, Brian.  It would seem you are pointing out 
that the IETF does not have a clear idea of what it is doing?  ;-)  I 
could believe that.


No, your example is not an example of what I suggested at all.

Yours is an example of specifying the congestion control algorithm 
itself rather than the conditions that a congestion control 
algorithm must satisfy.


What I was suggesting (and it is a very easy trap to fall into) was 
defining a spec with one implementation environment in mind and not 
realizing you are constraining things unnecessarily. Consider the 
difference between defining TCP as a state machine with that sort of 
implementation in mind and building an implementation in LISP. (I 
know someone who did it.)  It would be very easy to make assumptions 
about how something was described that made a LISP implementation 
unduly messy, or missed an opportunity for a major simplification.


It is quite easy to do something mathematical in this area (not 
necessarily alluding to formal specification), but you do have to 
have a clear concept of the levels of abstraction.  Of course, once 
you do, you still have the question whether there is a higher 
probability of errors in the math or the program.


Yes, programming is just math of a different kind, which of course is 
the point.


Take care,
John

At 1:31 PM +0000 1/7/13, Brian E Carpenter wrote:

On 07/01/2013 12:42, Stewart Bryant wrote:

 Indeed an interesting additional question.

 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is on the wire (a term
 that from experience is very difficult to define) inter-operation, and
 implementers need to be free to achieve that though any means that suits
 them.


Agreed. Imagine the effect if the TCP standard had said that a particular
congestion control algorithm was mandatory. Oh, wait...

... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
congestion control algorithm is non-conformant. Oh, wait...

... RFC 2001 is a proposed standard defining congestion control algorithms,
but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

RFC 2001 was obsoleted by RFC 2581, which was obsoleted by RFC 5681. These both
use RFC 2119 keywords, but they still don't update RFC 1122.

This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
Control in the RFC Series) and of course it's still an open research topic.

Attempting to validate TCP implementations on the basis of conformance
with RFC 2119 keywords would be, well, missing the point.

I know this is an extreme case, but I believe it shows the futility of
trying to be either legalistic or mathematical in this area.

   Brian




Re: I'm struggling with 2219 language again

2013-01-07 Thread Pete Resnick
Dean, I am struggling constantly with 2119 as an AD, because if I take 
the letter (and the spirit) of 2119 at face value, a lot of people are 
doing this wrong. And 2119 is a BCP; it's one of our process documents. 
So I'd like this to be cleared up as much as you do. I think there is 
active harm in the misuse we are seeing.


To Ned's points:

On 1/4/13 7:05 PM, ned+i...@mauve.mrochek.com wrote:

+1 to Brian and others saying upper case should be used sparingly, and
only where it really matters. If even then.
 

That's the entire point: The terms provide additional information as to
what the authors consider the important points of compliance to be.
   


We will likely end up in violent agreement, but I think the above 
statement is incorrect. Nowhere in 2119 will you find the words 
"conform" or "conformance" or "comply" or "compliance", and I think 
there's a reason for that: We long ago found that we did not really care 
about conformance or compliance in the IETF. What we cared about was 
interoperability of independently developed implementations, because 
independently developing implementations that interoperate with other 
folks is what makes the Internet robust. Importantly, we specifically 
did not want to dictate how you write your code or tell you specific 
algorithms to follow; that makes for everyone implementing the same 
brittle code.


The useful function of 2119 is that it allows us to document the 
important *behavioral* requirements that I have to be aware of when I am 
implementing (e.g., even though it's not obvious, my implementation MUST 
send such-and-so or the other side is going to crash and burn; e.g., 
even though it's not obvious, the other side MAY send this-and-that, and 
therefore my implementation needs to be able to handle it). And those 
"even though it's not obvious" statements are important. It wastes my 
time as an implementer to try to figure out what interoperability 
requirement is meant by "You MUST implement a variable to keep track of 
such-and-so state" (and yes, we see these in specs lately), and it makes 
for everyone potentially implementing the same broken code.



The notion (that some have) that MUST means you have to do something
to be compliant and that a must (lower case) is optional is just
nuts.
 


You bet, Thomas!


In some ways I find the use of SHOULD and SHOULD NOT to be more useful
than MUST and MUST NOT. MUST and MUST NOT are usually obvious. SHOULD and
SHOULD NOT are things on the boundary, and how boundary cases are handled
is often what separates a good implementation from a mediocre or even poor
one.
   


Agreed. Indeed, if you have a MUST or MUST NOT, I'd almost always be 
inclined to have a "because" clause. If you can't give an explanation 
of why I need this warning, there's a good chance the MUST is 
inappropriate. (One that I've seen of late is "If the implementation 
wants to send, it MUST set the send field to 'true'. If it wants to 
receive, it MUST set the send field to 'false'." I have no idea what 
those MUSTs are telling me. Under what circumstances could I possibly 
want to send but set the send field to false?)



The idea that upper case language can be used to identify all the
required parts of a specification from a
compliance/conformance/interoperability perspective is just
wrong. This has never been the case (and would be exceedingly painful to
do), though (again) some people seem to think this would be useful and
thus like lots of upper case language.
 

At most it provides the basis for a compliance checklist. But such checklists
never cover all the points involved in compliance. Heck, most specifications in
toto don't do that. Some amount of common sense is always required.
   


And again, it's worse than incomplete. It also makes for brittle code. I 
don't want you checking to see if you coded things the same way that I 
did, which is what a compliance list gets you. I want you checking that 
your *behavior* from the net interoperates with me. Insofar as you want 
to call *that* compliance, well, OK, but I don't think that's what 
people mean.



Where you want to use MUST is where an implementation might be tempted
to take a short cut -- to the detriment of the Internet -- but could
do so without actually breaking interoperability.


Exactly!


IMO, too many specs seriously overuse/misuse 2119 language, to the
detriment of readability, common sense, and reserving the terms to
bring attention to those cases where it really is important to
highlight an important point that may not be obvious to a casual
reader/implementor.
 

Sadly true.
   


And to the detriment of good code.

pr

--
Pete Resnick  http://www.qualcomm.com/~presnick/
Qualcomm Technologies, Inc. - +1 (858) 651-4478



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Scudder
On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
 independent but communicating implementations for an RFC to advance on 
 the standards track.  Correct?

Alas, no. 

--John


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread ned+ietf
 On 07/01/2013 12:42, Stewart Bryant wrote:
  Indeed an interesting additional question.
 
  My view is that you MUST NOT use RFC2119 language, unless you MUST use
  it, for exactly that reason. What is important is on the wire (a term
  that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that though any means that suits
  them.

 Agreed. Imagine the effect if the TCP standard had said that a particular
 congestion control algorithm was mandatory. Oh, wait...

 ... RFC 1122 section 4.2.2.15 says that a TCP MUST implement reference [TCP:7]
 which is Van's SIGCOMM'88 paper. So apparently any TCP that uses a more recent
 congestion control algorithm is non-conformant. Oh, wait...

 ... RFC 2001 is a proposed standard defining congestion control algorithms,
 but it doesn't update RFC 1122, and it uses lower-case. Oh, wait...

 RFC 2001 was obsoleted by RFC 2581, which was obsoleted by RFC 5681. These both
 use RFC 2119 keywords, but they still don't update RFC 1122.

 This is such a rat's nest that it has a guidebook (RFC 5783, Congestion
 Control in the RFC Series) and of course it's still an open research topic.

 Attempting to validate TCP implementations on the basis of conformance
 with RFC 2119 keywords would be, well, missing the point.

 I know this is an extreme case, but I believe it shows the futility of
 trying to be either legalistic or mathematical in this area.

Exactly. Looking for cases where the use/non-use of capitalized terms caused an
interoperability failure is a bit silly, because the use/non-use of such terms
doesn't carry that sort of weight.

What does happen is that implementation and therefore interoperability quality
can suffer when standards emphasize the wrong points of compliance. Things
work, but not as well as they should or could.

A fairly common case of this in application protocols is an emphasis on
low-level limits and restrictions while ignoring higher-level requirements. For
example, our email standards talk a fair bit about so-called "minimum maximums"
that in practice are rarely an issue, all the while failing to specify a
mandatory minimum set of semantics all agents must support. This has led to a
lack of interoperable functionality in the long term.

Capitalized terms are both a blessing and a curse in this regard. They make it
easy to point out the "really important stuff". But in doing so, they also
make it easy to put the emphasis in the wrong places.

tl;dr: Capitalized terms are a tool, and like any tool they can be misused.

Ned


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day

Alas, indeed.  ;-)


At 3:50 PM +0000 1/7/13, John Scudder wrote:

On Jan 6, 2013, at 11:50 PM, John Day jeanj...@comcast.net wrote:

 However, in the IETF there is also a requirement that there be two 
independent but communicating implementations for an RFC to 
advance on the standards track. Correct?


Alas, no.

--John




Re: Hello ::Please I need to know LEACH protocol standard???

2013-01-07 Thread Dale R. Worley
 I am a Master's degree researcher working on the LEACH routing
 protocol for wireless sensor networks, and I need to know which
 standard LEACH, its family, or even Layer 3 belongs to.

A Google search suggests that LEACH has not been standardized.  LEACH
appears to have been invented by academics; several papers have been
published about it.

In regard to layer 3, see http://en.wikipedia.org/wiki/OSI_model

Dale


Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread Matthew Morley
On Sun, Jan 6, 2013 at 8:15 PM, Robert Sayre say...@gmail.com wrote:

 On Sun, Jan 6, 2013 at 4:01 PM, Robert Sayre say...@gmail.com wrote:
  On Sun, Jan 6, 2013 at 3:35 PM, Paul C. Bryan pbr...@anode.ca wrote:
 
 
  Common concurrent editing algorithms should, in my opinion, use
  techniques to ensure the state of the resource (relative to the edits)
  is known. In HTTP, we have ETag and If-Match/If-None-Match
  preconditions. In JSON Patch, we have (a rudimentary) test operation.
 ...
  links to make sure we're not talking past each other:

 Actually, let me restate my point in terms of RFC5789 (HTTP PATCH).
 That will make it easier to communicate.

 RFC 5789 Section 2.2 (Error Handling) defines error conditions which
 correspond directly to the point at hand: 'Conflicting state' and
 'Conflicting modification'. Section 5 of the JSON Patch document
 directly references RFC5789, Section 2.2.

 With that in mind, let's note that there are several normative
 requirements in the JSON Patch document directed at conflicting state.
  One such example is from Section 4.2, 'remove'. It reads: "The target
  location MUST exist for the operation to be successful."  If a server
 received an HTTP JSON Patch request attempting to delete a
 non-existent location, this text from RFC5789 would seem to apply:

 Conflicting state:  Can be specified with a 409 (Conflict) status
   code when the request cannot be applied given the state of the
   resource.  For example, if the client attempted to apply a
   structural modification and the structures assumed to exist did
   not exist ...
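
To make that concrete, here is a minimal Python sketch (an editor's
illustration only, not text from the draft or RFC 5789; the helper names
are hypothetical) of a server mapping a JSON Patch remove of a
non-existent location to 409:

def resolve_parent(doc, pointer):
    """Walk all but the last token of a JSON Pointer; return (parent, last_token)."""
    tokens = [t.replace("~1", "/").replace("~0", "~")
              for t in pointer.lstrip("/").split("/")]
    node = doc
    for t in tokens[:-1]:
        node = node[int(t)] if isinstance(node, list) else node[t]
    return node, tokens[-1]

def http_status_for_remove(doc, pointer):
    """Status a PATCH server might return for a JSON Patch 'remove' op."""
    try:
        parent, last = resolve_parent(doc, pointer)
        if isinstance(parent, list):
            parent.pop(int(last))
        else:
            del parent[last]
        return 200  # patch applied
    except (KeyError, IndexError, ValueError, TypeError):
        return 409  # Conflicting state: the target location does not exist

doc = {"a": {"b": []}}
print(http_status_for_remove(doc, "/a/b/0"))  # 409 -- no such element to delete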

 The text above wouldn't be necessary in JSON Patch or RFC5789 if
 RFC5789 required checking ETags and preconditions for all use cases
 (it doesn't). The larger point is that RFC5789, and patch formats in
 general, make all sorts of allowances for *non-conflicting* concurrent
 edits to a common ancestor. The problem with leaving this JSON Pointer
 array ambiguity in the draft is that patch messages which should
 trigger '409 Conflict' errors can be mistakenly and 'successfully' (in
 the HTTP sense) applied to a different structure than intended.

 In summary, the JSON Patch draft allows patch documents to be
 formulated that make it impossible to correctly implement RFC5789, a
 normative reference.

 Here are the questions the IESG focuses on during review:
 Reviews should focus on these questions: 'Is this document a
 reasonable basis on which to build the salient part of the Internet
 infrastructure? If not, what changes would make it so?'

 For JSON Patch, the answer to the first question is 'no', because of a
 deficiency in JSON Pointer. The change needed to make these documents
 acceptable as part of the Internet infrastructure is to make Arrays
 explicit in JSON Pointer syntax.

 - Rob


For me the deficiency is not in the pointer, but in the patch format being
generated.

One approach is to push that *one* test, structure conformity, into the
pointer syntax. Another is via the type operation.

If a vague patch is generated, vague results are to be expected.

Testing for *just* the structure does not really create a verbose patch
either, which is why this argument does not leave me much in favor of a
syntax specific to arrays.

For example, if you are replacing a key in an object, the json-pointer to
get there, *and* the value would be required to ensure it is not
vague. Setting /a/b/c to 6 when it was 4, and applying that patch without
such tests to a document where /a/b/c is 13, is also vague.
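
A minimal Python sketch of that point (assuming the draft's test and
replace operations; the tiny evaluator is illustrative, not the spec's
algorithm) shows how pairing a test with the replace makes the divergent
case fail cleanly instead of being applied vaguely:

def get(doc, pointer):
    """Resolve a JSON Pointer (escaping handled; error handling omitted)."""
    node = doc
    for t in pointer.lstrip("/").split("/"):
        t = t.replace("~1", "/").replace("~0", "~")
        node = node[int(t)] if isinstance(node, list) else node[t]
    return node

def apply_patch(doc, patch):
    """Apply 'test' and 'replace' operations; a failed test aborts the patch."""
    for op in patch:
        if op["op"] == "test":
            if get(doc, op["path"]) != op["value"]:
                raise ValueError("test failed at %s" % op["path"])
        elif op["op"] == "replace":
            *head, last = op["path"].lstrip("/").split("/")
            parent = get(doc, "/" + "/".join(head)) if head else doc
            parent[int(last) if isinstance(parent, list) else last] = op["value"]
    return doc

patch = [{"op": "test", "path": "/a/b/c", "value": 4},
         {"op": "replace", "path": "/a/b/c", "value": 6}]

apply_patch({"a": {"b": {"c": 4}}}, patch)       # applies: /a/b/c becomes 6
try:
    apply_patch({"a": {"b": {"c": 13}}}, patch)  # aborts: the document diverged
except ValueError as e:
    print(e)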

Without tests, the patch format is optimistic at best; there is no escaping
this fact. Changing the json-pointer syntax specification to address
vagueness in the json-patch specification seems wrong to me.

If json-pointer is not well suited, because of the desire for a descriptive
path which includes structure and value, perhaps a different specification
is needed: one that provides path, structure, and value confirmation in
the pointer string. Though at that point it is more of a query path, so
something like JsonPath (http://goessner.net/articles/JsonPath/)?

I prefer the test/type operations and the json-pointer specification, with
optimistic patches.

-- 
Matthew P. C. Morley


Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread Paul C. Bryan
 On Sat, 2013-01-05 at 16:19 -0800, Robert Sayre wrote:


[snip]


 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.


Common concurrent editing algorithms should, in my opinion, use
techniques to ensure the state of the resource (relative to the edits)
is known. In HTTP, we have ETag and If-Match/If-None-Match
preconditions. In JSON Patch, we have (a rudimentary) test operation.

[snip]


 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.


There is no such thing as a JSON Pointer document.


 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.


99% of the RESTful systems I'm familiar with are based on HTTP. Where
optimistic concurrency is acceptable, HTTP preconditions seem to
provide acceptable coverage. Where more granularity or more pessimistic
concurrency is required, implementors are free to use their own
mechanisms, including more expressive predicates (as has been proposed
here, with my endorsement) and/or resource locking. These are
intentionally out of scope for JSON Patch.

Later in this thread, you wrote:


 Ah. I meant that the WG seems to be favoring running code a little
 too heavily in the presence of a bug. It's an old argument, and it's
 boring: We can't change it now, there are already twelve users!


I don't agree that this is a bug; it lacks a feature that you and some
others have requested. Our reasoning for resisting such a change is
legitimate.

The reason I value implementations is because they endorse the
specification through tangible action. Some of their authors have
participated in this forum to help improve the specification and create
consensus around it. Unfortunately, you've raised objections quite late
in the process, and I'm personally not persuaded that the issues you've
raised warrant (likely significant) changes.

Paul


Re: I'm struggling with 2219 language again

2013-01-07 Thread ned+ietf

Dean, I am struggling constantly with 2119 as an AD, because if I take
the letter (and the spirit) of 2119 at face value, a lot of people are
doing this wrong. And 2119 is a BCP; it's one of our process documents.
So I'd like this to be cleared up as much as you do. I think there is
active harm in the misuse we are seeing.



To Ned's points:



On 1/4/13 7:05 PM, ned+i...@mauve.mrochek.com wrote:
 +1 to Brian and others saying upper case should be used sparingly, and
 only where it really matters. If even then.

 That's the entire point: The terms provide additional information as to
 what the authors consider the important points of compliance to be.




We will likely end up in violent agreement, but I think the above
statement is incorrect. Nowhere in 2119 will you find the words
"conform" or "conformance" or "comply" or "compliance", and I think
there's a reason for that: We long ago found that we did not really care
about conformance or compliance in the IETF. What we cared about was
interoperability of independently developed implementations, because
independently developing implementations that interoperate with other
folks is what makes the Internet robust. Importantly, we specifically
did not want to dictate how you write your code or tell you specific
algorithms to follow; that makes for everyone implementing the same
brittle code.


Meh. I know the IETF has a thing about these terms, and insofar as they  can
lead to the use of and/or overreliance on compliance testing rather than
interoperability testing, I agree with that sentiment.

OTOH, when it comes to actually, you know, writing code, this entire attitude
is IMNSHO more than a little precious. Maybe I've missed them, but in my
experience our avoidance of these terms has not resulted in the magical
creation of a widely available perfect reference implementation that allows me
to check interoperability. In fact in a lot of cases when I write code I have
absolutely nothing to test against - and this is often true even when I'm
implementing a standard that's been around for many years.

In such cases the use of compliance language - and yes, it is compliance
language, the avoidance of that term in RFC 2119 notwithstanding - is
essential. And for that matter it's still compliance language even if RFC 2119
terms are not used.

I'll also note that RFC 1123 most certainly does use the term "compliant" in
regard to the capitalized terms it defines, and if nitpicking on this point
becomes an issue I have zero problem replacing references to RFC 2119 with
references to RFC 1123 in the future.

All that said, I'll again point out that these terms are a double-edged sword,
and can be used to put the emphasis in the wrong place or even to specify
downright silly requirements. But that's an argument for better review of our
specifications, because saying "MUST do this stupid and counterproductive
thing" isn't fixed in any real sense by removing the capitalization.


The useful function of 2119 is that it allows us to document the
important *behavioral* requirements that I have to be aware of when I am
implementing (e.g., even though it's not obvious, my implementation MUST
send such-and-so or the other side is going to crash and burn; e.g.,
even though it's not obvious, the other side MAY send this-and-that, and
therefore my implementation needs to be able to handle it). And those
"even though it's not obvious" statements are important. It wastes my
time as an implementer to try to figure out what interoperability
requirement is meant by "You MUST implement a variable to keep track of
such-and-so state" (and yes, we see these in specs lately), and it makes
for everyone potentially implementing the same broken code.


Good point. Pointing out the nonobvious bits where things have to be done in a
certain way is probably the most important use-case for these terms.

Ned


Re: I'm struggling with 2219 language again

2013-01-07 Thread Dean Willis

Well, I've learned some things here, and shall attempt to summarize:

1) First, the 1 key is really close to the 2 key, and my spell-checker 
doesn't care. Apparently, I'm not alone in this problem.

2) We're all over the map in our use of 2119 language, and it is creating many 
headaches beyond my own.

3) The majority of respondents feel that 2119 language should be used as stated 
in 2119 -- sparingly, and feel that MUST is not a substitute for "does". But 
some people feel we need a more formal specification language that goes beyond 
key-point compliance or requirements definition, and some are using 2119 
words in that role and like it.

I'm torn as to what to do with the draft in question. I picked up an editorial 
role after the authors fatigued in response to some 100+ AD comments (with 
several DISCUSSes) and a gen-art review that proposed adding several hundred 
2119 invocations (and that was backed up with a DISCUSS demanding that the 
gen-art comments be dealt with). My co-editor, who is doing most of the 
key-stroking, favors lots of 2119 language. And I think it turns the draft into 
unreadable felrgercarb.

But there's nothing hard we can point to and say "This is the guideline", 
because usage has softened the guidelines in 2119 itself. It's rather like 
those rules in one's Home Owners' Association handbook that can no longer be 
enforced because widespread violations have already been approved.

There appears to be interest in clarification, but nobody really wants to 
revise the immortal words of RFC 2119, although there is a proposal to add a 
few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
myself; perhaps we can make 2119 a Turing-complete language.)

--
Dean
 






Re: I'm struggling with 2219 language again

2013-01-07 Thread Riccardo Bernardini

 There appears to be interest in clarification, but nobody really wants to 
 revise the immortal words of RFC 2119, although there is a proposal to add a 
 few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
 myself; perhaps we can make 2119 a Turing-complete language.)


Hmm... GOTO is bad style... Why not the COME FROM from
INTERCAL (http://en.wikipedia.org/wiki/INTERCAL)?   :-) (sorry, could not
resist...)


Re: I'm struggling with 2219 language again

2013-01-07 Thread Scott Brim
On 01/07/13 15:40, Riccardo Bernardini allegedly wrote:

 There appears to be interest in clarification, but nobody really wants to 
 revise the immortal words of RFC 2119, although there is a proposal to add a 
 few more words, like IF and THEN to the vocabulary (I'm hoping for GOTO, 
 myself; perhaps we can make 2119 a Turing-complete language.)

 
 Hmm... GOTO is bad style... Why not the COME FROM from
 INTERCAL (http://en.wikipedia.org/wiki/INTERCAL)?   :-) (sorry, could not
 resist...)
 

unwind-protect shall rule them all

but I digress



Re: I'm struggling with 2219 language again

2013-01-07 Thread Marc Petit-Huguenin

On 01/07/2013 12:19 PM, Dean Willis wrote:
 
 Well, I've learned some things here, and shall attempt to summarize:
 
 1) First, the 1 key is really close to the 2 key, and my spell-checker 
 doesn't care. Apparently, I'm not alone in this problem.
 
 2) We're all over the map in our use of 2119 language, and it is creating 
 many headaches beyond my own.
 
 3) The majority of respondents feel that 2119 language should be used as 
 stated in 2119 -- sparingly, and feel that MUST is not a substitute for 
 "does". But some people feel we need a more formal specification language 
 that goes beyond key-point compliance or requirements definition, and 
 some are using 2119 words in that role and like it.
 
 I'm torn as to what to do with the draft in question. I picked up an 
 editorial role after the authors fatigued in response to some 100+ AD 
 comments (with several DISCUSSes) and a gen-art review that proposed
 adding several hundred 2119 invocations (and that was backed up with a
 DISCUSS demanding that the gen-art comments be dealt with). My co-editor,
 who is doing most of the key-stroking, favors lots of 2119 language. And I
 think it turns the draft into unreadable felrgercarb.

My proposal for the aforementioned draft is to put the edits related to this
discussion on hold for now, then let the WG, IETF (during last call) and IESG
decide what to do.  The edits are done and ready to be merged, so the painful
part is already done: https://github.com/petithug/p2psip-base-master/branches.

--
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug


Re: [PWE3] Gen-ART review of draft-ietf-pwe3-mpls-eth-oam-iwk-06

2013-01-07 Thread Huub van Helvoort

Hello Nabil,

Greg is almost right.

RDI == Remote Defect Indication

This is the abbreviation used in IEEE 802.1ag, Y.1731, G.707, G.8121
and RFC 6428.

Regards, Huub.



can we avoid different interpretations of the same abbreviation (RDI):

RDI   Remote Defect Indication for Continuity Check Message

RDI   Reverse Defect Indication

AFAIK, the latter form is the interpretation used by both IEEE 802.1ag
and Y.1731. How useful is the first form?

Regards,

Greg



On Fri, Jan 4, 2013 at 8:17 AM, Bitar, Nabil N
nabil.n.bi...@verizon.com wrote:

Hi Dave,
Related to the abbreviations comment below, and to be clearer, I renamed
the original terminology section to "Abbreviations and Terminology".
I also created a subsection called "Abbreviations", and
"Terminology" became the second subsection.  Here is how the edits look:


  3. Abbreviations and Terminology


3.1. Abbreviations

AIS    Alarm Indication Signal

AC     Attachment Circuit

BFD    Bidirectional Forwarding Detection

CC     Continuity Check

CCM    Continuity Check Message

CE     Customer Equipment

CV     Connectivity Verification

E-LMI  Ethernet Local Management Interface

EVC    Ethernet Virtual Circuit

LDP    Label Distribution Protocol

LoS    Loss of Signal

MA     Maintenance Association

MD     Maintenance Domain

ME     Maintenance Entity

MEG    Maintenance Entity Group

MEP    Maintenance End Point

MIP    Maintenance Intermediate Point

MPLS   Multiprotocol Label Switching

MS-PW  Multi-Segment Pseudowire

NS     Native Service

OAM    Operations, Administration, and Maintenance

PE     Provider Edge

PSN    Packet Switched Network

PW     Pseudowire

RDI    Remote Defect Indication for Continuity Check Message

RDI    Reverse Defect Indication

S-PE   Switching Provider Edge

TLV    Type Length Value

T-PE   Terminating Provider Edge



3.2. Terminology

This document uses the following terms with corresponding
definitions:

- MD Level: Maintenance Domain (MD) Level, which identifies a value in
the range of 0-7 associated with an Ethernet OAM frame. The MD Level
identifies the span of the Ethernet OAM frame.

- MEP: Maintenance End Point is responsible for origination and
termination of OAM frames for a given MEG.

- MIP: Maintenance Intermediate Point is located between peer MEPs
and can process OAM frames but does not initiate or terminate them.

Further, this document also uses the terminology and conventions
used in [RFC6310].



Thanks,

Nabil


Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread Thomas Narten
Stewart Bryant stbry...@cisco.com writes:

 Indeed an interesting additional question.

 My view is that you MUST NOT use RFC2119 language, unless you MUST use 
 it, for exactly that reason. What is important is on the wire (a term 
 that from experience is very difficult to define) inter-operation, and 
 implementers need to be free to achieve that through any means that suits 
 them.

The latter goes without saying. It's one of the obvious assumptions
that underlie all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.

That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
   
IETF specs have always been about describing external behavior
(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas



Re: I'm struggling with 2219 language again

2013-01-07 Thread C. M. Heard
On Mon, 7 Jan 2013, ned+i...@mauve.mrochek.com wrote:
 I'll also note that RFC 1123 most certainly does use the term "compliant" in
 regard to the capitalized terms it defines, and if nitpicking on this point
 becomes an issue I have zero problem replacing references to RFC 2119 with
 references to RFC 1123 in the future.

+1.  There is similar language in RFC 1122 and RFC 1812.  From the standpoint 
of making the requirements clear for an implementor, I think that these three 
specifications were among the best the IETF ever produced.

//cmh


Compliance to a protocol description? (was RE: I'm struggling with 2219 language again)

2013-01-07 Thread Robin Uyeshiro
Maybe part of the job of a working group should be to produce and/or
approve a reference implementation and/or a test for interoperability?  I
always thought a spec should include an acceptance test.  Contracts often
do.

If a company submits code that becomes reference code for interoperability
tests, that code is automatically interoperable and certified.  That might
mean more companies would spend money to produce working code.  It might
mean that more working code gets submitted earlier, as the earliest approved
code would tend to become the reference.  By code, I don't mean source,
necessarily.

Then there would be a more objective test for compliance and less dependence
on capitalization and the description.



  Meh. I know the IETF has a thing about these terms, and insofar as they
  can lead to the use of and/or overreliance on compliance testing rather
  than interoperability testing, I agree with that sentiment.

  OTOH, when it comes to actually, you know, writing code, this entire
  attitude is IMNSHO more than a little precious. Maybe I've missed them,
  but in my experience our avoidance of these terms has not resulted in
  the magical creation of a widely available perfect reference
  implementation that allows me to check interoperability. In fact in a
  lot of cases when I write code I have absolutely nothing to test
  against - and this is often true even when I'm implementing a standard
  that's been around for many years.

Ned



Re: A proposal for a scientific approach to this question [was Re: I'm struggling with 2219 language again]

2013-01-07 Thread John Day
All standards groups that I am aware of have had the same view.  This 
is not uncommon.


Although, I would point out that neither the TCP specification nor most 
protocol specifications of this type follow this rule.  State 
transitions are not visible on the wire.  The rules for sliding 
window are not described entirely in terms of the behavior seen on 
the line, etc.


I have seen specifications that attempted this, and the 
implementations built from them were very different and did not come 
close to interoperating, or in some cases even to doing the same thing.


In fact, I remember that we thought the new Telnet spec (1973) was a 
paragon of clarity until a new site joined the Net that had not been 
part of the community and came up with an implementation that bore no 
relation to what anyone else had done.


This problem is a lot more subtle than you imagine.

Take care,
John Day

At 4:46 PM -0500 1/7/13, Thomas Narten wrote:

Stewart Bryant stbry...@cisco.com writes:


 Indeed an interesting additional question.



 My view is that you MUST NOT use RFC2119 language, unless you MUST use
 it, for exactly that reason. What is important is on the wire (a term
 that from experience is very difficult to define) inter-operation, and
  implementers need to be free to achieve that through any means that suits
 them.


The latter goes without saying. It's one of the obvious assumptions
that underlie all IETF protocols. It may not be written down, but it's
always been an underlying principle.

E.g., from RFC 1971:

   A host maintains a number of data structures and flags related to
   autoconfiguration. In the following, we present conceptual variables
   and show how they are used to perform autoconfiguration. The specific
   variables are used for demonstration purposes only, and an
   implementation is not required to have them, so long as its external
   behavior is consistent with that described in this document.

Other documents (that I've long forgotten) say similar things.


That sort of language was put into specific documents specifically
because some individuals sometimes would raise the concern that a spec
was trying to restrict an implementation.
  
IETF specs have always been about describing external behavior

(i.e. what you can see on the wire), and how someone implements
internally to produce the required external behavior is none of the
IETF's business (and never has been).

(Someone earlier on this thread seemed to maybe think the above is not
a given, but it really always has been.)

Thomas




Re: I'm struggling with 2219 language again

2013-01-07 Thread John Levine
 But some people feel we need a more formal specification language
 that goes beyond key point compliance or requirements definition,
 and some are using 2119 words in that role and like it.

Having read specs like the Algol 68 report and ANSI X3.53-1976, the
PL/I standard that's largely written in VDL, I have an extremely low
opinion of specs that attempt to be very formal.  

The problem is not unlike the one with the fad for proofs of program
correctness back in the 1970s and 1980s.  Your formal thing ends up
being in effect a large chunk of software, which will have just as
many bugs as any other large chunk of software.  The PL/I standard is
famous for that; to implement it you both need to be able to decode
the VDL and to know PL/I well enough to recognize the mistakes.

What we really need to strive for is clear writing, which is not the
same thing as formal writing.  When you're writing clearly, the places
where you'd need 2119 stuff would be where you want to emphasize that
something that might seem optional or not a big deal is in fact
important and mandatory or important and forbidden.

R's,
John


Re: travel guide for the next IETF...

2013-01-07 Thread John Levine
Oh, if you were considering a visit to one of the nearby theme parks,
check out their latest hi-tech innovation:

http://www.nytimes.com/2013/01/07/business/media/at-disney-parks-a-bracelet-meant-to-build-loyalty-and-sales.html



Re: I'm struggling with 2219 language again

2013-01-07 Thread John Day
I have spent more than a little time on this problem and have 
probably looked at more approaches to specification than most, 
probably well over 100.  I would have to agree.  Most of the very 
formal methods such as VDL or those based on writing predicates in 
the equivalent of first-order logic end up with very complex 
predicates.  Which of course means there is a higher probability of 
errors in the predicates than in the code.  (Something I pointed out 
in a review for a course of the first PhD thesis (King's Program 
Verifier) that attempted it.  Much to the chagrin of the professor.)


Of course protocols are a much simpler problem than a specification 
of a general program (finite state machine vs Turing machine), but 
even so, from what I have seen, the same problems exist.  As you say, 
the best answer is good clean code for the parts that are part of the 
protocol, and to write requirements only for the parts that aren't.  The hard 
part is drawing that boundary.  There is much that is specific to the 
implementation that we often don't recognize.  The more approaches 
one can get, the better.  Triangulation works well!  ;-)


At 2:29 AM +0000 1/8/13, John Levine wrote:

  But some people feel we need a more formal specification language

 that goes beyond key point compliance or requirements definition,
 and some are using 2119 words in that role and like it.


Having read specs like the Algol 68 report and ANSI X3.53-1976, the
PL/I standard that's largely written in VDL, I have an extremely low
opinion of specs that attempt to be very formal. 


The problem is not unlike the one with the fad for proofs of program
correctness back in the 1970s and 1980s.  Your formal thing ends up
being in effect a large chunk of software, which will have just as
many bugs as any other large chunk of software.  The PL/I standard is
famous for that; to implement it you both need to be able to decode
the VDL and to know PL/I well enough to recognize the mistakes.

What we really need to strive for is clear writing, which is not the
same thing as formal writing.  When you're writing clearly, the places
where you'd need 2119 stuff would be where you want to emphasize that
something that might seem optional or not a big deal is in fact
important and mandatory or important and forbidden.

R's,
John




Re: travel guide for the next IETF...

2013-01-07 Thread John C Klensin


--On Tuesday, January 08, 2013 02:56 + John Levine
jo...@taugh.com wrote:

 Oh, if you were considering a visit to one of the nearby theme
 parks, check out their latest hi-tech innovation:
 
  http://www.nytimes.com/2013/01/07/business/media/at-disney-parks-a-bracelet-meant-to-build-loyalty-and-sales.html

Maybe FedEx and Memphis would lend us some geese, so we could
pretend they are ducks and practice getting them lined up.
Otherwise, we may just need to provide the leadership with mouse
ears.  Again.

   john






Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread Murray S. Kucherawy
I apologize for being absent for this thread until now.  Vacation and
medical matters interfered with me keeping current.

First, with my participant hat on:

I've been occasionally comparing this work to conventional UNIX patch to
try to maintain a point of reference as these works developed.  As such,
I'm swayed by the argument (which, as I recall, was part of working group
deliberations prior to WGLC) that we have the test operations, so people
generating patch documents should use them to ensure proper context before
applying any of the operations that alter the target.  UNIX patch
accomplishes this by default by surrounding the lines to be changed in the
target with context lines that aren't changed, and so must exist precisely
as-is before the change can be made or the change is rejected.  Consider a
target file comprising 26 lines, each containing the next character of the
upper-case English alphabet and a newline, but the M and the N lines are
swapped.  A typical patch to fix this would look like so:

--- x   Mon Jan  7 20:27:36 2013
+++ y   Mon Jan  7 20:27:40 2013
@@ -10,8 +10,8 @@
 J
 K
 L
-N
 M
+N
 O
 P
 Q

The default for UNIX diff is to produce three lines of context above and
below the change to be made to ensure the change is being made in the right
place.  One could also request no lines of context, yielding:

--- x   Mon Jan  7 20:27:36 2013
+++ y   Mon Jan  7 20:27:40 2013
@@ -13 +12,0 @@
-N
@@ -14,0 +14 @@
+N

But this doesn't bother to check any context, except of course to ensure
that the target file is at least 14 lines long.  Although the top one is
clearly safer, both are actually legal patches.

In my view, these two JSON documents present a language for referencing an
object and changing it, and also for querying for context, just like the
conventional UNIX diff/patch format does.  But in neither the UNIX case nor
the proposed JSON case is the context part mandatory to use (though one
could certainly argue it's foolish to skip doing so).  That seems fine to
me.
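
A hedged sketch of the analogy, with the patch document written as Python
literals (an editor's illustration, not an example from the draft): the
test operations play the role of the diff's context lines when fixing the
swapped M and N in a JSON array:

import string

doc = list(string.ascii_uppercase)
doc[12], doc[13] = doc[13], doc[12]        # the starting point: M and N swapped
assert doc[11:15] == ["L", "N", "M", "O"]

context_patch = [
    {"op": "test", "path": "/11", "value": "L"},   # context above, like the diff's "L"
    {"op": "test", "path": "/12", "value": "N"},   # the elements being changed
    {"op": "test", "path": "/13", "value": "M"},
    {"op": "test", "path": "/14", "value": "O"},   # context below, like the diff's "O"
    {"op": "remove", "path": "/12"},               # -N
    {"op": "add", "path": "/13", "value": "N"},    # +N, now after M
]

Dropping the four test operations yields the analogue of the zero-context
patch above: it still applies, but nothing beyond the array's length is
implicitly checked.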

Then, with my co-chair hat on:

Although I hear and understand Robert's position that this is an important
thing that needs to be addressed, it is not my view after reviewing this
thread that there's rough consensus to reopen the question.  Please note
that this is not an it's too late in the process to change this position,
but rather one that notes that the burden of supporting a change to
something that already has rough consensus is on the person proposing the
change, and I don't believe Robert has succeeded here.

That said, I would ask the document editors to consider adding a paragraph
or an appendix indicating this issue was considered during development of
the work and the current format was deliberately selected, preferably with
some supporting text.  This will ensure future readers will not interpret
the chosen design as a bug, but rather an intentional design choice.

-MSK, APPSAWG co-chair

On Mon, Jan 7, 2013 at 5:33 PM, Paul C. Bryan pbr...@anode.ca wrote:

 On Sun, 2013-01-06 at 16:01 -0800, Robert Sayre wrote:

  This last assertion really isn't qualified very well.


 It would have been better for me to state that this is my opinion, based on
 discussions that were animated by similar objections raised in the past.

 Paul

 ___
 apps-discuss mailing list
 apps-disc...@ietf.org
 https://www.ietf.org/mailman/listinfo/apps-discuss




Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread James M Snell
If I may, I would follow this up with a suggestion that a separate I-D that
provides a more complete treatment of fully-concurrent patch operations
would be helpful. JSON-Patch is largely designed around atomic and
sequential modification operations and is not necessarily a great match
for the kind of OT-style mechanisms Robert was referencing. I don't
personally have any use cases that would require the level of concurrency
Robert is suggesting but it would be an interesting pursuit nonetheless.


On Mon, Jan 7, 2013 at 8:40 PM, Murray S. Kucherawy superu...@gmail.com wrote:

 I apologize for being absent for this thread until now.  Vacation and
 medical matters interfered with me keeping current.

 First, with my participant hat on:

 I've been occasionally comparing this work to conventional UNIX patch to
 try to maintain a point of reference as these works developed.  As such,
 I'm swayed by the argument (which, as I recall, was part of working group
 deliberations prior to WGLC) that we have the test operations, so people
 generating patch documents should use them to ensure proper context before
 applying any of the operations that alter the target.  UNIX patch
 accomplishes this by default by surrounding the lines to be changed in the
 target with context lines that aren't changed, and so must exist precisely
 as-is before the change can be made or the change is rejected.  Consider a
 target file comprising 26 lines, each containing the next character of the
 upper-case English alphabet and a newline, but the M and the N lines are
 swapped.  A typical patch to fix this would look like so:

 --- x   Mon Jan  7 20:27:36 2013
 +++ y   Mon Jan  7 20:27:40 2013
 @@ -10,8 +10,8 @@
  J
  K
  L
 -N
  M
 +N
  O
  P
  Q

 The default for UNIX diff is to produce three lines of context above and
 below the change to be made to ensure the change is being made in the right
 place.  One could also request no lines of context, yielding:

 --- x   Mon Jan  7 20:27:36 2013
 +++ y   Mon Jan  7 20:27:40 2013
 @@ -13 +12,0 @@
 -N
 @@ -14,0 +14 @@
 +N

 But this doesn't bother to check any context, except of course to ensure
 that the target file is at least 14 lines long.  Although the top one is
 clearly safer, both are actually legal patches.

 In my view, these two JSON documents present a language for referencing
  an object and changing it, and also for querying for context, just like
 the conventional UNIX diff/patch format does.  But in neither the UNIX case
 nor the proposed JSON case is the context part mandatory to use (though one
 could certainly argue it's foolish to skip doing so).  That seems fine to
 me.

 Then, with my co-chair hat on:

 Although I hear and understand Robert's position that this is an important
 thing that needs to be addressed, it is not my view after reviewing this
 thread that there's rough consensus to reopen the question.  Please note
  that this is not an "it's too late in the process to change this" position,
 but rather one that notes that the burden of supporting a change to
 something that already has rough consensus is on the person proposing the
 change, and I don't believe Robert has succeeded here.

 That said, I would ask the document editors to consider adding a paragraph
 or an appendix indicating this issue was considered during development of
 the work and the current format was deliberately selected, preferably with
 some supporting text.  This will ensure future readers will not interpret
 the chosen design as a bug, but rather an intentional design choice.

 -MSK, APPSAWG co-chair

 On Mon, Jan 7, 2013 at 5:33 PM, Paul C. Bryan pbr...@anode.ca wrote:

 On Sun, 2013-01-06 at 16:01 -0800, Robert Sayre wrote:

  This last assertion really isn't qualified very well.


 It would have been better for me to state that this is my opinion, based on
 discussions that were animated by similar objections raised in the past.

 Paul

 ___
 apps-discuss mailing list
 apps-disc...@ietf.org
 https://www.ietf.org/mailman/listinfo/apps-discuss



 ___
 apps-discuss mailing list
 apps-disc...@ietf.org
 https://www.ietf.org/mailman/listinfo/apps-discuss




Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread Robert Sayre
On Mon, Jan 7, 2013 at 8:40 PM, Murray S. Kucherawy superu...@gmail.com wrote:

 we have the test operations, so people
 generating patch documents should use them to ensure proper context before
 applying any of the operations that alter the target.

I don't see how the test operation can be used to determine the
intended target structure of an operation with a JSON Pointer of "/1".
Maybe there should be an example in the draft.
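
For illustration, a minimal Python sketch of the ambiguity (the resolver
below is the editor's simplification and ignores escaping and error
handling):

def resolve(doc, pointer):
    """A simplified JSON Pointer resolver."""
    node = doc
    for t in pointer.lstrip("/").split("/"):
        node = node[int(t)] if isinstance(node, list) else node[t]
    return node

as_array  = ["zero", "one"]
as_object = {"0": "zero", "1": "one"}

print(resolve(as_array, "/1"))   # "one"
print(resolve(as_object, "/1"))  # "one" -- same value, different structure

A test on /1 succeeds against both documents, so it cannot by itself
reveal whether the patch author intended an array element or an object
member.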

- Rob


Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-07 Thread Robert Sayre
On Mon, Jan 7, 2013 at 8:40 PM, Murray S. Kucherawy superu...@gmail.com wrote:

 Then, with my co-chair hat on:

 it is not my view after reviewing this
 thread that there's rough consensus to reopen the question.

I agree with this assessment.

- Rob


Last Call: draft-ietf-lisp-mib-08.txt (LISP MIB) to Experimental RFC

2013-01-07 Thread The IESG

The IESG has received a request from the Locator/ID Separation Protocol
WG (lisp) to consider the following document:
- 'LISP MIB'
  draft-ietf-lisp-mib-08.txt as Experimental RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing lists by 2013-01-21. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract


   This document defines managed objects for the Locator/ID Separation
   Protocol (LISP).  These objects provide information useful for
   monitoring LISP devices, including basic configuration information,
   LISP status, and operational statistics.




The file can be obtained via
http://datatracker.ietf.org/doc/draft-ietf-lisp-mib/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-ietf-lisp-mib/ballot/


No IPR declarations have been submitted directly on this I-D.




Last Call: draft-ietf-roll-security-threats-00.txt (A Security Threat Analysis for Routing over Low Power and Lossy Networks) to Informational RFC

2013-01-07 Thread The IESG

The IESG has received a request from the Routing Over Low power and Lossy
networks WG (roll) to consider the following document:
- 'A Security Threat Analysis for Routing over Low Power and Lossy
   Networks'
  draft-ietf-roll-security-threats-00.txt as Informational RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing lists by 2013-01-21. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract

   This document presents a security threat analysis for routing over
   low power and lossy networks (LLN).  The development builds upon
   previous work on routing security and adapts the assessments to the
   issues and constraints specific to low power and lossy networks.  A
   systematic approach is used in defining and evaluating the security
   threats.  Applicable countermeasures are application specific and are
   addressed in relevant applicability statements.  These assessments
   provide the basis of the security recommendations for incorporation
   into low power, lossy network routing protocols.


The file can be obtained via
http://datatracker.ietf.org/doc/draft-ietf-roll-security-threats/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-ietf-roll-security-threats/ballot/


No IPR declarations have been submitted directly on this I-D.


Last Call: draft-ietf-ccamp-lmp-behavior-negotiation-09.txt (Link Management Protocol Behavior Negotiation and Configuration Modifications) to Proposed Standard

2013-01-07 Thread The IESG

The IESG has received a request from the Common Control and Measurement
Plane WG (ccamp) to consider the following document:
- 'Link Management Protocol Behavior Negotiation and Configuration
   Modifications'
  draft-ietf-ccamp-lmp-behavior-negotiation-09.txt as Proposed Standard

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
i...@ietf.org mailing list by 2013-01-21. Exceptionally, comments may be
sent to i...@ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract

   The Link Management Protocol (LMP) is used to coordinate the
   properties, use, and faults of data links in Generalized
   Multiprotocol Label Switching (GMPLS) networks. This document
   defines an extension to LMP to negotiate capabilities and indicate
   support for LMP extensions. The defined extension is compatible
   with non-supporting implementations.

   This document updates RFC 4204, RFC 4207, RFC 4209 and RFC 5818.


The file can be obtained via
http://datatracker.ietf.org/doc/draft-ietf-ccamp-lmp-behavior-negotiation/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-ietf-ccamp-lmp-behavior-negotiation/ballot/


No IPR declarations have been submitted directly on this I-D.


Protocol Action: 'Session Description Protocol (SDP) Media Capabilities Negotiation' to Proposed Standard (draft-ietf-mmusic-sdp-media-capabilities-17.txt)

2013-01-07 Thread The IESG
The IESG has approved the following document:
- 'Session Description Protocol (SDP) Media Capabilities Negotiation'
  (draft-ietf-mmusic-sdp-media-capabilities-17.txt) as Proposed Standard

This document is the product of the Multiparty Multimedia Session Control
Working Group.

The IESG contact persons are Gonzalo Camarillo and Robert Sparks.

A URL of this Internet Draft is:
http://datatracker.ietf.org/doc/draft-ietf-mmusic-sdp-media-capabilities/




Technical Summary

The Session Description Protocol (SDP) has a general framework that lets
endpoints indicate and negotiate capabilities. However, the base
framework only defines capabilities for negotiating transport protocols
and attributes. This document extends the SDP capability negotiation
framework with the ability to also indicate and negotiate media types
and their associated parameters.
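
For illustration only, loosely patterned on the examples in the
specification (the attribute details below are a sketch, not normative
text): an offer whose actual configuration carries AMR might additionally
advertise an AMR-WB media capability and reference it from a potential
configuration:

   m=audio 54322 RTP/AVP 96
   a=rtpmap:96 AMR/8000/1
   a=rmcap:1 audio AMR-WB/16000/1
   a=mfcap:1 mode-change-capability=2
   a=pcfg:1 m=1 pt=1:97

Here rmcap declares the alternative codec as a media capability, mfcap
attaches a format parameter to it, and pcfg lists the potential
configuration that the answerer may select.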

Working Group Summary

The first version of the document dates from February 2007. Since
then, the MMUSIC working group has progressed the document until it
was satisfied with the current version.

Document Quality

The document has been reviewed and contributed to by many participants
of the working group, among others: Cullen Jennings, Christer
Holmberg, Matt Lepinski, Joerg Ott, Colin Perkins, Thomas Stach,
Ingemar Johansson, Andrew Allen, and Magnus Westerlund.

The document was first WGLCed on version 10 (July 2010) and subsequently
on version 14 (July 2012). All the open issues have been addressed,
and the WG reached consensus on version 15.

Personnel

Miguel Garcia is the Document Shepherd. Gonzalo Camarillo is the
Responsible Area Director. 


RTCWEB MMUSIC WG Joint Interim Meeting, February 5-7, 2013

2013-01-07 Thread IESG Secretary
The RTCWEB and MMUSIC working groups would like to announce an upcoming
joint interim meeting.

The meeting will be held in the Boston, Massachusetts area on February
5-7th, 2013, in conjunction with meetings of the W3C WEBRTC working
group. Full details, including meeting site, nearby hotels, and other
logistics have been posted to both the rtcweb and mmusic mailing lists 
(http://www.ietf.org/mail-archive/web/rtcweb/current/maillist.html, 
http://www.ietf.org/mail-archive/web/mmusic/current/maillist.html).


Call for Comment: Principles for Unicode Code Point Inclusion in Labels in the DNS

2013-01-07 Thread IAB Chair
This is a reminder of an ongoing IETF-wide Call for Comment on
'Principles for Unicode Code Point Inclusion in Labels in the DNS'.

The document is being considered for publication as an Informational
RFC within the IAB stream, and is available for inspection here:
http://tools.ietf.org/html/draft-iab-dns-zone-codepoint-pples

The Call for Comment will last until January 21, 2013. Please send
comments to iab at iab.org or submit them via TRAC (see below).
===
Submitting Comments via TRAC
1. To submit an issue in TRAC, you first need to log in to the IAB site
on the tools server:
http://tools.ietf.org/wg/iab/trac/login

2. If you don't already have a login ID, you can obtain one by
navigating to this site:
http://trac.tools.ietf.org/newlogin

3. Once you have obtained an account, and have logged in, you can file
an issue by navigating to the ticket entry form:
http://trac.tools.ietf.org/wg/iab/trac/newticket

4. When opening an issue:
a. The Type: field should be set to "defect" for an issue with the
current document text, or "enhancement" for a proposed addition of
functionality (such as an additional requirement).
b. The Priority: field is set based on the severity of the issue. For
example, editorial issues are typically "minor" or "trivial".
c. The Milestone: field should be set to "milestone1" (useless, I know).
d. The Component: field should be set to the document you are filing
the issue on.
e. The Version: field should be set to 1.0.
f. The Severity: field should be set based on the status of the
document (e.g., "In WG Last Call" for a document in IAB last call).
g. The Keywords: and CC: fields can be left blank unless inspiration
seizes you.
h. The Assign To: field is generally filled in with the email address
of the editor.

5. Typically it won't be necessary to enclose a file with the ticket,
but if you need to, select "I have files to attach to this ticket".

6. If you want to preview your issue, click on the "Preview" button.
When you're ready to submit the issue, click on the "Create Ticket"
button.

7. If you want to update an issue, go to the "View Tickets" page:
http://trac.tools.ietf.org/wg/iab/trac/report/1

Click on the ticket # you want to update, and then modify the ticket
fields as required.

Protocol Action: 'Representing IPv6 Zone Identifiers in Address Literals and Uniform Resource Identifiers' to Proposed Standard (draft-ietf-6man-uri-zoneid-06.txt)

2013-01-07 Thread The IESG
The IESG has approved the following document:
- 'Representing IPv6 Zone Identifiers in Address Literals and Uniform
   Resource Identifiers'
  (draft-ietf-6man-uri-zoneid-06.txt) as Proposed Standard

This document is the product of the IPv6 Maintenance Working Group.

The IESG contact persons are Brian Haberman and Ralph Droms.

A URL of this Internet Draft is:
http://datatracker.ietf.org/doc/draft-ietf-6man-uri-zoneid/




Technical Summary:

This document describes how the Zone Identifier of an IPv6 scoped
address can be represented in a literal IPv6 address and in a
Uniform Resource Identifier that includes such a literal address.
It updates RFC 3986 accordingly.
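
For example (an illustration of the approach, not text quoted from the
document): a link-local address with a zone identifier, such as
fe80::a%en1, would be written in a URI with the "%" percent-encoded as
"%25":

   http://[fe80::a%25en1]/index.html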

Working Group Summary:

As outlined in Appendix A of the document, the working group
considered many alternatives, and the document went through several
iterations as a result of consensus evolving. Of particular interest
may be that creating a new separator (-) that would not conflict
with URI escaping was considered and rejected.

Document Quality:

Dave Thaler from Microsoft and Stuart Cheshire from Apple provided
significant reviews and jointly proposed text that would later reach
consensus.

Personnel:

Ole Troan is the Document Shepherd.
Brian Haberman is the Responsible Area Director.


IETF 86 - Meeting Information

2013-01-07 Thread IETF Secretariat
86th IETF Meeting
Orlando, FL, USA
March 10-15, 2013
Host: Comcast and NBCUniversal

Meeting venue:  Caribe Royale http://www.cariberoyale.com

Register online at: http://www.ietf.org/meetings/86/

1.  Registration
2.  Social Event
3.  Visas & Letters of Invitation
4.  Accommodations
5.  Companion Program
6.  IETF and IEEE back-to-back Meetings

1. Registration
A. Early-Bird Registration - USD 650.00. Pay by Friday, 1 March 2013 UTC 24:00
B. After Early-Bird cutoff - USD 800.00
C. Full-time Student Registrations - USD 150.00 (with proper ID)
D. One Day Pass Registration - USD 350.00
E. Registration Cancellation   
Cut-off for registration cancellation is Monday,
4 March 2013 at UTC 24:00.
Cancellations are subject to a 10% (ten percent)
cancellation fee if requested by that date and time.
F. Online Registration and Payment ends Friday, 8 March 2013, 1700 local 
Orlando time.
G. On-site Registration starting Sunday, 10 March 2013 at 1100 local 
Orlando time.

2. Social Event
IETF86's social event, sponsored by Comcast and NBCUniversal, will be at 
Universal Studios Orlando's Wizarding World of Harry Potter on Tuesday evening. 
The Wizarding World of Harry Potter will be open exclusively for the IETF86 
social. Attendees will be transported from the Caribe Royale to Universal 
Studios, where they will enjoy the many Harry Potter attractions, including 
the thrilling Harry Potter and the Forbidden Journey ride.
Guests will enjoy a buffet dinner, soft drinks, wine, beer, and Harry 
Potter's own Butterbeer, served in Hogsmeade Village throughout the evening. 
Return transportation to the Caribe Royale will be provided throughout the 
evening.
The cost for the event is $30 per person; due to the expected high 
demand, registration is limited to two tickets per attendee.

3. Visas & Letters of Invitation:
For information on visiting the United States, please visit:
http://travel.state.gov/visa/

After you complete the registration process, you can request an electronic 
IETF letter of invitation as well. The registration system also allows you 
to request a hard-copy IETF letter of invitation. You may also request one 
at a later time by following the link provided in the confirmation email.

Please note that the IETF Letter of Invitation may not be sufficient for 
obtaining a visa to enter the United States.

4.  Accommodations
The IETF is holding a block of guest rooms at the Caribe Royale, the 
meeting venue.
Room rates include in-room high-speed Internet access. Sales/room tax 
(currently 12.5%; applicable taxes are subject to change) is not included 
in the room rates. Room rates DO NOT include daily breakfast.

Reservations Cut off Date: 13 February 2013 

For additional information on rates and policies, please visit: 
http://www.ietf.org/meeting/86/hotel.html

5.  Companion Program
   If you are traveling with a friend or family member over 18 years of age, 
you can register them for the IETF Companion Program for only USD 25.00.

   Benefits include:
   - A special welcome reception for companions from 1630-1730 on Sunday, 10 
March
   - Ability to attend the official Welcome Reception from 1700-1900 on Sunday, 
10 March
   - A distinctive meeting badge that grants access to the venue (not to be 
used to attend working sessions)
   - Optional participation in a separate companion email list to help you 
communicate and make plans with other IETF Companions.

   You can register your companion at any time via the IETF website or onsite 
at the meeting.

   To join the 86companions mailing list only, see:
   https://www.ietf.org/mailman/listinfo/86companions

6.  IETF and IEEE back-to-back Meetings
   The IEEE 802 Executive Committee and the IETF, IESG, and IAOC are pleased to 
confirm that the March 2013 IEEE 802 Plenary Session and IETF86 will both take 
place at the Caribe Royale Resort & Convention Center in Orlando, Florida, USA 
in March 2013. IETF86 will take place March 10-15, 2013, and the IEEE 802 
Plenary Session will occur March 17-22, 2013.
   Registration for the March 2013 IEEE 802 Plenary Session and IETF86 is now 
open, and the Caribe Royale has extended great rates for session attendees 
over the dates of both sessions. We encourage IEEE 802 and IETF participants to 
consider registering for both events in order to take advantage of this fine 
opportunity to learn more about each group.
For Registration, Hotel Reservations, and more information about the 
specific events, please visit the event websites for the March 2013 IEEE 802 
Plenary Session (https://802world.org/apps/session/79/register1/register) and 
IETF86 (http://www.ietf.org/meetings/86/).

Only 61 days until the Orlando IETF!


RFC 6819 on OAuth 2.0 Threat Model and Security Considerations

2013-01-07 Thread rfc-editor

A new Request for Comments is now available in online RFC libraries.


RFC 6819

Title:      OAuth 2.0 Threat Model and Security Considerations
Author:     T. Lodderstedt, Ed., M. McGloin, P. Hunt
Status:     Informational
Stream:     IETF
Date:       January 2013
Mailbox:    tors...@lodderstedt.net, mark.mcgl...@ie.ibm.com,
            phil.h...@yahoo.com
Pages:      71
Characters: 158332
Updates/Obsoletes/SeeAlso: None

I-D Tag:    draft-ietf-oauth-v2-threatmodel-08.txt

URL:        http://www.rfc-editor.org/rfc/rfc6819.txt

This document gives additional security considerations for OAuth,
beyond those in the OAuth 2.0 specification, based on a comprehensive
threat model for the OAuth 2.0 protocol.  This document is not an 
Internet Standards Track specification; it is published for 
informational purposes.

This document is a product of the Web Authorization Protocol Working Group of 
the IETF.


INFORMATIONAL: This memo provides information for the Internet community.
It does not specify an Internet standard of any kind. Distribution of
this memo is unlimited.

This announcement is sent to the IETF-Announce and rfc-dist lists.
To subscribe or unsubscribe, see
  http://www.ietf.org/mailman/listinfo/ietf-announce
  http://mailman.rfc-editor.org/mailman/listinfo/rfc-dist

For searching the RFC series, see http://www.rfc-editor.org/rfcsearch.html.
For downloading RFCs, see http://www.rfc-editor.org/rfc.html.

Requests for special distribution should be addressed to either the
author of the RFC in question, or to rfc-edi...@rfc-editor.org.  Unless
specifically noted otherwise on the RFC itself, all RFCs are for
unlimited distribution.


The RFC Editor Team
Association Management Solutions, LLC




RFC 6828 on Content Splicing for RTP Sessions

2013-01-07 Thread rfc-editor

A new Request for Comments is now available in online RFC libraries.


RFC 6828

Title:      Content Splicing for RTP Sessions
Author:     J. Xia
Status:     Informational
Stream:     IETF
Date:       January 2013
Mailbox:    xiajin...@huawei.com
Pages:      17
Characters: 38904
Updates/Obsoletes/SeeAlso: None

I-D Tag:    draft-ietf-avtext-splicing-for-rtp-12.txt

URL:        http://www.rfc-editor.org/rfc/rfc6828.txt

Content splicing is a process that replaces the content of a main
multimedia stream with other multimedia content and delivers the
substitutive multimedia content to the receivers for a period of
time.  Splicing is commonly used for insertion of local advertisements by
cable operators, whereby national advertisement content is replaced with a
local advertisement.

This memo describes some use cases for content splicing and a set of
requirements for splicing content delivered by RTP.  It provides
concrete guidelines for how an RTP mixer can be used to handle
content splicing.  This document is not an Internet Standards Track 
specification; it is published for informational purposes.
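
As a rough sketch of the mixer behavior described above, in Python
(illustrative only: the class and field names here are hypothetical and
not taken from the RFC, which specifies behavior rather than an
implementation), the splicer presents a single output stream under its
own SSRC and keeps sequence numbers and timestamps continuous across
splice points:

from dataclasses import dataclass

@dataclass
class RtpHeader:
    ssrc: int
    seq: int   # 16-bit sequence number
    ts: int    # 32-bit media timestamp

class Splicer:
    """Forwards one input stream at a time as one continuous output."""

    def __init__(self, output_ssrc):
        self.output_ssrc = output_ssrc
        self.out_seq = 0       # next outgoing sequence number
        self.ts_offset = 0     # offset applied to the current input's clock
        self.last_out_ts = None

    def splice_to(self, first_input_ts, ts_step):
        # At a splice point, realign the new input's timestamps so the
        # output clock keeps advancing monotonically by one packet step.
        if self.last_out_ts is not None:
            self.ts_offset = (self.last_out_ts + ts_step - first_input_ts) % 2**32

    def forward(self, pkt):
        # Rewrite SSRC, sequence number, and timestamp before forwarding,
        # so receivers see one uninterrupted RTP stream.
        out_ts = (pkt.ts + self.ts_offset) % 2**32
        out = RtpHeader(ssrc=self.output_ssrc, seq=self.out_seq, ts=out_ts)
        self.out_seq = (self.out_seq + 1) % 2**16
        self.last_out_ts = out_ts
        return out

# Example: switch from the main stream to substitutive (ad) content.
s = Splicer(output_ssrc=0x1234)
s.forward(RtpHeader(ssrc=1, seq=700, ts=8000))   # main content
s.splice_to(first_input_ts=0, ts_step=160)       # splice point
s.forward(RtpHeader(ssrc=2, seq=5, ts=0))        # substitutive content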

This document is a product of the Audio/Video Transport Extensions Working 
Group of the IETF.


INFORMATIONAL: This memo provides information for the Internet community.
It does not specify an Internet standard of any kind. Distribution of
this memo is unlimited.



The RFC Editor Team
Association Management Solutions, LLC




RFC 6829 on Label Switched Path (LSP) Ping for Pseudowire Forwarding Equivalence Classes (FECs) Advertised over IPv6

2013-01-07 Thread rfc-editor

A new Request for Comments is now available in online RFC libraries.


RFC 6829

Title:      Label Switched Path (LSP) Ping for Pseudowire
            Forwarding Equivalence Classes (FECs) Advertised over IPv6
Author:     M. Chen, P. Pan, C. Pignataro, R. Asati
Status:     Standards Track
Stream:     IETF
Date:       January 2013
Mailbox:    m...@huawei.com, p...@infinera.com,
            cpign...@cisco.com, raj...@cisco.com
Pages:      8
Characters: 15683
Updates:    RFC4379

I-D Tag:    draft-ietf-mpls-ipv6-pw-lsp-ping-04.txt

URL:        http://www.rfc-editor.org/rfc/rfc6829.txt

The Multiprotocol Label Switching (MPLS) Label Switched Path (LSP)
Ping and traceroute mechanisms are commonly used to detect and
isolate data-plane failures in all MPLS LSPs, including LSPs used for
each direction of an MPLS Pseudowire (PW).  However, the LSP Ping and
traceroute elements used for PWs are not specified for IPv6 address
usage.

This document extends the PW LSP Ping and traceroute mechanisms so
they can be used with PWs that are set up and maintained using IPv6
LDP sessions.  This document updates RFC 4379.  [STANDARDS-TRACK]

This document is a product of the Multiprotocol Label Switching Working Group 
of the IETF.

This is now a Proposed Standard Protocol.

STANDARDS TRACK: This document specifies an Internet standards track
protocol for the Internet community, and requests discussion and suggestions
for improvements.  Please refer to the current edition of the Internet
Official Protocol Standards (STD 1) for the standardization state and
status of this protocol.  Distribution of this memo is unlimited.



The RFC Editor Team
Association Management Solutions, LLC